CN114743122A - Self-learning identification method and device for self-moving equipment, self-moving equipment and medium - Google Patents

Self-learning identification method and device for self-moving equipment, self-moving equipment and medium

Info

Publication number
CN114743122A
Authority
CN
China
Prior art keywords
image
feature
self
identified
repairing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210261120.1A
Other languages
Chinese (zh)
Inventor
杨勇
张康健
黄杰文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen 3irobotix Co Ltd
Original Assignee
Shenzhen 3irobotix Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen 3irobotix Co Ltd filed Critical Shenzhen 3irobotix Co Ltd
Priority to CN202210261120.1A
Publication of CN114743122A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a self-learning identification method and device for a self-moving device, the self-moving device, and a medium. A recognition object repairing instruction and a to-be-recognized object repairing image are received, and similarity feature extraction is performed on the to-be-recognized object repairing image according to the recognition object repairing instruction to obtain a first feature, where the to-be-recognized object repairing image contains a repairing object to be recognized. When the self-moving device detects that the repairing object to be recognized is present in an initial image captured of the area to be cleaned, similarity feature extraction is performed on a first image of the repairing object to obtain a second feature, where the initial image contains the first image. When the similarity between the first feature and the second feature is greater than or equal to a preset similarity threshold, the first image is updated. The method and device avoid misrecognition and missed recognition and thereby improve the cleaning accuracy of the self-moving device.

Description

Self-learning identification method and device for self-moving equipment, self-moving equipment and medium
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a self-learning identification method and device for self-moving equipment, the self-moving equipment and a medium.
Background
AI object recognition is now widely used on sweeping robots, for example to recognize objects such as shoes, socks, electric wires, pet excrement, and personal weighing scales, so that different obstacle-avoidance modes can be applied or the corresponding icons can be displayed in the app according to the recognized object.
In an open scene, however, a wide variety of objects may be present, and certain features of an already recognized object can be similar to features of another object, which easily leads the self-moving device to misrecognize. Once an object is misrecognized, a wrong icon is displayed in the app, and in severe cases rooms may be skipped during sweeping or mopping. Missed recognition can also occur; for example, a missed sock or wire can be rolled directly into the self-moving device and impair its motion. Because misrecognition and/or missed recognition occur easily at present, the cleaning accuracy of the self-moving device is low.
Disclosure of Invention
The main purpose of the application is to provide a self-learning identification method, device, equipment, and medium for a self-moving device, aiming to solve the technical problem that the cleaning accuracy of the self-moving device is currently low because of misrecognition or missed recognition.
In order to achieve the above object, an embodiment of the present application provides a self-learning identification method for a self-mobile device, where the self-learning identification method for the self-mobile device includes:
receiving an identification object restoration instruction and an object restoration image to be identified, and extracting similar features of the object restoration image to be identified according to the identification object restoration instruction to obtain a first feature; wherein the object to be recognized repairing image comprises a repairing object to be recognized;
when the self-moving equipment detects that the to-be-identified restoration object exists in an initial image obtained by shooting the to-be-cleaned area, performing similar feature extraction on a first image of the to-be-identified restoration object to obtain a second feature; wherein the initial image comprises the first image;
and when the similarity between the first feature and the second feature is greater than or equal to a preset similarity threshold, updating the first image.
Preferably, the step of updating the first image comprises:
and replacing the first image according to the object image corresponding to the second characteristic.
Preferably, the step of updating the first image further comprises:
and displaying the first image, and determining the object class corresponding to and associated with the second feature as the class of the object to be identified and repaired.
Preferably, the step of receiving the recognition object repairing instruction and the object repairing image to be recognized, and performing similar feature extraction on the object repairing image to be recognized according to the recognition object repairing instruction to obtain the first feature includes:
receiving the identification object repairing instruction and the object to be identified repairing image;
and if the recognized object repairing instruction is a false recognition self-learning repairing instruction, extracting similar features of the to-be-recognized object repairing image according to the false recognition self-learning repairing instruction to obtain the first feature, and storing the first feature in a preset feature database.
Preferably, the step of receiving the recognition object repairing instruction and the object repairing image to be recognized, and performing similar feature extraction on the object repairing image to be recognized according to the recognition object repairing instruction to obtain the first feature further includes:
receiving the identification object repairing instruction and the object to be identified repairing image;
if the identification object restoration instruction is a missed identification object restoration instruction, performing similar feature extraction on the object restoration image to be identified according to the missed identification object restoration instruction to obtain the first feature, and acquiring category information of the first feature;
and storing the first characteristic and the category information of the first characteristic in a preset characteristic database in an associated manner.
Preferably, when the similarity between the first feature and the second feature is greater than or equal to a preset similarity threshold, the step of updating the first image includes:
and respectively comparing the similarity of the feature vector of the preset dimension in the second feature with the similarity of the feature vector of the preset dimension of the first feature prestored in a preset feature database, and determining the similarity between the first feature and the second feature.
Preferably, before the step of receiving the recognition object repairing instruction and the object repairing image to be recognized, and performing similar feature extraction on the object repairing image to be recognized according to the recognition object repairing instruction to obtain the first feature, the method further includes:
acquiring an object image and marking information thereof as a training sample;
constructing an initial feature extraction network;
training the initial feature extraction network by using a triplet loss function according to the training sample to obtain a pre-trained feature extraction network;
receiving an identification object repairing instruction and an object repairing image to be identified;
and based on the pre-trained feature extraction network, performing similar feature extraction on the object repairing image to be recognized according to the recognized object repairing instruction to obtain the first feature.
In order to achieve the above object, the present application also provides a cleaning apparatus for a self-moving device, including:
the first feature extraction module is used for receiving an identification object restoration instruction and an object restoration image to be identified, and extracting similar features of the object restoration image to be identified according to the identification object restoration instruction to obtain a first feature; wherein the object to be recognized repairing image comprises a repairing object to be recognized;
the second feature extraction module is used for performing similar feature extraction on the first image of the object to be identified and repaired to obtain a second feature when the self-moving equipment detects that the object to be identified and repaired exists in the initial image obtained by shooting the area to be cleaned; wherein the initial image comprises the first image;
and the updating module is used for updating the first image when the similarity between the first feature and the second feature is greater than or equal to a preset similarity threshold.
Further, to achieve the above object, the present application also provides a self-moving device, which includes a memory, a processor, and a cleaning program stored in the memory and running on the processor, wherein the cleaning program of the self-moving device, when executed by the processor, implements the steps of the self-learning identification method of the self-moving device.
Further, to achieve the above object, the present application also provides a medium, which is a computer readable storage medium, on which a cleaning program of a self-moving device is stored, and when the cleaning program of the self-moving device is executed by a processor, the steps of the self-learning identification method of the self-moving device are implemented.
The embodiments of the application provide a self-learning identification method and device for a self-moving device, the self-moving device, and a medium. A recognition object repairing instruction and a to-be-recognized object repairing image are received, and similarity feature extraction is performed on the to-be-recognized object repairing image according to the recognition object repairing instruction to obtain a first feature; the to-be-recognized object repairing image contains the repairing object to be recognized, and the first feature provides a basis for the subsequent similarity comparison. When the self-moving device detects that the repairing object to be recognized is present in an initial image captured of the area to be cleaned, similarity feature extraction is performed on a first image of the repairing object to obtain a second feature; the initial image contains the first image, and the second feature likewise provides a basis for the similarity comparison. When the similarity between the first feature and the second feature is greater than or equal to a preset similarity threshold, the first image is updated; updating the first image avoids misrecognition and missed recognition and improves the cleaning accuracy of the self-moving device. During cleaning, the self-moving device first identifies each object in the area to be cleaned accurately and then cleans according to the correct image or category of the object, so misrecognition and missed recognition are avoided and the cleaning accuracy of the self-moving device is improved.
Drawings
FIG. 1 is a schematic diagram of the hardware operating environment of a self-moving device according to the present application;
FIG. 2 is a flowchart of a first embodiment of the self-learning identification method of a self-moving device according to the present application;
FIG. 3 is a flowchart of an embodiment of the self-learning identification method of a self-moving device according to the present application;
FIG. 4 is a flowchart of another embodiment of the self-learning identification method of a self-moving device according to the present application;
FIG. 5 is a functional block diagram of a preferred embodiment of the cleaning apparatus of a self-moving device according to the present application.
The implementation, functional features and advantages of the object of the present application will be further explained with reference to the embodiments, and with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As shown in fig. 1, fig. 1 illustrates a possible application scenario provided by the embodiments of the present disclosure. The scenario includes a self-moving device, which may specifically be a cleaning robot such as a sweeping robot or a mopping robot. In some implementations the cleaning robot is an automatic cleaning device, in particular an automatic floor-sweeping or floor-mopping robot. The cleaning robot may be provided with a navigation system that detects the defined work area and determines the robot's specific location within it, and with various sensors, such as infrared or laser sensors, for detecting the floor debris condition of the work area in real time. In other embodiments, the automatic cleaning device may be provided with a touch-sensitive display to receive operating instructions input by the user. It may also be provided with wireless communication modules, such as a WIFI module or a Bluetooth module, to connect with an intelligent terminal and receive operating instructions that the user transmits through the intelligent terminal. The automatic cleaning device comprises a machine body 100, a sensing system, a control system, a drive system 106, a cleaning system, an energy system, a human-machine interaction system, and a memory.
The sensing system includes a position determining device 102 located above the machine body and sensing devices 104 located at the forward portion of the machine body, such as a collision sensor, a fall-prevention sensor, an ultrasonic sensor, an infrared sensor, an accelerometer, a gyroscope, and an odometer, and provides the control system with position information, motion state information, and other environmental information. The position determining device includes, but is not limited to, a camera and a laser distance-measuring device (LDS, i.e., lidar). In the embodiments of the application, the position determining device uses lidar equipment, which may be a single-line or a multi-line lidar. The control system comprises a processor, a memory, and a cleaning program of the self-moving device stored on the memory and runnable on the processor; the processor in the control system may be configured to invoke the cleaning program stored in the memory and perform the following operations:
receiving an identification object restoration instruction and an object restoration image to be identified, and extracting similar features of the object restoration image to be identified according to the identification object restoration instruction to obtain a first feature; wherein the object to be recognized repairing image comprises a repairing object to be recognized;
when the self-moving equipment detects that the to-be-identified restoration object exists in an initial image obtained by shooting the to-be-cleaned area, performing similar feature extraction on a first image of the to-be-identified restoration object to obtain a second feature; wherein the initial image comprises the first image;
and when the similarity between the first feature and the second feature is greater than or equal to a preset similarity threshold, updating the first image.
Further, the step of updating the first image comprises:
and replacing the first image according to the object image corresponding to the second characteristic.
Further, the step of updating the first image further comprises:
and displaying the first image, and determining the object class corresponding to and associated with the second feature as the class of the object to be identified and repaired.
Further, the step of receiving the recognition object repairing instruction and the object repairing image to be recognized, and performing similar feature extraction on the object repairing image to be recognized according to the recognition object repairing instruction to obtain the first feature includes:
receiving the identification object repairing instruction and the object to be identified repairing image;
and if the recognized object repairing instruction is a false recognition self-learning repairing instruction, extracting similar features of the to-be-recognized object repairing image according to the false recognition self-learning repairing instruction to obtain the first feature, and storing the first feature in a preset feature database.
Further, the step of receiving the recognition object repairing instruction and the object repairing image to be recognized, and performing similar feature extraction on the object repairing image to be recognized according to the recognition object repairing instruction to obtain the first feature further includes:
receiving the identification object repairing instruction and the to-be-identified object repairing image;
if the identification object restoration instruction is a missed identification object restoration instruction, performing similar feature extraction on the object restoration image to be identified according to the missed identification object restoration instruction to obtain the first feature, and acquiring category information of the first feature;
and storing the first characteristic and the category information of the first characteristic in a preset characteristic database in an associated manner.
Further, when the similarity between the first feature and the second feature is greater than or equal to a preset similarity threshold, the step of updating the first image includes:
and respectively comparing the similarity of the feature vector of the preset dimension in the second feature with the similarity of the feature vector of the preset dimension of the first feature prestored in a preset feature database, and determining the similarity between the first feature and the second feature.
Further, before the step of receiving the recognition object repairing instruction and the object repairing image to be recognized, and performing similar feature extraction on the object repairing image to be recognized according to the recognition object repairing instruction to obtain the first feature, the method further includes:
acquiring an object image and marking information thereof as a training sample;
constructing an initial feature extraction network;
training the initial feature extraction network by using a triplet loss function according to the training sample to obtain a pre-trained feature extraction network;
receiving an identification object repairing instruction and an object repairing image to be identified;
and based on the pre-trained feature extraction network, performing similar feature extraction on the object repairing image to be recognized according to the recognized object repairing instruction to obtain the first feature.
For a better understanding of the above technical solutions, exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
Referring to fig. 2, fig. 2 is a flowchart illustrating a self-learning identification method of a self-moving device according to a first embodiment of the present application. In this embodiment, the self-learning identification method for the self-moving device includes the following steps:
step S10, receiving an identification object restoration instruction and an object restoration image to be identified, and extracting similar features of the object restoration image to be identified according to the identification object restoration instruction to obtain a first feature.
The self-learning identification method of this embodiment is applied to a self-moving device, which may be the above-mentioned floor-sweeping robot, floor-mopping robot, or the like. The self-moving device is connected with an intelligent terminal such as a smartphone or a tablet computer, and the intelligent terminal may run an application program for controlling the self-moving device and querying information, so that the user can interact with the self-moving device through the application program to control its operation and query its various information. The front end of the self-moving device is provided with a camera for detecting obstacles in the area the camera points at, so that cleaning and obstacle avoidance can be performed and the working efficiency of the self-moving device is ensured. A feature extraction network is configured inside the self-moving device; it may specifically be a similarity feature extraction network, and its type may specifically be a CNN (Convolutional Neural Network), used to perform feature extraction on an input image and output the features contained in the image.
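By way of illustration only, a minimal sketch of such a similarity feature extraction network is given below in Python (PyTorch). The backbone choice, the embedding dimension handling, and the normalization step are assumptions for this sketch; the description only specifies a lightweight CNN that outputs a fixed-dimension similarity feature.

```python
# Minimal sketch (assumed architecture) of the similarity feature extraction network:
# a lightweight CNN backbone followed by a projection to a fixed-dimension embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class SimilarityFeatureExtractor(nn.Module):
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        backbone = models.mobilenet_v2(weights=None)  # stand-in for an efficientnet-lite0-class backbone
        self.features = backbone.features             # convolutional feature maps
        self.pool = nn.AdaptiveAvgPool2d(1)           # global average pooling
        self.embed = nn.Linear(backbone.last_channel, embedding_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(self.features(x)).flatten(1)    # (batch, channels)
        x = self.embed(x)                             # (batch, 128)
        return F.normalize(x, dim=1)                  # L2-normalised similarity feature
```

Passing a cropped object image through a network of this kind yields the 128-dimensional feature vector used in the similarity comparisons described later.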
Specifically, after being powered on, the self-moving device may scan the area through the front camera and generate a corresponding map once scanning is complete. For example, it may scan the rooms of a user's three-bedroom, one-living-room home, obtain the user's floor plan after scanning, generate and display that floor plan on the application's interface, and show specific area information, such as a total area of 100 square meters.
Step S10 includes:
step A1, receiving the recognition object restoration instruction and the object restoration image to be recognized;
step A2, if the recognition object restoration instruction is a misrecognition self-learning restoration instruction, extracting similar features of the object restoration image to be recognized according to the misrecognition self-learning restoration instruction to obtain the first feature, and storing the first feature in a preset feature database.
Specifically, the self-moving device may capture an image or video through the camera, perform object recognition on it with an AI recognition algorithm, and treat each recognized object as a repairing object to be recognized. For each such object it draws an object frame in the captured image or video and crops the content inside the frame to form an image containing the object, which serves as the to-be-recognized object repairing image. The self-moving device transmits this image to the application program, which displays it at the corresponding position on the map; the display position can be determined from the coordinate information of the object frame, i.e., an image of the corresponding size is shown at the object's actual position on the map as a display icon for the user to view.
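A minimal sketch of this cropping step is shown below; the (x, y, width, height) box format is an assumption for illustration.

```python
import numpy as np

def crop_repairing_object(frame: np.ndarray, box: tuple) -> np.ndarray:
    """Cut the detected repairing object out of a camera frame (H x W x 3 array).

    box = (x, y, w, h) in pixels; the box format is assumed for this sketch."""
    x, y, w, h = box
    return frame[y:y + h, x:x + w].copy()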
Further, if the user checks the to-be-recognized object repairing image, compares it with the actual scene, and determines that a misrecognition has occurred, the user triggers the misrecognition self-learning repairing button generated by the application program (for example, by clicking a button on the screen or a physical button of the intelligent terminal). The application program then sends a misrecognition self-learning repairing instruction based on that image to the self-moving device, thereby delivering the to-be-recognized object repairing image to the self-moving device.
The self-moving device receives the misrecognition self-learning repairing instruction sent by the user through the application program and parses the to-be-recognized object repairing image from it. The main program passes the image to the AI program, which inputs it into the pre-trained feature extraction network; running this network extracts the features of the image and outputs the misrecognized-object feature, specifically a 128-dimensional feature vector, which is stored as a first feature in the preset feature database. In addition, the user can correct the image corresponding to the misrecognized object: the user may photograph a correct image of the object, which is stored in association with the to-be-recognized object repairing image as its corrected image. For example, if an image captured and recognized by the self-moving device is displayed on the map as a boot, and the user determines after comparison that the object is actually a stocking, the user can photograph the stocking and upload the image to the application program, and the correct image of the stocking is stored in the preset feature database in association with the misrecognized boot feature as the corrected image of the original boot image.
And step S10 further includes:
step B1, if a missed identification object repairing instruction is received, acquiring a missed identification image and a category of a missed identification object in the missed identification object repairing instruction;
and step B2, performing feature extraction on the missed identification image based on a pre-trained feature extraction network to obtain the features of the missed identification object, and storing the features of the missed identification object as first features and the categories of the missed identification object in a preset feature database in an associated manner.
It can be understood that after the self-moving device has cleaned a specified area and displayed the corresponding object images on the map as display icons, if the user finds through actual comparison that an object was missed, the user can use the application program to direct the self-moving device to photograph the area containing the object the user believes was missed. After photographing, the user can draw a frame around that object in the image preview interface; the content inside the frame is cropped to form a missed-recognition image. The categories of missed-recognition objects include, but are not limited to, shoes, socks, wires, scales, feces, fan mounts, human legs, purses, and carpets.
The self-moving device receives the missed-recognition object repairing instruction sent by the user through the application program and parses the missed-recognition image and the category of the missed object from it. The main program passes the missed-recognition image to the AI program, which inputs it into the pre-trained feature extraction network; running the network extracts the features of the image and outputs the missed-object feature, specifically a 128-dimensional feature vector, which is stored as a first feature in the preset feature database in association with the category of the missed object.
It is understood that the preset feature database may contain a plurality of first features, which may be misrecognized-object features or missed-recognition object features.
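A minimal sketch of what such a preset feature database could look like is given below; the storage format (a pickled list of records) and the field names are assumptions for illustration, not a prescribed implementation. A misrecognition repair would store the feature vector together with the corrected image, while a missed-recognition repair would store the feature vector together with its category.

```python
import pickle
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class FirstFeature:
    vector: np.ndarray                     # 128-dimensional feature vector
    kind: str                              # "misrecognized" or "missed"
    category: Optional[str] = None         # user-labelled category (missed-recognition entries)
    corrected_image: Optional[str] = None  # path to the user-supplied correct image (misrecognition entries)

class PresetFeatureDatabase:
    def __init__(self, path: str = "features.pkl"):
        self.path = path
        self.entries: List[FirstFeature] = []

    def add(self, entry: FirstFeature) -> None:
        self.entries.append(entry)
        with open(self.path, "wb") as f:   # persist locally on the device
            pickle.dump(self.entries, f)
```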
It should be noted that before the feature extraction is performed on the repaired image of the object to be recognized based on the pre-trained feature extraction network to obtain the first feature, the method further includes:
step C1, acquiring an object image and labeling information thereof as a training sample;
step C2, constructing an initial feature extraction network;
and step C3, training the initial feature extraction network with a triplet loss function according to the training samples to obtain a pre-trained feature extraction network.
The self-moving device needs to construct an initial feature extraction network and pre-train it; after a to-be-recognized object repairing image is generated, its features can then be extracted through the pre-trained network. Specifically, object images and their annotation information are obtained as training samples. They may be object images captured by the camera of the self-moving device and annotated by the user through the application program, or object images captured by others, together with their annotations, obtained from a browser. For example, if an object image captured by the camera contains a wallet and the user annotates through the application program that the object in the image is a wallet, the object image containing the wallet and the annotation "wallet" are obtained. Each image and its annotation form one sample, and a large number of samples, for example 10,000, 100,000, or 100,000,000 groups, form the training set.
Meanwhile, the self-moving device constructs the initial feature extraction network for the current scene; specifically, an efficientnet-lite0 network structure is adopted. Further, the loss function used in this embodiment is the triplet loss. The goal of optimising the triplet loss is to make the embedding distances of the same object as close as possible and the embedding distances between different objects as far as possible. "Embedding" is a term from topology, often used together with "manifold" in deep learning. A few examples illustrate this: a sphere in three-dimensional space is a two-dimensional manifold embedded in a three-dimensional space (a 2D manifold embedded in 3D space); it is two-dimensional because any point on the sphere can be expressed with only longitude and latitude. Likewise, a rotation matrix of two-dimensional space is a 2x2 matrix that needs only one angle to express, i.e., a one-dimensional manifold embedded in the space of 2x2 matrices. More concretely, a picture is selected as the anchor, the positive is the same object as the anchor, and the negative is a different object from the anchor; through learning, the anchor is made closer to the positive and farther from the negative. The combination of the three pictures is called a triplet, and the loss can then be defined as shown in the following formula:
$$L=\sum_{i=0}^{N}\Big[\,\big\|f(x_i^a)-f(x_i^p)\big\|_2^2-\big\|f(x_i^a)-f(x_i^n)\big\|_2^2+\alpha\,\Big]_+$$
wherein $f(x_i^a)$ denotes the 128-dimensional feature vector of anchor sample $i$, for example the 128-dimensional feature vector of a shoe image; $f(x_i^p)$ denotes the 128-dimensional feature vector of a positive sample with the same identity as sample $i$, for example the same shoe photographed from a different angle; $f(x_i^n)$ denotes the 128-dimensional feature vector of a negative sample with a different identity, for example a different shoe; $\|\cdot\|$ is the Euclidean distance; $\alpha$ is a margin (offset) requiring a minimum gap between the Euclidean distance from $x_a$ to $x_n$ and that from $x_a$ to $x_p$, which prevents the loss from becoming 0 when it is too small (and the gradient from vanishing) and provides a smoothing effect, and may in particular be a learnable hyperparameter; $[\,\cdot\,]_+$ means the bracketed value is taken as the loss when it is greater than 0 and the loss is 0 when it is less than 0; $i = 0, 1, 2, \ldots, N$.
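As a hedged illustration, a direct implementation of the formula above might look like the following; the squared-distance form and the default margin value are assumptions of this sketch.

```python
import torch

def triplet_loss(anchor: torch.Tensor,
                 positive: torch.Tensor,
                 negative: torch.Tensor,
                 margin: float = 0.2) -> torch.Tensor:
    """Triplet loss over batches of 128-dim embeddings (shape: batch x 128).

    margin corresponds to the offset alpha in the formula above (value assumed)."""
    d_ap = torch.sum((anchor - positive) ** 2, dim=1)  # anchor-positive squared Euclidean distance
    d_an = torch.sum((anchor - negative) ** 2, dim=1)  # anchor-negative squared Euclidean distance
    # [d_ap - d_an + margin]_+ : only triplets that violate the margin contribute to the loss
    return torch.clamp(d_ap - d_an + margin, min=0.0).sum()
```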
Therefore, after the self-moving device has acquired the training samples and constructed the initial feature extraction network, it can train the network on a large number of samples with the triplet loss as the loss function, so that the trained network achieves the best prediction effect. This yields the pre-trained feature extraction network, which accurately extracts the features contained in an input image.
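For completeness, a sketch of one possible training loop combining the network and loss sketched above is shown; the optimiser, learning rate, epoch count, and the triplet data loader are all hypothetical.

```python
import torch

# `SimilarityFeatureExtractor` and `triplet_loss` refer to the sketches given earlier;
# `triplet_loader` is assumed to yield (anchor, positive, negative) image batches.
model = SimilarityFeatureExtractor(embedding_dim=128)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # optimiser and learning rate are assumptions

for epoch in range(10):
    for anchor_img, positive_img, negative_img in triplet_loader:
        loss = triplet_loss(model(anchor_img), model(positive_img), model(negative_img))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```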
Step S20, when the self-moving device detects that the repair object to be recognized exists in the initial image obtained by shooting the area to be cleaned, performing similar feature extraction on the first image of the repair object to be recognized to obtain a second feature.
After the self-moving device has generated and displayed the map, a user with a cleaning requirement can select the area to be cleaned in the displayed map through the application program, for example three rooms or the whole map. The application program generates a cleaning instruction based on the user's operation and sends it to the self-moving device.
After receiving the cleaning instruction, the self-moving device takes the user-selected area contained in the instruction as the area to be cleaned and starts cleaning it. In this embodiment, while cleaning, the self-moving device may perform object detection on the uncleaned portion and determine whether the area where a detected object is located needs to be cleaned; for example, a carpet may be cleaned in a pressurised mode, while shoes and socks need to be bypassed. During cleaning, if any object is found in the area to be cleaned after the initial image or video captured by the camera is recognised by an AI (artificial intelligence) recognition algorithm, the object is taken as a first object, an object frame is drawn around the detected first object in the captured image, and the content inside the frame is cropped to form an image containing the first object, which serves as the first image. This makes it convenient to subsequently extract features from the first image, compare them for similarity with the features stored in the preset feature database, and take the corresponding action according to the comparison result, thereby avoiding misrecognition and missed recognition and improving the cleaning accuracy of the self-moving device. The preset feature database is a database for storing feature data; a database is a repository that organises, stores, and manages data according to a data structure, i.e., an organised, shareable, uniformly managed collection of large amounts of data stored long-term inside a computer.
After the first image containing the first object is generated and a recognition object repairing instruction is received, the self-moving device inputs the first image into the pre-trained feature extraction network configured inside it. Feature extraction is performed on the input image based on the pre-trained network, and once the extraction is complete the network outputs a feature vector of a preset dimension as the second feature. The preset dimension may be 64, 128, 256, and so on; 128 dimensions is preferred in this embodiment. The feature extraction network in this embodiment may specifically be a CNN; since it must be deployed on the device, it may more specifically be a MobileNet, a ShuffleNet, or an EfficientNet-lite series network, preferably efficientnet-lite0 of the EfficientNet-lite series. MobileNet, ShuffleNet, and EfficientNet-lite are all existing neural network types and are not described in detail here.
Step S30, when the similarity between the first feature and the second feature is greater than or equal to a preset similarity threshold, updating the first image.
In step S30, the self-moving device compares the second feature with each first feature pre-stored in the preset feature database and determines whether the similarity between the second feature and any first feature is greater than a preset similarity threshold, which is a value that can be set according to actual requirements, such as 0.7, 0.8, or 0.9. In this way the object captured by the camera is identified more accurately, misrecognition and missed recognition are avoided, and the cleaning accuracy of the self-moving device is improved.
Further, in this embodiment both the first feature and the second feature are feature vectors of a preset dimension, for example 128. The step of comparing the similarity of the second feature with the first features pre-stored in the preset feature database comprises:
step S31, comparing similarity between the feature vector of the preset dimension in the second feature and a feature vector of a preset dimension of a first feature pre-stored in a preset feature database, and determining similarity between the first feature and the second feature.
When comparing the similarity of the second feature with the first features pre-stored in the preset feature database, the preset-dimension (for example 128-dimensional) feature vector of the second feature may be compared with the 128-dimensional feature vector of each first feature in the database. More specifically, the similarity of the feature vectors may be computed with the Euclidean distance or the cosine distance: the Euclidean distance between the 128-dimensional vector of each first feature and that of the second feature is calculated to obtain their similarity, or the cosine distance between them is calculated to obtain their similarity.
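A minimal sketch of this comparison is given below, reusing the PresetFeatureDatabase sketch from earlier. Cosine similarity is used here, and the 0.7 threshold mirrors the example values mentioned in this description; neither is a prescribed choice.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def find_similar_first_feature(second_feature: np.ndarray,
                               db: "PresetFeatureDatabase",
                               threshold: float = 0.7):
    """Return the stored first feature most similar to the new second feature,
    or None when no stored feature reaches the preset similarity threshold."""
    best_entry, best_sim = None, -1.0
    for entry in db.entries:
        sim = cosine_similarity(second_feature, entry.vector)
        if sim > best_sim:
            best_entry, best_sim = entry, sim
    return (best_entry, best_sim) if best_sim >= threshold else (None, best_sim)
```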
After the similarity comparison between the second feature and the first features pre-stored in the preset feature database, if a first feature whose similarity to the second feature is greater than the preset similarity threshold exists in the database, for example a similarity greater than 0.7, it indicates that the self-moving device previously misrecognised or missed this object during cleaning and recognition. If that first feature is a missed-object feature, the category of the missed object associated with it is determined as the category of the first object, and the cleaning mode for the area to be cleaned is then decided according to that category; in this embodiment the cleaning mode is either cleaning the area or bypassing it without cleaning. That is, whether the area where the first object is located needs to be cleaned is decided according to the category of the first object; if not, the area is bypassed and the remaining area to be cleaned is cleaned. If the first feature is a misrecognised-object feature, the first image is replaced with the correct image of the misrecognised object associated with that first feature, and the cleaning mode is decided according to the correct image: whether the area where the corresponding object is located needs to be cleaned is determined from the correct image, and if not, the area is bypassed while the remaining area is cleaned. When deciding whether to clean the area according to the category of the first object or the correct image, if the object, such as a carpet, does not affect either itself or the self-moving device, the area where it is located is cleaned, for example the whole carpet area; if the object, such as socks or shoes, may be affected or may affect the self-moving device, the area where it is located is not cleaned. In this way misrecognition and missed recognition are avoided and the cleaning accuracy of the self-moving device is improved.
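The two update branches described in this paragraph can be sketched as follows; the Detection record is hypothetical and only illustrates how a matched first feature drives the category or image replacement.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:                     # hypothetical record for the detected first object
    image_path: str                  # path of the cropped first image
    category: Optional[str] = None

def update_first_image(detection: Detection, match: "FirstFeature") -> Detection:
    """Apply the repair stored in the matched first feature to the current detection."""
    if match.kind == "missed":
        detection.category = match.category            # adopt the user-confirmed category
    elif match.kind == "misrecognized":
        detection.image_path = match.corrected_image   # replace the first image with the correct image
    return detection
```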
It can be understood that if the similarity between the second feature and every first feature in the preset feature database is less than or equal to the preset similarity threshold, it is determined that the detected first object involves no misrecognition or missed recognition and belongs to normal recognition. The first image and the category of the first object are then transmitted to the application program, which displays the first image at the corresponding position on the map as the display icon of the object and shows its category at the same time.
This embodiment provides a self-learning identification method for a self-moving device: a recognition object repairing instruction and a to-be-recognized object repairing image are received, and similarity feature extraction is performed on the image according to the instruction to obtain a first feature, where the image contains the repairing object to be recognized; when the self-moving device detects that the repairing object is present in an initial image captured of the area to be cleaned, similarity feature extraction is performed on a first image of the repairing object to obtain a second feature, where the initial image contains the first image; and when the similarity between the first feature and the second feature is greater than or equal to a preset similarity threshold, the first image is updated. When an object is detected during cleaning, an initial image with the recognised object is generated; after feature extraction on the generated image yields the second feature, it is compared for similarity with the first features stored in the preset feature database. When the similarity between a first feature and the second feature is greater than the preset similarity threshold, either the category of the missed object corresponding to that first feature is taken as the category of the object, or the correct image of the misrecognised object corresponding to that first feature replaces the first image. Each object in the area to be cleaned is thus first identified accurately and cleaning then proceeds according to the correct image or category of the object, so misrecognition and missed recognition are avoided and the cleaning accuracy of the self-moving device is improved.
Further, based on the first embodiment of the self-learning identification method of the self-mobile device of the present application, a second embodiment of the self-learning identification method of the self-mobile device of the present application is proposed, in which the second feature includes a missing identification object feature, and the step of determining the category of the missing identification object corresponding to the second feature as the category of the first object includes:
step S411, if the second characteristic is a missed identification object characteristic, acquiring the category of the missed identification object corresponding to the missed identification object characteristic;
step S412, displaying the first image, and determining the object class associated with the second feature as the class of the object to be identified.
After it is determined that the similarity between the first feature and the second feature is greater than the preset similarity threshold, if the first feature is a missed-object feature, meaning the currently detected obstacle was previously confirmed by the user as a missed object, the object category associated with the missed-object feature is obtained as the category of the missed object and determined as the category of the currently detected first object, so that whether the area corresponding to the first object needs to be cleaned can be decided from that category. For example, if the similarity between the features of a currently detected sock and the features of a sock pre-stored in the preset feature database is greater than 0.7, indicating that the same or a similar sock was missed before, the category the user set when confirming the missed recognition, namely sock, is determined as the category of the object.
Further, the first feature includes a feature of a misrecognized object, and the step of replacing the first image with a correct image of the misrecognized object according to the first feature includes:
step S421, if the second feature is a misrecognized object feature, acquiring a correct image of the misrecognized object corresponding to the misrecognized object feature;
step S422, the first image is replaced by the object image corresponding to the second feature.
After it is determined that the similarity between the first feature and the second feature is greater than the preset similarity threshold, if the first feature is a misrecognised-object feature, meaning the currently detected first object was previously confirmed by the user as a misrecognised object, the correct image the user associated with the misrecognised object is obtained, the first image is replaced by that correct image, and the correct image is transmitted to the application for display. For example, if a boot is present in the area to be cleaned but the self-moving device recognises it as a stocking, and the similarity between the extracted features and the features of a boot previously also misrecognised as a stocking is greater than 0.7, the correct image of the boot photographed by the user is obtained and the first image is replaced with it.
It can be understood that, after the similarity between the first feature and the second feature is determined to be greater than the preset similarity threshold, if the first feature is a misrecognised-object feature this embodiment may also simply filter out the first image, i.e., not display the misrecognised image's icon on the map. For example, if a stocking is erroneously recognised as a boot, the originally generated image is filtered out and not displayed on the map.
After the similarity between the first feature and the second feature is determined to be greater than the preset similarity threshold, whether the area corresponding to the first object needs to be cleaned can be decided according to the processing applicable to misrecognition or missed recognition, so misrecognition and missed recognition are avoided and the cleaning accuracy of the self-moving device is improved.
In an embodiment of the present application, referring to fig. 3, fig. 3 is a flowchart of an embodiment of the self-learning identification method of the self-moving device. In this embodiment the self-moving device is a sweeper and the application program is referred to as the app. After the sweeper detects an obstacle and recognises it as a shoe, the image of the shoe is uploaded to the app and displayed on the map. The user then judges whether the shoe was misrecognised; if not, nothing is done. If it was, the user clicks the misrecognition self-learning repairing button in the app, and the app issues the misrecognised first image to the sweeper as a small picture. After receiving the small picture, the sweeper passes it from the main program to the AI program, which runs the pre-trained similarity feature extraction network, outputs the 128-dimensional feature, and stores it in the local misrecognition feature database. When the next recognition starts, i.e., at the beginning of the next round of cleaning, if an obstacle is detected and a small picture is obtained from it, the small picture is input into the similarity feature extraction network, which outputs its 128-dimensional feature. This feature is compared for similarity against the local misrecognition feature library and it is judged whether the preset similarity threshold is met. If the similarity with some stored feature is large, for example greater than 0.7, the small picture is filtered out and not displayed on the app map; if the similarity with all features in the local misrecognition feature library is small, for example less than or equal to 0.7, the normal recognition flow proceeds, i.e., it is determined that there is no misrecognition or missed recognition, and the small picture is output to the app map for display.
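This filtering decision can be sketched as follows, reusing find_similar_first_feature from above; the helper and the 0.7 threshold are illustrative only.

```python
def should_display_on_map(small_image_feature, misrecognition_db, threshold: float = 0.7) -> bool:
    """Misrecognition flow: suppress the map icon when the new detection matches
    a feature the user previously flagged as misrecognised."""
    match, _ = find_similar_first_feature(small_image_feature, misrecognition_db, threshold)
    return match is None  # True -> normal flow, display the small picture on the app map
```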
In another embodiment of the present application, referring to fig. 4, fig. 4 is a schematic flowchart of another embodiment of the self-learning identification method of a self-moving device of the present application. In this embodiment, the self-moving device is a sweeper and the application program is referred to as the app. Specifically, since an object with a fixed appearance may often go unrecognized, the user can control the machine, that is, the sweeper, through the app to take a picture of the object. In the photographing interface, the user can draw a frame around the object, select the object category after framing it, and click the missed-recognition object repair button; the image of the framed object is then sent to the sweeper end as a thumbnail together with the category information. After receiving the thumbnail, the sweeper passes it from the main program to the AI program. The AI program runs the pre-trained similarity feature extraction network, outputs a 128-dimensional feature, and stores the feature in a local missed-recognition feature database. When the next recognition starts, that is, when the next round of cleaning work begins, if an obstacle is detected and a thumbnail is obtained from it, the thumbnail is input into the similarity feature extraction network, and the network outputs a 128-dimensional feature. This feature is compared for similarity against the local missed-recognition feature library to judge whether the preset similarity threshold is met. If the similarity with some stored feature is high, for example greater than 0.7, the category corresponding to that feature and the thumbnail are output; if the similarity with all features in the local missed-recognition feature library is low, for example less than or equal to 0.7, the normal recognition process is followed, that is, there is no misrecognition or missed recognition, and the thumbnail is output to the map of the app for display.
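The missed-recognition flow differs mainly in that the user-selected category is stored alongside each feature and is returned on a match. A minimal sketch under the same assumptions (a caller-supplied extractor producing 128-dimensional features, cosine similarity, 0.7 threshold) is given below; all names are illustrative.

```python
import numpy as np

missed_features: list[tuple[np.ndarray, str]] = []  # local missed-recognition feature database


def register_missed_object(framed_thumbnail, category: str, extract_feature) -> None:
    """Called when the user frames an object in the app, picks its category and
    presses the missed-recognition object repair button.
    `extract_feature` is any callable mapping an image to a 128-d np.ndarray."""
    missed_features.append((extract_feature(framed_thumbnail), category))


def recognize(thumbnail, extract_feature, threshold: float = 0.7):
    """Return (category, thumbnail) when a stored missed-recognition feature
    matches; otherwise None, meaning the normal recognition process applies."""
    feature = extract_feature(thumbnail)
    for stored_feature, category in missed_features:
        sim = float(np.dot(feature, stored_feature) /
                    (np.linalg.norm(feature) * np.linalg.norm(stored_feature) + 1e-12))
        if sim > threshold:
            return category, thumbnail  # output the category and thumbnail to the map
    return None
```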
Further, the present application also provides a cleaning apparatus of a self-moving device.
Referring to fig. 5, fig. 5 is a schematic diagram of the functional modules of a cleaning apparatus of a self-moving device according to a first embodiment of the present application.
The cleaning apparatus of the self-moving device comprises:
the first feature extraction module 10 is configured to receive an identification object restoration instruction and an object restoration image to be identified, and perform similar feature extraction on the object restoration image to be identified according to the identification object restoration instruction to obtain a first feature; wherein the object to be recognized repair image includes a repair object to be recognized.
The second feature extraction module 20 is configured to, when the self-moving device detects that the object to be identified and repaired exists in an initial image obtained by shooting the area to be cleaned, perform similar feature extraction on the first image of the object to be identified and repaired to obtain a second feature; wherein the initial image comprises the first image.
An updating module 30, configured to update the first image when the similarity between the first feature and the second feature is greater than or equal to a preset similarity threshold.
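Purely as an illustration of how these three modules could fit together, the following minimal Python sketch composes a first feature extraction module, a second feature extraction module and an updating module around a shared feature extractor; all class, method and parameter names are assumptions made for illustration, not taken from the patent.

```python
import numpy as np


class CleaningApparatus:
    """Illustrative composition of the three functional modules shown in fig. 5."""

    def __init__(self, feature_extractor, similarity_threshold: float = 0.7):
        self.extract = feature_extractor      # shared similarity feature extraction network
        self.threshold = similarity_threshold
        self.first_features: list[np.ndarray] = []

    def first_feature_extraction(self, repair_image) -> None:
        """First module: extract and keep the first feature from the repair image
        supplied with the repair instruction."""
        self.first_features.append(self.extract(repair_image))

    def second_feature_extraction(self, first_image) -> np.ndarray:
        """Second module: extract the second feature from the first image of the
        detected object."""
        return self.extract(first_image)

    def update(self, first_image, second_feature):
        """Updating module: update (here, suppress) the first image when any stored
        first feature is sufficiently similar to the second feature."""
        for f in self.first_features:
            sim = float(np.dot(f, second_feature) /
                        (np.linalg.norm(f) * np.linalg.norm(second_feature) + 1e-12))
            if sim >= self.threshold:
                return None  # e.g. filter the image or replace it with a correct one
        return first_image
```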
Furthermore, the present application also provides a medium, preferably a computer readable storage medium, on which a cleaning program of a self-moving device is stored, which when executed by a processor implements the steps of the embodiments of the self-learning identification method of a self-moving device described above.
In the embodiments of the self-moving device and the computer-readable storage medium of the present application, all technical features of the embodiments of the self-learning identification method of the self-moving device are included, and the description and explanation contents are basically the same as those of the embodiments of the self-learning identification method of the self-moving device, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present application or a part contributing to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g., a ROM/RAM, a magnetic disk, and an optical disk), and includes a plurality of instructions for enabling a terminal device (which may be a fixed terminal, such as an internet of things smart device including smart homes, such as a smart air conditioner, a smart lamp, a smart power supply, and a smart router, or a mobile terminal, including a smart phone, a wearable networked AR/VR device, a smart sound box, and a network device such as an auto-driven automobile) to execute the method according to the embodiments of the present application.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1. A self-learning identification method of a self-moving device is characterized by comprising the following steps:
receiving an identification object restoration instruction and an object restoration image to be identified, and extracting similar features of the object restoration image to be identified according to the identification object restoration instruction to obtain a first feature; wherein the object to be recognized repairing image comprises a repairing object to be recognized;
when the self-moving equipment detects that the to-be-identified restoration object exists in an initial image obtained by shooting the to-be-cleaned area, performing similar feature extraction on a first image of the to-be-identified restoration object to obtain a second feature; wherein the initial image comprises the first image;
and when the similarity between the first feature and the second feature is greater than or equal to a preset similarity threshold, updating the first image.
2. The self-learning identification method of a self-moving device of claim 1, wherein the step of updating the first image comprises:
and replacing the first image according to the object image corresponding to the second feature.
3. The self-learning identification method of the self-moving device of claim 1, wherein the step of updating the first image further comprises:
and displaying the first image, and determining the object class corresponding to and associated with the second feature as the class of the object to be identified.
4. The self-learning identification method of self-moving equipment according to claim 1, wherein the step of receiving an identification object restoration instruction and an object restoration image to be identified, and performing similar feature extraction on the object restoration image to be identified according to the identification object restoration instruction to obtain a first feature comprises the steps of:
receiving the identification object repairing instruction and the object to be identified repairing image;
and if the identification object repairing instruction is a misrecognition self-learning repairing instruction, extracting similar features of the object repairing image to be identified according to the misrecognition self-learning repairing instruction to obtain the first feature, and storing the first feature in a preset feature database. (According to the content of the human-machine interaction, the features of the image corresponding to the misrecognized object are extracted and stored in a database for similarity comparison during the next cleaning, thereby avoiding misrecognition and improving the cleaning accuracy of the self-moving device.)
5. The self-learning identification method of self-moving equipment according to claim 1, wherein the step of receiving an identification object restoration instruction and an object restoration image to be identified, and performing similar feature extraction on the object restoration image to be identified according to the identification object restoration instruction to obtain a first feature further comprises:
receiving the identification object repairing instruction and the to-be-identified object repairing image;
if the identification object restoration instruction is a missed identification object restoration instruction, performing similar feature extraction on the object restoration image to be identified according to the missed identification object restoration instruction to obtain the first feature, and acquiring category information of the first feature;
and storing the first feature and the category information of the first feature in a preset feature database in an associated manner. (According to the content of the human-machine interaction, the features of the image corresponding to the missed-recognition object are extracted and stored in a database for similarity comparison during the next cleaning, thereby avoiding missed recognition and improving the cleaning accuracy of the self-moving device.)
6. The self-learning identification method of self-moving device of claim 1, wherein the step of updating the first image when the similarity of the first feature and the second feature is greater than or equal to a preset similarity threshold comprises:
and performing similarity comparison between the feature vector of the preset dimension in the second feature and the feature vector of the preset dimension of the first feature prestored in a preset feature database, respectively, and determining the similarity between the first feature and the second feature. (A specific similarity comparison process: the similarity comparison is performed through feature vectors of a preset dimension, which makes the scheme clearer.)
7. The self-learning identification method of self-moving equipment according to claim 1, wherein before the step of receiving the identification object restoration instruction and the object restoration image to be identified, and performing similar feature extraction on the object restoration image to be identified according to the identification object restoration instruction to obtain the first feature, the method further comprises:
acquiring an object image and marking information thereof as a training sample;
constructing an initial feature extraction network;
training the initial feature extraction network by using a triplet loss function according to the training sample to obtain a pre-trained feature extraction network;
the step of receiving the identification object restoration instruction and the object restoration image to be identified, and extracting the similar features of the object restoration image to be identified according to the identification object restoration instruction to obtain the first features comprises the following steps:
receiving an identification object repairing instruction and an object repairing image to be identified;
and based on the pre-trained feature extraction network, performing similar feature extraction on the object repairing image to be identified according to the identification object repairing instruction to obtain the first feature. (The feature extraction network is constructed and trained in advance; the main difference is that a triplet loss function is adopted for training. A minimal sketch of this loss follows the claims.)
8. A cleaning apparatus for a self-moving device, the cleaning apparatus comprising:
the first feature extraction module is used for receiving an identification object restoration instruction and an object restoration image to be identified, and extracting similar features of the object restoration image to be identified according to the identification object restoration instruction to obtain a first feature; wherein the object to be recognized repairing image comprises a repairing object to be recognized;
the second feature extraction module is used for performing similar feature extraction on the first image of the object to be identified and repaired to obtain a second feature when the self-moving equipment detects that the object to be identified and repaired exists in the initial image obtained by shooting the area to be cleaned; wherein the initial image comprises the first image;
and the updating module is used for updating the first image when the similarity between the first feature and the second feature is greater than or equal to a preset similarity threshold.
9. A self-moving device, characterized in that the self-moving device comprises a memory, a processor and a cleaning program of the self-moving device stored on the memory and executable on the processor, the cleaning program of the self-moving device realizing the steps of the self-learning identification method of the self-moving device according to any one of claims 1-7 when executed by the processor.
10. A medium, which is a computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a cleaning program of a self-moving device, which when executed by a processor implements the steps of the self-learning identification method of a self-moving device according to any one of claims 1-7.
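As referenced in the note to claim 7, the triplet loss can be written out explicitly. The following is a minimal numpy sketch, assuming batches of 128-dimensional anchor/positive/negative features and an illustrative margin of 0.2; the actual network architecture, margin and training details are not specified in this document.

```python
import numpy as np


def triplet_loss(anchor: np.ndarray,
                 positive: np.ndarray,
                 negative: np.ndarray,
                 margin: float = 0.2) -> float:
    """Triplet loss over batches of shape (N, 128).

    Pulls an anchor feature towards a positive sample of the same object and
    pushes it away from a negative sample of a different object by at least
    `margin` (hinge on the distance difference).
    """
    d_pos = np.linalg.norm(anchor - positive, axis=1)  # anchor-positive distances
    d_neg = np.linalg.norm(anchor - negative, axis=1)  # anchor-negative distances
    return float(np.maximum(d_pos - d_neg + margin, 0.0).mean())
```

Minimising this loss drives features of the same object closer together than features of different objects, which is what makes the later similarity comparison against the local feature libraries meaningful.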
CN202210261120.1A 2022-03-16 2022-03-16 Self-learning identification method and device for self-moving equipment, self-moving equipment and medium Pending CN114743122A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210261120.1A CN114743122A (en) 2022-03-16 2022-03-16 Self-learning identification method and device for self-moving equipment, self-moving equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210261120.1A CN114743122A (en) 2022-03-16 2022-03-16 Self-learning identification method and device for self-moving equipment, self-moving equipment and medium

Publications (1)

Publication Number Publication Date
CN114743122A true CN114743122A (en) 2022-07-12

Family

ID=82277912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210261120.1A Pending CN114743122A (en) 2022-03-16 2022-03-16 Self-learning identification method and device for self-moving equipment, self-moving equipment and medium

Country Status (1)

Country Link
CN (1) CN114743122A (en)

Similar Documents

Publication Publication Date Title
CN110974088B (en) Sweeping robot control method, sweeping robot and storage medium
US10198823B1 (en) Segmentation of object image data from background image data
US9501725B2 (en) Interactive and automatic 3-D object scanning method for the purpose of database creation
TWI684136B (en) Robot, control system and method for operating the robot
US8402050B2 (en) Apparatus and method for recognizing objects using filter information
EP2495632A1 (en) Map generating and updating method for mobile robot position recognition
CN111465960A (en) Image acquisition apparatus and method of controlling image acquisition apparatus
CN110134117B (en) Mobile robot repositioning method, mobile robot and electronic equipment
CN112711249B (en) Robot positioning method and device, intelligent robot and storage medium
CN107486863A (en) A kind of robot active exchange method based on perception
CN111476894A (en) Three-dimensional semantic map construction method and device, storage medium and electronic equipment
Heya et al. Image processing based indoor localization system for assisting visually impaired people
CN112070053B (en) Background image self-updating method, device, equipment and storage medium
JP2018120283A (en) Information processing device, information processing method and program
CN115346256A (en) Robot searching method and system
CN111630346B (en) Improved positioning of mobile devices based on images and radio words
CN108881846B (en) Information fusion method and device and computer readable storage medium
CN107862852B (en) Intelligent remote control device adaptive to multiple devices based on position matching and control method
CN117077081A (en) Human body pointing prediction method, device, robot and storage medium
CN114743122A (en) Self-learning identification method and device for self-moving equipment, self-moving equipment and medium
JP6773825B2 (en) Learning device, learning method, learning program, and object recognition device
CN114935341B (en) Novel SLAM navigation computation video identification method and device
CN116992377A (en) Method and system for graph level anomaly detection
US11551379B2 (en) Learning template representation libraries
CN114089364A (en) Integrated sensing system device and implementation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination