CN111950431B - Object searching method and device

Object searching method and device

Info

Publication number: CN111950431B
Application number: CN202010788797.1A
Authority: CN (China)
Prior art keywords: working area, reminding, determining, robot, current
Legal status: Active (granted)
Original language: Chinese (zh)
Other versions: CN111950431A
Inventors: 范钦臣, 刘宇翔
Assignee: Beijing Orion Star Technology Co Ltd
Application filed by Beijing Orion Star Technology Co Ltd with priority to CN202010788797.1A; published as CN111950431A, granted and published as CN111950431B.


Classifications

    • G06V20/10 — Image or video recognition or understanding; scenes and scene-specific elements; terrestrial scenes
    • G05D1/0231 — Control of position or course in two dimensions, specially adapted to land vehicles, using optical position detecting means
    • G06F18/22 — Pattern recognition; analysing; matching criteria, e.g. proximity measures
    • G06V40/172 — Recognition of human faces; classification, e.g. identification
    • G08B21/24 — Status alarms; reminder alarms, e.g. anti-loss alarms
    • G06V2201/07 — Indexing scheme relating to image or video recognition or understanding; target detection

Abstract

The application discloses an object searching method and device. When the reminding service triggering condition of a target reminding event is met, the robot is controlled to collect an environment image of the current working area; if the characteristic information of a target object in the environment image matches the characteristic information of the reminding object in the target reminding event, it is determined that the reminding object is found; if the characteristic information of no object in the environment image matches that of the reminding object, at least one unsearched working area is determined in sequence as the new current working area, and the step of controlling the robot to collect an environment image of the current working area is executed again, until it is determined that the reminding object is found. With this method, the user to be reminded can be found actively, without the user carrying a terminal device and without monitoring the entire working area, so that the reminding service is delivered, the robot behaves more intelligently, and the user's experience with the robot is improved.

Description

Object searching method and device
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to an object searching method and apparatus.
Background
Reminding functions are currently found mainly on terminal devices such as mobile phones and smart speakers. Thanks to advances in speech recognition, reminding events such as alarms can be set quickly not only through conventional on-screen interaction but also through voice interaction and parsing of the user's intent; and when an alarm fires, besides conventional personalized music and tones, the reminding event can be restated through speech synthesis, for example addressed to the "owner".
However, current reminding functions either require the user to carry the device or can only remind the user successfully within the working range of the terminal device, such as a smart speaker; nor can they pick out, from among multiple users, the specific user to be reminded by the current reminding service. This brings great inconvenience to the user.
Disclosure of Invention
The embodiment of the application provides an object searching method and device, which solve the problems existing in the prior art and improve the use experience of a user on a robot.
In a first aspect, an object finding method is provided, which may include:
when the reminding service triggering condition of the target reminding event is met, controlling the robot to acquire an environment image of the current working area;
If the characteristic information of the target object in the environment image is matched with the characteristic information of the reminding object in the target reminding event, determining to find the reminding object;
if the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, determining at least one unsearched working area as a new current working area in sequence, and returning to the step of executing the control robot to collect the environment image of the current working area until the reminding object is determined to be found.
In one possible implementation, when a reminder service trigger condition of a target reminder event is met, controlling the robot to collect an environmental image of a current working area includes:
when the current time is a preset time before the reminding time, controlling the robot to acquire an environment image of the current working area.
In one possible implementation, determining the at least one work area not found as a new current work area in turn includes:
determining a current moving path according to the position of the unsearched at least one working area;
and controlling the robot to move to a movement termination position of the current movement path according to the current movement path, and determining a working area to which the movement termination position belongs as a new current working area.
In one possible implementation, determining the current movement path according to the position of the unsearched at least one work area includes:
determining the current position of the robot as a movement starting position;
calculating the distance between the determined movement starting position and the position of each unsearched working area;
determining the position of the working area corresponding to the calculated minimum distance as a movement termination position;
and determining a current moving path based on the moving starting position and the moving ending position.
In one possible implementation, determining the current movement path according to the position of the unsearched at least one work area includes:
determining the current position of the robot as a movement starting position;
determining a movement termination position according to the position of the unsearched at least one working area and the preset working area searching priority corresponding to each object;
and determining a current moving path based on the moving starting position and the moving ending position.
In one possible implementation, determining the current movement path according to the position of the unsearched at least one work area includes:
determining the current position of the robot as a movement starting position;
Determining a movement termination position according to the position of the unsearched at least one working area and a preset working area searching sequence;
and determining a current moving path based on the moving starting position and the moving ending position.
In one possible implementation, before determining the at least one work area not found as the new current work area in turn, the method further includes:
controlling the robot to output an inquiry voice, wherein the inquiry voice is used for asking whether the queried object is the reminding object;
if the feature information of each object in the environment image is not matched with the feature information of the reminding object, determining at least one unsearched working area as a new current working area in sequence, wherein the method comprises the following steps:
if the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object and the reply voice is not received within a preset time period, sequentially determining at least one unsearched working area as a new current working area;
if the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, negative reply voice is received, and the voiceprint characteristics of the negative reply voice are not matched with the voiceprint characteristics of the reminding object, determining at least one unsearched working area as a new current working area in sequence.
In one possible implementation, the method further comprises:
if the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, but positive reply voice is received, determining that the object corresponding to the positive reply voice is the reminding object;
if the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, and negative reply voice is received, but the voiceprint characteristics of the negative reply voice are matched with the voiceprint characteristics of the reminding object, determining that the object corresponding to the negative reply voice is the reminding object.
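The four reply cases above reduce to a small decision rule. The sketch below is illustrative; the function name and return values are not identifiers from the patent:

```python
def resolve_query_reply(features_matched, reply, voiceprint_matched):
    """Decide what to do after the robot outputs the inquiry voice.

    reply is None (no reply within the preset period), "yes" (affirmative),
    or "no" (negative); voiceprint_matched says whether the reply's
    voiceprint matches the stored voiceprint of the reminding object.
    """
    if features_matched:
        return "found"                 # image features already matched
    if reply is None:
        return "search_next_area"      # no reply: move to an unsearched area
    if reply == "yes":
        return "found"                 # affirmative reply identifies the object
    # Negative reply: per the text, the voiceprint overrides the words,
    # so a matching voiceprint still identifies the reminding object.
    return "found" if voiceprint_matched else "search_next_area"
```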
In one possible implementation, after determining to find the reminder object, the method further includes:
and controlling the robot to output the reminding content of the target reminding event to the reminding object.
In one possible implementation, before controlling the robot to acquire the environmental image of the current working area, the method further includes:
detecting whether an object identifier of a reminding object in the target reminding event exists in a stored robot service object list; the robot service object list comprises object identifiers of registered objects;
Controlling the robot to collect an environmental image of the current working area, comprising:
and when the object identification of the reminding object exists in the robot service object list, controlling the robot to acquire an environment image of the current working area.
In a second aspect, an object finding apparatus is provided, which may include: a control unit and a determination unit;
the control unit is used for controlling the robot to acquire an environment image of the current working area when the reminding service triggering condition of the target reminding event is met;
the determining unit is further configured to determine that the reminding object is found if the feature information of the target object in the environmental image matches the feature information of the reminding object in the target reminding event;
and if the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, determining at least one unsearched working area as a new current working area in sequence, and triggering the control unit to collect the environment image of the current working area until the reminding object is determined to be found.
In one possible implementation, the control unit is specifically configured to control the robot to collect an environmental image of the current working area when the current time is a preset time before the reminding time is reached.
In a possible implementation, the determining unit is specifically configured to determine a current movement path according to a position of the at least one working area that is not searched;
the control unit is further used for controlling the robot to move to a movement termination position of the current movement path according to the current movement path;
the determining unit is further specifically configured to determine a working area to which the movement termination position belongs as a new current working area.
In a possible implementation, the determining unit is specifically configured to determine the current position of the robot as a movement starting position;
calculating the distance between the determined movement starting position and the position of each unsearched working area;
determining the position of the working area corresponding to the calculated minimum distance as a movement termination position;
and determining a current moving path based on the moving starting position and the moving ending position.
In a possible implementation, the determining unit is specifically configured to determine the current position of the robot as a movement starting position;
and determining a movement termination position according to the position of the unsearched at least one working area and the preset working area searching priority corresponding to each object;
And determining a current moving path based on the moving starting position and the moving ending position.
In a possible implementation, the determining unit is specifically configured to determine the current position of the robot as a movement starting position;
determining a movement termination position according to the position of the unsearched at least one working area and a preset working area searching sequence;
and determining a current moving path based on the moving starting position and the moving ending position.
In one possible implementation, the control unit is further configured to control the robot to output an inquiry voice for the reminding object;
the determining unit is specifically configured to determine, in sequence, at least one working area that is not searched as a new current working area if the feature information of each object in the environmental image is not matched with the feature information of the reminding object and no reply voice is received within a preset time period;
if the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, negative reply voice is received, and the voiceprint characteristics of the negative reply voice are not matched with the voiceprint characteristics of the reminding object, determining at least one unsearched working area as a new current working area in sequence.
In one possible implementation, the determining unit is further configured to determine that an object corresponding to the affirmative reply voice is the reminding object if the feature information of each object in the environmental image is not matched with the feature information of the reminding object, but the affirmative reply voice is received;
if the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, and negative reply voice is received, but the voiceprint characteristics of the negative reply voice are matched with the voiceprint characteristics of the reminding object, determining that the object corresponding to the negative reply voice is the reminding object.
In one possible implementation, the control unit is further configured to control the robot to output the reminder content of the target reminder event to the reminder object.
In one possible implementation, the apparatus further includes: a detection unit;
the detection unit is used for detecting whether the object identification of the reminding object in the target reminding event exists in the stored robot service object list; the robot service object list comprises object identifiers of registered objects;
the control unit is specifically configured to control the robot to collect an environmental image of the current working area when the object identifier of the reminding object exists in the service object list of the robot.
In a third aspect, an object finding device is provided, comprising at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method steps of any one of the first aspects.
In a fourth aspect, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the method steps of any of the first aspects.
In a fifth aspect, a computer program product is provided which, when invoked for execution by an object searching device, causes the object searching device to perform the method steps of any of the first aspects above.
According to the object searching method provided by the embodiment of the invention, when the reminding service triggering condition of the target reminding event is met, the robot is controlled to acquire the environment image of the current working area; if the characteristic information of the target object in the environment image is matched with the characteristic information of the reminding object in the target reminding event, it is determined that the reminding object is found; if the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, at least one unsearched working area is determined in sequence as a new current working area, and the step of controlling the robot to collect the environment image of the current working area is executed again until it is determined that the reminding object is found. Compared with the prior art, the method can actively find the user to be reminded without the user carrying a terminal device and without monitoring the entire working area, thereby delivering the reminding service to the user, making the robot more intelligent, and improving the user's experience with the robot.
Drawings
Fig. 1 is a schematic flow chart of an object searching method according to an embodiment of the present invention;
fig. 2A is a schematic diagram of path planning according to an embodiment of the present invention;
FIG. 2B is a schematic diagram of another path planning according to an embodiment of the present invention;
FIG. 2C is a schematic diagram of another path planning according to an embodiment of the present invention;
FIG. 2D is a schematic diagram of another path planning according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an object searching device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an object searching device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
The object searching method provided by the embodiment of the invention can be applied to the robot control equipment but is not limited to the application.
The robot control equipment judges whether to control the robot to acquire the environment image of the current working area according to whether the reminding service triggering condition of the target reminding event is met or not; if the characteristic information of the target object in the environment image is matched with the characteristic information of the reminding object in the target reminding event, determining that the reminding object is found; if the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, determining at least one unsearched working area as a new current working area in sequence, and returning to the step of executing the control robot to collect the environment image of the current working area, namely, sequentially searching each unsearched working area until the reminding object is determined to be searched. The at least one working area which is not searched for is other working areas except the searched working area in the robot working environment information.
The robot control device may be a control device inside the robot or may be a control device outside the robot, such as a server communicatively connected to the robot.
The robot working environment information in the embodiment of the invention refers to a working environment map obtained by a robot through a self-mapping function or a server, wherein the working environment map comprises, but is not limited to, a laser map and a visual map. Wherein, the map can include information such as the position of at least one marked working area, the mark of each working area and the like.
If the working environment map is an indoor map of a home scene and each room is a working area, the map includes the position of each room and the label of each room, such as kitchen, living room, primary bedroom and secondary bedroom, or labels such as grandma's bedroom and the parents' bedroom.
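As a minimal sketch, such a working environment map can be represented as labeled areas with marked positions. The layout, coordinates, and helper name below are invented for illustration; the patent only requires that each area have a position and a label:

```python
# Hypothetical indoor map: each work area has a marked position (e.g. a
# preset anchor point in map coordinates) and one or more labels.
work_environment_map = {
    "kitchen":           {"position": (2.0, 1.0), "labels": ["kitchen"]},
    "living room":       {"position": (0.0, 0.0), "labels": ["living room"]},
    "primary bedroom":   {"position": (4.0, 3.0),
                          "labels": ["primary bedroom", "parents' bedroom"]},
    "secondary bedroom": {"position": (4.0, 6.0),
                          "labels": ["secondary bedroom", "grandma's bedroom"]},
}

def areas_with_label(env_map, label):
    """Look up work areas by label, e.g. to resolve "grandma's bedroom"."""
    return [name for name, area in env_map.items() if label in area["labels"]]
```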
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it being understood that the preferred embodiments described herein are for illustration and explanation only, and are not intended to limit the present invention, and the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Fig. 1 is a flow chart of an object searching method according to an embodiment of the present invention. As shown in fig. 1, the method may include:
step S110, judging whether the reminding service triggering condition of the target reminding event is met.
In specific implementation, a reminding event set by a user and a reminding service triggering condition corresponding to the corresponding reminding event are received in advance. Meanwhile, checking whether the object identification of the reminding object in the reminding event exists in the stored robot service object list, if so, outputting indication information of successful setting of the reminding event, and if not, outputting indication information of non-existence of the reminding object to indicate that a user registers the reminding object in the robot service object list.
The reminding service triggering condition corresponding to a reminding event may be that the current time has reached a preset duration before the reminding time, for example, five minutes before the reminding time.
And when the current time is a preset time before the reminding time, determining that the reminding service triggering condition is met.
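A minimal check for this trigger condition might look as follows, assuming a five-minute lead time (the patent leaves the preset duration open):

```python
from datetime import datetime, timedelta

def trigger_condition_met(now, remind_at, lead=timedelta(minutes=5)):
    """True once the current time has reached the preset duration before
    the reminding time, and the reminding time itself has not yet passed."""
    return remind_at - lead <= now < remind_at
```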
Wherein, for setting the reminding event, at least the following modes can be included:
in the first mode, an alarm setting instruction mode is adopted.
The user can directly give the robot a voice instruction such as "remind grandma to take her medicine at two o'clock"; or connect a mobile phone to the robot and set an alarm on the phone, such as "call a parent at 10 am on Saturday"; or manually set an alarm directly on the robot's screen.
The robot can extract the reminding object, the reminding time and the reminding content from the alarm setting instruction, and rules can be set to complete the reminding event's information when extraction is incomplete. For example, if the "reminding object" cannot be obtained from the instruction, the object that issued the instruction may be taken as the reminding object; alternatively, the reminding object may be confirmed through multiple rounds of interaction, such as "Owner, should I remind you to call your parent at 10 am?".
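The completion rule above (defaulting the reminding object to the instruction's speaker) can be sketched as a short function; the slot names are hypothetical:

```python
def build_reminder_event(slots, speaker_id):
    """slots: fields extracted from the alarm-setting instruction; the
    'object' slot may be missing. Falls back to the speaker as the
    reminding object, as the completion rule suggests."""
    return {
        "object": slots.get("object", speaker_id),
        "time": slots["time"],
        "content": slots["content"],
    }
```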
In the second mode, an IFTTT task is set through the IFTTT engine in the robot.
An IFTTT task is set by specifying, through the sentence pattern "If This Then That", the triggering condition (This) and the corresponding action (That) of the task. When the triggering condition is detected to be met, the corresponding action is executed automatically.
The user can set the corresponding IFTTT task for the robot voice setting instruction, can set the IFTTT task through an IFTTT template on a mobile phone connected with the robot, or can directly select the IFTTT task recommended by the target application program to take effect.
It should be noted that, whether the alarm setting mode or the IFTTT task setting mode is used, the basic information such as the "reminding object", "reminding time" and "reminding content" needs to be extracted and recorded to generate the task to be executed by the robot. When the robot is powered on and working, it needs to detect in real time whether the task execution condition is met, i.e. whether the reminding service triggering condition is met.
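An "If This Then That" task reduces to a trigger predicate paired with an action, polled the way the text describes the robot checking execution conditions in real time. This is a sketch under assumed names, not the patent's implementation:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class IftttTask:
    trigger: Callable[[], bool]   # the "This": is the condition met?
    action: Callable[[], None]    # the "That": what to execute

def poll_tasks(tasks: List[IftttTask]) -> None:
    """One polling pass: run every task whose trigger condition holds."""
    for task in tasks:
        if task.trigger():
            task.action()
```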
The reminding event may have other setting modes besides an alarm clock setting mode and an IFTTT task setting mode, for example, a child learning plan setting mode, a learning schedule setting mode, and the like, and the embodiment of the invention is not limited herein.
And step 120, controlling the robot to acquire an environment image of the current working area when the reminding service triggering condition of the target reminding event is met.
In a specific implementation, before executing the step, checking whether a reminding object of the target reminding event exists in a service object list of the robot includes:
and detecting whether an object identifier of a reminding object in the target reminding event exists in the stored robot service object list.
If the object identification of the reminding object does not exist in the robot service object list, ending the flow.
And if the object identification of the reminding object exists in the robot service object list, controlling the robot to acquire the environment image of the current working area.
The robot control device can control the robot to rotate its body at its current position in the current working area to collect the environment image of the current working area, or first control the robot to move to a preset anchor point of the current working area and then rotate its body to collect the image. Rotating the body means that the robot turns in place to change its facing angle.
Step S130, determining whether the reminding object is found according to the matching result of the characteristic information of each object in the environment image and the characteristic information of the reminding object.
The robot control device can upload the collected environment image to the server so that the server performs feature detection on the environment image to obtain a detection result and send the detection result to the robot control device, wherein the feature detection can comprise human feature detection and human face feature detection, namely human body recognition and human face recognition;
if the detection result is that the characteristic information of the target object in the environment image is matched with the characteristic information of the reminding object in the target reminding event, determining that the reminding object is found;
if the detection result is that the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, determining that the reminding object is not found, sequentially determining at least one working area which is not found as a new current working area, and returning to the step of executing the control robot to collect the environment image of the current working area until the reminding object is determined to be found.
In a specific implementation, human body feature detection is performed on the environment image, if the human body feature detection result includes the detected object and the position of the corresponding object, the robot control device can determine whether the object is in the identifiable region of the robot according to the position of the object in the human body feature detection result, if so, human face feature detection is performed on the object, and whether the object is a reminding object to be searched or not is determined.
If the object is within the recognizable region of the robot, the robot is controlled to stay still; if the distance between the object and the robot exceeds the recognizable region, the robot is controlled to approach the object so that the object falls within the recognizable region; and if the distance is below the recognizable region's lower bound, the robot is controlled to move away from the object so that the object falls within the recognizable region.
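The hold/approach/retreat rule can be written as a small controller; the range bounds are assumed parameters of the robot's recognizable region, not values given by the patent:

```python
def keep_in_recognizable_range(distance, min_range, max_range):
    """Return the motion command that keeps the object inside the robot's
    recognizable region for face feature detection."""
    if distance > max_range:
        return "approach"   # too far: move toward the object
    if distance < min_range:
        return "retreat"    # too close: move away from the object
    return "hold"           # already recognizable: keep still
```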
If the face features of the target object in the face feature detection result are matched with the stored face features of the reminding object, the reminding object is determined to be found in the current working area. And simultaneously, after the reminding object is determined to be found, stopping traversing the rest of the unsearched working areas.
If the human body feature detection result shows that no object exists in the environment image, or the face features of every object in the face feature detection result fail to match the stored face features of the reminding object, it is determined that the reminding object has not been found in the current working area. In this case, at least one unsearched working area is determined in sequence as the new current working area, so that the remaining unsearched working areas are traversed until the reminding object is determined to be found.
Based on the above embodiment, in a specific implementation, the current moving path may be determined according to the positions of the at least one unsearched working area; it can be understood that the current moving path is the shortest unobstructed path.
And controlling the robot to move to the movement termination position of the current movement path according to the current movement path, and determining the working area to which the movement termination position belongs as a new current working area so as to search each unsearched working area.
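The collect-match-advance loop described in the preceding paragraphs can be sketched as follows. The controller interface (`move_to`, `capture_image`, `match_reminding_object`) is hypothetical and stands in for the robot control device's actual operations.

```python
def find_reminding_object(robot, areas):
    """Traverse the unsearched working areas until the reminding object is found.

    `robot` is a hypothetical controller exposing move_to(area),
    capture_image() and match_reminding_object(image); `areas` is the
    ordered list of working areas still to be searched.
    """
    unsearched = list(areas)
    while unsearched:
        area = unsearched.pop(0)   # next current working area
        robot.move_to(area)
        image = robot.capture_image()
        if robot.match_reminding_object(image):
            return area            # reminding object found in this area
    return None                    # every area traversed without success
```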
The method for determining the current moving path according to the position of the at least one unsearched working area may include at least the following modes:
In a first mode, in order to improve search efficiency, the target working area closest to the current working area may be found, so as to determine the current moving path between the current working area and the target working area, as follows:
determining a current position of the robot as a movement starting position;
calculating the distance between the determined movement starting position and the position of each unsearched working area;
determining the position of the working area corresponding to the calculated minimum distance as a movement termination position;
the current movement path is determined based on the movement start position and the movement end position.
For example, there are three working areas: working area A, working area B and working area C, none of which has been searched, and the current position of the robot is point P in working area A.
As shown in fig. 2A, if the reminding object is not found in working area A, the distance D(P-B) between the current position and the position of working area B and the distance D(P-C) between the current position and the position of working area C are calculated and compared. The current position of the robot is determined as the movement starting position, and the position of working area B, which corresponds to the minimum distance D(P-B), is determined as the movement termination position, thereby determining the current movement path; the robot is then controlled to move into working area B according to the current movement path, so as to look for the reminding object in working area B. If the reminding object is not found in working area B, a current movement path into working area C is determined next, so as to look for the reminding object in working area C.
In a possible embodiment, if two working areas correspond to the minimum distance, the position of either working area may be selected at random as the movement termination position; alternatively, the position of the higher-priority working area of the two may be selected as the movement termination position according to a preset priority order of the working areas, or another selection manner may be adopted, which is not limited herein.
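The first mode above, nearest-area selection with a priority tie-break, can be sketched as follows. The function name `choose_termination_position` and the priority-rank representation (smaller rank means higher priority) are assumptions for illustration.

```python
import math

def choose_termination_position(start, unsearched_areas, priority=None):
    """Pick the unsearched working area nearest to the movement starting position.

    `unsearched_areas` maps area name -> (x, y) position. On a distance tie,
    the area with the smaller preset priority rank wins; without a priority
    table the tie is broken deterministically by area name.
    """
    def rank(item):
        name, pos = item
        distance = math.dist(start, pos)       # Euclidean distance (Python 3.8+)
        tie_break = priority.get(name, 0) if priority else name
        return (distance, tie_break)

    name, pos = min(unsearched_areas.items(), key=rank)
    return name, pos
```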
In a second mode, determining the current position of the robot as the movement starting position;
determining the movement termination position according to the position of the at least one unsearched working area and the preset working area search priority corresponding to each object;
the current movement path is determined based on the movement start position and the movement end position.
For example, if the reminding object is the mother, the working area search priority corresponding to the mother is looked up among the preset working area search priorities corresponding to the respective objects: working area A, working area C, working area B. That is, for the mother, the search priorities of working areas A, C and B decrease in that order.
As shown in fig. 2B, taking the case where the current working area is working area A and working areas B and C are both unsearched: if the mother is not found in working area A, the unsearched working area C, which has the highest search priority, is determined as the movement termination position according to the looked-up working area search priority, thereby determining the current movement path; the robot is then controlled to move into working area C according to the current movement path, so as to look for the mother in working area C. If the mother is not found in working area C, a current movement path into working area B is determined next, so as to look for the mother in working area B.
As shown in fig. 2C, taking the case where the current working area is working area C and working areas A and B are both unsearched: if the mother is not found in working area C, the unsearched working area A, which has the highest search priority, is determined as the movement termination position according to the looked-up working area search priority, thereby determining the current movement path; the robot is then controlled to move into working area A according to the current movement path, so as to look for the mother in working area A. If the mother is not found in working area A, a current movement path into working area B is determined next, so as to look for the mother in working area B.
It should be noted that the working area search priority corresponding to each object may be configured according to how frequently each object is active in each working area, or according to each object's activity in each working area in each time period (for example, the mother is in the kitchen from 7 to 8 a.m., and the father is in the study from 9 to 10 a.m.), which is not limited in this embodiment of the present invention.
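The second mode, per-object priority selection, amounts to taking the first unsearched area in the object's priority list. A minimal sketch, with the table layout (object name mapped to an ordered list of areas) assumed for illustration:

```python
def termination_by_priority(reminder_object, unsearched, priority_table):
    """Return the unsearched working area with the highest search priority
    for this reminding object (earlier in the list = higher priority).

    `priority_table` maps an object name to its ordered list of working areas.
    """
    for area in priority_table[reminder_object]:
        if area in unsearched:
            return area        # highest-priority area not yet searched
    return None                # no unsearched area remains
```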
In a third mode, determining the current position of the robot as the movement starting position;
determining the movement termination position according to the position of the at least one unsearched working area and a preset working area search sequence, where the preset working area search sequence is a unified order of the working areas that must be followed when searching for any object.
The current movement path is determined based on the movement start position and the movement end position.
It should be noted that the working area search sequence may be the order in which the robot traverses the working areas when idle, or may be preset by the user. The manner in which the robot traverses the working areas when idle may be the same as the traversal manner of an existing sweeping robot at work, and is not described again in the embodiments of the present invention.
For example, the preset work area search sequence is: the working area a, the working area C, and the working area B are sequentially circulated.
As shown in fig. 2D, taking the case where the current working area is working area C and working areas A and B are both unsearched: if the reminding object is not found in working area C, the unsearched working area B is determined as the movement termination position according to the preset working area search sequence, thereby determining the current movement path; the robot is then controlled to move into working area B according to the current movement path, so as to look for the reminding object in working area B. If the reminding object is not found in working area B, a current movement path into working area A is determined next, so as to look for the reminding object in working area A.
Taking the case where the current working area is working area B and working areas A and C are both unsearched: if the reminding object is not found in working area B, the unsearched working area A is determined as the movement termination position according to the preset working area search sequence, thereby determining the current movement path; the robot is then controlled to move into working area A according to the current movement path, so as to look for the reminding object in working area A. If the reminding object is not found in working area A, a current movement path into working area C is determined next, so as to look for the reminding object in working area C.
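The third mode walks the preset cyclic sequence, starting after the current area, until it reaches an unsearched area. A minimal sketch (the function name and its assumption that the current area appears in the sequence are illustrative):

```python
def next_area_in_sequence(current, unsearched, sequence):
    """Walk the preset cyclic search sequence starting after the current
    working area and return the first area not yet searched.

    Assumes `current` appears in `sequence`.
    """
    i = sequence.index(current)
    for step in range(1, len(sequence) + 1):
        candidate = sequence[(i + step) % len(sequence)]  # wrap around cyclically
        if candidate in unsearched:
            return candidate
    return None  # every area in the sequence has been searched
```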
Based on the above embodiment, in order to further improve the accuracy of object searching, after determining that the feature information of each object in the environmental image of the current working area does not match the feature information of the reminding object, and before determining at least one unsearched working area in sequence as the new current working area, the robot may be controlled to output an inquiry voice, where the inquiry voice is used to ask an object whether it is the reminding object;
for example, the sentence pattern of the inquiry voice may be: "[Someone] is being looked for; may I ask, are you [someone]?".
(1) If the characteristic information of each object in the environment image does not match the characteristic information of the reminding object and no reply voice is received within a preset time period, at least one unsearched working area is determined in sequence as the new current working area;
(2) If the characteristic information of each object in the environment image does not match the characteristic information of the reminding object, a negative reply voice such as "No, I am not" is received, and the voiceprint characteristics of the negative reply voice do not match the voiceprint characteristics of the reminding object, at least one unsearched working area is determined in sequence as the new current working area.
(3) If the characteristic information of each object in the environment image does not match the characteristic information of the reminding object, but an affirmative reply voice such as "Yes, it is me" is received, the object corresponding to the affirmative reply voice is determined to be the reminding object;
since the voiceprint characteristics of the same object may change under certain conditions, such as a cold, throat inflammation or wearing a mask, when an affirmative reply is received, the object corresponding to the affirmative reply voice is determined to be the reminding object regardless of whether the voiceprint characteristics match.
(4) If the characteristic information of each object in the environment image does not match the characteristic information of the reminding object, and a negative reply voice is received, but the voiceprint characteristics of the negative reply voice match the voiceprint characteristics of the reminding object, the object corresponding to the negative reply voice is determined to be the reminding object. This approach can prevent a user from deliberately giving a negative reply.
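The four cases above reduce to a small decision rule. A minimal sketch, assuming the reply has already been classified as affirmative, negative, or absent, and that the voiceprint match is a boolean:

```python
def resolve_query_reply(reply, voiceprint_matches):
    """Decide, per cases (1)-(4) above, whether the queried object is the
    reminding object.

    reply: "affirmative", "negative", or None (no reply within the preset
    time period). voiceprint_matches: whether the reply voice's voiceprint
    matches the stored voiceprint of the reminding object.
    Returns True (is the reminding object) or False (keep searching).
    """
    if reply is None:
        return False               # case (1): timeout, move to the next area
    if reply == "affirmative":
        return True                # case (3): trust the reply even if the
                                   # voiceprint changed (cold, mask, ...)
    # negative reply: a matching voiceprint overrides the denial (case 4);
    # otherwise move on to the next working area (case 2)
    return voiceprint_matches
```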
Based on any of the above embodiments, after the reminding object is determined to be found, the robot may further be controlled to output an inquiry voice asking the object whether it is the reminding object, thereby implementing a secondary confirmation of the reminding object and further improving the accuracy of object searching;
it should be noted that the operation screen of the robot may support a click operation by the user; in response to the inquiry voice, the object may click a reply button on the operation screen so that the robot receives the reply information input by the object. If the reply information is affirmative, the operating object is determined to be the reminding object; if the reply information is negative, an instruction is output directing the operating object to reply by voice, so that whether the operating object is the reminding object is determined according to the matching result between the voiceprint characteristics of the reply voice and those of the reminding object.
Based on any of the above embodiments, after determining that the reminding object is found, the robot may be controlled to output the reminding content of the target reminding event to the reminding object.
In implementation, after the reminding object is found, the robot may be controlled to play the reminding content by voice. If a stop instruction from the reminding object is received, for example an utterance such as "thanks for the reminder" indicating that the reminding object has received the reminding content, the robot is controlled to stop playing and the reminding service ends; at this point, the robot may automatically navigate back to a default position, or be controlled to return to the default position.
Meanwhile, the reminding content can be sent to the portable terminal equipment of the reminding object, such as a mobile phone.
If the reminding object is not found after traversing all the working areas, the reminding content may be delivered in a preset manner: for example, another object related to the reminding object, such as another family member of the reminding object, is found and asked to pass on the reminder, and/or the reminding content is sent to a portable terminal device of the reminding object, such as a mobile phone.
Based on any of the above embodiments, the state information of the robot for performing object searching may be recorded in real time, where the state information may include a state that the robot is searching for the reminding object, a state that the robot successfully searches for the reminding object, and a state that the robot fails to search for the reminding object.
According to the object searching method provided by the embodiment of the invention, when the reminding service trigger condition of the target reminding event is met, the robot is controlled to collect an environment image of the current working area; if the characteristic information of a target object in the environment image matches the characteristic information of the reminding object in the target reminding event, it is determined that the reminding object is found; if the characteristic information of each object in the environment image does not match the characteristic information of the reminding object, at least one unsearched working area is determined in sequence as the new current working area, and the step of controlling the robot to collect an environment image of the current working area is executed again until the reminding object is determined to be found. Compared with the prior art, the method can actively find the target and remind the user without relying on a portable terminal device and without monitoring the entire working area, thereby providing the reminding service for the user and improving both the intelligence of the robot and the user's experience with it.
Corresponding to the above method, the embodiment of the present invention further provides a robot control device, as shown in fig. 3, where the robot control device includes: a control unit 310 and a determination unit 320;
the control unit 310 is configured to control the robot to collect an environmental image of the current working area when a trigger condition of a reminder service of the target reminder event is satisfied;
the determining unit 320 is configured to determine that the reminding object is found if the feature information of the target object in the environmental image matches the feature information of the reminding object in the target reminding event;
and if the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, determining at least one unsearched working area as a new current working area in sequence, and triggering the control unit to collect the environment image of the current working area until the reminding object is determined to be found.
In one possible implementation, the control unit 310 is specifically configured to control the robot to collect an environmental image of the current working area when the current time is a preset time before the reminding time is reached.
In a possible implementation, the determining unit 320 is specifically configured to determine the current movement path according to the location of the at least one working area that is not searched;
The control unit 310 is further configured to control the robot to move to a movement termination position of the current movement path according to the current movement path;
the determining unit 320 is further specifically configured to determine a working area to which the movement termination position belongs as a new current working area.
In a possible implementation, the determining unit 320 is specifically configured to determine the current position of the robot as a movement start position;
calculating the distance between the determined movement starting position and the position of each unsearched working area;
determining the position of the working area corresponding to the calculated minimum distance as a movement termination position;
and determining a current moving path based on the moving starting position and the moving ending position.
In a possible implementation, the determining unit 320 is specifically configured to determine the current position of the robot as a movement start position;
and determining a movement termination position according to the position of the at least one unsearched working area and the preset working area search priority corresponding to each object;
and determining a current moving path based on the moving starting position and the moving ending position.
In a possible implementation, the determining unit 320 is specifically configured to determine the current position of the robot as a movement start position;
determining a movement termination position according to the position of the at least one unsearched working area and a preset working area search sequence;
and determining a current moving path based on the moving starting position and the moving ending position.
In a possible implementation, the control unit 310 is further configured to control the robot to output an inquiry voice for the reminding object;
the determining unit 320 is specifically configured to determine, in sequence, at least one working area that is not searched as a new current working area if the feature information of each object in the environmental image is not matched with the feature information of the reminding object and no reply voice is received within a preset time period;
if the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, negative reply voice is received, and the voiceprint characteristics of the negative reply voice are not matched with the voiceprint characteristics of the reminding object, determining at least one unsearched working area as a new current working area in sequence.
In one possible implementation, the determining unit 320 is further configured to determine that the object corresponding to the affirmative reply voice is the reminding object if the feature information of each object in the environmental image does not match the feature information of the reminding object, but the affirmative reply voice is received;
If the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, and negative reply voice is received, but the voiceprint characteristics of the negative reply voice are matched with the voiceprint characteristics of the reminding object, determining that the object corresponding to the negative reply voice is the reminding object.
In a possible implementation, the control unit 310 is further configured to control the robot to output the reminder content of the target reminder event to the reminder object.
In one possible implementation, the apparatus further includes: a detection unit 330;
the detecting unit 330 is configured to detect whether an object identifier of a reminder object in the target reminder event exists in the stored list of robot service objects; the robot service object list comprises object identifiers of registered objects;
the control unit 310 is specifically configured to control the robot to collect an environmental image of the current working area when the object identifier of the reminding object exists in the service object list of the robot.
The functions of each functional unit of the robot control device provided by the embodiment of the present invention may be implemented through the steps of the method, so that the specific working process and beneficial effects of each unit in the robot control device provided by the embodiment of the present invention are not repeated herein.
The embodiment of the present invention further provides an object searching device, which may be, but is not limited to, a control device inside or outside the robot.
As shown in fig. 4, the object searching device includes a processor 410, a communication interface 420, a memory 430 and a communication bus 440, where the processor 410, the communication interface 420 and the memory 430 communicate with each other through the communication bus 440.
A memory 430 for storing a computer program;
the processor 410 is configured to execute the program stored in the memory 430, and implement the following steps:
when the reminding service triggering condition of the target reminding event is met, controlling the robot to acquire an environment image of the current working area;
if the characteristic information of the target object in the environment image is matched with the characteristic information of the reminding object in the target reminding event, determining to find the reminding object;
if the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, determining at least one unsearched working area as a new current working area in sequence, and returning to the step of executing the control robot to collect the environment image of the current working area until the reminding object is determined to be found.
In one possible implementation, when a reminder service trigger condition of a target reminder event is met, controlling the robot to collect an environmental image of a current working area includes:
when the current time is a preset time before the reminding time, controlling the robot to acquire an environment image of the current working area.
In one possible implementation, determining the at least one work area not found as a new current work area in turn includes:
determining a current moving path according to the position of the unsearched at least one working area;
and controlling the robot to move to a movement termination position of the current movement path according to the current movement path, and determining a working area to which the movement termination position belongs as a new current working area.
In one possible implementation, determining the current movement path according to the position of the unsearched at least one work area includes:
determining the current position of the robot as a movement starting position;
calculating the distance between the determined movement starting position and the position of each unsearched working area;
determining the position of the working area corresponding to the calculated minimum distance as a movement termination position;
And determining a current moving path based on the moving starting position and the moving ending position.
In one possible implementation, determining the current movement path according to the position of the unsearched at least one work area includes:
determining the current position of the robot as a movement starting position;
determining a movement termination position according to the position of the unsearched at least one working area and the preset working area searching priority corresponding to each object;
and determining a current moving path based on the moving starting position and the moving ending position.
In one possible implementation, determining the current movement path according to the position of the unsearched at least one work area includes:
determining the current position of the robot as a movement starting position;
determining a movement termination position according to the position of the unsearched at least one working area and a preset working area searching sequence;
and determining a current moving path based on the moving starting position and the moving ending position.
In one possible implementation, before determining the at least one work area not found as the new current work area in turn, the method further includes:
controlling the robot to output an inquiry voice, where the inquiry voice is used to ask an object whether it is the reminding object;
if the feature information of each object in the environment image is not matched with the feature information of the reminding object, determining at least one unsearched working area as a new current working area in sequence, wherein the method comprises the following steps:
if the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object and the reply voice is not received within a preset time period, sequentially determining at least one unsearched working area as a new current working area;
if the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, negative reply voice is received, and the voiceprint characteristics of the negative reply voice are not matched with the voiceprint characteristics of the reminding object, determining at least one unsearched working area as a new current working area in sequence.
In one possible implementation, the method further comprises:
if the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, but positive reply voice is received, determining that the object corresponding to the positive reply voice is the reminding object;
If the characteristic information of each object in the environment image is not matched with the characteristic information of the reminding object, and negative reply voice is received, but the voiceprint characteristics of the negative reply voice are matched with the voiceprint characteristics of the reminding object, determining that the object corresponding to the negative reply voice is the reminding object.
In one possible implementation, after determining to find the reminder object, the method further includes:
and controlling the robot to output the reminding content of the target reminding event to the reminding object.
In one possible implementation, before controlling the robot to acquire the environmental image of the current working area, the method further includes:
detecting whether an object identifier of a reminding object in the target reminding event exists in a stored robot service object list; the robot service object list comprises object identifiers of registered objects;
controlling the robot to collect an environmental image of the current working area, comprising:
and when the object identification of the reminding object exists in the robot service object list, controlling the robot to acquire an environment image of the current working area.
The communication bus mentioned above may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the object searching device and other devices.
The Memory may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processing, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
Since the implementation manner and the beneficial effects of the solution of the problem by each device of the object searching apparatus in the foregoing embodiment may be implemented by referring to each step in the embodiment shown in fig. 1, the specific working process and the beneficial effects of the object searching apparatus provided by the embodiment of the present invention are not repeated herein.
In a further embodiment of the present invention, there is also provided a computer readable storage medium having a computer program stored therein, which when executed by a processor, implements the object searching method steps of any of the above embodiments.
In a further embodiment of the present invention, a computer program product comprising instructions is provided, which, when executed by an object-finding device, causes the object-finding device to perform the object-finding method steps of any of the above embodiments.
It will be apparent to those skilled in the art that embodiments in the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted to embrace the preferred embodiments and all such variations and modifications as fall within the scope of the embodiments herein.
It will be apparent to those skilled in the art that various modifications and variations can be made in the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. Thus, if such modifications and variations of the embodiments in the present application fall within the scope of the claims and the equivalents thereof in the embodiments of the present application, such modifications and variations are also intended to be included in the embodiments of the present application.

Claims (8)

1. An object finding method, the method comprising:
when the reminding service triggering condition of the target reminding event is met, controlling the robot to acquire an environment image of the current working area;
if the feature information of a target object in the environment image matches the feature information of the reminding object in the target reminding event, determining that the reminding object is found;
if the feature information of each object in the environment image does not match the feature information of the reminding object, sequentially determining at least one unsearched working area as a new current working area, and returning to the step of controlling the robot to collect the environment image of the current working area until the reminding object is determined to be found; and
controlling the robot to output the reminding content of the target reminding event to the reminding object;
wherein sequentially determining at least one unsearched working area as a new current working area comprises:
determining a current movement path according to the position of the at least one unsearched working area; and controlling the robot to move, according to the current movement path, to a movement termination position of the current movement path, and determining the working area to which the movement termination position belongs as the new current working area;
wherein determining the current movement path according to the position of the at least one unsearched working area comprises:
determining the current position of the robot as a movement starting position; calculating the distance between the determined movement starting position and the position of each unsearched working area; determining the position of the working area corresponding to the calculated minimum distance as the movement termination position; and determining the current movement path based on the movement starting position and the movement termination position; or
determining the current position of the robot as a movement starting position; determining the movement termination position according to the position of the at least one unsearched working area and a preset working area search priority corresponding to each object; and determining the current movement path based on the movement starting position and the movement termination position; or
determining the current position of the robot as a movement starting position; determining the movement termination position according to the position of the at least one unsearched working area and a preset working area search sequence; and determining the current movement path based on the movement starting position and the movement termination position.
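The first branch of claim 1 picks, among the unsearched working areas, the one whose position is closest to the robot's current position and uses it as the movement termination position. A minimal sketch of that nearest-area selection; the function name, area names, and the dict representation of areas are illustrative assumptions, not from the patent:

```python
import math

def plan_current_movement_path(robot_pos, unsearched_areas):
    """Nearest-area branch of claim 1: take the robot's current position
    as the movement starting position, compute its distance to each
    unsearched working area, and use the closest area's position as the
    movement termination position.

    robot_pos: (x, y) current position of the robot.
    unsearched_areas: dict mapping area name -> (x, y) area position.
    Returns (area_name, (start, end)) describing the current movement path.
    """
    start = robot_pos
    # Distance from the movement starting position to each unsearched area;
    # the area with the minimum distance supplies the termination position.
    nearest = min(unsearched_areas,
                  key=lambda name: math.dist(start, unsearched_areas[name]))
    end = unsearched_areas[nearest]
    return nearest, (start, end)
```

The other two branches (a per-object search priority, or a fixed search sequence) would simply replace the `min` over distances with a lookup into the preset ordering.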
2. The method of claim 1, wherein controlling the robot to acquire an environmental image of the current work area when a reminder service trigger condition of the target reminder event is met comprises:
when the current time is a preset time before the reminding time, controlling the robot to acquire an environment image of the current working area.
3. The method of claim 1, wherein, before sequentially determining at least one unsearched working area as a new current working area, the method further comprises:
controlling the robot to output an inquiry voice, wherein the inquiry voice is used for inquiring whether an object being queried is the reminding object;
wherein if the feature information of each object in the environment image does not match the feature information of the reminding object, sequentially determining at least one unsearched working area as a new current working area comprises:
if the feature information of each object in the environment image does not match the feature information of the reminding object and no reply voice is received within a preset time period, sequentially determining at least one unsearched working area as a new current working area; or
if the feature information of each object in the environment image does not match the feature information of the reminding object, a negative reply voice is received, and the voiceprint features of the negative reply voice do not match the voiceprint features of the reminding object, sequentially determining at least one unsearched working area as a new current working area.
4. The method of claim 3, wherein the method further comprises:
if the feature information of each object in the environment image does not match the feature information of the reminding object but a positive reply voice is received, determining that the object corresponding to the positive reply voice is the reminding object; and
if the feature information of each object in the environment image does not match the feature information of the reminding object and a negative reply voice is received, but the voiceprint features of the negative reply voice match the voiceprint features of the reminding object, determining that the object corresponding to the negative reply voice is the reminding object.
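The reply-voice branches of claims 3 and 4 amount to a small decision table over the inquiry's outcome. A sketch of that logic, assuming image matching has already failed in the current area; the function name, return labels, and parameter encoding are illustrative assumptions:

```python
def resolve_inquiry(reply, voiceprint_matches):
    """Decision after the robot outputs the inquiry voice in an area
    where no object's image features matched the reminding object
    (claims 3-4).

    reply: "positive", "negative", or None when no reply voice is
        received within the preset time period.
    voiceprint_matches: True if the reply voice's voiceprint features
        match the reminding object's voiceprint features.
    Returns "found" or "search_next_area".
    """
    if reply == "positive":
        return "found"       # claim 4: a positive reply identifies the object
    if reply == "negative" and voiceprint_matches:
        return "found"       # claim 4: a denial whose voiceprint still matches
    # claim 3: no reply, or a denial whose voiceprint does not match
    return "search_next_area"
```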
5. The method of any of claims 1-4, wherein prior to controlling the robot to acquire an environmental image of the current work area, the method further comprises:
detecting whether an object identifier of the reminding object in the target reminding event exists in a stored robot service object list, wherein the robot service object list comprises object identifiers of registered objects;
wherein controlling the robot to collect the environment image of the current working area comprises:
when the object identifier of the reminding object exists in the robot service object list, controlling the robot to collect the environment image of the current working area.
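Claim 5's pre-check gates the whole search on the reminding object being a registered service object. A minimal sketch, with the function name and identifier values as illustrative assumptions:

```python
def may_start_search(reminder_object_id, service_object_list):
    """Claim 5: only control the robot to collect environment images
    when the reminding object's identifier appears in the stored robot
    service object list of registered objects."""
    return reminder_object_id in service_object_list
```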
6. An object finding apparatus, the apparatus comprising: a control unit and a determination unit;
the control unit is used for controlling the robot to acquire an environment image of the current working area when the reminding service triggering condition of the target reminding event is met;
the determining unit is configured to determine that the reminding object is found if the feature information of a target object in the environment image matches the feature information of the reminding object in the target reminding event; and,
if the feature information of each object in the environment image does not match the feature information of the reminding object, sequentially determine at least one unsearched working area as a new current working area and trigger the control unit to collect an environment image of the current working area, until the reminding object is determined to be found;
The control unit is further used for controlling the robot to output reminding contents of the target reminding event to the reminding object;
the determining unit is specifically configured to determine a current moving path according to a position of the at least one unsearched working area;
the control unit is further used for controlling the robot to move to a movement termination position of the current movement path according to the current movement path;
the determining unit is further specifically configured to determine a working area to which the movement termination position belongs as a new current working area;
the determining unit is specifically configured to determine a current position of the robot as a movement starting position; calculating the distance between the determined movement starting position and the position of each unseeded working area; determining the position of the working area corresponding to the calculated minimum distance as a movement termination position; determining a current movement path based on the movement start position and the movement end position; or (b)
Determining the current position of the robot as a movement starting position; determining a movement termination position according to the position of the unsearched at least one working area and the preset working area searching priority corresponding to each object; determining a current movement path based on the movement start position and the movement end position; or (b)
Determining the current position of the robot as a movement starting position; determining a movement termination position according to the position of the unsearched at least one working area and a preset working area searching sequence; and determining a current moving path based on the moving starting position and the moving ending position.
7. An object finding apparatus, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 5.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the method steps of any of claims 1 to 5.
CN202010788797.1A 2020-08-07 2020-08-07 Object searching method and device Active CN111950431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010788797.1A CN111950431B (en) 2020-08-07 2020-08-07 Object searching method and device


Publications (2)

Publication Number Publication Date
CN111950431A CN111950431A (en) 2020-11-17
CN111950431B (en) 2024-03-26

Family

ID=73332068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010788797.1A Active CN111950431B (en) 2020-08-07 2020-08-07 Object searching method and device

Country Status (1)

Country Link
CN (1) CN111950431B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105301997A (en) * 2015-10-22 2016-02-03 深圳创想未来机器人有限公司 Intelligent prompting method and system based on mobile robot
CN106956266A (en) * 2017-05-16 2017-07-18 北京京东尚科信息技术有限公司 robot control method, device and robot
CN107168337A (en) * 2017-07-04 2017-09-15 武汉视览科技有限公司 A kind of mobile robot path planning and dispatching method of view-based access control model identification
CN107958212A (en) * 2017-11-20 2018-04-24 珠海市魅族科技有限公司 A kind of information cuing method, device, computer installation and computer-readable recording medium
CN108924365A (en) * 2018-07-26 2018-11-30 深圳云天励飞技术有限公司 Stroke reminding method, apparatus, equipment and computer readable storage medium
CN109676611A (en) * 2019-01-25 2019-04-26 北京猎户星空科技有限公司 Multirobot cooperating service method, device, control equipment and system
CN109953700A (en) * 2017-12-26 2019-07-02 杭州萤石软件有限公司 A kind of cleaning method and clean robot
CN110135644A (en) * 2019-05-17 2019-08-16 北京洛必德科技有限公司 A kind of robot path planning method for target search
CN110710852A (en) * 2019-10-30 2020-01-21 广州铁路职业技术学院(广州铁路机械学校) Meal delivery method, system, medium and intelligent device based on meal delivery robot
CN111216127A (en) * 2019-12-31 2020-06-02 深圳优地科技有限公司 Robot control method, device, server and medium
WO2020133080A1 (en) * 2018-12-27 2020-07-02 深圳市优必选科技有限公司 Object positioning method and apparatus, computer device, and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Cognitive computing and wireless communications on the edge for healthcare service robots; Shaohua Wan et al.; Computer Communications; vol. 149; pp. 99-106 *
Lio - a personal robot assistant for human-robot interaction and care applications; Justinas Mišeikis et al.; IEEE Robotics and Automation Letters; vol. 5, no. 4; pp. 5339-5346 *
Research on SLAM of Mobile Robots Based on a Depth Camera; Chen Wen; China Master's Theses Full-text Database, Information Science and Technology Series; I140-512 *


Similar Documents

Publication Publication Date Title
CN110310641B (en) Method and device for voice assistant
WO2018213740A1 (en) Action recipes for a crowdsourced digital assistant system
WO2019148491A1 (en) Human-computer interaction method and device, robot, and computer readable storage medium
KR20200036678A (en) Cleaning robot and Method of performing task thereof
WO2018120033A1 (en) Method and device for assisting user in finding object
CN109287511B (en) Method and device for training pet control equipment and wearable equipment for pet
WO2020248480A1 (en) Building positioning method and electronic device
CN110619027B (en) House source information recommendation method and device, terminal equipment and medium
CN110730218A (en) Intelligent garbage putting method and system and storage medium
CN111505206B (en) Gas concentration warning method, device and system
US11340925B2 (en) Action recipes for a crowdsourced digital assistant system
CN110765371A (en) Position selection method and system of movable garbage can and storage medium
CN107421557B (en) Navigation destination determining method, intelligent terminal and device with storage function
CN111615048A (en) Positioning method, positioning device, electronic equipment and storage medium
CN111950431B (en) Object searching method and device
CN110647045A (en) Intelligent household control method and device and computer readable storage medium
WO2024007807A1 (en) Error correction method and apparatus, and mobile device
CN112656309A (en) Function execution method and device of sweeper, readable storage medium and electronic equipment
CN110611880B (en) Household WiFi prediction method and device, electronic equipment and storage medium
WO2018000208A1 (en) Method and system for searching for and positioning skill packet, and robot
CN108174369B (en) Microphone starting method and device of early education equipment, early education equipment and storage medium
WO2021012488A1 (en) Positioning method based on wearable device, and wearable device
CN106856554A (en) A kind of camera control method and terminal
WO2016110156A1 (en) Voice search method and apparatus, terminal and computer storage medium
JP2018163293A (en) Information terminal, information terminal control method, and control program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant