
Target object avoiding method and device, storage medium and electronic device

Info

Publication number: CN112991527A
Authority: CN (China)
Prior art keywords: robot, three-dimensional point-surface model, target object, moving part
Legal status: Granted (currently active)
Application number: CN202110180718.3A
Other languages: Chinese (zh)
Other versions: CN112991527B (en)
Inventors: 霍江浩, 李丰果, 常新伟
Current assignees: Beijing Xiaomi Mobile Software Co Ltd; Dreame Innovation Technology Suzhou Co Ltd
Original assignee: Zhuichuang Technology Suzhou Co Ltd
Application filed by Zhuichuang Technology Suzhou Co Ltd
Priority to CN202210261795.6A (CN114742960A)
Priority to CN202110180718.3A (CN112991527B)
Publication of CN112991527A
Priority to PCT/CN2021/137063 (WO2022166397A1)
Application granted
Publication of CN112991527B

Classifications

    • G06T 17/05: Image data processing or generation; three-dimensional [3D] modelling; geographic models
    • B25J 9/1666: Programme-controlled manipulators; programme controls; motion, path, trajectory planning; avoiding collision or forbidden zones
    • B25J 9/1676: Programme-controlled manipulators; programme controls; safety, monitoring, diagnostic; avoiding collision or forbidden zones
    • G06T 7/70: Image analysis; determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Mechanical Engineering (AREA)
  • Geometry (AREA)
  • Robotics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a target object avoiding method and device, a storage medium and an electronic device, wherein the method comprises the following steps: determining a target object needing to be avoided by the robot in a traveling area; acquiring a first three-dimensional point-surface model of the target object, and acquiring a second three-dimensional point-surface model corresponding to each moving part of the robot, wherein the robot is provided with a plurality of moving parts; respectively determining whether the first three-dimensional point-surface model intersects any one of the second three-dimensional point-surface models; and, in the case that an intersection exists, adjusting the target moving part corresponding to the second three-dimensional point-surface model that intersects the first three-dimensional point-surface model, so that the robot successfully avoids the target object. This technical scheme solves the problem in the prior art that the parts of a robot cannot be adjusted to allow the robot to successfully avoid obstacles.

Description

Target object avoiding method and device, storage medium and electronic device
[ technical field ]
The present invention relates to the field of communications, and in particular, to a method and an apparatus for avoiding a target object, a storage medium, and an electronic apparatus.
[ background of the invention ]
With the rapid development of robot technology, robots are now applied in many industries and fields of use. Obstacle avoidance is an important part of robot technology and is essential for applying robots in various environments. In the prior art, the robot is treated as a single large block: as long as this block does not touch an obstacle, the robot can pass safely and complete its work. However, in some complex environments and special fields, the terrain is complicated, obstacles are numerous, and the passages through which the robot must pass are narrow. In such application scenarios, the robot can no longer be treated as a single large block; it can pass safely through a complex, narrow passage only by flexibly adjusting its components. If the robot is still understood as an integral block, it is difficult for the robot to pass through an area with complex terrain, and the prior art cannot adjust the robot's components so that the robot successfully avoids obstacles.
Accordingly, there is a need for improvements in the art that overcome the deficiencies in the prior art.
[ summary of the invention ]
The invention aims to provide a target object avoiding method and device, a storage medium and an electronic device, which at least solve the problem in the prior art that the parts of a robot cannot be adjusted to allow the robot to successfully avoid obstacles.
The purpose of the invention is realized by the following technical scheme:
according to an alternative embodiment of the present invention, there is provided a target object evasion method, the method comprising: determining a target object needing to be avoided by the robot in a traveling area; acquiring a first three-dimensional point-surface model of the target object, and acquiring a second three-dimensional point-surface model corresponding to each moving part of the robot, wherein the robot is provided with a plurality of moving parts; respectively determining whether the first three-dimensional point-surface model is intersected with any one of the second three-dimensional point-surface models; and under the condition that the intersection exists, adjusting a target moving component corresponding to a second three-dimensional point-surface model which intersects with the first three-dimensional point-surface model so as to enable the robot to successfully avoid the target object.
Optionally, obtaining a second three-dimensional point-plane model corresponding to each moving part of the robot includes: determining position information of each moving part, a part structure of each moving part, and a control instruction sequence of the robot in a preset time period after the current time, wherein the control instruction sequence is used for indicating operations to be executed by the robot at different times in the preset time period; and determining a second three-dimensional point-surface model corresponding to each moving component according to the position information, the component structure and the control instruction sequence so as to obtain a plurality of second three-dimensional point-surface models corresponding to the plurality of moving components.
Optionally, determining the position information of each moving part includes: acquiring positioning and map building information detected by a perception intelligent algorithm of the robot, acquiring a rotation angle of each moving part detected by an encoder respectively arranged on each moving part, and acquiring data detected by an inertial measurement unit of the robot; and determining the position information of each mobile component according to the positioning and mapping information, the rotation angle and the data.
Optionally, determining the position information of each mobile component according to the positioning and mapping information, the rotation angle and the data includes: calculating the positioning and map construction information, the rotation angle and the data through a Kalman filter to obtain a calculation result; and determining the position information of each moving part according to the calculation result.
Optionally, respectively determining whether the first three-dimensional point-surface model intersects with any one of the second three-dimensional point-surface models includes: for any second three-dimensional point-surface model, determining whether the first three-dimensional point-surface model and that second three-dimensional point-surface model have an intersection at different moments according to a convex hull intersection algorithm.
Optionally, adjusting a target moving component corresponding to a second three-dimensional point-surface model intersecting with the first three-dimensional point-surface model to enable the robot to successfully avoid the target object includes: adjusting the target moving part to cause the robot to successfully avoid the target object by one of: raising the target moving part to a target position, prohibiting the target moving part from moving, and adjusting the moving direction of the target moving part.
Optionally, after determining whether there is an intersection between the first three-dimensional point-plane model and any one of the second three-dimensional point-plane models, the method further includes: and in the case of no intersection, instructing the robot to sequentially execute the control instruction sequence in the preset time period.
According to yet another alternative embodiment of the present invention, there is provided an evasive device for a target object, the device including: the first determination module is used for determining a target object needing to be avoided in a traveling area of the robot; the acquisition module is used for acquiring a first three-dimensional point-surface model of the target object and acquiring a second three-dimensional point-surface model corresponding to each moving part of the robot, wherein the robot is provided with a plurality of moving parts; the second determination module is used for respectively determining whether the first three-dimensional point-surface model is intersected with any one of the second three-dimensional point-surface models; and the adjusting module is used for adjusting a target moving part corresponding to a second three-dimensional point-surface model intersected with the first three-dimensional point-surface model under the condition that the first three-dimensional point-surface model is intersected with the second three-dimensional point-surface model so as to enable the robot to successfully avoid the target object.
According to yet another alternative embodiment of the present invention, a computer-readable storage medium is provided, in which a computer program is stored, wherein the computer program is arranged, when run, to perform the target object avoiding method described in any of the above.
According to yet another alternative embodiment of the present invention, there is provided an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the method of target object avoidance described in any of the above.
Compared with the prior art, the invention has the following beneficial effects: in the case that a target object needing to be avoided in the traveling area of the robot is determined, a first three-dimensional point-surface model of the target object and a second three-dimensional point-surface model corresponding to each moving part of the robot are acquired; it is then determined, for each second three-dimensional point-surface model, whether it intersects the first three-dimensional point-surface model, and if an intersection exists, the target moving part corresponding to the intersecting second three-dimensional point-surface model is adjusted so that the robot successfully avoids the target object. Therefore, the problem in the prior art that the parts of the robot cannot be adjusted to allow the robot to successfully avoid obstacles is solved, the robot can be flexibly controlled to avoid obstacles, and the robot can be applied to scenes with complex terrain and many obstacles.
[ description of the drawings ]
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a block diagram of a hardware structure of a robot for a target object avoiding method according to an embodiment of the present invention;
fig. 2 is a flowchart of a target object avoiding method according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a target object avoiding method according to an embodiment of the present invention;
fig. 4 is a block diagram illustrating a target object avoiding device according to an embodiment of the present invention.
[ detailed description of the embodiments ]
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided by the embodiments of the present invention may be executed in a robot or a similar computing device. Taking a robot as an example, fig. 1 is a hardware block diagram of a robot for a target object avoiding method according to an embodiment of the present invention. As shown in fig. 1, the robot may include one or more processors 102 (only one is shown in fig. 1), where the processors 102 may include, but are not limited to, a microprocessor (MPU) or a programmable logic device (PLD), and a memory 104 for storing data. Optionally, the robot may further include a transmission device 106 for communication functions and an input/output device 108. Those skilled in the art will understand that the structure shown in fig. 1 is merely illustrative and does not limit the structure of the robot.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as a computer program corresponding to the target object avoiding method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the above-mentioned method. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located from the processor 102, which may be connected to the robot over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the robot. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In this embodiment, a target object avoiding method running on the above robot is provided. Fig. 2 is a flowchart of a target object avoiding method according to an embodiment of the present invention; as shown in fig. 2, the flow includes the following steps:
step S202, determining a target object needing to be avoided by the robot in a traveling area;
step S204, a first three-dimensional point-surface model of the target object is obtained, and a second three-dimensional point-surface model corresponding to each moving part of the robot is obtained, wherein the robot is provided with a plurality of moving parts;
step S206, respectively determining whether the first three-dimensional point-surface model is intersected with any one of the second three-dimensional point-surface models;
and step S208, under the condition that the intersection exists, adjusting a target moving part corresponding to a second three-dimensional point-surface model which intersects with the first three-dimensional point-surface model so as to enable the robot to successfully avoid the target object.
Through the above technical scheme, in the case that a target object needing to be avoided in the traveling area of the robot is determined, the first three-dimensional point-surface model of the target object and the second three-dimensional point-surface model corresponding to each moving part of the robot are acquired; it is then determined whether the first three-dimensional point-surface model intersects any one of the second three-dimensional point-surface models, and if an intersection exists, the target moving part corresponding to the intersecting second three-dimensional point-surface model is adjusted so that the robot successfully avoids the target object. Therefore, the problem in the prior art that the parts of the robot cannot be adjusted to allow the robot to successfully avoid obstacles is solved, and the robot can be applied to scenes with complex terrain and many obstacles.
The robot according to the embodiment of the present invention may be a biped robot, a three-legged robot, or a quadruped robot, but the embodiment of the present invention is not limited thereto.
The implementation method of step S204 is various, and in an optional embodiment, the second three-dimensional point-plane model corresponding to each moving component of the robot is obtained, which may be implemented by the following scheme: determining position information of each moving part, a part structure of each moving part, and a control instruction sequence of the robot in a preset time period after the current time, wherein the control instruction sequence is used for indicating operations to be executed by the robot at different times in the preset time period; and determining a second three-dimensional point-surface model corresponding to each moving component according to the position information, the component structure and the control instruction sequence so as to obtain a plurality of second three-dimensional point-surface models corresponding to the plurality of moving components.
It is understood that, taking a quadruped robot as an example, a second three-dimensional point-surface model needs to be determined for each moving part of the quadruped robot; the determination of the three-dimensional point-surface models for the four moving parts may be performed synchronously or sequentially, and the moving parts may be understood as the structural members of the robot's upper legs and feet.
It should be noted that, in the technical solution of the embodiment of the present invention, for a robot with environment perception capability, collision detection is performed for areas or obstacles that must not be touched by any part of the robot during travel, and the moving parts of the robot are adjusted in time according to the detection result, thereby ensuring the walking safety of the quadruped robot in a complex ground environment. According to the embodiment of the invention, the robot can accurately avoid the obstacle through the first three-dimensional point-surface model of the target object (an obstacle or an area that the robot must not touch) and the second three-dimensional point-surface model corresponding to each moving part of the robot. To this end, in the embodiment of the present invention, the first three-dimensional point-surface model of the target object and the second three-dimensional point-surface model corresponding to each moving part of the robot need to be obtained, where each moving part of the robot corresponds to one second three-dimensional point-surface model, so that a plurality of second three-dimensional point-surface models are obtained.
In order to determine the second three-dimensional point-surface model corresponding to each moving part of the robot, the position information of each moving part, the part structure of each moving part, and the control instruction sequence of the robot within a preset time period after the current time are acquired. It should be noted that the control instruction sequence is used to indicate the operations the robot is to perform at different times within the preset time period; a control instruction sequence for the preset time period after the current time may be taken from the robot control algorithm, and the sequence usually contains 5 to 1000 instructions. The second three-dimensional point-surface model corresponding to each moving part can be determined from the position information, the part structure and the control instruction sequence of each moving part of the robot, so that a plurality of second three-dimensional point-surface models corresponding to the plurality of moving parts are obtained. Specifically, the second three-dimensional point-surface model corresponding to a moving part at each moment of the control instruction sequence is back-calculated from the position information and the part structure of that moving part.
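As a concrete illustration of this back-calculation, the following is a minimal sketch that predicts each moving part's point-surface model at every instant of the control instruction sequence. It assumes each part is described by a vertex set in its own frame and that a forward-kinematics routine from the robot control algorithm is available; all names (predict_part_models, forward_kinematics, and so on) are illustrative and not taken from the patent.

```python
import numpy as np

def predict_part_models(part_poses, part_meshes, command_sequence, forward_kinematics):
    """Back-calculate, for every moving part, its point-surface model (a set of 3D
    vertex coordinates in the world frame) at each instant of the control
    instruction sequence.

    part_poses         -- dict: part name -> current 4x4 pose matrix (world frame)
    part_meshes        -- dict: part name -> (N, 3) vertices in the part's own frame
    command_sequence   -- list of control instructions (typically 5 to 1000)
    forward_kinematics -- callable(poses, command) -> dict of predicted 4x4 poses
    """
    models_over_time = []                 # one snapshot per instruction instant
    poses = dict(part_poses)
    for command in command_sequence:
        poses = forward_kinematics(poses, command)      # poses after this instruction
        snapshot = {}
        for name, vertices in part_meshes.items():
            T = poses[name]
            homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous coords
            snapshot[name] = (homo @ T.T)[:, :3]        # vertices in the world frame
        models_over_time.append(snapshot)
    return models_over_time
```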
In an alternative embodiment, determining the position information of each of the moving parts comprises: acquiring positioning and map building information detected by a perception intelligent algorithm of the robot, acquiring a rotation angle of each moving part detected by an encoder respectively arranged on each moving part, and acquiring data detected by an inertial measurement unit of the robot; and determining the position information of each mobile component according to the positioning and mapping information, the rotation angle and the data.
In order to determine the second three-dimensional point-surface model corresponding to each moving part of the robot, the position information of each moving part needs to be acquired. To acquire this position information, it is necessary to obtain the positioning and map construction information detected by the robot, the rotation angle of each moving part of the robot, and the robot inertia data (equivalent to the data detected by the inertial measurement unit in the above embodiment). It should be noted that the positioning and mapping information may be obtained by the perception intelligent algorithm of the robot, the rotation angle of each moving part may be obtained by the encoder arranged on that moving part, and the inertia data may be obtained by the inertial measurement unit of the robot. The robot inertia data describes the current action or motion of the robot. The position information of each moving part of the robot can then be determined from the positioning and mapping information, the rotation angles and the inertia data.
In an alternative embodiment, determining the position information of each mobile component according to the positioning and mapping information, the rotation angle and the data can be implemented by: calculating the positioning and map construction information, the rotation angle and the data through a Kalman filter to obtain a calculation result; and determining the position information of each moving part according to the calculation result.
It should be noted that, in the embodiment of the present invention, the positioning and mapping information, the rotation angle of each moving part of the robot, and the inertia data of the robot may be processed by a Kalman filter to obtain the position information of each moving part of the robot. The Kalman filter filters out interference in the positioning and mapping information, the rotation angles and the inertia data, and at the same time performs optimal estimation on them through the linear system state equation of the Kalman filter, thereby obtaining the position information of each moving part of the robot.
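To illustrate this fusion step, here is a minimal linear Kalman filter sketch: the state is the 3D position of one moving part, the prediction step applies the displacement derived from the encoder angles and IMU data, and the update step corrects with the position implied by the positioning-and-mapping result. The identity observation model and the noise values are assumptions chosen for the example, not parameters given in the patent.

```python
import numpy as np

class SimpleKalmanFilter:
    """Linear Kalman filter over the 3D position of a single moving part."""

    def __init__(self, x0, P0, Q, R):
        self.x = np.asarray(x0, dtype=float)  # state estimate (part position)
        self.P = np.asarray(P0, dtype=float)  # estimate covariance
        self.Q = Q                            # process noise covariance
        self.R = R                            # measurement noise covariance

    def predict(self, delta):
        # motion model: add the displacement integrated from encoder angles + IMU data
        self.x = self.x + delta
        self.P = self.P + self.Q

    def update(self, z):
        # measurement: part position derived from the positioning-and-mapping result
        S = self.P + self.R                   # innovation covariance (observation matrix = I)
        K = self.P @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(len(self.x)) - K) @ self.P


# usage sketch: fuse one step of sensor data for one leg segment
kf = SimpleKalmanFilter(x0=[0.0, 0.0, 0.0], P0=np.eye(3),
                        Q=0.01 * np.eye(3), R=0.05 * np.eye(3))
kf.predict(delta=np.array([0.02, 0.00, 0.01]))   # from encoder/IMU kinematics
kf.update(z=np.array([0.021, -0.001, 0.012]))    # from the SLAM localization
print(kf.x)                                      # fused position estimate
```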
In the implementation of step S206, whether the first three-dimensional point-surface model intersects any one of the second three-dimensional point-surface models may be determined as follows: for any second three-dimensional point-surface model, determine, according to a convex hull intersection algorithm, whether the first three-dimensional point-surface model and that second three-dimensional point-surface model have an intersection at different moments.
It should be noted that, after the first three-dimensional point-surface model and the plurality of second three-dimensional point-surface models are obtained, the embodiment of the present invention determines, through a convex hull intersection algorithm, whether the first three-dimensional point-surface model and any one of the second three-dimensional point-surface models intersect at different moments, and thereby determines whether an obstacle blocks the robot and whether the robot can pass. In the embodiment of the present invention, the first three-dimensional point-surface model and the second three-dimensional point-surface models are in fact sets of spatial coordinates of points: each model corresponds to the set of spatial coordinates of the points on the target object or on one moving part of the robot. By applying the convex hull intersection algorithm to the set of spatial coordinates of points on the target object (the first three-dimensional point-surface model) and the set of spatial coordinates of points on a moving part of the robot (a second three-dimensional point-surface model), it can be determined whether the first three-dimensional point-surface model and that second three-dimensional point-surface model intersect.
It should be noted that determining whether the first three-dimensional point-surface model intersects any one of the second three-dimensional point-surface models is in fact a prediction of whether the robot will collide with an obstacle at the target position before the robot reaches that position. The second three-dimensional point-surface model corresponding to a moving part at each moment of the control instruction sequence is back-calculated from the position information and the part structure of that moving part. That is, the robot determines, according to the control instruction sequence, whether the second three-dimensional point-surface model of each moving part intersects the first three-dimensional point-surface model of the obstacle at the target position (each model corresponding to a set of spatial coordinates of points). If an intersection exists, the robot may collide with the obstacle, and the robot and its moving parts need to be adjusted so that the adjusted second three-dimensional point-surface model of each moving part no longer intersects the first three-dimensional point-surface model of the obstacle at the target position and the robot passes safely.
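The patent does not name a specific convex hull intersection algorithm. One standard way to test whether the convex hulls of two point sets intersect is a linear-programming feasibility check (a GJK implementation would work equally well); the sketch below assumes SciPy is available and that each point-surface model is given as an array of 3D vertex coordinates.

```python
import numpy as np
from scipy.optimize import linprog

def convex_hulls_intersect(points_a, points_b):
    """Return True if the convex hulls of two 3D point sets intersect.

    The hulls intersect exactly when some point can be written simultaneously as a
    convex combination of points_a and of points_b, which is a feasibility LP.
    """
    A = np.asarray(points_a, dtype=float)     # (n, 3) vertices of the first model
    B = np.asarray(points_b, dtype=float)     # (m, 3) vertices of the second model
    n, m = len(A), len(B)
    # variables: lambda (n) and mu (m), all >= 0
    # constraints: A.T @ lambda - B.T @ mu = 0,  sum(lambda) = 1,  sum(mu) = 1
    A_eq = np.zeros((5, n + m))
    A_eq[:3, :n] = A.T
    A_eq[:3, n:] = -B.T
    A_eq[3, :n] = 1.0
    A_eq[4, n:] = 1.0
    b_eq = np.array([0.0, 0.0, 0.0, 1.0, 1.0])
    res = linprog(c=np.zeros(n + m), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + m), method="highs")
    return res.success
```

Running this test for the target object model against each moving part's predicted model at every instant of the control instruction sequence gives the per-moment intersection decision described above.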
Optionally, adjusting a target moving component corresponding to a second three-dimensional point-surface model intersecting with the first three-dimensional point-surface model to enable the robot to successfully avoid the target object includes: adjusting the target moving part to cause the robot to successfully avoid the target object by one of: raising the target moving part to a target position, prohibiting the target moving part from moving, and adjusting the moving direction of the target moving part.
It should be noted that if the second three-dimensional point-surface model of any moving part of the robot intersects the first three-dimensional point-surface model of the obstacle at the target position, the robot may collide with the obstacle, and the robot and its moving parts need to be adjusted. The adjustment methods for the moving parts of the robot may include, but are not limited to, the following: (1) height adjustment: the target moving part is raised to a target position (the obstacle is at a lower position, and the target moving part and the obstacle are staggered vertically); (2) prohibiting movement: the target moving part is prohibited from moving (the passage is narrow, and the target moving part would collide with an obstacle if it moved); (3) adjusting the moving direction: the moving direction of the target moving part is adjusted (the robot selects another passage to pass through); (4) adjusting the footfall point: the footfall point of a moving part of the robot is shifted to the left or right to avoid the obstacle.
It should be noted that the above four adjustment manners may be used in combination, and the embodiment of the present invention does not limit this. In an optional embodiment, the height adjustment may be performed first; if the obstacle still cannot be avoided, adjusting the footfall point may be attempted; and if the obstacle still cannot be avoided, prohibiting movement may be attempted.
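One way to express this suggested order (height adjustment first, then the footfall point, then prohibiting movement) is a prioritized cascade that keeps the first adjustment whose adjusted part model no longer intersects the obstacle model. The adjustment callables and coordinates below are hypothetical placeholders, and the collision test reuses the convex_hulls_intersect sketch shown earlier.

```python
import numpy as np

def choose_adjustment(part_model, obstacle_model, adjustments, intersects):
    """Return the first adjustment (and the adjusted model) that clears the obstacle.

    part_model, obstacle_model -- (N, 3) arrays of 3D vertex coordinates
    adjustments -- ordered list of callables: part_model -> adjusted part_model
    intersects  -- collision test such as convex_hulls_intersect
    """
    for adjust in adjustments:
        candidate = adjust(part_model)
        if not intersects(candidate, obstacle_model):
            return adjust, candidate          # this adjustment avoids the target object
    return None, part_model                   # none worked: re-plan the moving direction


# usage sketch with hypothetical adjustments for one leg and one low obstacle
leg = np.array([[0.00, 0.00, 0.00], [0.10, 0.00, 0.00], [0.10, 0.00, 0.10]])
rock = np.array([[0.05, -0.05, 0.00], [0.15, 0.05, 0.00], [0.10, 0.00, 0.05]])
candidates = [lambda m: m + [0.0, 0.0, 0.10],   # (1) raise the part to a target height
              lambda m: m + [0.0, 0.05, 0.0],   # (4) shift the footfall point sideways
              lambda m: m]                      # (2) keep the part where it is
best, adjusted = choose_adjustment(leg, rock, candidates, convex_hulls_intersect)
```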
After step S206 is executed, that is, after it is determined whether there is an intersection between the first three-dimensional point-plane model and any one of the second three-dimensional point-plane models, the method further includes: and in the case of no intersection, instructing the robot to sequentially execute the control instruction sequence in the preset time period.
It should be noted that if there is no intersection between the first three-dimensional point-plane model and any one of the second three-dimensional point-plane models, that is, there is no intersection between the second three-dimensional point-plane model of each moving part of the robot and the first three-dimensional point-plane model of the obstacle at the target position, it indicates that the robot does not collide with the obstacle, and the robot can pass through. In the above case, the robot may execute the control instruction sequences within the preset time period in sequence.
In order to better understand the technical scheme, the invention also provides an alternative embodiment for explaining the technical scheme.
Fig. 3 is a schematic flowchart of an evasive method for a target object according to an embodiment of the present invention, as shown in fig. 3:
s302, receiving environment information sent by an upper layer perception algorithm, wherein the environment information comprises a 2D map, three-dimensional environment information, position information of a robot and information of an article (equivalent to a target object of the embodiment) needing the robot to accurately avoid obstacles;
s304, establishing a first three-dimensional point-surface model of the target object;
s306, acquiring the position information, the component structure and the control instruction sequence of each moving component of the robot;
s308, establishing a second three-dimensional point-surface model corresponding to each moving part of the robot to obtain a plurality of second three-dimensional point-surface models;
s310, determining whether the first three-dimensional point-surface model is intersected with any one of the second three-dimensional point-surface models;
s312, if the intersection exists, adjusting a target moving part corresponding to a second three-dimensional point-surface model intersected with the first three-dimensional point-surface model so as to enable the robot to successfully avoid the target object;
and S314, if no intersection exists, continuing to execute the control instruction sequence.
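Putting steps S302 to S314 together, one pass of this flow could be organized as sketched below; the environment and robot interfaces are hypothetical placeholders for the upper-layer perception algorithm and the control stack and are not APIs defined in the patent.

```python
def obstacle_avoidance_step(environment, robot, intersects, adjust_part):
    """One pass of the flow of fig. 3 (illustrative interfaces only).

    environment.target_object_vertices() -> first 3D point-surface model   (S302-S304)
    robot.predicted_part_models()        -> {part name: second 3D model}   (S306-S308)
    intersects(a, b)                     -> convex hull intersection test  (S310)
    adjust_part(name)                    -> adjust that moving part        (S312)
    robot.execute_control_sequence()     -> keep executing the commands    (S314)
    """
    obstacle = environment.target_object_vertices()              # S302-S304
    collision_found = False
    for name, model in robot.predicted_part_models().items():    # S306-S308
        if intersects(model, obstacle):                           # S310
            adjust_part(name)                                     # S312
            collision_found = True
    if not collision_found:
        robot.execute_control_sequence()                          # S314
```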
Through the above technical scheme, in the case that a target object needing to be avoided in the traveling area of the robot is determined, the first three-dimensional point-surface model of the target object and the second three-dimensional point-surface model corresponding to each moving part of the robot are acquired; it is then determined whether the first three-dimensional point-surface model intersects any one of the second three-dimensional point-surface models, and if an intersection exists, the target moving part corresponding to the intersecting second three-dimensional point-surface model is adjusted so that the robot successfully avoids the target object. Therefore, the problem in the prior art that the parts of the robot cannot be adjusted to allow the robot to successfully avoid obstacles is solved, and the robot can be applied to scenes with complex terrain and many obstacles.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be substantially embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g., a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk), and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, or a network device) to execute the methods described in the embodiments of the present application.
In this embodiment, a device for avoiding a target object is further provided, and the device for avoiding a target object is used to implement the foregoing embodiments and preferred embodiments, which have already been described and are not described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 4 is a block diagram of an evading device for a target object according to an alternative embodiment of the present invention, as shown in fig. 4:
the first determination module 40 is used for determining a target object needing to be avoided in a traveling area of the robot;
an obtaining module 42, configured to obtain a first three-dimensional point-and-plane model of the target object, and obtain a second three-dimensional point-and-plane model corresponding to each moving component of the robot, where the robot is provided with multiple moving components;
a second determining module 44, configured to respectively determine whether there is an intersection between the first three-dimensional point-and-plane model and any one of the second three-dimensional point-and-plane models;
and an adjusting module 46, configured to, in the case that there is an intersection, adjust a target moving component corresponding to a second three-dimensional point-and-plane model that has an intersection with the first three-dimensional point-and-plane model, so that the robot successfully avoids the target object.
Through the above technical scheme, in the case that a target object needing to be avoided in the traveling area of the robot is determined, the first three-dimensional point-surface model of the target object and the second three-dimensional point-surface model corresponding to each moving part of the robot are acquired; it is then determined whether the first three-dimensional point-surface model intersects any one of the second three-dimensional point-surface models, and if an intersection exists, the target moving part corresponding to the intersecting second three-dimensional point-surface model is adjusted so that the robot successfully avoids the target object. Therefore, the problem in the prior art that the parts of the robot cannot be adjusted to allow the robot to successfully avoid obstacles is solved, and the robot can be applied to scenes with complex terrain and many obstacles.
Optionally, the obtaining module 42 is further configured to determine position information of each moving component, a component structure of each moving component, and a control instruction sequence of the robot within a preset time period after the current time, where the control instruction sequence is used to instruct the robot to perform operations to be performed at different times within the preset time period; and determining a second three-dimensional point-surface model corresponding to each moving component according to the position information, the component structure and the control instruction sequence so as to obtain a plurality of second three-dimensional point-surface models corresponding to the plurality of moving components.
It is understood that, taking a quadruped robot as an example, a second three-dimensional point-surface model needs to be determined for each moving part of the quadruped robot; the determination of the three-dimensional point-surface models for the four moving parts may be performed synchronously or sequentially, and the moving parts may be understood as the structural members of the robot's upper legs and feet.
It should be noted that, in the technical solution of the embodiment of the present invention, for a robot with environment perception capability, collision detection is performed for areas or obstacles that must not be touched by any part of the robot during travel, and the moving parts of the robot are adjusted in time according to the detection result, thereby ensuring the walking safety of the quadruped robot in a complex ground environment. According to the embodiment of the invention, the robot can accurately avoid the obstacle through the first three-dimensional point-surface model of the target object (an obstacle or an area that the robot must not touch) and the second three-dimensional point-surface model corresponding to each moving part of the robot. To this end, in the embodiment of the present invention, the first three-dimensional point-surface model of the target object and the second three-dimensional point-surface model corresponding to each moving part of the robot need to be obtained, where each moving part of the robot corresponds to one second three-dimensional point-surface model, so that a plurality of second three-dimensional point-surface models are obtained.
In order to determine the second three-dimensional point-surface model corresponding to each moving part of the robot, the position information of each moving part, the part structure of each moving part, and the control instruction sequence of the robot within a preset time period after the current time are acquired. It should be noted that the control instruction sequence is used to indicate the operations the robot is to perform at different times within the preset time period; a control instruction sequence for the preset time period after the current time may be taken from the robot control algorithm, and the sequence usually contains 5 to 1000 instructions. The second three-dimensional point-surface model corresponding to each moving part can be determined from the position information, the part structure and the control instruction sequence of each moving part of the robot, so that a plurality of second three-dimensional point-surface models corresponding to the plurality of moving parts are obtained. Specifically, the second three-dimensional point-surface model corresponding to a moving part at each moment of the control instruction sequence is back-calculated from the position information and the part structure of that moving part. That is, the control instruction sequence consists of the commands that will control the robot within the preset time period after the current time, so the second three-dimensional point-surface models within that time period can be back-calculated from the control instruction sequence.
Optionally, the obtaining module 42 is further configured to obtain positioning and mapping information detected by a perception intelligence algorithm of the robot, obtain a rotation angle of each moving part detected by an encoder respectively arranged on each moving part, and obtain data detected by an inertial measurement unit of the robot; and determining the position information of each mobile component according to the positioning and mapping information, the rotation angle and the data.
In order to determine the second three-dimensional point-surface model corresponding to each moving part of the robot, the position information of each moving part needs to be acquired. To acquire this position information, it is necessary to obtain the positioning and map construction information detected by the robot, the rotation angle of each moving part of the robot, and the robot inertia data (that is, the data detected by the inertial measurement unit). It should be noted that the positioning and mapping information may be obtained by the perception intelligent algorithm of the robot, the rotation angle of each moving part may be obtained by the encoder arranged on that moving part, and the inertia data may be obtained by the inertial measurement unit of the robot. The robot inertia data describes the current action or motion of the robot. The position information of each moving part of the robot is then determined from the positioning and mapping information, the rotation angles and the inertia data.
Optionally, the obtaining module 42 is further configured to perform an operation on the positioning and map building information, the rotation angle, and the data through a kalman filter to obtain an operation result; and determining the position information of each moving part according to the operation result.
It should be noted that, in the embodiment of the present invention, the positioning and mapping information, the rotation angle of each moving part of the robot, and the inertia data of the robot may be processed by a Kalman filter to obtain the position information of each moving part of the robot. The Kalman filter filters out interference in the positioning and mapping information, the rotation angles and the inertia data, and at the same time performs optimal estimation on them through the linear system state equation of the Kalman filter, thereby obtaining the position information of each moving part of the robot.
Optionally, the second determining module 44 is further configured to determine, according to a convex hull intersection algorithm, whether the first three-dimensional point-surface model and any one of the second three-dimensional point-surface models have an intersection at different moments.
It should be noted that, after the first three-dimensional point-surface model and the plurality of second three-dimensional point-surface models are obtained, the method determines, through a convex hull intersection algorithm, whether the first three-dimensional point-surface model and any one of the second three-dimensional point-surface models intersect at different moments, and thereby determines whether an obstacle blocks the robot and whether the robot can pass. In the present invention, the first three-dimensional point-surface model and the second three-dimensional point-surface models are in fact sets of spatial coordinates of points: each model corresponds to the set of spatial coordinates of the points on the target object or on one moving part of the robot. By applying the convex hull intersection algorithm to the set of spatial coordinates of points on the target object (the first three-dimensional point-surface model) and the set of spatial coordinates of points on a moving part of the robot (a second three-dimensional point-surface model), it can be determined whether the first three-dimensional point-surface model and that second three-dimensional point-surface model intersect.
It should be noted that determining whether the first three-dimensional point-surface model intersects any one of the second three-dimensional point-surface models is in fact a prediction of whether the robot will collide with an obstacle at the target position before the robot reaches that position. The second three-dimensional point-surface model corresponding to a moving part at each moment of the control instruction sequence is back-calculated from the position information and the part structure of that moving part. That is, the robot determines, according to the control instruction sequence, whether the second three-dimensional point-surface model of each moving part intersects the first three-dimensional point-surface model of the obstacle at the target position (each model corresponding to a set of spatial coordinates of points). If an intersection exists, the robot may collide with the obstacle, and the robot and its moving parts need to be adjusted so that the adjusted second three-dimensional point-surface model of each moving part no longer intersects the first three-dimensional point-surface model of the obstacle at the target position and the robot passes safely.
Optionally, the adjusting module 46 is further configured to adjust the target moving component to make the robot successfully avoid the target object by one of the following manners: increasing the target moving part to a target position, prohibiting the target moving part from moving, and adjusting the moving direction of the target moving part.
It should be noted that if the second three-dimensional point-surface model of any moving part of the robot intersects the first three-dimensional point-surface model of the obstacle at the target position, the robot may collide with the obstacle, and the robot and its moving parts need to be adjusted. The adjustment methods for the moving parts of the robot may include, but are not limited to, the following: (1) height adjustment: the target moving part is raised to a target position (the obstacle is at a lower position, and the target moving part and the obstacle are staggered vertically); (2) prohibiting movement: the target moving part is prohibited from moving (the passage is narrow, and the target moving part would collide with an obstacle if it moved); (3) adjusting the moving direction: the moving direction of the target moving part is adjusted (the robot selects another passage to pass through); (4) adjusting the footfall point: the footfall point of a moving part of the robot is shifted to the left or right to avoid the obstacle.
It should be noted that the above four adjustment manners may be used in combination, and the embodiment of the present invention does not limit this. In an optional embodiment, the height adjustment may be performed first; if the obstacle still cannot be avoided, adjusting the footfall point may be attempted; and if the obstacle still cannot be avoided, prohibiting movement may be attempted.
Optionally, the adjusting module 46 is further configured to instruct the robot to sequentially execute the control instruction sequences within the preset time period in the case that there is no intersection.
It should be noted that if there is no intersection between the first three-dimensional point-plane model and any one of the second three-dimensional point-plane models, that is, there is no intersection between the second three-dimensional point-plane model of each moving part of the robot and the first three-dimensional point-plane model of the obstacle at the target position, it indicates that the robot does not collide with the obstacle, and the robot can pass through. In the above case, the robot may execute the control instruction sequences within the preset time period in sequence.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, determining a target object needing to be avoided by the robot in the traveling area;
s2, acquiring a first three-dimensional point-surface model of the target object and acquiring a second three-dimensional point-surface model corresponding to each moving part of the robot, wherein the robot is provided with a plurality of moving parts;
s3, respectively determining whether the first three-dimensional point-surface model intersects with any one of the second three-dimensional point-surface models;
and S4, if the intersection exists, adjusting a target moving component corresponding to a second three-dimensional point-surface model which is intersected with the first three-dimensional point-surface model so as to enable the robot to successfully avoid the target object.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing a computer program, such as a USB flash drive (U disk), a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, determining a target object needing to be avoided by the robot in the traveling area;
s2, acquiring a first three-dimensional point-surface model of the target object and acquiring a second three-dimensional point-surface model corresponding to each moving part of the robot, wherein the robot is provided with a plurality of moving parts;
s3, respectively determining whether the first three-dimensional point-surface model intersects with any one of the second three-dimensional point-surface models;
and S4, if the intersection exists, adjusting a target moving component corresponding to a second three-dimensional point-surface model which is intersected with the first three-dimensional point-surface model so as to enable the robot to successfully avoid the target object.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, and in some cases the steps shown or described may be performed in an order different from that described herein. Alternatively, they may be separately fabricated into individual integrated circuit modules, or multiple of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method for avoiding a target object, the method comprising:
determining a target object to be avoided by a robot in a traveling area;
acquiring a first three-dimensional point-surface model of the target object, and acquiring a second three-dimensional point-surface model corresponding to each moving part of the robot, wherein the robot is provided with a plurality of moving parts;
respectively determining whether the first three-dimensional point-surface model intersects with any one of the second three-dimensional point-surface models;
and in a case that an intersection exists, adjusting a target moving part corresponding to a second three-dimensional point-surface model that intersects with the first three-dimensional point-surface model, so that the robot successfully avoids the target object.
2. The method for avoiding a target object as recited in claim 1, wherein acquiring the second three-dimensional point-surface model corresponding to each moving part of the robot comprises:
determining position information of each moving part, a part structure of each moving part, and a control instruction sequence of the robot in a preset time period after the current time, wherein the control instruction sequence is used for indicating operations to be executed by the robot at different times in the preset time period;
and determining a second three-dimensional point-surface model corresponding to each moving part according to the position information, the part structure and the control instruction sequence, so as to obtain a plurality of second three-dimensional point-surface models corresponding to the plurality of moving parts.
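As a concrete illustration of the prediction described in claim 2 (not the patented implementation), the sketch below applies the motions implied by a control instruction sequence to the structural vertices of one moving part, yielding one predicted point-surface model per instant of the preset time period. The (rotation, translation) data layout is an assumption made for this example.

import numpy as np

def predict_second_models(part_vertices, part_pose, instruction_sequence):
    # part_vertices: (N, 3) vertices of the part structure in its own frame
    # part_pose: (R, t), the current rotation matrix and translation of the part
    # instruction_sequence: list of (delta_R, delta_t) commanded motions, one per instant
    R, t = part_pose
    models = []
    for delta_R, delta_t in instruction_sequence:
        # compose the commanded motion with the current pose
        t = delta_R @ t + delta_t
        R = delta_R @ R
        # transform the structural vertices into the world frame at this instant
        models.append(part_vertices @ R.T + t)
    return models  # one (N, 3) point set per instant of the preset time period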
3. The method for avoiding a target object as recited in claim 2, wherein determining the position information of each moving part comprises:
acquiring positioning and mapping information detected by an intelligent perception algorithm of the robot, acquiring a rotation angle of each moving part detected by an encoder arranged on that moving part, and acquiring data detected by an inertial measurement unit of the robot;
and determining the position information of each moving part according to the positioning and mapping information, the rotation angle and the data.
4. The method for avoiding a target object as recited in claim 3, wherein determining the position information of each moving part according to the positioning and mapping information, the rotation angle and the data comprises:
fusing the positioning and mapping information, the rotation angle and the data by means of a Kalman filter to obtain a calculation result;
and determining the position information of each moving part according to the calculation result.
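Claims 3 and 4 fuse the positioning and mapping information, the encoder rotation angles and the inertial measurement data by means of a Kalman filter. The generic linear Kalman predict and update equations below illustrate the kind of fusion involved; the state layout, the measurement matrices and the noise covariances are assumptions made for this example and are not specified by the claims.

import numpy as np

def kalman_predict(x, P, F, Q):
    # prediction step, driven for example by IMU-integrated motion F with process noise Q
    return F @ x, F @ P @ F.T + Q

def kalman_update(x, P, z, H, R):
    # measurement update with z, e.g. a SLAM pose or an encoder-derived joint position
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P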
5. The method for avoiding a target object as recited in claim 2, wherein respectively determining whether the first three-dimensional point-surface model intersects with any one of the second three-dimensional point-surface models comprises:
for any one of the second three-dimensional point-surface models, determining, according to a convex hull intersection algorithm, whether the first three-dimensional point-surface model and that second three-dimensional point-surface model intersect at different moments.
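A convex hull intersection test such as the one named in claim 5 can be realised in several ways. One compact formulation, sketched below, checks by linear programming whether some point is a convex combination of both point sets, which is exactly the condition for their convex hulls to intersect; this is an illustrative alternative rather than the algorithm required by the claim, and it assumes SciPy is available.

import numpy as np
from scipy.optimize import linprog

def convex_hulls_intersect(P, Q):
    # P: (n, 3) points of one point-surface model, Q: (m, 3) points of the other.
    # The hulls intersect iff there exist lam >= 0, mu >= 0 with sum(lam) = sum(mu) = 1
    # and P.T @ lam == Q.T @ mu, i.e. the feasibility problem below has a solution.
    n, m = len(P), len(Q)
    A_eq = np.vstack([
        np.hstack([P.T, -Q.T]),                   # P.T @ lam - Q.T @ mu = 0
        np.hstack([np.ones(n), np.zeros(m)]),     # sum(lam) = 1
        np.hstack([np.zeros(n), np.ones(m)]),     # sum(mu) = 1
    ])
    b_eq = np.concatenate([np.zeros(P.shape[1]), [1.0, 1.0]])
    res = linprog(c=np.zeros(n + m), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.success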
6. The method for avoiding a target object as recited in claim 1, wherein adjusting the target moving part corresponding to the second three-dimensional point-surface model that intersects with the first three-dimensional point-surface model, so that the robot successfully avoids the target object, comprises:
adjusting the target moving part, so that the robot successfully avoids the target object, in one of the following ways:
raising the target moving part to a target position, prohibiting the target moving part from moving, or adjusting the moving direction of the target moving part.
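The three adjustment options listed in claim 6 can be pictured as a simple dispatch over strategies. The strategy names and the part methods used below (move_to, stop, set_direction) are hypothetical and serve only to make the options concrete.

def adjust_target_moving_part(part, strategy, target_position=None, new_direction=None):
    if strategy == "raise":        # raise the target moving part to a target position
        part.move_to(target_position)
    elif strategy == "freeze":     # prohibit the target moving part from moving
        part.stop()
    elif strategy == "redirect":   # adjust the moving direction of the target moving part
        part.set_direction(new_direction)
    else:
        raise ValueError("unknown strategy: " + strategy)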
7. The method for avoiding a target object as recited in claim 2, wherein after respectively determining whether the first three-dimensional point-surface model intersects with any one of the second three-dimensional point-surface models, the method further comprises:
in a case that no intersection exists, instructing the robot to sequentially execute the control instruction sequence within the preset time period.
8. An apparatus for avoiding a target object, comprising:
a first determination module, used for determining a target object to be avoided by a robot in a traveling area;
an acquisition module, used for acquiring a first three-dimensional point-surface model of the target object and acquiring a second three-dimensional point-surface model corresponding to each moving part of the robot, wherein the robot is provided with a plurality of moving parts;
a second determination module, used for respectively determining whether the first three-dimensional point-surface model intersects with any one of the second three-dimensional point-surface models;
and an adjustment module, used for, in a case that an intersection exists, adjusting a target moving part corresponding to a second three-dimensional point-surface model that intersects with the first three-dimensional point-surface model, so that the robot successfully avoids the target object.
9. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to carry out the method of any one of claims 1 to 7 when executed.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 7.
CN202110180718.3A 2021-02-08 2021-02-08 Target object avoiding method and device, storage medium and electronic device Active CN112991527B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202210261795.6A CN114742960A (en) 2021-02-08 2021-02-08 Target object avoiding method and device, storage medium and electronic device
CN202110180718.3A CN112991527B (en) 2021-02-08 2021-02-08 Target object avoiding method and device, storage medium and electronic device
PCT/CN2021/137063 WO2022166397A1 (en) 2021-02-08 2021-12-10 Method and apparatus for avoiding target object, storage medium and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110180718.3A CN112991527B (en) 2021-02-08 2021-02-08 Target object avoiding method and device, storage medium and electronic device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210261795.6A Division CN114742960A (en) 2021-02-08 2021-02-08 Target object avoiding method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN112991527A true CN112991527A (en) 2021-06-18
CN112991527B CN112991527B (en) 2022-04-19

Family

ID=76393815

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110180718.3A Active CN112991527B (en) 2021-02-08 2021-02-08 Target object avoiding method and device, storage medium and electronic device
CN202210261795.6A Pending CN114742960A (en) 2021-02-08 2021-02-08 Target object avoiding method and device, storage medium and electronic device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210261795.6A Pending CN114742960A (en) 2021-02-08 2021-02-08 Target object avoiding method and device, storage medium and electronic device

Country Status (2)

Country Link
CN (2) CN112991527B (en)
WO (1) WO2022166397A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070271011A1 (en) * 2006-05-12 2007-11-22 Samsung Electronics Co., Ltd. Indoor map building apparatus, method, and medium for mobile robot
CN109407705A (en) * 2018-12-14 2019-03-01 厦门理工学院 A kind of method, apparatus, equipment and the storage medium of unmanned plane avoiding barrier
CN111743464A (en) * 2020-07-06 2020-10-09 追创科技(苏州)有限公司 Obstacle avoidance method and device based on line laser
CN112991527B (en) * 2021-02-08 2022-04-19 追觅创新科技(苏州)有限公司 Target object avoiding method and device, storage medium and electronic device

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003136466A (en) * 2001-10-25 2003-05-14 Murata Mach Ltd Control device of industrial robot
JP2004202627A (en) * 2002-12-25 2004-07-22 Yaskawa Electric Corp Interference checking device and method of horizontal multi-articulated robot
CN108475059A (en) * 2015-12-15 2018-08-31 高通股份有限公司 Autonomous vision guided navigation
CN107688342A (en) * 2017-03-27 2018-02-13 平安科技(深圳)有限公司 The obstruction-avoiding control system and method for robot
CN108733065A (en) * 2017-09-29 2018-11-02 北京猎户星空科技有限公司 A kind of barrier-avoiding method of robot, device and robot
TW201931039A (en) * 2018-01-12 2019-08-01 鴻海精密工業股份有限公司 A robot capable of taking elevator autonomously and a method for receiving and delivering articles using the same
CN109063635A (en) * 2018-07-27 2018-12-21 平安科技(深圳)有限公司 A kind of apparatus control method, terminal and computer-readable medium
CN110271006A (en) * 2019-07-19 2019-09-24 北京农业智能装备技术研究中心 Mechanical arm visual guide method and device
CN110632921A (en) * 2019-09-05 2019-12-31 北京百度网讯科技有限公司 Robot path planning method and device, electronic equipment and storage medium
CN110658819A (en) * 2019-09-30 2020-01-07 北京猎户星空科技有限公司 Obstacle avoidance method and device, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANTONIO SGORBISSA: "Integrated robot planning, path following, and obstacle avoidance in two and three dimensions: Wheeled robots, underwater vehicles, and multicopters", THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH *
李丰果 et al.: "Artificial Intelligence Algorithms Empowering Vision-Navigation Cleaning Robots", 《人工智能》 (Artificial Intelligence) *
杨兴: "Research on Path Planning for Indoor Autonomous Navigation Mobile Robots", 《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology Series) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022166397A1 (en) * 2021-02-08 2022-08-11 追觅创新科技(苏州)有限公司 Method and apparatus for avoiding target object, storage medium and electronic apparatus

Also Published As

Publication number Publication date
CN112991527B (en) 2022-04-19
WO2022166397A1 (en) 2022-08-11
CN114742960A (en) 2022-07-12

Similar Documents

Publication Publication Date Title
Mac et al. The development of an autonomous navigation system with optimal control of an UAV in partly unknown indoor environment
Gan et al. Multi-UAV target search using explicit decentralized gradient-based negotiation
EP3410246A1 (en) Robot obstacle avoidance control system and method, robot, and storage medium
CN110986953B (en) Path planning method, robot and computer readable storage medium
CN110524544A (en) A kind of control method of manipulator motion, terminal and readable storage medium storing program for executing
CN112306050B (en) Autonomous robot, walking path planning method and device thereof and storage medium
CN108733065B (en) Obstacle avoidance method and device for robot and robot
CN112991527B (en) Target object avoiding method and device, storage medium and electronic device
Hatanaka et al. A passivity-based approach to human–swarm collaboration and passivity analysis of human operators
CN110940341A (en) Path planning method, robot and computer readable storage medium
US9360320B2 (en) Autonomous coordination of agents via attraction and repulsion
CN113848893A (en) Robot navigation method, device, equipment and storage medium
CN116520852A (en) Method, device and equipment for capturing multiple targets by group robots under local information
Maniscalco et al. Robust mobile robot self-localization by soft sensor paradigm
CN114488853A (en) Scene simulation method and device for robot path planning
Gautam et al. Multi-robot online terrain coverage under communication range restrictions–an empirical study
Gautam et al. Experimental evaluation of multi-robot online terrain coverage approach
EP4080309A1 (en) Own-position estimating device, moving body, own-position estimating method, and own-position estimating program
Almeida et al. Real-time tracking of multiple moving objects using particle filters and probabilistic data association
Fabiani et al. Tracking an unpredictable target among occluding obstacles under localization uncertainties
CN113467452A (en) Avoidance method and device for mobile robot, storage medium, and electronic device
Nametala et al. A new distance diffusion algorithm for a path-planning model based on cellular automata
Charalampous et al. Autonomous robot path planning techniques using cellular automata
CN112445211B (en) Robot control method and device, storage medium and electronic device
CN112462765B (en) Robot, control method and device thereof, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 215104 E3, building 16, No. 2288, Wuzhong Avenue, Yuexi, Wuzhong District, Suzhou City, Jiangsu Province

Applicant after: Pursuit innovation technology (Suzhou) Co.,Ltd.

Address before: 215104 E3, building 16, No. 2288, Wuzhong Avenue, Yuexi, Wuzhong District, Suzhou City, Jiangsu Province

Applicant before: ZHUICHUANG TECHNOLOGY (SUZHOU) Co.,Ltd.

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221123

Address after: No.018, 8th floor, building 6, No.33 yard, middle Xierqi Road, Haidian District, Beijing 100085

Patentee after: BEIJING XIAOMI MOBILE SOFTWARE Co.,Ltd.

Patentee after: Pursuit innovation technology (Suzhou) Co.,Ltd.

Address before: 215104 E3, building 16, No. 2288, Wuzhong Avenue, Yuexi, Wuzhong District, Suzhou City, Jiangsu Province

Patentee before: Pursuit innovation technology (Suzhou) Co.,Ltd.