CN110990594A - Robot space cognition method and system based on natural language interaction - Google Patents

Robot space cognition method and system based on natural language interaction

Info

Publication number
CN110990594A
CN110990594A (application CN201911207208.XA; granted publication CN110990594B)
Authority
CN
China
Prior art keywords
distance
target
point
relationship
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911207208.XA
Other languages
Chinese (zh)
Other versions
CN110990594B (en)
Inventor
付艳
邱侃
李世其
王峻峰
程力
王晓怡
谭杰
李雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201911207208.XA priority Critical patent/CN110990594B/en
Publication of CN110990594A publication Critical patent/CN110990594A/en
Application granted granted Critical
Publication of CN110990594B publication Critical patent/CN110990594B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval of unstructured textual data
    • G06F16/38 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/387 - Retrieval using geographical or spatial information, e.g. location
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval of unstructured textual data
    • G06F16/33 - Querying
    • G06F16/332 - Query formulation
    • G06F16/3329 - Natural language query formulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Library & Information Science (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a robot space cognition method and system based on natural language interaction, comprising the following steps: establishing a spatial-information corpus based on natural-language expression, comprising a target attribute-feature description corpus and a target position-feature description corpus; converting the two corpora into a keyword array according to preset grammar rules; judging, from the object-related features contained in the keyword array, the categories of the target object and the reference object and the spatial-position calculation relationship, where the spatial-position calculation relationship comprises at least one of the following: the direction relation of the target object relative to the reference object, the distance relation of the target object relative to the reference object, and the topological relation of the target object relative to at least two reference objects; and determining the coordinate range of the target object from the categories and the spatial-position calculation relationship, for subsequent search of the target object. The invention reduces the interaction frequency between human and robot and improves human-computer interaction efficiency.

Description

Robot space cognition method and system based on natural language interaction
Technical Field
The invention relates to the technical field of human-computer interaction, in particular to a robot space cognition method and system based on natural language interaction.
Background
In unstructured environments such as planetary exploration and field rescue, a robot cannot understand all environmental information, and completing tasks autonomously remains difficult and inefficient. Human strengths in comprehensive perception, predictive judgment and spatial reasoning can compensate for this deficiency, and human-robot cooperation is an effective way to execute such tasks. The human-computer interaction mode is one of the key elements of the cooperation process: a natural and friendly interaction mode can effectively improve the level of interaction. Natural language is one such mode; it is unrestricted, does not force people to distort their natural ways of thinking and acting to fit the robot's requirements, places low demands on environment and equipment, suits unstructured environments, and is widely used in the field of mobile robots.
Completing tasks through human-robot cooperation involves the processing of spatial information, i.e. spatial cognition. Because the spatial-cognition mechanisms of humans and robots differ greatly, robots struggle to understand spatial information expressed in natural language and can only receive one-way structured control instructions; the resulting frequent, slow interaction greatly reduces operating efficiency. The key to solving this problem is enabling the robot to understand human cognitive expressions of spatial information. The prior art simulates the human cognitive process within a cognitive-theory framework: spatial reference-frame selection and spatial-inference characteristics in human verbal commands and communication are obtained through experiments, and an attempt is made to build a spatial cognition and inference module for the robot. In the field of spatial-information interaction for human-robot cooperation, a robot spatial-cognition method that supports natural-language interaction is urgently needed.
Disclosure of Invention
Aiming at the defects of the prior art, the invention solves the technical problem that, because human and robot spatial-cognition mechanisms differ greatly, robots struggle to understand spatial information expressed in natural language and can only receive one-way structured control instructions, so that frequent, slow interaction greatly reduces operating efficiency.
In order to achieve the above object, in a first aspect, the present invention provides a robot space cognition method based on natural language interaction, comprising the following steps:
establishing a spatial information corpus based on natural language expression, comprising: a target attribute feature description corpus and a target location feature description corpus;
converting the target attribute feature description corpus and the target position feature description corpus into a keyword array according to a preset grammar rule; the keywords include: the target object name, the reference object name, the direction relation and the distance relation;
according to the related characteristics of the objects contained in the keyword array, judging the categories of the target object and the reference object and the spatial position calculation relationship, wherein the categories comprise a point object and a planar object, and the spatial position calculation relationship comprises at least one of the following relationships: the direction relation of the target object relative to the reference object, the distance relation of the target object relative to the reference object and the topological relation of the target object relative to at least two reference objects;
and determining the coordinate range of the target object according to the categories of the target object and the reference object and the spatial-position calculation relationship, for subsequent search of the target object.
Optionally, the category to which an object belongs is determined as follows:
if the object is an independent object and abstracting it into a point does not affect the spatial-position expression of itself or of other objects, the object is regarded as a point object;
if the area ratio of the object is larger than a preset value, so that abstracting it into a point would affect the spatial-position expression of itself or of other objects, the object is regarded as a planar object.
Optionally, according to the directional relationship of the object with respect to the reference object, the coordinate range of the object is obtained by the following steps:
when the reference object is a point object, an eight-direction conical model is adopted to divide the whole two-dimensional space plane into eight parts with directivity, and the interval between every two directions is 45 degrees; setting a point-shaped reference object at the origin of a coordinate system, and for any point-shaped target object in the space, obtaining a coordinate position set of the point-shaped target object relative to the point-shaped reference object in different directions according to a plurality of preset straight line constraints;
when the reference object is a planar reference object, determining the planar reference object and a minimum circumscribed rectangle thereof by using a minimum boundary rectangle model, and taking straight lines where four rectangle sides of the minimum circumscribed rectangle are located as boundary lines in all directions; determining a coordinate position set of the point-shaped object relative to the planar reference object in different directions according to the boundary lines in all directions;
if two reference objects exist, the coordinate range of the target object is determined separately from each reference-object position description, and the intersection of the two ranges is then taken.
Optionally, the distance relationship between the target object and the reference object is a quantitative distance, a qualitative distance, or a time distance, and the coordinate range of the target object is solved through the following steps:
when the distance relation is a quantitative distance and the reference object is a point-like reference object, the target object lies in an annular area whose distance from the point-like reference object equals the quantitative distance within a preset error range;
when the distance relation is a qualitative distance, different distance thresholds are preset for distances of different granularity levels; if the reference object is a point-like reference object, the target object lies in the area whose distance from the point-like reference object falls within the qualitative distance range;
when the distance relation is a time distance, the time distance is first converted into a quantitative distance, and the coordinate range of the point-like target object is then determined;
when distance relationships to two reference objects are used to describe the position of the target object, the coordinate range of the target object is determined separately from each reference-object distance description, and the intersection of the two ranges gives the final coordinate range of the target object.
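The three distance types above can be sketched as admissible radius intervals around a point-like reference; a minimal sketch in which the granularity thresholds, error bounds, and speed figure are illustrative assumptions, not values from the patent:

```python
import math

# Hypothetical thresholds (metres) for qualitative granularity levels.
QUALITATIVE_RANGES = {"very near": (0.0, 2.0), "near": (2.0, 10.0), "far": (10.0, 50.0)}

def quantitative_ring(d, err):
    """Quantitative distance d with error tolerance err -> radius interval."""
    return (max(d - err, 0.0), d + err)

def qualitative_ring(term):
    """Qualitative distance term -> preset radius interval."""
    return QUALITATIVE_RANGES[term]

def time_ring(speed, seconds, err):
    """Time distance ('movement mode + time') converted to a quantitative ring."""
    return quantitative_ring(speed * seconds, err)

def in_ring(target, ref, ring):
    """Does the point-like target lie in the annulus centred on the point reference?"""
    r = math.dist(target, ref)
    return ring[0] <= r <= ring[1]
```

Intersecting the rings produced by two reference objects then yields the final coordinate range, as the claim describes.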
Optionally, according to the distance relationship and the direction relationship of the target object relative to the reference object, the coordinate range of the target object is solved through the following steps:
and solving the coordinate range of the target object according to two constraint conditions of the distance relation and the direction relation of the target object relative to the reference object, and finally solving the intersection of the two coordinate ranges to determine the final coordinate range of the target object.
Optionally, according to the topological relation of the object with respect to the at least two reference objects, the coordinate range of the object is solved by the following steps:
if the topological relation is that the target object is between the two reference objects:
when the two reference objects are both point-like reference objects, the target object lies within the area no farther than a preset distance from the line segment connecting the two reference objects;
when the two reference objects are planar reference objects, the range of the target object is determined according to the rectangular edge of the minimum circumscribed rectangle of the two planar reference objects;
when one reference object is point-like and the other is planar, the range of the target object is determined from the coordinates of the point-like reference object and the rectangle sides of the minimum circumscribed rectangle of the planar reference object.
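For two point-like references, "between" reduces to a band around the connecting segment; a minimal sketch (the preset distance is an assumed parameter):

```python
import math

def near_segment(a, b, p, d_max):
    """'Between' two point-like references a and b: point p must lie within
    d_max of the line segment ab (a band around the connecting segment)."""
    ax, ay = a
    bx, by = b
    px, py = p
    abx, aby = bx - ax, by - ay
    ab2 = abx * abx + aby * aby
    if ab2 == 0:
        return math.dist(a, p) <= d_max  # degenerate: both references coincide
    # Project p onto the line through a and b, clamped to the segment.
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / ab2))
    closest = (ax + t * abx, ay + t * aby)
    return math.dist(p, closest) <= d_max
```

For planar references the same idea applies with the band bounded by the relevant sides of the minimum circumscribed rectangles instead of a single segment.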
In a second aspect, the present invention provides a robot space cognition system based on natural language interaction, comprising:
the corpus establishing unit is used for establishing a spatial information corpus based on natural language expression, and comprises the following steps: a target attribute feature description corpus and a target location feature description corpus;
a keyword determining unit, configured to convert the target attribute feature description corpus and the target location feature description corpus into a keyword array according to a preset grammar rule; the keywords include: the target object name, the reference object name, the direction relation and the distance relation;
a feature determination unit, configured to determine, according to the object-related features included in the keyword array, a category and a spatial position calculation relationship to which the target object and the reference object belong, where the category includes a point object and a planar object, and the spatial position calculation relationship includes at least one of the following relationships: the direction relation of the target object relative to the reference object, the distance relation of the target object relative to the reference object and the topological relation of the target object relative to at least two reference objects;
and the target object coordinate determining unit is used for determining the coordinate range of the target object according to the categories of the target object and the reference object and the spatial-position calculation relationship, for subsequent search of the target object.
Optionally, the keyword determination unit determines the category to which an object belongs as follows: if the object is an independent object and abstracting it into a point does not affect the spatial-position expression of itself or of other objects, the object is regarded as a point object; if the area ratio of the object is larger than a preset value, so that abstracting it into a point would affect the spatial-position expression of itself or of other objects, the object is regarded as a planar object.
Optionally, the target object coordinate determining unit solves the coordinate range of the target object from the direction relation of the target object relative to the reference object as follows: when the reference object is a point object, an eight-direction cone model divides the whole two-dimensional plane into eight directional parts, the interval between adjacent directions being 45 degrees; the point-like reference object is set at the origin of the coordinate system, and for any point-like target object in space, the set of coordinate positions of the target object relative to the reference object in the different directions is obtained from several preset straight-line constraints; when the reference object is planar, the planar reference object and its minimum circumscribed rectangle are determined with a minimum bounding rectangle model, the straight lines on which the four rectangle sides lie serve as the boundary lines in each direction, and the set of coordinate positions of the point-like target object relative to the planar reference object in the different directions is determined from these boundary lines; if two reference objects exist, the coordinate range of the target object is determined separately from each reference-object position description, and the intersection of the two ranges is then taken.
Optionally, the distance relationship between the target object and the reference object is a quantitative distance, a qualitative distance, or a time distance, and the target object coordinate determining unit solves the coordinate range of the target object as follows: when the distance relation is a quantitative distance and the reference object is a point-like reference object, the target object lies in an annular area whose distance from the point-like reference object equals the quantitative distance within a preset error range; when the distance relation is a qualitative distance, different distance thresholds are preset for distances of different granularity levels, and if the reference object is a point-like reference object, the target object lies in the area whose distance from the point-like reference object falls within the qualitative distance range; when the distance relation is a time distance, the time distance is first converted into a quantitative distance, and the coordinate range of the point-like target object is then determined; when distance relationships to two reference objects are used to describe the position of the target object, the coordinate range of the target object is determined separately from each reference-object distance description, and the intersection of the two ranges gives the final coordinate range.
Generally, compared with the prior art, the above technical solution conceived by the present invention has the following beneficial effects:
Compared with the traditional spatial-cognition mode in which the robot mainly receives one-way structured control instructions, the robot space cognition method and system based on natural language interaction provided by the invention add the robot's cognition of natural language: keywords are extracted from the natural-language instruction and the coordinate range of the target object is determined from them. This makes natural language perceptible to the robot, reduces the interaction frequency between the human and the robot, and improves human-computer interaction efficiency.
Compared with the traditional spatial-cognition mode in which a person must send structured control instructions and learn a large number of expression rules, the method and system provided by the invention let the person express spatial information in natural language, reducing the cognitive load.
Drawings
FIG. 1 is a schematic flow chart of a robot space cognition method based on natural language interaction according to the present invention;
FIG. 2 is a schematic diagram of an eight-directional cone model provided by the present invention;
FIG. 3 is a schematic diagram of a Minimum Bounding Rectangle (MBR) model provided by the present invention;
FIG. 4 is a schematic diagram of coordinate system transformation provided by the present invention;
FIG. 5 is a schematic diagram illustrating a distance relationship provided by the present invention;
FIG. 6 is a schematic diagram of a point-like reference object description only described in a topological relation provided by the present invention;
FIG. 7 is a schematic diagram of a planar reference object described only by a topological relation provided by the present invention;
FIG. 8 is a schematic diagram of the description of a "point-like + planar" reference object only described by a topological relation provided by the present invention;
fig. 9 is a robot space recognition system architecture diagram based on natural language interaction provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
To remedy the defect of the prior art that robots struggle to recognize and understand spatial information expressed in natural language in complex task scenarios, the invention provides a robot spatial-cognition method supporting natural-language interaction, enabling the robot to understand complex spatial information expressed in natural language, such as target description and positioning. The specific technical route is as follows:
the robot space cognition method comprises a natural language processing module and a space cognition module, and is divided into five main steps.
(1) Establish a spatial-information expression corpus for the task scenario, comprising target-object attribute-feature descriptions and target-position description corpora.
(2) The natural language processing module converts spatial information expressed in natural language into a keyword array according to grammar rules and the corpus of step 1; the keyword array comprises the target object name, each dimensional attribute of the target object, the target object type, the reference object name, and the like.
(3) The spatial cognition module, using stored knowledge of object characteristics, judges the object category and the spatial-position calculation type from the keyword array. Object categories comprise point objects and planar objects; the spatial-position calculation types comprise single (two) reference object + direction relation, single (two) reference object + distance relation, single (two) reference object + direction relation + distance relation, and two reference objects + topological relation only.
(4) The spatial cognition module calculates the spatial position according to the different types of spatial relations.
(5) The spatial cognition module completes the matching of the target-object features with the target-object coordinates.
Fig. 1 shows the technical route of the robot spatial-cognition method. The system is divided into two main parts: the natural language processing module and the spatial cognition module. The natural language processing module contains the spatial-information corpus; it processes spatial information expressed in natural language and parses out the corresponding keywords. The spatial cognition module performs the calculation and reasoning of spatial positions: after receiving the keywords, it uses stored knowledge of object characteristics to judge the category of the reference object and the spatial-position calculation type for the target object, performs the positional inference for each type, and finally matches the target-object features to the target-object coordinates. The details are as follows:
1. corpus creation and natural language parsing
According to the task scenario, a large number of spatial-information expressions are collected through experiments to form a spatial-information corpus, from which the corresponding natural-language expression rules are obtained and the important keywords are extracted, including: target object name, reference object name, direction relation, distance relation, and so on. On this basis, natural language can be parsed by compiling a corpus covering the task process. For example, the natural-language expression "the red flag is 5 meters to the left of the stone" can be resolved, based on the expression rules, into "target name: red flag; reference: stone; direction relation: left; distance relation: 5 m". Table 1 shows part of the grammar rules.
Table 1 grammar rules example table
[Table 1 appears as an image in the original and is not reproduced here.]
In table 1, "{ }" denotes a keyword, "[ ]" denotes an assist word, and "|" denotes a relationship of or.
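A minimal sketch of how such a grammar rule could drive keyword extraction; the regex and field names below are hypothetical, modeled on the worked example above rather than on the patent's actual Table 1 rules:

```python
import re

# Hypothetical rule mirroring "{target} is {distance} to the {direction} of {reference}".
PATTERN = re.compile(
    r"(?:the\s+)?(?P<target>[\w ]+?)\s+is\s+"
    r"(?P<distance>\d+(?:\.\d+)?\s*(?:m|meters?))\s+to\s+the\s+"
    r"(?P<direction>left|right|north|south|east|west|front|back)\s+of\s+"
    r"(?:the\s+)?(?P<reference>[\w ]+)\.?$"
)

def parse_spatial_expression(sentence):
    """Convert a natural-language position description into a keyword array
    (here, a dict of keyword fields)."""
    m = PATTERN.match(sentence.strip().lower())
    if m is None:
        raise ValueError(f"no grammar rule matched: {sentence!r}")
    return m.groupdict()
```

For instance, parsing "The red flag is 5 meters to the left of the stone." yields the target "red flag", the reference "stone", the direction "left", and the distance "5 meters".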
2. And judging the object type and the spatial relationship calculation type.
Since a person's description of a target object's position depends on the nature and characteristics of the object, the objects involved in a spatial-cognition task are divided, following related concepts in geography, into two types: point objects and planar objects. If an object is independent and its footprint is unimportant, i.e. abstracting it into a point does not affect the spatial-position expression of itself or of other objects, it can be regarded as a point object; if its area ratio is large, so that abstracting it into a point clearly affects the spatial-position expression of itself or of other objects, it is regarded as a planar object.
In the spatial-cognition scenario considered here, the target object to be located is generally regarded as a point object, while the reference object may be a point or a planar object. The category of an object is determined mainly from the robot's prior knowledge base, by matching the target-object or reference-object name in the keyword array against the knowledge base.
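The knowledge-base match can be sketched as a simple lookup; the entries and the "point"/"planar" labels below are illustrative assumptions, not the patent's actual data:

```python
# Toy prior knowledge base mapping object names to their category.
KNOWLEDGE_BASE = {
    "stone": "point",
    "red flag": "point",
    "building": "planar",
    "lake": "planar",
}

def object_category(name):
    """Match an object name from the keyword array against the prior
    knowledge base; unknown names default to point-like, since the
    object to be located is generally treated as a point object."""
    return KNOWLEDGE_BASE.get(name, "point")
```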
The position description of the target object is a representation of the spatial relationship of different objects, namely, the object position is commonly characterized by the target object, the reference object and the spatial relationship.
The reference can be divided into a single reference and two references. The spatial relationship comprises a topological relationship, a directional relationship and a distance relationship.
The topological relation refers to relations between spatial elements such as connection, disjointness and adjacency, without regard to specific positions. When a person describes an object's position in natural language, mainly the disjoint relation is involved; topological relations such as intersection and containment are essentially absent, and disjointness is generally assumed by default.
Directional relationships refer to the position of one object in space relative to one or more other objects. In describing the directional relationship, at least three elements are generally required, namely the target object, the reference object, and the reference frame used. The directional relationship can be divided into an absolute directional relationship and a relative directional relationship according to the used reference frame. The absolute directional relationship means that a world coordinate system is used when the directional relationship description is made, and the relative directional relationship means that a relative coordinate system is used when the directional relationship description is made.
The distance relationship reflects the geometric proximity between objects in different spaces, and the daily life uses a quantitative distance, i.e. a distance described by a numerical value, which is generally based on an artificially set measurement system.
Qualitative distances fall into two categories: one is expressed with a degree adverb plus a quantitative distance, e.g. "about 5 m"; the other uses only adverbs such as "very near".
The time distance is expressed in the form "movement mode + time", e.g. "a five-minute walk".
Combining the above results, the calculation types of the spatial position are divided into the following categories: (1) single (two) reference + direction relationship; (2) single (two) reference + distance relationship; (3) single (two) reference objects + direction relation + distance relation; (4) two references + topological relationship only.
3. Calculation of spatial position
(1) Single (two) reference object + direction relation
The directional relationship can be divided into an absolute directional relationship and a relative directional relationship. On the premise of using the absolute direction relation: when the reference object is a point-like reference object, an eight-direction conical model is generally adopted, and as shown in fig. 2, the model divides the whole two-dimensional space plane into eight parts with directivity, and the interval between every two directions is 45 degrees.
In order to accurately describe the directional relationship of spatial objects, a coordinate system as shown in fig. 2 is established. Assuming that the point-like reference object A is located at the origin O of the coordinate system, then for any point-like target object B in space, the coordinate sets of B relative to A in the different directions can be obtained from the constraints of the four straight lines L1, L2, L3 and L4, as shown in Table 2.
TABLE 2 schematic table for calculating spatial relationship of point-like reference object
[Table 2: image not reproduced in this extraction]
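The eight-direction cone model described above can be sketched in code. This is a minimal sketch, assuming each compass direction owns the 45-degree cone centred on it; which sector owns a boundary ray is a modelling choice the text does not fix.

```python
import math

# Eight compass directions of the cone model, counter-clockwise from east.
DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def cone_direction(ax, ay, bx, by):
    """Direction of a point-like target B relative to a point-like reference A,
    under the eight-direction cone model (45-degree sectors)."""
    angle = math.degrees(math.atan2(by - ay, bx - ax)) % 360.0
    # Each direction owns the 45-degree cone centred on it:
    # E covers [-22.5, 22.5), NE covers [22.5, 67.5), and so on.
    sector = int(((angle + 22.5) % 360.0) // 45.0)
    return DIRECTIONS[sector]
```

For example, a target at (1, 1) relative to a reference at the origin falls in the northeast sector.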
When the reference object is a planar reference object, a minimum bounding rectangle (MBR) model is used, as shown in fig. 3. In fig. 3, the hatched portion is the planar reference object, denoted A, and rectangle abcd is its minimum circumscribed rectangle; the lines on which the four sides ab, bc, cd and da lie can then serve as the boundaries in each direction.
A coordinate system as shown in fig. 3 is established. The four vertices of the minimum bounding rectangle abcd of the planar reference object A are a(x_1, y_1), b(x_1, y_2), c(x_2, y_2) and d(x_2, y_1). For any point-like target object B in space, the coordinate sets of B relative to the planar reference object A in the different directions are obtained from the constraints of the line equations L1, L2, L3 and L4, as shown in Table 3; Dir() in Table 3 denotes the direction. If a relative direction relationship is used, it must first be converted into an absolute direction relationship, i.e. converted from the relative coordinate system to the world coordinate system.
TABLE 3 schematic space calculation table for planar reference object
[Table 3: image not reproduced in this extraction]
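The MBR-based direction test above can be sketched as a nine-region partition induced by the four side lines of the rectangle. A minimal sketch; the label used for the inside/overlapping region is a placeholder, not the patent's notation.

```python
def mbr_direction(x1, y1, x2, y2, bx, by):
    """Direction Dir(B) of a point target B relative to a planar reference
    whose minimum bounding rectangle spans [x1, x2] x [y1, y2].
    The four side lines partition the plane into nine regions."""
    col = "W" if bx < x1 else ("E" if bx > x2 else "")
    row = "S" if by < y1 else ("N" if by > y2 else "")
    return (row + col) or "inside/overlapping"
```

A point beyond both the top and right side lines, for instance, is classified as northeast of the reference.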
A coordinate system is established with the robot's starting point as the origin of the world coordinate system, due east as the positive x-axis and due north as the positive y-axis. Suppose the position of the robot at time t is (x_t, y_t) and its deflection angle relative to the positive x-axis is θ, as shown in fig. 4.
The target object P is located to the right front of the robot, with coordinates P'(x_p', y_p') in the robot-centred coordinate system. These are now converted into coordinates in the world coordinate system, assumed to be P(x_p, y_p); the conversion formula between the two can be derived from the geometric relationship as follows:
x_p = x_t + x_p'·cosθ - y_p'·sinθ
y_p = y_t + x_p'·sinθ + y_p'·cosθ (1)
Similarly, given the origin coordinates (a, b) of the relative coordinate system, the deflection angle θ of the relative coordinate system with respect to the world coordinate system, and the position coordinates (x_p', y_p') of a point P in the relative coordinate system, the coordinates P(x_p, y_p) of P in the world coordinate system are obtained as:
x_p = a + x_p'·cosθ - y_p'·sinθ
y_p = b + x_p'·sinθ + y_p'·cosθ (2)
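The relative-to-world conversion just described is the standard 2D rigid-body transform; the sketch below assumes that reading, since the source renders the formula only as an image.

```python
import math

def relative_to_world(a, b, theta_deg, xp_rel, yp_rel):
    """Convert a point from a relative frame, whose origin lies at (a, b)
    in world coordinates and which is rotated theta_deg from the world
    x-axis, into world coordinates (rotation followed by translation)."""
    th = math.radians(theta_deg)
    xp = a + xp_rel * math.cos(th) - yp_rel * math.sin(th)
    yp = b + xp_rel * math.sin(th) + yp_rel * math.cos(th)
    return xp, yp
```

With a zero deflection angle the transform reduces to a pure translation by the frame origin.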
If there are two reference objects, the coordinate ranges of the target object are determined separately from the two reference-object direction descriptions, and the intersection of the two ranges is then taken.
(2) Single (two) reference object + distance relation
The distance relationship includes quantitative distance, qualitative distance and time distance. Since the distance relationship is mostly used to describe point-like reference objects, the reference object is simplified here to a point-like one. A quantitative distance is a distance described by a numerical value, typically based on an artificially set measurement system. However, because people's spatial cognition differs and some quantitative-distance descriptions deviate considerably, an error parameter d must be introduced when computing a position from a distance relationship. In a natural-language description, when the distance between the point-like target object A and the point-like reference object B is stated as D, the actual distance is D_1 = D ± d. From related prior conclusions, it can be approximately taken that:
d=k×D (3)
in the formula (3), k is an error proportionality coefficient.
As shown in fig. 5, A is a point-like reference object; if a point-like target object B is described as being at distance D from A, the gray region is the actual area in which B may lie.
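The region implied by a stated distance D with error d = k×D is an annulus around the reference point, so membership can be tested directly. A sketch; the value k = 0.1 is an illustrative assumption, since the text does not fix the error proportionality coefficient.

```python
import math

def in_quantitative_range(ax, ay, bx, by, D, k=0.1):
    """True when target B lies in the annulus [D - d, D + d] around the
    point-like reference A, with error d = k * D as in formula (3).
    k = 0.1 is an assumed, illustrative coefficient."""
    d = k * D
    dist = math.hypot(bx - ax, by - ay)
    return D - d <= dist <= D + d
```

A target at exactly the stated distance always passes; one outside the error band fails.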
Qualitative distances can be broadly divided into those described using the "degree adverb + quantitative distance" and those described using only the degree adverb, such as "where the ball is in close proximity to the cart".
For the first case, the quantitative-distance calculation can be adopted directly; since the error is already accounted for when a quantitative distance is used in position calculation, no additional error term needs to be computed. For the second case, the concept of a qualitative distance description framework is adopted, and the distance relationship can be divided at different granularity levels according to the research task.
Four qualitative distance granularity levels are introduced: very close, near, far and very far. Each qualitative distance relationship is quantized and set as a quantitative distance, as shown in Table 4:
TABLE 4 qualitative distance relationship correspondence table
[Table 4: image not reproduced in this extraction]
The time distance is converted into a quantitative distance for calculation. If the time distance is L, the speed is v, and the consumed time is t, then the distance relationship is:
L=vt (4)
the moving speed v of the introduced robot is 2m/s to participate in calculation, and the time-distance relation is quantified.
When the distance relationships to two reference objects are used to describe the position of the target object, the coordinate ranges of the target object are determined separately from the two distance descriptions, and their intersection is taken.
(3) Single (two) reference object + direction relation + distance relation
In the foregoing model, a relative direction relationship can be converted into an absolute direction relationship, and qualitative and time distance relationships can be converted into a quantitative distance relationship, so the various types of "direction relationship + distance relationship" are ultimately all converted into "absolute direction relationship + quantitative distance relationship" for position calculation.
If the target object M is located 5 meters east of the dotted reference object a (0, 0), the coordinate constraint of M is:
[Formula image not reproduced: coordinate constraint of M]
wherein x and y represent the abscissa and ordinate values of the target M, respectively.
When the direction and distance relationships of two reference objects are used in the description, they are first converted into absolute direction and quantitative distance relationships respectively; the constraint conditions for each reference object are then computed, and the intersection of the two is taken.
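Combining an absolute direction constraint with a quantitative distance constraint reduces to intersecting two membership predicates. A sketch for "target is D metres east of point reference A"; both the 22.5-degree cone half-width (one common reading of the eight-direction model) and k = 0.1 are modelling assumptions.

```python
import math

def east_at_distance(ax, ay, bx, by, D=5.0, k=0.1):
    """True when target B satisfies both constraints: inside the east cone
    (within 22.5 degrees of the +x axis, an assumed sector width) and
    inside the annulus [D - kD, D + kD] around A."""
    dx, dy = bx - ax, by - ay
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return False
    in_annulus = D * (1 - k) <= dist <= D * (1 + k)
    angle = abs(math.degrees(math.atan2(dy, dx)))
    return in_annulus and angle <= 22.5
```

A point due north at the right distance fails the direction test; a point due east but too far fails the distance test.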
(4) Two reference objects + topological relation only
Other spatial relationships exist that fall into neither the direction nor the distance category; such expressions are described using only a topological relationship, e.g. "the ball is between you and the cart", discussed below.
If both reference objects are point-like, denote them A and B, as shown in fig. 6. Taking segment AB as the central axis, the possible positions of the target object lie around AB; to compute them quantitatively, an error parameter d is introduced, i.e. the target object lies in the region within distance d of segment AB.
Because the slope k of line AB differs between cases, the expression for the resulting region also differs, as shown in fig. 6. The coordinate range sets of the target object M are computed separately for the different slopes.
In fig. 6(a), A(x_A, y_A) and B(x_A, y_B). The coordinate range of M is:
[Formula image not reproduced: coordinate range of M]
In fig. 6(b), A(x_A, y_A) and B(x_B, y_A). The coordinate range of M is:
[Formula image not reproduced: coordinate range of M]
In fig. 6(c), A(x_A, y_A) and B(x_B, y_B). The coordinate range of M is:
[Formula image not reproduced: coordinate range of M]
where x_M and y_M are the abscissa and ordinate of the target object M, respectively, and θ represents the deflection angle of the robot with respect to the world coordinate system.
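Whatever the slope of AB, the corridor region of fig. 6 is equivalent to requiring that M lie within distance d of segment AB, so the three per-slope cases (whose inequalities survive only as images in the source) can be sketched with a single point-to-segment distance test.

```python
import math

def near_segment(ax, ay, bx, by, mx, my, d):
    """'M is between A and B': true when M lies within distance d of the
    segment AB, covering vertical, horizontal and general slopes alike."""
    vx, vy = bx - ax, by - ay
    wx, wy = mx - ax, my - ay
    seg_len2 = vx * vx + vy * vy
    if seg_len2 == 0.0:  # A and B coincide: plain radius check
        return math.hypot(wx, wy) <= d
    # Project M onto line AB and clamp the projection to the segment.
    t = max(0.0, min(1.0, (wx * vx + wy * vy) / seg_len2))
    px, py = ax + t * vx, ay + t * vy
    return math.hypot(mx - px, my - py) <= d
```

Points beside the segment within d pass; points past either endpoint by more than d fail.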
If both reference objects are planar, as shown in fig. 7, let the target object be M and the two planar reference objects A and B; rectangles abcd and efgh are the minimum circumscribed rectangles of A and B respectively, and the gray area in the figure is the position region of the target object M.
The location areas of M in different cases are calculated separately.
In fig. 7(a), a(x_a, y_a), b(x_b, y_a), c(x_b, y_c), d(x_a, y_c), e(x_e, y_e), f(x_f, y_f), g(x_g, y_g), h(x_h, y_h). The coordinate range of M is:
[Formula image not reproduced: coordinate range of M]
In fig. 7(b), a(x_a, y_a), b(x_b, y_a), c(x_b, y_c), d(x_a, y_c), e(x_e, y_e), f(x_f, y_f), g(x_g, y_g), h(x_h, y_h). The coordinate range of M is:
[Formula image not reproduced: coordinate range of M]
When the two reference objects are a point-like and a planar reference object respectively, as shown in fig. 8, let the target object be M, the point-like reference object A and the planar reference object B, with rectangle abcd the minimum circumscribed rectangle of B. According to the relative direction of A and B, the situation divides into the two cases of fig. 8, and the gray area in the figure is the range of M.
With a(x_a, y_a), b(x_b, y_a), c(x_b, y_c), d(x_a, y_c) and A(x_A, y_A), the position regions of the target object M in the different cases are computed separately.
In fig. 8(a), the coordinate range of M is:
[Formula image not reproduced: coordinate range of M]
In fig. 8(b), the coordinate range of M is:
[Formula image not reproduced: coordinate range of M]
4. object feature matching
The spatial cognition module matches and synthesizes the name and attribute features of the target object with the computed coordinate constraint range. Assuming the attribute set of object A is N, then:
N = {name, color, size, shape, coordinate}
Considering the robot's path-planning requirement, the point in the coordinate range with the shortest distance is selected as the end point for path planning, movement and related processes, and subsequent operations such as object search are performed within the coordinate range.
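Selecting the shortest-distance point as the path-planning end point can be sketched as follows, under the simplifying assumption that the computed coordinate range has been discretized into a set of candidate points.

```python
import math

def nearest_candidate(robot_xy, candidates):
    """Pick the candidate point of the coordinate range closest to the
    robot, to serve as the path-planning end point."""
    rx, ry = robot_xy
    return min(candidates, key=lambda p: math.hypot(p[0] - rx, p[1] - ry))
```

The chosen point then becomes the goal for the motion planner, and the remaining candidates bound the subsequent object search.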
Fig. 9 is an architecture diagram of a robot spatial recognition system based on natural language interaction according to the present invention, as shown in fig. 9, the system includes: corpus creating unit 910, keyword determining unit 920, feature determining unit 930, and target object coordinate determining unit 940.
A corpus establishing unit 910, configured to establish a spatial information corpus based on natural language expression, including: a target attribute feature description corpus and a target location feature description corpus;
a keyword determining unit 920, configured to convert the target attribute feature description corpus and the target location feature description corpus into a keyword array according to a preset grammar rule; the keywords include: the target object name, the reference object name, the direction relation and the distance relation;
a feature determining unit 930, configured to determine, according to the object-related features included in the keyword array, a category and a spatial position calculation relationship, where the category includes a point object and a planar object, the spatial position calculation relationship includes at least one of the following relationships: the direction relation of the target object relative to the reference object, the distance relation of the target object relative to the reference object and the topological relation of the target object relative to at least two reference objects;
and a target object coordinate determining unit 940, configured to determine the coordinate range of the target object according to the categories of the target object and the reference object and the spatial position calculation relationship, for subsequent target object search.
Optionally, the keyword determining unit 920 determines the category to which an object belongs as follows: if the object to be judged is an independent object and abstracting it as either a point-like or a planar object does not affect the spatial position expression of itself or of other objects, it is regarded as a point-like object; if the area ratio of the object to be judged is greater than a preset value and abstracting it as a point-like object affects the spatial position expression of itself or of other objects, it is regarded as a planar object.
Optionally, according to the directional relationship of the object with respect to the reference object, the object coordinate determining unit solves the coordinate range of the object by: when the reference object is a point object, an eight-direction conical model is adopted to divide the whole two-dimensional space plane into eight parts with directivity, and the interval between every two directions is 45 degrees; setting a point-shaped reference object at the origin of a coordinate system, and for any point-shaped target object in the space, obtaining a coordinate position set of the point-shaped target object relative to the point-shaped reference object in different directions according to a plurality of preset straight line constraints; when the reference object is a planar reference object, determining the planar reference object and a minimum circumscribed rectangle thereof by using a minimum boundary rectangle model, and taking straight lines where four rectangle sides of the minimum circumscribed rectangle are located as boundary lines in all directions; determining a coordinate position set of the point-shaped object relative to the planar reference object in different directions according to the boundary lines in all directions; if two reference objects exist, the coordinate position ranges of the target object are determined respectively only according to different reference object position descriptions, and then the intersection of the two ranges is obtained.
Optionally, according to a distance relationship between the target object and the reference object, the distance relationship includes: quantitative, qualitative, or temporal distances; the object coordinate determination unit solves the coordinate range of the object by the following steps: when the distance relation is a quantitative distance, if the reference object is a point-shaped reference object, the distance between the point-shaped target object and the point-shaped reference object is a quantitative distance and an error distance range area; when the distance relation is a qualitative distance, presetting different distance thresholds for distances of different granularity levels, and if the reference object is a point-shaped reference object, setting the distance of the point-shaped target object from the point-shaped reference object as a qualitative distance range area; when the distance relation is time distance, converting the time distance into quantitative distance, and then determining the coordinate range of the point-like reference object; when the distance relationship between the target object and two reference objects is used for describing the position of the target object, the coordinate ranges of the target object need to be respectively determined according to different reference object distance descriptions, and the coordinate ranges of the target object and the two reference objects are intersected to determine the final coordinate range of the target object.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A robot spatial cognition method based on natural language interaction, characterized by comprising the following steps: establishing a spatial information corpus based on natural language expression, including a target attribute feature description corpus and a target location feature description corpus; converting the target attribute feature description corpus and the target location feature description corpus into a keyword array according to preset grammar rules, the keywords including: target object name, reference object name, direction relationship and distance relationship; determining, according to the object-related features contained in the keyword array, the categories to which the target object and the reference object belong and the spatial position calculation relationship, the categories including point-like objects and planar objects, and the spatial position calculation relationship including at least one of the following: the direction relationship of the target object relative to the reference object, the distance relationship of the target object relative to the reference object, and the topological relationship of the target object relative to at least two reference objects; determining the coordinate range of the target object according to the categories of the target object and the reference object and the spatial position calculation relationship, for subsequent target object search.

2. The method according to claim 1, characterized in that the category to which an object belongs is determined as follows: if the object to be judged is an independent object and abstracting it as either a point-like object or a planar object does not affect the spatial position expression of itself or of other objects, the object is regarded as a point-like object; if the area ratio of the object to be judged is greater than a preset value and abstracting it as a point-like object affects the spatial position expression of itself or of other objects, the object is regarded as a planar object.

3. The method according to claim 2, characterized in that, according to the direction relationship of the target object relative to the reference object, the coordinate range of the target object is solved by the following steps: when the reference object is point-like, an eight-direction cone model is adopted, dividing the whole two-dimensional plane into eight directional parts with an interval of 45 degrees between every two directions; with the point-like reference object at the origin of the coordinate system, for any point-like target object in space, the coordinate position sets of the point-like target object relative to the point-like reference object in the different directions are obtained according to a plurality of preset line constraints; when the reference object is planar, a minimum bounding rectangle model is used to determine the planar reference object and its minimum circumscribed rectangle, the lines on which the four sides of the minimum circumscribed rectangle lie serving as the boundaries in each direction, and the coordinate position sets of the point-like target object relative to the planar reference object in the different directions are determined according to these boundaries; if there are two reference objects, the coordinate position ranges of the target object are determined separately from the two reference object direction descriptions, and the intersection of the two ranges is then taken.

4. The method according to claim 3, characterized in that the distance relationship of the target object relative to the reference object includes quantitative distance, qualitative distance or time distance, and the coordinate range of the target object is solved by the following steps: when the distance relationship is a quantitative distance and the reference object is point-like, the target object lies in the region at the quantitative distance from the point-like reference object within the error distance range; when the distance relationship is a qualitative distance, different distance thresholds are preset for distances of different granularity levels, and if the reference object is point-like, the target object lies in the qualitative distance range region around the point-like reference object; when the distance relationship is a time distance, the time distance is first converted into a quantitative distance, after which the coordinate range is determined; when the distance relationships to two reference objects are used to describe the position of the target object, the coordinate ranges of the target object are determined separately from the two distance descriptions, and their intersection determines the final coordinate range of the target object.

5. The method according to claim 4, characterized in that, according to the distance relationship and the direction relationship of the target object relative to the reference object, the coordinate range of the target object is solved by the following steps: solving the coordinate range of the target object separately under the two constraints of the distance relationship and the direction relationship, then taking the intersection of the two coordinate ranges to determine the final coordinate range of the target object.

6. The method according to any one of claims 1 to 5, characterized in that, according to the topological relationship of the target object relative to at least two reference objects, the coordinate range of the target object is solved by the following steps: if the topological relationship is that the target object is between two reference objects: when both reference objects are point-like, the target object lies within the region around the segment connecting the two reference objects at no more than a preset distance from the segment; when both reference objects are planar, the range of the target object is determined from the sides of the minimum circumscribed rectangles of the two planar reference objects; when the two reference objects are a point-like and a planar reference object respectively, the range of the target object is determined from the coordinates of the point-like reference object and the sides of the minimum circumscribed rectangle of the planar reference object.

7. A robot spatial cognition system based on natural language interaction, characterized by comprising: a corpus establishing unit for establishing a spatial information corpus based on natural language expression, including a target attribute feature description corpus and a target location feature description corpus; a keyword determining unit for converting the target attribute feature description corpus and the target location feature description corpus into a keyword array according to preset grammar rules, the keywords including: target object name, reference object name, direction relationship and distance relationship; a feature judging unit for determining, according to the object-related features contained in the keyword array, the categories to which the target object and the reference object belong and the spatial position calculation relationship, the categories including point-like objects and planar objects, and the spatial position calculation relationship including at least one of the following: the direction relationship of the target object relative to the reference object, the distance relationship of the target object relative to the reference object, and the topological relationship of the target object relative to at least two reference objects; and a target object coordinate determining unit for determining the coordinate range of the target object according to the categories of the target object and the reference object and the spatial position calculation relationship, for subsequent target object search.

8. The system according to claim 7, characterized in that the keyword determining unit determines the category to which an object belongs as follows: if the object to be judged is an independent object and abstracting it as either a point-like object or a planar object does not affect the spatial position expression of itself or of other objects, the object is regarded as a point-like object; if the area ratio of the object to be judged is greater than a preset value and abstracting it as a point-like object affects the spatial position expression of itself or of other objects, the object is regarded as a planar object.

9. The system according to claim 8, characterized in that, according to the direction relationship of the target object relative to the reference object, the target object coordinate determining unit solves the coordinate range of the target object by the following steps: when the reference object is point-like, an eight-direction cone model is adopted, dividing the whole two-dimensional plane into eight directional parts with an interval of 45 degrees between every two directions; with the point-like reference object at the origin of the coordinate system, for any point-like target object in space, the coordinate position sets of the point-like target object relative to the point-like reference object in the different directions are obtained according to a plurality of preset line constraints; when the reference object is planar, a minimum bounding rectangle model is used to determine the planar reference object and its minimum circumscribed rectangle, the lines on which the four sides of the minimum circumscribed rectangle lie serving as the boundaries in each direction, and the coordinate position sets of the point-like target object relative to the planar reference object in the different directions are determined according to these boundaries; if there are two reference objects, the coordinate position ranges of the target object are determined separately from the two reference object direction descriptions, and the intersection of the two ranges is then taken.

10. The system according to claim 9, characterized in that the distance relationship of the target object relative to the reference object includes quantitative distance, qualitative distance or time distance, and the target object coordinate determining unit solves the coordinate range of the target object by the following steps: when the distance relationship is a quantitative distance and the reference object is point-like, the target object lies in the region at the quantitative distance from the point-like reference object within the error distance range; when the distance relationship is a qualitative distance, different distance thresholds are preset for distances of different granularity levels, and if the reference object is point-like, the target object lies in the qualitative distance range region around the point-like reference object; when the distance relationship is a time distance, the time distance is first converted into a quantitative distance, after which the coordinate range is determined; when the distance relationships to two reference objects are used to describe the position of the target object, the coordinate ranges of the target object are determined separately from the two distance descriptions, and their intersection determines the final coordinate range of the target object.
CN201911207208.XA 2019-11-29 2019-11-29 Robot space cognition method and system based on natural language interaction Active CN110990594B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911207208.XA CN110990594B (en) 2019-11-29 2019-11-29 Robot space cognition method and system based on natural language interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911207208.XA CN110990594B (en) 2019-11-29 2019-11-29 Robot space cognition method and system based on natural language interaction

Publications (2)

Publication Number Publication Date
CN110990594A true CN110990594A (en) 2020-04-10
CN110990594B CN110990594B (en) 2023-07-04

Family

ID=70088801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911207208.XA Active CN110990594B (en) 2019-11-29 2019-11-29 Robot space cognition method and system based on natural language interaction

Country Status (1)

Country Link
CN (1) CN110990594B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114064940A (en) * 2021-11-12 2022-02-18 南京师范大学 Fuzzy position description positioning method based on super-assignment semantics
CN114139069A (en) * 2021-11-05 2022-03-04 深圳职业技术学院 Indoor positioning method and system based on voice interaction and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090192647A1 (en) * 2008-01-29 2009-07-30 Manabu Nishiyama Object search apparatus and method
CN102915039A (en) * 2012-11-09 2013-02-06 河海大学常州校区 Multi-robot combined target searching method of animal-simulated space cognition
CN108377467A (en) * 2016-11-21 2018-08-07 深圳光启合众科技有限公司 Indoor positioning and interactive approach, the device and system of target object
CN108680163A (en) * 2018-04-25 2018-10-19 武汉理工大学 A kind of unmanned boat route search system and method based on topological map
WO2018191970A1 (en) * 2017-04-21 2018-10-25 深圳前海达闼云端智能科技有限公司 Robot control method, robot apparatus and robot device
CN109614550A (en) * 2018-12-11 2019-04-12 平安科技(深圳)有限公司 Public opinion monitoring method, device, computer equipment and storage medium
CN109670262A (en) * 2018-12-28 2019-04-23 江苏艾佳家居用品有限公司 A kind of area of computer aided domestic layout optimization method and system
US20190122552A1 (en) * 2017-06-19 2019-04-25 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for displaying a movement of a vehicle on a map
CN110019863A (en) * 2017-12-26 2019-07-16 深圳市优必选科技有限公司 Object searching method and device, terminal equipment and storage medium
CN110110823A (en) * 2019-04-25 2019-08-09 浙江工业大学之江学院 Object based on RFID and image recognition assists in identifying system and method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
R. Moratz: "Intuitive Linguistic Joint Object Reference in Human-Robot Interaction" *
Li Min et al.: "Scene semantic recognition method based on a four-layer tree-structured semantic model" *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114139069A (en) * 2021-11-05 2022-03-04 深圳职业技术学院 Indoor positioning method and system based on voice interaction and electronic equipment
CN114064940A (en) * 2021-11-12 2022-02-18 南京师范大学 Fuzzy position description positioning method based on super-assignment semantics
CN114064940B (en) * 2021-11-12 2023-07-11 南京师范大学 Fuzzy location description and localization method based on superassignment semantics

Also Published As

Publication number Publication date
CN110990594B (en) 2023-07-04

Similar Documents

Publication Publication Date Title
Yao et al. Fuzzy critical path method based on signed distance ranking of fuzzy numbers
Yongda et al. Research on multimodal human-robot interaction based on speech and gesture
CN111098301A (en) A control method of task-based robot based on scene knowledge graph
Liu et al. Target localization in local dense mapping using RGBD SLAM and object detection
WO2020211605A1 (en) Grid map fusion method based on maximum common subgraph
CN114372173A (en) Natural language target tracking method based on Transformer architecture
CN112099627B (en) A virtual reality real-time interactive platform for urban design based on artificial intelligence
Ioannidis et al. A path planning method based on cellular automata for cooperative robots
CN110990594A (en) Robot space cognition method and system based on natural language interaction
US20040186604A1 (en) Analytical shell-model producing apparatus
Zhu et al. Tri-HGNN: Learning triple policies fused hierarchical graph neural networks for pedestrian trajectory prediction
Zhang et al. Reinforcement learning–based tool orientation optimization for five-axis machining
CN117007046A (en) Path planning method and system for power inspection robot
CN116342899A (en) Target detection positioning method, device, equipment and storage medium
CN118674752B (en) Real-time target tracking method based on twin network and embedded device
Li et al. Inferring user intent to interact with a public service robot using bimodal information analysis
CN119077730A (en) Autonomous behavior control method and system of legged robot based on multimodal large model
CN118548870A (en) A robot positioning and mapping method based on SLAM and infrared imaging technology
Zhang et al. Trajectory Planning for Autonomous Driving in Unstructured Scenarios Based on Graph Neural Network and Numerical Optimization
CN112596659B (en) A painting method and device based on intelligent voice and image processing
KR20240085303A (en) Control Reinforcement Learning Method and System for Digital Twin-Based Logistics Transportation Robots
CN111445125B (en) Agricultural robot computing task cooperation method, system, medium and equipment
CN116453070B (en) Intersection face generation method and related device
Shen et al. Robot Collaborative Interactive Artistic Design for Digital Art Using Embedded Systems
CN117237451B (en) A 6D pose estimation method for industrial parts based on contour reconstruction and geometry guidance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant