CN111242994B - Semantic map construction method, semantic map construction device, robot and storage medium - Google Patents


Info

Publication number
CN111242994B
CN111242994B (application CN201911424096.3A)
Authority
CN
China
Prior art keywords
semantic
information
map
space
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911424096.3A
Other languages
Chinese (zh)
Other versions
CN111242994A (en)
Inventor
顾震江
孙其民
刘大志
罗沛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uditech Co Ltd
Original Assignee
Uditech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uditech Co Ltd filed Critical Uditech Co Ltd
Priority to CN201911424096.3A priority Critical patent/CN111242994B/en
Publication of CN111242994A publication Critical patent/CN111242994A/en
Application granted granted Critical
Publication of CN111242994B publication Critical patent/CN111242994B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application belongs to the technical field of service robot map construction, and relates to a semantic map construction method, a semantic map construction device, a robot and a storage medium. The semantic map construction method comprises the following steps: receiving traveling instruction information sent by a user, and controlling the robot to travel according to the traveling instruction information; acquiring images of a target area to construct a space map; collecting, identifying, and analyzing the spatial semantic information of semantic objects in the target area; acquiring time information indicating when the spatial semantic information was collected, and determining the temporally nearest key frame according to the time information; labeling the spatial semantic information onto the space map according to the key frame; and detecting whether mapping-completion information is received: if so, forming a semantic navigation map; if not, continuing to control the robot to travel and to collect images of the target area and spatial semantic information so as to continue constructing the space map. The method and device can solve the problems of low efficiency and poor accuracy of traditional manual labeling of navigation maps.

Description

Semantic map construction method, semantic map construction device, robot and storage medium
Technical Field
The application belongs to the technical field of service robot map construction, and particularly relates to a semantic map construction method, a semantic map construction device, a robot and a storage medium.
Background
With the development of robot technology and the continued deepening of artificial intelligence research, service robots have gradually taken on indispensable roles in human life, have increasingly intelligent functions, and are widely applied in fields such as catering and cargo transportation. In some practical application scenarios, a robot can reach a designated position on a map to provide services based on its autonomous positioning and navigation function, but such services cannot do without a navigation map. The navigation map therefore plays a vital role in the robot's global positioning and navigation. To ensure that robot navigation proceeds smoothly (with the starting point and end point arbitrarily designated), a relatively complete map needs to be constructed. At the same time, map semantic information is indispensable for navigation planning and robot service management.
However, existing robot navigation map construction requires substantial manual labeling and semantic annotation, such as marking service points like guest room positions and guest-greeting positions on the map. Since hotels generally have many guest rooms, manual labeling entails a large workload, making the labeling slow and prone to deviations in the labeled position information.
Disclosure of Invention
The embodiments of the present application provide a semantic map construction method, a semantic map construction device, a robot and a storage medium, which can solve the problems of low efficiency and poor accuracy of traditionally labeling navigation maps by hand.
In a first aspect, an embodiment of the present application provides a semantic map construction method, applied to a robot, where the method includes:
receiving traveling instruction information sent by a user, and controlling the robot to travel according to the traveling instruction information;
collecting space information of a target area and constructing a space map;
collecting, identifying, and analyzing the spatial semantic information of the semantic objects in the target area;
acquiring time information indicating when the spatial semantic information was collected, and determining the temporally nearest key frame according to the time information;
labeling the space semantic information to the space map according to the key frame;
and detecting whether mapping-completion information is received; if so, forming a semantic navigation map; if not, continuing to control the robot to travel and to collect images of the target area and spatial semantic information so as to continue constructing the space map.
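The steps of the first aspect can be sketched as a small offline pipeline (illustrative Python only; the function names, data shapes, and the time-matching rule are assumptions, not taken from the patent):

```python
def nearest_keyframe(keyframes, t):
    # Pick the key frame whose timestamp is closest to collection time t.
    return min(keyframes, key=lambda kf: abs(kf[0] - t))

def build_semantic_map(scans, detections, keyframes):
    """Offline sketch of the claimed pipeline over pre-recorded data.

    scans:      lidar snapshots (the spatial information of the target area)
    detections: (time, text, (dx, dy)) semantic observations
    keyframes:  (timestamp, (x, y)) poses recorded while mapping
    """
    space_map = list(scans)                          # build the space map
    annotations = []
    for t, text, (dx, dy) in detections:
        _, (x, y) = nearest_keyframe(keyframes, t)   # temporally nearest key frame
        annotations.append((text, (x + dx, y + dy))) # label onto the space map
    return space_map, annotations                    # semantic navigation map
```

The loop mirrors steps one to five; the mapping-completion check of the final step corresponds here to simply exhausting the recorded data.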
In a second aspect, an embodiment of the present application provides a semantic map building apparatus, the apparatus including:
the traveling control module is used for receiving traveling instruction information sent by a user and controlling the robot to travel according to the traveling instruction information;
the map construction module is used for acquiring the space information of the target area and constructing a space map;
the semantic analysis module is used for collecting, identifying, and analyzing the spatial semantic information of the semantic objects in the target area;
the determining module is used for acquiring time information indicating when the spatial semantic information was collected and determining the temporally nearest key frame according to the time information;
the labeling module is used for labeling the space semantic information to the space map according to the key frame;
and the map forming module is used for detecting whether mapping-completion information is received; if so, forming a semantic navigation map; if not, continuing to control the robot to travel and to collect images of the target area and spatial semantic information so as to continue constructing the space map.
In a third aspect, embodiments of the present application provide a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the method when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program that when executed by a processor implements the method.
In a fifth aspect, embodiments of the present application provide a computer program product which, when run on a terminal device, causes the terminal device to perform the method of any one of the first aspects.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: the method and device can solve the problems of low efficiency and poor accuracy of traditional manual labeling of navigation maps, thereby improving the on-site deployment and commissioning efficiency of service robots, reducing the workload of engineering personnel, and reducing service operation costs.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a semantic map building method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a semantic map building apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural view of a robot according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a robot and a working module according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The semantic map construction method provided by the embodiments of the present application is applicable to service robots. In practical applications, a robot's autonomous navigation cannot do without a navigation map. The navigation map provides the basis for the service robot's path planning and obstacle avoidance, which is basically sufficient for solving the movement problems of service robots such as floor sweepers. However, for the service management of robots such as hotel service robots and hospital service robots, necessary semantic information is required on the map for convenience. Rapid construction of a semantic map is therefore very important for service robots. Yet existing service robot navigation map construction requires substantial manual labeling, such as marking service points like guest room positions and guest-greeting positions on the map; since hotels generally have many guest rooms, manual labeling entails a large workload, making the labeling slow and prone to deviations in the labeled position information.
Therefore, the semantic map construction method can solve the problems of low efficiency and poor accuracy of the traditional manual labeling navigation map.
The semantic map construction method provided by the application is described in an exemplary manner in connection with the specific embodiments.
Referring to fig. 1, a schematic flowchart of a semantic map construction method is provided in an embodiment of the present application. The execution subject of the semantic map construction method in this embodiment is a robot. The robot comprises a survey auxiliary module and a map construction module: the survey auxiliary module acquires visual images and depth images using an acquisition unit (such as a depth camera), and the map construction module acquires data using a distance sensor (such as a lidar) to construct a space map. The method comprises the following steps:
s101: and receiving the traveling instruction information sent by the user, and controlling the robot to travel according to the traveling instruction information.
In this embodiment, when the robot scans and constructs a spatial map of a target area, travel instruction information sent by a user from a control terminal is received in advance, so that the robot is controlled to acquire spatial information of the target area according to the travel instruction information. The travel instruction information may include a voice travel instruction and a wireless remote control travel instruction.
S102: and collecting the space information of the target area and constructing a space map.
In this embodiment, the robot can precisely measure the distances to objects in the target area using a lidar and construct a space map from the lidar data. The space map may be a 2D planar map or a 3D stereoscopic map. The spatial information may be a distribution image of the objects within the target area.
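As a rough illustration of how lidar range data can yield a 2D planar map, the following sketch marks each lidar return as an occupied grid cell (hypothetical code; the patent does not specify a map representation, grid size, or resolution):

```python
import math

def lidar_to_grid(ranges, angle_min, angle_step, pose, resolution=0.05, size=100):
    """Mark lidar returns as occupied cells in a 2D occupancy grid.

    pose = (x, y, heading) of the robot in map coordinates; the grid origin
    coincides with the map origin, and `size` is the number of cells per side.
    """
    grid = [[0] * size for _ in range(size)]
    x, y, th = pose
    for i, r in enumerate(ranges):
        if r is None or not math.isfinite(r):
            continue                          # no return for this beam
        a = th + angle_min + i * angle_step   # beam angle in map frame
        gx = int((x + r * math.cos(a)) / resolution)
        gy = int((y + r * math.sin(a)) / resolution)
        if 0 <= gx < size and 0 <= gy < size:
            grid[gy][gx] = 1                  # occupied cell
    return grid
```

A production system would additionally trace free space along each beam and fuse many scans probabilistically; this sketch only shows the geometric step.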
S103: and collecting and identifying and analyzing the space semantic information of the semantic object in the target area.
In this embodiment, the robot acquires the visual image and the depth image of the target area through the acquisition unit of the survey auxiliary module, and simultaneously identifies the spatial semantic information of the semantic objects in the target area. The spatial semantic information comprises one or more of text information and spatial distance information, wherein the spatial distance information comprises a horizontal distance and a vertical distance.
S104: and acquiring time information when the spatial semantic information is acquired, and determining a key frame with the nearest time according to the time information.
In this embodiment, the map construction module of the robot receives the spatial semantic information sent by the survey auxiliary module, determines the time at which the spatial semantic information was collected, identifies several key frames collected before that time, computes the difference between that time and each key frame's acquisition time, and selects the temporally nearest key frame. The image information collected by the map construction module is thereby associated with the spatial semantic information, so that the map can be used for navigation.
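One way to realize this time-difference matching, under the assumption that key frame timestamps are kept in sorted order (an implementation detail the patent does not fix), is a binary search followed by differencing only the two neighbouring candidates:

```python
import bisect

def nearest_keyframe_time(kf_times, t):
    """Return the key frame timestamp closest to collection time t.

    kf_times must be a non-empty ascending list. Only the two neighbours
    of the insertion point need to be differenced, avoiding a scan over
    every key frame.
    """
    i = bisect.bisect_left(kf_times, t)
    candidates = kf_times[max(0, i - 1):i + 1] or kf_times[-1:]
    return min(candidates, key=lambda kt: abs(kt - t))
```

The same result could be obtained with a linear minimum over all key frames; the sorted variant simply scales better as the map grows.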
S105: and labeling the space semantic information to the space map according to the key frame.
In this embodiment, the coordinate position of the key frame in the space map is determined, and the spatial semantic information is then marked on the map using the spatial distance information contained in the identified spatial semantic information.
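A minimal sketch of this marking step, under the assumption that the horizontal distance is measured along the robot's heading at the key frame and the vertical distance is a height above the floor (the patent does not fix these axis conventions):

```python
import math

def label_position(kf_pose, horizontal, vertical=0.0):
    # kf_pose = (x, y, heading) of the key frame in the space map.
    # Returns the (x, y, z) map position at which to attach the semantic
    # label; for a 2D planar map the z component is simply dropped.
    x, y, heading = kf_pose
    return (x + horizontal * math.cos(heading),
            y + horizontal * math.sin(heading),
            vertical)
```

This matches the later 2D-versus-3D cases: a planar map uses only the first two components, a stereoscopic map uses all three.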
S106: and detecting whether information for finishing the drawing is received, if so, forming a navigation map with semantics, if not, continuously controlling the robot to travel, and collecting images of the target area and space semantic information to continuously construct the space map.
In this embodiment, the mapping-completion information may be instruction information to stop mapping sent by the user through a terminal device, trigger information generated when the robot returns to the starting point along a set planned path, or the robot's confirmation, via the acquisition unit of the survey auxiliary module, that it has traveled to the edge of the target area, indicating that mapping of the target area is complete.
In an embodiment, to facilitate the acquisition of spatial semantic information, the survey auxiliary module may be set to start scene recognition. For example, when the acquisition unit recognizes a hotel front desk or a guest room entrance, it confirms that a target scene has been recognized, then begins to identify the semantic information in that target scene and marks the analyzed semantic information onto the space map, thereby avoiding erroneous or unnecessary navigation indications during navigation.
By way of example, another embodiment of the present application provides a semantic map construction method, which mainly relates to a process of performing scene recognition on the target area. The method comprises the following steps:
and carrying out target recognition on the target area through a deep learning neural network so as to confirm whether a target scene exists in the target area.
And if the target scene exists, responding to and identifying and acquiring the space semantic information of the semantic object in the target scene.
In this embodiment, scene recognition may be achieved by training a deep learning neural network. The target scene may be a hotel front desk, a guest room door, a passenger elevator, or the like. When scene recognition is performed on the target area, if semantic extraction is designated for a particular semantic object, the extraction is started only when the deep learning neural network identifies that the target scene containing that semantic object exists in the target area. For example, if semantic extraction is designated only for room entrance signs, then only room entrance scene recognition is started, and no semantic extraction is performed when no room entrance is found. This further simplifies semantic map construction and improves engineering efficiency.
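The scene gate described above can be sketched as follows; the recognizer and extractor callables stand in for trained deep-learning networks, and everything here is illustrative rather than the patent's implementation:

```python
def extract_semantics(frames, target_scenes, recognize_scene, extract):
    """Run semantic extraction only on frames whose recognized scene is one
    of the designated target scenes (e.g. guest room entrances); all other
    frames are skipped entirely, so the extractor never runs on them.
    """
    results = []
    for frame in frames:
        if recognize_scene(frame) in target_scenes:  # gate: scene first
            results.append(extract(frame))           # then semantics
    return results
```

With a recognizer that labels frames "door" or "wall" and an extractor that reads the door sign, non-door frames never reach the extractor, which is exactly the workload saving the paragraph describes.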
In an embodiment, when a semantic object is identified, it is tracked as the robot moves; when the distance between the central point of the semantic object and the acquisition unit reaches its minimum, the horizontal and vertical distances of the semantic object are acquired and marked on the spatial map, so that the robot can quickly confirm the semantic object during subsequent navigation and provide services accordingly.
The embodiment of the application provides a semantic map construction method, which mainly relates to a process of calculating the shortest distance between a semantic object and a collection unit of a robot. The method comprises the following steps:
the step of collecting and identifying and analyzing the space semantic information of the semantic object in the target area further comprises the following steps:
when the semantic object is identified, N distances between the semantic object and the robot are sequentially detected and obtained, wherein N is more than or equal to 0, and N is an integer.
In this embodiment, the acquisition unit on the robot measures the distance to the semantic object so as to determine the closest distance between the acquisition unit and the semantic object.
Whether the (N-1)-th distance is greater than the N-th distance is determined from the N distances, so as to find the point at which the distance between the acquisition unit and the semantic object stops decreasing and just begins to increase.
If the (N-1)-th distance is smaller than the N-th distance, the time point at which the (N-1)-th distance was acquired is determined as the moment when the semantic object is closest to the robot, and the horizontal distance and the vertical distance between the semantic object and the robot are calculated.
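The turning-point test in these steps amounts to scanning the distance sequence for the first increase (an illustrative sketch; the index handling and the fallback for a never-increasing sequence are assumptions):

```python
def closest_approach(distances):
    """Return the index of the sample taken when the semantic object was
    closest: the last sample before the distance starts increasing again.
    If the distance never increases, the final sample is taken as closest.
    """
    for n in range(1, len(distances)):
        if distances[n] > distances[n - 1]:  # distance just began to grow
            return n - 1                     # previous sample was the minimum
    return len(distances) - 1
```

At the returned index, the horizontal and vertical distances would then be computed and handed to the labeling step.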
Optionally, after the calculating, the method further comprises: identifying and analyzing the semantic object to obtain text information; forming the text information, the horizontal distance, and the vertical distance into spatial semantic information; and sending, by the survey auxiliary module, the spatial semantic information to the map construction module to be marked on the space map.
The three-dimensional transformation from the local coordinate system of each acquisition unit to the robot coordinate system is calculated by measuring the spatial position relationship between each acquisition unit of the survey auxiliary module and the projection center point of the robot on the ground. During map construction, this transformation is used to transform target coordinates (such as those of a door number) measured by the acquisition unit into the robot coordinate system.
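For the planar case, the transformation described here reduces to a rigid 2D transform (a sketch only: the mounting pose values are hypothetical and would in practice be measured on the real robot, and the full version would be three-dimensional):

```python
import math

def sensor_to_robot(point, mount):
    """Transform a point from an acquisition unit's local frame into the
    robot frame, given the unit's mounting pose (dx, dy, yaw) relative to
    the robot's ground projection point.
    """
    px, py = point
    dx, dy, yaw = mount
    # Rotate by the mounting yaw, then translate by the mounting offset.
    return (dx + px * math.cos(yaw) - py * math.sin(yaw),
            dy + px * math.sin(yaw) + py * math.cos(yaw))
```

Chaining this with the robot's pose in the map then places a measured target, such as a door number, directly into map coordinates.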
In an embodiment, when the space map is a plane map, the text information and the horizontal distance information of the semantic object are marked to the plane map according to the key frame.
In an embodiment, when the spatial map is a stereoscopic map, text information, horizontal distance information and vertical distance information of the semantic object are marked to the stereoscopic map according to the key frame.
In an embodiment, as shown in fig. 4, the survey auxiliary module is used during on-site map construction. In use, it can be fixed to the outer side of the robot; it maintains a data communication connection with the robot's map construction module, and the two cooperate to complete the construction of the spatial semantic map.
The survey auxiliary module is detachable and, after construction is completed, can be removed and installed on another robot.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the semantic map construction method described in the above embodiments, fig. 2 shows a block diagram of the semantic map construction apparatus provided in the embodiment of the present application, and for convenience of explanation, only the portions relevant to the embodiment of the present application are shown.
Referring to fig. 2, the apparatus includes: the system comprises a travel control module 100, a map construction module 200, a semantic analysis module 300, a determination module 400, a labeling module 500 and a map formation module 600.
The travel control module is used for receiving travel instruction information sent by a user and controlling the robot to travel according to the travel instruction information.
The map construction module is used for acquiring the space information of the target area and constructing a space map.
The semantic analysis module is used for collecting, identifying, and analyzing the spatial semantic information of the semantic objects in the target area.
The determining module is used for acquiring time information indicating when the spatial semantic information was collected and determining the temporally nearest key frame according to the time information.
And the labeling module is used for labeling the space semantic information to the space map according to the key frame.
And the map forming module is used for detecting whether mapping-completion information is received; if so, forming a semantic navigation map; if not, continuing to control the robot to travel and to collect images of the target area and spatial semantic information so as to continue constructing the space map.
Fig. 3 is a schematic structural diagram of a robot according to an embodiment of the present disclosure. As shown in fig. 3, the robot 3 of this embodiment includes: at least one processor 30 (only one processor is shown in fig. 3), a memory 31 and a computer program 32 stored in the memory 31 and executable on the at least one processor 30, the processor 30 implementing the steps in any of the various method embodiments described above when executing the computer program 32.
The robot 3 may include, but is not limited to, a processor 30, a memory 31. It will be appreciated by those skilled in the art that fig. 3 is merely an example of the robot 3 and is not meant to be limiting of the robot 3, and may include more or fewer components than shown, or may combine certain components, or may include different components, such as input-output devices, network access devices, etc.
The processor 30 may be a central processing unit (Central Processing Unit, CPU), the processor 30 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 31 may in some embodiments be an internal storage unit of the robot 3, such as a hard disk or a memory of the robot 3. The memory 31 may in other embodiments also be an external storage device of the robot 3, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like, which are provided on the robot 3. Further, the memory 31 may also include both an internal memory unit and an external memory device of the robot 3. The memory 31 is used for storing an operating system, application programs, boot loader (BootLoader), data, other programs etc., such as program codes of the computer program etc. The memory 31 may also be used for temporarily storing data that has been output or is to be output.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps that may implement the various method embodiments described above.
Embodiments of the present application provide a computer program product enabling a robot to carry out the steps of the various method embodiments described above when the computer program product is run on the robot.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present application implements all or part of the flow of the methods of the above embodiments, which may be completed by a computer program instructing related hardware. The computer program may be stored in a computer readable storage medium and, when executed by a processor, may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunications signals.
Each of the foregoing embodiments is described with its own emphasis; for parts not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or as a combination of computer software and electronic hardware. Whether such functionality is implemented in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be regarded as going beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network-device embodiments described above are merely illustrative: the division into modules or units is only a logical functional division, and there may be other divisions in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A semantic map construction method applied to a robot, the method comprising:
receiving traveling instruction information sent by a user, and controlling the robot to travel according to the traveling instruction information;
collecting space information of a target area and constructing a space map;
collecting, identifying, and parsing the spatial semantic information of the semantic objects in the target area;
acquiring time information indicating when the spatial semantic information was collected, and determining the temporally nearest key frame according to the time information;
labeling the spatial semantic information onto the spatial map according to the key frame;
detecting whether map-completion information has been received; if so, forming a navigation map with semantics; if not, continuing to control the robot to travel and collecting images of the target area and spatial semantic information to continue constructing the spatial map;
wherein, when collecting, identifying, and parsing the spatial semantic information of the semantic objects in the target area, the method further comprises:
when the semantic object is identified, sequentially detecting and acquiring N distances between the semantic object and the robot, where N is greater than or equal to 0 and N is an integer;
determining, from the N distances, whether the (N-1)-th distance is greater than the N-th distance;
and, if the (N-1)-th distance is smaller than the N-th distance, determining the time point at which the (N-1)-th distance was acquired as the moment at which the semantic object is closest to the robot.
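The closest-moment detection in claim 1 amounts to scanning a time-ordered sequence of distance measurements for the first point at which the distance stops decreasing and starts increasing. The following is a minimal illustrative sketch, not the patent's implementation; names such as `DistanceSample` and `closest_moment` are the editor's own:

```python
from dataclasses import dataclass

@dataclass
class DistanceSample:
    timestamp: float  # time the distance was measured (seconds)
    distance: float   # measured robot-to-object distance (meters)

def closest_moment(samples):
    """Return the timestamp at which the tracked semantic object was
    closest to the robot: the first sample whose successor is farther
    away, i.e. where the distance sequence stops decreasing."""
    for prev, curr in zip(samples, samples[1:]):
        if prev.distance < curr.distance:
            return prev.timestamp
    # The distance never started increasing: the last sample seen is the closest.
    return samples[-1].timestamp
```

For a robot driving past a sign, the measured distance typically decreases and then increases; the turning point is taken as the moment the object and robot were closest.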
2. The semantic map construction method according to claim 1, wherein the collecting, identifying, and parsing of the spatial semantic information of the semantic objects in the target area further comprises:
performing target recognition on the target area through a deep-learning neural network to confirm whether a target scene exists in the target area;
and, if the target scene exists, identifying and acquiring in response the spatial semantic information of the semantic objects in the target scene.
3. The semantic map construction method according to claim 1, wherein the collecting, identifying, and parsing of the spatial semantic information of the semantic objects in the target area further comprises:
calculating the horizontal distance and the vertical distance between the semantic object and the robot.
4. The semantic map construction method according to claim 3, wherein, after calculating the horizontal distance and the vertical distance between the semantic object and the robot, the method comprises:
identifying and parsing the semantic object to obtain text information;
combining the text information, the horizontal distance, and the vertical distance into spatial semantic information;
and spatially transforming the spatial semantic information and labeling it onto the spatial map.
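Claim 4's step of bundling the recognized text with the measured distances and spatially transforming the result into map coordinates can be sketched as below. This assumes the object lies straight ahead along the robot's current heading, which is a simplification; all names here (`SemanticLabel`, `to_map_frame`) are hypothetical, not the patent's API:

```python
import math
from dataclasses import dataclass

@dataclass
class SemanticLabel:
    text: str  # recognized text of the semantic object (e.g. a room sign)
    x: float   # map-frame x coordinate
    y: float   # map-frame y coordinate
    z: float   # height above the ground plane (from the vertical distance)

def to_map_frame(text, horizontal, vertical, robot_x, robot_y, robot_yaw):
    """Combine recognized text with the horizontal/vertical distances and
    transform the robot-frame measurement into map-frame coordinates,
    assuming the object lies along the robot's heading."""
    return SemanticLabel(
        text=text,
        x=robot_x + horizontal * math.cos(robot_yaw),
        y=robot_y + horizontal * math.sin(robot_yaw),
        z=vertical,
    )
```

A real system would also fold in the bearing to the object within the camera image; the sketch only shows the coordinate transform that moves a robot-relative measurement into the shared map frame.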
5. The semantic map construction method according to claim 1, wherein the spatial semantic information includes one or more of text information and spatial distance information, and the spatial distance information includes a horizontal distance and a vertical distance.
6. The semantic map construction method according to claim 5, wherein, when the spatial map is a planar map, the text information and the horizontal distance information of the semantic object are labeled onto the planar map according to the key frame.
7. The semantic map construction method according to claim 5, wherein, when the spatial map is a stereoscopic map, the text information, the horizontal distance information, and the vertical distance information of the semantic object are labeled onto the stereoscopic map according to the key frame.
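Claims 6 and 7 differ only in which distance components are attached to the map entry of the nearest key frame. A sketch of that branching follows; the dictionary-based map representation and the function name `label_map` are illustrative assumptions, not the patent's data structures:

```python
def label_map(space_map, key_frame_id, text, horizontal, vertical=None):
    """Attach a semantic annotation to the entry of the nearest key frame.
    A planar (2D) map stores text plus horizontal distance only; a
    stereoscopic (3D) map additionally stores the vertical distance."""
    entry = {"text": text, "horizontal": horizontal}
    if space_map["type"] == "stereo":
        entry["vertical"] = vertical
    space_map["annotations"].setdefault(key_frame_id, []).append(entry)
    return space_map
```

Keying annotations on the key-frame identifier keeps each label anchored to the pose estimate that was current when the object was observed, which is what claim 1's "temporally nearest key frame" step provides.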
8. A semantic map construction apparatus, the apparatus comprising:
the traveling control module is used for receiving traveling instruction information sent by a user and controlling the robot to travel according to the traveling instruction information;
the map construction module is used for acquiring the space information of the target area and constructing a space map;
the semantic analysis module is used for collecting, identifying, and parsing the spatial semantic information of the semantic objects in the target area;
the determining module is used for acquiring time information indicating when the spatial semantic information was collected and determining the temporally nearest key frame according to the time information;
the labeling module is used for labeling the spatial semantic information onto the spatial map according to the key frame;
the map forming module is used for detecting whether map-completion information has been received; if so, forming a navigation map with semantics; if not, continuing to control the robot to travel and collecting images of the target area and spatial semantic information to continue constructing the spatial map;
the determining module is further used for, when the semantic object is identified, sequentially detecting and acquiring N distances between the semantic object and the robot, where N is greater than or equal to 0 and N is an integer; determining, from the N distances, whether the (N-1)-th distance is greater than the N-th distance; and, if the (N-1)-th distance is smaller than the N-th distance, determining the time point at which the (N-1)-th distance was acquired as the moment at which the semantic object is closest to the robot.
9. A robot comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN201911424096.3A 2019-12-31 2019-12-31 Semantic map construction method, semantic map construction device, robot and storage medium Active CN111242994B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911424096.3A CN111242994B (en) 2019-12-31 2019-12-31 Semantic map construction method, semantic map construction device, robot and storage medium

Publications (2)

Publication Number Publication Date
CN111242994A CN111242994A (en) 2020-06-05
CN111242994B true CN111242994B (en) 2024-01-09

Family

ID=70879597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911424096.3A Active CN111242994B (en) 2019-12-31 2019-12-31 Semantic map construction method, semantic map construction device, robot and storage medium

Country Status (1)

Country Link
CN (1) CN111242994B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113806455B (en) * 2020-06-12 2024-03-29 未岚大陆(北京)科技有限公司 Map construction method, device and storage medium
CN111743463A (en) * 2020-06-18 2020-10-09 小狗电器互联网科技(北京)股份有限公司 Cleaning method and device for target object, readable medium and electronic equipment
CN111679688A (en) * 2020-06-18 2020-09-18 小狗电器互联网科技(北京)股份有限公司 Charging method and device for self-walking robot, readable medium and electronic equipment
CN112632211A (en) * 2020-12-30 2021-04-09 上海思岚科技有限公司 Semantic information processing method and equipment for mobile robot
CN112883132B (en) * 2021-01-15 2024-04-30 北京小米移动软件有限公司 Semantic map generation method, semantic map generation device and electronic equipment
CN113469000A (en) * 2021-06-23 2021-10-01 追觅创新科技(苏州)有限公司 Regional map processing method and device, storage medium and electronic device
CN113552879A (en) * 2021-06-30 2021-10-26 北京百度网讯科技有限公司 Control method and device of self-moving equipment, electronic equipment and storage medium
CN115600156B (en) * 2022-11-14 2023-03-28 苏州魔视智能科技有限公司 Semantic map fusion method, device, equipment and medium based on minimum tree

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780735A (en) * 2016-12-29 2017-05-31 深圳先进技术研究院 A kind of semantic map constructing method, device and a kind of robot
CN109272554A (en) * 2018-09-18 2019-01-25 北京云迹科技有限公司 A kind of method and system of the coordinate system positioning for identifying target and semantic map structuring
CN110243370A (en) * 2019-05-16 2019-09-17 西安理工大学 A kind of three-dimensional semantic map constructing method of the indoor environment based on deep learning
CN110298873A (en) * 2019-07-05 2019-10-01 青岛中科智保科技有限公司 Construction method, construction device, robot and the readable storage medium storing program for executing of three-dimensional map

Similar Documents

Publication Publication Date Title
CN111242994B (en) Semantic map construction method, semantic map construction device, robot and storage medium
EP3505869B1 (en) Method, apparatus, and computer readable storage medium for updating electronic map
US11204247B2 (en) Method for updating a map and mobile robot
US11340610B2 (en) Autonomous target following method and device
Xu et al. An occupancy grid mapping enhanced visual SLAM for real-time locating applications in indoor GPS-denied environments
EP3283843B1 (en) Generating 3-dimensional maps of a scene using passive and active measurements
US6690451B1 (en) Locating object using stereo vision
US20190026920A1 (en) Method, apparatus and terminal device for constructing map
CN110497901A (en) A kind of parking position automatic search method and system based on robot VSLAM technology
CN112161618B (en) Storage robot positioning and map construction method, robot and storage medium
CN102914303A (en) Navigation information acquisition method and intelligent space system with multiple mobile robots
CN109668563B (en) Indoor-based track processing method and device
CN103680291A (en) Method for realizing simultaneous locating and mapping based on ceiling vision
CN110751336B (en) Obstacle avoidance method and obstacle avoidance device of unmanned carrier and unmanned carrier
US20230358546A1 (en) Map matching trajectories
CN110717918A (en) Pedestrian detection method and device
Tomažič et al. An automated indoor localization system for online bluetooth signal strength modeling using visual-inertial slam
CN115420275A (en) Loop path prediction method and device, nonvolatile storage medium and processor
CN109190486B (en) Blind guiding control method and device
Wu et al. Indoor surveillance video based feature recognition for pedestrian dead reckoning
CN115273015A (en) Prediction method and device, intelligent driving system and vehicle
US20220004777A1 (en) Information processing apparatus, information processing system, information processing method, and program
KR101934297B1 (en) METHOD FOR DEVELOPMENT OF INTERSECTION RECOGNITION USING LINE EXTRACTION BY 3D LiDAR
CN114526720B (en) Positioning processing method, device, equipment and storage medium
Tang et al. An approach of dynamic object removing for indoor mapping based on UGV SLAM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant