CN112597326A - Scene map file processing method and device - Google Patents

Scene map file processing method and device

Info

Publication number
CN112597326A
CN112597326A
Authority
CN
China
Prior art keywords
scene map
picture
map file
target
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011463379.1A
Other languages
Chinese (zh)
Inventor
刘万凯
盛兴东
颜长建
刘云辉
肖剑峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202011463379.1A priority Critical patent/CN112597326A/en
Publication of CN112597326A publication Critical patent/CN112597326A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/587Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a scene map file processing method for improving the matching precision of similar scenes and reducing the probability of mistakenly sending a similar-scene map file. The method comprises the following steps: receiving a first target picture sent by a first target device; performing retrieval according to the first target picture to determine at least one picture to be detected corresponding to the first target picture; determining, according to the pose feature points in the first target picture, whether the picture to be detected has pose feature points meeting a specific condition; and, where the picture to be detected has pose feature points meeting the specific condition, sending the scene map file corresponding to the picture to be detected to the first target device so that the first target device performs a positioning operation based on the scene map file. By checking the three-dimensional spatial pose of the picture, the scheme improves the matching precision of similar scenes and reduces the probability of mistakenly sending a similar-scene map file.

Description

Scene map file processing method and device
Technical Field
The present disclosure relates to the field of electronic maps, and in particular, to a method and an apparatus for processing a scene map file.
Background
SLAM (Simultaneous Localization and Mapping) works as follows: when a device is in an unknown environment, it acquires its motion state and surrounding-environment information through its sensors, reconstructs the three-dimensional structure of the surrounding environment in real time, and simultaneously localizes itself.
In the prior art, after the server receives a picture sent by a device, it performs a retrieval based on the picture, determines the scene map file corresponding to the picture from the retrieval result, and sends that scene map file to the device. A picture taken by the device may therefore mismatch one of several similar pictures on the server (for example, pictures of similar tables and chairs, or pictures of different meeting rooms furnished with similar tables and chairs), so that the server feeds back an incorrect scene map file to the device.
Disclosure of Invention
The embodiments of the application aim to provide a scene map file processing method for improving the matching precision of similar scenes and reducing the probability of mistakenly sending a similar-scene map file.
To solve this technical problem, the embodiments of the application adopt the following technical scheme. A scene map file processing method comprises the following steps:
receiving a first target picture sent by a first target device;
performing retrieval according to the first target picture to determine at least one picture to be detected corresponding to the first target picture;
determining, according to the pose feature points in the first target picture, whether the picture to be detected has pose feature points meeting a specific condition;
and, where the picture to be detected has pose feature points meeting the specific condition, sending the scene map file corresponding to the picture to be detected to the first target device so that the first target device performs a positioning operation based on the scene map file.
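The four steps above can be sketched as a server-side handler. This is a minimal illustration: the function names, data layout, and the matching threshold (the "specific number") are assumptions for exposition, not the patent's implementation.

```python
# Minimal sketch of the server-side flow (steps S11-S14); all names and the
# matching threshold are illustrative assumptions, not the patent's code.

MATCH_THRESHOLD = 30  # "specific number" of matching pose feature points (assumed)

def retrieve_candidates(picture_db, target_features):
    """Step S12: retrieve pictures whose feature sets overlap the target's."""
    return [p for p in picture_db if p["features"] & target_features]

def count_matching_pose_points(candidate, target_features):
    """Step S13 helper: count pose feature points shared with the target."""
    return len(candidate["features"] & target_features)

def handle_first_target_picture(picture_db, scene_maps, target_features):
    """Steps S11-S14: return the scene map file to send back, or None."""
    for candidate in retrieve_candidates(picture_db, target_features):
        if count_matching_pose_points(candidate, target_features) > MATCH_THRESHOLD:
            return scene_maps[candidate["scene_id"]]
    return None  # no candidate met the specific condition; no map is sent
```

A retrieved picture that shares too few pose feature points with the target is thus filtered out, rather than its map being sent back as in the prior art.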
In one embodiment, the method further comprises:
receiving a picture file and a scene map file sent by a second target device;
normalizing the picture file and the scene map file sent by the second target device;
allocating a scene identifier to the scene map file;
and storing the scene identifier in association with the normalized picture file.
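The registration steps above can be sketched as follows. The identifier scheme and the stand-in "normalization" (here just lowercasing a file name) are illustrative assumptions; the patent's normalization is described later in terms of feature coordinates and point clouds.

```python
# Sketch of map registration (receive, normalize, assign id, store);
# the id scheme and the stand-in normalization are illustrative assumptions.
import itertools

class SceneStore:
    _ids = itertools.count(1)  # monotonically allocated scene identifiers

    def __init__(self):
        self.pictures = {}  # scene_id -> normalized picture file
        self.maps = {}      # scene_id -> scene map file

    def register(self, picture_file, scene_map_file):
        scene_id = f"scene-{next(self._ids)}"          # allocate a scene identifier
        self.pictures[scene_id] = picture_file.lower()  # stand-in "normalization"
        self.maps[scene_id] = scene_map_file            # store in association
        return scene_id
```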
In one embodiment, determining, according to the pose feature points in the first target picture, whether the picture to be detected has pose feature points meeting a specific condition includes:
determining whether there is, among the pictures to be detected, a second target picture whose pose feature points meet the specific condition, wherein the specific condition comprises: the number of pose feature points in the picture to be detected that match those in the first target picture is greater than a specific number;
and, where the picture to be detected has pose feature points meeting the specific condition, sending the scene map file corresponding to the picture to be detected to the first target device includes:
sending the scene map file corresponding to the second target picture to the first target device where the second target picture exists.
In one embodiment, before sending the scene map file corresponding to the second target picture to the first target device, the method further includes:
acquiring the scene identifier corresponding to the second target picture;
and determining the scene map file corresponding to that scene identifier as the scene map file corresponding to the second target picture.
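This lookup can be sketched as two dictionary accesses; the dictionary layout is an assumption for illustration:

```python
# Sketch of resolving a matched picture to its scene map file via the stored
# scene identifier; the dictionary layout is an illustrative assumption.

def map_for_picture(picture_to_scene_id, scene_maps, second_target_picture):
    scene_id = picture_to_scene_id[second_target_picture]  # acquire the identifier
    return scene_maps[scene_id]                            # resolve it to a map file
```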
In one embodiment, after normalizing the picture file and the scene map file sent by the second target device, the method further includes:
determining whether adjacent scene map files are stored locally, the adjacent scene map files being map files corresponding to a plurality of adjacent scenes;
and, where adjacent scene map files are stored locally, generating and storing description information representing the adjacency relationship between them.
In one embodiment, after sending the scene map file corresponding to the second target picture to the first target device, the method further includes:
where the scene map file has corresponding description information, determining the adjacent scene map file corresponding to the scene map file according to that description information;
and sending the adjacent scene map file corresponding to the scene map file to the first target device so that the first target device preloads it.
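The adjacency bookkeeping in the two embodiments above can be sketched as follows. How adjacency is detected is not specified here, so the sketch takes candidate neighbour pairs as input; that, and all names, are assumptions.

```python
# Sketch of the adjacency description information: record which locally stored
# maps are neighbours, then use that record to push maps for preloading.
# The source of the neighbour pairs is an illustrative assumption.

def build_adjacency(stored_maps, neighbour_pairs):
    """Keep only pairs whose two maps are both stored locally."""
    adjacency = {}
    for a, b in neighbour_pairs:
        if a in stored_maps and b in stored_maps:
            adjacency.setdefault(a, set()).add(b)
            adjacency.setdefault(b, set()).add(a)
    return adjacency

def maps_to_preload(adjacency, sent_map):
    """Adjacent maps to send so the device can preload them."""
    return sorted(adjacency.get(sent_map, ()))
```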
The application also provides a scene map file processing method, which comprises the following steps:
acquiring a first target picture corresponding to the environment;
sending a first target picture corresponding to the environment to a server;
receiving a scene map file sent by the server, wherein the scene map file is determined through the pose feature points in the first target picture;
and loading the scene map file and performing a positioning operation based on it.
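The device-side steps above can be sketched as a small client. The server stub and all names are illustrative assumptions; the positioning result is a placeholder, since the actual SLAM relocalization is outside the scope of this sketch.

```python
# Device-side sketch (capture picture, send, receive map, load, localize);
# the server stub and every name here are illustrative assumptions.

class EchoServer:
    """Stub standing in for the real server."""
    def __init__(self, known):
        self.known = known  # picture -> scene map file

    def request_map(self, picture):
        return self.known.get(picture)

class DeviceClient:
    def __init__(self, server):
        self.server = server
        self.loaded_map = None

    def localize(self, first_target_picture):
        scene_map = self.server.request_map(first_target_picture)  # send + receive
        if scene_map is None:
            return None
        self.loaded_map = scene_map      # load the scene map file
        return f"pose-in-{scene_map}"    # placeholder for the positioning result
```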
In one embodiment, the method further comprises:
acquiring a picture file and a scene map file corresponding to the environment;
sending a picture file and a scene map file corresponding to the environment to a server so that the server stores them in association; when the server then receives the first target picture, it determines the scene map file corresponding to the first target picture according to the stored correspondence between picture files and scene map files.
In one embodiment, the method further comprises:
receiving an adjacent scene map file corresponding to the scene map file sent by the server;
and preloading an adjacent scene map file corresponding to the scene map file.
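The client-side preloading can be sketched as a small cache, so that roaming into a neighbouring scene needs no extra round trip to the server; the cache interface is an illustrative assumption.

```python
# Sketch of client-side preloading of adjacent scene map files pushed by the
# server; the cache interface is an illustrative assumption.

class MapCache:
    def __init__(self):
        self._cache = {}

    def preload(self, adjacent_maps):
        """Store adjacent scene map files (scene_id -> map file)."""
        self._cache.update(adjacent_maps)

    def get(self, scene_id):
        """Return a preloaded map, or None on a cache miss."""
        return self._cache.get(scene_id)
```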
The present application further provides a scene map file processing apparatus, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
the method corresponding to any one of the above embodiments is executed.
The present application further provides an electronic device, comprising:
the first receiving module is used for receiving a first target picture sent by first target equipment;
the retrieval module is used for retrieving according to the first target picture so as to determine at least one picture to be detected corresponding to the first target picture;
the first determining module is used for determining, according to the pose feature points in the first target picture, whether the picture to be detected has pose feature points meeting a specific condition;
and the first sending module is used for sending the scene map file corresponding to the picture to be detected to the first target device where the picture to be detected has pose feature points meeting the specific condition, so that the first target device performs a positioning operation based on the scene map file.
The present application further provides an electronic device, comprising:
the first acquisition module is used for acquiring a first target picture corresponding to the environment;
the first sending module is used for sending a first target picture corresponding to the environment to the server;
the first receiving module is used for receiving a scene map file sent by the server, wherein the scene map file is determined through the pose feature points in the first target picture;
and the first loading module is used for loading the scene map file and executing positioning operation based on the scene map file.
Whether the picture to be detected has pose feature points meeting a specific condition is determined according to the pose feature points in the first target picture; where such pose feature points exist, the scene map file corresponding to the picture to be detected is sent to the first target device. The first target picture is thereby matched against the picture to be detected through pose feature points, adding a three-dimensional pose check of the pictures, which improves the matching precision of similar scenes and reduces the probability of mistakenly sending a similar-scene map file.
Drawings
Fig. 1 is a flowchart of a scene map file processing method executed by a server according to an embodiment of the present application;
fig. 2 is a flowchart of a scene map file processing method executed by a server according to another embodiment of the present application;
fig. 3 is a flowchart of creating a scene picture and a corresponding scene map file in an embodiment of the present application;
FIG. 4 is a flow chart for creating a scene picture, a scene map file, and an adjacency relationship between the scene map files;
FIG. 5 is a diagram illustrating interaction between a device and a server during device location and device roaming;
fig. 6 is a flowchart of a scene map file processing method executed by a device (a first target device or a second target device) in an embodiment of the present application;
FIG. 7 is a flowchart of a scene map file processing method executed by a device according to another embodiment of the present application;
fig. 8 is a block diagram of an electronic device corresponding to a server in an embodiment of the present application;
FIG. 9 is a block diagram of an electronic device corresponding to a server in another embodiment of the present application;
FIG. 10 is a block diagram of an electronic device in an embodiment of the present application;
fig. 11 is a block diagram of an electronic device according to another embodiment of the present application.
Detailed Description
Various aspects and features of the present application are described herein with reference to the drawings.
It will be understood that various modifications may be made to the embodiments of the present application. Accordingly, the foregoing description should not be construed as limiting, but merely as exemplifications of embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the application.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the application and, together with a general description of the application given above and the detailed description of the embodiments given below, serve to explain the principles of the application.
These and other characteristics of the present application will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the attached drawings.
It is also to be understood that although the present application has been described with reference to some specific examples, those skilled in the art are able to ascertain many other equivalents to the practice of the present application.
The above and other aspects, features and advantages of the present application will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present application are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely exemplary of the application, which can be embodied in various forms. Well-known and/or repeated functions and constructions are not described in detail to avoid obscuring the application with unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to employ the present application in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the application.
Fig. 1 is a flowchart of a scene map file processing method according to an embodiment of the present application, where the method includes the following steps S11-S14:
in step S11, a first target picture sent by a first target device is received;
in step S12, retrieval is performed according to the first target picture to determine at least one picture to be detected corresponding to the first target picture;
in step S13, whether the picture to be detected has pose feature points meeting a specific condition is determined according to the pose feature points in the first target picture;
in step S14, where the picture to be detected has pose feature points meeting the specific condition, the scene map file corresponding to the picture to be detected is sent to the first target device, so that the first target device performs a positioning operation based on the scene map file.
In this embodiment, a first target picture sent by a first target device is received. Specifically, the first target device may be a device such as a robot (for example, an indoor wheeled robot) or an unmanned aerial vehicle; such devices need to localize themselves during operation to move smoothly. As shown in fig. 2, while the device moves, a camera of the device captures a picture of the environment where the first target device is located, namely the first target picture (the 2D image in fig. 2), and sends it to the server, which receives it. The first target picture is normalized, after which the server performs a local retrieval according to the normalized first target picture to determine at least one picture to be detected corresponding to it. Specifically, feature point information describing the picture (such as corner information and feature vectors) may be extracted, retrieval may then be performed in the server's picture library based on that feature point information, and one or more pictures to be detected corresponding to the first target picture are thereby determined.
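The retrieval step can be sketched as nearest-neighbour search over feature vectors. Cosine similarity and the top-k cut-off are assumptions for illustration; the patent does not commit to a particular similarity measure.

```python
# Sketch of the retrieval step: describe each picture by a feature vector and
# return the library pictures closest to the normalized target picture.
# Cosine similarity and top-k are illustrative assumptions, not the patent's method.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def retrieve(library, target_vector, top_k=3):
    """Return the names of the top_k pictures to be detected, most similar first."""
    ranked = sorted(library.items(),
                    key=lambda item: cosine(item[1], target_vector),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]
```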
After the one or more pictures to be detected corresponding to the first target picture are determined, whether the picture to be detected has pose feature points meeting a specific condition is determined according to the pose feature points in the first target picture. Specifically, it is determined whether there is, among the pictures to be detected, a second target picture whose pose feature points meet the specific condition, the specific condition comprising: the number of pose feature points matching those in the first target picture is greater than a specific number. Further, the pose feature points may be divided into two types: those representing the planar geometric constraint relationship of the feature points in the first target picture, and those representing the stereo geometric constraint relationship of those feature points. For convenience of description, the former are referred to as first pose feature points and the latter as second pose feature points. The planar geometric constraint relationship may refer to the relationship (such as a position or arrangement relationship) between a feature point and the other feature points on the same plane, and the stereo geometric constraint relationship may refer to the position or arrangement relationship between a feature point and the other feature points in three-dimensional space.
After receiving the first target picture, the server determines, according to the first pose feature points in the first target picture, whether any picture to be detected has first pose feature points meeting the specific condition, i.e., whether a single picture to be detected contains at least a specific number of first pose feature points that successfully match those in the first target picture. If such a picture exists, its second pose feature points are matched against the second pose feature points of the first target picture; if the number of matching second pose feature points is greater than a specific number, it is determined that the picture to be detected has pose feature points meeting the specific condition.
For example, suppose three pictures a, b, and c to be detected are retrieved in step S12. Two of them are then screened out by matching the first pose feature points, leaving only picture c (usually only one picture remains at this step, because the first pose feature points match with high probability). It is then further determined, by matching the second pose feature points, whether picture c and the first target picture represent the same scene. In the present application, either only the first pose feature points may be used, or both the first pose feature points and the more strongly constrained second pose feature points may be used, further improving the matching precision of similar scenes and reducing the probability of mistakenly sending a similar-scene map file.
Where the picture to be detected has pose feature points meeting the specific condition, the scene map file corresponding to it is sent to the first target device so that the first target device performs a positioning operation based on the scene map file. That is, when a picture representing the same scene as the first target picture (e.g., picture c in the foregoing example) is found among the pictures to be detected, the scene map file corresponding to that picture, i.e., the scene map file of the scene corresponding to the first target picture, is sent to the first target device.
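The two-stage check described above can be sketched as follows: a planar (first pose feature point) filter followed by a stereo (second pose feature point) confirmation. The thresholds and data layout are illustrative assumptions.

```python
# Sketch of the two-stage verification: filter candidates by planar (first)
# pose feature points, then confirm with the more strongly constrained stereo
# (second) pose feature points. Thresholds are illustrative assumptions.

FIRST_THRESHOLD = 20   # matches required on first pose feature points (assumed)
SECOND_THRESHOLD = 10  # matches required on second pose feature points (assumed)

def matches(a, b):
    """Number of pose feature points shared between two sets."""
    return len(a & b)

def verify(candidates, target_first, target_second):
    """Return the candidate picture that passes both stages, or None."""
    stage1 = [c for c in candidates
              if matches(c["first"], target_first) > FIRST_THRESHOLD]
    for c in stage1:
        if matches(c["second"], target_second) > SECOND_THRESHOLD:
            return c
    return None
```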
In one embodiment, as shown in fig. 2, the method may also be implemented as the following steps S21-S24:
in step S21, a picture file and a scene map file sent by a second target device are received;
in step S22, the picture file and the scene map file sent by the second target device are normalized;
in step S23, a scene identifier is allocated to the scene map file;
in step S24, the scene identifier is saved in association with the normalized picture file.
In this embodiment, the second target device may be a robot, an unmanned aerial vehicle, or the like, and is used for mapping. The second target device may be the same physical device as the first target device of the foregoing embodiment. Specifically, if the picture and the scene map file corresponding to the scene where the second target device is currently located are already stored on the server, meaning the device has entered a familiar scene, it may perform the above steps S11-S14 to localize itself, in which case it plays the role of the first target device.
If the server does not store the picture and the scene map file corresponding to the current scene of the second target device, that is, the device has entered an unfamiliar scene, the acquisition flow shown in fig. 3 is entered: the picture (the scene picture) and the scene map file (the SLAM map) corresponding to the current environment are acquired through the camera, sensors, and the like carried by the device and sent to the server, thereby helping the server complete the mapping operation for the environment where the second target device is located.
Specifically, in this embodiment, if the second target device enters an unfamiliar scene, the server receives the picture file and the scene map file sent by it. It then normalizes them, i.e., normalizes the picture file (the two-dimensional coordinates of the picture feature points shown in fig. 3) and normalizes the scene map file (the point cloud map and feature points shown in fig. 3); allocates a scene identifier to the scene map file; and saves the scene identifier (the scene id shown in fig. 3) in association with the normalized picture file (the scene map shown in fig. 3).
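The normalization step can be sketched as follows. The patent only names what is normalized (2D feature coordinates, point cloud), so both formulas here (scaling pixels to [0, 1], centring the cloud at its centroid) are assumptions about what that normalization might entail.

```python
# Sketch of normalization: scale 2D picture feature coordinates into [0, 1]
# by image size, and centre point-cloud coordinates at their mean. Both
# formulas are illustrative assumptions, not the patent's definition.

def normalize_2d(points, width, height):
    """Normalize pixel coordinates of picture feature points to [0, 1]."""
    return [(x / width, y / height) for x, y in points]

def center_cloud(points):
    """Translate a 3D point cloud so its centroid sits at the origin."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    return [(x - cx, y - cy, z - cz) for x, y, z in points]
```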
From the above embodiments, those skilled in the art can understand that the "first target device" and the "second target device" in the present application may be the same device: when the device enters a familiar scene, it performs the positioning operation of the "first target device" through the above steps S11-S14, and when it enters an unfamiliar scene, it performs the mapping operation of the "second target device" through the above steps S21-S24.
Therefore, a device can help the server perform the mapping operation in an unfamiliar environment, and can localize itself in a familiar environment using the mapping data on the server; when there are multiple such devices, data can be shared among them through the server. Of course, when a connection is established directly between devices, data sharing can be achieved even without a server.
In one embodiment, the above step S13 may be implemented as the following step:
determining whether there is, among the pictures to be detected, a second target picture whose pose feature points meet the specific condition, wherein the specific condition comprises: the number of pose feature points matching those in the first target picture is greater than a specific number;
and the above step S14 may be implemented as the following step:
sending the scene map file corresponding to the second target picture to the first target device where the second target picture exists.
Specifically, whether a second target picture with the posture characteristic point meeting specific conditions exists in the picture to be detected or not is determined, wherein the specific conditions comprise: the number of the posture characteristic points matched with the posture characteristic points in the picture to be detected is more than a specific number; further, the pose feature points may be divided into two types, namely, pose feature points for representing the plane geometric constraint relationship of each feature point in the first target picture and pose feature points for representing the solid geometric constraint relationship of each feature point in the first target picture. For convenience of description, the pose feature point used for representing the plane geometric constraint relation of each feature point in the first target picture is referred to as a first pose feature point, and the pose feature point used for representing the solid geometric constraint relation of each feature point in the first target picture is referred to as a second pose feature point. The planar geometric constraint relationship may refer to a relationship (e.g., a position relationship, an arrangement relationship, etc.) between a feature point and each feature point on the same plane, and the stereo geometric constraint relationship may refer to a position relationship, an arrangement relationship, etc. between a feature point and each feature point in a three-dimensional space.
After receiving the first target picture, the server determines whether a first posture feature point meeting specific conditions exists in the picture to be detected according to the first posture feature point in the first target picture, wherein the posture feature point meeting the specific conditions is the first posture feature point meeting specific number requirements in the same picture to be detected, which is equivalent to judging whether the picture to be detected exists in the picture to be detected, and the number of the corresponding first posture feature points successfully matched with the first posture feature points in the first target picture is more than one value. And if the picture to be detected exists, matching the second posture characteristic points of the picture to be detected with the second posture characteristic points of the first target picture, and if the number of the second posture characteristic points of the picture to be detected matched with the second posture characteristic points of the first target picture is more than a specific number, determining that the posture characteristic points meeting specific conditions exist in the picture to be detected.
For example, suppose three pictures to be detected, a, b, and c, are retrieved in step S12. Matching on the first pose feature points screens out two of them, leaving only picture c (usually only one picture survives this step, because the first pose feature points match with high probability). Matching on the second pose feature points then further determines whether picture c and the first target picture represent the same scene. In the present application, either the first pose feature points alone may be used, or the first pose feature points together with the more strongly constrained second pose feature points, which further improves matching precision for similar scenes and reduces the probability of mistakenly sending the map file of a merely similar scene.
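A minimal sketch of this two-stage screening, assuming set-valued stand-ins for the feature points (the helper names, thresholds, and data layout are hypothetical; a real system would match descriptor vectors, not string labels):

```python
def count_matches(points_a, points_b):
    # Number of feature points common to both pictures (a stand-in for
    # real descriptor matching, e.g. nearest-neighbour search).
    return len(set(points_a) & set(points_b))

def find_second_target(candidates, target, t1, t2):
    # Stage 1: coarse filter on first pose feature points
    # (planar geometric constraints): cheap, matches with high probability.
    stage1 = [c for c in candidates
              if count_matches(c["planar"], target["planar"]) > t1]
    # Stage 2: verify survivors with second pose feature points
    # (solid geometric constraints): stricter, confirms the same scene.
    for c in stage1:
        if count_matches(c["solid"], target["solid"]) > t2:
            return c  # the second target picture
    return None
```

With pictures a, b, c from the example above, a and b fall out in stage 1 and c is confirmed (or rejected) in stage 2.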
When pose feature points meeting the specific condition exist in a picture to be detected, the scene map file corresponding to that picture is sent to the first target device so that the first target device performs a positioning operation based on the scene map file. That is, when it is determined that one of the pictures to be detected (e.g., picture c in the foregoing example) represents the same scene as the first target picture, the scene map file corresponding to that picture, which is the scene map file of the scene shown in the first target picture, is sent to the first target device.
In one embodiment, before the above step S14, the method may also be implemented as the following steps A1-A2:
In step A1, a scene identifier corresponding to the second target picture is obtained;
In step A2, the scene map file corresponding to the scene identifier is determined to be the scene map file corresponding to the second target picture.
In this embodiment, the second target picture has been determined to represent the same scene as a picture previously stored on the server, and scene identifiers were stored in correspondence with the normalized pictures, which amounts to storing a correspondence between scene identifiers and normalized pictures. Therefore, before sending the scene map file to the first target device, the scene identifier corresponding to the second target picture can be looked up in this pre-stored correspondence, and the scene map file stored under that identifier is determined to be the scene map file corresponding to the second target picture, i.e., the scene map file referred to in step S14.
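Steps A1-A2 amount to two lookups over the pre-stored correspondences. The following sketch illustrates this with hypothetical identifiers and file paths:

```python
# Correspondences pre-stored at map-building time (steps S21-S24):
# normalized picture -> scene identifier, scene identifier -> map file.
picture_to_scene = {"pic_c_normalized": "scene_07"}
scene_to_map = {"scene_07": "maps/scene_07.bin"}

def map_file_for(second_target_picture):
    # Step A1: look up the scene identifier of the second target picture.
    scene_id = picture_to_scene[second_target_picture]
    # Step A2: the map file stored under that identifier is the one to send.
    return scene_to_map[scene_id]
```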
In one embodiment, after the above step S22, the method may also be implemented as the following steps B1-B2:
In step B1, it is determined whether adjacent scene map files are stored locally, where adjacent scene map files are the map files corresponding to a plurality of adjacent scenes;
In step B2, when adjacent scene map files are stored locally, description information representing the adjacency relationship between the adjacent scene map files is generated and stored.
In this embodiment, the server may maintain the adjacent scene map files through description information representing the adjacency relationship between them, where the description information may be an adjacency matrix representing the correspondence between adjacent scene map files. Specifically, as shown in fig. 4, when a scene is created, an adjacency matrix of the adjacent spaces (i.e., adjacent map scenes) can be established according to the map space boundaries.
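One way such an adjacency matrix could be derived from map space boundaries is sketched below. The axis-aligned rectangle representation and the touch test are illustrative assumptions, not the method the patent specifies:

```python
def touching(b1, b2):
    # Boundary rectangles (xmin, ymin, xmax, ymax) are treated as
    # adjacent when they overlap or share an edge.
    ax0, ay0, ax1, ay1 = b1
    bx0, by0, bx1, by1 = b2
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def build_adjacency(bounds):
    # bounds: scene_id -> boundary rectangle of that scene's map space.
    # Returns an adjacency matrix as a dict of dicts of booleans.
    ids = sorted(bounds)
    return {i: {j: (i != j and touching(bounds[i], bounds[j]))
                for j in ids}
            for i in ids}
```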
In one embodiment, after the above step S14, the method may also be implemented as the following steps C1-C2:
In step C1, when the scene map file has corresponding description information, the adjacent scene map file corresponding to the scene map file is determined according to the description information;
In step C2, the adjacent scene map file corresponding to the scene map file is sent to the first target device, so that the first target device preloads the adjacent scene map file.
In this embodiment, as shown in fig. 5, the first target device (or the second target device) sends its real-time location to the server while roaming through a scene, and the server retrieves nearby map files based on the adjacency matrix. That is, the server determines from the adjacency matrix whether scene map files adjacent to the current scene map file exist nearby, and if so, sends them to the first target device (or the second target device) so that the device preloads them. In this way, when the device moves from the current scene into a new scene, the transition is seamless, because the map of the new scene has already been preloaded.
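The server-side lookup driven by the adjacency matrix can be sketched as follows (the dict-of-dicts matrix layout is an illustrative assumption):

```python
def maps_to_preload(adjacency, current_scene):
    # Return the scene identifiers whose map files the device should
    # preload: every scene marked adjacent to the current one in the
    # adjacency matrix.
    return sorted(scene for scene, adjacent in adjacency[current_scene].items()
                  if adjacent)
```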
Fig. 6 is a flowchart of a scene map file processing method according to an embodiment of the present application, where the method includes the following steps S61-S64:
In step S61, a first target picture corresponding to the environment is acquired;
In step S62, the first target picture corresponding to the environment is sent to the server;
In step S63, a scene map file sent by the server is received, where the scene map file is determined through the pose feature points in the first target picture;
In step S64, the scene map file is loaded, and a positioning operation is performed based on the scene map file.
The executing entity of this embodiment may be a device such as a robot or an unmanned aerial vehicle, and specifically may be the first target device or the second target device in the foregoing embodiments; such a device needs to position itself during operation to ensure that it moves smoothly. As shown in fig. 2, while the device is moving, a camera on the device takes a picture of the environment where the device is located, i.e., the first target picture (the 2D image mentioned in fig. 2), and sends it to the server. After receiving the first target picture, the server executes the above steps S11-S14, so that the scene map file corresponding to the first target picture, i.e., the scene map file of the environment where the device is currently located, is determined through the pose feature points in the first target picture and sent back. The executing entity of this embodiment then receives the scene map file, loads it, and performs the positioning operation based on it.
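The client-side loop of steps S61-S64 might look like the following sketch; the camera and server interfaces are hypothetical placeholders for whatever capture hardware and transport the device actually uses:

```python
class LocalizingDevice:
    """Sketch of the client side of steps S61-S64; interface names
    are illustrative, not the patent's specified API."""

    def __init__(self, camera, server):
        self.camera = camera
        self.server = server
        self.scene_map = None

    def localize(self):
        picture = self.camera.capture()        # S61: 2D image of surroundings
        map_file = self.server.query(picture)  # S62 + S63: send picture, receive map
        self.scene_map = map_file              # S64: load the scene map ...
        return self.locate_in(map_file)        # ... and position the device in it

    def locate_in(self, map_file):
        # Placeholder: a real device would run visual (re)localization here.
        return f"pose in {map_file}"
```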
In one embodiment, the method may also be implemented as the following steps D1-D2:
In step D1, a picture file and a scene map file corresponding to the current environment are acquired;
In step D2, the picture file and the scene map file corresponding to the environment are sent to the server, so that the server stores them correspondingly; when the server subsequently receives the first target picture, it determines the scene map file corresponding to the first target picture according to the stored correspondence between picture files and scene map files.
Specifically, the executing entity of this embodiment may be the second target device (or the first target device). As shown in fig. 3, when the device enters an unfamiliar scene, it acquires a picture and a scene map file of the current environment through its on-board camera, sensors, and the like, and sends them to the server, thereby helping the server complete the map-building operation for that environment. After receiving the picture file and the scene map file, the server stores them correspondingly through the above steps S21-S24.
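A sketch of the server-side bookkeeping that the upload in steps D1-D2 relies on (steps S21-S24 on the server). The normalization step and the identifier scheme here are illustrative stand-ins, not the patent's specified implementation:

```python
class MapServer:
    """Stores uploaded pictures and scene map files under allocated
    scene identifiers (illustrative sketch)."""

    def __init__(self):
        self.pictures = {}   # scene_id -> normalized picture
        self.maps = {}       # scene_id -> scene map file
        self._next = 0

    def register(self, picture, map_file):
        # S22: normalize the picture (lower-casing stands in for real
        # normalization such as resizing or re-encoding).
        normalized = picture.lower()
        # S23: allocate a scene identifier for the scene map file.
        scene_id = f"scene_{self._next:03d}"
        self._next += 1
        # S24: store the identifier with the normalized picture and map.
        self.pictures[scene_id] = normalized
        self.maps[scene_id] = map_file
        return scene_id
```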
In one embodiment, as shown in FIG. 7, the method may also be implemented as steps S71-S72 as follows:
In step S71, an adjacent scene map file corresponding to the scene map file sent by the server is received;
In step S72, the adjacent scene map file corresponding to the scene map file is preloaded.
The server can maintain the adjacent scene map files through description information (e.g., an adjacency matrix) representing the adjacency relationship between them. After sending a scene map file to the first target device or the second target device, the server determines the adjacent scene map files corresponding to it based on the description information and sends those as well. Upon receiving them, the first target device or the second target device can preload the adjacent scene map files corresponding to the scene map file.
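On the device, preloading can be as simple as a keyed cache that is filled when adjacent map files arrive and consulted when a scene boundary is crossed. This sketch uses hypothetical names:

```python
class PreloadCache:
    """Device-side cache for adjacent scene map files pushed by the
    server, so entering a new scene needs no load pause."""

    def __init__(self):
        self.loaded = {}

    def preload(self, scene_id, map_file):
        # Idempotent: a map already in the cache is not stored again.
        self.loaded.setdefault(scene_id, map_file)

    def enter(self, scene_id):
        # Returns the already-loaded map, or None on a cache miss.
        return self.loaded.get(scene_id)
```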
Fig. 8 is a block diagram of an electronic device according to an embodiment of the present application, where the electronic device includes the following modules:
a first receiving module 81, configured to receive a first target picture sent by a first target device;
a retrieval module 82, configured to perform retrieval according to the first target picture to determine at least one picture to be detected corresponding to the first target picture;
a first determining module 83, configured to determine, according to the pose feature points in the first target picture, whether pose feature points meeting a specific condition exist in the picture to be detected;
a first sending module 84, configured to send, when pose feature points meeting the specific condition exist in the picture to be detected, the scene map file corresponding to the picture to be detected to the first target device, so that the first target device performs a positioning operation based on the scene map file.
In one embodiment, as shown in fig. 9, the electronic device further includes:
a second receiving module 91, configured to receive a picture file and a scene map file sent by a second target device;
a normalization module 92, configured to normalize the picture file and the scene map file sent by the second target device;
an allocating module 93, configured to allocate a scene identifier to the scene map file;
a saving module 94, configured to correspondingly save the scene identifier and the normalized picture file.
In one embodiment, the first determining module includes:
a determining submodule, configured to determine whether a second target picture whose pose feature points meet a specific condition exists among the pictures to be detected, where the specific condition includes: the number of pose feature points matching pose feature points in the first target picture is greater than a specific number;
a first sending module comprising:
and the sending submodule is used for sending the scene map file corresponding to the second target picture to the first target equipment under the condition that the second target picture exists.
In one embodiment, the electronic device further comprises:
the obtaining module is used for obtaining a scene identifier corresponding to the second target picture before the first sending module sends the scene map file corresponding to the second target picture to the first target device;
and the second determining module is used for determining that the scene map file corresponding to the scene identifier is the scene map file corresponding to the second target picture.
In one embodiment, the electronic device further comprises:
a third determining module, configured to determine, after the picture file and the scene map file sent by the second target device are normalized, whether adjacent scene map files are stored locally, where adjacent scene map files are map files corresponding to a plurality of adjacent scenes;
and the storage module is used for generating and storing description information for representing the adjacent relation between the adjacent scene map files under the condition that the adjacent scene map files are locally stored.
In one embodiment, the electronic device further comprises:
a fourth determining module, configured to determine, after the first sending module performs the step of sending the scene map file corresponding to the second target picture to the first target device, an adjacent scene map file corresponding to the scene map file according to the description information when the scene map file has corresponding description information;
and the second sending module is used for sending the adjacent scene map file corresponding to the scene map file to the first target device so that the first target device can pre-load the adjacent scene map file.
Fig. 10 is a block diagram of an electronic device according to an embodiment of the present application, where the electronic device includes the following modules:
the first obtaining module 101 is configured to obtain a first target picture corresponding to an environment where the first target picture is located;
the first sending module 102 is configured to send a first target picture corresponding to the environment to a server;
a first receiving module 103, configured to receive a scene map file sent by the server, where the scene map file is determined by a pose feature point in the first target picture;
a first loading module 104, configured to load the scene map file and perform a positioning operation based on the scene map file.
In one embodiment, as shown in fig. 11, the electronic device further includes:
the second obtaining module 111 is configured to obtain a picture file and a scene map file corresponding to the environment where the second obtaining module is located;
a second sending module 112, configured to send the picture file and the scene map file corresponding to the environment to a server, so that the server stores the picture file and the scene map file correspondingly; and when the server receives the first target picture, determining a scene map file corresponding to the first target picture according to the corresponding relation between the picture file and the scene map file stored in the server.
In one embodiment, the electronic device further comprises:
the second receiving module is used for receiving the adjacent scene map file corresponding to the scene map file sent by the server;
and the second loading module is used for preloading the adjacent scene map files corresponding to the scene map files.
The present application further provides a scene map file processing apparatus, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
the method corresponding to any one of the above embodiments is executed.
The above embodiments are only exemplary embodiments of the present application, and are not intended to limit the present application, and the protection scope of the present application is defined by the claims. Various modifications and equivalents may be made by those skilled in the art within the spirit and scope of the present application and such modifications and equivalents should also be considered to be within the scope of the present application.

Claims (10)

1. A scene map file processing method comprises the following steps:
receiving a first target picture sent by a first target device;
retrieving according to the first target picture to determine at least one picture to be detected corresponding to the first target picture;
determining whether pose feature points meeting a specific condition exist in the picture to be detected according to the pose feature points in the first target picture;
and when pose feature points meeting the specific condition exist in the picture to be detected, sending a scene map file corresponding to the picture to be detected to the first target device so that the first target device performs a positioning operation based on the scene map file.
2. The method of claim 1, further comprising:
receiving a picture file and a scene map file sent by a second target device;
normalizing the picture file and the scene map file sent by the second target device;
allocating a scene identifier to the scene map file;
and correspondingly storing the scene identifier and the normalized picture file.
3. The method according to claim 1 or 2, wherein determining whether pose feature points meeting a specific condition exist in the picture to be detected according to the pose feature points in the first target picture comprises:
determining whether a second target picture whose pose feature points meet the specific condition exists among the pictures to be detected, wherein the specific condition comprises: the number of pose feature points matching pose feature points in the first target picture is greater than a specific number;
and wherein sending the scene map file corresponding to the picture to be detected to the first target device when pose feature points meeting the specific condition exist in the picture to be detected comprises:
sending a scene map file corresponding to the second target picture to the first target device when the second target picture exists.
4. The method of claim 3, prior to said sending the scene map file corresponding to the second target picture to the first target device, the method further comprising:
acquiring a scene identifier corresponding to the second target picture;
and determining the scene map file corresponding to the scene identifier as the scene map file corresponding to the second target picture.
5. The method of claim 3, after the normalizing the picture file and the scene map file sent by the second target device, the method further comprising:
determining whether adjacent scene map files are stored locally or not, wherein the adjacent scene map files are map files corresponding to a plurality of adjacent scenes;
and under the condition that the adjacent scene map files are locally stored, generating and storing description information for representing the adjacent relation between the adjacent scene map files.
6. The method of claim 5, after sending the scene map file corresponding to the second target picture to the first target device, the method further comprising:
under the condition that the scene map file has corresponding description information, determining an adjacent scene map file corresponding to the scene map file according to the description information;
and sending the adjacent scene map file corresponding to the scene map file to the first target device so that the first target device preloads the adjacent scene map file.
7. A scene map file processing method comprises the following steps:
acquiring a first target picture corresponding to the environment;
sending a first target picture corresponding to the environment to a server;
receiving a scene map file sent by the server, wherein the scene map file is determined through the pose feature points in the first target picture;
and loading the scene map file, and executing positioning operation based on the scene map file.
8. The method of claim 7, further comprising:
acquiring a picture file and a scene map file corresponding to the environment;
sending a picture file and a scene map file corresponding to the environment to a server so that the server correspondingly stores the picture file and the scene map file; and when the server receives the first target picture, determining a scene map file corresponding to the first target picture according to the corresponding relation between the picture file and the scene map file stored in the server.
9. The method of claim 7, further comprising:
receiving an adjacent scene map file corresponding to the scene map file sent by the server;
and preloading an adjacent scene map file corresponding to the scene map file.
10. A scene map file processing apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
performing the method of any one of claims 1-6;
or
Performing the method according to any of claims 7-9.
CN202011463379.1A 2020-12-11 2020-12-11 Scene map file processing method and device Pending CN112597326A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011463379.1A CN112597326A (en) 2020-12-11 2020-12-11 Scene map file processing method and device

Publications (1)

Publication Number Publication Date
CN112597326A true CN112597326A (en) 2021-04-02

Family

ID=75192701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011463379.1A Pending CN112597326A (en) 2020-12-11 2020-12-11 Scene map file processing method and device

Country Status (1)

Country Link
CN (1) CN112597326A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991713A (en) * 2017-04-13 2017-07-28 网易(杭州)网络有限公司 Method and apparatus, medium, processor and the terminal of scene in more new game
CN109086350A (en) * 2018-07-13 2018-12-25 哈尔滨工业大学 A kind of mixed image search method based on WiFi
CN109661659A (en) * 2018-07-19 2019-04-19 驭势科技(北京)有限公司 The storage of vision positioning map and loading method, device, system and storage medium
CN110413719A (en) * 2019-07-25 2019-11-05 Oppo广东移动通信有限公司 Information processing method and device, equipment, storage medium
CN110855601A (en) * 2018-08-21 2020-02-28 华为技术有限公司 AR/VR scene map acquisition method
CN111311684A (en) * 2020-04-01 2020-06-19 亮风台(上海)信息科技有限公司 Method and equipment for initializing SLAM
CN111652929A (en) * 2020-06-03 2020-09-11 全球能源互联网研究院有限公司 Visual feature identification and positioning method and system

Similar Documents

Publication Publication Date Title
US11644338B2 (en) Ground texture image-based navigation method and device, and storage medium
KR102044491B1 (en) Create and update crowd-sourcing of zone description files for mobile device localization
CN110705574B (en) Positioning method and device, equipment and storage medium
EP2711670B1 (en) Visual localisation
US8861785B2 (en) Information processing device, information processing method and program
WO2019042426A1 (en) Augmented reality scene processing method and apparatus, and computer storage medium
CN110111388B (en) Three-dimensional object pose parameter estimation method and visual equipment
CN110645986A (en) Positioning method and device, terminal and storage medium
JP2018128314A (en) Mobile entity position estimating system, mobile entity position estimating terminal device, information storage device, and method of estimating mobile entity position
US20220270323A1 (en) Computer Vision Systems and Methods for Supplying Missing Point Data in Point Clouds Derived from Stereoscopic Image Pairs
JP2020507853A (en) Method and apparatus for three-dimensional point cloud reconstruction
CN112015187B (en) Semantic map construction method and system for intelligent mobile robot
WO2024087962A1 (en) Truck bed orientation recognition system and method, and electronic device and storage medium
KR20120112293A (en) Apparatus and method for detecting position of moving unit
CN115049731B (en) Visual image construction and positioning method based on binocular camera
CN113324537A (en) Vehicle pose acquisition method, vehicle positioning method and device, equipment and medium
CN112597326A (en) Scene map file processing method and device
KR101758786B1 (en) Apparatus for determining location of special point in image and method thereof
CN116050763A (en) Intelligent building site management system based on GIS and BIM
CN115827812A (en) Relocation method, relocation device, relocation equipment and storage medium thereof
CN116136408A (en) Indoor navigation method, server, device and terminal
CN112686962A (en) Indoor visual positioning method and device and electronic equipment
CN115705670B (en) Map management method and device
CN115982399B (en) Image searching method, mobile device, electronic device and computer program product
WO2024001847A1 (en) 2d marker, and indoor positioning method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination