CN109543634B - Data processing method and device in positioning process, electronic equipment and storage medium

Data processing method and device in positioning process, electronic equipment and storage medium

Info

Publication number
CN109543634B
CN109543634B
Authority
CN
China
Prior art keywords
image data
segmentation
result
positioning
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811442347.6A
Other languages
Chinese (zh)
Other versions
CN109543634A (en)
Inventor
王恺
林义闽
王洛威
韩立明
华敏杰
王响
廉士国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Beijing Technologies Co Ltd
Original Assignee
Cloudminds Beijing Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Beijing Technologies Co Ltd filed Critical Cloudminds Beijing Technologies Co Ltd
Priority to CN201811442347.6A priority Critical patent/CN109543634B/en
Publication of CN109543634A publication Critical patent/CN109543634A/en
Application granted granted Critical
Publication of CN109543634B publication Critical patent/CN109543634B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/255Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention relates to the field of computer vision, and discloses a data processing method and device in a positioning process, electronic equipment and a storage medium. In some embodiments of the present application, a data processing method includes: acquiring current image data of an environment; positioning according to the feature points in the current image data and the tracking map, and determining a first positioning result; segmenting the current image data, and determining a first segmentation result of the current image data; selecting feature points for secondary positioning from the feature points in the current image data according to the first segmentation result, the first positioning result and the tracking map; determining a second positioning result according to the feature points for secondary positioning and the tracking map; and/or, obtaining a segmentation result of the previous image data and a positioning result of the previous image data; and adjusting the first segmentation result according to the first positioning result, the positioning result of the previous image data and the segmentation result of the previous image data to obtain a second segmentation result.

Description

Data processing method and device in positioning process, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the field of computer vision, in particular to a data processing method and device in a positioning process, electronic equipment and a storage medium.
Background
Positioning and segmentation are the two most basic tasks of robot motion and sensing. The former lets the robot know its current position and orientation, while the latter helps it perceive the distribution and boundaries of objects of interest within its field of view. Both techniques are essential in many robotic applications, such as autonomous driving, Unmanned Aerial Vehicles (UAVs), robotic patrol, logistics, and the like.
For the positioning task, visual Simultaneous Localization and Mapping (vSLAM) has become one of the most promising technologies for robot positioning because its hardware and computation costs are relatively low. It uses an image sequence and some auxiliary sensor data, such as depth maps and Inertial Measurement Unit (IMU) data, to create an environment map and return the current location information. One challenge with vSLAM technology is that the environment in which robots operate is often changeable.
For the segmentation task, 2D image-based semantic segmentation using deep neural networks has proven to be effective and has been widely used in many systems. It can output a series of segmented regions and their class boundaries.
However, the inventor found in the course of studying the prior art that, when vSLAM technology is used, the accuracy of the map is affected by the instantaneous movement of certain objects during the mapping process. Moreover, some objects move after the map has been built, so the created map no longer matches the environment and subsequent positioning becomes inaccurate. For semantic segmentation, because deep learning methods rely on training data, inaccurate manual labeling and a lack of similar training data often lead to inaccurate segmentation results. Meanwhile, a single frame in a video sequence may suffer from problems such as blurring, which can also make the segmentation result inaccurate.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The embodiment of the invention aims to provide a data processing method, a data processing device, electronic equipment and a storage medium in a positioning process, so that a positioning result and/or a segmentation result are more accurate.
In order to solve the above technical problem, an embodiment of the present invention provides a data processing method in a positioning process, including the following steps: acquiring current image data of an environment; positioning according to the feature points in the current image data and the tracking map, and determining a first positioning result; segmenting the current image data, and determining a first segmentation result of the current image data; selecting feature points for secondary positioning from the feature points in the current image data according to the first segmentation result, the first positioning result and the tracking map; determining a second positioning result according to the feature points for secondary positioning and the tracking map; and/or, obtaining a segmentation result of the previous image data and a positioning result of the previous image data; and adjusting the first segmentation result according to the first positioning result, the positioning result of the previous image data and the segmentation result of the previous image data to obtain a second segmentation result.
The embodiment of the present invention further provides a data processing apparatus in a positioning process, including: an acquisition module, a first positioning module, a first segmentation module, a second positioning module and/or a second segmentation module. The acquisition module is used for acquiring current image data of the environment; the first positioning module is used for positioning according to the feature points in the current image data and the tracking map and determining a first positioning result; the first segmentation module is used for segmenting the current image data and determining a first segmentation result of the current image data; the second positioning module is used for selecting feature points for secondary positioning from the feature points in the current image data according to the first segmentation result, the first positioning result and the tracking map, and for determining a second positioning result according to the feature points for secondary positioning and the tracking map; the second segmentation module is used for obtaining the segmentation result of the previous image data and the positioning result of the previous image data, and for adjusting the first segmentation result according to the first positioning result, the positioning result of the previous image data and the segmentation result of the previous image data to obtain a second segmentation result.
An embodiment of the present invention also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the data processing method in the positioning process mentioned in the above embodiments.
The embodiment of the present invention also provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the data processing method in the positioning process mentioned in the above embodiment.
Compared with the prior art, the electronic equipment can optimize the segmentation result according to the positioning result and can also optimize the positioning result according to the segmentation result. The electronic equipment selects the feature points in the current image data according to the segmentation result, and performs secondary positioning according to the selected feature points, so that compared with a method for positioning by directly using the feature points in the current image data, the method reduces the influence of the feature points (such as the feature points of a moving object) which do not meet the requirements in the current image data on the positioning result, and improves the accuracy of the positioning result. The electronic equipment utilizes the positioning result of the current image data and the positioning result of the previous image data, so that the segmentation result of the current image data can be adjusted according to the segmentation result of the previous image, and the accuracy of the segmentation result of the current image data is improved.
In addition, selecting feature points for secondary positioning from feature points in the current image data according to the first segmentation result, the first positioning result and the tracking map specifically includes: classifying the feature points in the current image data according to the first segmentation result; determining the overall state of the feature points of each category according to the first positioning result and the tracking map; wherein the overall state of the characteristic points of the category is a static state or a motion state; and determining the characteristic points for secondary positioning according to the overall state of the characteristic points of the category. In the implementation, the feature points in the current image data are screened, the feature points of the moving object are removed, the influence of the feature points of the moving object on the positioning result is avoided, and the accuracy of the positioning result is improved.
In addition, selecting feature points for secondary positioning from feature points in the current image data according to the first segmentation result, the first positioning result and the tracking map specifically includes: acquiring a segmentation result of the previous image data and a positioning result of the previous image data; adjusting the first segmentation result according to the first positioning result, the positioning result of the previous image data and the segmentation result of the previous image data to obtain a second segmentation result; classifying the feature points in the current image data according to the second segmentation result; determining the overall state of the feature points of each category according to the first positioning result and the tracking map; wherein the overall state of the characteristic points of the category is a static state or a motion state; and determining the characteristic points for secondary positioning according to the overall state of the characteristic points of the category. In the implementation, the electronic device classifies the feature points according to a more accurate segmentation result, so that the situation that the feature points of different objects are classified into the same category is reduced, and the number of the feature points in a motion state in the selected feature points for secondary positioning is reduced.
In addition, determining the overall state of the feature points of each category according to the first positioning result and the tracking map specifically comprises: for each category, the following operations are performed: determining the state information of each feature point in the category according to the first positioning result and the tracking map, wherein the state information of the feature points indicates that the feature points are in a static state or a motion state; and determining the overall state of the characteristic points of the category according to the state information of each characteristic point in the category.
In addition, the first positioning result comprises first translation information and first rotation information; determining the state information of each feature point in the category according to the first positioning result and the tracking map, wherein the determining specifically comprises the following steps: projecting the characteristic points in the tracking map into the current image data according to the first translation information and the first rotation information; determining the corresponding relation between the feature points obtained by projection and the feature points in the category; for each feature point in the category, the following operations are respectively performed: determining the position relation between the characteristic points and the characteristic points obtained by projection corresponding to the characteristic points; and determining the state information of the characteristic points according to the position relation.
In addition, determining the overall state of the feature points of the category according to the state information of each feature point in the category specifically includes: judging whether the number of the feature points in the static state in the category is larger than a first threshold value or not according to the state information of each feature point in the category; if yes, determining the overall state of the feature points of the category as a static state; otherwise, determining the overall state of the characteristic points of the category as a motion state.
In addition, after determining the overall state of the feature points of each category from the first positioning result and the tracking map, the data processing method further includes:
and updating the tracking map and the long-term map according to the overall state of the feature points of each category. In the implementation, the electronic equipment adds the immovable characteristic points to the tracking map, so that the accuracy of the positioning result of the electronic equipment is improved, and the tracking stability and the track precision of the electronic equipment are further improved. In addition, the electronic device creates a long-term map for each area, avoiding repeated mapping calculations by the electronic device when accessing the same area.
In addition, according to the first positioning result, the positioning result of the previous image data, and the segmentation result of the previous image data, the first segmentation result is adjusted to obtain a second segmentation result, which specifically includes: projecting the segmentation area in the segmentation result of the previous image data to the current image data according to the positioning result and the first positioning result of the previous image data to obtain a projection area; and adjusting the first segmentation result according to the projection area to obtain a second segmentation result.
In addition, according to the first positioning result, the positioning result of the previous image data, and the segmentation result of the previous image data, the first segmentation result is adjusted to obtain a second segmentation result, which specifically includes: selecting feature points for secondary positioning from feature points in the current image data according to the first segmentation result, the first positioning result and the tracking map; determining a second positioning result according to the feature points and the tracking map for secondary positioning; projecting the segmentation area in the segmentation result of the previous image data to the current image data according to the positioning result and the second positioning result of the previous image data to obtain a projection area; and adjusting the first segmentation result according to the projection area to obtain a second segmentation result. In this implementation, the electronic device projects the segmentation result of the image data according to the more accurate second positioning result, which may reduce the projection error.
In addition, adjusting the first segmentation result according to the projection regions to obtain the second segmentation result specifically includes: determining the correspondence between the projection regions and the initial segmentation regions in the first segmentation result of the current image data; for each projection region, respectively performing the following operations: judging whether an initial segmentation region corresponding to the projection region exists; if it is determined that one exists, determining the intersection region of the projection region and its corresponding initial segmentation region, determining a first proportion of the intersection region in the corresponding initial segmentation region and a second proportion of the intersection region in the projection region, and judging whether the first proportion is smaller than the second proportion; if so, taking the projection region as a final segmentation region of the current image data, otherwise taking the corresponding initial segmentation region as a final segmentation region of the current image data; if no corresponding initial segmentation region exists, determining whether the projection region is used as a final segmentation region of the current image data according to the number of projection regions; and determining the second segmentation result according to all final segmentation regions of the current image data.
In addition, determining the correspondence between the projection region and the initial segmentation region in the first segmentation result of the current image data specifically includes: for each projection region, the following operations are respectively carried out: determining similar parameters of the projection area and each initial segmentation area; and determining an initial segmentation area corresponding to the projection area according to the initial segmentation area corresponding to the minimum similarity parameter.
In addition, according to the initial segmentation region corresponding to the minimum similarity parameter, determining the initial segmentation region corresponding to the projection region specifically includes: judging whether the minimum similar parameter is smaller than a second threshold value; if so, determining the initial segmentation region corresponding to the minimum similarity parameter as the initial segmentation region corresponding to the projection region; otherwise, determining that the projection region does not have a corresponding relation with all the initial segmentation regions.
In addition, determining whether to use the projection area as a final segmentation area of the current image data according to the number of the projection areas specifically includes: judging whether the total number of the projection areas is larger than that of the initial segmentation areas or not; if yes, the projection area without the corresponding relation is used as a final segmentation area in the second segmentation result.
In addition, determining the similar parameters of the projection region and each initial segmentation region specifically comprises: for each initial segmentation region, the following operations are respectively carried out: determining Euclidean distance between the central point of the initial segmentation region and the central point of the projection region; and determining similar parameters according to the Euclidean distance, the initial segmentation region and the projection region.
In addition, determining similar parameters according to the Euclidean distance, the initial segmentation region and the projection region specifically comprises the following steps:
according to formula a:

Scp = w1 · Dist(Rec, Rep) + w2 · Area((Rec - Rep) ∪ (Rep - Rec)) / (Area(Rec) + Area(Rep))

the similar parameter is calculated;

wherein Scp is the similar parameter; Dist(Rec, Rep) is the Euclidean distance between the center point of the initial segmentation region and the center point of the projection region; w1 is the weight of the Euclidean distance; Rec is the initial segmentation region; Rep is the projection region; Area((Rec - Rep) ∪ (Rep - Rec)) is the total number of pixels in the union of the part of the initial segmentation region that does not intersect the projection region and the part of the projection region that does not intersect the initial segmentation region; Area(Rec) is the number of pixels in the initial segmentation region; Area(Rep) is the number of pixels in the projection region; and w2 is the weight of the area ratio term.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not drawn to scale unless otherwise specified.
FIG. 1 is a flow chart of a data processing method of a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a data processing procedure according to a second embodiment of the present invention;
FIG. 3 is a flow chart of a data processing method of a second embodiment of the present invention;
FIG. 4 is a flow chart of a data processing method of a third embodiment of the present invention;
FIG. 5 is a schematic configuration diagram of a data processing apparatus according to a fourth embodiment of the present invention;
fig. 6 is a schematic configuration diagram of an electronic apparatus according to a fifth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments in order to provide a better understanding of the present application; however, the technical solutions claimed in the present application can be implemented without these technical details and with various changes and modifications based on the following embodiments.
The first embodiment of the invention relates to a data processing method in a positioning process, which is applied to electronic equipment, such as a robot, a blind person navigation device and the like. As shown in fig. 1, the data processing method includes the steps of:
step 101: current image data of an environment is acquired.
Specifically, the current image data may be two current Red Green Blue (RGB) images of the environment captured by the electronic device through a binocular camera, or a current RGB image of the environment obtained through a monocular camera. It may also be a current depth map of the environment obtained by a depth camera, for example a Time of Flight (TOF) camera, or a Red Green Blue-Depth (RGB-D) image containing depth information captured through a depth camera, where the RGB-D image includes an RGB image and a depth map. The method for capturing the current image data of the environment and the specific content of the current image data are not limited in this embodiment.
Step 102: and positioning according to the feature points in the current image data and the tracking map, and determining a first positioning result.
Specifically, the electronic device may determine the first positioning result based on the current image data using a simultaneous localization and mapping technique based on Oriented FAST and Rotated BRIEF feature descriptors (ORB-SLAM) or another positioning algorithm.
In a specific implementation, the current image data includes an RGB image and a depth map. The electronic device extracts feature points in the RGB image and aligns them with the depth map to obtain three-dimensional coordinates for each feature point in the RGB image. After obtaining the three-dimensional coordinates of each feature point, the electronic device obtains a first positioning result by using a method of minimizing a reprojection error according to the three-dimensional coordinates of the feature points in the current image data and the three-dimensional coordinates of the feature points in the tracking map. The first positioning result comprises first rotation information and first translation information.
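As an illustration of this step, the following is a minimal sketch in Python, assuming OpenCV, ORB features and a RANSAC PnP solver; the function names, matching strategy and depth handling are illustrative assumptions, not the patent's prescribed implementation.

```python
import cv2
import numpy as np

def first_positioning(rgb, depth, map_points_3d, map_descriptors, K, depth_factor=1000.0):
    """Return the first positioning result (rotation R, translation t)."""
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(rgb, None)

    # Match current-frame descriptors against tracking-map descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors, map_descriptors)

    obj_pts, img_pts = [], []
    for m in matches:
        u, v = keypoints[m.queryIdx].pt
        # Align the feature point with the depth map; skip points without valid depth.
        if depth[int(v), int(u)] / depth_factor <= 0:
            continue
        obj_pts.append(map_points_3d[m.trainIdx])  # 3D point from the tracking map
        img_pts.append((u, v))                     # its 2D observation in the frame

    # Estimate the pose by minimizing reprojection error (RANSAC PnP).
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(obj_pts, np.float32), np.asarray(img_pts, np.float32), K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec  # first rotation information and first translation information
```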
Step 103: and segmenting the current image data, and determining a first segmentation result of the current image data.
In a specific implementation, the electronic device trains a neural network for classification on an image dataset (the MS COCO dataset). The electronic device segments each acquired frame of the RGB image using the Fully Convolutional Instance-aware Semantic Segmentation (FCIS) algorithm together with the neural network for classification, and determines a bounding box for each object in the RGB image. For the bounding box of each object, if a pixel value within the bounding box is larger than a preset pixel threshold, that pixel is considered part of the object; otherwise it is marked as background. Repeating the above operations yields the first segmentation result of the RGB image.
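The bounding-box thresholding described above can be sketched as follows; the per-instance score maps stand in for the FCIS network output, and the names and threshold value are illustrative assumptions.

```python
import numpy as np

def first_segmentation(score_maps, boxes, pixel_threshold=0.5):
    """score_maps: list of HxW per-instance score maps; boxes: (x0, y0, x1, y1).
    Returns one boolean initial segmentation region per object."""
    regions = []
    for score, (x0, y0, x1, y1) in zip(score_maps, boxes):
        mask = np.zeros(score.shape, dtype=bool)
        window = score[y0:y1, x0:x1]
        # Pixels above the threshold belong to the object; the rest are background.
        mask[y0:y1, x0:x1] = window > pixel_threshold
        regions.append(mask)
    return regions
```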
It should be noted that, in practical applications, those skilled in the art may train a neural network by using other data sets, and may also segment the current image data by using other semantic segmentation algorithms, and the method for training the neural network and the method for segmenting the current image data are not limited in this embodiment.
Step 104: selecting feature points for secondary positioning from the feature points in the current image data according to the first segmentation result, the first positioning result and the tracking map; and determining a second positioning result according to the feature points for secondary positioning and the tracking map.
Specifically, the current image data may include feature points of objects that move, and such feature points affect the accuracy of the positioning result. The electronic device therefore needs to select feature points of objects that do not move from the feature points in the current image data, according to the first segmentation result, the first positioning result and the tracking map, as the feature points for secondary positioning. It then performs positioning again based on the selected feature points for secondary positioning and the tracking map, determines a second positioning result, and uses the second positioning result as the final positioning result of the current image data.
The following illustrates a method for determining feature points for secondary positioning by an electronic device.
In specific implementation, the electronic equipment classifies the feature points in the current image data according to the first segmentation result; determining the overall state of the feature points of each category according to the first positioning result and the tracking map; wherein the overall state of the characteristic points of the category is a static state or a motion state; and determining the characteristic points for secondary positioning according to the overall state of the characteristic points of the category.
It is worth mentioning that the feature points in the current image data are screened, and the feature points of the moving object are removed, so that the influence of the feature points of the moving object on the positioning result is avoided, and the accuracy of the positioning result is improved.
In another specific implementation, the electronic device stores attributes of different objects in advance, where the attribute of an object indicates whether the object can move; for example, when the object is the background, its attribute is immovable. The electronic device classifies the feature points in the current image data according to the first segmentation result or the second segmentation result, and determines the object corresponding to each category of feature points. For each category, the following operations are performed: determining the attribute of the object corresponding to the feature points of the category; if the attribute indicates that the object cannot move, determining that the overall state of the feature points of the category is a static state; if the attribute indicates that the object can move, determining the overall state of the feature points of the category according to the first positioning result and the tracking map. The electronic device then determines the feature points for secondary positioning according to the overall state of the feature points of each category.
The method for determining the feature points for secondary positioning by the electronic device according to the overall state of the feature points of the category includes, but is not limited to, the following two methods:
the method comprises the following steps: the electronic device deletes the feature points of the category of which the overall state is the motion state, and takes the feature points of the remaining category as the feature points for secondary positioning.
Method 2: the electronic device discards the image of the segmentation region corresponding to each category whose overall state is a motion state, that is, it removes the image of that segmentation region and determines the feature points for secondary positioning from the remaining image. A brief sketch of method 1 is given below.
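A minimal sketch of method 1, assuming the points have already been grouped by segmentation region and each class has received an overall state; the names and the 'static'/'moving' labels are illustrative.

```python
def select_secondary_points(points, point_class, class_state):
    """points: feature points; point_class: class id per point;
    class_state: dict mapping class id -> 'static' or 'moving'.
    Only points of classes whose overall state is static are kept."""
    return [p for p, c in zip(points, point_class) if class_state.get(c) == 'static']
```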
The state information of a feature point indicating a motion state means that the position of the feature point in the current image data has changed relative to its position in the previous image data; similarly, the overall state of the feature points of a category being a motion state means that the position of the object corresponding to that category in the current image data has changed relative to its position in the previous image data.
It should be noted that, in contrast to a method that directly determines the overall state of the feature points of a category from the attributes of the corresponding object, in this embodiment the electronic device determines which categories of feature points have actually moved according to the tracking map and the first positioning result. This avoids deleting feature points of categories that can move but have not actually moved in the current image data, thereby ensuring a sufficient number of effective feature points for secondary positioning and further improving the accuracy of the positioning result.
A method for determining the overall state of the feature point of each category by the electronic device based on the first positioning result and the tracking map will be exemplified below.
The electronic equipment respectively performs the following operations for each category: and the electronic equipment determines the state information of each feature point in the category according to the first positioning result and the tracking map, wherein the state information of the feature points indicates that the feature points are in a static state or a motion state. And the electronic equipment determines the overall state of the characteristic points of the category according to the state information of each characteristic point in the category.
In a specific implementation, the electronic device projects the feature points in the tracking map into the current image data according to the first translation information and the first rotation information in the first positioning result, and determines the correspondence between the projected feature points and the feature points in the category. For each feature point in the category, the following operations are respectively performed: determining the positional relationship between the feature point and its corresponding projected feature point, and determining the state information of the feature point according to that positional relationship. For example, the electronic device calculates the Euclidean distance between the feature point and its corresponding projected feature point; if the calculated Euclidean distance is smaller than a third threshold, the feature point is determined to be in a static state, otherwise it is determined to be in a motion state. The electronic device then judges, according to the state information of each feature point in the category, whether the number of feature points in a static state in the category is larger than a first threshold; if so, the overall state of the feature points of the category is determined to be a static state, otherwise it is determined to be a motion state. The first threshold and the third threshold can be set as required.
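A minimal sketch of this state determination, assuming the tracking-map points have already been projected and matched to the category's feature points; the threshold values are placeholders, since the patent leaves them configurable.

```python
import numpy as np

def point_states(class_points, projected_points, third_threshold=3.0):
    """class_points, projected_points: corresponding Nx2 pixel coordinates.
    A point is static when it lies close to its projected counterpart."""
    dist = np.linalg.norm(class_points - projected_points, axis=1)
    return dist < third_threshold  # True = static, False = moving

def class_overall_state(states, first_threshold=10):
    # The category is static overall when its static-point count exceeds the first threshold.
    return 'static' if np.count_nonzero(states) > first_threshold else 'moving'
```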
In a specific implementation, after determining the overall state of the feature points of each category, the electronic device may update the tracking map and the long-term map according to the overall state of the feature points of each category. Specifically, the electronic device creates and maintains two kinds of maps during the positioning process: a tracking map and a long-term map. The tracking map is used for real-time positioning of the electronic device.
During positioning, the electronic device determines the object corresponding to the feature points of each category, and determines which objects can move and which cannot according to the attribute of each object. For each object that cannot move, the following operations are respectively performed: projecting the feature points of the object onto the tracking map; judging whether feature points of the object already exist on the tracking map; if they do not exist, adding the feature points of the object to the tracking map; if they do exist, determining the correspondence between the projected feature points of the object and the feature points of the object already in the tracking map, and adding to the tracking map any projected feature point for which no corresponding feature point is found.
For each movable object, the following operations are respectively performed: projecting the feature points of the object onto the tracking map; judging whether feature points of the object exist on the tracking map; if they exist and the overall state of the category corresponding to the object is determined to be a motion state, deleting the feature points of the object from the tracking map; if they do not exist, or exist while the overall state of the category corresponding to the object is a static state, no operation is performed.
The long-term map is designed for long-term use. The electronic device can create the long-term map during its first navigation into a new area and reuse it later to avoid repeated mapping calculations when accessing the same area. Since the long-term map needs to be reused at a later stage, the electronic device adds to it only feature points whose positions remain fixed over time, that is, feature points of categories corresponding to objects whose attribute is immovable (such as the background), so as to provide stable environmental information.
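The update rules for the two maps can be sketched as follows, assuming each map is a dict keyed by feature-point id and each feature carries object, point and class ids; these data structures are illustrative simplifications of the projection-based correspondence described above.

```python
def update_maps(tracking_map, long_term_map, features, attributes, class_state):
    for f in features:
        movable = attributes[f.object_id]  # stored attribute: movable or not
        if not movable:
            # Immovable objects: add missing points to both maps; the long-term
            # map only ever receives points whose positions stay fixed over time.
            if f.point_id not in tracking_map:
                tracking_map[f.point_id] = f
            if f.point_id not in long_term_map:
                long_term_map[f.point_id] = f
        else:
            # Movable objects: if the point exists in the tracking map and its
            # category is currently in a motion state, remove it; otherwise leave it.
            if f.point_id in tracking_map and class_state[f.class_id] == 'moving':
                del tracking_map[f.point_id]
    return tracking_map, long_term_map
```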
It is worth mentioning that the feature points of the unmovable object are helpful for the electronic device to calculate the positioning result, and the feature points of the unmovable object are added into the tracking map, so that the accuracy of the positioning result of the electronic device is improved, and the tracking stability and the track precision of the electronic device are improved.
It is worth mentioning that the electronic device creates a long-term map for each area, avoiding repeated mapping calculations when the electronic device accesses the same area.
Step 105: acquiring a segmentation result of the previous image data and a positioning result of the previous image data; and adjusting the first segmentation result according to the first positioning result, the positioning result of the previous image data and the segmentation result of the previous image data to obtain a second segmentation result.
Specifically, under normal circumstances there is no great change between the previous image data and the current image data, so the segmentation result of the previous image data should be similar or even identical to that of the current image data. When the electronic device finds that the segmentation result of the previous image data differs significantly from that of the current image data, it adjusts the first segmentation result in time to obtain the second segmentation result and uses the second segmentation result as the final segmentation result, thereby improving the accuracy of the segmentation process.
In specific implementation, the electronic device projects a segmentation region in a segmentation result of previous image data to current image data according to a positioning result and a first positioning result of the previous image data to obtain a projection region; and adjusting the first segmentation result according to the projection area to obtain a second segmentation result.
It should be noted that, as can be understood by those skilled in the art, in practical applications, when the current image data is the image data of the first frame acquired in the positioning process, since the previous image data does not exist, the first segmentation result of the current image data cannot be adjusted according to the segmentation result of the previous image, and therefore, after the first segmentation result of the current image data is obtained, the electronic device directly takes the first segmentation result of the current image data as the final segmentation result of the current image data.
In a specific implementation, the positioning result of the previous image data is the final positioning result of the previous image data, and the segmentation result of the previous image data is the final segmentation result of the previous image data.
The method for obtaining the projection area by the electronic device is exemplified below. After obtaining the positioning result of the current image data, that is, the rotation information and the translation information of the current image data, the electronic device obtains the positioning result of the previous image data, that is, the rotation information and the translation information of the previous image data, and projects each feature point in the segmented region in the previous image data into the current image data according to the formula b.
Formula b:

p'u = fx · P'x / P'z + cx
p'v = fy · P'y / P'z + cy

where (P'x, P'y, P'z)^T = Rc^(-1) · (Rf · (Px, Py, Pz)^T + T).

In formula b, p'u is the abscissa of the projected feature point, p'v is the ordinate of the projected feature point, and fx, fy, cx and cy are intrinsic parameters of the camera. Rc^(-1) is the inverse matrix of Rc, Rc is the rotation information of the current image data, and Rf is the rotation information of the previous image data. T = Tf - Tc, where Tf is the translation information of the previous image data and Tc is the translation information of the current image data. Pz = D(pu, pv)/DF, where D(pu, pv) is the depth data of the feature point (pu, pv) before projection, pu is the abscissa of the feature point before projection, pv is the ordinate of the feature point before projection, and DF is the depth factor. Px = (pu - cx) · Pz/fx and Py = (pv - cy) · Pz/fy.
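A minimal sketch of formula b as reconstructed above; the matrix layout (in particular applying Rc^(-1) to the translated point) is part of the reconstruction and therefore an assumption.

```python
import numpy as np

def project_point(pu, pv, depth_map, K, R_f, T_f, R_c, T_c, depth_factor=1000.0):
    """Project feature point (pu, pv) of the previous frame into the current frame."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    # Back-project with the depth data of the feature point before projection.
    Pz = depth_map[int(pv), int(pu)] / depth_factor
    Px = (pu - cx) * Pz / fx
    Py = (pv - cy) * Pz / fy
    P = np.array([Px, Py, Pz])
    # Relative motion between the two frames: rotation Rc^(-1) * Rf, T = Tf - Tc.
    P2 = np.linalg.inv(R_c) @ (R_f @ P + (T_f - T_c))
    # Re-project into the current image.
    pu2 = fx * P2[0] / P2[2] + cx
    pv2 = fy * P2[1] / P2[2] + cy
    return pu2, pv2
```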
The following illustrates an exemplary method for adjusting the first segmentation result according to the projection area by the electronic device to obtain the second segmentation result.
The electronic device determines the correspondence between the projection regions and the initial segmentation regions in the first segmentation result of the current image data. For each projection region, the following operations are respectively performed: judging whether an initial segmentation region corresponding to the projection region exists. If one exists, the electronic device determines the intersection region of the projection region and its corresponding initial segmentation region, determines a first proportion of the intersection region in the corresponding initial segmentation region and a second proportion of the intersection region in the projection region, and judges whether the first proportion is smaller than the second proportion; if so, the projection region is taken as a final segmentation region of the current image data, otherwise the corresponding initial segmentation region is taken as a final segmentation region of the current image data. If no corresponding initial segmentation region exists, the electronic device determines whether the projection region is used as a final segmentation region of the current image data according to the number of projection regions. After the electronic device completes the above operations on all the projection regions, the second segmentation result is determined according to all the final segmentation regions of the current image data.
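For one matched pair of regions, the adjustment rule above can be sketched as follows; the masks are boolean arrays and the helper name is illustrative.

```python
import numpy as np

def choose_final_region(initial_mask, projection_mask):
    inter = np.logical_and(initial_mask, projection_mask).sum()
    first_proportion = inter / max(initial_mask.sum(), 1)     # share of the initial region
    second_proportion = inter / max(projection_mask.sum(), 1)  # share of the projection region
    # If the intersection covers more of the projection region than of the
    # initial region, the projection region is kept as the final region.
    return projection_mask if first_proportion < second_proportion else initial_mask
```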
In a specific implementation, the method for determining the corresponding relationship between the projection region and the initial segmentation region by the electronic device is as follows: the electronic equipment respectively performs the following operations for each projection area: determining similar parameters of the projection area and each initial segmentation area; and determining an initial segmentation area corresponding to the projection area according to the initial segmentation area corresponding to the minimum similarity parameter. Specifically, after determining the minimum similar parameter, the electronic device determines whether the minimum similar parameter is smaller than a second threshold; if so, determining the initial segmentation region corresponding to the minimum similarity parameter as the initial segmentation region corresponding to the projection region; otherwise, determining that the projection region does not have a corresponding relation with all the initial segmentation regions.
In specific implementation, after determining that there is no initial segmentation region corresponding to the projection region, the electronic device determines whether the total number of the projection regions is greater than the total number of the initial segmentation regions; if yes, the projection area without the corresponding relation is used as a final segmentation area in the second segmentation result.
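A minimal sketch of the correspondence step just described; similar_parameter is the formula-a sketch given further below, and the bookkeeping is an illustrative assumption.

```python
def match_regions(projections, initials, similar_parameter, second_threshold):
    matches, unmatched = {}, []
    for i, rep in enumerate(projections):
        scores = [similar_parameter(rec, rep) for rec in initials]
        j = min(range(len(scores)), key=scores.__getitem__, default=None)
        if j is not None and scores[j] < second_threshold:
            matches[i] = j       # minimum similar parameter below the second threshold
        else:
            unmatched.append(i)  # no corresponding initial segmentation region
    # Unmatched projections become final regions only when the projections
    # outnumber the initial segmentation regions.
    extra = unmatched if len(projections) > len(initials) else []
    return matches, extra
```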
A method for determining the similarity parameter between the projection region and the initial segmentation region by the electronic device is described below.
The electronic equipment respectively performs the following operations for each initial segmentation area: determining Euclidean distance between the central point of the initial segmentation region and the central point of the projection region; and determining similar parameters according to the Euclidean distance, the initial segmentation region and the projection region.
In a specific implementation, the electronic device calculates the similar parameter according to formula a, which is as follows:

Formula a:

Scp = w1 · Dist(Rec, Rep) + w2 · Area((Rec - Rep) ∪ (Rep - Rec)) / (Area(Rec) + Area(Rep))

wherein Scp is the similar parameter; Dist(Rec, Rep) is the Euclidean distance between the center point of the initial segmentation region and the center point of the projection region; w1 is the weight of the Euclidean distance; Rec is the initial segmentation region; Rep is the projection region; Area((Rec - Rep) ∪ (Rep - Rec)) is the total number of pixels in the union of the part of the initial segmentation region that does not intersect the projection region and the part of the projection region that does not intersect the initial segmentation region; Area(Rec) is the number of pixels in the initial segmentation region; Area(Rep) is the number of pixels in the projection region; and w2 is the weight of the area ratio term. The specific values of w1 and w2 can be set as required, and the second threshold can be set according to the values of w1 and w2.
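A minimal sketch of formula a as reconstructed above; representing regions as boolean masks and the denominator Area(Rec) + Area(Rep) are assumptions of the reconstruction.

```python
import numpy as np

def similar_parameter(rec, rep, w1=1.0, w2=1.0):
    def center(mask):
        ys, xs = np.nonzero(mask)
        return np.array([xs.mean(), ys.mean()])
    dist = np.linalg.norm(center(rec) - center(rep))  # Dist(Rec, Rep)
    # (Rec - Rep) U (Rep - Rec) is the symmetric difference of the two masks.
    sym_diff = np.logical_xor(rec, rep).sum()
    return w1 * dist + w2 * sym_diff / (rec.sum() + rep.sum())
```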
It should be noted that, as will be understood by those skilled in the art, steps 104 and 105 are not both required; either one of them, or both, may be selectively performed. When the electronic device performs step 104, the accuracy of positioning can be improved; when it performs step 105, the accuracy of semantic segmentation can be improved.
The above description is only for illustrative purposes and does not limit the technical aspects of the present invention.
Compared with the prior art, in the data processing method in the positioning process provided by the embodiment, the electronic device can optimize the segmentation result according to the positioning result, and can also optimize the positioning result according to the segmentation result. The electronic equipment selects the feature points in the current image data according to the segmentation result, and performs secondary positioning according to the selected feature points, so that compared with a method for positioning by directly using the feature points in the current image data, the method reduces the influence of the feature points (such as the feature points of a moving object) which do not meet the requirements in the current image data on the positioning result, and improves the accuracy of the positioning result. In addition, in the present embodiment, the influence of motion blur caused by the motion of an object or an electronic device on the segmentation accuracy is sufficiently considered, and the accuracy of the segmentation result of the current image data is improved by adjusting the segmentation result of the current image data using more effective information (the positioning result of the current image data, the positioning result of the previous image data, and the segmentation result of the previous image).
A second embodiment of the present invention relates to a data processing method in a positioning process, and is substantially the same as the first embodiment; the main difference is as follows. In the first embodiment, step 105 is executed after step 104, and step 104 is described taking as an example the case where the electronic device performs the secondary positioning using the first segmentation result. In the second embodiment of the present invention, step 105 is executed before step 104, and in step 104 the second positioning result is determined using the second segmentation result obtained in step 105.
In the positioning process, a schematic diagram of the data processing process is shown in fig. 2, where the coarse positioning is the first positioning, the fine positioning is the secondary positioning, the coarse segmentation is the first segmentation, and the fine segmentation is the second segmentation. Specifically, as shown in fig. 3, in this embodiment the data processing method includes steps 201 to 208, wherein steps 201 to 203 and step 204 are substantially the same as steps 101 to 103 and step 105 of the first embodiment, respectively; the differences are mainly introduced below:
step 201: current image data of an environment is acquired.
Step 202: and positioning according to the feature points in the current image data and the tracking map, and determining a first positioning result.
Step 203: and segmenting the current image data, and determining a first segmentation result of the current image data.
Step 204: acquiring a segmentation result of the previous image data and a positioning result of the previous image data; and adjusting the first segmentation result according to the first positioning result, the positioning result of the previous image data and the segmentation result of the previous image data to obtain a second segmentation result.
Specifically, the electronic device projects a segmentation region in a segmentation result of previous image data to current image data according to a positioning result of the previous image data and a first positioning result to obtain a projection region; and adjusting the first segmentation result according to the projection area to obtain a second segmentation result.
It should be noted that, as can be understood by those skilled in the art, when the current image data is the image data of the first frame acquired in the positioning process, the electronic device cannot execute step 204, and in this case, the electronic device may classify the feature points in the current image data according to the first segmentation result.
Step 205: and classifying the feature points in the current image data according to the second segmentation result.
The following exemplifies a method of classifying the feature points in the current image data according to the second segmentation result in combination with the actual situation.
Assume that the current image data includes feature point A, feature point B and feature point C, and that the second segmentation result divides the current image data into region 1 and region 2. Feature points A and B are located in region 1, so they are classified into category 1. Feature point C is not located in region 1 but in region 2, so it is classified into category 2.
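A minimal sketch of this classification, assuming the segmentation regions are given as boolean masks indexed by (row, col); the names are illustrative.

```python
def classify_points(points, region_masks):
    """points: list of (u, v) pixel coordinates; region_masks: list of masks.
    Returns a category index per point (None when no region contains it)."""
    categories = []
    for u, v in points:
        cat = next((i for i, m in enumerate(region_masks) if m[int(v), int(u)]), None)
        categories.append(cat)
    return categories
```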
Step 206: and determining the overall state of the characteristic points of each category according to the first positioning result and the tracking map.
Specifically, the overall state of the feature points of the category is a stationary state or a moving state. The method for determining the overall state of the feature points of each category by the electronic device is substantially the same as the method for determining the overall state of the feature points of each category by the electronic device in the first embodiment, and therefore details are not repeated here, and a person skilled in the art may refer to the relevant content of the method for determining the overall state of the feature points of each category by the electronic device in the first embodiment to execute step 206.
Step 207: and determining the characteristic points for secondary positioning according to the overall state of the characteristic points of the category.
Specifically, according to the overall state of the feature points of each category, the electronic device takes the feature points of the categories whose overall state is a static state as the feature points for secondary positioning.
It is worth mentioning that, because the second segmentation result is more accurate than the first segmentation result, the feature points for secondary positioning determined from the second segmentation result are chosen more reliably, which further improves the accuracy of the positioning result of the electronic device.
Step 208: determine a second positioning result according to the feature points for secondary positioning and the tracking map.
The data processing method has been described above; the data processing method provided by this embodiment is now compared with other positioning and segmentation algorithms using experimental data. In the experiment, two persons moved in the environment, and positioning was performed using the data processing method provided in this embodiment, the ORB-SLAM2 algorithm, and the DynaSLAM algorithm, respectively; the absolute trajectory errors of the positioning results are shown in Table 1. For all images in a large-scale 3D dataset (the ScanNet dataset), the mean Average Precision (mAP) and mean Intersection over Union (mIoU) of the segmentation results generated by the data processing method provided in this embodiment and by the FCIS algorithm were calculated; the results are shown in Table 2. As can be seen from Table 1, the improvement of the positioning method of this embodiment on the walking dataset is evident: compared with the ORB-SLAM2 and DynaSLAM algorithms, the data processing method of this embodiment segments and discards the moving objects that produce dynamic feature points and thereby removes those dynamic feature points, achieving higher precision; this is why it outperforms the DynaSLAM algorithm. As can be seen from Table 2, compared with the FCIS algorithm, the data processing method provided by this embodiment also improves the segmentation accuracy.
TABLE 1
[Table 1, published as an image in the original document, lists the absolute trajectory errors of the three methods on the walking sequences; its numerical values are not reproducible here.]
TABLE 2

Metric   FCIS     Data processing method of this embodiment
mAP      0.6314   0.6504
mIoU     0.5620   0.5751
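For reference, the mIoU figure reported in Table 2 follows the standard definition, which the following generic Python sketch computes from predicted and ground-truth label images; this is illustrative only and not code from this embodiment.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean Intersection over Union, averaged over the classes that occur
    in the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))
```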
The above description is merely illustrative and does not limit the technical solution of the present invention.
Compared with the prior art, in the data processing method in the positioning process provided by this embodiment, the electronic device can optimize the segmentation result according to the positioning result and can likewise optimize the positioning result according to the segmentation result. The electronic device selects feature points in the current image data according to the segmentation result and performs secondary positioning with the selected feature points; compared with positioning directly with all feature points in the current image data, this reduces the influence of unsuitable feature points (such as the feature points of a moving object) on the positioning result and improves its accuracy. In addition, this embodiment fully considers the motion blur caused by the motion of objects or of the electronic device, which degrades segmentation accuracy, and improves the segmentation result of the current image data by adjusting it with more effective information (the positioning result of the current image data, the positioning result of the previous image data, and the segmentation result of the previous image data). Finally, because the electronic device classifies the feature points according to this more accurate segmentation result, feature points of different objects are less often grouped into the same category, which further reduces the number of moving feature points among those selected for secondary positioning.
A third embodiment of the present invention relates to a data processing method in a positioning process. The third embodiment is substantially the same as the first embodiment and mainly differs from it as follows: in the first embodiment, step 105 is described taking as an example that the electronic device adjusts the first segmentation result using the first positioning result; in the present embodiment, the first segmentation result is instead adjusted using the second positioning result, which is obtained in step 304 below.
Specifically, as shown in fig. 4, the data processing method of the present embodiment includes steps 301 to 307, where steps 301 to 304 are substantially the same as steps 101 to 104 of the first embodiment; the description below therefore focuses mainly on the differences.
Step 301: acquire current image data of the environment.
Step 302: perform positioning according to the feature points in the current image data and the tracking map, and determine a first positioning result.
Step 303: segment the current image data, and determine a first segmentation result of the current image data.
Step 304: select feature points for secondary positioning from the feature points in the current image data according to the first segmentation result, the first positioning result, and the tracking map, and determine a second positioning result according to the feature points for secondary positioning and the tracking map.
Step 305: acquire the segmentation result of the previous image data and the positioning result of the previous image data.
Step 306: project the segmentation region in the segmentation result of the previous image data to the current image data according to the positioning result of the previous image data and the second positioning result, to obtain a projection region.
Specifically, the process by which the electronic device obtains the projection region is substantially the same as in the first embodiment (compare also the projection sketch after step 204 above); those skilled in the art may refer to the related description in the first embodiment, and the details are not repeated here.
Step 307: adjust the first segmentation result according to the projection region to obtain a second segmentation result.
Specifically, the process by which the electronic device adjusts the first segmentation result according to the projection region is substantially the same as in the first embodiment; those skilled in the art may refer to the related description in the first embodiment, and the details are not repeated here. A sketch of this adjustment follows.
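As a pointer to that description, the adjustment rule of the first embodiment (also recited in claim 6 below) can be sketched as follows for matched region pairs: the intersection of each pair is computed, and whichever of the initial segmentation region and the projection region is better covered by the intersection is kept as the final region. The sketch assumes regions are given as boolean masks and that the correspondence between projection regions and initial regions (claim 7) has already been established; the names are illustrative.

```python
import numpy as np

def adjust_segmentation(init_masks, proj_masks, matches):
    """For each projection region with a corresponding initial segmentation
    region, keep the initial region if its share of the intersection is the
    larger one, otherwise keep the projection region (rule of claim 6)."""
    final = []
    for p_idx, i_idx in matches.items():      # projection index -> initial index
        mask_p, mask_i = proj_masks[p_idx], init_masks[i_idx]
        inter = np.logical_and(mask_i, mask_p).sum()
        first_ratio = inter / mask_i.sum()    # share of the initial region covered
        second_ratio = inter / mask_p.sum()   # share of the projection region covered
        final.append(mask_p if first_ratio < second_ratio else mask_i)
    return final
```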
The above description is merely illustrative and does not limit the technical solution of the present invention.
Compared with the prior art, in the data processing method in the positioning process provided by this embodiment, the electronic device can optimize the segmentation result according to the positioning result and can likewise optimize the positioning result according to the segmentation result. The electronic device selects feature points in the current image data according to the segmentation result and performs secondary positioning with the selected feature points; compared with positioning directly with all feature points in the current image data, this reduces the influence of unsuitable feature points (such as the feature points of a moving object) on the positioning result and improves its accuracy. By using the positioning result of the current image data together with the positioning result of the previous image data, the electronic device can adjust the segmentation result of the current image data according to the segmentation result of the previous image data, which improves the accuracy of the segmentation result of the current image data. In addition, because the electronic device projects the segmentation result of the previous image data according to the more accurate second positioning result, the projection error can be reduced.
The steps of the above methods are divided merely for clarity of description; in implementation, several steps may be combined into one, or a single step may be split into multiple steps, and all such variants fall within the protection scope of this patent as long as they preserve the same logical relationship. Adding insignificant modifications to the algorithm or flow, or introducing insignificant design changes, without altering the core design of the algorithm or flow, likewise falls within the protection scope of this patent.
A fourth embodiment of the present invention relates to a data processing apparatus in a positioning process. As shown in fig. 5, the apparatus includes: an acquisition module 401, a first positioning module 402, a first segmentation module 403, and a second positioning module 404 and/or a second segmentation module 405. The acquisition module 401 is configured to acquire current image data of the environment. The first positioning module 402 is configured to perform positioning according to the feature points in the current image data and the tracking map and to determine a first positioning result. The first segmentation module 403 is configured to segment the current image data and to determine a first segmentation result of the current image data. The second positioning module 404 is configured to select feature points for secondary positioning from the feature points in the current image data according to the first segmentation result, the first positioning result, and the tracking map, and to determine a second positioning result according to the feature points for secondary positioning and the tracking map. The second segmentation module 405 is configured to acquire the segmentation result of the previous image data and the positioning result of the previous image data, and to adjust the first segmentation result according to the first positioning result, the positioning result of the previous image data, and the segmentation result of the previous image data to obtain a second segmentation result.
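The module structure can be pictured with a minimal structural sketch; the class and attribute names below are hypothetical and merely mirror the modules of fig. 5.

```python
class DataProcessingApparatus:
    """Structural sketch of the apparatus in fig. 5; each attribute stands
    for the module with the same reference numeral."""
    def __init__(self, acquisition, first_positioning, first_segmentation,
                 second_positioning=None, second_segmentation=None):
        self.acquisition = acquisition                  # module 401
        self.first_positioning = first_positioning      # module 402
        self.first_segmentation = first_segmentation    # module 403
        # Modules 404 and 405 are optional: the apparatus may carry either
        # one of them, or both (see the note that follows).
        self.second_positioning = second_positioning    # module 404
        self.second_segmentation = second_segmentation  # module 405
```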
It should be noted that fig. 5 takes as an example the case in which the data processing apparatus includes both the second positioning module 404 and the second segmentation module 405; those skilled in the art will understand that, in practical applications, the data processing apparatus may instead be provided with only one of the second positioning module 404 and the second segmentation module 405.
It should be noted that this embodiment is an apparatus embodiment corresponding to the first, second, and third embodiments and can be implemented in cooperation with them. The related technical details mentioned in the first, second, and third embodiments remain valid in this embodiment and, to reduce repetition, are not described again here; correspondingly, the related technical details mentioned in this embodiment also apply to the first, second, and third embodiments.
It should also be noted that each module referred to in this embodiment is a logical module; in practical applications, one logical unit may be one physical unit, may be a part of one physical unit, or may be implemented by a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, units that are not closely related to solving the technical problem proposed by the present invention are not introduced in this embodiment, but this does not mean that no other units exist in this embodiment.
A fifth embodiment of the present invention relates to an electronic device, as shown in fig. 6, including: at least one processor 501; and a memory 502 communicatively connected to the at least one processor 501; wherein the memory 502 stores instructions executable by the at least one processor 501, and the instructions, when executed by the at least one processor 501, enable the at least one processor 501 to execute the data processing method in the positioning process according to the above embodiments.
The electronic device includes one or more processors 501 and a memory 502; fig. 6 takes one processor 501 as an example. The processor 501 and the memory 502 may be connected by a bus or in other ways; fig. 6 takes connection by a bus as an example. The memory 502, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The processor 501 runs the non-volatile software programs, instructions, and modules stored in the memory 502, thereby executing the various functional applications and data processing of the device, that is, implementing the data processing method in the positioning process.
The memory 502 may include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function, and the data storage area may store option lists and the like. Further, the memory 502 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. In some embodiments, the memory 502 may optionally include memory located remotely from the processor 501, and such remote memory may be connected to the device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 502, which when executed by the one or more processors 501, perform the data processing method in the positioning process in any of the method embodiments described above.
The above product can execute the method provided by the embodiments of the present application and has the functional modules and beneficial effects corresponding to the execution of the method; for technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiments of the present application.
A sixth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the method embodiments described above.
That is, as those skilled in the art will understand, all or part of the steps of the methods in the above embodiments may be completed by a program instructing the relevant hardware; the program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods provided by the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the above embodiments are specific examples of carrying out the invention and that, in practical applications, various changes in form and detail may be made to them without departing from the spirit and scope of the invention.

Claims (14)

1. A data processing method in a positioning process is characterized by comprising the following steps:
acquiring current image data of an environment;
positioning according to the feature points in the current image data and a tracking map, and determining a first positioning result;
segmenting the current image data, and determining a first segmentation result of the current image data;
selecting feature points for secondary positioning from feature points in the current image data according to the first segmentation result, the first positioning result and the tracking map; determining a second positioning result according to the feature points for secondary positioning and the tracking map;
and/or,
acquiring a segmentation result of the previous image data and a positioning result of the previous image data; adjusting the first segmentation result according to the first positioning result, the positioning result of the previous image data and the segmentation result of the previous image data to obtain a second segmentation result;
wherein, the selecting, according to the first segmentation result, the first positioning result, and the tracking map, a feature point for secondary positioning from feature points in the current image data specifically includes:
classifying the feature points in the current image data according to the first segmentation result; determining the overall state of the feature points of each category according to the first positioning result and the tracking map; wherein the overall state of the feature points of the category is a static state or a motion state; determining the feature points for secondary positioning according to the overall state of the feature points of the category; or, acquiring the segmentation result of the previous image data and the positioning result of the previous image data; adjusting the first segmentation result according to the first positioning result, the positioning result of the previous image data and the segmentation result of the previous image data to obtain a second segmentation result; classifying the feature points in the current image data according to the second segmentation result; determining the overall state of the feature points of each category according to the first positioning result and the tracking map; wherein the overall state of the feature points of the category is a static state or a motion state; determining the feature points for secondary positioning according to the overall state of the feature points of the category;
the adjusting the first segmentation result according to the first positioning result, the positioning result of the previous image data, and the segmentation result of the previous image data to obtain a second segmentation result specifically includes:
projecting the segmentation area in the segmentation result of the previous image data to the current image data according to the positioning result of the previous image data and the first positioning result to obtain a projection area; adjusting the first segmentation result according to the projection area to obtain a second segmentation result; or selecting feature points for secondary positioning from feature points in the current image data according to the first segmentation result, the first positioning result and the tracking map; determining a second positioning result according to the feature points for secondary positioning and the tracking map; projecting the segmentation area in the segmentation result of the previous image data to the current image data according to the positioning result of the previous image data and the second positioning result to obtain a projection area; and adjusting the first segmentation result according to the projection area to obtain a second segmentation result.
2. The data processing method according to claim 1, wherein the determining an overall state of the feature points of each category according to the first positioning result and the tracking map specifically comprises:
for each category, the following operations are performed: determining the state information of each feature point in the category according to the first positioning result and the tracking map, wherein the state information of a feature point indicates whether the feature point is in a static state or a motion state; and determining the overall state of the feature points of the category according to the state information of each feature point in the category.
3. The data processing method according to claim 2, wherein the first positioning result includes first translation information and first rotation information;
the determining the state information of each feature point in the category according to the first positioning result and the tracking map specifically includes:
projecting the feature points in the tracking map into the current image data according to the first translation information and the first rotation information;
determining the corresponding relation between the feature points obtained by projection and the feature points in the category;
for each feature point in the category, respectively performing the following operations: determining the positional relationship between the feature point and the projected feature point corresponding to it; and determining the state information of the feature point according to the positional relationship.
4. The data processing method according to claim 2, wherein the determining the overall state of the feature points of the category according to the state information of each feature point in the category specifically comprises:
judging whether the number of the feature points in the static state in the category is larger than a first threshold value or not according to the state information of each feature point in the category;
if yes, determining the overall state of the feature points of the category as a static state;
otherwise, determining the overall state of the characteristic points of the category as a motion state.
5. The data processing method according to claim 1, wherein after the determining of the overall state of the feature point of each category from the first positioning result and the tracking map, the data processing method further comprises:
and updating the tracking map and the long-term map according to the overall state of the feature points of each category.
6. The data processing method according to claim 1, wherein the adjusting the first segmentation result according to the projection region to obtain a second segmentation result specifically comprises:
determining a corresponding relation between the projection area and an initial segmentation area in a first segmentation result of the current image data;
for each projection area, respectively performing the following operations: judging whether an initial segmentation area corresponding to the projection area exists; if it is determined to exist, determining an intersection area of the projection area and the initial segmentation area corresponding to the projection area, determining a first proportion of the intersection area in the corresponding initial segmentation area and a second proportion of the intersection area in the projection area, and judging whether the first proportion is smaller than the second proportion: if so, taking the projection area as a final segmentation area of the current image data; otherwise, taking the corresponding initial segmentation area as a final segmentation area of the current image data; if it is determined not to exist, determining whether to take the projection area as a final segmentation area of the current image data according to the number of the projection areas;
and determining a second segmentation result according to all final segmentation areas of the current image data.
7. The data processing method according to claim 6, wherein the determining the correspondence between the projection region and the initial segmentation region in the first segmentation result of the current image data specifically includes:
for each projection region, respectively performing the following operations: determining similarity parameters of the projection region and each initial segmentation region; and determining an initial segmentation area corresponding to the projection area according to the initial segmentation area corresponding to the minimum similarity parameter.
8. The data processing method according to claim 7, wherein the determining an initial segmentation region corresponding to the projection region according to the initial segmentation region corresponding to the minimum similarity parameter specifically includes:
judging whether the minimum similarity parameter is smaller than a second threshold value;
if so, determining the initial segmentation region corresponding to the minimum similarity parameter as the initial segmentation region corresponding to the projection region;
otherwise, determining that the projection region has no correspondence with any of the initial segmentation regions.
9. The data processing method according to claim 8, wherein the determining whether to use the projection area as a final segmentation area of the current image data according to the number of the projection areas specifically comprises:
judging whether the total number of the projection areas is larger than the total number of the initial segmentation areas;
if so, taking the projection areas having no correspondence as final segmentation areas in the second segmentation result.
10. The data processing method according to claim 7, wherein the determining the similarity parameters of the projection region and each of the initial segmentation regions specifically comprises:
for each initial segmentation region, respectively performing the following operations:
determining the Euclidean distance between the central point of the initial segmentation region and the central point of the projection region;
and determining the similarity parameter according to the Euclidean distance, the initial segmentation region, and the projection region.
11. The data processing method according to claim 10, wherein the determining the similarity parameter according to the Euclidean distance, the initial segmentation region, and the projection region specifically includes:
according to formula A:

$$S_{cp} = w_1 \cdot \mathrm{Dist}(Rec, Rep) + w_2 \cdot \frac{\mathrm{Area}\big((Rec - Rep) \cup (Rep - Rec)\big)}{\mathrm{Area}(Rec) + \mathrm{Area}(Rep)}$$

calculating the similarity parameter;

wherein $S_{cp}$ is the similarity parameter; $\mathrm{Dist}(Rec, Rep)$ represents the Euclidean distance between the central point of the initial segmentation region $Rec$ and the central point of the projection region $Rep$; $w_1$ is the weight of the Euclidean distance; $\mathrm{Area}((Rec - Rep) \cup (Rep - Rec))$ represents the total number of pixels in the union of the part of the initial segmentation region disjoint from the projection region and the part of the projection region disjoint from the initial segmentation region; $\mathrm{Area}(Rec)$ and $\mathrm{Area}(Rep)$ represent the numbers of pixels in the initial segmentation region and in the projection region, respectively; and $w_2$ is the weight of the term $\mathrm{Area}((Rec - Rep) \cup (Rep - Rec)) / (\mathrm{Area}(Rec) + \mathrm{Area}(Rep))$.
12. A data processing apparatus in a positioning process, comprising: the system comprises an acquisition module, a first positioning module, a first segmentation module and a second positioning module or a second segmentation module;
the acquisition module is used for acquiring current image data of the environment;
the first positioning module is used for positioning according to the feature points in the current image data and the tracking map and determining a first positioning result;
the first segmentation module is used for segmenting the current image data and determining a first segmentation result of the current image data;
the second positioning module is used for selecting feature points for secondary positioning from the feature points in the current image data according to the first segmentation result, the first positioning result and the tracking map; determining a second positioning result according to the feature points for secondary positioning and the tracking map;
the second segmentation module is used for acquiring the segmentation result of the previous image data and the positioning result of the previous image data; adjusting the first segmentation result according to the first positioning result, the positioning result of the previous image data and the segmentation result of the previous image data to obtain a second segmentation result;
wherein, the selecting, according to the first segmentation result, the first positioning result, and the tracking map, a feature point for secondary positioning from feature points in the current image data specifically includes:
classifying the feature points in the current image data according to the first segmentation result; determining the overall state of the feature points of each category according to the first positioning result and the tracking map; wherein the overall state of the feature points of the category is a static state or a motion state; determining the feature points for secondary positioning according to the overall state of the feature points of the category; or, acquiring the segmentation result of the previous image data and the positioning result of the previous image data; adjusting the first segmentation result according to the first positioning result, the positioning result of the previous image data and the segmentation result of the previous image data to obtain a second segmentation result; classifying the feature points in the current image data according to the second segmentation result; determining the overall state of the feature points of each category according to the first positioning result and the tracking map; wherein the overall state of the feature points of the category is a static state or a motion state; determining the feature points for secondary positioning according to the overall state of the feature points of the category;
the adjusting the first segmentation result according to the first positioning result, the positioning result of the previous image data, and the segmentation result of the previous image data to obtain a second segmentation result specifically includes:
projecting the segmentation area in the segmentation result of the previous image data to the current image data according to the positioning result of the previous image data and the first positioning result to obtain a projection area; adjusting the first segmentation result according to the projection area to obtain a second segmentation result; or selecting feature points for secondary positioning from feature points in the current image data according to the first segmentation result, the first positioning result and the tracking map; determining a second positioning result according to the feature points for secondary positioning and the tracking map; projecting the segmentation area in the segmentation result of the previous image data to the current image data according to the positioning result of the previous image data and the second positioning result to obtain a projection area; and adjusting the first segmentation result according to the projection area to obtain a second segmentation result.
13. An electronic device, comprising: at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method of data processing in a positioning process according to any one of claims 1 to 11.
14. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the data processing method in the positioning process according to any one of claims 1 to 11.
CN201811442347.6A 2018-11-29 2018-11-29 Data processing method and device in positioning process, electronic equipment and storage medium Active CN109543634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811442347.6A CN109543634B (en) 2018-11-29 2018-11-29 Data processing method and device in positioning process, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109543634A CN109543634A (en) 2019-03-29
CN109543634B true CN109543634B (en) 2021-04-16

Family

ID=65851096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811442347.6A Active CN109543634B (en) 2018-11-29 2018-11-29 Data processing method and device in positioning process, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109543634B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111223114B (en) * 2020-01-09 2020-10-30 北京达佳互联信息技术有限公司 Image area segmentation method and device and electronic equipment
CN111680596B (en) * 2020-05-29 2023-10-13 北京百度网讯科技有限公司 Positioning true value verification method, device, equipment and medium based on deep learning
WO2022114252A1 (en) * 2020-11-25 2022-06-02 한국전자기술연구원 Deep learning-based panoptic segmentation operation accelerated processing method using complexity-based specific region operation omitting scheme
CN112683262A (en) * 2020-11-30 2021-04-20 浙江商汤科技开发有限公司 Positioning method and device, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108763384A (en) * 2018-05-18 2018-11-06 北京慧闻科技发展有限公司 For the data processing method of text classification, data processing equipment and electronic equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10062010B2 (en) * 2015-06-26 2018-08-28 Intel Corporation System for building a map and subsequent localization
US20170161546A1 (en) * 2015-12-08 2017-06-08 Mitsubishi Electric Research Laboratories, Inc. Method and System for Detecting and Tracking Objects and SLAM with Hierarchical Feature Grouping
CN107990899B (en) * 2017-11-22 2020-06-30 驭势科技(北京)有限公司 Positioning method and system based on SLAM
CN108230247B (en) * 2017-12-29 2019-03-15 达闼科技(北京)有限公司 Generation method, device, equipment and the computer-readable storage medium of three-dimensional map based on cloud
CN108230337B (en) * 2017-12-31 2020-07-03 厦门大学 Semantic SLAM system implementation method based on mobile terminal
CN108229416B (en) * 2018-01-17 2021-09-10 苏州科技大学 Robot SLAM method based on semantic segmentation technology
CN108446634B (en) * 2018-03-20 2020-06-09 北京天睿空间科技股份有限公司 Aircraft continuous tracking method based on combination of video analysis and positioning information


Also Published As

Publication number Publication date
CN109543634A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
CN109543634B (en) Data processing method and device in positioning process, electronic equipment and storage medium
US20210390329A1 (en) Image processing method, device, movable platform, unmanned aerial vehicle, and storage medium
CN109903331B (en) Convolutional neural network target detection method based on RGB-D camera
CN108279670B (en) Method, apparatus and computer readable medium for adjusting point cloud data acquisition trajectory
CN109658454B (en) Pose information determination method, related device and storage medium
CN110176032B (en) Three-dimensional reconstruction method and device
CN113506318B (en) Three-dimensional target perception method under vehicle-mounted edge scene
CN115049700A (en) Target detection method and device
CN110232418B (en) Semantic recognition method, terminal and computer readable storage medium
CN116469079A (en) Automatic driving BEV task learning method and related device
CN115147328A (en) Three-dimensional target detection method and device
Fanani et al. Keypoint trajectory estimation using propagation based tracking
CN106780558B (en) Method for generating unmanned aerial vehicle target initial tracking frame based on computer vision point
CN116105721B (en) Loop optimization method, device and equipment for map construction and storage medium
CN111198563B (en) Terrain identification method and system for dynamic motion of foot type robot
CN117292076A (en) Dynamic three-dimensional reconstruction method and system for local operation scene of engineering machinery
CN114972492A (en) Position and pose determination method and device based on aerial view and computer storage medium
CN115713633A (en) Visual SLAM method, system and storage medium based on deep learning in dynamic scene
CN112102412B (en) Method and system for detecting visual anchor point in unmanned aerial vehicle landing process
CN112215205B (en) Target identification method and device, computer equipment and storage medium
US11657506B2 (en) Systems and methods for autonomous robot navigation
US10373004B1 (en) Method and device for detecting lane elements to plan the drive path of autonomous vehicle by using a horizontal filter mask, wherein the lane elements are unit regions including pixels of lanes in an input image
CN113570713A (en) Semantic map construction method and device for dynamic environment
CN116740477B (en) Dynamic pixel point distribution identification method, system and equipment based on sparse optical flow
Liu et al. An efficient edge-feature constraint visual SLAM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant