CN110361005B - Positioning method, positioning device, readable storage medium and electronic equipment - Google Patents


Info

Publication number
CN110361005B
CN110361005B (application CN201910562336.XA)
Authority
CN
China
Prior art keywords
map
image
point
information
feature information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910562336.XA
Other languages
Chinese (zh)
Other versions
CN110361005A (en)
Inventor
韩立明
林义闽
廉士国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Robotics Co Ltd filed Critical Cloudminds Robotics Co Ltd
Priority claimed from application CN201910562336.XA
Publication of CN110361005A
Application granted
Publication of CN110361005B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods

Abstract

The disclosure relates to a positioning method, a positioning apparatus, a readable storage medium, and an electronic device. The method comprises the following steps: acquiring an image sequence shot by a visual sensor on a mobile device; extracting feature information from each acquired image; matching the feature information of the image with the feature information of projection points of map points in a map on a target plane, wherein the target plane is a plane corresponding to the pose of the previous image in the image sequence; determining the target map points corresponding to the projection points matched with the feature information of the image; and determining the positioning information of the mobile device according to the position information of the target map points in the map. In this way, the feature information of the projection points matched against the feature information of the image is obtained by projecting map points onto the plane corresponding to the pose of the previous image, and coarse positioning no longer depends on the discontinuous key frame images stored during map building, so the positioning range of the mobile device is enlarged and the positioning efficiency is improved.

Description

Positioning method, positioning device, readable storage medium and electronic equipment
Technical Field
The present disclosure relates to computer vision technologies, and in particular, to a positioning method, a positioning apparatus, a readable storage medium, and an electronic device.
Background
In many fields such as blind guidance by robots, unmanned driving, and Augmented Reality (AR), an environment map is required; a certain area therefore needs to be mapped, and a device moving within that area needs to be localized in real time. Visual Simultaneous Localization and Mapping (VSLAM) refers to the process of computing one's own position and constructing an environment map from the information of a visual sensor, and it solves the problems of localization and map construction while moving in an unknown environment.
In the related art, positioning is mostly performed based on key frame images stored in a map: a key frame image matched with the current frame image is determined from the multiple key frame images stored in the map, coarse positioning is performed using that key frame image to obtain a local map, and the current frame image is then matched with the map points included in the local map to accurately determine the position of the mobile device. Because the key frame images stored in the map are discontinuous, when they are used for coarse positioning the mobile device can only be localized within a certain range around the pose of some key frame image; once the mobile device moves beyond that range, positioning fails. The existing positioning efficiency is therefore low.
Disclosure of Invention
The disclosure aims to provide a positioning method, a positioning device, a readable storage medium and an electronic device, so as to improve the positioning efficiency.
In order to achieve the above object, a first aspect of the present disclosure provides a positioning method, including:
acquiring an image sequence shot by a visual sensor on the mobile equipment;
extracting feature information of each acquired image; and are
Matching the feature information of the image with the feature information of a projection point of a map point in a map on a target plane, wherein the target plane is a plane corresponding to the pose of the previous image in the image sequence;
determining a target map point corresponding to the projection point matched with the characteristic information of the image;
and determining the positioning information of the mobile equipment according to the position information of the target map point in the map.
Optionally, before the matching the feature information of the image with the feature information of the projection point of the map point in the map on the target plane, the method further includes:
and determining the feature information of the projection point from the feature information set of the map point according to the corresponding relation between the projection point and the map point and the attitude angle in the pose of the previous image.
Optionally, the matching the feature information of the image with the feature information of the projection point of the map point in the map on the target plane includes:
acquiring position information of a projection point of a map point on the target plane;
determining the projection point located in a preset area according to the position information of the projection point and the preset area, wherein the preset area takes the position in the pose of the previous image as the center, and the size of the preset area is larger than that of the image;
and matching the characteristic information of the image with the characteristic information of the projection point positioned in the preset area.
Optionally, the feature information of the image includes feature information of feature points of the image; the feature information of the projection point and the feature information set of the map point have a corresponding relation; the determining of the target map point corresponding to the projection point matched with the feature information of the image includes:
determining feature information of projection points matched with the feature information of the feature points of the image;
determining a target characteristic information set corresponding to the characteristic information of the matched projection point according to the corresponding relation;
and determining a target map point according to the target characteristic information set.
Optionally, the map point is determined by:
acquiring position information of the map point and N frames of images containing pixel points corresponding to the map point, wherein the poses of the images corresponding to the N frames of images are different, and N is an integer greater than 1;
and determining a feature information set of the map points according to the feature information of the corresponding pixel points in the N frames of images.
Optionally, the determining the feature information set of the map point according to the feature information of the corresponding pixel point in the N-frame image includes:
respectively determining respective attitude angles of the N frames of images;
and determining a feature information set of the map point according to the attitude angle of each frame of image and the feature information of the corresponding pixel points, wherein the feature information set comprises N attitude angles and the feature information of the N corresponding pixel points respectively corresponding to the N attitude angles.
Optionally, N is an integer greater than 2; the obtaining of the location information of the map point includes:
determining first position information of the map point according to any two frames of images in the N frames of images;
observing the map point for N times according to the N frames of images to obtain N pieces of first position information of the map point;
determining second position information of the map point according to the N pieces of first position information and the following formula:
$$\left|A_i - \bar{A}\right| \le d \qquad (1)$$

$$\hat{A} = \frac{1}{W}\sum_{i=1}^{N} \lambda_i A_i \qquad (2)$$

wherein the value range of i is [1, N]; A_i characterizes the first position information obtained by observing the map point for the i-th time; d characterizes a preset numerical value; $\bar{A}$ characterizes the average position information of the N pieces of first position information of the map point; $\hat{A}$ characterizes the second position information of the map point; W characterizes the number of A_i satisfying formula (1) among the N pieces of first position information; and λ_i characterizes the coefficient corresponding to A_i;
and determining the second position information as the position information of the map point.
A second aspect of the present disclosure provides a positioning apparatus, comprising:
the first acquisition module is used for acquiring an image sequence shot by a visual sensor on the mobile equipment;
the extraction module is used for extracting the characteristic information of each acquired image; and are
The matching module is used for matching the feature information of the image with the feature information of a projection point of a map point in a map on a target plane, wherein the target plane is a plane corresponding to the pose of the previous image in the image sequence;
the first determining module is used for determining a target map point corresponding to the projection point matched with the characteristic information of the image;
and the second determining module is used for determining the positioning information of the mobile equipment according to the position information of the target map point in the map.
Optionally, the apparatus further comprises:
and the third determining module is used for determining the feature information of the projection point from the feature information set of the map point according to the corresponding relation between the projection point and the map point and the attitude angle in the pose of the previous image.
Optionally, the matching module comprises:
the acquisition submodule is used for acquiring the position information of a projection point of a map point on the target plane;
the first determining submodule is used for determining the projection point positioned in the preset area according to the position information of the projection point and the preset area, the preset area takes the position in the pose of the previous image as the center, and the size of the preset area is larger than that of the image;
and the matching submodule is used for matching the characteristic information of the image with the characteristic information of the projection point positioned in the preset area.
Optionally, the feature information of the image includes feature information of feature points of the image; the feature information of the projection point and the feature information set of the map point have a corresponding relation; the first determining module includes:
the second determining submodule is used for determining the characteristic information of the projection point matched with the characteristic information of the characteristic point of the image;
the third determining submodule is used for determining a target characteristic information set corresponding to the characteristic information of the matched projection point according to the corresponding relation;
and the fourth determining submodule is used for determining the target map point according to the target characteristic information set.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring the position information of the map point and N frames of images containing pixel points corresponding to the map point, wherein the poses of the images corresponding to the N frames of images are different, and N is an integer greater than 1;
and the fourth determining module is used for determining the feature information set of the map point according to the feature information of the corresponding pixel point in the N frames of images.
Optionally, the fourth determining module includes:
a fifth determining submodule, configured to determine respective attitude angles of the N frames of images respectively;
and a sixth determining submodule, configured to determine a feature information set of the map point according to the attitude angle of each frame of image and the feature information of the corresponding pixel points, where the feature information set includes N attitude angles and the feature information of the N corresponding pixel points respectively corresponding to the N attitude angles.
Optionally, N is an integer greater than 2; the second acquisition module includes:
the seventh determining submodule is used for determining the first position information of the map point according to any two frames of images in the N frames of images;
the observation submodule is used for carrying out N times of observation on the map point according to the N frames of images so as to obtain N pieces of first position information of the map point;
an eighth determining submodule, configured to determine second location information of the map point according to the N pieces of first location information and the following formula:
$$\left|A_i - \bar{A}\right| \le d \qquad (1)$$

$$\hat{A} = \frac{1}{W}\sum_{i=1}^{N} \lambda_i A_i \qquad (2)$$

wherein the value range of i is [1, N]; A_i characterizes the first position information obtained by observing the map point for the i-th time; d characterizes a preset numerical value; $\bar{A}$ characterizes the average position information of the N pieces of first position information of the map point; $\hat{A}$ characterizes the second position information of the map point; W characterizes the number of A_i satisfying formula (1) among the N pieces of first position information; and λ_i characterizes the coefficient corresponding to A_i;
and the ninth determining submodule is used for determining the second position information as the position information of the map point.
A third aspect of the present disclosure provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method provided by the first aspect of the present disclosure.
A fourth aspect of the present disclosure provides an electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method provided by the first aspect of the present disclosure.
Through the above technical scheme, the feature information of the image can be matched with the feature information of the projection points of the map points on the plane corresponding to the pose of the previous image, the target map points matched with the feature information of the image are determined according to the correspondence between projection points and map points, and the positioning information of the mobile device is then determined according to the position information of the target map points. In this way, the feature information of the projection points matched against the feature information of the image is obtained by projecting map points onto the plane corresponding to the pose of the previous image, and coarse positioning no longer depends on the discontinuous key frame images stored during map building, so the positioning range of the mobile device is enlarged and the positioning efficiency is improved.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a diagram illustrating mapping and positioning in a related art according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a method of positioning according to an example embodiment.
FIG. 3 is a flow chart illustrating a method of building a map in accordance with an exemplary embodiment.
FIG. 4 is a schematic diagram illustrating one type of mapping and positioning according to an exemplary embodiment.
FIG. 5 is a block diagram illustrating a positioning device according to an exemplary embodiment.
FIG. 6 is a block diagram illustrating an electronic device in accordance with an example embodiment.
FIG. 7 is a block diagram illustrating an electronic device in accordance with another example embodiment.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
In the related art, a map is established with the VSLAM technique mainly as follows: N frames of key frame images are selected according to a preset rule from an M-frame image sequence acquired by the visual sensor; feature points representing the spatial environment (such as the 2D Key points in fig. 1) are extracted from the key frame images; and 3D Map Points together with their description information (the 2D average Descriptor in fig. 1) are determined from the pose information (Pose in fig. 1), the key points (2D Key points in fig. 1), and the description information of the key points (2D Descriptor in fig. 1) of every two frames of key frame images. The position information of a 3D map point can be obtained by triangulation, and the description information of the 3D map point is the average description information of the key points corresponding to that map point. As shown in fig. 1, the created map mainly includes the key frame images and the 3D map points, where each 3D map point includes coordinates (X, Y, Z) and description information (2D average Descriptor). When positioning with such a map, feature points are usually extracted from the current frame image acquired by the visual sensor and their description information is determined; the description information of these feature points is matched with the key points of the corresponding key frame image in the map to achieve coarse positioning of the mobile device, that is, to determine a local map; the current frame image is then matched with the 3D map points in the local map to accurately determine the position of the mobile device.
Specifically, as shown in fig. 1, assuming that the previous frame image Px-1 of the current frame image is closest to the key frame image Py stored during map creation, the key frame image Py is used as the key frame matched with the current frame image, and a local map is constructed based on it; the local map includes the map points corresponding to the key frame image Py and the map points within a preset range around them. The current frame image is matched with the map points included in the local map to accurately determine the position of the mobile device.
However, since a map created with the VSLAM technique requires both key frame images and 3D map points for positioning, it occupies a large amount of storage space, which prevents the VSLAM technique from being directly used for large-scale scene mapping and positioning. In general, the key frame images and key points occupy more than 80% of the storage space of the whole map. For example, a map built for a 1000-square-meter space may exceed 200 MB in size; for larger spaces, the map size usually reaches several GB or even more than ten GB, which severely limits applying the VSLAM technique to real-time map building and positioning against an existing map. For example, building a map of a large scene requires a large amount of memory; when the map grows beyond the available memory, a single continuous map of the scene cannot be built and the map can only be built in segments, so positioning has to switch between the segmented maps, which affects positioning accuracy and real-time performance. In addition, because the key frame images stored in the map are discontinuous and the interval between two adjacent key frame images is large, coarse positioning is possible only when the mobile device is near the pose of some key frame stored during map building; otherwise positioning fails. The existing positioning range is therefore small and the efficiency is low.
In order to solve the problems in the related art, the present disclosure provides a positioning method, a positioning apparatus, a readable storage medium, and an electronic device. Fig. 2 is a flow chart illustrating a method of positioning according to an example embodiment. As shown in fig. 2, the method comprises the steps of:
in step 11: a sequence of images taken by a vision sensor on a mobile device is acquired.
Wherein, the scene corresponding to each image in the image sequence is the scene of the established map. In one embodiment, the method can be applied to mobile devices such as a robot device, a mobile helmet and an unmanned vehicle, wherein a vision sensor is arranged on the mobile device, images of a scene with an established map are acquired in real time or periodically, and when the vision sensor captures the images, the images are sent to the mobile device for subsequent processing. In another embodiment, the method can also be applied to a server, and when the vision sensor on the mobile device captures an image, the image is sent to the server for subsequent processing.
In step 12, for each acquired image, feature information of the image is extracted.
In this disclosure, the feature information of the image may be feature information of each pixel point in the image, or feature information of a feature point in the image, where the feature information is description information describing the pixel point or the feature point. For example, the characteristic information of the pixel point may be a size relationship between the gray value of the pixel point and the gray values of other surrounding pixels, or a difference between the gray value of the pixel point and the gray values of other surrounding pixels, or the like. In the present disclosure, the feature information is not particularly limited as long as it can describe a pixel point or a feature point.
Because the number of pixel points in an image is large and not every pixel point needs to be matched during positioning, in the present disclosure the feature information of the image may be the feature information of each feature point in the image. Specifically, for each image acquired in step 11, the feature points of the image are extracted and the feature information of each feature point is calculated. Extracting the feature points of an image and calculating the feature information of the feature points belong to the prior art and are not described here again.
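As an illustration of the kind of feature information described above, the following minimal sketch encodes, for one pixel, the size relation between its gray value and those of surrounding pixels as a binary descriptor. This is not the patent's actual extractor; the neighbor offsets and the list-of-lists image layout are assumptions for illustration only.

```python
# Illustrative sketch only: a tiny binary descriptor that records the size
# relation between a pixel's gray value and fixed neighbor offsets, matching
# the example of "feature information" given above. Offsets are assumptions.

NEIGHBOR_OFFSETS = [(-1, -1), (-1, 1), (1, -1), (1, 1),
                    (0, -1), (0, 1), (-1, 0), (1, 0)]

def describe_pixel(image, row, col):
    """Return a tuple of 0/1 bits: 1 if the center pixel is brighter than the neighbor."""
    center = image[row][col]
    bits = []
    for dr, dc in NEIGHBOR_OFFSETS:
        r, c = row + dr, col + dc
        if 0 <= r < len(image) and 0 <= c < len(image[0]):
            bits.append(1 if center > image[r][c] else 0)
        else:
            bits.append(0)  # default bit for out-of-bounds neighbors
    return tuple(bits)

def hamming(d1, d2):
    """Number of differing bits; a natural distance for such binary descriptors."""
    return sum(b1 != b2 for b1, b2 in zip(d1, d2))
```

Such a descriptor can then be compared between images with the Hamming distance, which is what the matching steps below rely on conceptually.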
In step 13, the feature information of the image is matched with the feature information of the projection points of the map points in the map on the target plane. The target plane is a plane corresponding to the pose of the previous image in the image sequence.
For example, if the current frame image is the image used for positioning, the previous image is the previous frame image of the current frame. It should be noted that while the visual sensor captures an image, a pose sensor also acquires the pose of the mobile device or the visual sensor. Since the time interval between two adjacent frames shot by the visual sensor is short, the moving distance of the mobile device is short and the scenes in the two adjacent frames are consistent; therefore, in the present disclosure, the feature information of the projection points on the target plane corresponding to the pose of the previous image can be matched with the feature information of the image. Projecting a map point onto the target plane can follow the imaging process of an ordinary camera (the pinhole imaging principle), from which the coordinates of the projection point are calculated.
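The pinhole projection just mentioned can be sketched as follows. The rotation matrix R, translation t, and the intrinsics fx, fy, cx, cy are assumed inputs for illustration and are not specified by the patent.

```python
# Hypothetical sketch of projecting a 3D map point onto the target plane with
# the pinhole model described above. R (3x3, row-major) and t (3-vector) give
# the pose of the previous image; fx, fy, cx, cy are assumed camera intrinsics.

def project_point(point, R, t, fx, fy, cx, cy):
    """Project one 3D map point to pixel coordinates; None if behind the camera."""
    # Transform the map point into the camera frame: Xc = R * X + t.
    xc = sum(R[0][k] * point[k] for k in range(3)) + t[0]
    yc = sum(R[1][k] * point[k] for k in range(3)) + t[1]
    zc = sum(R[2][k] * point[k] for k in range(3)) + t[2]
    if zc <= 0:
        return None  # the point does not project onto the target plane
    # Pinhole imaging: divide by depth, then scale and shift by the intrinsics.
    u = fx * xc / zc + cx
    v = fy * yc / zc + cy
    return (u, v)
```

Applying this to every map point yields the set of projection points whose feature information is matched in step 13.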
In the present disclosure, the feature information of the image and the feature information of the projection point of the map point on the target plane may be matched, and the feature information of the projection point matched with the feature information of the image may be determined according to a matching relationship between the feature information of the feature point in the image and the feature information of the projection point.
In step 14, the target map points corresponding to the projection points that match the feature information of the image are determined.
As described above, since the projection points are obtained by projecting the map points onto the plane corresponding to the pose of the previous image, each projection point corresponds to a map point; for example, map point A corresponds to projection point A1, map point B corresponds to projection point B1, and so on. In this manner, after the feature information of the projection points matched with the feature information of the image is determined in step 13, the target map points matched with the feature information of the image can be further determined based on the correspondence between projection points and map points.
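Steps 13 and 14 together can be sketched as below. The binary descriptors, the Hamming distance, and the matching threshold are illustrative assumptions; the projection-to-map-point correspondence is represented as a simple pairing.

```python
# Illustrative sketch of steps 13 and 14: match image feature information
# against projection-point feature information, then recover the target map
# point through the projection-to-map-point correspondence. The descriptor
# form, distance function, and threshold are assumptions, not the patent's.

def hamming(d1, d2):
    return sum(b1 != b2 for b1, b2 in zip(d1, d2))

def find_target_map_points(image_features, projections, max_distance=2):
    """image_features: list of binary descriptors (tuples of bits).
    projections: list of (descriptor, map_point_id) pairs, one per projection point.
    Returns the ids of the target map points matched to the image features."""
    targets = []
    for feat in image_features:
        best_desc, best_id = min(projections, key=lambda p: hamming(feat, p[0]))
        if hamming(feat, best_desc) <= max_distance:
            targets.append(best_id)  # correspondence: projection point -> map point
    return targets
```

The position information of the returned map points is then looked up in the map to localize the device, as step 15 describes.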
In step 15, positioning information of the mobile device is determined based on the location information of the target map point in the map.
In the process of map building, the map stores the position information of the map points, so that after the target map point is determined, the position information corresponding to the target map point can be further determined from the map, and the positioning information of the mobile device can be further determined according to the position information. The method includes determining positioning information of the mobile device according to position information of a target map point matched with feature information of an image, and belongs to the prior art, and the method is not repeated here.
Through the above technical scheme, the feature information of the image can be matched with the feature information of the projection points of the map points on the plane corresponding to the pose of the previous image, the target map points matched with the feature information of the image are determined according to the correspondence between projection points and map points, and the positioning information of the mobile device is then determined according to the position information of the target map points. In this way, the feature information of the projection points matched against the feature information of the image is obtained by projecting map points onto the plane corresponding to the pose of the previous image, and coarse positioning no longer depends on the discontinuous key frame images stored during map building, so the positioning range of the mobile device is enlarged and the positioning efficiency is improved.
It should be noted that the map used in the positioning may be a map created by using the existing VSLAM technology, or may be an embodiment of the map creation provided in the present disclosure. The map created by using the existing VSLAM technology is shown in fig. 1, and will not be described herein.
The map building method used in fig. 2 is described in detail below. As shown in fig. 3, the method for establishing a map may include the steps of:
in step 31, position information of a map point and N frames of images including a pixel point corresponding to the map point are obtained. The poses of the images corresponding to the N frames of images are different, and N is an integer larger than 1.
In the map building method provided by the present disclosure, in addition to the position information of the map point, a feature information set of the map point needs to be determined. First, a method of determining location information of map points will be described.
In an embodiment, the location information of the map point may be obtained by using an existing method for determining location information of the map point, for example, if the corresponding pixel point of the map point P in the first frame image is P1, the corresponding pixel point in the second frame image is P2, and the pose of the first frame image and the pose of the second frame image are known, the location information of the map point P may be determined according to a triangulation method. The determination of the location information of the map points according to the triangulation method belongs to the prior art, and is not described herein again.
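As a sketch of what triangulation computes here, the following midpoint method intersects two viewing rays. Representing each observation as a camera center plus a ray direction is an assumption made for illustration; practical systems usually work from projection matrices instead.

```python
# Illustrative midpoint-method triangulation: given the two camera centers and
# the two viewing rays toward pixel points p1 and p2, the map point is taken as
# the midpoint of the segment of closest approach between the rays. The ray
# representation is an assumption; it is one standard reading of triangulation.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate_midpoint(c1, d1, c2, d2):
    """Rays c1 + s*d1 and c2 + t*d2; returns their closest-approach midpoint,
    or None when the rays are parallel (the point cannot be triangulated)."""
    r = tuple(b - a for a, b in zip(c1, c2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        return None  # parallel viewing rays
    # Closed-form least-squares solution of s*a - t*b = d, s*b - t*c = e.
    s = (d * c - b * e) / denom
    t = (b * d - a * e) / denom
    p1 = tuple(ci + s * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + t * di for ci, di in zip(c2, d2))
    return tuple((u + v) / 2 for u, v in zip(p1, p2))
```

When the two rays actually intersect, the midpoint coincides with the intersection; with noisy observations it is the point closest to both rays.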
In another embodiment, where N is an integer greater than 2, the location information of the map point may be obtained by:
step (1): and determining first position information of the map point according to any two frames of images in the N frames of images. Illustratively, the first location information may be determined using triangulation.
Step (2): and observing the map points for N times according to the N frames of images to obtain N pieces of first position information of the map points.
Because the N frames of images all comprise pixel points corresponding to the map points, according to a triangulation method, one-time observation is carried out on any two frames of images, and one piece of first position information of the map point can be determined. Therefore, in the present disclosure, from N frames of images, N observations can be made on the map point, resulting in N first location information of the map point.
And (3): determining second position information of the map point according to the N pieces of first position information and the following formula:
‖A_i − Ā‖ ≤ d   (1)

Â = (1/W) · Σ_{i=1}^{N} λ_i · A_i   (2)

wherein the value range of i is [1, N]; A_i characterizes the first position information obtained by the i-th observation of the map point and satisfying formula (1); d characterizes a preset value; Ā characterizes the average position information of the N pieces of first position information of the map point; Â characterizes the second position information of the map point; W characterizes the number of A_i satisfying formula (1) among the N pieces of first position information; and λ_i characterizes the coefficient corresponding to A_i.
Step (4): determining the second position information as the position information of the map point.
By adopting the above technical scheme, where N is an integer greater than 2, N pieces of first position information of the map point are obtained through N observations on the N frames of images, and the position information of the map point is determined based on the first position information, that is, the position information of the map point is determined from multiple frames of images.
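Under one plausible reading of formulas (1) and (2) — keep the observations within a preset distance d of the mean and average them with uniform coefficients λ_i = 1/W — the fusion step can be sketched as follows. The uniform-weight choice is an assumption, since the text only states that each A_i has a corresponding coefficient:

```python
import numpy as np

def fuse_observations(observations, d):
    # observations: N pieces of first position information A_1..A_N
    # (N > 2), one per observation of the same map point.
    A = np.asarray(observations, dtype=float)
    mean = A.mean(axis=0)  # average of all N first positions (A-bar)
    # Formula (1): keep only the A_i within distance d of the mean.
    inliers = A[np.linalg.norm(A - mean, axis=1) <= d]
    W = len(inliers)  # number of A_i satisfying formula (1)
    # Formula (2): average the W inliers (lambda_i = 1/W assumed).
    return inliers.sum(axis=0) / W
```

Observations far from the mean (for example a triangulation corrupted by a bad match) are thereby discarded before the average is taken.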
In another embodiment, the obtained position information of a plurality of map points can be optimized simultaneously by using a graph optimization method. Illustratively, the position information of the plurality of map points may be optimized by Bundle Adjustment. Bundle Adjustment belongs to the prior art and is not described herein again.
Next, a method of specifying a feature information set of map points will be described.
It should be noted that, in the existing positioning method, a plurality of frames of key frame images are stored during mapping, and during rough positioning, the stored key frame images are used for matching with feature information of an image used for positioning, and at this time, if the mobile device is near the pose of the key frame image, a map point matched with the feature point can be determined according to the feature information of the feature point in the image and the feature information of the key point in the key frame image.
However, the positioning process provided by the present disclosure does not need key frame images, and saving key frame images during map building causes the built map to occupy a large amount of space. Therefore, in the present disclosure, key frame images need not be saved during map building. In order to ensure that a map point matched with the feature information of a feature point can still be determined based on the feature information of the feature point in the image, in the present disclosure, the feature information of a map point stored during map building is a feature information set, and the feature information set includes the feature information of the pixel points corresponding to the map point under different poses.
In step 32, a feature information set of the map point is determined according to the feature information of the corresponding pixel point in the N frames of images.
After the N frames of images are obtained, the feature information of the pixel points corresponding to the map points in the N frames of images is further obtained, and the feature information set of the map points is determined according to the feature information of the corresponding pixel points. Therefore, the feature information set of the map point includes the feature information of the pixel point corresponding to the map point in the N frames of images.
Specifically, step 32 may be implemented as follows: respectively determining the respective attitude angles of the N frames of images; and determining the feature information set of the map point according to the attitude angle of each frame of image and the feature information of the corresponding pixel points, wherein the feature information set includes N attitude angles and the feature information of the N corresponding pixel points respectively corresponding to the N attitude angles.
For example, assume that image 1, image 2, ..., image N each include a pixel point corresponding to the map point P, the feature information of the corresponding pixel points in image 1, image 2, ..., image N is 2D_1desc, 2D_2desc, ..., 2D_Ndesc, and the attitude angles of the respective poses of image 1, image 2, ..., image N are α_1, α_2, ..., α_N. Then the feature information set of the map point can be recorded as 3D_desc = {2D_1desc, α_1; 2D_2desc, α_2; ...; 2D_Ndesc, α_N}.
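The construction of such a feature information set can be sketched as a mapping from attitude angles to 2D descriptors; the function name and the dictionary representation are illustrative assumptions, not the patent's data layout:

```python
def build_feature_info_set(observations):
    # observations: list of (alpha_i, desc_i) pairs, where alpha_i is
    # the attitude angle of frame i and desc_i is the feature
    # information of the pixel point corresponding to the map point in
    # that frame.
    return {alpha: desc for alpha, desc in observations}
```

The resulting dictionary plays the role of 3D_desc: N attitude angles, each paired with the feature information of the corresponding pixel point.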
For each map point in the region to be mapped, the position information of the map point and the feature information set of the map point are respectively determined in the above manner.
It should be noted that, as shown in fig. 4, after the position information of the map point and the feature information set of the map point are determined, the N frames of images used when determining the map point (the key frame images in fig. 4) and their poses, pixel points, and feature information of the pixel points (Pose, 2D KeyPoint, 2D Descriptor in fig. 4) may be deleted. Thus, the storage space occupied by the map can be greatly reduced.
At this point, the map building is complete. The following describes the positioning method provided by the present disclosure in detail in a complete embodiment with reference to a map created in the above manner.
When the map is created according to the above method, the feature information of a map point is a three-dimensional feature information set including the attitude angles of the images. However, when the map point is projected onto the plane corresponding to the pose of the previous image to obtain a projection point (for example, the generated key frame in fig. 4), the feature information of the projection point is one piece of two-dimensional feature information in the feature information set. Specifically, the positioning method may further include: determining the feature information of the projection point from the feature information set of the map point according to the correspondence between the projection point and the map point and the attitude angle in the pose of the previous image.
Exemplarily, the feature information set of a map point may be denoted as 3D_desc = {2D_1desc, α_1; 2D_2desc, α_2; ...; 2D_Ndesc, α_N}, and if the attitude angle in the pose of the previous image is α_2, the feature information of the projection point corresponding to the map point is 2D_2desc. In this way, the feature information of each projection point can be determined in turn.
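This lookup can be sketched as follows, assuming the feature information set is stored as a dictionary keyed by attitude angle; the fallback to the nearest stored angle is an assumption for pose angles that do not match any stored angle exactly:

```python
def projection_point_descriptor(feature_set, pose_angle):
    # feature_set: {alpha_i: desc_i} for one map point, mapping each
    # attitude angle to the 2D feature information observed at it.
    if pose_angle in feature_set:
        return feature_set[pose_angle]
    # Fallback (assumed): pick the stored attitude angle closest to
    # the attitude angle in the pose of the previous image.
    nearest = min(feature_set, key=lambda a: abs(a - pose_angle))
    return feature_set[nearest]
```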
After determining the feature information of the projection point, step 13 in fig. 2 is executed to match the feature information of the image with the feature information of the projection point of the map point in the target plane in the map, that is, to perform matching according to the feature information of the feature point in the image and the feature information of the projection point. And if the characteristic information of the characteristic point is consistent with the characteristic information of the projection point, the characteristic point is considered to be matched with the projection point.
In addition, in step 13, when the map points are projected onto the target plane corresponding to the pose of the previous image, all map points in the map are projected, and the size of the target plane is much larger than that of the image acquired by the vision sensor. However, since the probability of matching the projection points located on the boundary of the target plane with the feature points in the image is extremely low, in the present disclosure, in order to reduce the matching workload, the projection points may be processed, that is, at the time of matching, the projection points located on the boundary of the target plane are removed, and only the projection points located within a preset range are matched.
Specifically, the implementation of step 13 in fig. 2 may be:
First, the position information of the projection point of a map point on the target plane is acquired, wherein the position information of the projection point can be determined from the position information of the map point according to the pinhole imaging principle.
And then, determining the projection point positioned in the preset area according to the position information of the projection point and the preset area, wherein the preset area is centered on the position in the pose of the previous image, and the size of the preset area is larger than that of the image.
Illustratively, with the position in the pose of the previous image as the center, assume that the preset region ranges from 0 to 480 in height and from 0 to 640 in width, in pixels; the size of the preset region is then 480 × 640. The projection points whose height falls within 0 to 480 and whose width falls within 0 to 640 are determined as the projection points within the preset region according to the position information of the projection points.
And finally, matching the characteristic information of the image with the characteristic information of the projection point in the preset area. It should be noted that the specific implementation of matching is as described above, and is not described herein again.
By adopting the scheme, when the feature information of the image is matched, only the feature information of the projection point in the preset area is matched with the feature information of the image, and the size of the preset area is larger than that of the image. Thus, the matching workload can be reduced, and the feature information with a sufficient number of projection points can be ensured to be matched with the feature information of the image.
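The projection and preset-region filtering steps can be sketched with a pinhole model. The intrinsic matrix K and the 480 × 640 region size follow the example above; the function names are assumptions:

```python
import numpy as np

def project_point(K, R, t, X):
    # Pinhole projection of map point X onto the target plane defined
    # by the pose (R, t) of the previous image; K is the camera
    # intrinsic matrix (assumed known).
    u, v, w = K @ (R @ np.asarray(X, dtype=float) + t)
    return np.array([u / w, v / w])

def in_preset_region(p, height=480, width=640):
    # Keep only projection points inside the preset region (the
    # 480 x 640 pixel example from the text); points on or beyond the
    # boundary are removed before matching.
    return 0 <= p[0] < width and 0 <= p[1] < height
```

Only the projection points that pass `in_preset_region` are then matched against the feature information of the image.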
After determining the projection point matched with the feature information of the image, further determining the target map point corresponding to the projection point matched with the feature information of the image. In the present disclosure, the feature information of the image includes feature information of feature points of the image; the feature information of the projection point and the feature information set of the map point have a corresponding relation; the specific implementation of step 14 in fig. 2 is: determining feature information of projection points matched with the feature information of the feature points of the image; determining a target characteristic information set corresponding to the characteristic information of the matched projection point according to the corresponding relation between the characteristic information of the projection point and the characteristic information set of the map point; and determining the target map points according to the target characteristic information set.
Since the feature information of the projection point is one two-dimensional feature information in the feature information set of the map point, the feature information of the projection point and the feature information set of the map point have a corresponding relationship. In this way, after the feature information of the projection point matched with the feature information of the feature point is determined according to the feature information of the feature point and the feature information of the projection point of the image, a target feature information set corresponding to the feature information of the matched projection point is determined according to the feature information of the matched projection point and the corresponding relation, and then the target map point is determined according to the target feature information set.
Exemplarily, it is assumed that the feature information set of the map point P is 3D_desc = {2D_1desc, α_1; 2D_2desc, α_2; ...; 2D_Ndesc, α_N}, and the feature information of the projection point matched with the feature information of the feature point is 2D_2desc. Then it can be determined that the target feature information set corresponding to the feature information of the projection point is 3D_desc = {2D_1desc, α_1; 2D_2desc, α_2; ...; 2D_Ndesc, α_N}, and the target map point is determined to be the map point P according to the target feature information set.
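The correspondence-based lookup can be sketched as follows. Descriptors are compared for equality per the matching rule above, and the (descriptor, map point) pairing stands in for the projection point to map point correspondence — an illustrative simplification, not the patent's data structure:

```python
def find_target_map_points(image_descriptors, projection_pairs):
    # image_descriptors: feature information of the feature points of
    # the image.
    # projection_pairs: list of (descriptor, map_point_id) pairs kept
    # when the map points were projected onto the target plane.
    matches = {}
    for fp_desc in image_descriptors:
        for proj_desc, map_point_id in projection_pairs:
            if proj_desc == fp_desc:  # feature information is consistent
                matches[fp_desc] = map_point_id
                break
    return matches
```

Each matched descriptor leads back through its target feature information set to a map point, whose stored position information then feeds the positioning computation.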
After the target map point is determined, the positioning information of the mobile equipment is determined according to the position information of the target map point stored in the map.
It should be noted that, in order to improve the positioning accuracy, the positioning information of the mobile device may be determined after the matching number of the feature points and the map points in the image reaches a preset value.
Based on the same inventive concept, the disclosure also provides a positioning device. FIG. 5 is a block diagram illustrating a positioning device according to an exemplary embodiment. As shown in fig. 5, the positioning device may include:
a first acquisition module 51, configured to acquire a sequence of images captured by a vision sensor on a mobile device;
an extracting module 52, configured to extract, for each acquired image, feature information of the image; and
A matching module 53, configured to match feature information of the image with feature information of a projection point of a map point in a map on a target plane, where the target plane is a plane corresponding to a pose of a previous image of the image in the image sequence;
a first determining module 54, configured to determine a target map point corresponding to the projection point matched with the feature information of the image;
a second determining module 55, configured to determine the positioning information of the mobile device according to the location information of the target map point in the map.
Optionally, the apparatus may further include:
and the third determining module is used for determining the feature information of the projection point from the feature information set of the map point according to the corresponding relation between the projection point and the map point and the attitude angle in the pose of the previous image.
Optionally, the matching module 53 may include:
the acquisition submodule is used for acquiring the position information of a projection point of a map point on the target plane;
the first determining submodule is used for determining the projection point positioned in the preset area according to the position information of the projection point and the preset area, the preset area takes the position in the pose of the previous image as the center, and the size of the preset area is larger than that of the image;
and the matching submodule is used for matching the characteristic information of the image with the characteristic information of the projection point positioned in the preset area.
Optionally, the feature information of the image includes feature information of feature points of the image; the feature information of the projection point and the feature information set of the map point have a corresponding relation; the first determining module 54 may include:
the second determining submodule is used for determining the characteristic information of the projection point matched with the characteristic information of the characteristic point of the image;
the third determining submodule is used for determining a target characteristic information set corresponding to the characteristic information of the matched projection point according to the corresponding relation;
and the fourth determining submodule is used for determining the target map point according to the target characteristic information set.
Optionally, the apparatus may further include:
the second acquisition module is used for acquiring the position information of the map point and N frames of images containing pixel points corresponding to the map point, wherein the poses of the images corresponding to the N frames of images are different, and N is an integer greater than 1;
and the fourth determining module is used for determining the feature information set of the map point according to the feature information of the corresponding pixel point in the N frames of images.
Optionally, the fourth determining module may include:
a fifth determining submodule, configured to determine respective attitude angles of the N frames of images respectively;
and a sixth determining submodule, configured to determine a feature information set of the map point according to the pose angle of each frame of the image and the feature information of the corresponding pixel point, where the feature information set includes N pose angles and N feature information of the corresponding pixel points corresponding to the N pose angles, respectively.
Optionally, N is an integer greater than 2; the second obtaining module may include:
the seventh determining submodule is used for determining the first position information of the map point according to any two frames of images in the N frames of images;
the observation submodule is used for carrying out N times of observation on the map point according to the N frames of images so as to obtain N pieces of first position information of the map point;
an eighth determining submodule, configured to determine second location information of the map point according to the N pieces of first location information and the above equations (1) and (2);
and the ninth determining submodule is used for determining the second position information as the position information of the map point.
It is obvious to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. For the specific working process of the functional module, reference may be made to the corresponding process in the foregoing method embodiment, which is not described herein again.
Fig. 6 is a block diagram illustrating an electronic device 600 according to an example embodiment. The electronic device may be, for example, a robot device, a mobile helmet, or the like. As shown in fig. 6, the electronic device 600 may include: a processor 601 and a memory 602. The electronic device 600 may also include one or more of a multimedia component 603, an input/output (I/O) interface 604, and a communication component 605.
The processor 601 is configured to control the overall operation of the electronic device 600, so as to complete all or part of the steps in the positioning method. The memory 602 is used to store various types of data to support operation at the electronic device 600, such as instructions for any application or method operating on the electronic device 600 and application-related data, such as contact data, transmitted and received messages, pictures, audio, video, and so forth. The memory 602 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia components 603 may include a screen and audio components. The screen may be, for example, a touch screen, and the audio components are used for outputting and/or inputting audio signals. For example, an audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in the memory 602 or transmitted through the communication component 605. The audio components also include at least one speaker for outputting audio signals. The I/O interface 604 provides an interface between the processor 601 and other interface modules, such as a keyboard, a mouse, or buttons. These buttons may be virtual buttons or physical buttons. The communication component 605 is used for wired or wireless communication between the electronic device 600 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IOT, eMTC, or 5G, or a combination of one or more of them, which is not limited herein. The corresponding communication component 605 may therefore include a Wi-Fi module, a Bluetooth module, an NFC module, and the like.
In an exemplary embodiment, the electronic Device 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described positioning method.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the positioning method described above is also provided. For example, the computer readable storage medium may be the memory 602 described above comprising program instructions that are executable by the processor 601 of the electronic device 600 to perform the positioning method described above.
Fig. 7 is a block diagram illustrating an electronic device 700 in accordance with another example embodiment. For example, the electronic device 700 may be provided as a server. Referring to fig. 7, an electronic device 700 includes a processor 722, which may be one or more in number, and a memory 732 for storing computer programs that are executable by the processor 722. The computer programs stored in memory 732 may include one or more modules that each correspond to a set of instructions. Further, the processor 722 may be configured to execute the computer program to perform the above-described positioning method.
Additionally, the electronic device 700 may also include a power component 726 that may be configured to perform power management of the electronic device 700, and a communication component 750 that may be configured to enable communication, e.g., wired or wireless communication, of the electronic device 700. The electronic device 700 may also include an input/output (I/O) interface 758. The electronic device 700 may operate based on an operating system stored in the memory 732, such as Windows Server™, Mac OS X™, Unix™, Linux™, and the like.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the positioning method described above is also provided. For example, the computer readable storage medium may be the memory 732 described above including program instructions that are executable by the processor 722 of the electronic device 700 to perform the positioning method described above.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-mentioned positioning method when executed by the programmable apparatus.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, various possible combinations will not be separately described in this disclosure.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (10)

1. A method of positioning, comprising:
acquiring an image sequence shot by a visual sensor on the mobile equipment;
extracting feature information of each acquired image; and
Matching the characteristic information of the image with the characteristic information of a projection point of a map point in a map on a target plane, wherein the target plane is a plane corresponding to the pose of the image in the previous image in the image sequence;
determining a target map point corresponding to the projection point matched with the characteristic information of the image;
and determining the positioning information of the mobile equipment according to the position information of the target map point in the map.
2. The positioning method according to claim 1, wherein before said matching the feature information of the image with the feature information of the projection point of the map point in the map on the target plane, the method further comprises:
and determining the feature information of the projection point from the feature information set of the map point according to the corresponding relation between the projection point and the map point and the attitude angle in the pose of the previous image.
3. The positioning method according to claim 1, wherein the matching of the feature information of the image with the feature information of the projection point of the map point in the map on the target plane comprises:
acquiring position information of a projection point of a map point on the target plane;
determining the projection point located in a preset area according to the position information of the projection point and the preset area, wherein the preset area takes the position in the pose of the previous image as the center, and the size of the preset area is larger than that of the image;
and matching the characteristic information of the image with the characteristic information of the projection point positioned in the preset area.
4. The positioning method according to any one of claims 1 to 3, characterized in that the feature information of the image includes feature information of feature points of the image; the feature information of the projection point and the feature information set of the map point have a corresponding relation; the determining of the target map point corresponding to the projection point matched with the feature information of the image includes:
determining feature information of projection points matched with the feature information of the feature points of the image;
determining a target characteristic information set corresponding to the characteristic information of the matched projection point according to the corresponding relation;
and determining a target map point according to the target characteristic information set.
5. The method according to claim 1, wherein the map point is determined by:
acquiring position information of the map point and N frames of images containing pixel points corresponding to the map point, wherein the poses of the images corresponding to the N frames of images are different, and N is an integer greater than 1;
and determining a feature information set of the map points according to the feature information of the corresponding pixel points in the N frames of images.
6. The method according to claim 5, wherein said determining the feature information set of the map point according to the feature information of the corresponding pixel point in the N-frame image comprises:
respectively determining respective attitude angles of the N frames of images;
and determining a feature information set of the map points according to the attitude angle of each frame of image and the feature information of the corresponding pixel points, wherein the feature information set comprises N attitude angles and the feature information of N corresponding pixel points corresponding to the N attitude angles respectively.
7. The positioning method according to claim 5 or 6, wherein N is an integer greater than 2; the obtaining of the location information of the map point includes:
determining first position information of the map point according to any two frames of images in the N frames of images;
observing the map point for N times according to the N frames of images to obtain N pieces of first position information of the map point;
determining second position information of the map point according to the N pieces of first position information and the following formula:
‖A_i − Ā‖ ≤ d   (1)

Â = (1/W) · Σ_{i=1}^{N} λ_i · A_i   (2)

wherein the value range of i is [1, N]; A_i characterizes the first position information obtained by observing the map point for the i-th time and satisfying formula (1); d characterizes a preset numerical value; Ā characterizes the average position information of the N pieces of first position information of the map point; Â characterizes the second position information of the map point; W characterizes the number of A_i satisfying formula (1) among the N pieces of first position information; and λ_i characterizes the coefficient corresponding to A_i;
and determining the second position information as the position information of the map point.
8. A positioning device, comprising:
the first acquisition module is used for acquiring an image sequence shot by a visual sensor on the mobile equipment;
the extraction module is used for extracting the feature information of each acquired image; and
The matching module is used for matching the feature information of the image with the feature information of a projection point of a map point in a map on a target plane, wherein the target plane is a plane corresponding to the pose of the image in the previous image in the image sequence;
the first determining module is used for determining a target map point corresponding to the projection point matched with the characteristic information of the image;
and the second determining module is used for determining the positioning information of the mobile equipment according to the position information of the target map point in the map.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 7.
CN201910562336.XA 2019-06-26 2019-06-26 Positioning method, positioning device, readable storage medium and electronic equipment Active CN110361005B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910562336.XA CN110361005B (en) 2019-06-26 2019-06-26 Positioning method, positioning device, readable storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910562336.XA CN110361005B (en) 2019-06-26 2019-06-26 Positioning method, positioning device, readable storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN110361005A CN110361005A (en) 2019-10-22
CN110361005B true CN110361005B (en) 2021-03-26

Family

ID=68216595

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910562336.XA Active CN110361005B (en) 2019-06-26 2019-06-26 Positioning method, positioning device, readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110361005B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115039015A * 2020-02-19 2022-09-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Pose tracking method, wearable device, mobile device and storage medium
CN111627065B * 2020-05-15 2023-06-20 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Visual positioning method and device and storage medium
CN111780763B * 2020-06-30 2022-05-06 Hangzhou Hikrobot Technology Co., Ltd. Visual positioning method and device based on a visual map
CN112288817A * 2020-11-18 2021-01-29 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image-based three-dimensional reconstruction processing method and device
CN113015018B * 2021-02-26 2023-12-19 Shanghai SenseTime Intelligent Technology Co., Ltd. Bullet screen information display method, device and system, electronic equipment and storage medium
CN113435462B * 2021-07-16 2022-06-28 Beijing Baidu Netcom Science and Technology Co., Ltd. Positioning method, positioning device, electronic equipment and medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN108335328A * 2017-01-19 2018-07-27 Fujitsu Limited Camera pose estimation method and camera pose estimation device
CN109191504A * 2018-08-01 2019-01-11 Nanjing University of Aeronautics and Astronautics UAV target tracking method

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
CN104931057B * 2015-07-02 2018-07-27 Shenzhen LD Robot Co., Ltd. Arbitrary-position localization method, apparatus and system for a robot
CN105953796A * 2016-05-23 2016-09-21 Beijing Baofeng Mojing Technology Co., Ltd. Stable motion tracking method and device based on fusion of a smartphone monocular camera and IMU (inertial measurement unit)
US10212428B2 * 2017-01-11 2019-02-19 Microsoft Technology Licensing, Llc Reprojecting holographic video to enhance streaming bandwidth/quality
CN107784671B * 2017-12-01 2021-01-29 UISEE Technology (Beijing) Co., Ltd. Method and system for visual simultaneous localization and mapping
CN108398139B * 2018-03-01 2021-07-16 Beihang University Dynamic-environment visual odometry method fusing fisheye images and depth images
CN108805917B * 2018-05-25 2021-02-23 Hangzhou Yixian Advanced Technology Co., Ltd. Method, medium, apparatus and computing device for spatial localization
CN108776976B * 2018-06-07 2020-11-20 UISEE Technology (Beijing) Co., Ltd. Method, system and storage medium for simultaneous localization and mapping
CN108776492B * 2018-06-27 2021-01-26 University of Electronic Science and Technology of China Binocular-camera-based autonomous obstacle avoidance and navigation method for a quadcopter
CN109035334A * 2018-06-27 2018-12-18 Tencent Technology (Shenzhen) Co., Ltd. Pose determination method and apparatus, storage medium and electronic device
CN109073390B * 2018-07-23 2022-10-04 CloudMinds Robotics Co., Ltd. Positioning method and device, electronic equipment and readable storage medium
CN109166149B * 2018-08-13 2021-04-02 Wuhan University Localization and three-dimensional wireframe reconstruction method and system fusing a binocular camera and IMU
CN109242913B * 2018-09-07 2020-11-10 Baidu Online Network Technology (Beijing) Co., Ltd. Method, device, equipment and medium for calibrating relative parameters of a collector
CN109387204B * 2018-09-26 2020-08-28 Northeastern University Simultaneous localization and mapping method for a mobile robot in indoor dynamic environments


Also Published As

Publication number Publication date
CN110361005A (en) 2019-10-22

Similar Documents

Publication Publication Date Title
CN110361005B (en) Positioning method, positioning device, readable storage medium and electronic equipment
CN107223269B (en) Three-dimensional scene positioning method and device
CN108986161B (en) Three-dimensional space coordinate estimation method, device, terminal and storage medium
CN110568447B (en) Visual positioning method, device and computer readable medium
US10659768B2 (en) System and method for virtually-augmented visual simultaneous localization and mapping
CN111436208B (en) Planning method and device for mapping sampling points, control terminal and storage medium
CN110866977B (en) Augmented reality processing method, device, system, storage medium and electronic equipment
CN113436270B (en) Sensor calibration method and device, electronic equipment and storage medium
CN112556685B (en) Navigation route display method and device, storage medium and electronic equipment
CN112207821B (en) Target searching method of visual robot and robot
CN115035235A (en) Three-dimensional reconstruction method and device
CN112487979A (en) Target detection method, model training method, device, electronic device and medium
CN113959444A (en) Navigation method, device and medium for unmanned equipment and unmanned equipment
CN112270709A (en) Map construction method and device, computer readable storage medium and electronic device
WO2021217403A1 (en) Method and apparatus for controlling movable platform, and device and storage medium
WO2023088127A1 (en) Indoor navigation method, server, apparatus and terminal
WO2021051220A1 (en) Point cloud fusion method, device, and system, and storage medium
CN113378605A (en) Multi-source information fusion method and device, electronic equipment and storage medium
CN111738906B (en) Indoor road network generation method and device, storage medium and electronic equipment
KR102146839B1 (en) System and method for building real-time virtual reality
CN115019167B (en) Fusion positioning method, system, equipment and storage medium based on mobile terminal
CN111311491B (en) Image processing method and device, storage medium and electronic equipment
WO2022153910A1 (en) Detection system, detection method, and program
CN112148815B (en) Positioning method and device based on shared map, electronic equipment and storage medium
CN109118592B (en) AR presentation compensation effect realization method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210302

Address after: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant after: Dalu Robot Co., Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: Shenzhen Qianhaida Yunyun Intelligent Technology Co., Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Patentee after: Dayu Robot Co., Ltd.

Address before: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Patentee before: Dalu Robot Co., Ltd.
