CN114821386A - Four-legged robot posture accurate estimation method based on multiple sight vectors - Google Patents


Info

Publication number
CN114821386A
CN114821386A
Authority
CN
China
Prior art keywords
algorithm
vector
calculating
vectors
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210223638.6A
Other languages
Chinese (zh)
Inventor
贺亮
袁建平
陈建林
于洋
马川
宋婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Yunmu Zhizao Technology Co ltd
Original Assignee
Jiangsu Yunmu Zhizao Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Yunmu Zhizao Technology Co ltd filed Critical Jiangsu Yunmu Zhizao Technology Co ltd
Priority to CN202210223638.6A priority Critical patent/CN114821386A/en
Publication of CN114821386A publication Critical patent/CN114821386A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for accurately estimating the attitude of a quadruped robot based on multiple sight-line vectors, comprising the following steps: construct an integral image from a given image; compute the response of the Hessian matrix at each pixel under the current scale; compute the discriminant of the Hessian matrix and judge from its sign whether the point is an extremum; to avoid the signal aliasing caused by re-sampling the image, the Gaussian function must be discretely sampled, so second-order Gaussian filtering is replaced by a box-filter approximation computed rapidly with the integral image. The patent further improves the detection stage of the local feature operator: the improved detection kernel is decomposed into a sum of 4 independent box filters, and 16 coordinates are used to compute the filter response of the region. According to the invention, when GPS is unavailable, the attitude can be resolved rapidly from landmark features annotated with global or local position information, which facilitates the navigation, positioning and attitude determination of a quadruped robot in the field.

Description

Four-legged robot posture accurate estimation method based on multiple sight vectors
Technical Field
The invention relates to the technical field of robot navigation and positioning, and in particular to a method for accurately estimating the attitude of a quadruped robot based on multiple sight-line vectors.
Background
With the development of robotics, human exploration of space has become ever deeper, especially with the worldwide wave of lunar exploration driven by the United States' return-to-the-Moon program. The lunar surface, however, is rugged: lacking the protection of an atmosphere, it is continuously bombarded by meteorites of all kinds, whose impacts keep forming new craters, and the terrain also contains many unknown obstacles such as rocks and slopes. This poses a serious challenge to accurate autonomous landing for future unmanned or manned lunar missions, so an obstacle-detection method is needed that lets a spacecraft accurately avoid crater obstacles while meeting very strict landing-accuracy requirements.
In feature matching, various methods currently exist. The Harris [17] algorithm detects corners from the average rate of change between each pixel and its surrounding points, but its detection range is large and it is slow. The FAST algorithm improves on its speed and computes quickly, but lacks scale invariance, so its mismatch rate is high. Lowe et al. proposed the SIFT algorithm, which establishes a scale space by constructing a difference-of-Gaussians pyramid and builds a 128-dimensional feature vector as the descriptor; it handles scale, rotation, illumination and noise well and is widely used, but its real-time performance is poor. Herbert Bay optimized SIFT by introducing the integral image into the Hessian-matrix computation, reducing the amount of computation, yet real-time requirements still cannot be met. Because of the large speed variations during descent, these algorithms cannot satisfy a lander's timing-response requirements in terms of speed. Subsequently, Ethan Rublee et al. proposed the ORB algorithm at ICCV 2011, combining and optimizing FAST features with BRIEF description; BRIEF's binary-string descriptors reduce memory and greatly raise computation speed, but the method lacks scale invariance and detects poorly in regions with small gray-level variation.
In recent years, many ORB improvements have appeared; for example, one approach fuses SURF feature-point detection with ORB descriptors for image detection, and Liu et al. add a K-nearest-neighbor (KNN) step to ORB for coarse matching, which is faster than classical SIFT but whose results are not ideal. Later ORB refinements are often combined with the random sample consensus (RANSAC) algorithm, which can effectively eliminate erroneous points, but the number of RANSAC iterations has no upper bound and a problem-specific threshold must be set. Kiduck Kim matches meteor craters using the projective-invariant features of five coplanar points: on a crater image, 4 groups of 5 coplanar points are obtained from the intersections of two ellipses, 2 invariants are computed for each group, and a simple voting algorithm matches image craters against database craters. However, existing crater target-detection methods cannot adapt to multi-scale changes of the target, so the target-recognition error rate is high and detection accuracy needs improvement.
The known SURF algorithm comprises three basic steps: (1) feature extraction in scale space, (2) orientation assignment, and (3) feature description. The characteristic pixel-response template of the nonlinear SURF detector is shown in fig. 1; its essence is to replace the second-order DoG of the SIFT algorithm with an integral template: the detection kernel of fig. 1 is treated as the prototype feature the SURF algorithm detects. To locate such features reliably in an image, a convolution-window sweep is performed with the SURF detection kernel, i.e., the local scale-invariant features most similar to the given detection kernel are selected as attitude-determination features.
As shown in fig. 2, in the second-order Hessian computation template of the SURF algorithm, the black points on the pattern boundaries are the access points from which features are extracted, so the templates must be accessed 32 times to compute a single feature point, requiring in total 30 additions/subtractions and 5 multiplications. Reducing the number of template accesses and the amount of computation is thus a direction for improving real-time recognition and detection accuracy.
Disclosure of Invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art. The first purpose of the present invention is therefore to provide a method for accurately estimating the attitude of a quadruped robot based on multiple sight-line vectors, so that when GPS is unavailable the attitude can be resolved quickly from landmark features annotated with global or local position information, facilitating the navigation, positioning and attitude determination of a quadruped robot in the field.
In order to achieve the purpose, the invention is realized by the following technical scheme:
a four-footed robot attitude accurate estimation method based on multiple sight line vectors comprises the following steps:
S1, construct an integral image I_Σ from a given image I:
I_Σ(x) = Σ_{i=0..x} Σ_{j=0..y} I(i, j)
where x = (x, y) is a pixel in image I; compute the response of the Hessian matrix at this point under scale σ;
S2, compute the discriminant of the Hessian matrix and judge from its sign whether the point is an extremum;
S3, replace second-order Gaussian filtering with a box-filter approximation, and compute the filter response using the integral image;
S4, compare the response values across scales, and match the detected features against the same-name feature points recorded in the database to obtain sight-line vectors;
S5, compute the attitude-determination parameters from the sight-line vectors.
Wherein: the box filter in step S3 is formed by superposing 4 independent box filters; the pattern formed by their superposition matches the SURF detection-kernel pattern, and the pattern on each independent box filter has 4 boundary coordinate points. The 16 coordinate points of the 4 superposed box filters express the coordinates of every boundary point of the SURF detection-kernel pattern, so the 4 independent box filters can be evaluated by linear superposition over these 16 coordinate points, replacing the template of the original SURF algorithm.
Further, the 4 independent box filters comprise 1 pair of first box filters and 1 pair of second box filters; the pair of first box filters have identical patterns and orthogonal orientations, and the pair of second box filters likewise have identical patterns and orthogonal orientations.
Preferably, in step S5, when the attitude is resolved, a main/auxiliary improved multi-vector attitude-determination algorithm is developed by taking the gravity vector obtained from the IMU as the main vector and combining it with the multiple sight-line vectors. The algorithm is based on the classical Wahba problem, namely:
L(M_A) = (1/2) Σ_i a_i ‖b_i − M_A·r_i‖²
Taking the gravity vector b_g = M_A·r_g as the dominant vector and the other vectors as auxiliary vectors, one obtains:
[Equation image: main/auxiliary weighted formulation of the attitude cost]
Combining this with the theoretical derivation yields the attitude-estimation equation based on the main/auxiliary vectors:
[Equation image: main/auxiliary attitude-estimation equation]
wherein,
[Equation images: definitions of the intermediate quantities in the attitude-estimation equation]
thus, the formula of the attitude parameter is obtained:
[Equation image: closed-form formula for the attitude parameters]
the invention has at least one of the following technical effects:
(1) With the improved detector response, each pixel requires only 15 additions/subtractions and 2 multiplications, versus 30 additions/subtractions and 5 multiplications for the SURF algorithm, so the improvement over the original SURF algorithm offers a clear real-time advantage. In addition, the original SURF operator must access the image 32 times, while the improved algorithm needs only 16 accesses; the lower access count in actual operation reduces the risk of cache misses, bringing an additional speed-up and improving the reliability of image processing;
(2) After Fast-SURF based detection and matching, multiple sight-line guide vectors can be generated; considering real field application scenes, mismatches and missed matches are unavoidable in visual information processing;
(3) The method proposed in this patent resembles the QUEST algorithm but differs markedly in the vector-processing part, mainly to account for the characteristics of field-environment positioning: the gravity vector is more accurate and easier to acquire than the sight-line vectors.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is the detection kernel of the SURF algorithm described in the present invention, an efficient substitute for the DoG operator;
FIG. 2 is a Hessian calculation template in the original SURF algorithm;
FIG. 3 is a schematic diagram of an improved block filter according to the present invention;
fig. 4 is a schematic diagram of local feature detection in a real scene according to an embodiment of the present invention;
fig. 5 is a schematic diagram of local feature matching in a real scene according to an embodiment of the present invention;
fig. 6 is a schematic diagram of mismatches in local-feature matching in a real scene according to an embodiment of the present invention;
fig. 7 is a comparison result of multi-vector pose simulation according to an embodiment of the present invention.
In FIG. 7: 1 - heading angle; 2 - pitch angle; 3 - roll angle.
Detailed Description
Reference will now be made in detail to the present embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The following describes a method for accurately estimating the attitude of a quadruped robot based on multiple sight-line vectors according to the embodiment, with reference to the drawings.
The image local-feature extraction and matching algorithm is studied to provide a reference for multi-sight-vector generation. Taking the SURF feature operator as a blueprint, a computationally light local-feature description operator is developed. As a light-weight counterpart of the SIFT local-feature extraction operator, SURF markedly reduces the computational requirement without sacrificing accuracy while retaining feature-scale adaptability, which fits the application background of a field quadruped robot well.
From a given image I, SURF constructs an integral image I_Σ as follows:
I_Σ(x) = Σ_{i=0..x} Σ_{j=0..y} I(i, j)    (1)
where x = (x, y) is a pixel in image I. The response of the Hessian matrix at this point under scale σ is then:
H(x, σ) = [ L_xx(x, σ)  L_xy(x, σ) ; L_xy(x, σ)  L_yy(x, σ) ]    (2)
where L_xx(x, σ) is the convolution of the second-order Gaussian derivative ∂²g(σ)/∂x² with the image I at that point:
L_xx(x, σ) = I(x) * ∂²g(σ)/∂x²
L_xy(x, σ) and L_yy(x, σ) are obtained similarly.
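The construction in equation (1) is what makes box filtering cheap: any rectangular sum costs 4 lookups regardless of size. A minimal NumPy sketch (function names are ours, not the patent's) of the integral image and the 4-lookup rectangle sum it enables:

```python
import numpy as np

def integral_image(img):
    # I_sigma(x, y): sum of img[0..y, 0..x] inclusive; a zero row and
    # column are prepended so rectangle sums need no boundary cases.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, top, left, bottom, right):
    # Sum over img[top..bottom, left..right] in 4 lookups and 3
    # additions/subtractions, independent of the rectangle size.
    return (ii[bottom + 1, right + 1] - ii[top, right + 1]
            - ii[bottom + 1, left] + ii[top, left])
```

This constant-time rectangle sum is exactly what the box-filter approximation of the next paragraph relies on.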
The discriminant of the Hessian matrix is then computed, and the sign of the discriminant determines whether the point is an extremum. In practice, to avoid the signal aliasing caused by re-sampling the image, the Gaussian function must be discretely sampled; box-filter approximations can replace the second-order Gaussian filters and be computed rapidly with the integral image. The responses of the box-filter templates convolved with the image are denoted D_xx, D_xy and D_yy respectively. Solving further, an approximation of the Hessian determinant is obtained:
det(H_approx) = D_xx·D_yy − (ω·D_xy)²    (3)
where ω is a weight introduced to balance the error between the exact values L and the approximate values D; its value varies with the scale.
Using D_SURF in place of det(H_approx) to denote the detector response, equation (3) gives:
D_SURF = D_xx·D_yy − (ω·D_xy)²    (4)
The above equation is the approximate Hessian response of the conventional SURF operator, and it substitutes for the detection response of the DoG operator in the SIFT algorithm.
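Equation (4) reduces the detector response to a handful of arithmetic operations once the box responses are known. A minimal sketch, assuming the D values have already been obtained from integral-image box sums at the current scale (the default ω below is illustrative; the patent only states that ω varies with scale):

```python
def surf_response(d_xx, d_yy, d_xy, omega=0.9):
    # D_SURF = Dxx*Dyy - (omega*Dxy)^2, equation (4); omega balances
    # the box-filter approximation against true Gaussian derivatives.
    return d_xx * d_yy - (omega * d_xy) ** 2
```

A pixel is kept as a candidate feature when this response is a positive local extremum across neighboring scales.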
The characteristic pixel-response template of the nonlinear SURF detector is shown in FIG. 1; its essence is to replace the second-order DoG of the SIFT algorithm with an integral template: the detection kernel of FIG. 1 is treated as the prototype feature the SURF algorithm detects. To locate such features reliably in an image, a convolution-window sweep is performed with the SURF detection kernel, i.e., the local scale-invariant features most similar to the given detection kernel are selected as attitude-determination features. The detection kernel has marked symmetry and is additive in nature with the difference-of-Gaussians (DoG) filter used in SIFT, meaning it can be regarded both as a linearized version of SURF and as a non-uniformly quantized version of SIFT.
Therefore, in this embodiment, the original detection operator is improved as follows:
As shown in fig. 3, the left, middle and right panels of fig. 3 are, respectively, the image pixel positions, the detection frames, and the detection-point positions. The improved detection kernel is decomposed into a sum of 4 independent box filters, where the 16 coordinates in the rightmost panel are used to compute the filter response of the region; this replaces the second-order Hessian computation template of the original SURF operator. That is, in this embodiment the box filter of the SURF algorithm is formed by superposing 4 independent box filters; the pattern formed by their superposition matches the SURF detection-kernel pattern, and the pattern on each independent box filter has 4 boundary coordinate points. The 16 coordinate points of the 4 superposed box filters express the coordinates of every boundary point of the SURF detection-kernel pattern, so the 4 independent box filters can be evaluated by linear superposition over these 16 coordinate points, replacing the template of the original SURF algorithm.
In contrast to the computation template of FIG. 2, and as shown in the middle panel of FIG. 3, the 4 independent box filters comprise 1 pair of first box filters and 1 pair of second box filters; each pair has identical patterns and orthogonal orientations. The number of templates is therefore reduced to 2, and the original template can be replaced by linear superposition. Specifically, the improved algorithm of this embodiment (hereinafter Fast-SURF) is computed as follows:
[Equation image (5): the Fast-SURF response as a linear superposition over the 16 coordinates a_1 … d_1 of FIG. 3]
where I_Σ is the integral image defined in equation (1) and a_1 through d_1 are the scale-specific coordinates labeled in fig. 3. So that response values can be compared across scales, a normalization by the constant s² must be applied, where s is the filter size determined from the current scale.
It can be seen that, when computing the improved detector response in equation (5), each pixel requires only 15 additions/subtractions and 2 multiplications, whereas the SURF algorithm requires 30 additions/subtractions and 5 multiplications in total, so the improvement offers a clear real-time advantage over SURF. In addition, the original SURF algorithm accesses the image 32 times, while the improved algorithm needs only 16 such accesses; the lower access frequency in actual operation also reduces the risk of cache misses, bringing an additional speed-up and improving the reliability of image processing.
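The 16-lookup superposition of equation (5) can be sketched as follows. The true corner offsets a_1 … d_1 and the filter weights come from FIG. 3 and are not recoverable from the text, so the boxes and weights below are purely illustrative; only the operation structure (16 lookups, 15 additions/subtractions, one s² normalization) reflects the description:

```python
import numpy as np

def fast_surf_response(ii, boxes, weights, s):
    # Linear superposition of 4 box sums over an integral image ii
    # (with a prepended zero row/column): 16 lookups in total, plus
    # the s**2 normalization so responses compare across scales.
    # `boxes` holds (top, left, bottom, right) per filter; the real
    # offsets correspond to a_1..d_1 in FIG. 3 and are not public.
    total = 0.0
    for (t, l, b, r), w in zip(boxes, weights):
        total += w * (ii[b + 1, r + 1] - ii[t, r + 1]
                      - ii[b + 1, l] + ii[t, l])
    return total / (s * s)
```

With opposite-signed weights on a constant image the box sums cancel, mirroring how the detection kernel responds only to structured regions.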
The improved algorithm in this embodiment thus achieves additional acceleration by linearizing the SURF operator while preserving its detection properties. On this basis, a multi-feature-vector screening and extraction method is established according to the local-feature sight directions.
Tests were carried out in a real scene and on a teleoperation software simulation platform; feature-vector extraction under the teleoperation platform is illustrated in figs. 4-6. Supported by a large number of simulated images, the detection performance of the local-feature extraction operator proposed in this embodiment was verified: repeatability and localization accuracy do not differ significantly from the original SURF algorithm (error below 0.01% over 1000 trials), while the run time of Fast-SURF is on average about 38% lower than that of the SURF algorithm, confirming the algorithm's advantage in real-time performance.
After the feature-detection result is obtained, the features must be matched against the same-name feature points recorded in the database so that sight-line vectors can be obtained for attitude determination; the conventional SURF matching scheme is adopted here and is not detailed.
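As one common realization of that conventional matching scheme (the patent does not detail its exact variant), nearest-neighbour descriptor matching with Lowe's ratio test can be sketched as follows, assuming NumPy and row-wise descriptor arrays:

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.7):
    # Conventional nearest-neighbour matching with Lowe's ratio test:
    # a match is accepted only when the best distance is clearly
    # smaller than the second best, which suppresses ambiguous pairs.
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches
```

Each accepted pair (image feature, database feature) yields one sight-line vector for the attitude solver.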
After Fast-SURF based detection and matching, multiple sight-line guide vectors can be generated; considering real field application scenes, mismatches and missed matches are unavoidable in visual information processing. Therefore, when the quadruped robot's attitude is resolved, the sight-line vectors are not all given equal confidence; the attitude is determined flexibly according to the richness of the scene texture. Following this line of research, a main/auxiliary improved multi-vector attitude-determination algorithm is developed by taking the gravity vector obtained from the IMU as the main vector and combining it with the multiple sight-line vectors. The algorithm is based on the classical Wahba problem, namely:
L(M_A) = (1/2) Σ_i a_i ‖b_i − M_A·r_i‖²
Taking the gravity vector b_g = M_A·r_g as the dominant vector and the other vectors as auxiliary vectors:
[Equation image: main/auxiliary weighted formulation of the attitude cost]
Combining this with the theoretical derivation yields the attitude-estimation equation based on the main/auxiliary vectors:
[Equation image: main/auxiliary attitude-estimation equation]
wherein
[Equation images: definitions of the intermediate quantities in the attitude-estimation equation]
The proposed method resembles the QUEST algorithm but differs markedly in the vector-processing part; the distinctive background of lunar-surface and field localization is the main consideration here: the gravity vector is more accurate and easier to obtain than a sight-line vector. This engineering consideration was therefore built into the algorithm design, the design of the multi-vector algorithm was completed, and it was compared with the classical QUEST algorithm in numerical simulation; the simulation results also reflect the algorithm's advantages:
as shown in FIG. 7, compared to QUEST, the estimation error of the present implemented algorithm is on the left and the original QUEST algorithm is on the right.
The algorithm framework has been implemented under the Visual Studio platform and preliminarily tested in semi-physical simulation; combined with the feature extraction and matching algorithms, overall attitude estimation takes about 0.5 seconds, meeting the real-time requirement.
It should be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be determined from the following claims.

Claims (3)

1. A method for accurately estimating the attitude of a quadruped robot based on multiple sight-line vectors, comprising the following steps:
s1: from a given image I, an integral image I is constructed
Figure FDA0003538413930000011
Wherein x is (x, y) is a pixel point in the image I; calculating the response of the Hessian matrix of the point under the scale sigma;
s2: calculating a discriminant of the Hessian matrix, and judging whether the point is an extreme point according to the positive and negative of the discriminant;
s3: using a square filtering approximation to replace second-order Gaussian filtering, and calculating a filtering response by using an integral image;
s4: comparing the response values on each scale, matching the obtained feature detection result with the homonymous feature points recorded into the database, and obtaining a sight vector;
s5: calculating attitude determination parameters through the sight line vectors;
the method is characterized in that: the block filter in step S3 is formed by overlapping 4 independent block filters, a pattern formed by overlapping 4 independent block filters is adapted to the SURF algorithm detection kernel pattern, and the pattern on each independent block filter has 4 boundary coordinate points; 16 coordinate points after the 4 box filters are superposed can express coordinates of each boundary point in the SURF algorithm detection kernel pattern; the 4 independent box filters can be calculated by linear superposition using the 16 coordinate points instead of the template of the original SURF algorithm.
2. The method for accurately estimating the attitude of a quadruped robot based on multiple sight-line vectors according to claim 1, characterized in that: the 4 independent box filters comprise 1 pair of first box filters and 1 pair of second box filters; the pair of first box filters have identical patterns and orthogonal orientations, and the pair of second box filters likewise have identical patterns and orthogonal orientations.
3. The method for accurately estimating the attitude of a quadruped robot based on multiple sight-line vectors according to claim 1 or 2, characterized in that: in step S5, when the attitude is resolved, a main/auxiliary improved multi-vector attitude-determination algorithm is developed by taking the gravity vector obtained from the IMU as the main vector and combining it with the multiple sight-line vectors. The algorithm is based on the classical Wahba problem, namely:
L(M_A) = (1/2) Σ_i a_i ‖b_i − M_A·r_i‖²
Taking the gravity vector b_g = M_A·r_g as the dominant vector and the other vectors as auxiliary vectors, one obtains:
[Equation image: main/auxiliary weighted formulation of the attitude cost]
Combining this with the theoretical derivation yields the attitude-estimation equation based on the main/auxiliary vectors:
[Equation image: main/auxiliary attitude-estimation equation]
wherein,
[Equation images: definitions of the intermediate quantities in the attitude-estimation equation]
thus, the formula of the attitude parameter is obtained:
[Equation image: closed-form formula for the attitude parameters]
cos φ = p/ν,  sin φ = q/ν
[Equation image: continuation of the attitude-parameter formula]
CN202210223638.6A 2022-03-09 2022-03-09 Four-legged robot posture accurate estimation method based on multiple sight vectors Withdrawn CN114821386A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210223638.6A CN114821386A (en) 2022-03-09 2022-03-09 Four-legged robot posture accurate estimation method based on multiple sight vectors


Publications (1)

Publication Number Publication Date
CN114821386A (en) 2022-07-29

Family

ID=82528715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210223638.6A Withdrawn CN114821386A (en) 2022-03-09 2022-03-09 Four-legged robot posture accurate estimation method based on multiple sight vectors

Country Status (1)

Country Link
CN (1) CN114821386A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117235604A (en) * 2023-11-09 2023-12-15 江苏云幕智造科技有限公司 Deep learning-based humanoid robot emotion recognition and facial expression generation method


Similar Documents

Publication Publication Date Title
CN111028277B (en) SAR and optical remote sensing image registration method based on pseudo-twin convolution neural network
CN107833249B (en) Method for estimating attitude of shipboard aircraft in landing process based on visual guidance
CN111324145B (en) Unmanned aerial vehicle autonomous landing method, device, equipment and storage medium
Jiang et al. Performance evaluation of feature detection and matching in stereo visual odometry
CN112883850B (en) Multi-view space remote sensing image matching method based on convolutional neural network
GB2526342A (en) Point cloud matching method
CN104574401A (en) Image registration method based on parallel line matching
Müller et al. Squeezeposenet: Image based pose regression with small convolutional neural networks for real time uas navigation
CN113295171A (en) Monocular vision-based attitude estimation method for rotating rigid body spacecraft
CN109871024A (en) A kind of UAV position and orientation estimation method based on lightweight visual odometry
CN117218350A (en) SLAM implementation method and system based on solid-state radar
CN117029817A (en) Two-dimensional grid map fusion method and system
Zhao et al. Visual odometry-A review of approaches
CN114821386A (en) Four-legged robot posture accurate estimation method based on multiple sight vectors
Guan et al. Relative pose estimation for multi-camera systems from affine correspondences
CN113822996A (en) Pose estimation method and device for robot, electronic device and storage medium
CN117664124A (en) Inertial guidance and visual information fusion AGV navigation system and method based on ROS
Xu et al. Object detection on robot operation system
Brink Stereo vision for simultaneous localization and mapping
Guan et al. GPS-aided recognition-based user tracking system with augmented reality in extreme large-scale areas
Wan et al. A performance comparison of feature detectors for planetary rover mapping and localization
Abratkiewicz et al. The concept of applying a SIFT algorithm and orthophotomaps in SAR-based augmented integrity navigation systems
CN113570667A (en) Visual inertial navigation compensation method and device and storage medium
Liu et al. An RGB‐D‐Based Cross‐Field of View Pose Estimation System for a Free Flight Target in a Wind Tunnel
Istighfarin et al. Leveraging Spatial Attention and Edge Context for Optimized Feature Selection in Visual Localization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220729