CN117475092B - Pose optimization method, pose optimization device, smart device and medium - Google Patents


Info

Publication number
CN117475092B
CN117475092B (application CN202311817382.2A)
Authority
CN
China
Prior art keywords
surfel feature
global
determining
feature
primitive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311817382.2A
Other languages
Chinese (zh)
Other versions
CN117475092A (en)
Inventor
白昕晖
路超
孙立
袁弘渊
任少卿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Weilai Zhijia Technology Co Ltd
Original Assignee
Anhui Weilai Zhijia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Weilai Zhijia Technology Co Ltd filed Critical Anhui Weilai Zhijia Technology Co Ltd
Priority to CN202311817382.2A
Publication of CN117475092A
Application granted
Publication of CN117475092B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Geometry (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a pose optimization method, device, smart device, and medium, comprising: extracting keyframes of point cloud data from a plurality of point cloud data frames; generating a global bundle adjustment factor according to the keyframes; constructing a factor graph according to the global bundle adjustment factor; and optimizing the poses of the keyframes based on the factor graph to obtain the optimized keyframe poses. In this way, when the three-dimensional scene is subsequently built, the poses and the surfel (surface element) features of pairwise point cloud data frames are adjusted based on the global bundle adjustment factor, so that the surfels of multiple keyframes ultimately fall on the same surfel, ghosting is avoided, and the quality of the three-dimensional scene is improved.

Description

Pose optimization method, pose optimization device, smart device and medium
Technical Field
The present application relates to the technical field of object detection, and in particular provides a pose optimization method, device, smart device, and medium.
Background
In constructing a three-dimensional scene from point cloud data frames collected over multiple sessions, when the target elements corresponding to point cloud data frames of different sessions repeatedly visit the same location, conventional loop closure detection registers the point cloud data between loop frames after each detected loop. However, point cloud registration has errors; when the number of sessions is large, the same location produces many loop frames between different sessions, and the pairwise registration errors accumulate, so the finally constructed three-dimensional scene exhibits ghosting and is of poor quality.
Disclosure of Invention
To overcome the above defects, the present application provides a pose optimization method, device, smart device, and medium that solve, or at least partially solve, the technical problem that errors in pairwise point cloud data registration accumulate and cause the finally constructed three-dimensional scene to exhibit ghosting and poor quality.
In a first aspect, the present application provides a pose optimization method, the pose optimization method comprising:
extracting key frames of point cloud data from a plurality of point cloud data frames;
generating a global bundle adjustment factor according to the keyframes, the global bundle adjustment factor being a factor for adjusting the poses and surfel (surface element) features of pairwise point cloud data frames;
constructing a factor graph according to the global bundle adjustment factors;
and optimizing the pose of the key frame based on the factor graph to obtain the optimized pose of the key frame.
Further, in the pose optimization method, generating a global bundle adjustment factor according to the keyframes includes:
extracting a first surfel feature of the keyframe;
detecting whether the first surfel feature is contained in a global surfel feature sequence;
if the first surfel feature is contained in the global surfel feature sequence, establishing a point-to-plane constraint term from the first surfel to the second surfel as the global bundle adjustment factor; wherein the constraint term requires that the distance from each point of the first surfel to the plane of the second surfel be smaller than a first preset distance; the first surfel is the surfel corresponding to the keyframe, and the second surfel is the surfel that contains the first surfel feature among the plurality of surfels corresponding to the global surfel feature sequence.
Further, the pose optimization method further includes:
if the first surfel feature is not contained in the global surfel feature sequence, inserting the first surfel feature into the global surfel feature sequence.
Further, in the pose optimization method, detecting whether the first surfel feature is contained in a global surfel feature sequence includes:
if the first surfel feature comprises a first normal vector, determining the vector difference between the first normal vector and each second normal vector; wherein each second normal vector is the normal vector corresponding to a second surfel feature in the global surfel feature sequence;
if at least one vector difference is smaller than or equal to a preset difference value, determining that at least one second surfel feature matches the first surfel feature, and determining that the first surfel feature is contained in the global surfel feature sequence;
and if all the vector differences are larger than the preset difference value, determining that no second surfel feature matches the first surfel feature, and determining that the first surfel feature is not contained in the global surfel feature sequence.
Further, in the pose optimization method, detecting whether the first surfel feature is contained in a global surfel feature sequence includes:
if the first surfel feature comprises a first coordinate mean corresponding to a plurality of surfel point coordinates, determining the distance between the first coordinate mean and each second coordinate mean; wherein each second coordinate mean is the coordinate mean corresponding to a second surfel feature in the global surfel feature sequence;
if at least one distance is smaller than or equal to a second preset distance, determining that at least one second surfel feature matches the first surfel feature, and determining that the first surfel feature is contained in the global surfel feature sequence;
and if all the distances are larger than the second preset distance, determining that no second surfel feature matches the first surfel feature, and determining that the first surfel feature is not contained in the global surfel feature sequence.
Further, in the pose optimization method, detecting whether the first surfel feature is contained in a global surfel feature sequence includes:
if the first surfel feature comprises a first normal vector and a first coordinate mean corresponding to a plurality of surfel point coordinates, determining the vector difference between the first normal vector and each second normal vector, and determining the distance between the first coordinate mean and each second coordinate mean; wherein each second normal vector is the normal vector corresponding to a second surfel feature in the global surfel feature sequence, and each second coordinate mean is the coordinate mean corresponding to that second surfel feature;
determining the similarity between the first surfel feature and each second surfel feature according to the vector difference and the distance;
if at least one similarity is greater than or equal to a preset similarity, determining that at least one second surfel feature matches the first surfel feature, and determining that the first surfel feature is contained in the global surfel feature sequence;
and if all the similarities are smaller than the preset similarity, determining that no second surfel feature matches the first surfel feature, and determining that the first surfel feature is not contained in the global surfel feature sequence.
Further, in the pose optimization method, extracting keyframes of point cloud data from a plurality of point cloud data frames includes:
performing loop closure detection on the point cloud data frames to obtain a plurality of loop frames of point cloud data;
determining, for any one of the plurality of loop frames, a state change value of a preset target; wherein the state change value is the difference between a first state value of the preset target in that loop frame and a second state value of the preset target in a reference loop frame;
if the state change value is larger than a preset threshold, determining that the loop frame is a keyframe;
and if the state change value is smaller than or equal to the preset threshold, determining that the loop frame is not a keyframe.
In a second aspect, the present application provides a pose optimization device comprising a processor and a storage apparatus adapted to store a plurality of program codes, the program codes being adapted to be loaded and run by the processor to perform any of the pose optimization methods described above.
In a third aspect, a smart device is provided, which may comprise the pose optimization device described above.
In a fourth aspect, there is provided a computer-readable storage medium storing a plurality of program codes adapted to be loaded and executed by a processor to perform any of the pose optimization methods described above.
The above technical scheme has at least one or more of the following beneficial effects:
In the technical scheme of the application, keyframes of point cloud data are extracted from a plurality of point cloud data frames; after a global bundle adjustment factor is generated according to the keyframes, a factor graph is constructed according to the global bundle adjustment factor, and the poses of the keyframes are optimized based on the constructed factor graph to obtain the optimized keyframe poses. In this way, when the three-dimensional scene is subsequently built, the poses and the surfel features of pairwise point cloud data frames are adjusted based on the global bundle adjustment factor, so that the surfels of multiple keyframes ultimately fall on the same surfel, ghosting is avoided, and the quality of the three-dimensional scene is improved.
Drawings
The disclosure of the present application will become more readily understood with reference to the accompanying drawings. Those skilled in the art will readily appreciate that these drawings are for illustrative purposes only and are not intended to limit the scope of the present application. Like numerals in the figures designate like parts, wherein:
FIG. 1 is a schematic architecture diagram of a vehicle cloud connection scenario;
FIG. 2 is a flow chart of the main steps of a pose optimization method according to an embodiment of the present application;
FIG. 3 is a flow diagram of generating a global bundle adjustment factor;
FIG. 4 is a flow diagram of an overall implementation of the pose optimization method of the present application;
FIG. 5 is a schematic flow chart of a specific implementation corresponding to FIG. 4;
FIG. 6 is a block diagram of the main structure of the pose optimization device according to an embodiment of the present application.
Detailed Description
Some embodiments of the present application are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present application, and are not intended to limit the scope of the present application.
In the description of the present application, the terms "module" and "processor" may include hardware, software, or a combination of both. A module may comprise hardware circuitry, various suitable sensors, communication ports, and memory, or software components such as program code, or a combination of software and hardware. The processor may be a central processing unit, a microprocessor, an image processor, a digital signal processor, or any other suitable processor. The processor has data and/or signal processing functions, and may be implemented in software, hardware, or a combination of both. Non-transitory computer-readable storage media include any suitable medium that can store program code, such as magnetic disks, hard disks, optical disks, flash memory, read-only memory, and random access memory. The term "A and/or B" denotes all possible combinations of A and B, namely A alone, B alone, or A and B. The term "at least one A or B" or "at least one of A and B" has a meaning similar to "A and/or B" and may include A alone, B alone, or A and B. The singular forms "a", "an", and "the" include plural referents.
Referring to fig. 1, fig. 1 is a schematic architecture diagram of a vehicle-cloud connection scenario. As shown in fig. 1, the scenario may include a plurality of autonomous vehicles and a cloud, defined as follows:
Autonomous vehicle: collects target elements acquired during the user's driving and uploads them to the cloud.
Cloud: across multiple sessions with the autonomous vehicles, extracts feature information from the target elements uploaded by the autonomous vehicles and finally constructs a three-dimensional scene.
In one example, the cloud may collect vehicle data after the vehicle owner installs the relevant APP on the vehicle, or the vehicle may upload its data to the cloud through a relevant program installed after the owner accepts the vehicle data collection agreement; this application is not limited in this respect.
In one example, in constructing a three-dimensional scene based on point cloud data frames of multiple sessions, when the trajectories corresponding to point cloud data frames of different sessions repeatedly visit the same location, the cloud uses loop closure detection and registers the point cloud data between loop frames after each detected loop. However, because point cloud registration has errors, when the number of sessions is large, the same location produces many loop frames between different sessions, and the pairwise registration errors accumulate, so the finally constructed three-dimensional scene exhibits ghosting and is of poor quality.
Therefore, in order to solve the above technical problems, the present application provides the following technical solutions:
referring to fig. 2, fig. 2 is a schematic flow chart of main steps of a pose optimization method according to an embodiment of the present application. As shown in fig. 2, the pose optimization method in the embodiment of the present application mainly includes the following steps 201 to 204.
Step 201, extracting key frames of point cloud data from a plurality of point cloud data frames;
in a specific implementation process, loop detection can be performed on the point cloud data frames to obtain loop frames of a plurality of point cloud data; determining a state change value of a preset target aiming at any one of the loop back frames of the point cloud data; if the state change value is larger than a preset threshold value, determining that the loop frame of any point cloud data is a key frame of the point cloud data; and if the state change value is smaller than or equal to the preset threshold value, determining that the loop frame of any point cloud data is not a key frame of the point cloud data. The state change value is a difference value between a first state value of the preset target in a loop frame of the arbitrary point cloud data and a second state value of the preset target in a loop frame of the reference point cloud data. The loop frame of the reference point cloud data may be a key frame of each detected point cloud data, where the key frame of the detected point cloud data is located before and is temporally closest to the loop frame of any point cloud data. That is, after each time a key frame of one point cloud data is detected, the key frame of the point cloud data detected at the time can be used as a loop frame of reference point cloud data, and then the loop frames of other point cloud data are traversed backwards until all the key frames are obtained.
In a specific implementation process, the first state value of the preset target may only include a first position of the preset target, the second state value of the preset target may include a second position of the preset target, after determining a position difference between the first position and the second position, when the position difference is greater than a preset position difference in a preset threshold, determining that a loop frame of any point cloud data is the key frame, and otherwise, when the position difference is less than or equal to the preset position difference in the preset threshold, determining that the loop frame of any point cloud data is not the key frame.
In a specific implementation process, the first state value of the preset target may only include a first angle of the preset target, the second state value of the preset target may include a second angle of the preset target, after determining an angle difference value between the first angle and the second angle, when the angle difference value is greater than a preset angle difference value in a preset threshold, determining that a loop frame of any point cloud data is the key frame, and otherwise, when the angle difference value is less than or equal to the preset angle difference value in the preset threshold, determining that the loop frame of any point cloud data is not the key frame.
In a specific implementation process, the first state value of the preset target may include a first position of the preset target and a first angle of the preset target at the same time, and the second state value of the preset target may include a second position of the preset target and a second angle of the preset target, where a position difference between the first position and the second position and an angle difference between the second angle and the second angle may be obtained.
If both the two are larger than the corresponding preset threshold, determining that the loop frame of any point cloud data is the key frame; or if only one of the two is greater than the corresponding preset threshold value, obtaining a first weighted value after weighted summation, comparing the first weighted value with the first preset weighted threshold value, and if the first weighted value is greater than the first preset weighted threshold value, determining that the loop frame of any point cloud data is the key frame.
If the two are smaller than or equal to the corresponding preset threshold, determining that the loop frame of any point cloud data is not the key frame; or if only one of the two is smaller than or equal to the corresponding preset threshold value, the first weighting value can be obtained after the weighted summation, the first weighting value is compared with the first preset weighting threshold value, and if the first weighting value is smaller than or equal to the first preset weighting threshold value, the loop frame of any point cloud data is determined not to be the key frame.
It should be noted that the loop closure detection method in this embodiment may be a conventional method in the autonomous driving field, and is not described further here.
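The keyframe-selection logic described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the pose format (x, y, yaw), the function and field names, and the threshold values are all assumptions, and the patent's weighted-sum fallback for the case where only one threshold is exceeded is simplified here to a plain OR of the position and angle tests.

```python
import math

def select_keyframes(loop_frames, pos_thresh=0.5, ang_thresh=math.radians(10.0)):
    """Select keyframes from loop frames: a loop frame becomes a keyframe when
    its state change (position or heading) relative to the reference loop frame
    (the most recent keyframe) exceeds a preset threshold."""
    keyframes = []
    ref = None  # reference loop frame: the most recently detected keyframe
    for frame in loop_frames:
        x, y, yaw = frame["pose"]
        if ref is None:
            keyframes.append(frame)  # first frame seeds the reference
            ref = frame
            continue
        rx, ry, ryaw = ref["pose"]
        pos_diff = math.hypot(x - rx, y - ry)
        # wrap the angle difference into [-pi, pi] before taking its magnitude
        ang_diff = abs(math.atan2(math.sin(yaw - ryaw), math.cos(yaw - ryaw)))
        if pos_diff > pos_thresh or ang_diff > ang_thresh:
            keyframes.append(frame)
            ref = frame  # the new keyframe becomes the reference loop frame
    return keyframes
```

Because the reference advances to each newly selected keyframe, consecutive near-identical loop frames are skipped, which matches the traversal described above.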
Step 202, generating a global bundle adjustment factor according to the key frame;
in one specific implementation, step 202 may be implemented according to steps 301 through 303 shown in fig. 3. FIG. 3 is a flow diagram of generating a global bundle adjustment factor.
Step 301, extracting a first surfel feature of the keyframe;
In a specific implementation, a first surfel corresponding to the keyframe of point cloud data may be constructed according to that keyframe, and the first surfel feature corresponding to the first surfel may be extracted.
Specifically, the first surfel may be parameterized by the point of the point cloud data closest to the origin of the current representation coordinate system, so that the parameter form is minimal and has a true physical meaning. The current representation coordinate system is a coordinate system constructed based on the plane formed by the point cloud data. The surfel equation of the first surfel is π: n·p - d = 0, where π denotes the plane of the first surfel, n denotes the normal vector of the first surfel, and d denotes the distance between the origin and the point of the point cloud data closest to the origin. In this embodiment, the normal vector of the first surfel and/or the coordinate mean of the point cloud points within the first surfel may be extracted as the first surfel feature of the first surfel.
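As a concrete sketch of this surfel construction, the following Python snippet fits a plane to a set of points by PCA, orients the normal so the origin distance d is non-negative, and returns the normal, d, the coordinate mean, and the closest-to-origin point d·n that parameterizes the surfel. The function name and the PCA-based fitting procedure are illustrative assumptions; the patent only specifies the parameterization by the closest point and the (normal, coordinate mean) features.

```python
import numpy as np

def fit_surfel(points):
    """Fit a surfel (planar surface element) to an Nx3 array of points.
    Returns (n, d, mean, closest): unit normal, origin-to-plane distance,
    point-coordinate mean, and the plane point closest to the origin."""
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)                 # coordinate-mean surfel feature
    cov = np.cov((pts - mean).T)            # 3x3 covariance of the points
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    n = eigvecs[:, 0]                       # normal = least-variance direction
    d = float(n @ mean)                     # signed distance origin -> plane
    if d < 0:                               # orient the normal so that d >= 0
        n, d = -n, -d
    closest = d * n                         # minimal, physically meaningful form
    return n, d, mean, closest
```

The point d·n is exactly the coordinate point of the point cloud closest to the origin of the representation coordinate system, which is why this form is minimal.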
Step 302, detecting whether the first surfel feature is contained in a global surfel feature sequence;
in a specific implementation process, the global bin feature sequence is a sequence obtained after feature extraction based on a global bin corresponding to the current three-dimensional scene. If the first primitive feature includes a first normal vector, determining a vector difference between the first normal vector and each second normal vector; if the at least one vector difference is smaller than or equal to a preset difference value, determining that at least one second facial feature is matched with the first facial feature, and determining that the first facial feature is contained in the global facial feature sequence; if all the vector differences are larger than the preset difference value, determining that each second facial feature is not matched with the first facial feature, and determining that the first facial feature is not contained in the global facial feature sequence. The second normal vector is a normal vector corresponding to each second facial feature in the global facial feature sequence.
In a specific implementation process, if the first primitive feature comprises a first coordinate mean value corresponding to a plurality of primitive point coordinates, determining a distance between the first coordinate mean value and each second coordinate mean value; if at least one distance is smaller than or equal to a second preset distance, determining that at least one second facial feature is matched with the first facial feature, and determining that the first facial feature is contained in the global facial feature sequence; if all the distances are larger than a second preset distance, each second facial feature is determined to be not matched with the first facial feature, and the first facial feature is determined to be not included in the global facial feature sequence. The second coordinate mean value is a coordinate mean value corresponding to each second panel feature.
In a specific implementation process, if the first primitive feature comprises a first normal vector and a first coordinate mean value corresponding to a plurality of primitive point coordinates, determining a vector difference between the first normal vector and each second normal vector; and determining a distance between the first coordinate mean and each second coordinate mean; determining a similarity between the first and each second primitive feature according to the vector difference and the distance; if the at least one similarity is greater than or equal to a preset similarity, determining that at least one second facial feature is matched with the first facial feature, and determining that the first facial feature is contained in the global facial feature sequence; each second primitive feature is determined to not match the first primitive feature and it is determined that the first primitive feature is not included in the global primitive feature sequence.
Specifically, if the vector difference is smaller than or equal to the preset difference value and the distance is smaller than or equal to the second preset distance, a similarity greater than the preset similarity is obtained. If only one of the vector difference and the distance is within its corresponding threshold, the two may be weighted and summed to obtain a second weighted value, which is compared with a second preset weighting threshold; if the second weighted value is greater than the second preset weighting threshold, a similarity greater than the preset similarity is obtained, and if the second weighted value is smaller than or equal to the second preset weighting threshold, a similarity smaller than the preset similarity is obtained.
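The matching of a first surfel feature against the global surfel feature sequence can be sketched as below. The exponential similarity form, the weights, and the threshold value are illustrative assumptions; the patent only fixes the structure that a small normal-vector difference together with a small coordinate-mean distance yields a high similarity.

```python
import numpy as np

def surfel_similarity(n1, m1, n2, m2, w_normal=0.5, w_dist=0.5):
    """Similarity between two surfel features (normal, coordinate mean).
    It is maximal (1.0) for identical features and decays as the normal
    difference and the centroid distance grow."""
    vec_diff = np.linalg.norm(np.asarray(n1) - np.asarray(n2))
    dist = np.linalg.norm(np.asarray(m1) - np.asarray(m2))
    return w_normal * np.exp(-vec_diff) + w_dist * np.exp(-dist)

def contained_in_global(n1, m1, global_feats, sim_thresh=0.8):
    """True if at least one second surfel feature in the global surfel
    feature sequence matches the first surfel feature (n1, m1)."""
    return any(surfel_similarity(n1, m1, n2, m2) >= sim_thresh
               for n2, m2 in global_feats)
```

If no match is found, the caller would insert (n1, m1) into the global sequence, per the method above.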
Step 303, if the first surfel feature is contained in the global surfel feature sequence, establishing a point-to-plane constraint term from the first surfel to the second surfel as the global bundle adjustment factor.
In a specific implementation, if the first surfel feature is contained in the global surfel feature sequence, this indicates that the target element corresponding to the keyframe of point cloud data already exists in the constructed three-dimensional scene. To avoid ghosting between the target element corresponding to the keyframe and the existing target element, a constraint term from the first surfel to the second surfel may be established as the global bundle adjustment factor, so that the poses and surfel features of pairwise point cloud data frames can be adjusted based on the global bundle adjustment factor and ghosting of target elements is avoided as far as possible. The first surfel is the surfel corresponding to the keyframe, and the second surfel is the surfel that contains the first surfel feature among the plurality of surfels corresponding to the global surfel feature sequence.
In a specific implementation, when the first surfel feature matches the second surfel feature, the surfel equation of the first surfel and the surfel equation of the second surfel should in fact be the same, but odometry error introduces a certain distance error. Therefore, in this embodiment, a first preset distance may be set based on this distance error, and the resulting constraint term is that the distance from each point of the first surfel to the plane of the second surfel is smaller than the first preset distance, so that the pose and surfel features of the keyframe of point cloud data can subsequently be adjusted based on this constraint term.
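The point-to-plane constraint term can be written as a residual on the keyframe pose: every point of the first surfel, mapped into the global frame by the pose (R, t), must lie within the first preset distance of the second surfel's plane n2·p = d2. The sketch below is an illustrative formulation; the function names and the 0.05 default for the first preset distance are assumptions.

```python
import numpy as np

def point_to_plane_residuals(first_surfel_pts, n2, d2, R, t):
    """Signed distances from the transformed points of the first surfel to
    the plane of the second surfel (n2 . p = d2, with n2 a unit normal)."""
    pts_global = np.asarray(first_surfel_pts) @ R.T + t  # apply pose (R, t)
    return pts_global @ n2 - d2

def constraint_satisfied(first_surfel_pts, n2, d2, R, t, max_dist=0.05):
    """Constraint term: every point-to-plane distance stays below the first
    preset distance (an assumed 0.05 here)."""
    r = point_to_plane_residuals(first_surfel_pts, n2, d2, R, t)
    return bool(np.all(np.abs(r) < max_dist))
```

In a factor graph, these residuals would be minimized rather than merely checked, pulling the keyframe pose until the first surfel lies on the second.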
In a specific implementation, if the first surfel feature is not contained in the global surfel feature sequence, this indicates that the first surfel corresponding to the first surfel feature may be a new target element; in this case, the first surfel feature may be inserted into the global surfel feature sequence for matching against the surfels corresponding to the keyframes of other point cloud data.
Step 203, constructing a factor graph according to the global bundle adjustment factors;
In a specific implementation, after the global bundle adjustment factor is generated, a factor graph may be constructed based on it. Besides the global bundle adjustment factor, the factor graph may also include other factors, such as a relative pose factor corresponding to a relative pose constraint term and a spatio-temporal pose factor corresponding to a spatio-temporal constraint term. In this embodiment, the factor graph may be constructed using methods from the related art, which are not described here.
Step 204, optimizing the poses of the keyframes based on the factor graph to obtain the optimized keyframe poses.
In a specific implementation, the poses of the keyframes are optimized based on the factor graph to obtain the optimized keyframe poses. Because the factor graph contains the global bundle adjustment factor, the points in the first surfel corresponding to each keyframe of point cloud data fall onto the second surfel; when several keyframes of point cloud data exist, the points in the corresponding surfels of these keyframes fall onto the same surfel under the factor graph, so ghosting can be avoided. The process of optimizing the keyframe poses based on the factor graph may follow optimization methods in the related art, which are not described here.
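To make the optimization step concrete, the toy sketch below optimizes a single translation degree of freedom of a keyframe by Gauss-Newton on the point-to-plane residuals, pulling the keyframe's surfel points onto the global surfel plane. It is a deliberately reduced illustration: a real factor-graph back end jointly optimizes full SE(3) poses of all keyframes over all factors, and the names here are assumptions.

```python
import numpy as np

def optimize_keyframe_height(points, n2, d2, t_init=0.0, iters=10):
    """Adjust a scalar translation t along the unit normal n2 so that the
    keyframe's surfel points satisfy the plane equation n2 . p = d2.
    Residual per point: r_i = n2 . p_i + t - d2, with Jacobian dr/dt = 1,
    so the Gauss-Newton step is t <- t - mean(r)."""
    pts = np.asarray(points, dtype=float)
    t = float(t_init)
    for _ in range(iters):
        r = pts @ n2 + t - d2
        if np.abs(r).max() < 1e-12:  # residuals vanished: points on the plane
            break
        t -= r.mean()                # Gauss-Newton update for a unit Jacobian
    return t
```

Because this sub-problem is linear in t, a single Gauss-Newton step already reaches the minimum; the full pose-graph problem is nonlinear and iterates the same idea over all factors.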
According to the pose optimization method, key frames of point cloud data are extracted from a plurality of point cloud data frames; after a global bundle adjustment factor is generated according to the key frames of the point cloud data, a factor graph is constructed according to the global bundle adjustment factor, so that the pose of the key frames is optimized based on the constructed factor graph and the optimized pose of the key frames of the point cloud data is obtained. In this way, when the three-dimensional scene is subsequently built, the pose and the surface element features of pairwise point cloud data frames are adjusted based on the global bundle adjustment factor, so that the surface elements of a plurality of key frames finally fall onto the same surface element, ghosting can be avoided, and the quality of the three-dimensional scene is improved.
Fig. 4 is a schematic flow chart of an overall implementation of the pose optimization method of the present application. As shown in fig. 4, the overall implementation flow of the pose optimization method of the present application is as follows:
step 401, loop detection and point cloud registration are performed first;
step 402, generating a global bundle adjustment factor and adding it to the factor graph;
step 403, global factor graph optimization.
Fig. 5 is a schematic flow chart of the specific implementation corresponding to fig. 4. As shown in fig. 5, the specific flow of the pose optimization method of the present application is as follows:
step 501, loop detection and point cloud registration are carried out firstly, and loop frames of multi-pass point cloud data are obtained;
step 502, traversing each key frame;
step 503, generating all the surface element features of the current key frame;
step 504, detecting whether the surface element features are associated with the global surface element feature sequence; if yes, go to step 505; if not, go to step 508;
step 505, establishing a point-to-surface constraint term as a global bundle adjustment factor;
step 506, adding the global bundle adjustment factor into the factor graph;
step 507, optimizing a global factor graph;
step 508, inserting the surface element features into the global surface element feature sequence.
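A minimal sketch of the association logic in steps 504 and 508 (for illustration only; reducing each surface element feature to its coordinate mean, and the threshold value, are assumptions):

```python
import numpy as np

SECOND_PRESET_DISTANCE = 0.2  # assumed matching threshold

def associate_or_insert(first_mean, global_means):
    """Step 504: compare the key frame's surface element coordinate mean with
    every entry of the global surface element feature sequence.  Matches yield
    indices for which point-to-surface constraints (step 505) would be built;
    with no match, the feature is inserted as a new global entry (step 508)."""
    matches = [i for i, m in enumerate(global_means)
               if np.linalg.norm(first_mean - m) <= SECOND_PRESET_DISTANCE]
    if not matches:
        global_means.append(first_mean)
    return matches

global_means = [np.zeros(3)]
hit = associate_or_insert(np.array([0.1, 0.0, 0.0]), global_means)   # matches entry 0
miss = associate_or_insert(np.array([1.0, 1.0, 1.0]), global_means)  # inserted as new
```

Claims 4 and 5 describe variants of the same test using the normal-vector difference, or a similarity combining both quantities.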
It should be noted that, although the foregoing embodiments describe the steps in a specific sequential order, it should be understood by those skilled in the art that, in order to achieve the effects of the present application, different steps need not be performed in such an order, and may be performed simultaneously (in parallel) or in other orders, and these variations are within the scope of protection of the present application.
It will be appreciated by those skilled in the art that all or part of the flow of the methods of the above embodiments may also be implemented by instructing relevant hardware by means of a computer program, which may be stored in a computer readable storage medium, and which, when executed by a processor, may implement the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable storage medium may include: any entity or device capable of carrying the computer program code, a medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content included in the computer readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable storage medium does not include electrical carrier signals and telecommunications signals.
Further, the application also provides pose optimizing equipment.
Referring to fig. 6, fig. 6 is a main structural block diagram of the pose optimization apparatus according to one embodiment of the present application. As shown in fig. 6, the pose optimization apparatus in the embodiment of the present application may include a processor 61 and a storage 62.
The storage device 62 may be configured to store a program for performing the pose optimization method of the above method embodiment, and the processor 61 may be configured to execute the program in the storage device 62, including but not limited to the program for performing the pose optimization method of the above method embodiment. For convenience of explanation, only the portions relevant to the embodiments of the present application are shown; for specific technical details that are not disclosed, reference may be made to the method portions of the embodiments of the present application. The pose optimization device may be a control device formed of various electronic devices.
In one implementation, there may be a plurality of storage devices 62 and a plurality of processors 61. The program for performing the pose optimization method of the above method embodiment may be divided into a plurality of sub-programs, each of which may be loaded and executed by a processor 61 to perform different steps of the pose optimization method of the above method embodiment. Specifically, each of the sub-programs may be stored in a different storage device 62, and each processor 61 may be configured to execute the programs in one or more storage devices 62, so that the processors 61 jointly implement the pose optimization method of the above method embodiment, that is, each processor 61 executes different steps of the method to implement it together.
The plurality of processors 61 may be processors disposed on the same device, for example, the device may be a high-performance device composed of a plurality of processors, and the plurality of processors 61 may be processors configured on the high-performance device. The plurality of processors 61 may be processors disposed on different devices, for example, the devices may be a server cluster, and the plurality of processors 61 may be processors on different servers in the server cluster.
Further, the application also provides an intelligent device, which includes the pose optimization device in the above embodiment. The intelligent device may include a driving device, an autonomous vehicle, an intelligent car, a robot, an unmanned aerial vehicle, and the like.
In some embodiments of the present application, the smart device further comprises at least one sensor for sensing information. The sensor is communicatively coupled to any of the types of processors referred to herein. Optionally, the intelligent device further comprises an automatic driving system, and the automatic driving system is used for guiding the intelligent device to drive by itself or assist driving. The processor communicates with the sensors and/or the autopilot system for performing the method of any one of the embodiments described above.
Further, the present application also provides a computer-readable storage medium. In one computer-readable storage medium embodiment according to the present application, the computer-readable storage medium may be configured to store a program that performs the pose optimization method of the above method embodiment, and the program may be loaded and executed by a processor to implement the pose optimization method described above. For convenience of explanation, only the portions relevant to the embodiments of the present application are shown; for specific technical details that are not disclosed, reference may be made to the method portions of the embodiments of the present application. The computer readable storage medium may be a storage device formed of various electronic devices; optionally, in embodiments of the present application, the computer readable storage medium is a non-transitory computer readable storage medium.
Further, it should be understood that, since the modules are merely provided to illustrate the functional units of the apparatus of the present application, the physical devices corresponding to these modules may be the processor itself, or a part of the software, a part of the hardware, or a part of a combination of software and hardware in the processor. Accordingly, the number of individual modules in the figures is merely illustrative.
Those skilled in the art will appreciate that the various modules in the apparatus may be adaptively split or combined. Such splitting or combining of specific modules does not cause the technical solution to deviate from the principles of the present application, and therefore the technical solution after splitting or combining will fall within the protection scope of the present application.
It should be noted that any personal information of users involved in the embodiments of the present application is processed strictly in accordance with the requirements of laws and regulations, following the principles of legality, legitimacy and necessity, for reasonable purposes of the business scenario, and only for personal information actively provided by the user or generated during use of the product/service, on the basis of the user's authorization.
The personal information of the user processed by the application may differ depending on the specific product/service scenario and the specific scenario in which the user uses the product/service, and may involve the user's account information, device information, driving information, vehicle information or other related information. The present application treats the user's personal information and its processing with a high degree of diligence.
The present application attaches great importance to the security of users' personal information, and adopts reasonable and feasible security protection measures that meet industry standards to protect user information and prevent personal information from unauthorized access, disclosure, use, modification, damage or loss.
Thus far, the technical solutions of the present application have been described with reference to the embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present application is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present application, and such modifications and substitutions will be within the scope of the present application.

Claims (9)

1. The pose optimization method is characterized by comprising the following steps of:
extracting key frames of point cloud data from a plurality of point cloud data frames;
extracting a first surface element feature of the key frame;
detecting whether the first surface element feature is contained in a global surface element feature sequence;
if the first surface element feature is contained in the global surface element feature sequence, establishing a constraint term from the first surface element to the second surface element as a global bundle adjustment factor; the global bundle adjustment factor is a factor for adjusting the pose and the surface element features of pairwise point cloud data frames;
constructing a factor graph according to the global bundle adjustment factors;
optimizing the pose of the key frame based on the factor graph to obtain the optimized pose of the key frame;
if the first surface element feature includes a first coordinate mean value corresponding to coordinates of a plurality of surface element points, detecting whether the first surface element feature is contained in a global surface element feature sequence includes:
determining a distance between the first coordinate mean value and each second coordinate mean value; the second coordinate mean value is a coordinate mean value corresponding to each second surface element feature in the global surface element feature sequence;
if at least one distance is smaller than or equal to a second preset distance, determining that at least one second surface element feature matches the first surface element feature, and determining that the first surface element feature is contained in the global surface element feature sequence;
and if all the distances are larger than the second preset distance, determining that each second surface element feature does not match the first surface element feature, and determining that the first surface element feature is not contained in the global surface element feature sequence.
2. The pose optimization method according to claim 1, wherein the constraint term is that a distance from a point of the first surface element to a surface of the second surface element is smaller than a first preset distance; the first surface element is the surface element corresponding to the key frame, and the second surface element is the surface element, among a plurality of surface elements corresponding to the global surface element feature sequence, that contains the first surface element feature.
3. The pose optimization method according to claim 1, further comprising:
and if the first facial feature is not contained in the global facial feature sequence, inserting the first facial feature into the global facial feature sequence.
4. The pose optimization method according to claim 1, wherein if the first surface element feature includes a first normal vector, detecting whether the first surface element feature is contained in a global surface element feature sequence includes:
determining a vector difference between the first normal vector and each second normal vector; the second normal vector is a normal vector corresponding to each second surface element feature in the global surface element feature sequence;
if at least one vector difference is smaller than or equal to a preset difference value, determining that at least one second surface element feature matches the first surface element feature, and determining that the first surface element feature is contained in the global surface element feature sequence;
and if all the vector differences are larger than the preset difference value, determining that each second surface element feature does not match the first surface element feature, and determining that the first surface element feature is not contained in the global surface element feature sequence.
5. The pose optimization method according to claim 1, wherein if the first surface element feature includes a first normal vector and a first coordinate mean value corresponding to coordinates of a plurality of surface element points, detecting whether the first surface element feature is contained in a global surface element feature sequence includes:
determining a vector difference between the first normal vector and each second normal vector, and determining a distance between the first coordinate mean value and each second coordinate mean value; wherein the second normal vector is a normal vector corresponding to each second surface element feature in the global surface element feature sequence, and the second coordinate mean value is the coordinate mean value corresponding to each second surface element feature;
determining a similarity between the first surface element feature and each second surface element feature according to the vector difference and the distance;
if at least one similarity is greater than or equal to a preset similarity, determining that at least one second surface element feature matches the first surface element feature, and determining that the first surface element feature is contained in the global surface element feature sequence;
and if all the similarities are smaller than the preset similarity, determining that each second surface element feature does not match the first surface element feature, and determining that the first surface element feature is not contained in the global surface element feature sequence.
6. The pose optimization method according to claim 1, wherein extracting key frames of point cloud data from a plurality of point cloud data frames comprises:
performing loop detection on the plurality of point cloud data frames to obtain a plurality of loop frames of the point cloud data;
determining a state change value of a preset target for any loop frame of the plurality of loop frames; the state change value is a difference value between a first state value of the preset target in any loop frame and a second state value of the preset target in a reference loop frame;
if the state change value is larger than a preset threshold value, determining that any loop frame is a key frame;
and if the state change value is smaller than or equal to the preset threshold value, determining that any loop frame is not a key frame.
7. A pose optimization device, comprising a processor and a storage device adapted to store a plurality of program codes, wherein the program codes are adapted to be loaded and run by the processor to perform the pose optimization method according to any one of claims 1 to 6.
8. An intelligent device comprising the pose optimization device of claim 7.
9. A computer readable storage medium, characterized in that a plurality of program codes are stored, which are adapted to be loaded and run by a processor to perform the pose optimization method according to any of claims 1 to 6.
CN202311817382.2A 2023-12-27 2023-12-27 Pose optimization method, pose optimization equipment, intelligent equipment and medium Active CN117475092B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311817382.2A CN117475092B (en) 2023-12-27 2023-12-27 Pose optimization method, pose optimization equipment, intelligent equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311817382.2A CN117475092B (en) 2023-12-27 2023-12-27 Pose optimization method, pose optimization equipment, intelligent equipment and medium

Publications (2)

Publication Number Publication Date
CN117475092A CN117475092A (en) 2024-01-30
CN117475092B true CN117475092B (en) 2024-03-19

Family

ID=89631525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311817382.2A Active CN117475092B (en) 2023-12-27 2023-12-27 Pose optimization method, pose optimization equipment, intelligent equipment and medium

Country Status (1)

Country Link
CN (1) CN117475092B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115265523A (en) * 2022-09-27 2022-11-01 泉州装备制造研究所 Robot simultaneous positioning and mapping method, device and readable medium
CN115479598A (en) * 2022-08-23 2022-12-16 长春工业大学 Positioning and mapping method based on multi-sensor fusion and tight coupling system
CN115507842A (en) * 2022-10-12 2022-12-23 中国电子科技集团公司第五十四研究所 Surface element-based lightweight unmanned aerial vehicle map construction method
CN115561731A (en) * 2022-12-05 2023-01-03 安徽蔚来智驾科技有限公司 Pose optimization method, point cloud map establishment method, computer device and medium
WO2023280274A1 (en) * 2021-07-07 2023-01-12 The Hong Kong University Of Science And Technology Geometric structure aided visual localization method and system
CN115638787A (en) * 2022-12-23 2023-01-24 安徽蔚来智驾科技有限公司 Digital map generation method, computer readable storage medium and electronic device
KR20230029120A (en) * 2021-08-23 2023-03-03 연세대학교 산학협력단 Method and apparatus for estimating location of a moving object and generating map using fusion of point feature and surfel feature
CN116148808A (en) * 2023-04-04 2023-05-23 江苏集萃清联智控科技有限公司 Automatic driving laser repositioning method and system based on point cloud descriptor
CN116839600A (en) * 2023-06-15 2023-10-03 南京航空航天大学 Visual mapping navigation positioning method based on lightweight point cloud map
CN117128950A (en) * 2023-08-29 2023-11-28 中国第一汽车股份有限公司 Point cloud map construction method and device, electronic equipment and storage medium
CN117218350A (en) * 2023-09-19 2023-12-12 中南林业科技大学 SLAM implementation method and system based on solid-state radar
CN117213470A (en) * 2023-11-07 2023-12-12 武汉大学 Multi-machine fragment map aggregation updating method and system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An ElasticFusion indoor 3D reconstruction algorithm with an improved matching point pair selection strategy; Wang Weiqi et al.; Geomatics and Information Science of Wuhan University; 2020-09-05; Vol. 45, No. 09, pp. 1469-1477 *
Application of LDSO-based robot visual localization and dense mapping; Li Kuilin, Wei Wu, Gao Yong, Li Yanjie, Wang Dongliang; Microelectronics & Computer; 2020-02-05; Vol. 37, No. 02, pp. 51-56 *

Also Published As

Publication number Publication date
CN117475092A (en) 2024-01-30

Similar Documents

Publication Publication Date Title
CN112800825B (en) Key point-based association method, system and medium
CN115965657B (en) Target tracking method, electronic device, storage medium and vehicle
CN116758518B (en) Environment sensing method, computer device, computer-readable storage medium and vehicle
CN112434566A (en) Passenger flow statistical method and device, electronic equipment and storage medium
CN115965943A (en) Target detection method, device, driving device, and medium
CN117475092B (en) Pose optimization method, pose optimization equipment, intelligent equipment and medium
CN117079309A (en) ReID model training method, reID pedestrian recognition method, device and medium
CN113869163B (en) Target tracking method and device, electronic equipment and storage medium
CN116129378A (en) Lane line detection method, device, equipment, vehicle and medium
CN116012609A (en) Multi-target tracking method, device, electronic equipment and medium for looking around fish eyes
CN115205820A (en) Object association method, computer device, computer-readable storage medium, and vehicle
CN117173692B (en) 3D target detection method, electronic device, medium and driving device
CN117197631B (en) Multi-mode sensor fusion sensing method, computer equipment, medium and vehicle
CN115984803B (en) Data processing method, device, driving device and medium
CN117765535A (en) Point cloud data labeling method, equipment, intelligent equipment and medium
CN113989694B (en) Target tracking method and device, electronic equipment and storage medium
CN117173693A (en) 3D target detection method, electronic device, medium and driving device
CN116883960A (en) Target detection method, device, driving device, and medium
CN116452959A (en) Scene recognition method, scene data acquisition method, device, medium and vehicle
CN116993831A (en) Vehicle detection method, computer device, storage medium and vehicle
CN117705105A (en) Point cloud positioning method, computer readable storage medium and intelligent device
CN117671628A (en) Target tracking method, device, intelligent device and medium
CN117765030A (en) Target tracking method, storage medium and vehicle
CN115984801A (en) Point cloud target detection method, computer equipment, storage medium and vehicle
CN117689690A (en) Target tracking and reconstructing method, storage medium and intelligent device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant