CN113129361B - Pose determining method and device for movable equipment - Google Patents

Pose determining method and device for movable equipment

Info

Publication number
CN113129361B
Authority
CN
China
Prior art keywords
determining
pose information
edge
pose
edge contour
Prior art date
Legal status
Active
Application number
CN202010039297.8A
Other languages
Chinese (zh)
Other versions
CN113129361A (en)
Inventor
杨帅
Current Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Original Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority to CN202010039297.8A
Publication of CN113129361A
Application granted
Publication of CN113129361B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The disclosure provides a pose determining method and device for a movable device. The method includes: determining a first edge contour of a reference object in a first top view; determining a corresponding second top view according to a first image acquired by the movable device at the current time, and determining a second edge contour of the reference object in the second top view; determining at least two pieces of second pose information corresponding to the movable device at the current time according to first pose information corresponding to the movable device at a previous time before the current time; determining a third edge contour of the reference object corresponding to each piece of second pose information according to the second edge contour and that piece of second pose information; determining the deviation distance between the first edge contour and each third edge contour; and determining, as third pose information, the second pose information corresponding to a third edge contour whose deviation distance satisfies a first preset condition.

Description

Pose determining method and device for movable equipment
Technical Field
The disclosure relates to the technical field of image analysis, and in particular to a pose determining method and device for a movable device.
Background
The positioning of the mobile device in the map may be considered as a process of matching the map with the observation information of the sensor carried by the mobile device to determine the pose of the mobile device on the map.
To ensure positioning accuracy, the sensor used to acquire the observation information is usually a lidar. However, lidar is expensive, which hinders large-scale adoption. If a relatively low-cost camera is used as the sensor instead, the observation information lacks depth information, and the resulting accuracy is difficult to bring up to requirements.
Disclosure of Invention
The present disclosure has been made to solve the above technical problems. Embodiments of the disclosure provide a pose determining method and device for a movable device, which determine the pose of the movable device by combining an image acquired by a camera with a top-down view captured from high altitude.
According to a first aspect of the present disclosure, there is provided a pose determining method of a mobile device, including:
determining a first edge contour of the reference object in the first top view;
determining a corresponding second top view according to a first image acquired by the movable equipment at the current moment, and determining a second edge contour of the reference object in the second top view;
determining at least two pieces of second pose information corresponding to the movable equipment at the current moment according to the first pose information corresponding to the movable equipment at the previous moment before the current moment;
determining a third edge contour of the reference object corresponding to each piece of second pose information according to the second edge contour and each piece of second pose information;
determining the deviation distance between the first edge contour and each third edge contour;
and determining, as third pose information, the second pose information corresponding to a third edge contour whose deviation distance satisfies a first preset condition.
According to a second aspect of the present disclosure, there is provided a pose determining apparatus of a movable device, comprising:
a first edge contour determination module for determining a first edge contour of the reference object in the first top view;
the second edge contour determining module is used for determining a corresponding second top view according to a first image acquired by the movable equipment at the current moment and determining a second edge contour of the reference object in the second top view;
a second pose information determining module, configured to determine at least two pieces of second pose information corresponding to the mobile device at the current moment according to first pose information corresponding to a previous moment of the mobile device before the current moment;
a third edge contour determining module, configured to determine a third edge contour of the reference object corresponding to each piece of second pose information according to the second edge contour and each piece of second pose information;
a deviation distance determining module, configured to determine a deviation distance between the first edge contour and each third edge contour;
and a third pose information determining module, configured to determine, as third pose information, the second pose information corresponding to a third edge contour whose deviation distance satisfies the first preset condition.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the pose determining method of the movable apparatus described in the first aspect described above.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the pose determining method of the mobile device according to the first aspect.
Compared with the prior art, the pose determining method and device for a movable device provided by the disclosure determine the first edge contour of the reference object from a first top view captured top-down from high altitude; project the second edge contour obtained from the camera image into the first top view using each piece of predicted second pose information of the movable device, obtaining a third edge contour of the reference object; and determine the most accurate piece of predicted second pose information by calculating the deviation distance between the first edge contour and each third edge contour. The movable device is thus positioned using an ordinary camera combined with a high-altitude top-down image, which guarantees positioning accuracy while eliminating the need for a lidar, thereby reducing positioning cost.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing embodiments thereof in more detail with reference to the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure, without limitation to the disclosure. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is a schematic structural diagram of a pose determining system of a movable device according to an exemplary embodiment of the present disclosure;
Fig. 2 is a flowchart of a pose determining method of a movable device according to an exemplary embodiment of the present disclosure;
Fig. 3 is a flowchart of a pose determining method of a movable device according to another exemplary embodiment of the present disclosure;
Fig. 4 is a schematic distribution diagram of second pose information in a pose determining method of a movable device according to an exemplary embodiment of the present disclosure;
Fig. 5 is a flowchart of a pose determining method of a movable device according to another exemplary embodiment of the present disclosure;
Fig. 6 is a schematic diagram of reference points in a pose determining method of a movable device according to an exemplary embodiment of the present disclosure;
Fig. 7 is a schematic structural diagram of a pose determining apparatus of a movable device according to an exemplary embodiment of the present disclosure;
Fig. 8 is a schematic structural diagram of a second edge contour determining module in the pose determining apparatus of a movable device according to an exemplary embodiment of the present disclosure;
Fig. 9 is a schematic structural diagram of a second pose information determining module in a pose determining apparatus of a movable device according to an exemplary embodiment of the present disclosure;
Fig. 10 is a schematic structural diagram of a deviation distance determining module in a pose determining apparatus of a movable device according to an exemplary embodiment of the present disclosure;
Fig. 11 is a schematic structural diagram of a third pose information determining module in a pose determining apparatus of a movable device according to an exemplary embodiment of the present disclosure;
Fig. 12 is a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present disclosure and not all of the embodiments of the present disclosure, and that the present disclosure is not limited by the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
It will be appreciated by those of skill in the art that the terms "first," "second," etc. in embodiments of the present disclosure are used merely to distinguish between different steps, devices or modules, etc., and do not represent any particular technical meaning nor necessarily logical order between them.
It should also be understood that in embodiments of the present disclosure, "plurality" may refer to two or more, and "at least one" may refer to one, two or more.
It should also be appreciated that any component, data, or structure referred to in the presently disclosed embodiments may be generally understood as one or more without explicit limitation or the contrary in the context.
In addition, the term "and/or" in this disclosure merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" in this disclosure generally indicates that the associated objects before and after it are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and that the same or similar features may be referred to each other, and for brevity, will not be described in detail.
Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
Embodiments of the present disclosure may be applicable to electronic devices such as terminal devices, computer systems, servers, etc., which may operate with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with the terminal device, computer system, server, or other electronic device include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing technology environments that include any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc., that perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computing system storage media including memory storage devices.
Summary of the application
In the prior art, a lidar is generally mounted on the movable device for observation, and the resulting observation information (such as a point cloud) is matched against a map to determine the pose of the movable device on the map. The accuracy of this pose determination approach meets requirements, but lidar is too expensive for large-scale adoption, so its limitations are obvious.
As an alternative, a low-cost camera may be used to obtain the observation information (e.g., an image) and thereby determine the pose of the movable device. However, the key difference between a camera (especially an ordinary monocular camera) and a lidar is that the camera's observation information lacks depth, so the accuracy of the determined pose is poor and requirements are difficult to meet.
Other solutions determine the pose of the movable device using a satellite-captured top view (i.e., a satellite map), which can in theory satisfy the requirements. In practice, however, the use of satellite top views may be restricted by certain regulatory policies, and the satellite top views available for commercial or civilian use are typically of low accuracy, again making the pose determination insufficiently accurate. Such schemes therefore also have significant limitations.
Exemplary System
In view of the above problems, the pose determining system of a movable device according to the present disclosure combines an image acquired by a camera with a high-altitude top-down view (such as a satellite map) to determine the pose of the movable device. The structure of the system is shown in fig. 1.
In this system, a first edge contour of a reference object in the world coordinate system is determined from a first top view captured top-down from high altitude (various elements appearing in the first top view, such as buildings and pavement markers, can serve as the reference object). A second edge contour of the reference object in the camera coordinate system is determined from a first image captured by the movable device.
Several pieces of possible second pose information of the movable device at the current time are then predicted by particle scattering, based on the first pose information of the movable device determined at the previous positioning. Using each piece of second pose information, the second edge contour is converted from the camera coordinate system to the world coordinate system, i.e., projected into the first top view, to obtain a third edge contour.
It will be appreciated that, if the position of the first edge contour is taken to be accurate, then the closer a piece of second pose information is to the actual pose of the movable device at the current time, the more accurate the position of the corresponding third edge contour should be, i.e., the better it fits the first edge contour. The system determines the relatively most accurate piece of second pose information by calculating the deviation distance between the first edge contour and each third edge contour, thereby determining the pose of the movable device.
Exemplary method
Fig. 2 is a flowchart illustrating a method for determining a pose of a mobile device according to an exemplary embodiment of the present disclosure. The present embodiment may be applied to an electronic device, as shown in fig. 2, and includes the following steps:
step 201, determining a first edge contour of a reference object in a first top view.
Typically, the first top view may be a satellite map captured by a satellite, or a high-altitude top-down image captured by aerial photography or similar means. The first top view accurately reflects the actual positions of various elements on the ground. Ground elements may include man-made structures such as buildings, pavement markers, roads, railways, and bridges, as well as natural features such as mountains, rivers, forests, and farmland. In theory, any ground element can serve as the reference object in this step; in practical applications, pavement markers or buildings are usually chosen.
In this step, image analysis may be performed on the first top view to extract the contour of the reference object, yielding the first edge contour of the reference object in the first top view. Contour extraction can be implemented with existing techniques such as image semantic segmentation or image instance segmentation; of course, it may also be implemented with any other technique of the same or similar function, which is not limited in this embodiment.
The first edge contour can essentially be expressed as a series of coordinates in the world coordinate system (i.e., the coordinate system of the first top view). Its significance is to capture the actual position of the reference object.
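As an illustrative sketch only (the patent prescribes neither a library nor an algorithm), the extraction might look as follows, assuming a binary segmentation mask of the first top view is already available and that the top view has a simple affine pixel-to-world mapping; all names and parameters here are hypothetical:

```python
import cv2
import numpy as np

def extract_edge_contour(mask: np.ndarray) -> np.ndarray:
    """Extract the outline of the reference object from a binary mask.

    mask: HxW uint8 array, 255 where the segmentation model marked the
    reference object (the segmentation step itself is out of scope here).
    Returns an N x 2 array of pixel coordinates along the edge contour.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.empty((0, 2))
    largest = max(contours, key=cv2.contourArea)  # keep the dominant outline
    return largest.reshape(-1, 2)

def pixels_to_world(pts_px, origin_xy, meters_per_pixel):
    """Map top-view pixel coordinates to world coordinates, assuming the
    top view is axis-aligned with the world frame (ignoring axis flips)."""
    return np.asarray(origin_xy) + np.asarray(pts_px) * meters_per_pixel
```

The output of pixels_to_world would then be the coordinate representation of the first edge contour in the world coordinate system.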
Step 202, determining a corresponding second top view according to a first image acquired by the mobile device at the current moment, and determining a second edge contour of the reference object in the second top view.
The current time is the time for which the movable device is being positioned. A first image is captured at the current time using a camera carried on the movable device. Obviously, the space covered by the first image depends on the actual pose of the camera at the current time, i.e., the actual pose of the movable device at the current time. Only an ordinary monocular camera is needed to obtain the first image in this step; no lidar is required.
It is understood that the space covered by the first image is part of the space covered by the first top view, and the reference object in the first top view is likewise included in the first image. The difference is that the first image is captured at a near-horizontal angle, while the first top view is captured top-down. Therefore, to facilitate the subsequent comparison of reference-object positions, this step further converts the first image from a near-horizontal view to a top-down view, i.e., determines the second top view. This embodiment does not limit the conversion method; any technical means achieving the same or similar effect can be incorporated into the overall scheme of this embodiment. After the second top view is determined, the second edge contour of the reference object can be determined from it. The image analysis process for determining the second edge contour may be the same as that for the first edge contour and is not repeated here.
The reference frame of the second top view obtained in this step is the camera coordinate system. Correspondingly, the second edge contour can also be expressed as a series of coordinates in the camera coordinate system.
Step 203, determining at least two pieces of second pose information corresponding to the mobile device at the current moment according to the first pose information corresponding to the mobile device at the previous moment before the current moment.
The previous time may be the time at which the movable device was last positioned, and the first pose information may be the pose information of the movable device determined at that last positioning. The movable device may move and its pose may change in the interval between the previous time and the current time. However, since this interval is generally small, the actual pose of the movable device at the current time can be approximately "predicted" from the first pose information, the time difference between the current time and the previous time, and the motion parameters of the movable device (such as moving speed, moving direction, and moving distance).
In this embodiment, at least two pieces of possible pose information of the movable device at the current time, i.e., the second pose information, may be predicted by the technique known in the art as "particle scattering". Typically there are many pieces of second pose information. From a probabilistic point of view, some piece of second pose information will coincide with, or differ only by a sufficiently small error from, the actual pose of the movable device at the current time; that piece of second pose information can be regarded as equivalent to the actual pose. In this embodiment, pose determination of the movable device is realized by identifying that piece of second pose information.
Step 204, determining a third edge contour of the reference object corresponding to each piece of second pose information according to the second edge contour and each piece of second pose information.
Combined with the second pose information, the second edge contour can be converted from the camera coordinate system to the world coordinate system; that is, the second edge contour is projected into the first top view, yielding a third edge contour of the reference object. The third edge contour can likewise be expressed as a series of coordinates in the world coordinate system. This embodiment does not limit the conversion algorithm; any relevant algorithm can be incorporated into the overall scheme of this embodiment. Of course, since the pieces of second pose information differ, the positions at which the corresponding third edge contours fall on the first top view after projection will also differ.
Step 205, determining the deviation distance between the first edge contour and each third edge contour.
In theory, if the second edge contour were projected using the actual pose of the movable device at the current time, the corresponding third edge contour would coincide exactly with the actual position of the reference object, i.e., with the first edge contour. In practice, since each piece of second pose information comes from a "prediction" and is not absolutely accurate, there is usually some deviation between its corresponding third edge contour and the first edge contour; but the closer the second pose information is to the actual pose of the movable device, the smaller the deviation. In this embodiment, the piece of second pose information most consistent with the actual pose is determined by calculating the deviation distance between the first edge contour and each third edge contour.
This embodiment does not limit the method of calculating the deviation distance. For example, the deviation distance may be derived from the intersection-over-union (IoU) between the first edge contour and the third edge contour; alternatively, reference points may be selected on the first edge contour and/or the third edge contour, and the deviation distance determined from point-to-point distances or perpendicular point-to-contour distances. Of course, any algorithm capable of achieving a similar function can equally be incorporated into the overall scheme of this embodiment.
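For the IoU variant, a minimal sketch (assuming both contours form simple polygons; shapely is one possible choice, not a library named by the patent) might be:

```python
from shapely.geometry import Polygon

def iou_deviation(first_contour, third_contour):
    """Turn the IoU of two contour polygons into a distance-like value:
    0 means perfect overlap, 1 means no overlap at all."""
    p1, p3 = Polygon(first_contour), Polygon(third_contour)
    union = p1.union(p3).area
    if union == 0:
        return 1.0
    return 1.0 - p1.intersection(p3).area / union
```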
Step 206, determining, as third pose information, the second pose information corresponding to a third edge contour whose deviation distance satisfies the first preset condition.
After the deviation distance corresponding to each piece of second pose information has been calculated, the deviation distances can be compared and analyzed, i.e., it is judged whether a given deviation distance satisfies the first preset condition, thereby determining the pose of the movable device.
For example, the first preset condition may be "the deviation distance is minimum". That is, the second pose information corresponding to the minimum deviation distance is considered to be the most consistent with the actual pose of the movable device, i.e., it is taken to correspond to the actual pose, and is determined as the third pose information.
Alternatively, the first preset condition may be "the deviation distance is minimum and less than a preset threshold". This condition not only identifies the second pose information most consistent with the actual pose of the movable device, but also bounds the error range through the threshold, further guaranteeing positioning accuracy.
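Both variants of the first preset condition reduce to a small selection step; a sketch, with all names illustrative and the deviation distances assumed already computed:

```python
import numpy as np

def select_pose(second_poses, deviations, threshold=None):
    """Pick the second pose information with the minimum deviation distance;
    if a threshold is given and even the minimum exceeds it, report failure
    (e.g., so the particles can be re-scattered)."""
    i = int(np.argmin(deviations))
    if threshold is not None and deviations[i] >= threshold:
        return None
    return second_poses[i]
```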
The third pose information is the pose of the movable device at the current time as determined by this embodiment. Pose determination of the movable device is thus accomplished.
According to the above technical solution, the beneficial effects of this embodiment are as follows: the first edge contour of the reference object is determined from a first top view captured top-down from high altitude; the second edge contour obtained from the camera image is projected into the first top view using each piece of predicted second pose information of the movable device, yielding a third edge contour of the reference object; and the most accurate piece of predicted second pose information is determined by calculating the deviation distance between the first edge contour and each third edge contour. The movable device is thus positioned using an ordinary camera combined with a high-altitude top-down image, guaranteeing positioning accuracy while eliminating the need for a lidar and thereby reducing positioning cost.
Fig. 2 shows only the basic embodiment of the method disclosed herein; by optimizing and extending it, other preferred embodiments of the method can be obtained.
Fig. 3 is a schematic flowchart of a pose determining method of a movable device according to another exemplary embodiment of the present disclosure. This embodiment may be applied to an electronic device. On the basis of the embodiment shown in fig. 2, this embodiment describes the determination process of the third edge contour in detail. In this embodiment, the method specifically includes the following steps:
step 301, determining a first edge contour of a reference object in a first top view.
The first edge contour can essentially be expressed as a series of coordinates in the world coordinate system. In this embodiment, the first edge contour is therefore represented as the coordinate set Pw'.
Step 302, performing inverse perspective transformation on the first image to determine a second top view.
Because the first image is captured at a near-horizontal angle, it needs to be converted in this step from a near-horizontal view into a top-down view, i.e., the second top view is determined. In this embodiment, this conversion is implemented by an inverse perspective transformation. The inverse perspective transformation is well known in the art and is not described in detail here.
It should be noted that the above inverse perspective transformation does not need to incorporate the actual pose of the movable device at the current time, and the reference frame of the converted second top view is the camera coordinate system.
Step 303, determining a second edge profile of the reference object in the second top view.
In the first image, the pixel coordinates of the edge contour of the reference object, denoted as the coordinate set p, can be determined by image analysis techniques. Through the above inverse perspective transformation, the pixel coordinates p are simultaneously converted into the coordinate set Pc in the camera coordinate system. The coordinate set Pc is the coordinate representation of the second edge contour.
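The patent leaves the inverse perspective transformation itself unspecified. The sketch below shows one standard way an edge pixel p could be back-projected onto a flat ground plane to obtain a point of Pc, under assumptions not stated in the patent: calibrated pinhole intrinsics K, a known camera rotation R to a level frame whose z-axis points toward the ground, and a known camera height h above the ground:

```python
import numpy as np

def pixel_to_camera_ground(p, K, R, h):
    """Back-project pixel p = (u, v) onto the ground plane, returning its
    3D coordinates in a camera-centered (level) frame."""
    ray = np.linalg.inv(K) @ np.array([p[0], p[1], 1.0])  # viewing ray in camera frame
    ray = R @ ray                                          # rotate ray into the level frame
    s = h / ray[2]        # scale at which the ray reaches the ground plane z = h
    return s * ray        # ground point, camera-centered coordinates
```

Applying this to every edge pixel in p yields the coordinate set Pc of the second edge contour.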
Step 304, obtaining motion parameters of the movable device between the current moment and the previous moment.
In this embodiment, assume that the previous time is t1 and the current time is t2. The motion parameters may be odometry, which can be calculated from the tick counts fed back by the wheel encoders of the movable device. The odometry reflects the distance and direction the movable device moved between time t1 and time t2, and can be expressed as [Δx, Δy, Δθ] with the world coordinate system as its reference frame, where Δx is the moving distance of the movable device along the x-axis, Δy is the moving distance along the y-axis, and Δθ is the steering angle of the movable device.
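The patent only states that the odometry is computed from the tick counts fed back by the wheel encoders. As an illustration, assuming a differential-drive platform (an assumption, not something the patent specifies), the increment could be computed roughly as follows:

```python
import math

def odometry_increment(ticks_l, ticks_r, ticks_per_meter, wheel_base, theta0):
    """Convert left/right wheel-encoder ticks accumulated between t1 and t2
    into a world-frame odometry increment [dx, dy, dtheta]."""
    dl = ticks_l / ticks_per_meter          # left wheel travel (m)
    dr = ticks_r / ticks_per_meter          # right wheel travel (m)
    dc = (dl + dr) / 2.0                    # travel of the platform center
    dtheta = (dr - dl) / wheel_base         # change of heading (rad)
    dx = dc * math.cos(theta0 + dtheta / 2.0)  # midpoint-heading integration
    dy = dc * math.sin(theta0 + dtheta / 2.0)
    return dx, dy, dtheta
```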
Step 305, performing random conversion on the first pose information according to the motion parameters to obtain at least two pieces of second pose information corresponding to the mobile equipment at the current moment.
Based on the first pose information, the time difference between the current time and the previous time, and the motion parameters of the mobile device, an approximate "prediction" of the actual pose of the mobile device at the current time may be made.
This is the so-called "particle scattering".
Assume that the pose of the movable device at time t1, i.e., the first pose information, is represented as q1 = [x0, y0, θ0], with the world coordinate system as its reference frame, where x0 is the coordinate of the movable device on the x-axis at time t1, y0 is its coordinate on the y-axis at time t1, and θ0 is its heading angle at time t1.
Because noise is present in the prediction process, the above random conversion treats the motion parameters [Δx, Δy, Δθ] as a Gaussian distribution with mean [Δx, Δy, Δθ] and variance [σx, σy, σθ]. Of course, other distributions may be used in other cases, and this embodiment is not limited in this respect. A sample [Δx', Δy', Δθ'] is then drawn at random from the Gaussian distribution, and the pose transformation q2 = [x0, y0, θ0] + [Δx', Δy', Δθ'] is calculated according to the first pose information q1 and the sampled value. q2 is one piece of second pose information.
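A sketch of this sampling step for n particles at once, assuming the Gaussian noise model described above; the noise scales sigma are tuning parameters the patent does not specify:

```python
import numpy as np

def scatter_particles(q1, odom, sigma, n_particles=100, rng=None):
    """q1: first pose information [x0, y0, theta0]; odom: motion parameters
    [dx, dy, dtheta]; sigma: noise standard deviations [sx, sy, stheta].
    Returns an n_particles x 3 array, one piece of second pose information
    q2 = q1 + [dx', dy', dtheta'] per row."""
    rng = rng or np.random.default_rng()
    q1, odom, sigma = (np.asarray(a, dtype=float) for a in (q1, odom, sigma))
    samples = rng.normal(loc=odom, scale=sigma, size=(n_particles, 3))
    return q1 + samples
```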
Refer to fig. 4. The solid rectangle is the position of the movable device at time t1 and corresponds to the first pose information. The dashed rectangles are several predicted positions of the movable device at time t2, each corresponding to one piece of second pose information.
In this embodiment, at least two pieces of second pose information can be obtained in this way. In addition, through some conventional conversion, each piece of second pose information can be expressed as Twc = [R, t], where R represents the rotation matrix of the camera and t represents the translation vector of the camera.
Step 306, projecting the second edge contour into the first top view according to the second pose information to determine a third edge contour corresponding to the second pose information.
After determining the second pose information, the projection process may refer to the following formula:
Pw = Pc × Twc, where Pw is the coordinate set of the third edge contour in the world coordinate system, Pc is the coordinate set of the second edge contour in the camera coordinate system, and Twc is the second pose information. Obviously, projecting with different pieces of second pose information yields different third edge contours.
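In code, projecting one particle's contour could look like the following sketch, written in the column-vector form Pw = R · Pc + t, which is equivalent to the patent's row form Pw = Pc × Twc:

```python
import numpy as np

def project_contour(Pc, R, t):
    """Pc: N x 3 contour points in the camera coordinate system; R: 3 x 3
    rotation; t: translation vector. Returns the N x 3 third edge contour
    in the world coordinate system."""
    return Pc @ np.asarray(R).T + np.asarray(t)
```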
Step 307, determining the deviation distance between the first edge contour and each third edge contour.
Step 308, determining, as third pose information, the second pose information corresponding to a third edge contour whose deviation distance satisfies the first preset condition.
In this embodiment, steps 307 to 308 correspond to the embodiment shown in fig. 2, and the description thereof will not be repeated.
Fig. 5 is a flowchart illustrating a pose determining method of a movable device according to another exemplary embodiment of the present disclosure. This embodiment may be applied to an electronic device. On the basis of the embodiments shown in fig. 2 to 3, this embodiment describes the calculation of the deviation distance in detail. In this embodiment, the method specifically includes the following steps:
step 501, determining a first edge contour of a reference object in a first top view.
Step 502, determining a corresponding second top view according to a first image acquired by the mobile device at the current moment, and determining a second edge contour of the reference object in the second top view.
Step 503, determining at least two pieces of second pose information corresponding to the mobile device at the current moment according to the first pose information corresponding to the mobile device at the previous moment before the current moment.
Step 504, determining a third edge contour of the reference object corresponding to each second pose information according to the second edge contour and each second pose information.
In this embodiment, the contents of steps 501 to 504 are identical to those of the embodiment shown in fig. 2 to 3, and the description thereof will not be repeated here.
Step 505, determining reference coordinates of at least one reference point from the third edge contour.
In this embodiment, the deviation distance is determined by selecting reference points and calculating point-to-contour distances. Specifically, at least one reference point may be determined on each third edge contour. Assuming that in this embodiment the reference object is a pavement marker that is rectangular in the top view, the four corners of the rectangle and the midpoints of its four sides may be taken as reference points, i.e., eight reference points in total.
Since the third edge contour can likewise be regarded as a set of coordinates in the world coordinate system, the specific coordinates of each reference point in the world coordinate system can be determined from the third edge contour.
Step 506, determining the deviation distance between the first edge contour and the third edge contour according to the first edge contour and the coordinates of each reference point.
After the reference points are determined, the distance from each reference point to the first edge contour can be calculated, and the deviation distance is then determined from the sum of the distances from the coordinates of the reference points to the first edge contour.
As shown in fig. 6, the solid rectangle represents the first edge contour, the dashed rectangle represents the third edge contour, and the solid dots represent the reference points on the third edge contour. For the reference points at the four corners, i.e., points A, B, C and D, the distances to the corresponding corners of the first edge contour can be calculated. For the reference points at the midpoints of the four sides, i.e., points E, F, G and H, the perpendicular distances to the corresponding sides of the first edge contour can be calculated.
In this embodiment, the sum of the distances from the reference points to the first edge contour may be used directly as the deviation distance. Of course, the deviation distance can also be determined by further applying a weighted calculation as required. In this embodiment, the deviation distance is denoted d.
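A sketch of this computation, approximating the corner-to-corner and midpoint-to-side distances of fig. 6 by the distance from each reference point to the nearest point of a densely sampled first edge contour (an approximation; the patent's exact point pairing could be substituted):

```python
import numpy as np

def deviation_distance(ref_points, first_contour):
    """ref_points: K x 2 reference points on the third edge contour;
    first_contour: N x 2 densely sampled points of the first edge contour.
    Returns d, the sum of nearest-point distances."""
    ref = np.asarray(ref_points, dtype=float)[:, None, :]     # K x 1 x 2
    cnt = np.asarray(first_contour, dtype=float)[None, :, :]  # 1 x N x 2
    nearest = np.linalg.norm(ref - cnt, axis=2).min(axis=1)   # K distances
    return float(nearest.sum())
```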
Step 507, determining the matching probability between the first edge contour and each third edge contour according to the deviation distance between the first edge contour and each third edge contour.
The positions of the third edge contours corresponding to the pieces of second pose information differ, so the corresponding deviation distances also differ. In this embodiment, the matching probability between each third edge contour and the first edge contour is further calculated from the corresponding deviation distance.
The matching probability can be calculated with the following formula: xn = (1/c) exp(-dn), where xn is the matching probability of the third edge contour corresponding to the n-th piece of second pose information, c is a normalization parameter, and dn is the deviation distance of the third edge contour corresponding to the n-th piece of second pose information. With this formula, the matching probability of the third edge contour corresponding to each piece of second pose information can be calculated, and the matching probabilities sum to exactly 100%.
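Since c normalizes the probabilities so that they sum to 1, the formula amounts to a softmax over the negated deviation distances; a sketch:

```python
import numpy as np

def matching_probabilities(deviations):
    """deviations: array of d1..dn, one per piece of second pose information.
    Returns x1..xn with xn proportional to exp(-dn) and sum(x) == 1."""
    d = np.asarray(deviations, dtype=float)
    w = np.exp(-(d - d.min()))  # shift by the minimum for numerical stability
    return w / w.sum()
```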
Step 508, determining, as third pose information, the second pose information corresponding to a third edge contour whose matching probability satisfies a second preset condition.
In this embodiment, assume that the second preset condition is xn > 80%. The second pose information corresponding to a matching probability satisfying this condition is determined as the third pose information. The third pose information is the pose of the movable device at the current time as determined by this embodiment. Pose determination of the movable device is thus accomplished.
In addition, if the special case occurs in which no matching probability satisfies the second preset condition, this indicates that none of the second pose information obtained by the "particle scattering" prediction is accurate. In that case, the particle scattering can be carried out again, or carried out again after adjusting its algorithm and parameters.
Exemplary apparatus
Fig. 7 is a schematic structural view of a pose determining apparatus of a mobile device according to an exemplary embodiment of the present disclosure. The apparatus of this embodiment is a physical apparatus for performing the methods of fig. 2 to 5. The technical solution is essentially identical to the above embodiment, and the corresponding description in the above embodiment is also applicable to this embodiment. The device in this embodiment includes:
a first edge contour determination module 701 is configured to determine a first edge contour of the reference object in the first top view.
The second edge contour determining module 702 is configured to determine a corresponding second top view according to the first image acquired by the mobile device at the current time, and determine a second edge contour of the reference object in the second top view.
The second pose information determining module 703 is configured to determine at least two pieces of second pose information corresponding to the mobile device at the current time according to the first pose information corresponding to the mobile device at a previous time before the current time.
And a third edge contour determination module 704, configured to determine a third edge contour of the reference object corresponding to each second pose information according to the second edge contour and each second pose information.
The deviation distance determining module 705 is configured to determine the deviation distance between the first edge contour and each third edge contour.
The third pose information determining module 706 is configured to determine, as third pose information, second pose information corresponding to a third edge contour whose deviation distance satisfies the first preset condition.
Fig. 8 is a schematic structural diagram of a second edge contour determining module 702 in a pose determining apparatus of a mobile device according to another exemplary embodiment of the present disclosure. As shown in fig. 8, in an exemplary embodiment, the second edge profile determination module 702 includes:
an inverse transformation unit 811 for performing inverse perspective transformation on the first image to determine a second top view.
A second edge contour determining unit 812 for determining a second edge contour of the reference object in the second top view.
Fig. 9 is a schematic structural diagram of a second pose information determining module 703 in a pose determining apparatus of a mobile device according to another exemplary embodiment of the present disclosure. As shown in fig. 9, in an exemplary embodiment, the second pose information determination module 703 includes:
a motion parameter acquisition unit 911 is used for acquiring motion parameters of the movable device between the current time and the previous time.
The second pose information determining unit 912 is configured to randomly convert the first pose information according to the motion parameter, so as to obtain at least two pieces of second pose information corresponding to the mobile device at the current moment.
Fig. 10 is a schematic structural diagram of a deviation distance determining module 705 in a pose determining apparatus of a movable device according to another exemplary embodiment of the present disclosure. As shown in fig. 10, in an exemplary embodiment, the deviation distance determining module 705 includes:
a reference point coordinate determination unit 1011, configured to determine the reference coordinates of at least one reference point from the third edge contour; and
a deviation distance determining unit 1012, configured to determine the deviation distance between the first edge contour and the third edge contour according to the sum of the distances from the coordinates of each reference point to the first edge contour.
Fig. 11 is a schematic structural diagram of a third pose information determining module 706 in a pose determining apparatus of a mobile device according to another exemplary embodiment of the present disclosure. As shown in fig. 11, in an exemplary embodiment, the third pose information determination module 706 includes:
a matching probability determination unit 1111 configured to determine a matching probability of the first edge contour and each third edge contour according to a deviation distance between the first edge contour and each third edge contour;
and a third pose information determining unit 1112, configured to determine, as third pose information, second pose information corresponding to a third edge contour whose matching probability satisfies a second preset condition.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present disclosure is described with reference to fig. 12. The electronic device may be either or both of the first device 100 and the second device 200, or a stand-alone device independent thereof, which may communicate with the first device and the second device to receive the acquired input signals therefrom.
Fig. 12 illustrates a block diagram of an electronic device according to an embodiment of the disclosure.
As shown in fig. 12, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. Non-volatile memory may include, for example, Read-Only Memory (ROM), hard disks, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 11 may execute the program instructions to implement the pose determining method of the movable device of the various embodiments of the disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, and a noise component may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
For example, when the electronic device is the first device 100 or the second device 200, the input means 13 may be a microphone or a microphone array as described above for capturing an input signal of a sound source. When the electronic device is a stand-alone device, the input means 13 may be a communication network connector for receiving the acquired input signals from the first device 100 and the second device 200.
In addition, the input device 13 may also include, for example, a keyboard, a mouse, and the like.
The output device 14 may output various information to the outside, including the determined distance information, direction information, and the like. The output device 14 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, only some of the components of the electronic device 10 relevant to the present disclosure are shown in fig. 12, with components such as buses, input/output interfaces, etc. omitted for simplicity. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer readable storage Medium
In addition to the methods and apparatus described above, embodiments of the present disclosure may also take the form of a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps of the pose determining method of a movable device according to various embodiments of the present disclosure described in the "Exemplary method" section above.
The computer program product may write program code for performing the operations of embodiments of the present disclosure in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform the steps in the pose determination method of a mobile device according to various embodiments of the present disclosure described in the above "exemplary method" section of the present description.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present disclosure have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present disclosure are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present disclosure. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, since the disclosure is not necessarily limited to practice with the specific details described.
The block diagrams of the devices, apparatuses, and systems referred to in this disclosure are merely illustrative examples and are not intended to require or imply that connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including", "comprising", "having", and the like are open-ended words meaning "including but not limited to" and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or", unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as but not limited to".
It is also noted that in the apparatus, devices and methods of the present disclosure, components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered equivalent to the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the disclosure to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. A pose determination method of a mobile device, comprising:
determining a first edge contour of the reference object in the first top view;
determining a corresponding second top view according to a first image acquired by the movable equipment at the current moment, and determining a second edge contour of the reference object in the second top view;
determining at least two pieces of second pose information corresponding to the movable equipment at the current moment according to the first pose information corresponding to the movable equipment at the previous moment before the current moment;
determining a third edge contour of the reference object corresponding to each piece of second pose information according to the second edge contour and each piece of second pose information;
determining the deviation distance between the first edge contour and each third edge contour;
and determining, as third pose information, the second pose information corresponding to a third edge contour whose deviation distance satisfies a first preset condition.
2. The method of claim 1, wherein the determining the corresponding second top view from the first image acquired by the mobile device at the current time comprises:
the first image is subjected to an inverse perspective transformation to determine the second top view.
3. The method of claim 1, wherein the determining at least two pieces of second pose information corresponding to the movable device at the current moment according to the first pose information corresponding to the movable device at the previous moment before the current moment comprises:
acquiring motion parameters of the movable device between the current moment and the previous moment;
and randomly transforming the first pose information according to the motion parameters to obtain the at least two pieces of second pose information corresponding to the movable device at the current moment.
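Claim 3's random transformation of the first pose information reads like particle-style hypothesis sampling. A minimal sketch under assumed conditions: a planar (x, y, yaw) pose, odometry-style motion parameters (dx, dy, dyaw) measured in the device frame, and illustrative Gaussian noise scales.

```python
import numpy as np

def sample_pose_hypotheses(first_pose, motion, n=100, pos_sigma=0.1, yaw_sigma=0.02):
    """Propagate the previous pose with the measured motion, then perturb it
    randomly to obtain n >= 2 candidate poses (second pose information)."""
    x, y, yaw = first_pose
    dx, dy, dyaw = motion
    # Dead-reckoned prediction: rotate the body-frame motion into the world frame.
    px = x + dx * np.cos(yaw) - dy * np.sin(yaw)
    py = y + dx * np.sin(yaw) + dy * np.cos(yaw)
    pyaw = yaw + dyaw
    rng = np.random.default_rng()
    return np.stack([rng.normal(px, pos_sigma, n),
                     rng.normal(py, pos_sigma, n),
                     rng.normal(pyaw, yaw_sigma, n)], axis=1)  # shape (n, 3)
```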
4. The method according to claim 3, wherein the determining the third edge contour of the reference object corresponding to each piece of second pose information according to the second edge contour and each piece of second pose information comprises:
projecting the second edge contour into the first top view according to each piece of second pose information to determine the third edge contour corresponding to that piece of second pose information.
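Under the planar-pose assumption above, projecting the second edge contour into the first top view for one candidate pose (claim 4) amounts to a rigid SE(2) transform; the array layout is an assumption.

```python
import numpy as np

def project_contour(second_contour, pose):
    """Map device-frame contour points (K, 2) into the map (first top view)
    frame under one candidate pose, yielding the third edge contour."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])                 # planar rotation by yaw
    return second_contour @ R.T + np.array([x, y])  # rotate, then translate
```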
5. The method of claim 4, wherein the determining the deviation distance between the first edge contour and each third edge contour comprises:
determining coordinates of at least one reference point from the third edge contour;
and determining the deviation distance between the first edge contour and the third edge contour according to the first edge contour and the coordinates of each reference point.
6. The method of claim 5, wherein the determining the deviation distance between the first edge contour and the third edge contour according to the first edge contour and the reference point coordinates comprises:
determining the deviation distance according to the sum of the distances from the coordinates of each reference point to the first edge contour.
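One common way to evaluate the sum of point-to-contour distances in claim 6 efficiently is a distance transform precomputed once over the first top view. The sketch below assumes the first edge contour has been rasterized into a binary mask, which the patent does not prescribe.

```python
import cv2
import numpy as np

def deviation_distance(first_contour_mask, reference_points):
    """Sum of distances from each reference point (on a third edge contour)
    to the nearest pixel of the first edge contour.

    first_contour_mask: uint8 image, nonzero exactly on the first edge contour
    reference_points  : (K, 2) integer pixel coordinates as (col, row)
    """
    # distanceTransform measures the distance to the nearest zero pixel, so
    # invert the mask: contour pixels become 0, everything else 1.
    dist = cv2.distanceTransform((first_contour_mask == 0).astype(np.uint8),
                                 cv2.DIST_L2, 3)
    cols, rows = reference_points[:, 0], reference_points[:, 1]
    return float(dist[rows, cols].sum())
```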
7. The method according to any one of claims 1 to 6, wherein the determining, as the third pose information, the second pose information corresponding to the third edge contour whose deviation distance satisfies the first preset condition comprises:
determining a matching probability between the first edge contour and each third edge contour according to the deviation distance between the first edge contour and that third edge contour;
and determining, as the third pose information, the second pose information corresponding to the third edge contour whose matching probability meets a second preset condition.
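Claim 7's matching probability can be realized, for example, by Gaussian weighting of the deviation distances followed by normalization, with the "second preset condition" taken as picking the highest-probability hypothesis; the sigma value and the argmax selection rule are illustrative assumptions.

```python
import numpy as np

def matching_probabilities(deviations, sigma=5.0):
    # Smaller deviation -> larger weight; normalize weights to probabilities.
    w = np.exp(-0.5 * (np.asarray(deviations) / sigma) ** 2)
    return w / w.sum()

# One possible "second preset condition": take the most probable hypothesis.
# best_pose = candidate_poses[int(np.argmax(matching_probabilities(devs)))]
```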
8. A pose determination apparatus for a movable device, comprising:
a first edge contour determination module for determining a first edge contour of a reference object in a first top view;
a second edge contour determination module for determining a corresponding second top view according to a first image acquired by the movable device at a current moment, and determining a second edge contour of the reference object in the second top view;
a second pose information determination module for determining at least two pieces of second pose information corresponding to the movable device at the current moment according to first pose information corresponding to the movable device at a previous moment before the current moment;
a third edge contour determination module for determining a third edge contour of the reference object corresponding to each piece of second pose information according to the second edge contour and each piece of second pose information;
a deviation distance determination module for determining a deviation distance between the first edge contour and each third edge contour;
and a third pose information determination module for determining, as third pose information, the second pose information corresponding to a third edge contour whose deviation distance meets a first preset condition.
9. A computer-readable storage medium storing a computer program for executing the pose determination method for a movable device according to any one of claims 1 to 7.
10. An electronic device, the electronic device comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the pose determination method for a movable device according to any one of claims 1 to 7.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010039297.8A CN113129361B (en) 2020-01-14 2020-01-14 Pose determining method and device for movable equipment

Publications (2)

Publication Number Publication Date
CN113129361A (en) 2021-07-16
CN113129361B (en) 2024-03-15

Family

ID=76771252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010039297.8A Active CN113129361B (en) 2020-01-14 2020-01-14 Pose determining method and device for movable equipment

Country Status (1)

Country Link
CN (1) CN113129361B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945230A (en) * 2017-11-14 2018-04-20 Fujian Zhongjin Online Information Technology Co., Ltd. Attitude information determination method and apparatus, electronic device, and storage medium
CN108802785A (en) * 2018-08-24 2018-11-13 Tsinghua University Vehicle self-localization method based on a high-precision vector map and a monocular vision sensor
CN109544629A (en) * 2018-11-29 2019-03-29 Nanjing Institute of Advanced Artificial Intelligence Co., Ltd. Camera pose determination method and apparatus, and electronic device
CN109727285A (en) * 2017-10-31 2019-05-07 Honeywell International Inc. Position and attitude determination method and system using edge images
CN110208783A (en) * 2019-05-21 2019-09-06 Tongji Artificial Intelligence Research Institute (Suzhou) Co., Ltd. Intelligent vehicle localization method based on environment contours
CN110398979A (en) * 2019-06-25 2019-11-01 Tianjin University Unmanned engineering operation equipment tracking method and device based on fusion of vision and attitude

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8401225B2 (en) * 2011-01-31 2013-03-19 Microsoft Corporation Moving object segmentation using depth images
US10260862B2 (en) * 2015-11-02 2019-04-16 Mitsubishi Electric Research Laboratories, Inc. Pose estimation using sensors
US10282860B2 (en) * 2017-05-22 2019-05-07 Honda Motor Co., Ltd. Monocular localization in urban environments using road markings
US10503760B2 (en) * 2018-03-29 2019-12-10 Aurora Innovation, Inc. Use of relative atlas in an autonomous vehicle

Also Published As

Publication number Publication date
CN113129361A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
EP3627180B1 (en) Sensor calibration method and device, computer device, medium, and vehicle
CN109059902B (en) Relative pose determination method, device, equipment and medium
CN109461211B (en) Semantic vector map construction method and device based on visual point cloud and electronic equipment
CN109635685B (en) Target object 3D detection method, device, medium and equipment
KR102032070B1 (en) System and Method for Depth Map Sampling
Servos et al. Multi-Channel Generalized-ICP: A robust framework for multi-channel scan registration
WO2019179464A1 (en) Method for predicting direction of movement of target object, vehicle control method, and device
CN113015924B (en) Apparatus and method for characterizing an object based on measurement samples from one or more position sensors
US11783507B2 (en) Camera calibration apparatus and operating method
JP7228623B2 (en) Obstacle detection method, device, equipment, storage medium, and program
CN110390706B (en) Object detection method and device
WO2022217988A1 (en) Sensor configuration scheme determination method and apparatus, computer device, storage medium, and program
KR20190062852A (en) System, module and method for detecting pedestrian, computer program
CN114355415A (en) Pose information determining method and device, electronic equipment and storage medium
CN110542421B (en) Robot positioning method, positioning device, robot, and storage medium
CN113793370B (en) Three-dimensional point cloud registration method and device, electronic equipment and readable medium
Pessanha Santos et al. Unscented particle filters with refinement steps for UAV pose tracking
CN110542422B (en) Robot positioning method, device, robot and storage medium
CN113129361B (en) Pose determining method and device for movable equipment
CN113759348A (en) Radar calibration method, device, equipment and storage medium
Cai et al. 3D vehicle detection based on LiDAR and camera fusion
CN115346020A (en) Point cloud processing method, obstacle avoidance method, device, robot and storage medium
CN114384486A (en) Data processing method and device
CN113129437B (en) Method and device for determining space coordinates of markers
CN112614189B (en) Combined calibration method based on camera and 3D laser radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant