CN111897432A - Pose determining method and device and electronic equipment - Google Patents


Info

Publication number
CN111897432A
CN111897432A (Application CN202010768009.2A)
Authority
CN
China
Prior art keywords
electronic device
image
target object
target objects
electronic equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010768009.2A
Other languages
Chinese (zh)
Inventor
刘超
陈玉琨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202010768009.2A priority Critical patent/CN111897432A/en
Publication of CN111897432A publication Critical patent/CN111897432A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/24 Constructional details thereof, e.g. game controllers with detachable joystick handles

Abstract

The application discloses a pose determination method, a pose determination apparatus and an electronic device, wherein the method comprises the following steps: obtaining a current first image of a first electronic device, wherein the first electronic device can control a virtual scene output by a second electronic device, and a plurality of target objects are arranged on the first electronic device; and determining first pose information of the first electronic device relative to the second electronic device at present at least based on the relative positional relationship between the target objects in the first image and a positional relationship model created based on three-dimensional coordinate information of the target objects of the first electronic device. In this way, the pose information of the first electronic device serving as the control end relative to the second electronic device serving as the controlled end can be determined.

Description

Pose determining method and device and electronic equipment
Technical Field
The present application relates to the field of information processing technologies, and in particular, to a pose determination method and apparatus, and an electronic device.
Background
It is now common to control one electronic device with another. For example, in Virtual Reality (VR) and Augmented Reality (AR), a handle is required to control a Virtual scene output by a Virtual Reality device.
To improve the convenience of control, one electronic device may determine a control instruction based on the pose (position and attitude information) of another electronic device. For example, in an AR- or VR-based game scenario, a player may implement game control by changing the position and attitude of the handle. Therefore, determining the pose of the electronic device serving as the control end is a precondition for accurate control of the electronic device serving as the controlled end, and is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The application provides a pose determining method and device and electronic equipment.
The pose determination method comprises the following steps:
the method comprises the steps of obtaining a current first image of first electronic equipment, wherein the first electronic equipment can control a virtual scene output by second electronic equipment, and a plurality of target objects are arranged on the first electronic equipment;
determining first pose information of the first electronic device relative to the second electronic device at present based on at least a relative position relationship between target objects in the first image and a position relationship model created based on three-dimensional coordinate information of the target objects of the first electronic device.
Preferably, the determining first pose information of the first electronic device with respect to the second electronic device based on at least a relative positional relationship between target objects in the first image and a positional relationship model created based on three-dimensional coordinate information of the target objects of the first electronic device includes:
determining a target object group contained in the first image based on a relative position relationship between target objects in the first image and the position relationship model, wherein each target object in the target object group has a first relative position relationship, and the position relationship model at least can represent three-dimensional distribution of each target object in the first electronic equipment;
and determining first pose information of the first electronic equipment relative to the second electronic equipment currently at least based on the first relative position relationship and the three-dimensional coordinate information of each target object in the first image.
Preferably, the method further comprises the following steps:
determining projection position points of the remaining target objects in the first electronic equipment on the first image at least based on the position relation model and the first pose information, and determining second pose information of the first electronic equipment relative to the second electronic equipment according to the projection position points and the three-dimensional coordinate information corresponding to the remaining target objects.
Preferably, the first image includes at least four target objects therein;
the determining a target object group contained in the first image based on the relative positional relationship between the target objects in the first image and the positional relationship model includes:
determining at least three first target objects for pose calculation and at least one second target object for pose verification from among at least four target objects of the first image;
determining a first target object group and a second target object group contained in the first image based on the positional relationship model and a relative positional relationship between target objects in the first image, wherein the first target object group comprises at least three target objects matched with the at least three first target objects in the first electronic device, the second target object group comprises at least one target object matched with the at least one second target object in the first electronic device, and the first target object group has a first relative positional relationship;
the determining, based on at least the first relative positional relationship and three-dimensional coordinate information of each target object in the first image, first pose information of the first electronic device with respect to the second electronic device includes:
determining at least one set of first pose information of the first electronic device relative to the second electronic device at present based on at least a first relative position relationship of each target object in the first target object group and three-dimensional coordinate information of each target object in the first image;
the method further comprises the following steps:
and performing projection verification on the at least one group of first pose information based on each target object in the second target object group and the position relation model, and determining correct pose information in the at least one group of first pose information according to a projection verification result.
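The projection-verification step above can be sketched in Python. This is a minimal illustration under an assumed pinhole camera model; the intrinsic parameters (fx, fy, cx, cy), the `project` helper and the error threshold are hypothetical values for the sketch, not part of the patent:

```python
import math

def project(point3d, rotation, translation, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Project a 3D marker point into the image with a pinhole model.
    rotation: 3x3 matrix (list of rows); translation: (tx, ty, tz)."""
    x, y, z = (
        sum(rotation[i][j] * point3d[j] for j in range(3)) + translation[i]
        for i in range(3)
    )
    return (fx * x / z + cx, fy * y / z + cy)

def verify_poses(candidate_poses, verification_points3d, observed_points2d,
                 max_error=3.0):
    """Return the candidate (rotation, translation) whose mean reprojection
    error on the verification markers is smallest and below max_error,
    or None when no candidate passes the verification."""
    best, best_err = None, float("inf")
    for rotation, translation in candidate_poses:
        err = 0.0
        for p3d, p2d in zip(verification_points3d, observed_points2d):
            u, v = project(p3d, rotation, translation)
            err += math.hypot(u - p2d[0], v - p2d[1])
        err /= len(verification_points3d)
        if err < best_err:
            best, best_err = (rotation, translation), err
    return best if best_err <= max_error else None
```

A `None` result corresponds to the case where no correct pose exists among the candidates, triggering re-selection of the target object groups.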
Preferably, the method further comprises the following steps:
and if the fact that correct pose information does not exist in the at least one group of first pose information is determined according to the projection verification result, returning to execute the operation of determining the first target object group and the second target object group contained in the first image so as to determine the first target object group and the second target object group contained in the first image from the plurality of target objects of the first electronic equipment again.
Preferably, the determining first pose information of the first electronic device with respect to the second electronic device based on at least the first relative positional relationship and three-dimensional coordinate information of each target object in the first image includes:
and determining, by using a perspective projection algorithm, first pose information of the first electronic equipment relative to the second electronic equipment currently, at least based on the first relative position relation and the three-dimensional coordinate information of each target object in the first image.
Preferably, the method further comprises the following steps:
if the number of the target objects in the first image is less than the set number, obtaining third pose information determined by an inertial measurement device in the first electronic device, and determining the third pose information as pose information of the first electronic device relative to the second electronic device.
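This fallback can be expressed as a tiny dispatch. The marker-count threshold of four and the function name below are assumptions chosen purely for illustration — the patent leaves the "set number" open:

```python
MIN_MARKERS = 4  # assumed "set number" of visible markers for visual pose

def select_pose(detected_markers, visual_pose, imu_pose):
    """Fall back to the IMU-derived (third) pose information when too few
    target objects are visible in the first image."""
    if len(detected_markers) < MIN_MARKERS:
        return imu_pose   # third pose information from the inertial unit
    return visual_pose    # first pose information from the image
```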
Preferably, the method further comprises the following steps:
obtaining a current second image of the first electronic equipment;
determining fourth pose information of the first electronic device relative to the second electronic device at present based on at least a second relative position relationship between target objects in the second image and the position relationship model;
determining input control data for the second electronic device based on the first and fourth pose information.
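A minimal sketch of deriving input control data from two consecutive poses, assuming each pose is represented as a 6-tuple (x, y, z, pitch, yaw, roll) — a representation the patent does not fix:

```python
def input_control_data(first_pose, fourth_pose):
    """Derive input control data as the motion between the pose from the
    first image and the pose from the second image; the component-wise
    delta can then drive the virtual scene."""
    return tuple(b - a for a, b in zip(first_pose, fourth_pose))
```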
Wherein, a pose determination apparatus includes:
the image obtaining unit is used for obtaining a current first image of first electronic equipment, the first electronic equipment can control a virtual scene output by second electronic equipment, and a plurality of target objects are arranged on the first electronic equipment;
a pose determination unit configured to determine first pose information of the first electronic device with respect to the second electronic device at present based on at least a relative positional relationship between target objects in the first image and a positional relationship model created based on three-dimensional coordinate information of the target objects of the first electronic device.
Wherein, an electronic equipment includes:
a memory and a processor;
the processor is used for executing the pose determination method of any one of the above items;
the memory is used for storing programs needed by the processor to execute operations.
According to the above scheme, a plurality of target objects are arranged on the first electronic device serving as the control end in the present application, and the second electronic device can obtain the current first image of the first electronic device, and since the two-dimensional coordinate information of at least part of the target objects in the first electronic device can be reflected according to the relative position relationship between the target objects in the first image, the position and orientation information of the first electronic device relative to the second electronic device can be determined based on the relative position relationship between the target objects in the first image and the position relationship model capable of reflecting the three-dimensional coordinate information of each target object of the first electronic device.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a pose determination method according to an embodiment of the present application;
fig. 2 is a schematic arrangement diagram of a plurality of target objects on a first electronic device according to an embodiment of the present disclosure;
FIG. 3 is a schematic plan view of a plurality of target objects on the first electronic device shown in FIG. 2;
fig. 4 is a schematic flowchart of another pose determination method according to an embodiment of the present application;
FIG. 5 is a schematic plan view of a distribution of target objects on the first electronic device;
FIG. 6 is a schematic diagram of target objects in a first image of the first electronic device shown in FIG. 5;
fig. 7 is a schematic flowchart of another pose determination method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a target object in a first image of the first electronic device shown in FIG. 2;
fig. 9 is a schematic structural diagram of a pose determination apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be practiced otherwise than as specifically illustrated.
Detailed Description
The method and the device are suitable for determining the pose information of the first electronic device serving as the control end relative to the second electronic device serving as the controlled end. For example, the first electronic device may be a handle in a virtual reality scene, while the second electronic device may be a smart helmet that outputs the virtual reality scene.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present disclosure.
Referring to fig. 1, fig. 1 is a schematic view of an implementation flow of a pose determination method according to an embodiment of the present application, where the method of this embodiment may be applied to an electronic device serving as a controlled end, that is, a second electronic device. The method of the embodiment may include:
s101, obtaining a current first image of the first electronic device.
The first electronic equipment can control the virtual scene output by the second electronic equipment.
In the application, a plurality of target objects are arranged on the first electronic equipment. The target object is an identification mark on the first electronic device, for example, the target object may be an icon, an LED lamp, or other identifier disposed on the housing of the first electronic device.
It can be understood that, since the first electronic device has a plurality of target objects as the identification marks, after the image of the first electronic device is obtained, at least a part of the target objects on the first electronic device will be included in the image. For the sake of convenience of distinction, the image of the first electronic device obtained in step S101 is referred to as a first image.
The first image may include an image of all or a part of the target object in the first electronic device.
The target objects on the first electronic device can be arranged as required.
Optionally, in order to determine more accurately and conveniently which target objects of the first electronic device appear in the first image, in the present application the plurality of target objects on the first electronic device are arranged according to a set arrangement rule.
Wherein the arrangement rule may have many possibilities. For example, in a possible implementation manner, the plurality of target objects of the first electronic device are all the same, and on this basis, the arrangement of the plurality of target objects on the first electronic device may conform to the set pattern rule. For example, a plurality of patterns arranged in sequence, such as a circular ring, a triangle, a diamond, a pentagram, and the like, may be sequentially constructed on the first electronic device by using a plurality of target objects (such as a plurality of circular lamps or black dots, and the like) according to an arrangement rule that the circular ring, the triangle, the diamond, the pentagram, and the like are arranged in sequence.
In yet another possible case, at least two different types of target objects are disposed on the first electronic device, and the arrangement between the at least two different types of target objects has a set arrangement rule.
For example, the first electronic device may include a plurality of different target objects, and the plurality of different target objects are arranged in a set order. For example, if the plurality of target objects are a plurality of markers having different patterns, the plurality of patterns on the plurality of markers can be arranged in a predetermined order by arranging the plurality of markers.
For another example, the plurality of target objects of the first electronic device may be divided into at least two types, and each type of target object includes at least one. On the basis, the arrangement rule of different types of target objects in the first electronic equipment can be set. In connection with a scenario, as seen in fig. 2, a schematic diagram of an arrangement of a plurality of target objects on a first electronic device is shown.
In fig. 2, taking the first electronic device as an example of a ring-shaped control handle 200, a plurality of LED lamps 201 may be disposed on the outer side of the handle, as indicated by the respective dots on the handle 200 shown in fig. 2. The plurality of LED lamps are divided into two types: small LED lamps and relatively large LED lamps, shown as the large dots and small dots in fig. 2. Of course, it is also possible to use LED lamps of the same size and have the first electronic device control them to exhibit different brightness, achieving the effect of fig. 2. As can be seen from fig. 2, the large LED lamps are distributed at intervals along the upper edge and the lower edge of the handle, and four small LED lamps are distributed around each large LED lamp. For an intuitive illustration, see fig. 3, which is a schematic plan view of the LED lamp distribution on the handle of fig. 2. It can be seen from fig. 3 that the arrangement of the large LED lamps and the small LED lamps follows a regular pattern.
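An arrangement like that of fig. 2/fig. 3 can be modelled programmatically. The sketch below is purely hypothetical — the number of large LEDs, the ring radius and the offsets are invented values — but it shows how such an arrangement rule yields a marker map with three-dimensional coordinates relative to the handle centre:

```python
import math

def ring_led_layout(num_large=8, radius=0.06, small_offset=0.008):
    """Build a hypothetical marker map for a ring handle: large LEDs spaced
    evenly around the ring, four small LEDs around each large one.
    Returns {led_id: {"type": ..., "xyz": (x, y, z)}} in metres, with the
    handle centre as the coordinate origin."""
    layout = {}
    led_id = 0
    for k in range(num_large):
        angle = 2 * math.pi * k / num_large
        x, y = radius * math.cos(angle), radius * math.sin(angle)
        # alternate large LEDs between the upper and lower edge of the ring
        z = small_offset if k % 2 == 0 else -small_offset
        layout[led_id] = {"type": "large", "xyz": (x, y, z)}
        led_id += 1
        for dx, dz in ((small_offset, 0), (-small_offset, 0),
                       (0, small_offset), (0, -small_offset)):
            layout[led_id] = {"type": "small", "xyz": (x + dx, y, z + dz)}
            led_id += 1
    return layout
```

A map of this kind is one possible concrete form of the positional relationship model stored on the second electronic device side.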
S102, determining first pose information of the first electronic equipment relative to second electronic equipment at present at least based on the relative position relation between the target objects in the first image and the position relation model of each target object of the first electronic equipment.
Wherein the position relation model is created based on three-dimensional coordinate information of each target object of the first electronic device. For example, the three-dimensional coordinate information of each target object of the first electronic device may be three-dimensional coordinates of each target object relative to the first electronic device, for example, a spatial coordinate system is constructed with a central point of the first electronic device as an origin of a coordinate axis, and then the three-dimensional coordinate information of each target object in the first electronic device is determined.
Wherein the position relation model may be stored at the second electronic device side. Accordingly, the second electronic device side can restore the three-dimensional coordinate information of each target object included in the first electronic device based on the position relationship model. Meanwhile, the three-dimensional coordinate information of each target object on the first electronic device can reflect the arrangement mode of each target object on the first electronic device, so that the arrangement rule of each target object in the first electronic device can be obtained.
It can be understood that, since the position relationship model may reflect three-dimensional coordinate information of each target object on the first electronic device, and the relative position relationship of the target objects in the first image actually reflects two-dimensional coordinate information of at least part of the target objects on the first electronic device in the coordinate system of the first image, based on this, two-dimensional coordinates and three-dimensional coordinate information corresponding to at least part of the target objects of the first electronic device may be obtained.
The two-dimensional coordinates of at least part of target objects in the first electronic equipment refer to the two-dimensional coordinates of the at least part of target objects in the first electronic equipment on a camera coordinate system of the second electronic equipment, so that the pose information of the first electronic equipment relative to the second electronic equipment is determined based on the two-dimensional coordinates of the at least part of target objects and the three-dimensional coordinate information of the at least part of target objects.
The pose can be determined by any three-dimensional reconstruction method, using the mapping relationship between the two-dimensional coordinates and the three-dimensional coordinates of at least part of the target objects in the first electronic equipment; the specific method is not limited here.
For example, in an alternative, the first pose information of the first electronic device relative to the second electronic device may be determined based on at least the first relative position relationship and the three-dimensional coordinate information of each target object in the first image by using a perspective projection algorithm. The perspective projection algorithm may also be referred to as a Perspective-n-Point (PnP) algorithm.
Wherein the pose information comprises position information and pose information of the first electronic device relative to the second electronic device. For example, the attitude information may include a pitch angle, a heading angle, and a roll angle.
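For illustration, the attitude angles can be read off a rotation matrix once the pose is solved. The sketch below assumes the common Z-Y-X (heading-pitch-roll) convention, which the patent does not specify:

```python
import math

def euler_from_rotation(rotation):
    """Extract (pitch, heading, roll) in degrees from a 3x3 rotation matrix,
    assuming R = Rz(heading) @ Ry(pitch) @ Rx(roll)."""
    r = rotation
    heading = math.atan2(r[1][0], r[0][0])             # yaw about Z
    pitch = math.asin(max(-1.0, min(1.0, -r[2][0])))   # rotation about Y
    roll = math.atan2(r[2][1], r[2][2])                # rotation about X
    return tuple(math.degrees(a) for a in (pitch, heading, roll))
```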
For the sake of convenience of distinction, the present application refers to the pose information of the first electronic device with respect to the second electronic device, which is determined based on the first image, as first pose information.
As can be seen, in the present application, a plurality of target objects are disposed on a first electronic device serving as a control end, and a second electronic device can obtain a current first image of the first electronic device, and two-dimensional coordinate information of at least some target objects in the first electronic device can be reflected according to a relative position relationship between the target objects in the first image, so that pose information of the first electronic device relative to the second electronic device can be determined based on the relative position relationship between the target objects in the first image and a position relationship model capable of reflecting three-dimensional coordinate information of each target object of the first electronic device.
Meanwhile, compared with the prior art in which the control-end device is positioned based on ultrasonic waves and only the position information of the first electronic equipment relative to the second electronic equipment can be determined, the present application can also obtain the attitude information, which is favorable for controlling the virtual scene output by the second electronic equipment more accurately based on the first electronic equipment, and can avoid the situation where accuracy is too low due to jitter in ultrasonic positioning. Moreover, compared with determining the pose information of the first electronic equipment by combining ultrasonic waves with an inertial measurement unit on the first electronic equipment side, the present application can effectively avoid unstable attitude determination caused by yaw-angle drift, which arises from long-time integration of the gyroscope in the inertial measurement unit and cannot be effectively corrected.
In addition, the pose information of the first electronic equipment can be determined only based on the first image obtained currently, and the pose information does not need to be determined by combining the previous frame of image before the first image, so that the situation that due to the fact that the first electronic equipment moves rapidly, a target object serving as a marker cannot be correctly identified due to mismatching, and finally pose positioning abnormity is caused due to tracking abnormity of the target object can be avoided.
As shown in fig. 4, which shows a schematic flow chart of another embodiment of the pose determination method according to the present application, the method of this embodiment may include:
s401, obtaining a current first image of the first electronic device.
The first electronic equipment can control the virtual scene output by the second electronic equipment.
In the application, a plurality of target objects are arranged on the first electronic equipment.
S402, determining a target object group contained in the first image based on the relative position relation between the target objects in the first image and the position relation model.
The position relationship model at least can represent three-dimensional distribution of each target object in the first electronic device, and the position relationship model may specifically refer to the related description of the foregoing embodiment, which is not described herein again.
And each target object in the target object group has a first relative position relationship.
The target object group includes: at least a set number of target objects in the first image that are determined to match the target objects of the first electronic equipment based on the relative position relationship between the target objects in the first image and the position relationship of the target objects in the electronic equipment. The set number may be chosen as needed; for example, if the set number is four or five, the target object group includes at least four or five target objects.
It is understood that, in practical applications, the number of target objects required to be included in the target object group may be set as required. Meanwhile, the first relative positional relationship between the target objects included in the target object group may be different. For example, if the first image actually includes images of the target objects 1 to 10 in the first electronic device, if the target object group is set to include 4 target objects, the determined target object group may be composed of the target object 1, the target object 2, the target object 3, and the target object 4, or may be composed of the target object 3, the target object 4, the target object 5, and the target object 6, and in these two cases, the relative position relationship of the target objects in the target object group may be different.
It can be understood that, since the target object in the first image is actually an image of a target object on the first electronic device side, determining the target object group contained in the first image amounts to determining which target objects on the first electronic device side the target objects in the first image correspond to, and finally determining at least the set number of target objects of the first electronic device contained in the first image. Correspondingly, determining the target object group contained in the first image may also be seen as determining the target object group on the first electronic device side, where the target object group comprises target objects whose relative position relationship matches that of the target objects in the first image.
It can be understood that, since the relative position relationship of each target object in the first image may reflect the relative position relationship of each target object in the first image on the first electronic device, the target object in the first electronic device that matches the relative position relationship of each target object in the first image may be determined based on the position relationship model corresponding to each target object in the first electronic device, so as to determine the target object group corresponding to the first electronic device side in the first image.
For example, the relative position relationship of each target object in the first image may reflect the arrangement manner between each target object in the first image, and accordingly, the target object that meets the arrangement manner may be searched from the first electronic device side based on the arrangement manner, so as to determine which target objects belonging to the first electronic device are included in the first image.
Referring to fig. 5 and fig. 6, fig. 5 is a schematic plan view illustrating the distribution of target objects on the first electronic device; in fig. 5, the target objects of the first electronic device are shown as dots 501, and it can be seen that the target objects form different patterns. Fig. 6 is a schematic diagram of each target object in the first image of the first electronic device shown in fig. 5. As can be seen from a comparison of fig. 5 and fig. 6, the first image in fig. 6 includes the portion of the target objects of fig. 5 that forms a circular pattern and a square pattern, as indicated by the bold dashed box in fig. 5. Based on this, it can be determined that the target object group included in the first image is composed of the target objects within the dashed frame shown in fig. 5 on the first electronic device side.
As an alternative, the position relationship model may also be associated with attribute features of each target object on the first electronic device. For example, the attribute features may include one or more features reflecting the external appearance of the target object, such as the shape of the target object, the pattern features it contains, or the image of the target object. On this basis, since the first image contains the image of each target object, the attribute information of each target object in the first image can be determined; the images and the relative position relationship of the target objects in the first image can then be combined to determine which target objects in the first electronic device constitute the target object group contained in the first image.
And S403, determining first pose information of the first electronic device currently relative to the second electronic device based on at least the first relative position relationship and the three-dimensional coordinate information of each target object in the first image.
The three-dimensional coordinate information of each target object in the first image can be obtained based on the target object group contained in the first image and the position relation model of each target object of the first electronic device. For example, the corresponding relationship between the target object group in the first image and the target object on the first electronic device may be determined based on the target object group in the first image, so that the three-dimensional coordinate information of each target object in the first image may be determined based on the position relationship model.
It is understood that the first relative position relationship between the target objects in the target object group is two-dimensional coordinate information between the target objects in the target object group, and the three-dimensional coordinate information of each target object in the first image includes three-dimensional coordinate information of each target object in the target object group, so that the first pose information of the first electronic device relative to the second electronic device can be determined based on an algorithm such as PNP.
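An algorithm such as PNP recovers the pose by inverting the pinhole projection that maps each target object's three-dimensional coordinates to its two-dimensional image coordinates. The sketch below shows only this forward model, not the patent's solver; the intrinsic values (fx, fy, cx, cy) and the example pose are assumptions for illustration:

```python
# Illustrative pinhole projection: given a target's 3D coordinates in the
# device model and a candidate pose (R, t) of the device relative to the
# camera, predict where that target should appear in the first image.
import math

def rot_z(theta):
    """Rotation matrix about the camera z-axis (enough for this sketch)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def project(point3d, R, t, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    # Camera-frame coordinates: Xc = R * X + t
    xc = [sum(R[i][j] * point3d[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective divide, then apply the (assumed) intrinsic parameters.
    return (fx * xc[0] / xc[2] + cx, fy * xc[1] / xc[2] + cy)

R = rot_z(0.0)            # identity rotation for the sketch
t = [0.0, 0.0, 5.0]       # device 5 units in front of the camera
uv = project([1.0, 0.0, 0.0], R, t)
print(uv)  # -> (480.0, 240.0)
```

A PNP-style solver searches for the (R, t) that makes these predicted points agree with the detected two-dimensional coordinates; with three correspondences (P3P) several solutions can remain, with four or more the pose is generally unique.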
Alternatively, the first pose information may be determined based on the first relative position relationship and the three-dimensional coordinate information of each target object in the target object group. It can be understood that the first relative position relationship is the two-dimensional coordinate information of each target object in the target object group on the coordinate system where the first image is located, and the three-dimensional coordinate information of each target object in the target object group can be obtained directly from the position relationship model, so that the three-dimensional coordinate information of the target object group can be obtained more efficiently, and the first pose information can therefore be determined more efficiently.
In this embodiment of the application, the target object group contained in the first image is determined according to the relative position relationship of the target objects in the first image and the position relationship model of the target objects of the first electronic device; that is, the correspondence between the target objects in the first image and the target objects on the first electronic device is determined. Since the relative position relationship of the target object group on the first image is the two-dimensional coordinate information on the camera coordinate system of the second electronic device, the relative spatial pose between the first electronic device and the second electronic device can be restored based on the two-dimensional coordinate information of each target object in the target object group and the three-dimensional coordinate information of each target object in the target object group, thereby obtaining the first pose information.
S404, determining projection position points, on the first image, of the remaining target objects in the first electronic device based on at least the position relationship model and the first pose information, and determining second pose information of the first electronic device relative to the second electronic device according to the projection position points and the three-dimensional coordinate information corresponding to the remaining target objects.
The remaining target objects in the first electronic device refer to target objects in the first electronic device that do not belong to the target object group.
The projection position point of the target object on the first image may be determined in various ways:
for example, according to the position relationship model and the first pose information, each target object outside the target object group in the first electronic device is projected to a camera coordinate system (a plane coordinate system where the first image is located) of the second electronic device, so as to obtain projected position points of the remaining target objects in the first electronic device on the first image.
For another example, the projection position points of the remaining target objects in the first electronic device on the first image may be determined according to the mapping relationship between each target object in the target object group in the first electronic device and the target objects in the first image, the position relationship model, and the first pose information. The second pose information may then be determined by combining an algorithm such as PNP, based on the projection position points of the remaining target objects in the first electronic device on the first image and the three-dimensional coordinate information of each of the remaining target objects in the first electronic device.
For example, the mapping relationship between the projection position points of the remaining target objects and the remaining target objects in the first electronic device may be combined, and the second pose information may be determined by using an algorithm such as PNP based on the projection position points of the remaining target objects in the first electronic device on the first image and the three-dimensional coordinate information of the remaining target objects in the first electronic device. The implementation principle of determining the second pose information is similar to that of determining the first pose information, and is not described herein again. It should be noted that this step S404 is an optional step; its purpose is, after the first pose information is determined, to calculate second pose information of the first electronic device relative to the second electronic device again by additionally using the target objects of the first electronic device that did not participate in determining the first pose information, so as to improve the accuracy of the determined pose information. This step S404 may or may not be performed according to the requirement on the accuracy of the pose information.
It is to be understood that, in order to improve the accuracy, step S404 may also determine a projection position point on the first image for each target object of the first electronic device based on at least the position relationship model and the first pose information. Then, second pose information of the first electronic device relative to the second electronic device is determined based on the position relationship model and the mapping relationship between the projection position points and the target objects in the first image.
After the projection position points of the target objects of the first electronic device on the first image are determined, the mapping relationship between the projection position points and the target objects in the first electronic device is also determined, so that, based on the mapping relationship, the two-dimensional coordinate information (the projection position points) of the target objects in the first image and the position relationship model, the second pose information can be determined by adopting an algorithm such as PNP.
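The association step described above can be sketched as follows. This is an illustrative sketch only, under assumed names and a nearest-neighbour gating strategy; the patent does not prescribe a particular association method:

```python
# Hedged sketch: associate projected "remaining" targets with detected image
# dots by nearest neighbour, keeping only matches inside a pixel gate. The
# resulting 2D-3D pairs would then feed a second PNP-style solve.
import math

def associate(projected, observed, gate=3.0):
    """projected: {target_id: (u, v)} predicted from the first pose;
    observed: list of (u, v) detections in the first image.
    Returns {target_id: observed (u, v)} for matches within the gate."""
    matches = {}
    for tid, p in projected.items():
        best = min(observed, key=lambda o: math.dist(p, o))
        if math.dist(p, best) <= gate:
            matches[tid] = best
    return matches

# Assumed example values: led_7 reprojects near a detection, led_8 does not.
projected = {"led_7": (100.0, 50.0), "led_8": (220.0, 90.0)}
observed = [(101.2, 49.5), (400.0, 300.0)]
print(associate(projected, observed))  # -> {'led_7': (101.2, 49.5)}
```

Only the gated matches contribute to the second solve, so a stray detection (here at (400, 300)) does not corrupt the refined pose.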
For ease of understanding, the process of determining the first pose information based on the target object group is described below as an example of one implementation.
As shown in fig. 7, which shows a schematic flowchart of another implementation procedure of a pose determination method according to an embodiment of the present application, the method according to this embodiment may include:
s701, obtaining a current first image of the first electronic device.
The first electronic equipment can control the virtual scene output by the second electronic equipment.
The first electronic device is provided with a plurality of target objects.
In this application, at least four target objects are included in the first image.
S702, at least three first target objects for pose calculation and at least one second target object for pose verification are determined from the at least four target objects of the first image.
For example, at least three target objects may be randomly selected from the plurality of target objects of the first image as the first target object for calculating the pose. Correspondingly, all or part of the target objects except the first target object in the first image can be used as second target objects for pose verification to obtain at least one second target object.
Optionally, at least three adjacent first target objects may be selected according to the relative position relationship of the at least four target objects in the first image, so as to reduce the difficulty in subsequently matching the first target object group.
S703, determining a first target object group and a second target object group included in the first image based on the position relationship model and the relative position relationship between the target objects in the first image.
The position relationship model at least can represent a three-dimensional distribution of the target objects in the first electronic device, which can be referred to in the related description above.
Wherein the first target object group includes: at least three target objects in the first electronic device that match the at least three first target objects. That is, the first target object group included in the first image is each target object in the first electronic device that matches at least three first target objects. The number of the target objects in the first target object group is the same as the number of the at least three first target objects.
For example, taking obtaining three first target objects as an example, based on the relative position relationship of the three target objects in the first image, it is determined that the arrangement rule of the three target objects in the first image is consistent with the arrangement rule of the target object 1, the target object 2, and the target object 3 in the first electronic device, and then it may be determined that the target object group included in the first image is the target object 1, the target object 2, and the target object 3 in the first electronic device.
Similarly, the second target object group includes at least one target object in the first electronic device that matches the at least one second target object.
The first target object group has a first relative position relationship.
It is understood that after the first target object group and the second target object group included in the first image are determined, three-dimensional coordinate information of each target object in the first target object group and the second target object group can be obtained from the position relationship model.
S704, at least one group of first pose information of the first electronic device currently relative to the second electronic device is determined based on at least the first relative position relationship of each target object in the first target object group and the three-dimensional coordinate information of each target object in the first image.
For example, at least one set of first pose information is determined based on the first relative position relationship of each target object in the first target object group and the three-dimensional coordinate information corresponding to each target object in the first target object group included in the first image.
The way of calculating the first pose information in this embodiment is similar to the previous one.
It is understood that, in determining the pose information based on the two-dimensional coordinates and the three-dimensional coordinates of a plurality of object points, if the number of object points is only three, a plurality of sets of pose information may be determined. For example, when the pose information is calculated based on the PNP algorithm, if there are three feature points (corresponding to the first target objects in the first target object group in this embodiment), N takes a value of 3, that is, up to 4 sets of pose information can be determined by using the P3P algorithm; when the number of feature points is more than three, a group of pose information can be uniquely determined. Thus, in the case that the first target object group contains at least three target objects, at least one set of first pose information can be determined.
S705, projection verification is carried out on the at least one group of first pose information based on each target object in the second target object group and the position relation model, and correct pose information in the at least one group of first pose information is determined according to a projection verification result.
During projection verification, for each piece of first pose information, a projection position point on the first image is determined, according to that first pose information of the first electronic device and the position relationship model, for each target object in the first electronic device that belongs to the second target object group, and it is then compared whether the projection position point of each target object in the second target object group is consistent with the actual position of the corresponding target object in the first image.
It can be understood that, for a certain first pose information, if the projection location point of each target object belonging to the second target object group in the first electronic device is consistent with the location point of each target object in the second target object group contained in the first image, the first pose information is correct; otherwise, the first posture information is considered to be erroneous. Based on this, correct pose information can be determined from the calculated at least one set of first pose information, and the verified correct pose information can be determined as pose information of the first electronic device relative to the second electronic device.
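The selection among candidate poses can be sketched as below. The pose names, the tolerance value, and the use of a maximum reprojection error are assumptions for illustration, not the patent's prescribed criterion:

```python
# Hypothetical sketch of the verification step: among several candidate
# poses (e.g. the up-to-four P3P solutions), keep the one whose projections
# of the verification targets land close to where those targets were
# actually detected in the first image.
import math

def reprojection_error(projected, detected):
    """Worst-case pixel distance between predicted and detected points."""
    return max(math.dist(p, d) for p, d in zip(projected, detected))

def pick_pose(candidates, detected, tol=2.0):
    """candidates: {pose_id: [projected verification points]}."""
    for pose_id, projected in candidates.items():
        if reprojection_error(projected, detected) <= tol:
            return pose_id
    return None  # no candidate verified -> re-determine the groups

# Assumed example: detections of the two verification targets.
detected = [(120.0, 80.0), (150.0, 82.0)]
candidates = {
    "pose_a": [(300.0, 10.0), (340.0, 15.0)],   # inconsistent P3P solution
    "pose_b": [(120.4, 79.8), (149.7, 82.3)],   # consistent solution
}
print(pick_pose(candidates, detected))  # -> pose_b
```

Returning `None` corresponds to the case where no first pose information verifies, triggering re-determination of the target object groups as described later.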
As can be seen, in this embodiment, in addition to determining the first target object group included in the first electronic device for calculating the pose, a second target object group for pose verification is also determined, and on this basis, after determining multiple sets of first pose information based on the first target object group, the multiple sets of first pose information are also subjected to projection verification by using the second target object group.
It is understood that, if the mutual positional relationship between any three target objects in the first electronic device is not repeated with the mutual positional relationship between at least three other target objects, in the case that the first image includes at least four target objects, the first target object group matching with at least three first target objects in the first image in the first electronic device may be uniquely determined based on at least three first target objects in the first image.
For example, if the relative position relationships between any one target object in the first electronic device and the other target objects are all different, the first target object group in the first electronic device matching the first relative position relationship of the at least three first target objects can be directly and accurately determined based on the first relative position relationship of the at least three first target objects in the first image. For example, referring to fig. 5 and fig. 6, based on the first image shown in fig. 6, it can be uniquely determined which target objects in the first electronic device are contained in the first image.
However, if there is a repetition of the relative positional relationship between different target objects in the first electronic device, a plurality of matching first target object groups may be determined from the first electronic device based on the relative positional relationship of at least three first target objects in the first image.
For example, taking the case that the first electronic device is a ring-shaped handle, the target objects arranged on the first electronic device are shown in fig. 2 and fig. 3. As can be seen from fig. 2 and fig. 3, a plurality of large LED lamps (or full-bright LEDs) and a plurality of small LED lamps (or half-bright LEDs) are disposed on the first electronic device, and four small LED lamps surround each large LED lamp, so that the arrangement of the LEDs around any one large LED lamp is the same; therefore, it cannot be uniquely determined, based on the first image, which LED lamps on the electronic device constitute the first target object group. Fig. 8 shows a first image of the electronic device of fig. 2. As can be seen from comparing fig. 1, 2 and 8, according to the relative position relationship between the LED lamps shown in fig. 8, there are many possibilities for the matching LED lamp group on the first electronic device.
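This ambiguity can be made concrete with a short sketch. The group names and signature tuples below are assumptions; the point is only that a repeated local arrangement yields several matching candidates, which is why the verification step is needed:

```python
# Sketch: when several LED groups on the device share the same local
# arrangement, arrangement matching alone returns multiple candidate
# groups rather than a unique one.
def candidate_groups(obs_sig, layout_sigs, tol=1e-6):
    """Return every stored group whose arrangement signature matches."""
    return [name for name, sig in layout_sigs.items()
            if len(sig) == len(obs_sig)
            and all(abs(a - b) < tol for a, b in zip(sig, obs_sig))]

# Assumed signatures: every large LED is surrounded by four small LEDs in
# the same way, so two groups share one signature.
layout_sigs = {
    "big_led_1": (0.5, 0.5, 1.0),
    "big_led_2": (0.5, 0.5, 1.0),
    "big_led_3": (0.7, 0.9, 1.0),
}
print(candidate_groups((0.5, 0.5, 1.0), layout_sigs))
# -> ['big_led_1', 'big_led_2']
```

Each candidate would then be tried in turn: solve for the pose, verify by projection, and discard the candidates that fail.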
In a case where a plurality of candidate first target object groups may exist in the first electronic device, the above steps S704 and S705 may be performed each time after a first target object group possibly contained in the first image is determined. It can be understood that, if the currently determined first target object group contained in the first image is wrong, at least one first target object actually contained in the first image does not correspond to a target object in that first target object group in the electronic device. On this basis, the at least one group of first pose information obtained based on that first target object group can be rejected by performing projection verification with the second target object group, so that the first target object group and the second target object group can be determined again.
Correspondingly, if it is determined according to the projection verification result that correct pose information does not exist in the at least one group of first pose information, the method returns to step S703 to re-determine the first target object group and the second target object group actually contained in the first image from the plurality of target objects of the first electronic device.
In a possible implementation manner, in any one of the above embodiments of the present application, if the number of target objects in the first image is less than a set number, third pose information determined by the inertial measurement device in the first electronic device is obtained, and the third pose information is determined as pose information of the first electronic device with respect to the second electronic device.
The inertial measurement unit may include a gyroscope and other devices capable of determining the pose.
For example, the set number may be four, and if the number of target objects in the first image is less than four, the pose information of the first electronic device with respect to the second electronic device may not be determined using an algorithm for calculating two-dimensional to three-dimensional pose information such as PNP.
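The fallback described above amounts to a simple branch. The function and variable names below are assumptions for illustration; the patent only specifies the condition (fewer targets than the set number) and the source of the fallback pose (the inertial measurement device):

```python
# Illustrative fallback logic: when fewer than the set number of targets
# (here assumed to be four) are visible in the first image, use the pose
# reported by the inertial measurement unit instead of a PNP-style solve.
MIN_TARGETS = 4  # assumed set number

def choose_pose(image_targets, pnp_pose, imu_pose):
    """Return the vision-based pose when enough targets are visible,
    otherwise fall back to the IMU-derived third pose information."""
    if len(image_targets) >= MIN_TARGETS:
        return pnp_pose
    return imu_pose

print(choose_pose([(1, 1)] * 3, "pnp_pose", "imu_pose"))  # -> imu_pose
print(choose_pose([(1, 1)] * 5, "pnp_pose", "imu_pose"))  # -> pnp_pose
```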
It can be understood that, in any of the above embodiments of the present application, two images may also be obtained simultaneously, and pose information is determined based on the two images, respectively.
Specifically, the present application may further obtain a current second image of the first electronic device. For example, the second electronic device may be provided with two image capturing devices, which then capture the first image and the second image, respectively. Of course, the second image and the first image may be acquired by the same image acquisition device, and the second image may be an image of a frame immediately before the first image.
On this basis, fourth pose information of the first electronic device relative to the second electronic device may be determined based on at least a second relative position relationship between the target objects in the second image and the position relationship model. For ease of distinction, the pose information of the first electronic device relative to the second electronic device determined based on the second image is referred to as fourth pose information.
Accordingly, input control data for the second electronic device can be determined based on the first and fourth pose information to improve accuracy of input control.
The application also provides a pose determining apparatus corresponding to the pose determining method. As shown in fig. 9, which shows a schematic view of a composition structure of a pose determination apparatus according to the present application, the apparatus of this embodiment may include:
an image obtaining unit 901, configured to obtain a current first image of a first electronic device, where the first electronic device is capable of controlling a virtual scene output by a second electronic device, and a plurality of target objects are set on the first electronic device;
a first pose determination unit 902, configured to determine first pose information of the first electronic device currently relative to the second electronic device based on at least a relative positional relationship between target objects in the first image and a positional relationship model created based on three-dimensional coordinate information of each target object of the first electronic device.
In one possible implementation, the first pose determination unit includes:
the group positioning subunit is configured to determine a target object group included in the first image based on a relative positional relationship between target objects in the first image and the positional relationship model, where each target object in the target object group has a first relative positional relationship, and the positional relationship model at least can represent a three-dimensional distribution of each target object in the first electronic device;
and the first posture determining subunit is used for determining first posture information of the first electronic equipment relative to the second electronic equipment currently at least based on the first relative position relationship and the three-dimensional coordinate information of each target object in the first image.
Optionally, the first pose determining subunit is specifically configured to determine, based on at least the first relative position relationship and the three-dimensional coordinate information of each target object in the first image, first pose information of the first electronic device currently relative to the second electronic device by using a perspective projection algorithm.
Optionally, the apparatus may further include:
and the second pose determining unit is used for determining projection position points of the remaining target objects in the first electronic device on the first image based on at least the position relationship model and the first pose information, and determining second pose information of the first electronic device relative to the second electronic device according to the projection position points and the three-dimensional coordinate information corresponding to the remaining target objects.
In yet another possible implementation manner, the first image obtained by the image obtaining unit includes at least four target objects;
the set of positioning subunits comprising:
an object classification subunit, configured to determine, from the at least four target objects of the first image, at least three first target objects for pose calculation and at least one second target object for pose verification;
a group determination subunit, configured to determine, based on the positional relationship model and a relative positional relationship between target objects in the first image, a first target object group and a second target object group included in the first image, where the first target object group includes at least three target objects in the first electronic device that match the at least three first target objects, the second target object group includes at least one target object in the first electronic device that matches the at least one second target object, and the first target object group has a first relative positional relationship;
the first pose determining subunit is specifically configured to determine, based on at least a first relative position relationship of each target object in the first target object group and three-dimensional coordinate information of each target object in the first image, at least one group of first pose information of the first electronic device currently relative to the second electronic device;
the apparatus may further include:
and the pose verification unit is used for performing projection verification on the at least one group of first pose information based on each target object in the second target object group and the position relation model, and determining correct pose information in the at least one group of first pose information according to a projection verification result.
Optionally, the apparatus may further include:
and the group re-determination unit is used for returning to execute the group determination subunit operation to re-determine the first target object group and the second target object group contained in the first image from the plurality of target objects of the first electronic equipment if the fact that correct pose information does not exist in the at least one group of first pose information is determined according to the projection verification result.
In yet another possible implementation manner, in an embodiment of any one of the above apparatuses, the apparatus may further include:
and the third pose determining unit is used for obtaining third pose information determined by the inertial measurement device in the first electronic equipment if the number of the target objects in the first image is less than a set number, and determining the third pose information as pose information of the first electronic equipment relative to the second electronic equipment.
In yet another possible implementation manner, in an embodiment of any one of the above apparatuses, the apparatus may further include:
the auxiliary image obtaining unit is used for obtaining a current second image of the first electronic equipment;
a fourth pose determination unit, configured to determine fourth pose information of the first electronic device with respect to the second electronic device currently based on at least the second relative positional relationship between the target objects in the second image and the positional relationship model;
and the input control unit is used for determining input control data of the second electronic equipment based on the first posture information and the fourth posture information.
In another aspect, the present application further provides an electronic device, which may be the specific structure of the aforementioned second electronic device. As shown in fig. 10, which shows a schematic view of a composition structure of an electronic device according to the present application, the electronic device of the present embodiment at least includes: a processor 1001 and a memory 1002.
Wherein the processor is configured to execute the pose determination method according to any one of the above embodiments.
The memory is also used for storing programs needed by the processor to perform operations.
It will be appreciated that the electronic device may also include other components; as shown in fig. 10, these include a display 1003, an input device 1004, and a communication bus 1005. The processor, the memory, the display and the input device may be connected by the communication bus.
Of course, the electronic device may also include more or less components than those shown in fig. 10, which is not limited in this regard.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A pose determination method, comprising:
the method comprises the steps of obtaining a current first image of first electronic equipment, wherein the first electronic equipment can control a virtual scene output by second electronic equipment, and a plurality of target objects are arranged on the first electronic equipment;
determining first pose information of the first electronic device relative to the second electronic device at present based on at least a relative position relationship between target objects in the first image and a position relationship model created based on three-dimensional coordinate information of the target objects of the first electronic device.
2. The method of claim 1, the determining first pose information of the first electronic device with respect to the second electronic device at present based on at least a relative positional relationship between target objects in the first image and a positional relationship model created based on three-dimensional coordinate information of the target objects of the first electronic device, comprising:
determining, based on the relative position relationship between the target objects in the first image and the position relationship model, a target object group contained in the first image, wherein each target object in the target object group has a first relative position relationship, and the position relationship model at least can represent the three-dimensional distribution of each target object in the first electronic equipment;
and determining first position and orientation information of the first electronic equipment relative to the second electronic equipment currently at least based on the first relative position relationship and the three-dimensional coordinate information of each target object in the first image.
3. The method of claim 2, further comprising:
determining projection position points of the remaining target objects in the first electronic equipment on the first image at least based on the position relationship model and the first pose information, and determining second pose information of the first electronic equipment relative to the second electronic equipment according to the projection position points and the three-dimensional coordinate information corresponding to the remaining target objects.
4. The method of claim 2 or 3, the first image comprising at least four target objects;
the determining a target object group contained in the first image based on the relative positional relationship between the target objects in the first image and the positional relationship model includes:
determining at least three first target objects for pose calculation and at least one second target object for pose verification from among at least four target objects of the first image;
determining a first target object group and a second target object group contained in the first image based on the positional relationship model and a relative positional relationship between target objects in the first image, wherein the first target object group comprises at least three target objects matched with the at least three first target objects in the first electronic device, the second target object group comprises at least one target object matched with the at least one second target object in the first electronic device, and the first target object group has a first relative positional relationship;
the determining, based on at least the first relative positional relationship and three-dimensional coordinate information of each target object in the first image, first pose information of the first electronic device with respect to the second electronic device includes:
determining at least one set of first pose information of the first electronic device relative to the second electronic device at present based on at least a first relative position relationship of each target object in the first target object group and three-dimensional coordinate information of each target object in the first image;
the method further comprises the following steps:
and performing projection verification on the at least one set of first pose information based on each target object in the second target object group and the positional relationship model, and determining correct pose information in the at least one set of first pose information according to a projection verification result.
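The projection-verification step of claim 4 can be sketched as: project the verification targets (the second target object group) with each candidate pose, and keep the candidate whose reprojection error stays below a pixel threshold. The threshold and all names here are illustrative assumptions, not values from the patent.

```python
import numpy as np

def reprojection_error(R, t, K, model_pts, image_pts):
    """Largest pixel distance between projected model points and their detections."""
    cam = model_pts @ R.T + t
    uv = cam @ K.T
    proj = uv[:, :2] / uv[:, 2:3]
    return np.max(np.linalg.norm(proj - image_pts, axis=1))

def verify_poses(candidates, K, verify_model_pts, verify_image_pts, thresh_px=3.0):
    """Keep the candidate pose (R, t) that projects the verification targets
    closest to their detected positions; return None if no candidate passes."""
    best, best_err = None, thresh_px
    for R, t in candidates:
        err = reprojection_error(R, t, K, verify_model_pts, verify_image_pts)
        if err < best_err:
            best, best_err = (R, t), err
    return best
```

Returning None corresponds to the "no correct pose information" branch handled by claim 5.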
5. The method of claim 4, further comprising:
and if it is determined according to the projection verification result that no correct pose information exists in the at least one set of first pose information, returning to the operation of determining the first target object group and the second target object group contained in the first image, so as to re-determine the first target object group and the second target object group from the plurality of target objects of the first electronic device.
6. The method of claim 2, wherein the determining first pose information of the first electronic device relative to the second electronic device at present based on at least the first relative positional relationship and the three-dimensional coordinate information of each target object in the first image comprises:
and determining first pose information of the first electronic device relative to the second electronic device at present using a perspective projection algorithm based on at least the first relative positional relationship and the three-dimensional coordinate information of each target object in the first image.
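The "perspective projection algorithm" of claim 6 corresponds to what computer vision calls a Perspective-n-Point (PnP) solve. Below is a minimal Direct Linear Transform sketch, assuming known camera intrinsics K and at least six non-coplanar correspondences; a production system would typically use a robust solver such as OpenCV's solvePnP, often with RANSAC.

```python
import numpy as np

def pnp_dlt(model_pts, image_pts, K):
    """Estimate the camera pose (R, t) relative to the target-object model from
    n >= 6 non-coplanar 3D-2D correspondences via the Direct Linear Transform."""
    model_pts = np.asarray(model_pts, dtype=float)
    A = []
    for (X, Y, Z), (u, v) in zip(model_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The projection matrix P is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    M = np.linalg.inv(K) @ P           # M is [R | t] up to scale and sign
    M /= np.linalg.norm(M[2, :3])      # rotation rows must have unit norm
    if M[2, :3] @ model_pts[0] + M[2, 3] < 0:
        M = -M                         # keep the target in front of the camera
    R, t = M[:, :3], M[:, 3]
    U, _, Vt2 = np.linalg.svd(R)       # re-orthonormalize R against detection noise
    return U @ Vt2, t
```

With only the three to four markers of claim 4, a minimal P3P solver (which yields the multiple candidate solutions that claim 4 then verifies) would be used instead of this n >= 6 DLT.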
7. The method of claim 1, further comprising:
if the number of target objects in the first image is less than a set number, obtaining third pose information determined by an inertial measurement unit in the first electronic device, and determining the third pose information as the pose information of the first electronic device relative to the second electronic device.
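Claim 7's fallback is a simple pose-source selection; a sketch in which the names and the threshold of four targets are illustrative assumptions:

```python
def select_pose_source(optical_pose, num_detected_targets, imu_pose, min_targets=4):
    """Use the optically solved pose when enough target objects were detected in
    the image; otherwise fall back to the inertial-measurement-unit pose."""
    if num_detected_targets < min_targets:
        return imu_pose
    return optical_pose
```

In practice the IMU pose would be obtained by integrating gyroscope and accelerometer readings from the last optically solved pose, so it drifts and is only a short-term substitute.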
8. The method of claim 1, further comprising:
obtaining a current second image of the first electronic device;
determining fourth pose information of the first electronic device relative to the second electronic device at present based on at least a second relative positional relationship between target objects in the second image and the positional relationship model;
and determining input control data for the second electronic device based on the first pose information and the fourth pose information.
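Claim 8 implies deriving input control data from the motion between the first and fourth poses. A sketch of the relative-motion computation, representing each pose as a rotation matrix and translation vector (an assumption; the patent does not fix a parameterization):

```python
import numpy as np

def pose_delta(R1, t1, R2, t2):
    """Relative motion between two controller poses (both expressed relative to
    the second electronic device), returned in the first pose's local frame."""
    dR = R1.T @ R2           # incremental rotation
    dt = R1.T @ (t2 - t1)    # incremental translation
    return dR, dt
```

The resulting (dR, dt) could then be mapped to pointer movement or object manipulation in the virtual scene output by the second electronic device.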
9. A pose determination apparatus comprising:
an image obtaining unit configured to obtain a current first image of a first electronic device, wherein the first electronic device is capable of controlling a virtual scene output by a second electronic device, and a plurality of target objects are arranged on the first electronic device;
a pose determination unit configured to determine first pose information of the first electronic device with respect to the second electronic device at present based on at least a relative positional relationship between target objects in the first image and a positional relationship model created based on three-dimensional coordinate information of the target objects of the first electronic device.
10. An electronic device, comprising:
a memory and a processor;
the processor is configured to execute the pose determination method according to any one of claims 1 to 8;
the memory is configured to store a program needed by the processor to perform the operations.
CN202010768009.2A 2020-08-03 2020-08-03 Pose determining method and device and electronic equipment Pending CN111897432A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010768009.2A CN111897432A (en) 2020-08-03 2020-08-03 Pose determining method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111897432A true CN111897432A (en) 2020-11-06

Family

ID=73183559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010768009.2A Pending CN111897432A (en) 2020-08-03 2020-08-03 Pose determining method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111897432A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114636386A (en) * 2022-02-28 2022-06-17 浙江时空道宇科技有限公司 Angle measuring method, device, system and computer readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5388059A (en) * 1992-12-30 1995-02-07 University Of Maryland Computer vision system for accurate monitoring of object pose
JP2018171456A (en) * 2018-05-17 2018-11-08 グリー株式会社 Program, information processing device, and control method
CN111427452A (en) * 2020-03-27 2020-07-17 海信视像科技股份有限公司 Controller tracking method and VR system


Similar Documents

Publication Publication Date Title
CN107223269B (en) Three-dimensional scene positioning method and device
US9467682B2 (en) Information processing apparatus and method
CN110782492B (en) Pose tracking method and device
CN110163903A (en) The acquisition of 3-D image and image position method, device, equipment and storage medium
CN109816730A (en) Workpiece grabbing method, apparatus, computer equipment and storage medium
US20210374978A1 (en) Capturing environmental scans using anchor objects for registration
JP3611239B2 (en) Three-dimensional CG model creation device and recording medium on which processing program is recorded
CN110688002A (en) Virtual content adjusting method and device, terminal equipment and storage medium
McIlroy et al. Kinectrack: 3d pose estimation using a projected dense dot pattern
CN111897432A (en) Pose determining method and device and electronic equipment
JP7439410B2 (en) Image processing device, image processing method and program
US11758100B2 (en) Portable projection mapping device and projection mapping system
CN111771227A (en) Program, system, electronic device, and method for recognizing three-dimensional object
CN114147725B (en) Zero point adjustment method, device and equipment for robot and storage medium
JP2012216981A (en) Calibration method for stereo camera and information processing device
CN114926542A (en) Mixed reality fixed reference system calibration method based on optical positioning system
CN111047710B (en) Virtual reality system, interactive device display method, and computer-readable storage medium
CN114419162A (en) Optical finger capturing mark positioning method
CN115082520A (en) Positioning tracking method and device, terminal equipment and computer readable storage medium
CN110598605B (en) Positioning method, positioning device, terminal equipment and storage medium
CN110471577B (en) 360-degree omnibearing virtual touch control method, system, platform and storage medium
CN113961068A (en) Close-distance real object eye movement interaction method based on augmented reality helmet
US20240012238A1 (en) Tracking apparatus, method, and non-transitory computer readable storage medium thereof
CN111176445B (en) Interactive device identification method, terminal equipment and readable storage medium
US20230154162A1 (en) Method For Generating Training Data Used To Learn Machine Learning Model, System, And Non-Transitory Computer-Readable Storage Medium Storing Computer Program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination