CN106599893B - Processing method and device for object deviating from recognition graph based on augmented reality


Info

Publication number
CN106599893B
Authority
CN
China
Prior art keywords
recognition
axis
identification
map
controlling
Prior art date
Legal status
Active
Application number
CN201611196616.6A
Other languages
Chinese (zh)
Other versions
CN106599893A (en)
Inventor
王娜
郑文和
刘颖
李霞
肖帅帅
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University
Priority to CN201611196616.6A
Publication of CN106599893A
Application granted
Publication of CN106599893B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/24: Aligning, centring, orientation detection or correction of the image
    • G06V10/242: Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20: Indexing scheme for editing of 3D models
    • G06T2219/2004: Aligning objects, relative positioning of parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a processing method and device, based on augmented reality, for use after an object deviates from its recognition map. The method comprises: after a K object in a K recognition map and an S object in an S recognition map have deviated, acquiring the positions of the K recognition map and the S recognition map, wherein the position of the K recognition map is unchanged; and, based on the position of the K recognition map and the position of the S recognition map, controlling the S object to face the K recognition map and controlling the K object to face the S recognition map. In the invention, after the objects have deviated, the S object is controlled to face the K recognition map and the K object is controlled to face the S recognition map. Because the position of the K recognition map does not change, even if the position of the S recognition map changes and the S object and K object deviate, the deviation of the S object can be reduced by controlling it to face the K recognition map; likewise, the deviation of the K object is reduced by controlling it to face the S recognition map, thereby correcting the deviation.

Description

Processing method and device for object deviating from recognition graph based on augmented reality
Technical Field
The invention relates to the field of augmented reality, and in particular to a processing method and device for use after an augmented-reality-based object deviates from its recognition map.
Background
Augmented Reality (AR) is a technology that calculates the position and angle of the camera image in real time and adds corresponding images, videos and 3D models, with the aim of overlaying the virtual world on the real world on screen and enabling interaction between the two.
However, a 3D model (object) in an AR application has the following problem: when the model (object) is made to face another model (object), its position deviates from the recognition map where it is located. The causes of the deviation are as follows:
Assume there are two recognition maps, a K recognition map (K.ImageTarget) and an S recognition map (S.ImageTarget), each with an object under it: a K object (K.plane) and an S object (S.plane), respectively. The K object and the S object are models in their recognition maps, as shown in FIG. 1.
In applications built on Vuforia, an AR engine, the Transform of the recognition map that appears first is unchanged, while the Transform of a recognition map that appears later automatically shows its position relative to the first recognition map.
In FIG. 1, assume the K recognition map appears first; it is represented by the coordinate system on the right of FIG. 1, with the K object as a child object under it. The S recognition map appears later and is represented by the coordinate system on the left of FIG. 1, with the S object as a child object under it. The s object can be made to face the k object through a preset function, and the k object can likewise be made to face the s object, where "facing" means that the front of the k object faces the front of the s object; the coordinate axis corresponding to the front may differ between models. Here we assume that an object's front is the face pointed to by the object's own Z axis.
When the S recognition map moves up and down, the Position and Rotation parameters in the Transform of the S object change, because the S object is a child object of the S recognition map. The k object is facing the s object at this moment; since the k object is a child object of the K recognition map, its position never changes, but it keeps facing the target s object by rotating, so the coordinate axes of the k object rotate. This in turn affects the s object, increasing the degree to which the s object deviates from its recognition map. The two objects affect each other until both finally drift from their recognition maps to uncontrollable positions, producing a large deviation.
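For concreteness, this feedback loop can be reproduced with a short Unity C# script. The following is a minimal illustrative sketch, not code from the patent; the class and field names are hypothetical, and each object is assumed to be a child of its recognition map as in FIG. 1:

using UnityEngine;

// Hypothetical reproduction of the problem described above: two objects,
// each a child of its own recognition map (ImageTarget), call LookAt on
// each other every frame.
public class MutualLookAtDrift : MonoBehaviour
{
    public Transform kObject; // child of the K recognition map
    public Transform sObject; // child of the S recognition map

    void Update()
    {
        // When the S recognition map moves, the s object's world pose
        // changes, so the k object rotates to track it; that rotation in
        // turn changes what the s object tracks. The two rotations feed
        // back on each other and both objects drift off their maps.
        kObject.LookAt(sObject);
        sObject.LookAt(kObject);
    }
}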
However, the prior art does not provide a technical solution that effectively solves the problem of an object deviating from its recognition map.
Disclosure of Invention
The invention mainly aims to provide a processing method for use after an augmented-reality-based object deviates from its recognition map, so as to solve the prior-art technical problem that the deviation grows particularly large as an object drifts from its recognition map to an uncontrollable position.
In order to achieve the above object, a first aspect of the present invention provides a processing method for use after an augmented-reality-based object deviates from a recognition map, the method comprising:
after a K object in a K recognition map and an S object in an S recognition map have deviated, acquiring the positions of the K recognition map and the S recognition map, wherein the position of the K recognition map is unchanged;
and controlling the S object to face the K recognition map and controlling the K object to face the S recognition map based on the position of the K recognition map and the position of the S recognition map.
In order to achieve the above object, a second aspect of the present invention provides a processing apparatus for use after an augmented-reality-based object deviates from a recognition map, the apparatus comprising:
an acquisition module, configured to acquire the positions of a K recognition map and an S recognition map after a K object in the K recognition map and an S object in the S recognition map have deviated, wherein the position of the K recognition map is unchanged;
and a control module, configured to control the S object to face the K recognition map and control the K object to face the S recognition map based on the position of the K recognition map and the position of the S recognition map.
The invention provides a processing method, based on augmented reality, for use after an object deviates from its recognition map, the method comprising: after a K object in a K recognition map and an S object in an S recognition map have deviated, acquiring the positions of the K recognition map and the S recognition map, wherein the position of the K recognition map is unchanged; and, based on the position of the K recognition map and the position of the S recognition map, controlling the S object to face the K recognition map and controlling the K object to face the S recognition map. In the invention, after the objects have deviated, the S object is controlled to face the K recognition map and the K object is controlled to face the S recognition map. Because the position of the K recognition map does not change, even if the position of the S recognition map changes and the S object and K object deviate, the deviation of the S object can be reduced by controlling it to face the K recognition map; likewise, the deviation of the K object is reduced by controlling it to face the S recognition map, thereby correcting the deviation.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of recognition maps and the objects in them in the prior art;
FIG. 2 is a flowchart of a processing method after an augmented-reality-based object deviates from a recognition map according to a first embodiment of the present invention;
FIG. 3 is a flowchart of a processing method after an augmented-reality-based object deviates from a recognition map according to a second embodiment of the present invention;
FIG. 4 is a schematic flowchart of the refinement of step 303 in the second embodiment;
FIG. 5 is a schematic view of the projection direction in an embodiment of the present invention;
FIG. 6 is a schematic view of an embodiment of the present invention after the correction is completed;
FIG. 7 is a schematic view of an embodiment of the present invention with the recognition maps placed at a right angle;
FIG. 8 is a diagram of the functional modules of a processing apparatus after an augmented-reality-based object deviates from a recognition map according to a third embodiment of the present invention;
FIG. 9 is a diagram of the functional modules of a processing apparatus after an augmented-reality-based object deviates from a recognition map according to a fourth embodiment of the present invention;
FIG. 10 is a schematic diagram of the detailed functional modules of the second control module 903 in the fourth embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention clearer and easier to understand, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort fall within the protection scope of the present invention.
In the prior art, a 3D object (model) in an augmented reality application deviates from its recognition map when it is made to face another object, and the deviation can become particularly large.
To solve this technical problem, the invention provides a processing method for use after an augmented-reality-based object deviates from its recognition map. Based on this method, after the objects have deviated, the S object is controlled to face the K recognition map and the K object is controlled to face the S recognition map. Because the position of the K recognition map is unchanged, even if the position of the S recognition map changes and the S object and K object deviate, the deviation of the S object can be reduced by controlling it to face the K recognition map; likewise, the deviation of the K object is reduced by controlling it to face the S recognition map, thereby correcting the deviation.
Referring to FIG. 2, a flowchart of a processing method after an augmented-reality-based object deviates from a recognition map in a first embodiment of the present invention is shown, where the method includes:
step 201, after a K object in a K recognition map and an S object in an S recognition map have deviated, acquiring the positions of the K recognition map and the S recognition map, wherein the position of the K recognition map is unchanged;
step 202, based on the position of the K recognition map and the position of the S recognition map, controlling the S object to face the K recognition map and controlling the K object to face the S recognition map.
In the embodiment of the present invention, in an augmented reality application, the recognition maps must first be scanned, and the recognition map scanned first appears first; usually, the Transform of the recognition map scanned first is unchanged.
The recognition map is a recognition-map prefab provided by Vuforia and is the carrier on which a 3D model is recognized and displayed; the Transform is the prefab's component of position, rotation and scale parameters, and every object has a Transform component. That is, the Transform represents the position of the recognition map.
The position of the recognition map scanned first is invariable, and a recognition map obtained by a later scan displays its position relative to the first recognition map. For ease of understanding, in the embodiment of the present invention, the recognition map scanned first is referred to as the K recognition map, with the K object in it, and the recognition map scanned later is referred to as the S recognition map, with the S object in it. It is understood that K and S are only used to distinguish two different recognition maps and do not limit their content or function; the same applies to the K object and the S object.
It can be understood that the embodiment of the present invention is described using only the K object and the S object in the K recognition map and the S recognition map as an example; in practical applications, if objects in other recognition maps deviate, the technical solution in the embodiment of the present invention may also be used for correction.
In an embodiment of the present invention, the processing method after the augmented-reality-based object deviates from the recognition map may be implemented by a corresponding processing device. Specifically, the processing device may be a processor; if the method is implemented on Unity, the processor is specifically the Unity engine core.
After the K object in the K recognition map and the S object in the S recognition map have deviated, the processing device acquires the positions of the K recognition map and the S recognition map, where the K recognition map is the recognition map scanned first and its position is unchanged and fixed. It should be noted that although the position of the K recognition map is fixed, the K object in the K recognition map has deviated, and the position of the K object has changed.
The processing device then controls the S object to face the K recognition map and controls the K object to face the S recognition map based on the position of the K recognition map and the position of the S recognition map.
In the embodiment of the invention, after the objects have deviated, the S object is controlled to face the K recognition map and the K object is controlled to face the S recognition map. Because the position of the K recognition map does not change, even if the position of the S recognition map changes and the S object and K object deviate, the deviation of the S object can be reduced by controlling it to face the K recognition map; likewise, the deviation of the K object is reduced by controlling it to face the S recognition map, thereby correcting the deviation.
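As an illustration of steps 201 and 202, the first embodiment can be sketched in Unity C# as follows. This is a minimal sketch under hypothetical names, not the patent's own source code; the essential change from the problem scenario above is that each object now targets the other recognition map, whose pose does not depend on the other object's rotation:

using UnityEngine;

// Sketch of the first embodiment: face the recognition maps, not the objects.
public class FaceRecognitionMaps : MonoBehaviour
{
    public Transform kImageTarget; // K recognition map (position fixed)
    public Transform sImageTarget; // S recognition map (position relative to K)
    public Transform kObject;      // child of kImageTarget
    public Transform sObject;      // child of sImageTarget

    void Update()
    {
        // Step 201: read the maps' positions from their Transforms.
        Vector3 kMapPosition = kImageTarget.position;
        Vector3 sMapPosition = sImageTarget.position;

        // Step 202: point each object's Z axis at the other map's origin.
        // Because kImageTarget never moves, the mutual feedback loop of the
        // naive object-to-object LookAt is broken. (The default worldUp of
        // Vector3.up is used here; the second embodiment refines this.)
        sObject.LookAt(kMapPosition);
        kObject.LookAt(sMapPosition);
    }
}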
Referring to FIG. 3, a processing method after an augmented-reality-based object deviates from a recognition map in a second embodiment of the present invention is shown, where the method includes:
step 301, after a K object in a K recognition map and an S object in an S recognition map have deviated, placing the K recognition map and the S recognition map on the same plane;
step 302, controlling the Z axis of the S object to point to the coordinate origin of the K recognition map, and controlling the Z axis of the K object to point to the coordinate origin of the S recognition map;
step 303, controlling the S object and the K object to rotate until the Y axis of the S object is consistent with the direction of a first worldUp parameter determined based on the Y-axis direction of the K recognition map, and the Y axis of the K object is consistent with the direction of a second worldUp parameter determined based on the Y-axis direction of the S recognition map.
In the embodiment of the present invention, after the K object in the K recognition map and the S object in the S recognition map have deviated, the K recognition map and the S recognition map are placed on the same plane. It can be understood that if the two recognition maps are on the same plane in the real world, their coordinates are also on the same plane in the world coordinate system in the software. Placing the K recognition map and the S recognition map on the same plane is performed manually, since both recognition maps are physical images in the real world; the processing device can bring this about by instructing the user to perform the operation.
The processing device points the Z axis of the S object at the coordinate origin of the K recognition map and points the Z axis of the K object at the coordinate origin of the S recognition map. It also rotates the S object and the K object until the Y axis of the S object is consistent with the direction of a first worldUp parameter determined based on the Y-axis direction of the K recognition map, and the Y axis of the K object is consistent with the direction of a second worldUp parameter determined based on the Y-axis direction of the S recognition map.
Specifically, please refer to FIG. 4, which is a flowchart of the refinement of step 303 in the second embodiment, where the refinement of step 303 includes:
step 401, acquiring the Y-axis direction of the K recognition map and the Y-axis direction of the S recognition map;
step 402, calculating a first projection direction of the Y-axis direction of the K recognition map on the XY-axis plane of the S object, and calculating a second projection direction of the Y-axis direction of the S recognition map on the XY-axis plane of the K object, wherein the first projection direction is the direction of the first worldUp parameter and the second projection direction is the direction of the second worldUp parameter;
step 403, controlling the Y axis of the s object to rotate to the same position as the first projection direction, and controlling the Y axis of the k object to rotate to the same position as the second projection direction.
In the embodiment of the invention, the processing device acquires the Y-axis direction of the K recognition map and the Y-axis direction of the S recognition map, calculates a first projection direction of the Y-axis direction of the K recognition map on the XY-axis plane of the S object and a second projection direction of the Y-axis direction of the S recognition map on the XY-axis plane of the K object, and then controls the Y axis of the S object to rotate to the same position as the first projection direction and the Y axis of the K object to rotate to the same position as the second projection direction. The first projection direction is the parameter passed in when the LookAt function is called to rotate the S object, and the second projection direction is the parameter passed in when the LookAt function is called to rotate the K object.
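Since the plane formed by an object's X and Y axes has the object's Z axis as its normal, steps 401 to 403 can be sketched with Unity's Vector3.ProjectOnPlane, which computes u - (u·n)n for a unit normal n. The following is an assumed implementation for illustration, with hypothetical names, not the patent's source:

using UnityEngine;

// Sketch of steps 401-403: each recognition map's Y axis, projected onto
// the other object's XY plane, becomes the worldUp argument of LookAt.
public class ProjectedWorldUpCorrection : MonoBehaviour
{
    public Transform kImageTarget, sImageTarget; // the two recognition maps
    public Transform kObject, sObject;           // their child objects

    void Update()
    {
        // Step 401: Y-axis directions of the two recognition maps.
        Vector3 kMapUp = kImageTarget.up;
        Vector3 sMapUp = sImageTarget.up;

        // Step 402: project each map's Y axis onto the other object's XY
        // plane; the plane's normal is that object's Z axis (forward).
        Vector3 firstWorldUp = Vector3.ProjectOnPlane(kMapUp, sObject.forward);
        Vector3 secondWorldUp = Vector3.ProjectOnPlane(sMapUp, kObject.forward);

        // Step 403: LookAt points each Z axis at the other map's origin,
        // then rotates the object until its Y axis matches the projection.
        sObject.LookAt(kImageTarget, firstWorldUp);
        kObject.LookAt(sImageTarget, secondWorldUp);
    }
}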
It should be noted that, in the embodiment of the present invention, when the S object is controlled to face the K recognition map and the K object is controlled to face the S recognition map, the LookAt function needs to be called, where the LookAt function is defined as follows:
void LookAt(Transform target, Vector3 worldUp = Vector3.up)
The LookAt function does the following:
1. It rotates the object so that the Z axis of the object faces the position of the target object.
2. It continues rotating the object itself so that the Y axis of the object points the same way as the worldUp parameter (the parameter passed into the LookAt function); if the worldUp parameter is omitted, the function uses the Y axis of world-space coordinates. Only if the new forward direction (Z axis) of the object is perpendicular to the worldUp parameter will the rotated up vector exactly match the worldUp direction.
In general the two vectors do not point in exactly the same direction; rather, after the Z axis of the current object points at the coordinate origin of the target object, the Y axis of the current object aligns with the projection of the worldUp parameter on the plane formed by the X and Y axes of the current object. FIG. 5 is a schematic view of the projection direction in an embodiment of the invention: the s object faces the K recognition map, the Z axis of the s object points at the coordinate origin of the K recognition map, and the Y-axis direction of the s object is then consistent with the projection of the Y axis of the K recognition map on the plane formed by the X and Y axes of the s object.
Referring to FIG. 6, which is a schematic diagram after the correction is completed in the embodiment of the present invention, the K recognition map and the S recognition map are placed on the same plane. Because the position of the K recognition map never changes, its coordinate axes never change either, and the position of the S recognition map is a position relative to the K recognition map, so the Y axes of the two coordinate systems point in the same direction in world space. As long as the LookAt function is called continuously to make the S object face the K recognition map, the S object slowly rotates itself until the Y axis of its coordinate system is consistent with the projection of the Y axis of the K recognition map's coordinate system on the plane formed by the X and Y axes of the S object; and because the Y axes of the S recognition map and the K recognition map point the same way, the S object is restored to its original state relative to the S recognition map. Likewise, the k object continuously calls the LookAt function to face the S recognition map, so the deviation of the k object is also corrected. Finally, the k object and the s object return to their original planes, and the object deviation is corrected.
In the embodiment of the present invention, when correcting a k object and an s object that have deviated, the s object faces not the k object but the K recognition map where the k object is located, with the worldUp parameter set to the up direction of the K recognition map, such as its Y-axis direction; the k object faces the S recognition map, with the worldUp parameter set to the up direction of the S recognition map, such as its Y-axis direction. This is expressed as follows:
k.player.LookAt(S.Imagetarget.transform, S.Imagetarget.transform.up)
s.player.LookAt(K.Imagetarget.transform, k.Imagetarget.transform.up)
Based on this principle, the embodiment of the invention corrects the deviated s object and k object.
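Combined into a per-frame loop, the two expressions above give an end-to-end sketch of the correction. The class and field names below are hypothetical; because, as described above, LookAt itself aligns the Y axis with the projection of the worldUp argument, each recognition map's up vector can be passed directly:

using UnityEngine;

// Per-frame correction following the two expressions above: each object
// faces the other recognition map, with that map's up direction as the
// worldUp hint of LookAt.
public class DeviationCorrector : MonoBehaviour
{
    public Transform kImageTarget, sImageTarget; // the two recognition maps
    public Transform kObject, sObject;           // their child objects

    void Update()
    {
        // Calling LookAt every frame makes each object rotate gradually
        // until its Y axis matches the projection of the target map's Y
        // axis on its own XY plane; with both maps coplanar, both objects
        // settle back onto their original planes (FIG. 6).
        kObject.LookAt(sImageTarget, sImageTarget.up);
        sObject.LookAt(kImageTarget, kImageTarget.up);
    }
}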
It should be noted that, in the embodiment of the present invention, placing the K recognition map and the S recognition map on the same plane achieves complete correction of the K object and the S object, i.e. no deviation remains; this is the preferred technical solution in the embodiment of the present invention. When the K recognition map and the S recognition map are not placed on the same plane, controlling the K object to face the S recognition map and the S object to face the K recognition map still effectively reduces the deviation of the K object and the S object, achieving the purpose of reducing object deviation. For example, refer to FIG. 7, a schematic diagram of an embodiment of the present invention with the recognition maps placed at a right angle: in FIG. 7 the K recognition map and the S recognition map are placed at a right angle, and the s object and the k object are not orthogonal to the maps but face the K recognition map and the S recognition map, respectively, at a certain inclination angle.
In the embodiment of the invention, after the K object in the K recognition map and the S object in the S recognition map have deviated, the K recognition map and the S recognition map are placed on the same plane; the Z axis of the S object is controlled to point to the coordinate origin of the K recognition map and the Z axis of the K object is controlled to point to the coordinate origin of the S recognition map; and the Y-axis direction of the S object is controlled to be consistent with the projection of the Y axis of the K recognition map on the plane formed by the X and Y axes of the S object, while the Y-axis direction of the K object is controlled to be consistent with the projection of the Y axis of the S recognition map on the plane formed by the X and Y axes of the K object. In this way the S object points at the K recognition map and the K object points at the S recognition map; because the position of the K recognition map does not change, even if the S object and the k object deviate due to a change in the position of the S recognition map, the deviation can be corrected in the above manner.
It should be noted that the first or second embodiment can be applied to scenes in which objects (models) in augmented reality interact with each other, or to projects in which an object needs to leave its recognition map. If an object deviates during interaction because the project requires it, the object can be returned to its recognition map by the first or second embodiment without resetting the scene, so the data in the scene is retained to the greatest extent and the process looks more natural and stable.
It should also be noted that the technical solution in the embodiments of the present invention can be applied to the field of games, for example games developed on Unity, a multi-platform, fully integrated professional game development engine created by Unity Technologies.
Referring to FIG. 8, a schematic diagram of the functional modules of a processing apparatus after an augmented-reality-based object deviates from a recognition map according to a third embodiment of the present invention is shown, where the processing apparatus includes:
an obtaining module 801, configured to obtain the positions of a K recognition map and an S recognition map after a K object in the K recognition map and an S object in the S recognition map have deviated, where the position of the K recognition map is unchanged;
a control module 802, configured to control the S object to face the K recognition map and control the K object to face the S recognition map based on the position of the K recognition map and the position of the S recognition map.
In the embodiment of the present invention, in an augmented reality application, the recognition maps must first be scanned, and the recognition map scanned first appears first; usually, the Transform of the recognition map scanned first is unchanged.
The recognition map is a recognition-map prefab provided by Vuforia and is the carrier on which a 3D model is recognized and displayed; the Transform is the prefab's component of position, rotation and scale parameters, and every object has a Transform component. That is, the Transform represents the position of the recognition map.
The position of the recognition map scanned first is invariable, and a recognition map obtained by a later scan displays its position relative to the first recognition map. For ease of understanding, in the embodiment of the present invention, the recognition map scanned first is referred to as the K recognition map, with the K object in it, and the recognition map scanned later is referred to as the S recognition map, with the S object in it. It is understood that K and S are only used to distinguish two different recognition maps and do not limit their content or function; the same applies to the K object and the S object.
It can be understood that the embodiment of the present invention is described using only the K object and the S object in the K recognition map and the S recognition map as an example; in practical applications, if objects in other recognition maps deviate, the technical solution in the embodiment of the present invention may also be used for correction.
In an embodiment of the present invention, the processing method after the augmented-reality-based object deviates from the recognition map may be implemented by a corresponding processing device. Specifically, the processing device may be a processor; if the method is implemented on Unity, the processor is specifically the Unity engine core.
After the K object in the K recognition map and the S object in the S recognition map have deviated, the obtaining module 801 obtains the positions of the K recognition map and the S recognition map, where the K recognition map is the recognition map scanned first and its position is fixed. It should be noted that although the position of the K recognition map is fixed, the K object in the K recognition map has deviated, and the position of the K object has changed.
The control module 802 controls the S object to face the K recognition map and controls the K object to face the S recognition map based on the position of the K recognition map and the position of the S recognition map.
In the embodiment of the invention, after the objects have deviated, the S object is controlled to face the K recognition map and the K object is controlled to face the S recognition map. Because the position of the K recognition map does not change, even if the position of the S recognition map changes and the S object and K object deviate, the deviation of the S object can be reduced by controlling it to face the K recognition map; likewise, the deviation of the K object is reduced by controlling it to face the S recognition map, thereby correcting the deviation.
Please refer to FIG. 9, which is a schematic diagram of the functional modules of a processing apparatus after an augmented-reality-based object deviates from a recognition map according to a fourth embodiment of the present invention. The processing apparatus includes the obtaining module 801 and the control module 802 of the third embodiment; their contents are similar to those described in the third embodiment and are not repeated here.
In an embodiment of the present invention, the processing apparatus further includes:
a placing module 901, configured to place the K recognition map and the S recognition map on the same plane before the obtaining module acquires the positions.
In an embodiment of the present invention, the control module 802 includes:
a first control module 902, configured to control the Z axis of the S object to point to the coordinate origin of the K recognition map and control the Z axis of the K object to point to the coordinate origin of the S recognition map, where the position includes the coordinate origin;
a second control module 903, configured to control the S object and the K object to rotate until the Y axis of the S object is consistent with the direction of a first worldUp parameter determined based on the Y-axis direction of the K recognition map, and the Y axis of the K object is consistent with the direction of a second worldUp parameter determined based on the Y-axis direction of the S recognition map.
In the embodiment of the present invention, after the K object in the K recognition map and the S object in the S recognition map have deviated, the placing module 901 places the K recognition map and the S recognition map on the same plane. It can be understood that if the two recognition maps are on the same plane in the real world, their coordinates are also on the same plane in the world coordinate system in the software. The placing module 901 can bring this about by instructing the user to perform the placement.
The first control module 902 points the Z axis of the S object at the coordinate origin of the K recognition map and points the Z axis of the K object at the coordinate origin of the S recognition map, and the second control module 903 rotates the S object and the K object until the Y axis of the S object is consistent with the direction of the first worldUp parameter determined based on the Y-axis direction of the K recognition map, and the Y axis of the K object is consistent with the direction of the second worldUp parameter determined based on the Y-axis direction of the S recognition map.
Specifically, please refer to FIG. 10, which is a schematic diagram of the detailed functional modules of the second control module 903 according to the fourth embodiment of the present invention, where the second control module 903 includes:
a direction obtaining module 1001, configured to obtain the Y-axis direction of the K recognition map and the Y-axis direction of the S recognition map;
a calculating module 1002, configured to calculate a first projection direction of the Y-axis direction of the K recognition map on the XY-axis plane of the S object and a second projection direction of the Y-axis direction of the S recognition map on the XY-axis plane of the K object, wherein the first projection direction is the direction of the first worldUp parameter and the second projection direction is the direction of the second worldUp parameter;
a third control module 1003, configured to control the Y axis of the s object to rotate to the same position as the first projection direction, and control the Y axis of the k object to rotate to the same position as the second projection direction.
In this embodiment of the present invention, the direction obtaining module 1001 obtains the Y-axis direction of the K recognition map and the Y-axis direction of the S recognition map; the calculating module 1002 calculates a first projection direction of the Y-axis direction of the K recognition map on the XY-axis plane of the S object and a second projection direction of the Y-axis direction of the S recognition map on the XY-axis plane of the K object; and the third control module 1003 controls the Y axis of the S object to rotate to the same position as the first projection direction and the Y axis of the K object to rotate to the same position as the second projection direction. The first projection direction is the parameter passed in when the LookAt function is called to rotate the S object, and the second projection direction is the parameter passed in when the LookAt function is called to rotate the K object.
It should be noted that, in the embodiment of the present invention, when the S object is controlled to face the K recognition map and the K object is controlled to face the S recognition map, the LookAt function needs to be called, where the LookAt function is defined as follows:
void LookAt(Transform target, Vector3 worldUp = Vector3.up)
The LookAt function does the following:
1. It rotates the object so that the Z axis of the object faces the position of the target object.
2. It continues rotating the object itself so that the Y axis of the object points the same way as the worldUp parameter (the parameter passed into the LookAt function); if the worldUp parameter is omitted, the function uses the Y axis of world-space coordinates. Only if the new forward direction (Z axis) of the object is perpendicular to the worldUp parameter will the rotated up vector exactly match the worldUp direction.
In general the two vectors do not point in exactly the same direction; rather, after the Z axis of the current object points at the coordinate origin of the target object, the Y axis of the current object aligns with the projection of the worldUp parameter on the plane formed by the X and Y axes of the current object. FIG. 5 is a schematic view of the projection direction in an embodiment of the invention: the s object faces the K recognition map, the Z axis of the s object points at the coordinate origin of the K recognition map, and the Y-axis direction of the s object is then consistent with the projection of the Y axis of the K recognition map on the plane formed by the X and Y axes of the s object.
Referring to FIG. 6, which is a schematic diagram after the correction is completed in the embodiment of the present invention, the K recognition map and the S recognition map are placed on the same plane. Because the position of the K recognition map never changes, its coordinate axes never change either, and the position of the S recognition map is a position relative to the K recognition map, so the Y axes of the two coordinate systems point in the same direction in world space. As long as the LookAt function is called continuously to make the S object face the K recognition map, the S object slowly rotates itself until the Y axis of its coordinate system is consistent with the projection of the Y axis of the K recognition map's coordinate system on the plane formed by the X and Y axes of the S object; and because the Y axes of the S recognition map and the K recognition map point the same way, the S object is restored to its original state relative to the S recognition map. Likewise, the k object continuously calls the LookAt function to face the S recognition map, so the deviation of the k object is also corrected. Finally, the k object and the s object return to their original planes, and the object deviation is corrected.
In the embodiment of the present invention, when correcting a k object and an s object that have deviated, the s object faces not the k object but the K recognition map where the k object is located, with the worldUp parameter set to the up direction of the K recognition map; the k object faces the S recognition map, with the worldUp parameter set to the up direction of the S recognition map. This is expressed as follows:
k.player.LookAt(S.Imagetarget.transform, S.Imagetarget.transform.up)
s.player.LookAt(K.Imagetarget.transform, k.Imagetarget.transform.up)
Based on this principle, the embodiment of the invention corrects the deviated s object and k object.
It should be noted that, in the embodiment of the present invention, placing the K recognition map and the S recognition map on the same plane achieves complete correction of the K object and the S object, i.e. no deviation remains; this is the preferred technical solution in the embodiment of the present invention. When the K recognition map and the S recognition map are not placed on the same plane, controlling the K object to face the S recognition map and the S object to face the K recognition map still effectively reduces the deviation of the K object and the S object, achieving the purpose of reducing object deviation. For example, refer to FIG. 7, a schematic diagram of an embodiment of the present invention with the recognition maps placed at a right angle: in FIG. 7 the K recognition map and the S recognition map are placed at a right angle, and the s object and the k object are not orthogonal to the maps but face the K recognition map and the S recognition map, respectively, at a certain inclination angle.
In the embodiment of the invention, after the K object in the K recognition map and the S object in the S recognition map have deviated, the K recognition map and the S recognition map are placed on the same plane; the Z axis of the S object is controlled to point to the coordinate origin of the K recognition map and the Z axis of the K object is controlled to point to the coordinate origin of the S recognition map; and the Y-axis direction of the S object is controlled to be consistent with the projection of the Y axis of the K recognition map on the plane formed by the X and Y axes of the S object, while the Y-axis direction of the K object is controlled to be consistent with the projection of the Y axis of the S recognition map on the plane formed by the X and Y axes of the K object. In this way the S object points at the K recognition map and the K object points at the S recognition map; because the position of the K recognition map does not change, even if the S object and the k object deviate due to a change in the position of the S recognition map, the deviation can be corrected in the above manner.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present invention is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no acts or modules are necessarily required of the invention.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
Finally, for a person skilled in the art there will be variations in specific implementation and application scope based on the ideas of the embodiments of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (6)

1. A processing method for use after an augmented-reality-based object deviates from a recognition map, characterized by comprising the following steps:
after a K object in a K recognition map and an S object in an S recognition map have deviated, acquiring the positions of the K recognition map and the S recognition map, wherein the position of the K recognition map is unchanged;
controlling the S object to face the K recognition map and controlling the K object to face the S recognition map based on the position of the K recognition map and the position of the S recognition map;
wherein the controlling the S object to face the K recognition map and the controlling the K object to face the S recognition map based on the position of the K recognition map and the position of the S recognition map comprises:
controlling the Z axis of the S object to point to the coordinate origin of the K recognition map, and controlling the Z axis of the K object to point to the coordinate origin of the S recognition map, wherein the position includes the coordinate origin;
controlling the S object and the K object to rotate until the Y axis of the S object is consistent with the direction of a first worldUp parameter determined based on the Y-axis direction of the K recognition map, and the Y axis of the K object is consistent with the direction of a second worldUp parameter determined based on the Y-axis direction of the S recognition map;
wherein the first worldUp parameter is a first projection direction of the Y-axis direction of the K recognition map on the XY-axis plane of the S object, and the second worldUp parameter is a second projection direction of the Y-axis direction of the S recognition map on the XY-axis plane of the K object.
2. The method of claim 1, wherein before the step of acquiring the positions of the K recognition map and the S recognition map, the method further comprises:
placing the K recognition map and the S recognition map on the same plane.
3. The method according to claim 1, wherein the step of controlling the S object and the K object to rotate until the Y axis of the S object is consistent with the direction of the first worldUp parameter determined based on the Y-axis direction of the K recognition map and the Y axis of the K object is consistent with the direction of the second worldUp parameter determined based on the Y-axis direction of the S recognition map comprises:
acquiring the Y-axis direction of the K recognition map and the Y-axis direction of the S recognition map;
calculating a first projection direction of the Y-axis direction of the K recognition map on the XY-axis plane of the S object, and calculating a second projection direction of the Y-axis direction of the S recognition map on the XY-axis plane of the K object, wherein the first projection direction is the direction of the first worldUp parameter and the second projection direction is the direction of the second worldUp parameter;
controlling the Y axis of the s object to rotate to the same position as the first projection direction, and controlling the Y axis of the k object to rotate to the same position as the second projection direction.
4. A processing apparatus for use after an augmented-reality-based object deviates from a recognition map, the apparatus comprising:
an acquisition module, configured to acquire the positions of a K recognition map and an S recognition map after a K object in the K recognition map and an S object in the S recognition map have deviated, wherein the position of the K recognition map is unchanged;
a control module, configured to control the S object to face the K recognition map and control the K object to face the S recognition map based on the position of the K recognition map and the position of the S recognition map;
wherein the control module includes:
a first control module, configured to control the Z axis of the S object to point to the coordinate origin of the K recognition map and control the Z axis of the K object to point to the coordinate origin of the S recognition map, wherein the position includes the coordinate origin;
a second control module, configured to control the S object and the K object to rotate until the Y axis of the S object is consistent with the direction of a first worldUp parameter determined based on the Y-axis direction of the K recognition map, and the Y axis of the K object is consistent with the direction of a second worldUp parameter determined based on the Y-axis direction of the S recognition map;
wherein the first worldUp parameter is a first projection direction of the Y-axis direction of the K recognition map on the XY-axis plane of the S object, and the second worldUp parameter is a second projection direction of the Y-axis direction of the S recognition map on the XY-axis plane of the K object.
5. The apparatus of claim 4, further comprising:
a placing module, configured to place the K recognition map and the S recognition map on the same plane before the acquisition module acquires the positions.
6. The apparatus of claim 4, wherein the second control module comprises:
a direction obtaining module, configured to obtain the Y-axis direction of the K recognition map and the Y-axis direction of the S recognition map;
a calculating module, configured to calculate a first projection direction of the Y-axis direction of the K recognition map on the XY-axis plane of the S object and a second projection direction of the Y-axis direction of the S recognition map on the XY-axis plane of the K object, wherein the first projection direction is the direction of the first worldUp parameter and the second projection direction is the direction of the second worldUp parameter;
a third control module, configured to control the Y axis of the s object to rotate to the same position as the first projection direction and control the Y axis of the k object to rotate to the same position as the second projection direction.
CN201611196616.6A 2016-12-22 2016-12-22 Processing method and device for object deviating from recognition graph based on augmented reality Active CN106599893B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611196616.6A CN106599893B (en) 2016-12-22 2016-12-22 Processing method and device for object deviating from recognition graph based on augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611196616.6A CN106599893B (en) 2016-12-22 2016-12-22 Processing method and device for object deviating from recognition graph based on augmented reality

Publications (2)

Publication Number Publication Date
CN106599893A CN106599893A (en) 2017-04-26
CN106599893B 2020-01-24

Family

Family ID: 58600711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611196616.6A Active CN106599893B (en) 2016-12-22 2016-12-22 Processing method and device for object deviating from recognition graph based on augmented reality

Country Status (1)

Country Link
CN (1) CN106599893B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104160426A (en) * 2012-02-22 2014-11-19 Micronet Co., Ltd. Augmented reality image processing device and method
CN105264478A (en) * 2013-05-23 2016-01-20 Microsoft Technology Licensing, LLC Hologram anchoring and dynamic positioning
CN105934902A (en) * 2013-11-27 2016-09-07 Magic Leap, Inc. Virtual and augmented reality systems and methods

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8797353B2 (en) * 2010-02-12 2014-08-05 Samsung Electronics Co., Ltd. Augmented media message
KR102114618B1 (en) * 2014-01-16 2020-05-25 엘지전자 주식회사 Portable and method for controlling the same


Also Published As

Publication number Publication date
CN106599893A (en) 2017-04-26

Similar Documents

Publication Publication Date Title
CN107223269B (en) Three-dimensional scene positioning method and device
US8681179B2 (en) Method and system for coordinating collisions between augmented reality and real reality
US11074755B2 (en) Method, device, terminal device and storage medium for realizing augmented reality image
US20210035346A1 (en) Multi-Plane Model Animation Interaction Method, Apparatus And Device For Augmented Reality, And Storage Medium
CN111161422A (en) Model display method for enhancing virtual scene implementation
CN109887003A (en) A kind of method and apparatus initialized for carrying out three-dimensional tracking
US20190130599A1 (en) Systems and methods for determining when to provide eye contact from an avatar to a user viewing a virtual environment
CN108961423B (en) Virtual information processing method, device, equipment and storage medium
US10769811B2 (en) Space coordinate converting server and method thereof
CN108430032B (en) Method and equipment for realizing position sharing of VR/AR equipment
CN112882576B (en) AR interaction method and device, electronic equipment and storage medium
CN111161398B (en) Image generation method, device, equipment and storage medium
WO2017113729A1 (en) 360-degree image loading method and loading module, and mobile terminal
CN113470112A (en) Image processing method, image processing device, storage medium and terminal
CN111569414A (en) Flight display method and device of virtual aircraft, electronic equipment and storage medium
CN115187729A (en) Three-dimensional model generation method, device, equipment and storage medium
CN116437061A (en) Demonstration image laser projection method, device, computer equipment and storage medium
CN111899349A (en) Model presentation method and device, electronic equipment and computer storage medium
CN106599893B (en) Processing method and device for object deviating from recognition graph based on augmented reality
CN113223137B (en) Generation method and device of perspective projection human face point cloud image and electronic equipment
CN112862981B (en) Method and apparatus for presenting a virtual representation, computer device and storage medium
CN112416218B (en) Virtual card display method and device, computer equipment and storage medium
CN114307145A (en) Picture display method, device, terminal, storage medium and program product
CN110827411B (en) Method, device, equipment and storage medium for displaying augmented reality model of self-adaptive environment
CN108845669B (en) AR/MR interaction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant