CN114546210B - Mapping method and augmented reality device
- Publication number: CN114546210B
- Application number: CN202210108384.3A
- Authority: CN (China)
- Prior art keywords: information, target, entity object, target entity, edge
- Prior art date
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
Abstract
The application discloses a mapping method and an augmented reality device. The method comprises the following steps: if a target entity object is detected, setting identification information on the target entity object; obtaining, based on the identification information, vector information of a specified operation performed on the target entity object; and mapping the vector information to a target virtual object associated with the target entity object, so as to process the target virtual object.
Description
Technical Field
The application relates to the technical field of information processing, and in particular to a mapping method and an augmented reality (AR) device.
Background
When operating a virtual object, a user cannot control the virtual object both flexibly and precisely.
Disclosure of Invention
The embodiments of the application provide a mapping method and an AR device.
The technical solutions provided by the embodiments of the application are as follows.
An embodiment of the application provides a mapping method, which comprises the following steps:
if a target entity object is detected, setting identification information on the target entity object;
obtaining, based on the identification information, vector information of a specified operation performed on the target entity object;
and mapping the vector information to a target virtual object associated with the target entity object, so as to process the target virtual object.
In some embodiments, the setting identification information on the target entity object includes:
acquiring shape structure information of the target entity object;
acquiring start point position information of the specified operation performed on the target entity object;
and setting the identification information on the target entity object based on the shape structure information of the target entity object and the start point position information.
In some embodiments, the method further comprises:
obtaining a target edge and/or a target surface on the target entity object that meets a specified condition, based on the shape structure information of the target entity object;
and projecting prompt information onto the target edge and/or the target surface, so as to prompt a user to perform the specified operation on the target edge and/or the target surface.
In some embodiments, the setting the identification information on the target entity object based on the shape structure information of the target entity object and the start point position information includes:
obtaining feature information of a designated area of the target entity object based on the shape structure information of the target entity object; wherein the designated area includes at least one edge and/or surface of the target entity object;
determining identification category information based on the feature information of the designated area;
and setting the identification information corresponding to the identification category information in the designated area based on the start point position information.
In some embodiments, the identification category information includes at least one of length category information and angle category information, and the identification information includes at least one of unit length information of at least one dimension and unit angle information. The setting the identification information corresponding to the identification category information in the designated area based on the start point position information includes:
setting, in the designated area based on the start point position information, unit length information of at least one dimension corresponding to the length category information and/or unit angle information corresponding to the angle category information.
In some embodiments, the mapping the vector information to a target virtual object associated with the target entity object includes:
acquiring a correspondence between at least one edge and/or surface of the target entity object and at least one edge and/or surface of the target virtual object;
and mapping the vector information to the at least one edge and/or surface of the target virtual object based on the correspondence between the at least one edge and/or surface of the target entity object and the at least one edge and/or surface of the target virtual object, and on the trajectory information of the specified operation.
In some embodiments, the method further comprises:
acquiring shape structure information of the target virtual object and shape structure information of the target entity object;
and determining the correspondence between the at least one edge and/or surface of the target entity object and the at least one edge and/or surface of the target virtual object based on the correspondence between the shape structure information of the target virtual object and the shape structure information of the target entity object.
In some embodiments, the obtaining, based on the identification information, vector information of the specified operation performed on the target entity object includes:
if the specified operation performed on the surface of the target entity object is detected, obtaining trajectory information of the specified operation;
and quantizing the trajectory information based on the identification information to obtain the vector information.
In some embodiments, the method further comprises:
adjusting, based on the vector information, display parameters of at least part of at least one edge and/or at least part of at least one surface of the target virtual object.
The embodiments of the application also provide an AR device, which can implement the mapping method described in any one of the foregoing embodiments.
In the mapping method provided by the embodiments of the application, if a target entity object is detected, identification information is set on the target entity object, vector information of a specified operation performed on the target entity object is obtained based on the identification information, and the vector information is then mapped to a target virtual object associated with the target entity object so as to process the target virtual object. Because a specified operation performed on a physical target entity object is easier to control, and the vector information of such an operation is more accurate, higher-precision processing and control can be achieved. At the same time, adjusting at least one of the target entity object, the identification information and the specified operation allows the vector information to be adjusted flexibly; after the vector information is mapped to the target virtual object associated with the target entity object, the target virtual object can be adjusted flexibly and with precise control, thereby solving the technical problem in the related art that a virtual object cannot be flexibly controlled with high precision.
Drawings
Fig. 1 is a schematic diagram of a related art method of setting click buttons on a virtual object surface;
Fig. 2 is a schematic flow chart of a mapping method according to an embodiment of the present application;
Fig. 3 is a schematic flow chart of setting identification information on a target entity object according to an embodiment of the present application;
Fig. 4A is a schematic diagram of setting unit length information on a surface of a first target entity object according to an embodiment of the present application;
Fig. 4B is a schematic diagram of setting unit angle information on a surface of a second target entity object according to an embodiment of the present application;
Fig. 5 is a schematic flow chart of obtaining vector information according to an embodiment of the present application;
Fig. 6 is another schematic flow chart of the mapping method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In an actual AR scene, a user usually needs to zoom, translate, rotate or otherwise process and control a virtual object through gesture operations. However, gesture operations are not fine-grained enough: the user's arm or hand has no stable support while gesturing, and hand or arm shake caused by muscle soreness from prolonged gesturing further weakens their accuracy, which seriously affects the precision of gesture operations on the virtual object.
To solve the above problems, the related art provides the operation control scheme for the virtual object 1 shown in fig. 1. Fig. 1 is a schematic diagram of a related art method of setting click buttons on the surface of a virtual object. As shown in fig. 1, the AR device sets a first button 101, a second button 102, a third button 103, a fourth button 104, a fifth button 105, a sixth button 106 and a seventh button 107 at the corner points of the three-dimensional virtual object 1 through a user interface (UI), and a user can operate the virtual object 1 by selecting or dragging at least one of the seven buttons. However, this operation flow is cumbersome, and it cannot achieve precise, quantified pose changes of the virtual object 1 either.
Based on the above, the embodiments of the present application provide a mapping method, which may be implemented by a processor of an AR device, where the processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller and a microprocessor.
Fig. 2 is a flowchart illustrating a mapping method according to an embodiment of the present application. As shown in fig. 2, the method may include steps 201 to 203:
Step 201, if the target entity object is detected, setting identification information on the target entity object.
In one embodiment, the AR device may detect a plurality of entity objects, then select at least one entity object from the plurality of entity objects based on a user's selection or based on a preset entity object selection policy, and determine the selected at least one entity object as a target entity object.
In one embodiment, the target entity object may include a physical device object associated with the AR device, such as a control device for implementing AR function control that establishes a communication connection with the AR device.
In one embodiment, the target physical object may be an object independent of the AR device, such as a cup, a book, a stylus, etc. independent of the AR device.
In one embodiment, the identification information may include motion state information of the target entity object. For example, if a first target entity object is in a moving state, its identification information may indicate a moving object; if a second target entity object is in a static state, its identification information may indicate a static object.
In one embodiment, the identification information may include form information of the target entity object. For example, if a third target entity object is a three-dimensional object, its identification information may indicate a three-dimensional object; if a fourth target entity object is a two-dimensional planar object, its identification information may indicate a planar object.
In one embodiment, the identification information may include number information that the AR device sets for at least one surface and/or at least one edge of the target entity object. For example, where the target entity object is a cube, the AR device may identify the object after detecting it; once the object is determined to be a cube, the AR device may set numbers for at least two surfaces of the cube and/or for at least two edges of the cube. In this case, the identification information of the cube may include the numbers of at least two of its surfaces and/or the numbers of at least two of its edges.
In one embodiment, the identification information may further include number information or sequence information of the target entity object within a set of entity objects. For example, where the set includes at least two target entity objects, the identification information of each target entity object may be number information that the AR device sets based on the total number of objects in the set; illustratively, the number assigned to each target entity object is less than or equal to the total number of entity objects in the set. For example, where the set includes two target entity objects, the number of the first target entity object in the set may be 1 and the number of the second target entity object may be 2.
In one embodiment, setting the identification information on the target entity object may include projecting the identification information onto a surface of the target entity object; for example, the surface may include an inner surface and/or an outer surface of the object. Projecting the identification information onto the surface may mean converting the identification information into a visual representation such as figures or characters, and projecting the converted identification information onto the surface of the target entity object.
In one embodiment, the AR device may determine the identification information of the target entity object from at least two of its motion information, its form information, and its number or sequence information within the set of entity objects. For example, if the AR device detects the target entity object, identifies it as a movable three-dimensional object, and the set includes two target entity objects, the identification information set by the AR device for this object may be: movable three-dimensional object No. 1.
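As a purely illustrative sketch of how such combined identification information might be composed, consider the following; the attributes of the detected object (`is_moving`, `is_stereoscopic`, `surfaces`) are assumptions for illustration, not part of the disclosed method.

```python
from dataclasses import dataclass, field

@dataclass
class Identification:
    """Illustrative container for the identification information of step 201."""
    motion_state: str                    # "moving" or "static"
    form: str                            # "three-dimensional" or "planar"
    sequence_number: int                 # position of the object in the detected set
    surface_numbers: dict = field(default_factory=dict)  # surface name -> number

def build_identification(obj, detected_objects):
    """Combine at least two attributes, e.g. into 'movable three-dimensional object No. 1'."""
    return Identification(
        motion_state="moving" if obj.is_moving else "static",
        form="three-dimensional" if obj.is_stereoscopic else "planar",
        sequence_number=detected_objects.index(obj) + 1,
        surface_numbers={name: i + 1 for i, name in enumerate(obj.surfaces)},
    )
```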
Step 202, obtaining vector information of a specified operation performed on the target entity object based on the identification information.
In one embodiment, the specified operation may include at least one operation, such as at least one of a sliding operation performed on the surface of the target entity object, a clicking operation, a moving operation on the target entity object, and a pose adjustment operation on the target entity object.
In one embodiment, where there are at least two target entity objects, the specified operation may be a set of operations performed on each target entity object. For example, if a first operation and a second operation are performed on a first target entity object and a third operation and a fourth operation are performed on a second target entity object, the specified operation may include the first, second, third and fourth operations.
In one embodiment, the vector information may include direction information of the specified operation and/or span information over which the operation is performed. For example, the span information may include the distance between the start point position and the end point position of the specified operation.
In the embodiments of the present application, obtaining the vector information of the specified operation performed on the target entity object based on the identification information may be achieved in any one of the following modes (a minimal sketch follows these modes):
If it is detected that the user performs the specified operation on the target entity object, the motion state information of the target entity object is obtained, and the vector information of the specified operation is then determined based on the motion state information in the identification information and a correspondence between motion state information and vector information.
If it is detected that the user performs the specified operation on the target entity object, the form information of the target entity object is obtained, and the vector information of the specified operation is then determined based on the form information in the identification information and a correspondence between form information and vector information.
If it is detected that the user performs the specified operation on the target entity object, the surface and/or edge of the target entity object on which the specified operation lies is obtained, and the vector information of the specified operation is then determined based on the surface and/or edge in the identification information and a correspondence between surfaces and/or edges of the target entity object and vector information.
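A minimal sketch of the first mode, assuming a hypothetical correspondence table from motion state to a quantization scale; the table values and the 3-tuple point representation are illustrative assumptions, not the disclosed correspondence.

```python
import math

# Hypothetical scale per motion state; the real correspondence between motion
# state and vector information is application-defined (first mode above).
MOTION_STATE_SCALE = {"static": 1.0, "moving": 0.5}

def vector_of_operation(motion_state, start, end):
    """Return (unit direction, span) of a specified operation from its start/end 3-D points."""
    delta = tuple(e - s for s, e in zip(start, end))
    length = math.sqrt(sum(d * d for d in delta))
    if length == 0.0:
        return (0.0, 0.0, 0.0), 0.0
    unit = tuple(d / length for d in delta)
    return unit, length * MOTION_STATE_SCALE.get(motion_state, 1.0)
```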
Step 203, mapping the vector information to a target virtual object associated with the target entity object, so as to process the target virtual object.
In one embodiment, the target virtual object may be at least one virtual object presented by the AR device; illustratively, it may be a virtual object with a small visible area, or a virtual object at least part of which is occluded.
In one embodiment, the association between the target virtual object and the target entity object may be set by the user through the AR device when the target entity object is detected, may be set by the AR device according to at least one piece of information such as the user's operation requirements on the virtual object, operation habits, or the current AR scene, or may be determined by the AR device from a historical association between the target virtual object and the target entity object.
In one embodiment, mapping the vector information to the target virtual object associated with the target entity object, so as to process the target virtual object, may be accomplished in any of the following ways (a sketch follows):
If the vector information includes direction information, the direction information is obtained from the vector information and mapped to the target virtual object associated with the target entity object, so as to adjust the orientation of a designated surface of the target virtual object.
If the vector information includes span information, the span information is obtained from the vector information and mapped to the target virtual object associated with the target entity object, so as to adjust the moving distance of the target virtual object.
Alternatively, the information contained in the vector information is filtered based on the user's requirements for adjusting the target virtual object, and the filtered result is mapped to the target virtual object associated with the target entity object, so as to adjust the target virtual object.
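The following sketch illustrates this selective mapping under the assumption of a minimal `VirtualObject` with orientation and position fields; the field names and the `wanted` filter are illustrative, not the disclosed interface.

```python
class VirtualObject:
    """Minimal stand-in for an AR-rendered virtual object (assumed interface)."""
    def __init__(self):
        self.orientation = (0.0, 0.0, 1.0)   # facing direction of a designated surface
        self.position = (0.0, 0.0, 0.0)

def map_vector(obj, direction=None, span=None, wanted=("orientation", "translation")):
    """Apply only the vector-information components the adjustment requirements call for."""
    if direction is not None and "orientation" in wanted:
        obj.orientation = direction                       # first mode: adjust orientation
    if direction is not None and span is not None and "translation" in wanted:
        obj.position = tuple(p + span * d                 # second mode: move along direction
                             for p, d in zip(obj.position, direction))
```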
As can be seen from the above, in the mapping method provided by the embodiments of the present application, if a target entity object is detected, identification information is set on the target entity object, vector information of a specified operation performed on the target entity object is obtained based on the identification information, and the vector information is then mapped to a target virtual object associated with the target entity object so as to process the target virtual object. Because a specified operation performed on a physical target entity object is easier to control and its vector information is more accurate, higher-precision processing and control can be achieved; at the same time, adjusting at least one of the target entity object, the identification information and the specified operation allows the vector information to be adjusted flexibly, so that after the vector information is mapped to the associated target virtual object, the target virtual object can be adjusted flexibly and with precise control, solving the technical problem in the related art that a virtual object cannot be flexibly controlled with high precision.
Based on the foregoing embodiments, in the mapping method provided by the embodiments of the present application, the setting of the identification information on the target entity object may be implemented through the flow shown in fig. 3. Fig. 3 is a schematic flow chart of setting identification information on a target entity object according to an embodiment of the present application; as shown in fig. 3, the flow may include steps 301 to 303:
Step 301, obtaining shape structure information of a target entity object.
In one embodiment, the shape structure information of the target entity object may include shape information of at least one surface of the object, ratio information between the edges of the at least one surface, whether the object has a three-dimensional structure, overall structure information of the object, and the like. For example, the AR device may determine that the object has a three-dimensional overall structure as its shape structure information, may determine the shape information of at least one surface facing the AR device as its shape structure information, or may determine the intersection information and scale information between the edges of at least one surface facing the AR device as its shape structure information.
Step 302, obtaining start point position information of the specified operation performed on the target entity object.
In one embodiment, the start point position included in the start point position information may include any one of a surface of the target entity object, an edge of the target entity object, and a corner point of the target entity object.
In one embodiment, the starting point location information may be determined by the AR device identifying and tracking the specified operation.
In the embodiments of the present application, step 301 and step 302 may be executed in either order, or may be executed simultaneously; the embodiments of the present application do not limit this.
Step 303, setting identification information on the target entity object based on the shape structure information and the starting point position information of the target entity object.
In one embodiment, setting the identification information on the target entity object based on the shape structure information and the start point position of the target entity object may be achieved in either of the following modes (a sketch follows these modes):
If the shape structure information indicates that the target entity object has a two-dimensional planar structure, the relative positional relationship between the start point position and the edges of the object is acquired, and identification information is set on the edges based on that relationship. For example, if the target entity object is a ruler and the start point position is the scale origin of the ruler, the relative positions of the four edges of the ruler with respect to the scale origin are obtained; illustratively, the identification information of the first, longer edge closest to the scale origin is set to 1, that of the second edge perpendicular to the first edge and closer to the scale origin is set to 2, that of the third edge perpendicular to the first edge and farther from the scale origin is set to 3, and that of the fourth edge is set to 4.
If the shape structure information indicates that the target entity object has a three-dimensional structure, the relative positional relationship of the start point position with respect to the edges and/or surfaces of the object may be obtained, and identification information is then set on those edges and/or surfaces based on that relationship. For example, where the target entity object is a triangular pyramid, if the start point position is located on the side surface facing the AR device, the identification information of that side surface may be set to 1, that of the two adjacent side surfaces to 2 and 3, and that of the bottom surface to 4.
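A sketch of the numbering idea in these two modes, assuming each edge is represented by its midpoint as a 3-tuple: edges nearer the operation's start point receive smaller identification numbers, as in the ruler example.

```python
import math

def number_edges_by_start_point(edge_midpoints, start):
    """Number edges 1..n, nearest to the operation's start point first (cf. the ruler example)."""
    def distance(mid):
        return math.sqrt(sum((m - s) ** 2 for m, s in zip(mid, start)))
    ordered = sorted(edge_midpoints, key=distance)
    return [(mid, number) for number, mid in enumerate(ordered, start=1)]
```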
In one embodiment, the form in which the identification information is set may differ according to the edge and/or surface of the target entity object. For example, where the target entity object is a triangular pyramid, the identification information on its first side surface may be displayed as a dot array, while the identification information on its second and third side surfaces may be displayed as grids or lines.
As can be seen from the foregoing, in the mapping method provided by the embodiments of the present application, the identification information set on the target entity object is based on both the shape structure information of the object and the start point position information of the specified operation performed on it. Identification information set in this way not only carries the actual shape structure of the target entity object but is also closely tied to the start point of the user's specified operation, so the vector information determined from it can more accurately reflect how the specified operation is actually performed on the object, laying a foundation for precise control of the target virtual object.
Based on the foregoing embodiment, the mapping method provided in the embodiment of the present application may further include steps A1 to A2:
Step A1, obtaining a target edge and/or a target surface on the target entity object that meets a specified condition, based on the shape structure information of the target entity object.
In one embodiment, there may be multiple target edges, and there may be at least one target surface.
In one embodiment, the specified condition may include at least one of an edge and/or a surface being in a specified orientation, an edge length being greater than or equal to a specified length, a surface area being greater than or equal to a specified area, an edge being a first shape, a surface having a second shape, an edge color being a first color, a surface color being a second color, an edge brightness being a first brightness, and a surface brightness being a second brightness.
In one embodiment, obtaining the target edge and/or target surface meeting the specified condition based on the shape structure information of the target entity object may be achieved in either of the following ways (a filtering sketch follows):
The AR device identifies the edges and/or surfaces of the target entity object based on its shape structure information and its placement pose information, and determines the edges and/or surfaces of the object close to the AR device as the target edge and/or target surface.
The AR device identifies the edges of the target entity object close to the AR device and/or the surfaces facing the AR device to obtain an identification result; it then obtains the length of each edge and/or the area of each surface from the identification result, based on the shape structure information of the object and its distance from the AR device, and sets edges whose length is greater than or equal to a first threshold and/or surfaces whose area is greater than or equal to a second threshold as the target edge and/or target surface.
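A sketch of the second way, assuming edges and surfaces arrive as dicts with precomputed 'length' and 'area' fields; the thresholds correspond to the first and second thresholds above.

```python
def select_targets(edges, surfaces, first_threshold, second_threshold):
    """Keep edges/surfaces that meet the specified length/area conditions."""
    target_edges = [e for e in edges if e["length"] >= first_threshold]
    target_surfaces = [s for s in surfaces if s["area"] >= second_threshold]
    return target_edges, target_surfaces
```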
Step A2, projecting prompt information onto the target edge and/or the target surface, so as to prompt the user to perform the specified operation on the target edge and/or the target surface.
In one embodiment, the prompt information may include light of at least one color; for example, projecting the prompt information onto the target edge and/or the target surface means projecting light of at least one color onto them. The prompt information may also be text information.
As can be seen from the above, in the mapping method provided by the embodiments of the present application, the AR device can obtain the target edge and/or target surface on the target entity object that meets the specified condition based on the object's shape structure information, and can project prompt information onto them to prompt the user to perform the specified operation there. The AR device can thus flexibly and objectively guide the user according to the actual shape structure of the target entity object, which increases the probability that the user's specified operation is performed on the target edge and/or target surface and reduces the probability that the operation cannot be recognized.
Based on the foregoing embodiments, in the mapping method provided by the embodiments of the present application, setting the identification information on the target entity object based on its shape structure information and the start point position information may be implemented through steps B1 to B3:
Step B1, obtaining feature information of a designated area of the target entity object based on the shape structure information of the target entity object.
Here, the designated area comprises at least one edge and/or surface of the target entity object.
In one embodiment, the designated area may include an edge of the target entity object close to the AR device and/or a surface facing the AR device.
In one embodiment, the feature information of the designated area of the target entity object may be obtained from its shape structure information in either of the following modes:
If the shape structure information indicates that the target entity object is a two-dimensional planar object, at most two surfaces of the object and the common edge of those two surfaces may be determined as the designated area.
If the shape structure information indicates that the target entity object has a three-dimensional structure, each surface of the object identified by the AR device, together with the edges the AR device identifies on each surface, may be determined as the designated area.
In one embodiment, the feature information of the designated area may include at least one of: the number of edges constituting any one surface in the area, the relative position information between those edges, the length ratio information of those edges, and the shape information of the surfaces in the area.
In one embodiment, the feature information of the designated area may further include at least one of color information, brightness information and contrast information of the edges and/or surfaces in the area.
Step B2, determining identification category information based on the feature information of the designated area.
In one embodiment, the identification category information may include at least one of a surface identification category, an edge identification category, and a corner identification category.
In one embodiment, the identification category information may include the category of the presentation form of the identification information; by way of example, the presentation forms may include presenting the identification information as a grid, as dots or straight lines, as text, or as a drawing.
In one embodiment, the identification category information may be selected from a plurality of candidate categories based on a correspondence between feature information and identification category information, together with the feature information of the designated area.
In one embodiment, the AR device may determine the identification category information based on the feature information of the designated area through a selection or setting operation by the user.
Step B3, setting the identification information corresponding to the identification category information in the designated area based on the start point position information.
In the embodiments of the present application, this may be achieved in either of the following modes (a sketch follows):
Based on the start point position information, the distance between the start point of the specified operation and each edge and/or surface in the designated area is determined, and edge identification information and/or surface identification information corresponding to the edge and/or surface identification category is set on the edges and/or surfaces whose distance from the start point is less than or equal to a distance threshold.
Based on the start point position information, the edge and/or surface adjacent to the start point of the specified operation is determined from the edges and/or surfaces of the designated area, and edge identification information and/or surface identification information corresponding to the edge and/or surface identification category is set on that edge and/or surface.
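A sketch of the first mode, assuming each edge in the designated area is a dict with a 'midpoint' 3-tuple and a 'straight' flag; the per-edge category rule (unit length for straight edges, unit angle for curved ones) anticipates the modes described in the next embodiments.

```python
import math

def set_identification(region_edges, start, distance_threshold):
    """Mark edges whose distance to the start point is within the threshold.

    The category chosen per edge follows the straight/curved rule
    described in the embodiments that follow.
    """
    marked = []
    for edge in region_edges:
        d = math.sqrt(sum((m - s) ** 2 for m, s in zip(edge["midpoint"], start)))
        if d <= distance_threshold:
            category = "unit-length" if edge["straight"] else "unit-angle"
            marked.append((edge, category))
    return marked
```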
As can be seen from the foregoing, in the mapping method provided by the embodiments of the present application, the AR device can obtain the feature information of the designated area of the target entity object based on the object's shape structure information, determine the identification category information based on that feature information, and set the identification information corresponding to the identification category information in the designated area based on the start point position information. The AR device thus determines the identification information from both the shape structure of the target entity object and the start point of the specified operation, so the identification information more objectively reflects the object's shape structure and the actual execution state of the specified operation.
Based on the foregoing embodiments, the identification category information includes at least one of length category information and angle category information, and the identification information includes at least one of unit length information of at least one dimension and unit angle information.
In one embodiment, the length category information may represent coordinate category information or length scale category information used to determine a Euclidean distance.
In one embodiment, the angle category information may be angle scale information.
Accordingly, setting the identification information corresponding to the identification category information in the designated area based on the start point position information may be achieved as follows:
based on the start point position information, setting in the designated area unit length information of at least one dimension corresponding to the length category information and/or unit angle information corresponding to the angle category information.
In one embodiment, this may be achieved in any one of the following modes (a tick-generation sketch follows figs. 4A and 4B below):
Based on the start point position information, the edge on or near which the start point lies is obtained from the designated area; if that edge is a straight edge, unit length information is set on it. For example, on the surface of the target entity object the start point may lie on or near at least two edges, each of which is a straight edge; unit length information may then be set on each of them, yielding unit length information of at least two dimensions.
Based on the start point position information, the edge on or near which the start point lies is obtained from the designated area; if that edge is a curved edge, unit angle information is set on it. Illustratively, the origin of the unit angle information may be any point on the curved edge, or a point on the curved edge near the start point position.
Based on the start point position information, the corner point at which the start point lies is obtained from the designated area; the identification category information of at least two edges connected to that corner point is then obtained, and identification information corresponding to each edge's identification category is set on that edge.
Fig. 4A is a schematic diagram of setting unit length information on the surface of a first target entity object 401 according to an embodiment of the present application. As shown in fig. 4A, the first target entity object 401 is a regular cuboid, and its designated area facing the AR device includes at least a first side surface 4011. For example, the first start point of the specified operation on the first target entity object 401 may be the corner point 4012 on the first side surface 4011; accordingly, length scale information is set on the first edge 4013 and the second edge 4014 adjacent to the corner point 4012.
Fig. 4B is a schematic diagram of setting unit angle information on the surface of a second target entity object 402 according to an embodiment of the present application. As shown in fig. 4B, the second target entity object 402 is a regular cylinder, and its designated area facing the AR device includes a third side surface 4021 and a top surface 4022. For example, the second start point 4024 of the operation specified on the second target entity object 402 may be any point on the third edge 4023 of the top surface 4022; accordingly, angle scale information may be set along the third edge 4023 with the second start point 4024 as its origin.
The arrow directions in fig. 4A and 4B indicate the execution direction of the specified operation.
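A sketch of generating the scale marks of figs. 4A and 4B, assuming a straight edge is given by a corner point and a unit direction vector, and a circular top edge by its center and radius; all parameter names are illustrative.

```python
import math

def length_ticks(corner, direction, edge_length, unit):
    """Tick positions every `unit` along a straight edge starting at a corner (cf. fig. 4A)."""
    count = int(edge_length // unit)
    return [tuple(c + k * unit * d for c, d in zip(corner, direction))
            for k in range(count + 1)]

def angle_ticks(center, radius, start_angle_deg, unit_deg, count):
    """Tick positions every `unit_deg` along a circular top edge (cf. fig. 4B)."""
    return [(center[0] + radius * math.cos(math.radians(start_angle_deg + k * unit_deg)),
             center[1] + radius * math.sin(math.radians(start_angle_deg + k * unit_deg)))
            for k in range(count)]
```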
In setting the angle scale information or the length scale information, the embodiments of the present application therefore fully take into account the edge characteristics and the overall form of the target entity object, so the resulting scale information fits the object's actual edges and form, further improving its accuracy.
As can be seen from the foregoing, in the mapping method provided by the embodiments of the present application, the AR device can set, in the designated area of the target entity object, unit length information of at least one dimension corresponding to the length category and/or unit angle information, based on the start point position information of the specified operation. The unit length and/or unit angle information is thus closely related to the actual execution state of the specified operation and the actual form of the target entity object, which improves the accuracy of the vector information.
Based on the foregoing embodiments, in the mapping method provided by the embodiments of the present application, obtaining the vector information of the specified operation performed on the target entity object based on the identification information may be implemented through the flow shown in fig. 5. Fig. 5 is a schematic flow chart of obtaining vector information according to an embodiment of the present application; as shown in fig. 5, the flow may include steps 501 to 502:
Step 501, if a specified operation performed on the surface of the target entity object is detected, trajectory information of the specified operation is obtained.
In one embodiment, the trajectory information of the specified operation may include the straight or curved track formed by connecting the adjacent points the user touches or slides through while performing the specified operation on the surface of the target entity object.
In one embodiment, the trajectory information of the specified operation may include coordinate information of the line from the start point position of the specified operation to its end point position; this line may be a single straight line, or a combination of several straight segments lying in different planes.
In one embodiment, the trajectory information of the specified operation may be consistent with at least one characteristic of at least one edge of the target entity object; for example, the trajectory of a sliding operation performed along the second edge 4014 of the first target entity object 401 in fig. 4A follows the extending direction of the second edge 4014.
In one embodiment, the trajectory information of the specified operation may also be inconsistent with every feature of every edge of the target entity object; for example, a circular closed-curve trajectory on any surface of the first target entity object 401 shown in fig. 4A differs from the shape of every surface of that object.
In one embodiment, the trajectory information of the specified operation may be obtained by the AR device continuously tracking and identifying the specified operation on the surface of the target entity object.
Step 502, the trajectory information is quantized based on the identification information to obtain the vector information.
In the embodiments of the present application, the vector information may be obtained in any one of the following modes (a quantization sketch follows):
Where the identification information includes unit length information of at least two dimensions, each point of the trajectory corresponding to the trajectory information is quantized based on that unit length information, giving each point's coordinates in the coordinate system formed by the unit lengths of the at least two dimensions; the vector information is then obtained from these coordinates.
Where the identification information includes unit angle information, the curvature of the trajectory corresponding to the trajectory information is quantized based on the unit angle information, giving the radian information of the segment between the start and end points of the trajectory; the direction information and radian information of the trajectory are then determined as the vector information.
Alternatively, the trajectory corresponding to the trajectory information is segmented; each segment is quantized based on the unit length information of at least two dimensions and/or the unit angle information in the identification information, and the quantized results are combined in the execution order of the specified operation to obtain the vector information.
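A sketch of the first quantization mode for a planar surface, assuming the trajectory points are already expressed in 2-D surface coordinates and that the identification information supplies the two unit lengths.

```python
def quantize_trajectory(points, unit_u, unit_v):
    """Quantize 2-D trajectory points into whole identification units.

    `points` are (u, v) coordinates in the surface plane; `unit_u`/`unit_v`
    are the unit lengths of the two dimensions set on the surface. The result
    is the vector information as grid coordinates relative to the start point.
    """
    u0, v0 = points[0]
    return [(round((u - u0) / unit_u), round((v - v0) / unit_v)) for u, v in points]
```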
As can be seen from the above, in the mapping method provided by the embodiments of the present application, after the trajectory information of a specified operation performed on the surface of the target entity object is detected, the trajectory information can be quantized based on the identification information to obtain the vector information, so that the vector information closely matches the actual execution of the specified operation.
Based on the foregoing embodiments, in the mapping method provided by the embodiments of the present application, mapping the vector information to the target virtual object associated with the target entity object may be implemented through steps C1 to C2:
Step C1, acquiring a correspondence between at least one edge and/or surface of the target entity object and at least one edge and/or surface of the target virtual object.
In one embodiment, the correspondence between the at least one edge and/or surface of the target physical object and the at least one edge and/or surface of the target virtual object may be set by the AR device based on at least one factor of a user's operation habit on the target virtual object, a user's operation requirement, and a historical correspondence between the at least one edge and/or surface of the target physical object and the at least one edge and/or surface of the target virtual object.
In one embodiment, the correspondence between at least one edge and/or surface of the target physical object and at least one edge and/or surface of the target virtual object may be determined based on user input or settings.
Step C2, mapping the vector information to at least one edge and/or surface of the target virtual object based on the correspondence between the at least one edge and/or surface of the target entity object and the at least one edge and/or surface of the target virtual object, and on the trajectory information of the specified operation.
In one embodiment, based on the trajectory information of the specified operation, the surface and/or edge covered by the specified operation when performed may be determined.
In one embodiment, at least one edge and/or surface of the target virtual object corresponding to the edge and/or surface covered by the specified operation may be determined based on the edge and/or surface covered by the specified operation and the correspondence.
In one embodiment, mapping the vector information to at least one edge and/or surface of the target virtual object may be accomplished in any of the following ways (a sketch follows):
The direction information in the vector information is mapped to at least one edge and/or surface of the target virtual object, so as to adjust the orientation of that edge and/or surface.
The amplitude information in the vector information is mapped to at least one edge and/or surface of the target virtual object, so as to adjust at least one of the shape, size and position of that edge and/or surface.
The direction information and the amplitude information in the vector information are mapped to at least one edge and/or surface of the target virtual object, so as to adjust the pose parameters of that edge and/or surface.
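A sketch combining steps C1 and C2, assuming a correspondence dict from entity face ids to lists of virtual face ids (so one-to-many mappings are allowed) and dict-based faces; the field names are assumptions for illustration.

```python
def map_to_virtual(correspondence, covered_faces, vector_info, virtual_faces):
    """Forward vector information to the virtual faces matched to the covered entity faces.

    `covered_faces` are the entity faces the specified operation covered
    (determined from its trajectory information); faces are plain dicts here.
    """
    for face_id in covered_faces:
        for virtual_id in correspondence.get(face_id, []):
            face = virtual_faces[virtual_id]
            if "direction" in vector_info:
                face["orientation"] = vector_info["direction"]   # adjust orientation
            if "span" in vector_info:
                face["offset"] = face.get("offset", 0.0) + vector_info["span"]
```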
As can be seen from the foregoing, in the mapping method provided by the embodiments of the present application, after the correspondence between at least one edge and/or surface of the target entity object and at least one edge and/or surface of the target virtual object is obtained, the vector information can be mapped to the target virtual object's edges and/or surfaces based on that correspondence and on the trajectory information of the specified operation, achieving fine-grained mapping of the vector information from the target entity object to the target virtual object.
Based on the foregoing embodiment, the mapping method provided in the embodiment of the present application may further include steps D1 to D2:
Step D1, acquiring shape structure information of the target virtual object and shape structure information of the target entity object.
In one embodiment, the shape structure information of the target entity object may be obtained by the AR device detecting and identifying the target entity object.
In one embodiment, the shape structure information of the target virtual object may be determined when the target virtual object is created; if the object's shape structure later changes, the AR device may update its shape structure information accordingly.
And D2, determining the corresponding relation between at least one edge and/or surface of the target entity object and at least one edge and/or surface of the target virtual object based on the corresponding relation between the shape structure information of the target virtual object and the shape structure information of the target entity object.
In one embodiment, where the shape structure information of the target virtual object is similar to the shape structure information of the target entity object, that is, the two objects have similar geometric shapes or geometric structures, a correspondence between at least one edge and/or surface of the target entity object and at least one edge and/or surface of the target virtual object may be determined; for example, if the target virtual object and the target entity object are both cubes, a correspondence may be established between at least one square surface of the target entity object and at least one square surface of the target virtual object, and between at least two edges of the target entity object and at least two edges of the target virtual object.
In one embodiment, the AR device may analyze the shape structure information of the target virtual object and that of the target entity object and, if it determines that at least one surface of the target virtual object is similar to at least one surface of the target entity object, determine a correspondence between at least one edge and/or surface of the target entity object and at least one edge and/or surface of the target virtual object.
In one embodiment, the correspondence between the at least one edge and/or surface of the target entity object and the at least one edge and/or surface of the target virtual object may include at least one of a one-to-one, a one-to-many, and a many-to-one correspondence between the edges and/or surfaces of the target entity object and the edges and/or surfaces of the target virtual object.
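For illustration only, the following is a minimal sketch of step D2 under the assumption that each object exposes its faces as (id, label) pairs carrying a coarse shape label; the matching rule and all names here (build_correspondence, entity_faces, virtual_faces) are assumptions of this sketch, not part of the disclosed method.

```python
from collections import defaultdict


def build_correspondence(entity_faces, virtual_faces):
    """Pair faces that share a coarse shape label.

    entity_faces / virtual_faces: lists of (face_id, label) tuples, e.g.
    [("e0", "square"), ...]. Returns a dict mapping each entity face id to
    the list of matching virtual face ids, which covers the one-to-one,
    one-to-many, and many-to-one cases mentioned above.
    """
    by_label = defaultdict(list)
    for face_id, label in virtual_faces:
        by_label[label].append(face_id)
    return {face_id: by_label[label]
            for face_id, label in entity_faces if by_label[label]}


# Example: two cubes. Every square face of the entity object maps to every
# square face of the virtual object (a one-to-many correspondence).
entity = [(f"e{i}", "square") for i in range(6)]
virtual = [(f"v{i}", "square") for i in range(6)]
print(build_correspondence(entity, virtual)["e0"])  # ['v0', 'v1', ..., 'v5']
```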
As can be seen from the foregoing, in the mapping method provided by the embodiment of the present application, the correspondence between the edge and/or the surface of the target physical object and the edge and/or the surface of the target virtual object is determined according to the shape structure information of the target physical object and the shape structure information of the target virtual object, so that the correspondence between the edge and/or the surface of the target physical object and the edge and/or the surface of the target virtual object can reflect the geometric structure characteristics of the target physical object and the target virtual object.
Based on the foregoing embodiment, the mapping method provided by the embodiment of the present application may further include the following steps:
Based on the vector information, display parameters of at least a portion of the at least one edge and/or at least a portion of the at least one surface of the target virtual object are adjusted.
In one embodiment, the display parameters of at least a portion of the at least one edge may include at least one parameter of a length, a thickness, a color, a brightness, and whether a shadow display effect is provided.
In one embodiment, the display parameters of at least a portion of the at least one surface may include at least one parameter of an area size, shape, color, brightness, and contrast of at least a portion of the at least one surface.
In one embodiment, when there are multiple pieces of vector information, they may have a sequence relationship corresponding to the execution order of the specified operations; in this case, multiple display parameters of the target virtual object may be adjusted in turn according to that sequence relationship. For example, the brightness of a first surface of the target virtual object is adjusted according to first vector information, the shape of a second surface is then adjusted according to second vector information, and the color of a first edge is then adjusted according to third vector information.
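For illustration only, the following is a minimal sketch of this sequential adjustment, assuming each piece of vector information has already been resolved to a (target, parameter, value) triple; the data layout is an assumption of this sketch.

```python
def apply_in_sequence(vector_infos, virtual_object):
    """Apply adjustments in the order the specified operations were performed.

    vector_infos: list of (target_key, parameter, value) tuples ordered by
    execution time; virtual_object: nested dict of display parameters.
    """
    for target_key, parameter, value in vector_infos:
        virtual_object[target_key][parameter] = value


obj = {"surface_1": {"brightness": 0.5},
       "surface_2": {"shape": "square"},
       "edge_1": {"color": "white"}}
# Matches the example above: brightness first, then shape, then color.
apply_in_sequence([("surface_1", "brightness", 0.8),
                   ("surface_2", "shape", "circle"),
                   ("edge_1", "color", "red")], obj)
```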
In one embodiment, the pose parameters of the target virtual object may also be adjusted based on the vector information; for example, the pose parameters may include a pitch angle and/or a rotation angle.
As can be seen from the above, the mapping method provided by the embodiment of the present application can adjust, based on the vector information, the display parameters of at least part of at least one edge and/or at least part of at least one surface of the target virtual object, thereby realizing high-precision, all-around adjustment of the display parameters of the target virtual object.
Fig. 6 is another flow chart of the mapping method according to the embodiment of the present application. As shown in fig. 6, the process may include steps 601 to 606:
Step 601, setting attribute adjustment parameters of the target virtual object.
Illustratively, the attribute adjustment parameters may include at least some of the adjustable attribute parameters of the target virtual object; by way of example, they may include at least one of the following basic attribute parameters of the target virtual object: shape, stereoscopic structure, size, color, brightness, contrast, edge length, and edge thickness.
Illustratively, the attribute adjustment parameters may further include a scaling factor, a rotation factor, and a translation factor in the X/Y/Z three-dimensional space corresponding to the basic attribute parameters.
The attribute adjustment parameters may be set by the user through the AR device, for example.
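For illustration only, the following is a minimal sketch of a container for such attribute adjustment parameters; the field names and default values are assumptions of this sketch.

```python
from dataclasses import dataclass


@dataclass
class AttributeAdjustParams:
    # basic attribute parameters the user allows to be adjusted
    adjustable: tuple = ("shape", "stereoscopic_structure", "size", "color",
                         "brightness", "contrast", "edge_length",
                         "edge_thickness")
    # per-axis factors in the X/Y/Z three-dimensional space
    scale_factor: tuple = (1.0, 1.0, 1.0)
    rotation_factor: tuple = (1.0, 1.0, 1.0)
    translation_factor: tuple = (1.0, 1.0, 1.0)


# e.g. a user-configured uniform doubling of size along all three axes
params = AttributeAdjustParams(scale_factor=(2.0, 2.0, 2.0))
```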
Step 602, detecting and identifying a target entity object.
For example, when the AR device detects and identifies the target entity object, it may also set the target virtual object at a location adjacent to the target entity object.
For example, after detecting and identifying the target entity object, the AR device may further determine a correspondence between at least one edge and/or surface of the target entity object and at least one edge and/or surface of the target virtual object based on the shape structure information of the target entity object and the shape structure information of the target virtual object.
For example, after detecting and identifying the target entity object, the AR device may project grid lines at the edges of the target entity object to display the shape structure information of the target entity object in an intuitive, visual manner.
Step 603, projecting prompt information onto corner points of the target entity object.
Illustratively, the corner points of the target entity object used for projecting the prompt information may be selected by the user, or may be determined by the AR device based on the shape structure information of the target entity object.
For example, the corner points of the target entity object used for projecting the prompt information may be determined by the AR device based on the density of the grid lines projected at the edges of the target entity object.
For example, the prompt information may include light of a specified color projected onto a corner point of the target entity object.
Step 604, obtaining a starting point position of the specified operation.
For example, the AR device may acquire a start point position of the specified operation by detecting and identifying the specified operation.
Step 605, setting scale information.
For example, the scale information may be determined by the AR device based on feature information of the edge where the start point position is located, or of an edge adjacent to the start point position; for example, if the edge where the start point position is located is a straight edge, unit length scale information as shown in Fig. 4A may be set; if the edge where the start point position is located is a curved edge, unit angle scale information as shown in Fig. 4B may be set.
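For illustration only, the following is a minimal sketch of step 605, assuming the AR device has already classified the edge at the start point position as straight or curved; the labels and unit values are assumptions of this sketch.

```python
def set_scale_info(edge_type: str) -> dict:
    """Choose scale information from the type of the edge at the start point."""
    if edge_type == "straight":
        # unit-length scale, as in the Fig. 4A example
        return {"kind": "length", "unit": 0.01}  # e.g. one tick per centimetre
    if edge_type == "curved":
        # unit-angle scale, as in the Fig. 4B example
        return {"kind": "angle", "unit": 5.0}    # e.g. one tick per 5 degrees
    raise ValueError(f"unknown edge type: {edge_type!r}")
```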
Step 606, obtaining vector information and mapping the vector information.
For example, the AR device may continuously recognize and track a specified operation to acquire track information of the specified operation, and quantize the track information based on scale information to acquire vector information.
For example, the AR device may map vector information to at least a portion of at least one edge of the target virtual object and/or at least a portion of at least one surface to enable an adjustment process for the target virtual object.
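For illustration only, the following is a minimal sketch of the quantization part of step 606, reusing the set_scale_info sketch above; the trajectory representation and names are assumptions of this sketch.

```python
import numpy as np


def quantize_track(track_points, scale):
    """Turn a tracked trajectory into vector information.

    track_points: (N, 3) array-like of tracked positions of the specified
    operation; scale: dict as returned by set_scale_info() above. Returns
    (unit direction, amplitude in whole scale units), or None if the
    trajectory has no net displacement.
    """
    displacement = np.asarray(track_points[-1]) - np.asarray(track_points[0])
    magnitude = float(np.linalg.norm(displacement))
    if magnitude == 0.0:
        return None
    direction = displacement / magnitude
    amplitude = round(magnitude / scale["unit"])  # quantized by the scale
    return direction, amplitude


# Example: a 4.2 cm stroke along X against a 1 cm unit-length scale
print(quantize_track([(0.0, 0.0, 0.0), (0.042, 0.0, 0.0)],
                     {"kind": "length", "unit": 0.01}))
# -> (array([1., 0., 0.]), 4)
```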
It should be noted that the mapping method provided by the embodiment of the present application may be applied not only to an AR device operated by a single person, but also to AR devices or AR systems operated by multiple persons. For example, in a multi-person AR conference, when the conference starts, the presenter may control the AR conference system to scan the entity objects on the conference table as well as each conference participant; the presenter may set, in the AR system, the various virtual scales that an entity object can project and the attribute parameters of the virtual objects corresponding to those virtual scales, and may also set a correspondence between an entity object and at least one virtual object. After the AR conference is started, any conference participant at the AR conference site can then adjust and control a virtual object during the conference through a specified operation performed on an entity object and the vector information of that specified operation on the surface of the target entity object.
As can be seen from the above, in the mapping method provided by the embodiment of the present application, by performing the specified operation on the target entity object (or another entity object), the user can adjust or control the target virtual object (or another virtual object); this improves both the accuracy and the flexibility with which the user processes or controls the virtual object.
Based on the foregoing embodiments, the embodiments of the present application further provide an AR device, where the AR device is capable of implementing the mapping method provided in any one of the foregoing embodiments.
Based on the foregoing embodiments, the present application further provides a computer readable storage medium having stored therein a computer program that, when executed by a processor of an electronic device, is capable of implementing the mapping method provided in any of the foregoing embodiments.
The foregoing description of the various embodiments emphasizes the differences between them; for parts that are the same or similar, the embodiments may be referred to one another, and such parts are not repeated here for the sake of brevity.
The methods disclosed in the method embodiments provided by the application can be arbitrarily combined under the condition of no conflict to obtain a new method embodiment.
The features disclosed in the embodiments of the products provided by the application can be combined arbitrarily under the condition of no conflict to obtain new embodiments of the products.
The features disclosed in the embodiments of the method or the device provided by the application can be arbitrarily combined under the condition of no conflict to obtain a new embodiment of the method or the device.
The computer readable storage medium may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable Read-Only Memory, PROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), a ferromagnetic random access memory (Ferromagnetic Random Access Memory, FRAM), a flash memory (Flash Memory), a magnetic surface memory, an optical disk, or a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM); it may also be any of various electronic devices including one or any combination of the above memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
The present application is described with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed via the processor of the computer or other programmable data processing apparatus create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing description covers only preferred embodiments of the present application and is not intended to limit the scope of the application; any equivalent structure or equivalent process derived from the disclosure herein, whether used directly or indirectly in other related technical fields, is likewise intended to be covered by the scope of the application.
Claims (10)
1. A mapping method, comprising:
if a target entity object is detected, setting identification information on the target entity object;
obtaining, based on the identification information, vector information of a specified operation performed on the target entity object;
mapping the vector information to a target virtual object associated with the target entity object to process the target virtual object;
the obtaining, based on the identification information, vector information of a specified operation performed on the target entity object includes:
determining vector information of the specified operation based on a correspondence between the identification information and the vector information; the vector information of the specified operation comprises direction information of the specified operation and/or span information of the specified operation, the direction information is used for adjusting the specified surface orientation of the target virtual object, and the span information is used for adjusting the moving distance of the target virtual object.
2. The method of claim 1, wherein the setting identification information on the target entity object comprises:
acquiring shape structure information of the target entity object;
acquiring start point position information of the specified operation performed on the target entity object;
and setting the identification information on the target entity object based on the shape structure information of the target entity object and the start point position information.
3. The method according to claim 1 or 2, wherein the method further comprises:
obtaining a target edge and/or a target surface which meet specified conditions on the target entity object based on the shape structure information of the target entity object;
and projecting prompt information onto the target edge and/or the target surface so as to prompt a user to perform the specified operation on the target edge and/or the target surface.
4. The method of claim 2, wherein the setting the identification information on the target entity object based on the shape structure information of the target entity object and the start point position information comprises:
based on the shape structure information of the target entity object, obtaining feature information of a designated area of the target entity object, wherein the designated area includes at least one edge and/or surface of the target entity object;
determining identification category information based on the feature information of the designated area;
and setting, in the designated area, the identification information corresponding to the identification category information based on the start point position information.
5. The method of claim 4, wherein the identification category information includes at least one of length category information and angle category information; the identification information comprises at least one of unit length information and unit angle information of at least one dimension; and the setting, in the designated area, the identification information corresponding to the identification category information based on the start point position information includes:
setting, in the designated area and based on the start point position information, unit length information of at least one dimension corresponding to the length category information and/or unit angle information corresponding to the angle category information.
6. The method of claim 1, wherein the mapping the vector information to a target virtual object associated with the target entity object comprises:
acquiring a corresponding relation between at least one edge and/or surface of the target entity object and at least one edge and/or surface of the target virtual object;
and mapping the vector information to at least one edge and/or surface of the target virtual object based on the correspondence between the at least one edge and/or surface of the target entity object and the at least one edge and/or surface of the target virtual object, and on the track information of the specified operation.
7. The method of claim 6, wherein the method further comprises:
acquiring shape structure information of the target virtual object and shape structure information of the target entity object;
and determining the correspondence between at least one edge and/or surface of the target entity object and at least one edge and/or surface of the target virtual object based on the correspondence between the shape structure information of the target virtual object and the shape structure information of the target entity object.
8. The method of claim 1, wherein the obtaining vector information for the specified operation performed on the target entity object based on the identification information comprises:
if the specified operation performed on the surface of the target entity object is detected, obtaining track information of the specified operation;
and quantizing the track information based on the identification information to obtain the vector information.
9. The method of claim 1, wherein the method further comprises:
based on the vector information, adjusting display parameters of at least part of at least one edge and/or at least part of at least one surface of the target virtual object.
10. An augmented reality AR device capable of implementing the mapping method of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210108384.3A CN114546210B (en) | 2022-01-28 | 2022-01-28 | Mapping method and augmented reality device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210108384.3A CN114546210B (en) | 2022-01-28 | 2022-01-28 | Mapping method and augmented reality device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114546210A CN114546210A (en) | 2022-05-27 |
CN114546210B true CN114546210B (en) | 2024-07-23 |
Family
ID=81673378
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210108384.3A Active CN114546210B (en) | 2022-01-28 | 2022-01-28 | Mapping method and augmented reality device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114546210B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109891370A (en) * | 2016-10-24 | 2019-06-14 | 维塔驰有限公司 | Auxiliary carries out the method and system and non-transitory computer readable recording medium of object control |
CN113168737A (en) * | 2018-09-24 | 2021-07-23 | 奇跃公司 | Method and system for three-dimensional model sharing |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11227435B2 (en) * | 2018-08-13 | 2022-01-18 | Magic Leap, Inc. | Cross reality system |
2022
- 2022-01-28 CN CN202210108384.3A patent/CN114546210B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109891370A (en) * | 2016-10-24 | 2019-06-14 | 维塔驰有限公司 | Auxiliary carries out the method and system and non-transitory computer readable recording medium of object control |
CN113168737A (en) * | 2018-09-24 | 2021-07-23 | 奇跃公司 | Method and system for three-dimensional model sharing |
Also Published As
Publication number | Publication date |
---|---|
CN114546210A (en) | 2022-05-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3256938B1 (en) | Image display system, information processing apparatus, image display method, image display program, image processing apparatus, image processing method, and image processing program | |
KR100869447B1 (en) | Apparatus and method for indicating a target by image processing without three-dimensional modeling | |
CN108268186B (en) | Selection points on an electroanatomical map | |
CN109308203B (en) | Pie chart label display method, system, readable storage medium and computer equipment | |
KR20080045510A (en) | Controlling method and apparatus for user interface of electronic machine using virtual plane | |
US20140333585A1 (en) | Electronic apparatus, information processing method, and storage medium | |
CN109731329B (en) | Method and device for determining placement position of virtual component in game | |
CN110956674B (en) | Graph adjusting method, device, equipment and storage medium | |
CN109508093A (en) | A kind of virtual reality exchange method and device | |
JP6562752B2 (en) | Information processing apparatus, control method therefor, program, and storage medium | |
JP7379684B2 (en) | Image generation method and device and computer program | |
CN105912101B (en) | Projection control method and electronic equipment | |
CN109102865A (en) | A kind of image processing method and device, equipment, storage medium | |
AU2013383628B2 (en) | Image processing apparatus, program, computer readable medium and image processing method | |
WO2019019372A1 (en) | Picture operation and control method and device for mobile terminal, mobile terminal, and medium | |
CN114546210B (en) | Mapping method and augmented reality device | |
US10297036B2 (en) | Recording medium, information processing apparatus, and depth definition method | |
CN104820512A (en) | Information processing apparatus | |
US10073612B1 (en) | Fixed cursor input interface for a computer aided design application executing on a touch screen device | |
CN111199512A (en) | SVG vector graphics adjusting method, SVG vector graphics adjusting device, storage medium and terminal | |
JP7411534B2 (en) | Instrument reading system, instrument reading program | |
CN112292241B (en) | Apparatus and method for optimizing hair style guide generation | |
JP2017049984A (en) | Information processing device, control method thereof and program, and information processing system, control method thereof and program | |
EP3059664A1 (en) | A method for controlling a device by gestures and a system for controlling a device by gestures | |
CN114594889B (en) | Control method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||