CN114706511A - Interaction processing method and device and electronic equipment - Google Patents
- Publication number
- CN114706511A (application number CN202111646004.3A)
- Authority
- CN
- China
- Prior art keywords
- virtual object
- guide
- target
- target virtual
- display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] using icons
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0486—Drag-and-drop
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
- G06F2203/04802—3D-info-object: information is displayed on the internal or external surface of a three dimensional manipulable object, e.g. on the faces of a cube that can be rotated by the user
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present application provides an interaction processing method and apparatus and an electronic device. When a user wears the electronic device to view any application scene output by the device, the electronic device can first obtain position guidance information for a target virtual object and present it in the application scene, so that the user can perform an interaction trigger operation on the target virtual object according to the position guidance information seen intuitively. In response to the interaction trigger operation, the electronic device quickly and accurately determines the target display position of the target virtual object according to the obtained interaction trigger position and the position guidance information, and fuses the target virtual object to the target display position in the application scene for display. This satisfies the requirement for flexibly updating the content presented in the application scene; compared with a processing manner in which the user determines the target display position by observing with the eyes, it not only enriches the interaction processing modes but also improves interaction processing efficiency and accuracy.
Description
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to an interactive processing method and apparatus, and an electronic device.
Background
However, in an Augmented Reality (AR) or Virtual Reality (VR) scene application, when a virtual object needs to be added or an existing virtual object adjusted, it is often necessary to use an interactive object to repeatedly adjust the display position of the virtual object, which is inefficient.
Disclosure of Invention
In view of this, the present application provides an interactive processing method, including:
obtaining position guide information aiming at a target virtual object, and presenting the position guide information in an output application scene;
responding to the interactive trigger operation of the target virtual object, and determining a target display position of the target virtual object according to the interactive trigger position aiming at the target virtual object and the position guide information;
fusing the target virtual object to the target display position in the application scene for display.
Optionally, the obtaining the position guidance information for the target virtual object includes:
identifying a first face in which a first scene object exists in an application scene, determining at least one guide position point on the first face, and forming position guidance information for a target virtual object by using the determined at least one guide position point; and/or,
responding to a position guide trigger operation aiming at a target virtual object, and acquiring position guide information aiming at the target virtual object; the position guide information includes a target guide pattern constituted by a guide line and/or a guide surface.
Optionally, the identifying a first face in which a first scene object exists in the application scene, and determining at least one guidance position point on the first face includes:
determining a first scene object in an application scene according to visual angle information of the application scene;
determining a first face in which the first scene object exists, and identifying geometrical characteristic information of the first face;
and acquiring at least one guide position point on the first surface according to the geometric characteristic information and a target guide rule.
Optionally, the determining a first face in which the first scene object exists includes:
scanning a plurality of surfaces where the first scene object exists, and determining a surface in a specified spatial plane as a first surface; or,
scanning a plurality of faces in which the first scene object exists, and detecting a selection operation for the plurality of faces;
in response to the selecting operation, the selected first face is determined.
Optionally, the method for obtaining the target guidance rule includes:
calling a preset guide rule corresponding to the identified geometric feature information as a target guide rule from preset guide rules configured for different surfaces with geometric features; or,
responding to a guide rule configuration triggering operation, outputting a guide rule configuration interface aiming at the application scene, and presenting a plurality of guide modes to be selected on the guide rule configuration interface;
and determining a target guide rule in response to the selection operation of the plurality of guide modes to be selected.
Optionally, the obtaining, in response to a position guidance trigger operation for a target virtual object, position guidance information for the target virtual object includes:
responding to position guide creation operation aiming at a target virtual object, creating a guide graph according to the motion information of an interactive object, and forming position guide information aiming at the target virtual object; or,
responding to the position guide calling operation aiming at the target virtual object, outputting a guide graphic display interface, and presenting at least one guide graphic on the guide graphic display interface;
in response to a selection operation of the presented guidance graphic, position guidance information for the target virtual object is composed by the selected guidance graphic.
Optionally, the forming the position guidance information for the target virtual object includes:
displaying the created or selected guide graphic in the application scene;
responding to the display state editing operation of the displayed guide graph, and adjusting the display state of the guide graph to obtain a target guide graph aiming at a target virtual object;
and at least the target guide graph forms position guide information aiming at the target virtual object.
Optionally, the determining a target display position of the target virtual object according to the interaction triggering position for the target virtual object and the position guidance information includes:
acquiring the relative position relation between the real-time moving position of a preset geometric feature point or surface of the target virtual object and each guide object contained in the position guide information; the real-time mobile position is determined based on interactive triggering operation of an interactive object; each guide object comprises a guide position point and/or at least one guide graph;
determining a target display position of the target virtual object in the guide object according to the relative position relation and a target guide rule;
the fusing the target virtual object to the target display position in the application scene for display includes:
directly moving the target virtual object from the real-time moving position to the corresponding target display position in the application scene, so that a plurality of objects are aligned on the same designated guide line or designated guide surface, the plurality of objects including at least the target virtual object;
when it is detected that any virtual object exists at the target display position in the application scene, moving that virtual object to a display position adjacent to the target display position for display, and displaying the target virtual object at the target display position;
and when it is detected that the display state of the target virtual object displayed in the application scene does not meet the display requirement, adjusting the display state of the target virtual object in response to an adjustment operation on the target virtual object.
The present application further proposes an interaction processing apparatus, the apparatus comprising:
the position guide information output module is used for obtaining position guide information aiming at the target virtual object and presenting the position guide information in an output application scene;
a target display position determining module, configured to determine, in response to an interaction trigger operation on the target virtual object, a target display position of the target virtual object according to the interaction trigger position for the target virtual object and the position guidance information;
and the target virtual object display module is used for fusing the target virtual object to the target display position in the application scene for displaying.
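As a rough illustration only (not the patent's actual implementation), the three modules described above could be organized as in the following sketch; all class, type and method names are hypothetical:

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]  # hypothetical 3D position type

@dataclass
class GuideObject:
    position: Vec3  # a guide position point, or a sample on a guide line/surface
    kind: str       # "point", "line" or "surface"

class InteractionProcessingApparatus:
    """Hypothetical sketch of the three modules of the apparatus."""

    def obtain_position_guidance(self, target_object) -> List[GuideObject]:
        # Position guidance information output module: build guide objects for the
        # target virtual object and present them in the application scene.
        raise NotImplementedError

    def determine_target_display_position(self, trigger_pos: Vec3,
                                          guidance: List[GuideObject]) -> Vec3:
        # Target display position determining module: compare the interaction trigger
        # position with the guidance information (e.g. pick the nearest guide object).
        raise NotImplementedError

    def fuse_and_display(self, target_object, display_pos: Vec3) -> None:
        # Target virtual object display module: render the object at the chosen position.
        raise NotImplementedError
```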
The present application further proposes an electronic device, which includes:
a display module; a plurality of sensors;
a memory for storing a program for implementing the interactive processing method as described above;
and the processor is used for loading and executing the program stored in the memory to realize the interactive processing method.
Accordingly, the present application provides an interaction processing method and apparatus and an electronic device. When a user wears the electronic device to view any application scene output by the device, the electronic device can first obtain position guidance information for the target virtual object and present it in the application scene, so that the user can perform an interaction trigger operation on the target virtual object according to the position guidance information seen intuitively. In response to the interaction trigger operation, the electronic device quickly and accurately determines the target display position of the target virtual object according to the obtained interaction trigger position and the position guidance information, and fuses the target virtual object to the target display position in the application scene for display. This satisfies the requirement for flexibly updating the presented content of the application scene; compared with a processing manner in which the user determines the target display position by observing with the eyes, it not only enriches the interaction processing modes but also improves interaction processing efficiency and accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a schematic hardware structure diagram of an alternative example of an electronic device suitable for the interaction processing method proposed in the present application;
FIG. 2 is a system architecture diagram of an alternative application environment suitable for the interactive processing method proposed in the present application;
fig. 3 is a schematic flowchart of an alternative example of the interaction processing method proposed in the present application;
fig. 4 is a schematic flowchart of another alternative example of the interaction processing method proposed in the present application;
fig. 5 is a schematic flowchart of an optional scenario of the interactive processing method proposed in the present application;
fig. 6 is a schematic view of an optional placement scene of a plurality of target virtual objects in the interactive processing method provided in the present application;
fig. 7 is a schematic diagram of an optional scene for moving a target virtual object based on a predicted movement trajectory in the interactive processing method proposed in the present application;
fig. 8 is a flowchart illustrating yet another alternative example of the interaction processing method proposed in the present application;
fig. 9 is a schematic diagram of a guidance rule configuration interface in the interaction processing method provided in the present application;
fig. 10a is a schematic diagram of geometric features of a rectangular object in the interactive processing method proposed in the present application;
fig. 10b is a schematic diagram of geometric features of a circular object in the interactive processing method proposed in the present application;
FIG. 10c is a schematic diagram of the geometric features of an irregular object in the interactive processing method proposed in the present application;
fig. 11 is a flowchart illustrating yet another alternative example of the interaction processing method proposed in the present application;
fig. 12a is a schematic diagram of an optional scene for creating a guidance graphic in the interactive processing method according to the present application;
fig. 12b is a schematic diagram of an optional scene of editing a guide line in the interactive processing method of the present application;
fig. 12c is a schematic diagram of an optional scene of editing a guide plane in the interactive processing method proposed in the present application;
fig. 12d is a schematic diagram of an optional interaction processing scene in which the target virtual object is automatically adsorbed onto the guide line in the interaction processing method proposed in the present application;
fig. 12e is a schematic diagram of an optional interactive processing scene in which the target virtual object is automatically adsorbed to the guide surface in the interactive processing method provided by the present application;
fig. 13 is a schematic structural diagram of an alternative example of the interaction processing apparatus proposed in the present application.
Detailed Description
As described in the background, when a user wears an electronic device such as an AR (Augmented Reality) device or a VR (Virtual Reality) device and views an application scene presented by the device, the user may wish to adjust the display position of a virtual object in the application scene and/or add one or more virtual objects, and may even want the virtual objects, or the virtual objects and existing objects in the scene, to form a specific positional relationship, such as being aligned, arranged in a specific shape, or placed at a specific position. For these application requirements, the present application proposes that, before the virtual object is operated by means of an interactive object (such as an interactive device like a handle, or the user's hand), position guidance information for the virtual object (such as a guide line, a guide surface, a guide position point or a specific guide graphic indicating where the virtual object should be placed) is obtained and presented in the application scene. In this way, the user can move the virtual object with reference to the position guidance information and quickly and accurately place it at the target display position for display. Compared with a processing manner in which the user relies on his own observation to repeatedly adjust the display position of the virtual object, the present application improves interaction processing efficiency and accuracy, better satisfies different types of interaction processing requirements, and improves user experience.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Referring to fig. 1, a schematic diagram of the hardware structure of an optional example of an electronic device suitable for the interaction processing method provided by the present application is shown. The electronic device may be a terminal device capable of presenting an application scene to the user, such as the AR device or VR device mentioned above, for example a helmet-type or glasses-type AR/VR device. The application scene may be formed by combining a virtual environment with the real environment, or by a virtual environment alone; the present application does not limit how the application scene is formed or what it contains, which may be determined according to the situation. The present application also does not limit the product type of the electronic device, and the user may select and wear an electronic device according to the actual situation, so as to implement the interaction processing method provided by the present application.
In practical applications, as shown in fig. 1, the electronic device may include: display module assembly 11, a plurality of sensors 12, at least one memory 13 and at least one processor 14, wherein:
the display module 11, the sensor 12, the memory 13 and the processor 14 may be connected to a communication bus to realize mutual data interaction.
The display module 11 may be used to present the rendered application scene to the user of the electronic device. The application scene may contain a virtual environment and/or a virtual scene obtained by rendering the real environment; for example, in a game scenario, when the user wears the electronic device, the game environment may be shown through the display module 11, and this game environment may be rendered in combination with part of the real environment or rendered from a virtual environment only, as the case may be. In practical applications, the display module 11 may be integrated in the electronic device, or another terminal having a display, such as a mobile phone, may be connected to the electronic device and serve as its display module 11 to display the application scene. The present application does not limit the structure and type of the display module 11 of the electronic device.
The rendering of the application scene output by the display module 11 may be implemented by the processor 14 of the electronic device, so that the electronic device can be used offline. In some embodiments, as shown in the application environment diagram of fig. 2, the electronic device may also be networked to access a service server, which provides the application scene data to be output by the display module 11; in this case, in order to reduce the processing load of the electronic device, the service server may perform the rendering of the application scene. Similarly, the electronic device may also access other terminal devices with strong data processing capability, such as a notebook computer or a desktop computer, which provide service data, such as AR or VR data, for the electronic device.
The plurality of sensors 12 may include, but are not limited to: inertial sensors (also referred to as motion capture sensors) such as acceleration sensors, gyroscopes and geomagnetic sensors, which capture motions of the user's head and/or other body parts, such as movement and rotation, as well as proximity sensors, capacitive sensors, infrared sensors, image sensors and the like, as required. For each type of sensor, its position in the electronic device can be determined according to its function; for example, a sensor may be located on an interactive object such as a handle or a glove, or on the device worn by the user.
The memory 13 may be used to store a program for implementing the interaction processing method described in the method embodiments below; the processor 14 may load and execute the program stored in the memory 13 to implement the steps of the interaction processing method described in the corresponding method embodiments, and the specific implementation process may refer to the description of the corresponding parts of the following embodiments.
In the embodiment of the present application, the memory 13 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device or other non-volatile solid-state storage device. The processor 14 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another programmable logic device. The structures and types of the memory 13 and the processor 14 are not limited in the present application and can be flexibly adjusted according to actual requirements.
It should be understood that the structure shown in fig. 1 does not limit the terminal device in the embodiments of the present application; in practical applications, the terminal device may include more components than those shown in fig. 1, or combine certain components, such as a microphone, a speaker, a power supply module, various communication modules and the like, which is not limited by the present application.
Referring to fig. 3, a schematic flowchart of an optional example of the interaction processing method provided by the present application is shown. The method may be executed by an electronic device and is applicable to an interaction processing scenario in which a user wears the electronic device, views an application scene displayed by it, and needs to perform an interaction operation on a virtual object contained in the application scene or on an entity object from the real environment, or needs to add another virtual object to the application scene; the specific scenario may be determined according to the actual situation. As shown in fig. 3, the method may include:
step S31, obtaining position guide information for the target virtual object, and presenting the position guide information in the output application scene;
in combination with the above description of the technical solution of the present application, a target virtual object may be any one or more virtual objects in an application scene presented to the user wearing the electronic device, and may also be a virtual object that the user wants to add to the currently presented application scene.
It should be noted that the content and the representation form of the position guidance information are not limited in the present application. For example, if the target virtual object is to be placed at a middle position on a table in the application scene, the position guidance information may include a guide position point presented at the center of the table, and guide position points may even be presented at several equally divided positions of the center line, so as to assist the user in placing the target virtual object; if multiple target virtual objects are to be placed in alignment on the table, the position guidance information may include a guide line or a guide surface presented on the table surface to assist the user in achieving the aligned placement of the target virtual objects, and so on. The position guidance information can therefore be determined according to the type of the target virtual object and its placement requirement in the application scene, which is not described in detail in the present application.
The presentation position of the position guide information in the application scene can be flexibly adjusted by a user and can also be determined according to the generation mode of the position guide information, and the presentation position and the presentation mode are not limited by the application.
Step S32, responding to the interaction triggering operation of the target virtual object, and determining the target display position of the target virtual object according to the interaction triggering position and the position guide information aiming at the target virtual object;
as described above, the position guidance information seen by the user may serve as an auxiliary tool for placing the target virtual object at the target display position, and the target virtual object may be selected or determined by using an interactive object such as a handle (the user may press a function button or rotate the handle to perform an interactive operation on the object), a glove (worn by the user to perform an interactive operation), and the like.
Based on the above description, the interaction trigger operation in step S32 may be a trigger operation performed by the user on an interactive object of the electronic device. By analyzing the interaction data generated by the interaction trigger operation, the interaction trigger position produced by each trigger operation can be determined, for example a position representing where the display of the target virtual object has moved to. Then, the interaction trigger position and the position guidance information may be compared and analyzed according to a preset position configuration rule, and the target display position of the target virtual object may be determined with reference to an independent guide position point, or a plurality of consecutive guide position points, contained in the position guidance information. Based on the comparison result, the user may be guided to continue the interaction trigger operation via the interactive object, continuously updating the display movement position of the target virtual object until the target virtual object is moved to the target display position.
For example, if the interaction trigger position of the moved target virtual object is presented in the application scene in real time, the user can view the positional relationship between the interaction trigger position and the position guidance information, determine the target display position where the target virtual object should be placed, and move the target virtual object there accordingly, or directly place the selected target virtual object at the target display position, and the like.
In some embodiments, a display position automatic guidance function of the target virtual object is configured for the position guidance information, and a corresponding effective distance range is configured for the function, so that when the interaction trigger position enters the effective distance range, the target virtual object may be automatically controlled to reach a target display position in the position guidance information, or a guidance position in the position guidance information is automatically selected as the target display position, and the like.
Step S33, fusing the target virtual object to the target display position in the application scene for display.
In one possible implementation, after determining a target display position of the target virtual object in the application scene of the current frame (even a plurality of subsequent consecutive frames, which depends on the next time for adjusting the display position of the target virtual object), the application scene displaying the target virtual object at the target display position is output through an image rendering technology; in yet another possible implementation manner, for a target virtual object that should be rendered and displayed in an application scene, the target virtual object may be directly superimposed on the target display position for display, and the like.
The rendering process of the application scene and each virtual object can be implemented on the electronic device, or can be executed by the service server or other terminal devices, and then the rendered continuous frame images are fed back to the electronic device, so that the target virtual object is displayed at the target display position in the application scene presented to the user by the electronic device. The present application does not limit the implementation method of step S33.
In summary, in the embodiment of the present application, when a user wearing the electronic device views any application scene output by the device and needs to display a target virtual object (an existing and/or newly added virtual object) at a position recorded here as the target display position (a position that satisfies the display requirement of the target virtual object, and which cannot be known in advance because the presented application scene is variable), the electronic device may first obtain position guidance information for the target virtual object and present it in the application scene. The user can then perform an interaction trigger operation on the target virtual object according to the position guidance information seen intuitively, and the electronic device, in response to the interaction trigger operation, quickly and accurately determines the target display position of the target virtual object according to the obtained interaction trigger position and the position guidance information, and fuses the target virtual object to the target display position in the application scene for display. This satisfies the requirement for flexibly updating the content displayed in the application scene; compared with a processing manner in which the user determines the target display position merely by observing with the eyes, it not only enriches the interaction processing modes, but also improves interaction processing efficiency and accuracy.
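The following minimal sketch (illustrative only; all names are hypothetical and the nearest-point rule is just one possible position configuration rule, not necessarily the one used in the patent) shows how steps S31 to S33 could fit together when the guidance information is a set of guide position points:

```python
import math
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def determine_target_display_position(trigger_pos: Vec3,
                                      guide_points: List[Vec3]) -> Vec3:
    # Step S32: compare the interaction trigger position with the position guidance
    # information and pick the guide point closest to the trigger position.
    return min(guide_points, key=lambda p: math.dist(trigger_pos, p))

def interaction_processing(target_object: dict,
                           guide_points: List[Vec3],
                           trigger_pos: Vec3) -> dict:
    # Step S31 is assumed to have produced `guide_points` and presented them in the scene.
    target_pos = determine_target_display_position(trigger_pos, guide_points)
    # Step S33: "fuse" the object by recording the display position it is rendered at.
    target_object["display_position"] = target_pos
    return target_object

# Example: guide points at the corners and center of a table top (positions assumed).
points = [(0.0, 0.0, 0.7), (1.0, 0.0, 0.7), (0.0, 0.5, 0.7), (1.0, 0.5, 0.7), (0.5, 0.25, 0.7)]
placed = interaction_processing({"name": "virtual_lamp"}, points, trigger_pos=(0.45, 0.3, 0.72))
print(placed["display_position"])  # -> (0.5, 0.25, 0.7), the table-top center
```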
Referring to fig. 4, a schematic flowchart of yet another optional example of the interaction processing method proposed in the present application is shown. This embodiment may be an optional refined implementation of the interaction processing method described in the foregoing embodiment, for example a detailed description of the process of obtaining the position guidance information, which is however not limited to the obtaining manner described in this embodiment. As shown in fig. 4, the method may include:
step S41, outputting any application scene obtained by image rendering;
step S42, identifying a first face in the application scene where a first scene object exists, and determining at least one guide position point on the first face;
in this embodiment of the application, the first scene object may refer to any one or more objects currently present in an application scene in front of a user wearing the electronic device, and may be a real object existing in a real environment or a virtual object in a virtual environment.
Taking a virtual-real combined augmented reality scene as an example: if multiple scene objects exist in the current application scene, then when a target virtual object needs to be placed, the size and distance information of the planes of the existing scene objects can be detected (for example by collision detection rays), and the plane closest to the user in the application scene is selected as the first plane for placing the target virtual object. However, this processing manner is very limited and often cannot meet application requirements in areas such as AR social technology.
In this regard, the embodiments of the present application propose to identify the surface of the scene object, and further determine the guide location points existing thereon by combining the geometric features thereof, so as to guide or assist the user to place the virtual object at the precise guide location points. Therefore, the electronic device may perform recognition detection on each currently output application scene or a designated scene object, and determine all surfaces existing on the scene object or designated to place other objects.
Thus, after seeing the objects in the application scene, if the user needs to place a new virtual object, that is, a target virtual object, in the application scene, or needs to move an existing virtual object to another position, and the user already knows the target display position of the target virtual object, the user can directly select the object at the target display position as the first object and have it identified, so that the electronic device learns which one or more surfaces the first object contains; the user can then further select one or more of these as the first surface for placing the target virtual object. Alternatively, all the identified surfaces may be determined as first surfaces without any selection, and geometric feature analysis and detection may be performed on each determined first surface.
It should be noted that determining the first object as described above is not limited to selection by the user. The electronic device of the present application may also perform matching analysis according to object attribute information of the target virtual object (such as object type, size, shape and placement posture) and attribute information of each object existing in the application scene (such as object type, size, shape and other basic attributes) to determine an object suitable for placing the target virtual object as the first object, and may even determine the first surface of the first object at the same time, before carrying out the subsequent geometric feature analysis. Of course, the electronic device may also directly perform surface recognition on each object currently existing in the application scene, and then determine the first surface of the first object based on a user selection operation, or directly determine all recognized surfaces as first surfaces.
Then, further recognition and analysis are performed on the geometric features of the identified first surface, and at least one guide position point existing on the first surface is obtained according to a preset guidance rule. The guidance rule may be configured in advance by the user: the guide position configuration function may be triggered by a preset trigger manner, such as voice, a specific function button, a menu, an interactive object, or a specific user gesture, to complete the configuration of the guidance rule. The guidance rule may include various kinds of guide position points, such as end points, midpoints, center points, quartering points, intersection points, closest points, extension lines/planes, parallels and the like; the user may select the required guidance rule content based on the actual situation, so that the subsequent guide position points are generated accordingly.
It can be understood that, in the process of configuring the guidance rules, the user can perform autonomous selection through the interactive object, and also can input new rule contents, so as to determine the actually required guidance rules, and record the guidance rules as target guidance rules for actual invocation, thereby better satisfying the personalized requirements of different users. Certainly, the target guiding rule may also be configured when the electronic device leaves the factory, and the user does not need to spend time to configure the target guiding rule, so that the user who does not know the configuration of the electronic device can conveniently use the electronic device, and the use experience of the user is improved.
In still other embodiments, regarding the implementation method of step S42, including but not limited to the implementation manners described above, all guidance location points on each first surface may be determined, the determined guidance location points may be presented in the application scene for the user to select a desired guidance location point from, and other guidance location points may be cancelled.
Step S43, using the determined at least one guiding position point to form position guiding information for the target virtual object, and presenting the position guiding information in the application scene;
after at least one guide position point on the first surface of the first object in the identified application scene has been determined according to the above method, the determined guide position points may be directly presented in the application scene as the position guidance information of the target virtual object; or the position guidance information may be composed of the guide position points selected and retained in the selection manner described above; or one or more guide graphics may be determined based on the determined guide position points, and the position guidance information may be composed of the guide graphics and the guide position points. The present application does not limit the construction method of the position guidance information.
Here, a guide graphic may be a guide line, a guide surface, a guide graphic with a specific shape, or the like; the shape and display state of the guide graphic are not limited and can be flexibly configured according to the actual placement requirements of the virtual object (such as aligned placement or placement in a specific shape). A guide graphic may be formed automatically by connecting a plurality of guide position points according to a preset guide graphic requirement, or determined in response to a line-drawing operation on a plurality of presented guide position points, and the like.
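As an illustration of how guide position points might be derived from the geometric features of a rectangular first surface (the corner, midpoint and center rules below are assumed examples of a target guidance rule, not the patent's specified algorithm; all names are hypothetical):

```python
from typing import List, Tuple

Vec2 = Tuple[float, float]  # points in the plane of the first surface, kept 2D for simplicity

def rectangle_guide_points(corners: List[Vec2], rule: str) -> List[Vec2]:
    """Generate guide position points on a rectangular face from its four ordered corners."""
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    center = (cx, cy)
    # Midpoints of each edge (corners are assumed to be given in order around the rectangle).
    midpoints = [((corners[i][0] + corners[(i + 1) % 4][0]) / 2.0,
                  (corners[i][1] + corners[(i + 1) % 4][1]) / 2.0) for i in range(4)]
    if rule == "endpoints":
        return list(corners)
    if rule == "midpoints":
        return midpoints
    if rule == "center":
        return [center]
    # Default: combine corners, edge midpoints and the center point.
    return list(corners) + midpoints + [center]

table_top = [(0.0, 0.0), (1.2, 0.0), (1.2, 0.6), (0.0, 0.6)]
print(rectangle_guide_points(table_top, rule="all"))
```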
Step S44, responding to the interactive trigger operation of the target virtual object, and determining the real-time moving position of the preset geometric feature point of the target virtual object;
in combination with the description of the corresponding part of the above embodiment, when a user moves a target virtual object by means of an interactive object, the position of the target virtual object usually needs to be detected in real time. Because the target virtual object is usually not a single point, when locating its position coordinates, the real-time position of a preset geometric feature point of the target virtual object can be taken as the real-time movement position of the target virtual object; the preset geometric feature point may be the center of gravity, a vertex, the center point and the like of the target virtual object.
In still other embodiments, according to a placement requirement of the target virtual object, the position of the preset surface of the target virtual object may also be determined as a real-time moving position of the target virtual object, and the position representation of the preset surface may be represented by information such as positions of endpoints and/or edges of the preset surface, which is not limited in this application. If the target virtual object needs to be placed on the first surface of the first object in a specific posture, and the vertex, edge or surface and the like needing to be contacted with the first surface in the target virtual object are determined, the positions of the features can be represented by the position of the interactive object, so that the real-time moving position of the target virtual object is represented; of course, the real-time movement position of other geometric feature points, edges or faces of the target virtual object may also be detected to determine the real-time movement condition of the target virtual object, and the moving target virtual object may be synchronously displayed in the application scene, and the movement track thereof may be viewed, as required.
The mobile position of the interactive object in the electronic device can be determined through the sensing parameters of the sensor configured in the electronic device, and the obtaining method for determining the real-time mobile position based on the interactive triggering operation of the interactive object is not described in detail in the application.
Step S45, acquiring a relative positional relationship between the real-time movement position and each guide object included in the position guide information;
in conjunction with the above description of the location guidance information, the guidance object may include, but is not limited to, the guidance location point and/or at least one guidance graphic described above, and if a plurality of guidance graphics are included, the types of the plurality of guidance graphics may be the same or different, as the case may be.
The embodiment of the application can express the relative position relationship between the target virtual object and each guide object at the real-time moving position by using the position distance difference, the angle relationship determined by combining the postures of each object and the like, including the relative distance, the relative posture angle relationship and the like, and the application does not limit the content and the expression mode of the relative position relationship.
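One way such a relative positional relationship could be represented is sketched below, under the assumption that each object's posture is reduced to a single yaw angle; the actual content and representation are not limited by the application, and the names are hypothetical:

```python
import math
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class RelativeRelation:
    relative_distance: float   # distance between the real-time movement position and the guide object
    relative_angle_deg: float  # difference between the object's yaw and the guide object's yaw

def relative_relation(move_pos: Vec3, move_yaw_deg: float,
                      guide_pos: Vec3, guide_yaw_deg: float) -> RelativeRelation:
    dist = math.dist(move_pos, guide_pos)
    # Wrap the angle difference into [-180, 180) degrees.
    angle = (move_yaw_deg - guide_yaw_deg + 180.0) % 360.0 - 180.0
    return RelativeRelation(dist, angle)

print(relative_relation((0.4, 0.2, 0.7), 95.0, (0.5, 0.25, 0.7), 90.0))
```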
Step S46, determining a target display position of the target virtual object in the guidance object according to the relative positional relationship and the target guidance rule;
and step S47, directly moving the target virtual object from the real-time moving position to the corresponding target display position in the application scene for display.
It should be noted that, in the process of configuring the target guidance rule, the present application may not only define information such as the type and construction manner of the guide object, but also define the guidance mode to be adopted when guiding the target virtual object according to the selected guide object, for example automatically adsorbing the target virtual object to the guide object when the relative distance is small, or other guidance modes; how to determine the target display position of the target virtual object, and how to adjust the display position of another virtual object already present at the guide object, when such an object exists at the selected guide object; and, when there are multiple target virtual objects, how to implement guided placement of the multiple target virtual objects. These may be determined according to circumstances and are not described in detail in this application.
Based on this, in conjunction with the above description of the technical solution of the present application, the present application intuitively guides or assists the user in placing the target virtual object at the target display position through the position guidance information presented in the application scene. In the scene diagram shown in fig. 5, for example, a gray circle may represent each guide feature point on the first surface, although the guide object is not limited to being represented by a display identifier such as a circle; the virtual objects and guide objects shown in fig. 5 are examples and do not limit the type of object or its display manner. On this basis, in the process of controlling the target virtual object to move, or of representing the movement of its display position through the interactive object, the relative positional relationship between the target virtual object and each guide object can be detected synchronously, and it can be determined whether this relative positional relationship satisfies a preset guidance positioning rule.
In the embodiment of the present application, if it is determined that the currently obtained relative positional relationship conforms to the guidance positioning rule, this indicates that the electronic device has essentially determined the target display position, and the target virtual object can then be directly controlled to be displayed at the target display position. For example, if the guidance positioning rule specifies that the first distance between the real-time movement position of the target virtual object and a guide object is smaller than a first distance threshold, the target virtual object may be automatically adsorbed to the target display position where that guide object is located, i.e. automatically controlled to move directly from the current position to the target display position for display, thereby improving interaction processing efficiency and operation convenience.
The first distance threshold may be the radius of a preset effective range within which the corresponding guide object provides the above automatic adsorption function for the virtual object; that is, once the target virtual object enters the preset effective range of any guide object, it may be automatically adsorbed to that guide position. The preset effective range is usually a circular range centered on the guide object with the first distance threshold as the radius; the value of the first distance threshold and the shape of the preset effective range are not limited and can be flexibly configured according to actual requirements, and the preset effective ranges and first distance thresholds of different guide objects may be the same or different, which is not limited by the present application.
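A minimal sketch of the automatic adsorption behaviour described above, assuming a shared effective range given by a first distance threshold (the threshold value and function names are illustrative assumptions):

```python
import math
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

def try_adsorb(move_pos: Vec3,
               guide_points: List[Vec3],
               first_distance_threshold: float = 0.05) -> Optional[Vec3]:
    """Return the guide position to adsorb to, or None if no guide object is in range."""
    in_range = [p for p in guide_points if math.dist(move_pos, p) < first_distance_threshold]
    if not in_range:
        return None  # keep following the interactive object's real-time movement position
    # Adsorb to the nearest guide object whose preset effective range has been entered.
    return min(in_range, key=lambda p: math.dist(move_pos, p))
```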
In some embodiments, the guidance object of the present embodiment shown in fig. 4 may refer to a guidance feature point, and when it is determined that there is no other virtual object currently at the target display position, the target virtual object is directly attached to the target display position where the guidance object is located for display. If there are other virtual objects at the target display position, the target virtual object still moves to the target display position, which often causes mutual occlusion between multiple virtual objects at the target display position, and the effect of displaying the multiple virtual objects at the same time cannot be achieved.
Therefore, to avoid the above situation, in some embodiments, if it is detected that a virtual object already exists at the target display position in the application scene, that virtual object may first be moved to a display position adjacent to the target display position (i.e. the position of the guide feature point), or to another display position, and the target virtual object may then be displayed at the target display position. In this way, if multiple virtual objects are to be displayed at the same target display position, the previously placed virtual object may be pushed to the display position of the adjacent guide object according to the placement order of the virtual objects; and if another virtual object already exists at that adjacent display position, as shown in fig. 5, the virtual object there may in turn be pushed to the display position of its own adjacent guide object, thereby achieving guided positioning while ensuring that the newly placed target virtual object is displayed at the target display position. The present application is not limited to this processing manner.
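The push-aside behaviour for an occupied target display position could look roughly like the following hypothetical sketch, where guide points are assumed to be ordered so that "adjacent" means the next index, and `occupancy` maps guide-point indices to the virtual objects displayed there:

```python
from typing import Dict, Optional

def place_with_push(occupancy: Dict[int, str], target_index: int, new_object: str) -> Dict[int, str]:
    """Place new_object at target_index, pushing any existing objects to adjacent guide points."""
    index = target_index
    carried: Optional[str] = new_object
    while carried is not None:
        # Put the carried object at this guide point and carry away whatever was displayed there.
        carried, occupancy[index] = occupancy.get(index), carried
        index += 1  # push the displaced object toward the adjacent guide point
    return occupancy

# Example: objects A and B already occupy guide points 0 and 1; C is newly placed at point 0.
print(place_with_push({0: "A", 1: "B"}, target_index=0, new_object="C"))
# -> {0: 'C', 1: 'A', 2: 'B'}: the newly placed object sits at the target display position.
```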
In still other embodiments, if it is detected that a virtual object already exists at the determined target display position (i.e. the position of the guide feature point), and the target guidance rule includes an automatic adsorption function between the virtual object and the guide object, the automatic adsorption function may be cancelled automatically, that is, temporarily disabled; alternatively, the user may perform the corresponding function cancellation operation through the interactive object, for example by any of the above-described voice, specific gesture/posture or menu triggers, to enable or cancel the automatic adsorption function. After the automatic adsorption function is cancelled, even if the target virtual object enters the preset effective range of the currently determined target display position, it will not be automatically adsorbed to that position, so the virtual object already at the target display position can continue to be displayed there. The user may then continue to move toward other guide objects according to the method described above and re-determine a target display position at another guide object, or may use a position around the currently determined target display position (e.g. within a preset range) as a new target display position at which to display the target virtual object.
In the process in which the user moves the target virtual object (either the target virtual object itself presented in the application scene, or a display identifier representing the real-time moving position of the target virtual object) toward another guide object through the interactive object, the automatic adsorption function, if previously cancelled, can be restored, so that when the target virtual object approaches the new guide object and enters its preset effective range, it automatically moves to the position of that guide object for display; the implementation process is not repeated here.
In still other embodiments, in the case where the above-mentioned guide object is a guide line or a guide surface, such a guide graphic generally guides a plurality of virtual objects to be placed in alignment. Therefore, in the process of placing any target virtual object, when the above step S46 is executed, the target display position of the target virtual object may be determined according to whether other objects (such as at least one other virtual object and/or at least one real object in the real scene) already exist on the guide object; for example, the idle position adjacent to an existing virtual object on the guide line may be taken as the target display position. If the automatic adsorption function is configured in advance, the target virtual object can then be controlled to move automatically to that target display position, and its angle can be adjusted automatically as needed, so that the multiple objects (including at least the target virtual object) on the same guide object are displayed in alignment.
Of course, when the guide object is a guide line or a guide surface and the target display position of the target virtual object is specified, the target display position is not limited to the display position adjacent to an existing object as described above; it may also be the starting display position of the existing object or a specific position in the guide object (such as an end point, the center, a certain side, etc.). In this case, if another object already occupies such a position, then, according to the implementation method described above, that object may be moved to a position adjacent to the target display position, and the target virtual object may be placed at the target display position for display, so that the object at the target display position is always the most recently placed one. If the objects on the guide object are all placed virtual objects, then the closer another display position is to the display position of the target virtual object, the smaller the time difference between the placement time of the virtual object displayed there and the current time.
In still other embodiments provided by the present application, in a case that the number of the target virtual objects is multiple, in a possible implementation manner, the target display positions of the multiple target virtual objects may be determined simultaneously according to the method described above, and the multiple target virtual objects are moved to the respective target display positions simultaneously for displaying, so that the multiple target virtual objects are fused to the corresponding target display positions in the application scene simultaneously for displaying. For the processing procedure that each target virtual object moves to the target display position for display, the description of the corresponding part above may be referred to, and details are not repeated in this embodiment.
As shown in fig. 6, the target display positions of the target virtual objects may be connected to one another to form a preset required shape; in fig. 6, for example, the target display positions are located at the four corner points and the center of the table, so that when the real-time moving position of a target virtual object, controlled by the interactive object, approaches one of the target display positions and the relative distance becomes smaller than the first distance threshold, the target virtual object may be automatically adsorbed to that target display position for display according to the method described above. The number and layout of the target display positions are not limited to those shown in fig. 6 and may be determined as the case may be; the present application does not illustrate them one by one.
In yet another possible implementation manner, the plurality of target virtual objects may also move to the target display positions sequentially for display. In this implementation, the corresponding number of target display positions may be determined in the manner described above, and the placement and display of the plurality of target virtual objects in the application scene may be performed in a sequence such as, but not limited to, that shown in fig. 6.
In another possible implementation manner, the target display positions required by the multiple target virtual objects may be in a one-to-one correspondence with the objects, or in a many-to-one relationship, i.e., several target virtual objects share the same target display position. In the latter case, the first target virtual object may still be moved to and displayed at the target display position according to the method described above; when the second target virtual object is moved close to that target display position, as described in the above optional embodiment, the target virtual object displayed previously may be moved to an adjacent display position for display, and the target virtual object moved this time is then displayed at the target display position, which is not repeated here. It can be understood that, during the interactive process, other target virtual objects that need to be displayed at other target display positions can still be displayed synchronously or sequentially according to the mobile display processing method described above, as the case may be.
In still other embodiments, if the same target virtual object needs to be placed at multiple target display positions in a scene, the positional relationship between the multiple target display positions where this target virtual object needs to be placed, the position requirement of each target display position on the first surface, and the like may be preconfigured according to actual requirements. Thus, in the process of moving the target virtual object, once its distance from a target display position is smaller than the first distance threshold, the target virtual object may be moved to the multiple target display positions simultaneously or sequentially for display according to the preset requirement; the implementation process is similar to that of the optional embodiments described above and is not detailed in this application.
Therefore, whether multiple target virtual objects are moved to multiple corresponding target display positions for display, or the same target virtual object is moved to multiple target display positions for display, the multiple target display positions can be flexibly configured according to actual requirements, such as a uniform distribution, a specific shape, and the like, which the present application does not illustrate one by one. The user can bring the target virtual object close to a target display position by visually observing each guide position point in the position guide information presented in the application scene and the real-time moving position of the target virtual object. When the preset automatic adsorption requirement is met (this requirement can be flexibly configured in advance as needed; if the automatic adsorption function is cancelled, the user manually moves the target virtual object to the position according to the visual indication of the guide position point) and the preset display requirement is met (which may indicate a preconfigured placement mode for multiple virtual objects, such as placing them in a particular shape sequentially or simultaneously), the target virtual object automatically moves to the corresponding target display position for display, which enriches the interaction processing methods and improves processing efficiency and interest.
In addition, according to any guidance rule described above, after a target virtual object has been moved to the corresponding target display position for display, the user can view the overall display effect of the target virtual object in the application scene; if the user is not satisfied with the display result, the automatic adsorption function can be cancelled, after which the target virtual object can be moved and its target display position updated, thereby achieving fine adjustment of the target virtual object.
Therefore, after the target virtual object is displayed at the target display position, it can be further detected whether the display state of the target virtual object presented in the application scene (which may include the display position, and may also include other display states such as display color, size, and angle) meets the display requirement (which may be judged subjectively by the user, or preconfigured on the electronic device, e.g., by obtaining the display state information of the target virtual object and matching it against a preset display requirement). If the display requirement is met, other virtual objects can continue to be processed. If it is not met, the user may trigger output of a display state adjustment interface for the target virtual object through any trigger manner (refer to the descriptions of trigger manners such as the guidance rule configuration trigger and the automatic adsorption function trigger in the above embodiments, which are not detailed here), such as a voice trigger or a trigger based on the interactive object, and perform an adjustment operation on any one or more display state parameters included in the display state adjustment interface, so that the electronic device adjusts the display state of the target virtual object in response to the adjustment operation, for example by changing its display size, display color, display brightness, and so on, as the case may be.
In the process of fine-tuning the target virtual object according to the above method, or of moving the target virtual object toward the target display position according to the displayed guide position points and/or guide graphics, in order to improve the efficiency and reliability of manual adjustment, the movement track of the target virtual object can be predicted from its position changes and recorded as a predicted movement track. The predicted movement track can be presented in the application scene as position guide information generated in real time, to guide or assist the user in moving the target virtual object along it. If the moving direction of the target virtual object changes during movement, the predicted movement track can be updated in real time and its presented direction changes correspondingly, so that the user can manually control the target virtual object to move to the target display position quickly and accurately.
For example, as shown in fig. 7, for any target virtual object, when the user needs to adjust its current display position horizontally, vertically, or at an arbitrary angle, the target virtual object is selected before moving so as to enter an editing state; a spatial coordinate system may be established with the target virtual object as the origin, and the different coordinate axes of the spatial coordinate system displayed, for example, as dotted lines. After the target virtual object starts moving, according to the method described above, the predicted movement track of the target virtual object may be computed from the real-time changes of its moving display position; as shown in fig. 7, the predicted movement track may likewise be displayed as a dotted line (but is not limited to this form) to indicate that the target virtual object is moving along it, and the predicted movement track may be updated and displayed in real time during movement according to the above method, until the target virtual object moves to the target display position for display.
Optionally, to improve the prediction accuracy of the movement track, the movement track of the target virtual object toward the target display position, which may also be called the recommended movement track, may be predicted using algorithms such as deep learning or machine learning in artificial intelligence, in combination with the object attributes of the target virtual object, the attributes of the objects included in the application scene or of the objects located within a preset distance range of the target virtual object, or even the movement history of the target virtual object under the user account or the historical layout information of the application scene. In addition, the presentation manner of position guide information such as movement tracks and spatial coordinate axes, including but not limited to the dotted-line manner shown in fig. 7, may be flexibly configured according to actual situations, which the present application does not illustrate one by one.
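As a minimal illustrative sketch of the simplest form of such prediction (linear extrapolation of the most recent positions; the sampling scheme and the function name are assumptions, and a learned model could replace the predictor as described above):

```python
# Predict a movement trajectory by linearly extrapolating the latest positions.
import numpy as np

def predict_trajectory(recent_positions, steps=10):
    """recent_positions: array-like of shape (n, 3) with the latest sampled positions.
    Returns `steps` extrapolated points along the current moving direction."""
    pts = np.asarray(recent_positions, dtype=float)
    velocity = pts[-1] - pts[-2]                  # last observed displacement per frame
    return [pts[-1] + velocity * (i + 1) for i in range(steps)]

# Usage: re-run every frame so a change of moving direction updates the
# presented (e.g. dotted-line) predicted movement track in real time.
history = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.02, 0.0)]
for p in predict_trajectory(history, steps=3):
    print(np.round(p, 3))
```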
Referring to fig. 8, which is a flowchart illustrating a further optional example of the interaction processing method proposed in the present application, this embodiment may be a further optional detailed implementation method of the interaction processing method described in the foregoing embodiment, and the method may still be executed by an electronic device, as shown in fig. 8, and the method may include:
step S81, outputting any application scene obtained by image rendering;
step S82, determining a first scene object in the application scene according to the perspective information of the application scene;
step S83, determining a first surface of the first scene object, and identifying the geometric feature information of the first surface;
In conjunction with the description in the corresponding part of the above embodiment, the first scene object is any object in the application scene currently output by the electronic device, and may be a real object in a real environment or a virtual object in a virtual environment. The first surface may be any one or more surfaces of the first scene object and may be determined according to actual needs.
In one possible implementation manner, the electronic device may scan the plurality of faces on which the first scene object exists and determine, as the first face, a face lying in a specified spatial plane (which may be specified by the user through the selection or configuration manners described above, or determined by combining the respective attribute information of the first scene object and the target virtual object, which is not limited by this application), such as, but not limited to, the desktop or a side face of the desk in the above scene example.
In yet another possible implementation, the first face may be freely selected by the user. Accordingly, the electronic device may scan the plurality of faces on which the first scene object exists, detect a selection operation for these faces (e.g., adjust the display state of each scanned face to remind the user to pay attention to them, after which the user may click through an interactive object to select one or more of the faces), and determine the selected first face in response to the selection operation; however, the determination method of the first face is not limited to that described in this embodiment.
Then, in order to determine the position guide information existing or to be generated on the first surface, the geometric feature information of the first surface can be identified, such as its shape, size, vertices, center, and the like, which may be determined according to the actual situation. Of course, the geometric feature information to be identified may also be determined according to the content of the preconfigured target guidance rule; the present application does not limit the geometric feature information content or its identification method.
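As a minimal illustrative sketch (assuming a rectangular first surface described by its four corner vertices; the function name and returned fields are assumptions for the example), the geometric feature information that later yields candidate guide position points could be derived as follows:

```python
# Derive geometric feature information of a first face from its four vertices:
# vertices, edge midpoints, center, and size.
import numpy as np

def face_geometric_features(vertices):
    """vertices: iterable of four 3D corner points in order around the face."""
    v = np.asarray(vertices, dtype=float)
    midpoints = [(v[i] + v[(i + 1) % len(v)]) / 2 for i in range(len(v))]
    center = v.mean(axis=0)
    size = (np.linalg.norm(v[1] - v[0]), np.linalg.norm(v[2] - v[1]))  # edge lengths
    return {"vertices": v, "midpoints": midpoints, "center": center, "size": size}

# Usage: a 2 x 1 table top lying in the z = 0.7 plane.
features = face_geometric_features([(0, 0, 0.7), (2, 0, 0.7), (2, 1, 0.7), (0, 1, 0.7)])
print(features["center"], features["size"])  # [1.  0.5 0.7] (2.0, 1.0)
```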
Step S84, obtaining and displaying at least one guiding position point on the first surface according to the geometric feature information and the target guiding rule;
In combination with the description of the guidance rule in the above embodiment, the target guidance rule may indicate the type of geometric feature from which guide position points are determined, and may further indicate, as needed, whether to enable the automatic adsorption function during the subsequent process in which the moving target virtual object approaches the target display position; this is not limited in this application.
Optionally, in order to obtain the target guidance rule, the present application may call, from preset guidance rules configured for surfaces with different geometric features, the preset guidance rule corresponding to the identified geometric feature information as the target guidance rule, and in this case, a corresponding relationship between different geometric feature information or different types of geometric feature information (which is not limited in classification manner, such as a geometric figure type, etc.) and different guidance rules may be pre-constructed.
In still other embodiments, the user may perform a guidance rule configuration trigger operation in any of the trigger manners described above, such as pressing a corresponding function button on an interactive object, a voice trigger signal containing guidance rule configuration content, or a preset interaction gesture, so that the electronic device outputs a guidance rule configuration interface for the application scene in response to the guidance rule configuration trigger operation. As shown in fig. 9, a plurality of guidance modes to be selected are presented on this interface, such as the object guidance switch shown in fig. 9, and the guidance mode of an object (i.e., the object lock point mode shown in fig. 9) may include, but is not limited to: geometric feature points such as end points, midpoints, center points, geometric center points, insertion points, perpendicular points, tangent points, closest points, extension points, parallel points, quartering points, intersection points, and the like (i.e., different object lock point options), and may further include an object guidance tracking option, an automatic adsorption function option, and the like (not shown in fig. 9); these are optional and are not listed exhaustively in this application.
Taking the geometric center point as an example, the polygon may be divided into triangles, and the centroid of the whole face is then calculated from the centroid and area of each triangle, which is not limited in the present application. It should be noted that, in the guide position configuration interface, the user may also flexibly configure geometric feature points according to the actual situation, such as other equal-division points including 6 equal-division points and 8 equal-division points, or other meaningful geometric feature points, which are not illustrated one by one in this application.
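The following is a minimal sketch of that area-weighted centroid computation; the fan triangulation from the first vertex is one possible choice and is an assumption of this example:

```python
# Centroid of a planar polygon: split it into fan triangles from the first vertex,
# then combine triangle centroids weighted by triangle area.
import numpy as np

def polygon_centroid(vertices):
    """vertices: planar polygon corners (x, y) in order; returns the area centroid."""
    v = np.asarray(vertices, dtype=float)
    total_area = 0.0
    weighted = np.zeros(2)
    for i in range(1, len(v) - 1):
        a, b, c = v[0], v[i], v[i + 1]                              # fan triangle
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))           # triangle area
        centroid = (a + b + c) / 3.0                                # triangle barycenter
        total_area += area
        weighted += area * centroid
    return weighted / total_area

print(polygon_centroid([(0, 0), (4, 0), (4, 2), (0, 2)]))  # [2. 1.] for a rectangle
```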
The user can then select the required guidance mode from the candidate guidance modes output on the guidance rule configuration interface, and the electronic device determines the target guidance rule in response to the selection operation on the candidate guidance modes, and thus determines each guide position point on the first surface according to the target guidance rule and the geometric feature information. Optionally, for an irregular first scene object, other types of guide position points on the first scene object may further be determined from the guide position points on multiple first surfaces and other attribute information of those surfaces, such as their areas; the embodiment of the present application is described by taking the determination of guide position points on the first surface only as an example.
After the electronic device determines each guide position point, in order to highlight them and draw the user's attention, a visual prompting manner may be adopted to adjust the display mode and display state of each guide position point, for example using a preset AR mark, an animation, or other display manners; different types of guide position points may also be distinguished by display colors, and the like.
Step S85, in response to a moving operation of the interactive object on the target virtual object, obtaining the relative distance between the real-time moving position of the target virtual object and each guide position point;
step S86, determining that any relative distance is smaller than the first distance threshold, and moving the target virtual object to the corresponding guide position point for display.
Regarding the implementation process of step S85 and step S86, reference may be made to the method described above, and this embodiment is not described in detail.
Thus, in this example of the application, geometric feature analysis is performed on the first surface of the first scene object in the application scene, and each guide position point on the first surface is determined and displayed according to the target guidance rule, so that the user can determine the target display position of the target virtual object and move the target virtual object directly toward the required guide position point, with further fine position adjustment as needed until it is displayed at the target display position. Compared with a processing mode of blindly moving the object without any guide position point, this greatly improves the movement efficiency and accuracy of the virtual object. Moreover, when the target virtual object is close enough to the guide position point of the target display position, it can be automatically adsorbed to that guide position point for display, so that the user does not need to move it precisely, which further improves the movement efficiency and accuracy and enriches the interaction modes of AR/VR and other electronic devices.
Referring to fig. 11, a flowchart of a further optional example of the interaction processing method proposed by the present application is shown. This embodiment may be a further optional detailed implementation of the interaction processing method described in the above embodiments, and describes another way of obtaining position guide information for a target virtual object, different from that proposed above: a guide graphic is drawn online as position guide information for placing the target virtual object, and the electronic device may obtain the position guide information for the target virtual object in response to a position guide trigger operation for it, although the implementation is not limited to the details described in this embodiment. The method of this embodiment may still be performed by an electronic device; as shown in fig. 11, the method may include:
step S111, outputting any application scene obtained by image rendering;
step S112, responding to the position guide creating operation aiming at the target virtual object, creating a guide graph according to the motion information of the interactive object, and displaying the created guide graph in the application scene;
The application scene may be as described in the above embodiments and include at least one object, such as a real object of a real environment or a virtual object of a virtual environment, or a combination of multiple objects from both; in this embodiment, the application scene also opens a virtual environment in which the virtual object is to be placed, and the application scene may contain no virtual object yet, waiting for the user to place one.
If a plurality of target virtual objects need to be placed in the application scene and their placement positions follow a certain layout rule, for example certain faces/points lying on a straight line, being placed on the same face, or being arranged in a specific shape (such as a triangle, a circle, the shape of a certain animal, and the like), having the user place each target virtual object manually by eye is very inefficient and can hardly achieve the required placement effect.
Based on this, the user can operate the interactive object to execute a position guide creation operation. The operation content may differ according to the type of guide graphic to be created; for example, according to the method described above, creation trigger manners such as corresponding creation function buttons, key combinations, or designated interaction gestures may be configured in advance for different types of guide graphics. After the user determines the type of guide graphic to be created this time, the user enters the creation mode of that guide graphic through the corresponding creation trigger manner, then moves the interactive object to complete the drawing of the guide graphic, and the electronic device, responding synchronously to the motion information of the interactive object, draws the guide graphic in real time until the creation is completed.
For example, taking the interactive object as a handle used to interact with the application scene output by the electronic device, in the process of creating a guide graphic such as a guide line, with reference to the scene diagram shown in fig. 12a, the user may press a first create-graphic button on the handle corresponding to the guide line, enter a line-type graphic creation mode, and take the current position as the start position of the guide line to be created. As the handle moves, a line consistent with the movement track of the handle is drawn from the start position in the virtual space of the application scene and changes continuously with the movement of the handle, until the first create-graphic button is released, which determines the end position of the drawn/created graphic and completes the creation of the guide line. The whole process can be shown synchronously in the space of the application scene so that the user can watch the graphic being created, and the display state of the graphic currently being created can be adjusted to differ from other objects in the application scene, for example by using a preset color.
It should be noted that the created guide line may be a straight line, a curve, a broken line, etc.; the present application does not limit its line type, which may be determined as the case may be. The line type and representation form (such as a solid line, a dashed line, a dotted line, etc.) of the guide line may be selected before it is created, and the drawn graphic is then presented in that line type and form as the user moves the interactive object; the specific process is not described in detail. The second row of fig. 12a only takes drawing a straight guide line as an example; the creation processes of guide lines of other types or forms are similar and are not detailed in the present application.
In practical applications, since the guide line is generally used to align a plurality of objects, its exact length may not matter, and a preset length of the guide line may be configured in advance to improve the efficiency and regularity of graphic creation. In this way, when the guide line creation is completed according to the above method, i.e., its end position has been determined and the operation on the interactive object released, the guide line may directly snap to the preset length. Similarly, for curves, a curve waveform may also be configured in advance so that, when the user draws a similar waveform, a guide curve with the standard waveform can be generated directly; the implementation process is not detailed in this application.
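A minimal sketch of this press/move/release creation flow, with an optional snap to a preset length, is shown below; the class name, button callbacks, and the preset length value are assumptions introduced only for the example:

```python
# Create a guide line between the handle position at button press and at button
# release, then optionally snap it to a preconfigured length along the drawn direction.
import numpy as np

PRESET_LINE_LENGTH = 1.5  # assumed preconfigured guide line length, in scene units

class GuideLineBuilder:
    def __init__(self):
        self.start = None

    def on_button_press(self, handle_position):
        self.start = np.asarray(handle_position, dtype=float)

    def on_button_release(self, handle_position, snap_to_preset=True):
        end = np.asarray(handle_position, dtype=float)
        if snap_to_preset:
            direction = end - self.start
            norm = np.linalg.norm(direction)
            if norm > 0:                       # keep the drawn direction, fix the length
                end = self.start + direction / norm * PRESET_LINE_LENGTH
        return self.start, end                 # the created guide line

builder = GuideLineBuilder()
builder.on_button_press((0.0, 0.0, 0.0))
print(builder.on_button_release((0.9, 0.0, 0.0)))  # end point snapped to (1.5, 0, 0)
```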
Step S113, responding to the display state editing operation of the displayed guide graph, adjusting the display state of the guide graph, and obtaining a target guide graph aiming at the target virtual object;
step S114, forming position guide information for the target virtual object by at least the target guide graph;
After the guide line is drawn according to the method described above, attributes such as its current display posture and/or position in the application scene space may not meet the application requirement, and the current display position, display posture, and the like of the created guide line may therefore be adjusted. Referring to the guide line editing scene diagram shown in fig. 12b, the user may move the handle toward the guide line presented in the application scene, and the distances between the handle position and the geometric feature points of the guide line (such as the two end points and the center point shown in fig. 12b) are calculated in real time. If any of these distances is smaller than a second distance threshold (whose value is not limited and may be determined as the case may be), it may be determined to enter the display state editing mode of the guide line, and the display color of the guide line may be adjusted to an editing display color as needed (other display states of the guide line may also be adjusted, not only the display color), so that the user can visually see the mode the guide line is in.
Then, according to the editing requirement on the guide line, the user can press the button implementing the corresponding editing function and move the interactive object toward the corresponding geometric feature point. When the distance between the interactive object and a geometric feature point (such as the center point or an end point) is smaller than a third distance threshold (whose value is not limited), the entered display state editing mode may be a moving mode: as shown on the upper right of fig. 12b, the guide line moves synchronously with the movement of the interactive object until it reaches the target position and the pressed editing function button is released. Similarly, if the posture of the guide line needs to be adjusted, the user can hold the handle close to one end point of the guide line and, after entering the end point editing mode, rotate the guide line as shown on the lower right of fig. 12b, changing its posture in the application scene space until the requirement for assisting the placement of the target virtual object is met, and then release the pressed editing function button.
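As a minimal illustrative sketch of this distance-based mode selection (the threshold values, mode names, and the mapping of the center point to a move edit and an end point to a rotate edit are assumptions drawn from the description above):

```python
# Enter the guide line editing mode when the handle is near any geometric feature
# point; select a move edit near the center point and a rotate edit near an end point.
import numpy as np

SECOND_DISTANCE_THRESHOLD = 0.25   # assumed: enter editing mode
THIRD_DISTANCE_THRESHOLD = 0.10    # assumed: select a concrete feature point

def guide_line_edit_mode(handle_pos, start, end):
    handle = np.asarray(handle_pos, dtype=float)
    features = {"start": np.asarray(start, float),
                "end": np.asarray(end, float),
                "center": (np.asarray(start, float) + np.asarray(end, float)) / 2}
    distances = {name: np.linalg.norm(handle - p) for name, p in features.items()}
    if min(distances.values()) >= SECOND_DISTANCE_THRESHOLD:
        return None                                    # stay out of editing mode
    nearest = min(distances, key=distances.get)
    if distances[nearest] < THIRD_DISTANCE_THRESHOLD:
        return "move" if nearest == "center" else "rotate"
    return "editing"                                   # highlighted, no edit selected yet

print(guide_line_edit_mode((0.52, 0.02, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # move
```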
Similarly, for the creation of a guide graphic such as a guide surface, the second create-graphic button on the handle corresponding to the guide surface is pressed, and two guide lines may be created according to the method described above. When the second guide line is created, the end position of the first guide line may serve as the start position of the second guide line: the second create-graphic button is pressed for the first time, the handle is moved to draw the first guide line to its end position, the second create-graphic button is released and then pressed again, and the second guide line is drawn from that start position, as shown in the first row of fig. 12a, until the second create-graphic button is released at the end position of the second guide line. Three vertex positions are thereby obtained, from which the normal is calculated to form an equilateral triangle, or a triangle of another type; this is not limited by the present application, since the area and shape of the guide surface carry no particular meaning and any configuration is possible, the equilateral triangle being used here only as an example. For the guide surface represented by the resulting triangle, its display state, such as its display color, can also be adjusted to highlight it in the application scene space.
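The normal of such a triangular guide surface can be computed from its three vertices as in the following sketch (the function name and the triangular representation are assumptions for the example):

```python
# Compute the unit normal of a guide surface defined by three vertex positions.
import numpy as np

def guide_surface_normal(v0, v1, v2):
    v0, v1, v2 = (np.asarray(p, dtype=float) for p in (v0, v1, v2))
    normal = np.cross(v1 - v0, v2 - v0)     # perpendicular to both edge vectors
    length = np.linalg.norm(normal)
    if length == 0:
        raise ValueError("vertices are collinear, no guide surface can be formed")
    return normal / length

# Usage: three vertices lying in the z = 0 plane give a normal along the z axis.
print(guide_surface_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # [0. 0. 1.]
```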
Then, if the created guide surface needs to be moved or rotated, the implementation is similar to moving and rotating the guide line. As shown in fig. 12c, the handle gradually approaches the guide surface, and when the distance between the handle position and any of several geometric feature points of the guide surface (such as the three vertices and the center of the triangle) is smaller than a fourth distance threshold (whose value is not limited and may be determined as the case may be), an editing mode may be entered, at which point the display state of the guide surface may be adjusted as needed. Then, according to the editing requirement for the guide surface and the corresponding editing function button, the guide surface is moved synchronously with the handle movement when the handle is near the center of the triangle, and rotated with the handle movement when the handle is near any vertex of the triangle.
The method of creating a guide graphic according to the motion information of the interactive object and constructing position guide information for the target virtual object in the present application includes, but is not limited to, the methods described in step S113 and step S114 above, which are given only by way of example. The creation and editing of other types of guide graphics are similar and are not illustrated in detail; and each guide graphic editing method, including but not limited to the moving and rotating manners described above, is not limited to the handle as the interactive object. For other interactive objects or other operation manners, the implementation of creating and editing the guide graphic may refer to the process described above and is not detailed in this application.
In still other embodiments, the user may also select directly from various preset guide graphics without drawing them online, which improves the efficiency of obtaining position guide information and better meets the needs of users who do not wish to draw guide graphics on the electronic device. Thus, the electronic device can, in response to a position guide invoking operation for the target virtual object, output a guide graphic display interface, present at least one guide graphic on it for the user to choose from, and, in response to a selection operation on the presented guide graphics, compose the position guide information for the target virtual object from the selected guide graphic.
Optionally, a guide graphic that has been directly selected and invoked may also be edited according to the editing method described above, and the edited guide graphic then forms the position guide information. That is, a first positional relationship between the trigger position of the interactive object and a geometric feature point of a pending guide graphic (i.e., a guide graphic that has just been created or directly selected) is detected; if the first positional relationship meets the guide graphic editing condition, the editing mode of the pending guide graphic is entered and its display state is updated, so that the pending guide graphic presented in the application scene is controlled to move or rotate according to the relative position change between the trigger position of the interactive object and a specified geometric feature point of the pending guide graphic, thereby obtaining the target guide graphic for the target virtual object; afterwards, the updated display state of the target guide graphic may be restored to its initial display state and the editing mode of the pending guide graphic exited.
In still other embodiments, the position guidance information configured as described above may include, but is not limited to, the created or selected target guidance figure, and may also include relevant parameters of the target guidance figure as needed, such as spatial coordinates of each vertex of the target guidance figure in the application scene space, and the like, as the case may be.
Step S115, responding to the moving operation of the interactive object to the target virtual object, and controlling the target virtual object to synchronously move and display according to the motion information of the interactive object;
step S116, obtaining the relative position relation between the real-time moving position of the target virtual object and the target guide graph in the position guide information;
regarding the implementation processes of step S115 and step S116, reference may be made to the description of the corresponding parts in the above embodiments, which is not described in detail herein.
And step S117, controlling the target virtual object to move to the target display position on the target guide graph for displaying according to the relative position relation and the target guide rule.
In conjunction with the above description of the target guidance rule, reference is made to the scene diagrams of moving the target virtual object to the target display position shown in fig. 12d and 12e. Assuming that the target guide graphic is a guide line and the target virtual object is a cube, as shown in fig. 12d, the cube is moved by the handle toward the center of the guide line; when the distance between the two is smaller than the first distance threshold, the cube can be automatically attached to the center of the guide line, or moved there manually by the user (in this case, the midpoint of the guide line, the center of gravity of the cube, and the like can be presented in the application scene). Then, if the posture presented by the cube does not meet the requirement, the cube can be rotated to the target display pose by the handle according to the guide graphic rotation manner described above, or automatically rotated to align with the tangent direction of the guide line and then displayed.
If the target guide graphic is a guide surface, as shown in fig. 12e, the cube is moved by the handle toward the guide surface; once the distance between the cube and the guide surface is smaller than the first distance threshold, the cube may be automatically adsorbed onto the guide surface. If other cubes already exist on the guide surface, the cube placed this time may be adsorbed to a position adjacent to an existing cube for display, or the cube placed this time may be placed at the start position and the other cubes at that position moved to adjacent positions; the implementation may refer to the description of the automatic adsorption function in the above embodiments and is not repeated here. Through this processing manner, aligned placement of the virtual objects is realized, and processing efficiency and accuracy are improved.
It can be understood that the implementation of moving one or more target virtual objects to one or more display positions for display according to other types of guide graphics presented in the application scene includes, but is not limited to, the automatic adsorption function described above; the user may also, by referring to the guide graphic and its displayed geometric feature points, together with the geometric feature points of the target virtual object, move the target virtual object manually to the corresponding target display position, which is not detailed in this application.
In still other embodiments, in each of the interaction processing methods described above, the target display position of each target virtual object may be determined, and/or the target display position may be fine-tuned, and/or the display state of the target virtual object may be adjusted, by analyzing the geometric features and attribute information of each target virtual object and the guide position information, and the like. Optionally, if, in the process of determining the target display position, the attribute analysis shows that the position is not suitable for placing the virtual object, corresponding prompt information may be output to prompt the user to move the target virtual object and re-determine the target display position.
If, according to the above analysis, it is determined that display information of a preset type, such as important information, exists at the target display position, then when the target virtual object is placed at the target display position for display, the transparency of the target virtual object can be adjusted so that the user can still see the display information underneath through the target virtual object.
Referring to fig. 13, a schematic structural diagram of an alternative example of the interaction processing apparatus proposed in the present application, the apparatus may be deployed on an electronic device side, as shown in fig. 13, and the apparatus may include:
a position guidance information output module 131, configured to obtain position guidance information for a target virtual object, and present the position guidance information in an output application scene;
a target display position determining module 132, configured to determine, in response to an interaction triggering operation on the target virtual object, a target display position of the target virtual object according to the interaction triggering position for the target virtual object and the position guidance information;
a target virtual object display module 133, configured to fuse the target virtual object to the target display position in the application scene for display.
In some embodiments, the position guidance information output module 131 may include:
the device comprises a guiding position point determining unit, a judging unit and a judging unit, wherein the guiding position point determining unit is used for identifying a first face in which a first scene object exists in an application scene and determining at least one guiding position point on the first face;
a first guidance configuration unit configured to configure position guidance information for the target virtual object using the determined at least one guidance position point.
In still other embodiments, the position guidance information output module 131 may include:
a position guidance information obtaining unit configured to obtain position guidance information for a target virtual object in response to a position guidance trigger operation for the target virtual object; the position guide information includes a target guide pattern constituted by a guide line and/or a guide surface.
Based on the analysis, optionally, the guiding position point determining unit 1311 may include:
the device comprises a first scene object determining unit, a second scene object determining unit and a display unit, wherein the first scene object determining unit is used for determining a first scene object in an application scene according to visual angle information of the application scene;
the geometric feature identification unit is used for determining a first surface where the first scene object exists and identifying geometric feature information of the first surface;
and the guiding position point obtaining unit is used for obtaining at least one guiding position point on the first surface according to the geometric characteristic information and a target guiding rule.
Optionally, in order to determine the first surface where the first scene object exists, the geometric feature recognition unit may include:
a first scanning unit, configured to scan a plurality of faces where the first scene object exists, and determine a face in a specified spatial plane as a first face; or,
a scanning selection unit configured to scan a plurality of faces where the first scene object exists, and detect a selection operation for the plurality of faces;
a first face determination unit configured to determine the selected first face in response to the selection operation.
In order to obtain the target guidance rule, the guidance position point obtaining unit may include:
a guidance rule calling unit, configured to call, from preset guidance rules configured for surfaces of different geometric features, a preset guidance rule corresponding to the identified geometric feature information as a target guidance rule; or,
the guidance rule configuration interface output unit is used for responding to a guidance rule configuration triggering operation, outputting a guidance rule configuration interface aiming at the application scene, and presenting a plurality of guidance modes to be selected on the guidance rule configuration interface;
a guidance mode selection unit configured to determine a target guidance rule in response to a selection operation of the plurality of guidance modes to be selected.
In still other embodiments, the position guidance information obtaining unit may include:
the guide graph creating unit is used for responding to the position guide creating operation aiming at the target virtual object, creating a guide graph according to the motion information of the interactive object and forming the position guide information aiming at the target virtual object; or,
the guiding graphic display interface output unit is used for responding to the position guiding and calling operation aiming at the target virtual object, outputting a guiding graphic display interface and presenting at least one guiding graphic on the guiding graphic display interface;
a second guidance composing unit configured to compose, from the selected guidance figure, position guidance information for the target virtual object in response to a selection operation of the presented guidance figure.
Optionally, the second guide configuring unit may include:
a guidance graphic display unit for displaying the created or selected guidance graphic in the application scene;
the display editing unit is used for responding to the display state editing operation of the displayed guide graph, adjusting the display state of the guide graph and obtaining a target guide graph aiming at a target virtual object;
a third guidance composing unit configured to compose, from at least the target guidance figure, position guidance information for the target virtual object.
In still other embodiments, the target display position determining module 132 may include:
a relative position relation obtaining unit, configured to obtain a relative position relation between a real-time moving position of a preset geometric feature point or surface of the target virtual object and each guide object included in the position guide information; the real-time mobile position is determined based on interactive triggering operation of an interactive object; each guide object comprises a guide position point and/or at least one guide graph;
a target display position determining unit, configured to determine a target display position of the target virtual object in the guidance object according to the relative position relationship and a target guidance rule;
based on this, the target virtual object display module 133 may include:
the mobile display unit is used for directly moving the target virtual object from the real-time moving position to a corresponding target display position in the application scene so as to align a plurality of objects on the same designated guide line or designated guide surface; the object comprises at least the target virtual object;
a display position adjusting unit, configured to detect that any virtual object exists at the target display position in the application scene, move the virtual object to a display position adjacent to the target display position for display, and display the target virtual object at the target display position;
and the display state adjusting unit is used for detecting that the display state of the target virtual object displayed by the application scene does not accord with the display requirement, responding to the adjustment operation of the target virtual object and adjusting the display state of the target virtual object.
It should be noted that, for various modules, units, and the like in the foregoing apparatus embodiments, all of which may be stored in a memory as program modules, and the processor executes the program modules stored in the memory to implement corresponding functions, and for functions implemented by the program modules and their combinations and achieved technical effects, reference may be made to the description of corresponding parts in the foregoing method embodiments, and this embodiment is not described again.
The present application also provides a computer-readable storage medium, on which a computer program can be stored, which can be called and loaded by a processor to implement the steps of the interaction processing method described in the above embodiments.
Finally, it should be noted that, with respect to the above embodiments, unless the context clearly dictates otherwise, the words "a", "an" and/or "the" do not denote a singular number, but may include a plurality. In general, the terms "comprises" and "comprising" merely indicate that steps and elements are included which are explicitly identified, that the steps and elements do not form an exclusive list, and that a method or apparatus may include other steps or elements. An element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In the description of the embodiments herein, "/" means "or" unless otherwise specified, for example, a/B may mean a or B; "and/or" herein is merely an association describing an associated object, and means that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more than two.
Reference herein to terms such as "first," "second," or the like, is used for descriptive purposes only and to distinguish one operation, element, or module from another operation, element, or module without necessarily requiring or implying any actual such relationship or order between such elements, operations, or modules. And are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated, whereby a feature defined as "first" or "second" may explicitly or implicitly include one or more of such features.
In addition, in the present specification, the embodiments are described in a progressive or parallel manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. The device, the electronic device and the medium disclosed by the embodiment correspond to the method disclosed by the embodiment, so that the description is relatively simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. An interaction processing method, the method comprising:
obtaining position guide information aiming at a target virtual object, and presenting the position guide information in an output application scene;
responding to the interactive trigger operation of the target virtual object, and determining a target display position of the target virtual object according to the interactive trigger position aiming at the target virtual object and the position guide information;
fusing the target virtual object to the target display position in the application scene for display.
2. The method of claim 1, the obtaining location guidance information for a target virtual object, comprising:
identifying a first face in which a first scene object exists in an application scene, determining at least one guiding position point on the first face, and forming position guiding information for a target virtual object by using the determined at least one guiding position point; and/or,
responding to a position guide trigger operation aiming at a target virtual object, and acquiring position guide information aiming at the target virtual object; the position guide information includes a target guide pattern constituted by a guide line and/or a guide surface.
3. The method of claim 2, the identifying a first face in an application scene in which a first scene object is present, determining at least one guide location point on the first face, comprising:
determining a first scene object in an application scene according to visual angle information of the application scene;
determining a first surface in which the first scene object exists, and identifying geometrical characteristic information of the first surface;
and acquiring at least one guiding position point on the first surface according to the geometric feature information and a target guiding rule.
4. The method of claim 3, the determining a first face in which the first scene object exists, comprising:
scanning a plurality of faces where the first scene object exists, and determining a face in a specified spatial plane as a first face; or,
scanning a plurality of faces in which the first scene object exists, and detecting a selection operation for the plurality of faces;
in response to the selecting operation, the selected first face is determined.
5. The method of claim 3, the target guidance rule obtaining method comprising:
calling a preset guide rule corresponding to the identified geometric feature information as a target guide rule from preset guide rules configured for surfaces with different geometric features; or,
responding to a guide rule configuration triggering operation, outputting a guide rule configuration interface aiming at the application scene, and presenting a plurality of guide modes to be selected on the guide rule configuration interface;
and determining a target guide rule in response to the selection operation of the plurality of guide modes to be selected.
6. The method of claim 2, wherein the acquiring the position guide information for the target virtual object in response to a position guide trigger operation for the target virtual object comprises:
in response to a position guide creation operation for the target virtual object, creating a guide graphic according to motion information of an interacting object, and composing the position guide information for the target virtual object; or
in response to a position guide invocation operation for the target virtual object, outputting a guide graphic display interface, presenting at least one guide graphic on the guide graphic display interface, and, in response to a selection operation on the presented guide graphics, composing the position guide information for the target virtual object from the selected guide graphic.
7. The method of claim 6, wherein the composing the position guide information for the target virtual object comprises:
displaying the created or selected guide graphic in the application scene;
in response to a display state editing operation on the displayed guide graphic, adjusting a display state of the guide graphic to obtain a target guide graphic for the target virtual object; and
composing the position guide information for the target virtual object from at least the target guide graphic.
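Claims 6 and 7 describe creating a guide graphic from the motion of the interacting object and then editing its display state. The sketch below assumes the motion information is a polyline of sampled positions and that the display state is just color, width and visibility; all of these specifics are illustrative assumptions rather than definitions from the application.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]

@dataclass
class GuideLine:
    points: List[Point]            # polyline created from the motion track
    color: str = "#00A0FF"         # display-state attributes the user may edit
    width: float = 2.0
    visible: bool = True

def guide_line_from_motion(track: List[Point], min_spacing: float = 0.05) -> GuideLine:
    """Build a guide line from the motion track of the interacting object
    (e.g. a hand or controller), dropping samples closer than min_spacing."""
    kept: List[Point] = []
    for p in track:
        if not kept or sum((a - b) ** 2 for a, b in zip(p, kept[-1])) ** 0.5 >= min_spacing:
            kept.append(p)
    return GuideLine(points=kept)

def edit_display_state(line: GuideLine, **changes) -> GuideLine:
    """Apply a display-state editing operation (color, width, visibility)."""
    for name, value in changes.items():
        setattr(line, name, value)
    return line

# Usage: create a guide line from four motion samples, then recolor and thicken it.
track = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (0.10, 0.0, 0.0), (0.30, 0.05, 0.0)]
line = edit_display_state(guide_line_from_motion(track), color="#FF8800", width=3.0)
print(len(line.points), line.color)   # 3 #FF8800
```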
8. The method of any one of claims 2 to 7, wherein the determining a target display position of the target virtual object according to the interaction trigger position for the target virtual object and the position guide information comprises:
acquiring a relative positional relationship between a real-time movement position of a preset geometric feature point or face of the target virtual object and each guide object contained in the position guide information, wherein the real-time movement position is determined based on an interaction trigger operation of an interacting object, and each guide object comprises a guide position point and/or at least one guide graphic; and
determining the target display position of the target virtual object among the guide objects according to the relative positional relationship and a target guide rule;
and wherein the fusing the target virtual object to the target display position in the application scene for display comprises:
moving the target virtual object directly from the real-time movement position to the corresponding target display position in the application scene, so that a plurality of objects, including at least the target virtual object, are aligned on a same designated guide line or designated guide surface;
when it is detected that another virtual object already exists at the target display position in the application scene, moving that virtual object to a display position adjacent to the target display position for display, and displaying the target virtual object at the target display position; and
when it is detected that a display state of the target virtual object displayed in the application scene does not meet a display requirement, adjusting the display state of the target virtual object in response to an adjustment operation on the target virtual object.
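Claim 8 amounts to a snap-to-guide ("adsorption") behaviour: the dragged object's real-time position is compared against the guide objects, it snaps to the nearest one within range, and an object already occupying that slot is shifted to a neighbouring slot. The sketch below works on guide position points indexed along a single guide line; the snap radius, the slot model and all names are assumptions made purely for illustration.

```python
import math
from typing import Dict, List, Optional, Tuple

Point = Tuple[float, float, float]

def snap_to_guide(move_pos: Point, guide_points: List[Point],
                  snap_radius: float = 0.1) -> Optional[int]:
    """Return the index of the guide point the dragged object should snap
    ("adsorb") to, or None if no guide point lies within snap_radius."""
    best, best_dist = None, snap_radius
    for i, g in enumerate(guide_points):
        d = math.dist(move_pos, g)
        if d <= best_dist:
            best, best_dist = i, d
    return best

def place_with_displacement(target_idx: int, occupancy: Dict[int, str],
                            obj_id: str, n_slots: int) -> Dict[int, str]:
    """Place obj_id at the target guide slot; if another object already occupies
    it, move that object to an adjacent free slot before placing obj_id.
    (Sketch only: if neither neighbour is free, the occupant is overwritten.)"""
    if target_idx in occupancy:
        for n in (target_idx - 1, target_idx + 1):
            if 0 <= n < n_slots and n not in occupancy:
                occupancy[n] = occupancy.pop(target_idx)
                break
    occupancy[target_idx] = obj_id
    return occupancy

# Usage: four guide points on a line; the drag ends near slot 2, which is occupied.
points = [(float(i), 0.0, 0.0) for i in range(4)]
idx = snap_to_guide((1.93, 0.02, 0.0), points)                               # -> 2
occupancy = place_with_displacement(idx, {2: "vase"}, "lamp", n_slots=len(points))
print(idx, occupancy)                                                        # 2 {1: 'vase', 2: 'lamp'}
```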
9. An interaction processing apparatus, the apparatus comprising:
a position guide information output module, configured to obtain position guide information for a target virtual object and present the position guide information in an application scene that is output;
a target display position determining module, configured to determine, in response to an interaction trigger operation on the target virtual object, a target display position of the target virtual object according to the interaction trigger position for the target virtual object and the position guide information; and
a target virtual object display module, configured to fuse the target virtual object to the target display position in the application scene for display.
10. An electronic device, the electronic device comprising:
a display module; a plurality of sensors;
a memory for storing a program implementing the interaction processing method according to any one of claims 1 to 8; and
a processor for loading and executing the program stored in the memory to implement the interaction processing method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111646004.3A CN114706511B (en) | 2021-12-29 | 2021-12-29 | Interactive processing method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111646004.3A CN114706511B (en) | 2021-12-29 | 2021-12-29 | Interactive processing method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114706511A true CN114706511A (en) | 2022-07-05 |
CN114706511B CN114706511B (en) | 2024-07-23 |
Family
ID=82167637
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111646004.3A Active CN114706511B (en) | 2021-12-29 | 2021-12-29 | Interactive processing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114706511B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105898346A (en) * | 2016-04-21 | 2016-08-24 | 联想(北京)有限公司 | Control method, electronic equipment and control system |
CN111881861A (en) * | 2020-07-31 | 2020-11-03 | 北京市商汤科技开发有限公司 | Display method, device, equipment and storage medium |
US20210034870A1 (en) * | 2019-08-03 | 2021-02-04 | VIRNECT inc. | Augmented reality system capable of manipulating an augmented reality object |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105898346A (en) * | 2016-04-21 | 2016-08-24 | 联想(北京)有限公司 | Control method, electronic equipment and control system |
US20210034870A1 (en) * | 2019-08-03 | 2021-02-04 | VIRNECT inc. | Augmented reality system capable of manipulating an augmented reality object |
CN111881861A (en) * | 2020-07-31 | 2020-11-03 | 北京市商汤科技开发有限公司 | Display method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114706511B (en) | 2024-07-23 |
Similar Documents
Publication | Title |
---|---|
US11238666B2 (en) | Display of an occluded object in a hybrid-reality system | |
US20220414993A1 (en) | Image processing apparatus, image processing method, and program | |
US9778464B2 (en) | Shape recognition device, shape recognition program, and shape recognition method | |
US9685005B2 (en) | Virtual lasers for interacting with augmented reality environments | |
US11755122B2 (en) | Hand gesture-based emojis | |
US10401967B2 (en) | Touch free interface for augmented reality systems | |
CN105259654B (en) | Spectacle terminal and its control method | |
CN105027033B (en) | Method, device and computer-readable media for selecting Augmented Reality object | |
CN110476142A (en) | Virtual objects user interface is shown | |
EP1292877B1 (en) | Apparatus and method for indicating a target by image processing without three-dimensional modeling | |
US9979946B2 (en) | I/O device, I/O program, and I/O method | |
US20170256073A1 (en) | Minimizing variations in camera height to estimate distance to objects | |
WO2020048441A1 (en) | Communication connection method, terminal device and wireless communication system | |
US20190312917A1 (en) | Resource collaboration with co-presence indicators | |
US20190377474A1 (en) | Systems and methods for a mixed reality user interface | |
WO2019010337A1 (en) | Volumetric multi-selection interface for selecting multiple entities in 3d space | |
US11520409B2 (en) | Head mounted display device and operating method thereof | |
Lee et al. | Tunnelslice: Freehand subspace acquisition using an egocentric tunnel for wearable augmented reality | |
CN114706511A (en) | Interaction processing method and device and electronic equipment | |
WO2019207875A1 (en) | Information processing device, information processing method, and program | |
WO2022163772A1 (en) | Information processing method, information processing device, and non-volatile storage medium | |
CN112292658A (en) | Information processing apparatus, information processing method, and program | |
TWI821878B (en) | Interaction method and interaction system between reality and virtuality | |
US20240160294A1 (en) | Detection processing device, detection processing method, information processing system | |
CN116301370A (en) | Information processing method and device and electronic equipment |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |