CN112181551A - Information processing method and related equipment - Google Patents

Information processing method and related equipment

Info

Publication number
CN112181551A
CN112181551A (application CN202010901507.XA)
Authority
CN
China
Prior art keywords
electronic device
virtual area
operation instruction
display interface
virtual
Prior art date
Legal status
Pending
Application number
CN202010901507.XA
Other languages
Chinese (zh)
Inventor
许强
孙军渭
陈源
何彦杉
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010901507.XA
Publication of CN112181551A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/451 - Execution arrangements for user interfaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/10 - Geometric effects
    • G06T 15/20 - Perspective computation
    • G06T 15/205 - Image-based rendering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/048 - Indexing scheme relating to G06F3/048
    • G06F 2203/04806 - Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Abstract

The embodiment of the application discloses an information processing method and related equipment, and the method can be used in the fields of augmented reality, virtual reality, and mixed reality. The method is applied to a first electronic device, where the first electronic device is an augmented reality device, a virtual reality device, or a mixed reality device, and the method includes the following steps: acquiring a first operation instruction, and changing, in response to the first operation instruction, the rendering effect of objects located inside a first virtual area in a display interface, where a plurality of objects are displayed in the display interface and the first virtual area is an area in the display interface. By changing the rendering effect of the objects inside the first virtual area, how much those objects obscure others can be reduced, so that a user can see an occluded object more easily and interact with it more conveniently; in addition, the color of the objects inside the first virtual area can be changed to present richer picture effects.

Description

Information processing method and related equipment
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to an information processing method and related device.
Background
With the popularization of augmented reality (AR), virtual reality (VR), and mixed reality (MR) technologies, AR, VR, and MR devices have been widely used in many work and entertainment scenarios.
However, in a scene with densely placed objects, many objects are displayed on the display interface and may occlude one another, making it difficult for a user to clearly see an occluded object, let alone interact with it.
Disclosure of Invention
The embodiment of the application provides an information processing method and related equipment. By changing the rendering effect of the objects located inside a first virtual area in a display interface, how much those objects obscure others can be reduced, so that a user can see an occluded object more easily and interact with it more conveniently; in addition, the color of the objects inside the first virtual area can be changed to present richer picture effects.
In order to solve the above technical problem, an embodiment of the present application provides the following technical solutions:
In a first aspect, an embodiment of the present application provides an information processing method, which may be used in the fields of augmented reality, virtual reality, and mixed reality. The method may include the following steps: the first electronic device acquires a first operation instruction, where the first electronic device is an AR device, a VR device, or an MR device. Specifically, if the first electronic device and the second electronic device are two independent devices, the first electronic device receives a first operation instruction sent by the second electronic device, where the second electronic device is the device through which the user inputs operations. If the first electronic device and the second electronic device are integrated in the same device, the first electronic device acquires a first operation input by the user and generates the first operation instruction corresponding to the first operation. In response to the first operation instruction, the first electronic device changes the rendering effect of objects located inside a first virtual area in a display interface of the virtual space, where a plurality of objects are displayed in the display interface and the first virtual area is an area in the display interface. The objects inside the first virtual area may include only objects located completely inside the first virtual area. Alternatively, they may include both objects located completely inside the first virtual area and, for objects intersecting the first virtual area, the portions of those objects located inside the first virtual area. The display interface on which the first electronic device changes the rendering effect may include the current display interface or one or more display interfaces to be displayed next; in other implementations, it does not include the current display interface and includes only one or more display interfaces to be displayed next. The manner of changing the rendering effect of an object is any one of the following three: making the object invisible, making the object transparent, or changing the object's color.
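For illustration only, the two membership policies described above (counting only objects completely inside the first virtual area, or also counting objects that intersect it) could be modeled as follows. This is a minimal sketch assuming a sphere-shaped area and axis-aligned bounding boxes; none of the type or function names come from the patent:

```typescript
// Hypothetical types; the patent does not prescribe any data model.
interface Vec3 { x: number; y: number; z: number; }
interface AABB { min: Vec3; max: Vec3; }                 // axis-aligned bounding box
interface VirtualObject { id: string; bounds: AABB; }
interface SphereArea { center: Vec3; radius: number; }   // the first virtual area

function pointInside(p: Vec3, a: SphereArea): boolean {
  const dx = p.x - a.center.x, dy = p.y - a.center.y, dz = p.z - a.center.z;
  return dx * dx + dy * dy + dz * dz <= a.radius * a.radius;
}

function corners(b: AABB): Vec3[] {
  const out: Vec3[] = [];
  for (const x of [b.min.x, b.max.x])
    for (const y of [b.min.y, b.max.y])
      for (const z of [b.min.z, b.max.z]) out.push({ x, y, z });
  return out;
}

// Policy 1: only objects located completely inside the first virtual area.
// (A sphere is convex, so all eight corners inside implies the box is inside.)
function completelyInside(obj: VirtualObject, a: SphereArea): boolean {
  return corners(obj.bounds).every(c => pointInside(c, a));
}

// Policy 2: also count objects that intersect the area; the portion of such
// an object lying inside the area would get the changed rendering effect.
// (The corner test is a simplification: a box can clip a sphere without
// having any corner inside it.)
function intersectsArea(obj: VirtualObject, a: SphereArea): boolean {
  return corners(obj.bounds).some(c => pointInside(c, a));
}
```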
In this implementation, in response to the acquired first operation instruction, the rendering effect of objects located inside the first virtual area in the display interface is changed, where the first virtual area is an area in the display interface. In a scene with dense objects, changing the rendering effect of the objects inside this area reduces how much they obscure other objects, so that the user can see an occluded object clearly and interact with it conveniently; in addition, the color and other properties of the objects inside the area can be changed to present richer picture effects.
In one possible implementation manner of the first aspect, the method may further include: the first electronic device displays the first virtual area in the display interface in response to the first operation instruction. Specifically, the first electronic device may make the first virtual area apparent by changing the rendering effect of objects outside the first virtual area, where the changed rendering effect of objects inside the first virtual area differs from the changed rendering effect of objects outside it. "Outside" and "inside" the first virtual area are a pair of opposing concepts: the objects outside the first virtual area may include only objects located completely outside the first virtual area, excluding any object that intersects it. Alternatively, they may include both objects located completely outside the first virtual area and, for objects intersecting the first virtual area, the portions of those objects located outside the first virtual area (excluding the portions located inside it).
In this implementation, the first virtual area is explicitly shown in the display interface of the virtual space, so that the user can recognize the boundary of the first virtual area more clearly, which helps improve user stickiness.
In one possible implementation manner of the first aspect, the changing, by the first electronic device, of the rendering effect of objects located inside the first virtual area in the presentation interface may include: the first electronic device makes the objects located inside the first virtual area invisible, that is, it adjusts their transparency to one hundred percent. Alternatively, the first electronic device makes the objects located inside the first virtual area transparent, that is, it adjusts their transparency to a value between one percent and ninety-nine percent, so that objects hidden behind them in the presentation interface can be seen.
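A minimal sketch of the three effect changes named in this application (invisibility, transparency, color change), assuming a simple per-object material model; the names are illustrative, not the patent's implementation:

```typescript
// Illustrative material model; not the patent's implementation.
type RenderEffect =
  | { kind: 'invisible' }                     // transparency adjusted to 100%
  | { kind: 'transparent'; opacity: number }  // opacity 0.01-0.99, i.e. 1%-99% transparency
  | { kind: 'recolor'; color: string };

interface Material { opacity: number; color: string; }
interface RenderableObject { id: string; visible: boolean; material: Material; }

function changeRenderingEffect(obj: RenderableObject, effect: RenderEffect): void {
  switch (effect.kind) {
    case 'invisible':
      obj.visible = false;                    // fully transparent: the object disappears
      break;
    case 'transparent':
      obj.visible = true;
      obj.material.opacity = effect.opacity;  // objects hidden behind it become visible
      break;
    case 'recolor':
      obj.visible = true;
      obj.material.color = effect.color;      // presents richer picture effects
      break;
  }
}
```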
This implementation provides two different ways for the first electronic device to change the rendering effect of objects located inside the first virtual area in the display interface, improving the flexibility of the scheme.
In one possible implementation manner of the first aspect, the method further includes: the first electronic device acquires a second operation instruction and, in response to the second operation instruction, controls a first object associated with the first virtual area in the display interface to rotate. The second operation instruction carries at least a rotation direction; it may specifically be an instruction to rotate leftward, rightward, upward, downward, toward the upper left, toward the lower left, toward the upper right, or toward the lower right, each of which triggers the first electronic device to rotate the associated objects in the display interface of the virtual space in the corresponding direction. The first object is any one of the following three: an object intersecting the boundary of the first virtual area in the presentation interface of the first electronic device; an object located inside the first virtual area in the presentation interface; or the objects within the first virtual area in the presentation interface, which include both the objects intersecting the boundary of the first virtual area and the objects completely inside it. The minimum execution unit of the first electronic device when performing the rotation operation is a complete object.
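As a sketch of what such a rotation could look like, assuming the associated objects orbit the center of the first virtual area by a yaw step derived from the carried direction (all names are assumptions); note that the whole object is the minimum unit, so positions move but objects are never split:

```typescript
interface Vec3 { x: number; y: number; z: number; }
interface MovableObject { id: string; position: Vec3; }
type RotateDirection = 'left' | 'right' | 'up' | 'down'; // diagonals combine two of these

// Rotate a point around the vertical axis through `center` (yaw).
function rotateAroundY(p: Vec3, center: Vec3, angleRad: number): Vec3 {
  const dx = p.x - center.x, dz = p.z - center.z;
  const cos = Math.cos(angleRad), sin = Math.sin(angleRad);
  return { x: center.x + dx * cos - dz * sin, y: p.y, z: center.z + dx * sin + dz * cos };
}

// Second operation instruction: every associated object rotates as a whole,
// since the minimum execution unit is a complete object.
function rotateAssociatedObjects(
  objects: MovableObject[], areaCenter: Vec3, dir: RotateDirection, stepRad = 0.1,
): void {
  const yaw = dir === 'left' ? stepRad : dir === 'right' ? -stepRad : 0;
  for (const obj of objects) {
    if (yaw !== 0) obj.position = rotateAroundY(obj.position, areaCenter, yaw);
    // 'up'/'down' would rotate about a horizontal axis in the same way (omitted).
  }
}
```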
In this implementation, the first electronic device may further control, in response to the second operation instruction, the first object associated with the first virtual area in the display interface to rotate, so that the user can rotate away objects he or she does not want to see through the first virtual area; occluded objects in the display interface then become easier to see clearly and easier to interact with.
In one possible implementation of the first aspect, the first object associated with the first virtual area is any one of three types: an object in the display interface of the virtual space that is within the user's field of view and intersects the boundary of the first virtual area; an object in the display interface of the virtual space that is within the user's field of view and located inside the first virtual area; or the objects in the display interface of the virtual space that are within the user's field of view and within the first virtual area.
In one possible implementation manner of the first aspect, the method further includes: the first electronic device acquires a third operation instruction, enlarges the first virtual area in response to the third operation instruction, selects the objects located inside the enlarged first virtual area from the multiple objects on the display interface, and changes the rendering effect of those objects. Alternatively, the first electronic device acquires a third operation instruction, reduces the first virtual area in response to the third operation instruction, selects the objects located inside the reduced first virtual area from the multiple objects on the display interface, and changes the rendering effect of those objects. The process of enlarging or reducing the first virtual area may be continuous or discrete.
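One way to picture this step, again under the assumption of a sphere-shaped area: the third operation instruction scales the radius, the inside set is recomputed, and the rendering effect is re-applied. A continuous enlargement would simply call this repeatedly with factors close to 1:

```typescript
interface Vec3 { x: number; y: number; z: number; }
interface SphereArea { center: Vec3; radius: number; }
interface SceneObject { id: string; position: Vec3; }

function isInside(o: SceneObject, a: SphereArea): boolean {
  const dx = o.position.x - a.center.x, dy = o.position.y - a.center.y, dz = o.position.z - a.center.z;
  return dx * dx + dy * dy + dz * dz <= a.radius * a.radius;
}

// Third operation instruction: enlarge (factor > 1) or reduce (factor < 1)
// the first virtual area, then change rendering for the new inside set.
function resizeAreaAndReselect(
  area: SphereArea, factor: number, objects: SceneObject[],
  changeEffect: (o: SceneObject) => void,
): SceneObject[] {
  area.radius *= factor;   // one discrete step; call repeatedly for a continuous feel
  const inside = objects.filter(o => isInside(o, area));
  inside.forEach(changeEffect);
  return inside;
}
```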
In this implementation, not only can the rendering effect of objects inside the first virtual area be changed through the first operation instruction, but the first virtual area can also be enlarged or reduced through the third operation instruction, with the rendering effect of the objects inside the resized area changed correspondingly; that is, which objects are to be changed can be flexibly adjusted, further improving the convenience of interaction between the user and the objects displayed in the display interface.
In a possible implementation manner of the first aspect, a ray is also displayed in the display interface of the first electronic device. The source of the ray may be the second electronic device or the user's hand, and the direction of the ray in the display interface is adjusted by changing the posture of the second electronic device or the pointing direction of the user's hand. The first electronic device performs a selection operation on objects outside the first virtual area through the ray displayed in the display interface. Specifically, if the ray intersects an object located outside the first virtual area in the presentation interface of the virtual space, that object may be considered selected through the ray. Alternatively, after the ray intersects such an object, the user may be required to input a "confirm selection" operation before the object is considered selected. Further, if the ray intersects multiple objects located outside the first virtual area in the presentation interface of the virtual space, the object that the ray intersects first is regarded as the object selected by the ray.
In this implementation, the rendering effect of at least one object inside the first virtual area can be changed, a ray can be displayed in the display interface, and a selection operation can be performed on objects outside the first virtual area through the ray, so that the user can not only see occluded objects clearly but also interact with them using the ray, further improving the convenience of interaction between the user and objects in the virtual scene.
In a possible implementation manner of the first aspect, if the ray does not intersect any object located outside the first virtual area in the presentation interface of the first electronic device, the intersection point of the ray and the boundary of the first virtual area may be regarded as the position of the ray's cursor in the presentation interface of the virtual space, and the object closest to the cursor among the objects located outside the first virtual area may be regarded as the object selected by the ray.
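The selection rule in the two paragraphs above can be sketched under simplifying assumptions (targets approximated by spheres, the first virtual area by a sphere): the first outside object the ray hits is selected; otherwise the point where the ray exits the area boundary serves as the cursor, and the outside object nearest to that cursor is selected. Everything here is illustrative, not the patent's implementation:

```typescript
interface Vec3 { x: number; y: number; z: number; }
interface Ray { origin: Vec3; dir: Vec3; }               // dir assumed normalized
interface SphereArea { center: Vec3; radius: number; }
interface Target { id: string; position: Vec3; hitRadius: number; }

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

// Distance along the ray to a sphere, or null if the ray misses it.
function raySphere(ray: Ray, center: Vec3, radius: number): number | null {
  const oc = sub(ray.origin, center);
  const b = dot(oc, ray.dir);
  const disc = b * b - (dot(oc, oc) - radius * radius);
  if (disc < 0) return null;
  const sq = Math.sqrt(disc);
  const tNear = -b - sq, tFar = -b + sq;
  // Entry point for spheres ahead of the origin; exit point when the ray
  // starts inside the sphere (as with the first virtual area's boundary).
  return tNear >= 0 ? tNear : tFar >= 0 ? tFar : null;
}

function selectWithRay(ray: Ray, area: SphereArea, outside: Target[]): Target | null {
  // 1) The nearest outside object the ray actually intersects is selected.
  let best: Target | null = null, bestT = Infinity;
  for (const o of outside) {
    const t = raySphere(ray, o.position, o.hitRadius);
    if (t !== null && t < bestT) { bestT = t; best = o; }
  }
  if (best) return best;

  // 2) Fallback: the point where the ray crosses the area boundary acts as
  //    a cursor, and the closest outside object to the cursor is selected.
  const tEdge = raySphere(ray, area.center, area.radius);
  if (tEdge === null) return null;
  const cursor: Vec3 = {
    x: ray.origin.x + ray.dir.x * tEdge,
    y: ray.origin.y + ray.dir.y * tEdge,
    z: ray.origin.z + ray.dir.z * tEdge,
  };
  let nearest: Target | null = null, nearestD = Infinity;
  for (const o of outside) {
    const off = sub(o.position, cursor);
    const d = dot(off, off);
    if (d < nearestD) { nearestD = d; nearest = o; }
  }
  return nearest;
}
```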
In one possible implementation of the first aspect, the first virtual area includes an area within a preset range in front of the first electronic device. In this implementation, the user watches the objects displayed in the display interface through the first electronic device, and the preset range in front of the first electronic device is also the preset range in front of the user's field of view; that is, the user is enclosed within the first virtual area and located in its central region. When the rendering effect of the objects inside the first virtual area is changed, the rendering effect of the objects around the user is changed as well, which can increase the user's sense of immersion. Moreover, when the purpose of changing the rendering effect is to see an occluded object clearly, placing the first virtual area in front of the user's field of view makes the operation more convenient, which helps improve user stickiness.
In one possible implementation manner of the first aspect, the first virtual area is an area centered on the first electronic device. Further, the first virtual area may be an area centered on the first electronic device that covers only the space in front of it. For example, the first virtual area is the portion of a sphere centered on the first electronic device that lies in front of the device; for another example, it is the portion of a cube centered on the first electronic device that lies in front of the device. Alternatively, the first virtual area is an area centered on the first electronic device or on the user's body center that covers both the front and the rear of the first electronic device. For example, the first virtual area is a sphere centered on the first electronic device; for another example, the first virtual area is a cube centered on the first electronic device.
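The geometric variants listed here (a sphere or cube centered on the device, optionally restricted to the space in front of it) reduce to a containment test against the device's forward direction; a sketch with assumed types:

```typescript
interface Vec3 { x: number; y: number; z: number; }
const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

interface DeviceCenteredArea {
  center: Vec3;       // position of the first electronic device (or the user's body center)
  forward: Vec3;      // normalized gaze/forward direction of the device
  radius: number;
  frontOnly: boolean; // true: only the half-space in front of the device
}

function pointInFirstVirtualArea(p: Vec3, a: DeviceCenteredArea): boolean {
  const offset = sub(p, a.center);
  const inSphere = dot(offset, offset) <= a.radius * a.radius;
  if (!inSphere) return false;
  // For the "front of the device only" variant, require a non-negative
  // component along the forward direction; a full sphere skips this test.
  // (A cube-shaped area would swap the sphere test for a box test.)
  return a.frontOnly ? dot(offset, a.forward) >= 0 : true;
}
```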
In a possible implementation manner of the first aspect, the first electronic device acquires a fourth operation instruction and, in response to the fourth operation instruction, performs a recovery operation on objects in the virtual space provided by the first electronic device. The recovery operation includes restoring the objects located inside the first virtual area in the display interface of the virtual space to their state before the rendering effect was changed. The objects located inside the first virtual area may be objects rendered invisible, objects rendered transparent, or objects whose color was changed. Optionally, the recovery operation further includes restoring the first object associated with the first virtual area in the virtual space to its position before being rotated.
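The recovery operation implies that the pre-change state of each affected object is recorded before any rendering effect or rotation is applied. A minimal snapshot-and-restore sketch, with all names assumed for illustration:

```typescript
interface Vec3 { x: number; y: number; z: number; }
interface Material { opacity: number; color: string; }
interface Movable { id: string; visible: boolean; material: Material; position: Vec3; }

interface Snapshot { visible: boolean; opacity: number; color: string; position: Vec3; }

const snapshots = new Map<string, Snapshot>();

// Call before changing a rendering effect or rotating, so that the fourth
// operation instruction can restore the pre-change state later.
function snapshot(obj: Movable): void {
  if (!snapshots.has(obj.id)) {
    snapshots.set(obj.id, {
      visible: obj.visible,
      opacity: obj.material.opacity,
      color: obj.material.color,
      position: { ...obj.position },
    });
  }
}

// Fourth operation instruction: undo the effect changes (and, optionally,
// the rotation) for every object that was touched.
function restoreAll(objects: Movable[]): void {
  for (const obj of objects) {
    const s = snapshots.get(obj.id);
    if (!s) continue;                 // this object was never changed
    obj.visible = s.visible;
    obj.material.opacity = s.opacity;
    obj.material.color = s.color;
    obj.position = { ...s.position }; // optional: undo the rotation as well
  }
  snapshots.clear();
}
```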
In a second aspect, an embodiment of the present application provides an information processing method, which may be used in the fields of augmented reality, virtual reality, and mixed reality. The method may include the following steps: the second electronic device acquires a first operation instruction corresponding to a first operation. Specifically, if the second electronic device is a mobile phone, the first operation may be a long press on the phone screen exceeding a preset duration, consecutive taps on the phone screen, or a preset gesture input through the phone screen. Alternatively, the second electronic device is a handle provided with a preset open key, and when the user presses the preset open key, this is regarded as input of the first operation. Alternatively, a virtual open key is rendered in the display interface of the virtual space, and the user inputs the first operation by clicking the virtual open key through a gesture. Alternatively, a virtual open key is rendered in the display interface of the virtual space, a ray is also displayed in the display interface, and the user inputs the first operation by selecting the virtual open key through the ray. The second electronic device sends the first operation instruction to the first electronic device. The first electronic device is an AR device, a VR device, or an MR device; the first operation instruction is used to instruct the first electronic device to change the rendering effect of objects located inside a first virtual area in the display interface of the first electronic device, where a plurality of objects are displayed in the display interface and the first virtual area is an area in that display interface.
In one possible implementation manner of the second aspect, the method further includes: the second electronic device acquires a second operation instruction corresponding to a second operation, where the second operation is different from the first operation. Specifically, if the second electronic device is a mobile phone, the second operation may be a sliding operation in a given direction on the phone screen, and the second electronic device generates the second operation instruction for the corresponding direction. Alternatively, if the second electronic device is a mobile phone or a handle, the second operation may be an operation of rotating the phone or handle in a given direction, and the second electronic device generates the second operation instruction for the corresponding direction. Alternatively, the second electronic device is a handle preconfigured with direction keys, and when the user presses a direction key, this is regarded as input of the second operation. Alternatively, the input mode is gesture input: the second electronic device captures the user's sliding gestures in each direction through a camera, regards them as second operations, and generates the second operation instruction for the corresponding direction. The second electronic device sends the second operation instruction to the first electronic device; the second operation instruction is used to instruct the first electronic device to control a first object associated with the first virtual area in the display interface of the first electronic device to rotate, where the first object is any one of the following three: an object intersecting the boundary of the first virtual area in the presentation interface of the first electronic device; an object located inside the first virtual area in the presentation interface; or the objects within the first virtual area in the presentation interface.
In one possible implementation manner of the second aspect, the method further includes: the second electronic device acquires a third operation instruction corresponding to a third operation, where the third operation, the second operation, and the first operation are all different operations. Specifically, if the second electronic device is a mobile phone, the third operation may be an operation of inputting "zoom out" or "zoom in" through the phone screen. For example, the "zoom out" operation may be a double-tap and the "zoom in" operation a triple-tap; for another example, the "zoom out" operation may be an inward rotation and the "zoom in" operation an outward rotation. Alternatively, the second electronic device is a handle preconfigured with zoom-in and zoom-out keys; when the user presses the zoom-in key, this is regarded as input of one third operation (a zoom-in operation), and when the user presses the zoom-out key, this is regarded as input of another third operation (a zoom-out operation). Alternatively, the input mode is gesture input: the first electronic device renders a zoom-in key and a zoom-out key in the display interface, and when the user's gesture, captured by the second electronic device through the camera, clicks the zoom-in key, this is regarded as input of a zoom-in operation; when it clicks the zoom-out key, this is regarded as input of a zoom-out operation. Alternatively, the first electronic device renders a zoom-in key and a zoom-out key in the display interface, a ray is also displayed, and the user inputs the third operation by selecting the zoom-in or zoom-out key in the virtual space through the ray. The second electronic device sends the third operation instruction to the first electronic device; the third operation instruction is used to instruct the first electronic device to enlarge or reduce the first virtual area and to change the rendering effect of the objects located inside the enlarged or reduced first virtual area in the display interface of the first electronic device.
In a possible implementation manner of the second aspect, the second electronic device acquires a fourth operation instruction corresponding to a fourth operation and sends the fourth operation instruction to the first electronic device, where the fourth operation instruction is used to instruct a recovery operation to be performed on objects in the virtual space provided by the first electronic device. The recovery operation includes restoring the objects located inside the first virtual area in the display interface of the virtual space to their state before the rendering effect was changed. Optionally, the recovery operation further includes restoring the first object associated with the first virtual area in the virtual space to its position before being rotated.
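Taken together, the four operation instructions of the second aspect form a small command vocabulary sent from the second electronic device to the first. A hypothetical message schema (the patent does not define a wire format) could be a discriminated union:

```typescript
// Hypothetical message schema for the four operation instructions
// sent from the second electronic device to the first.
type OperationInstruction =
  | { type: 'change-effect' }                                    // first operation instruction
  | { type: 'rotate'; direction: 'left' | 'right' | 'up' | 'down'
        | 'up-left' | 'down-left' | 'up-right' | 'down-right' }  // second: carries a direction
  | { type: 'resize'; mode: 'enlarge' | 'reduce' }               // third
  | { type: 'restore' };                                         // fourth

// Example: a long press on the phone screen maps to the first instruction.
function onLongPress(sendToFirstDevice: (m: OperationInstruction) => void): void {
  sendToFirstDevice({ type: 'change-effect' });
}
```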
For the meanings of the terms in the second aspect and various possible implementation manners of the second aspect and the beneficial effects brought by each possible implementation manner in the embodiment of the present application, reference may be made to descriptions in various possible implementation manners in the first aspect, and details are not repeated here.
In a third aspect, an embodiment of the present application provides an information processing method, which may be used in the fields of augmented reality, virtual reality, and mixed reality. The method may include the following steps: the first electronic device acquires a second operation instruction and, in response to the second operation instruction, controls a first object associated with a first virtual area in a display interface to rotate, where a plurality of objects are displayed in the display interface, the first virtual area is an area in the display interface, and the first object is any one of the following three: an object in the display interface intersecting the boundary of the first virtual area; an object in the display interface located inside the first virtual area; or the objects in the display interface within the first virtual area.
In one possible implementation of the third aspect, the first virtual area includes an area within a preset range in front of the first electronic device.
In the third aspect of the embodiment of the present application, the first electronic device is further configured to execute other steps executed by the first electronic device in the first aspect, and for meanings of terms, specific implementation manners of the steps, and beneficial effects brought by each possible implementation manner in the third aspect and various possible implementation manners of the third aspect of the embodiment of the present application, reference may be made to descriptions in each possible implementation manner in the first aspect, and details are not repeated here.
In a fourth aspect, an embodiment of the present application provides an information processing apparatus, which may be used in the fields of augmented reality, virtual reality, and mixed reality. The apparatus is applied to a first electronic device, where the first electronic device is an AR device, a VR device, or an MR device, and the apparatus includes: an acquisition module, configured to acquire a first operation instruction; and a changing module, configured to change, in response to the first operation instruction, the rendering effect of objects located inside a first virtual area in a display interface, where a plurality of objects are displayed in the display interface and the first virtual area is an area in the display interface.
In a fourth aspect of the embodiment of the present application, the information processing apparatus is further configured to execute other steps executed by the first electronic device in the first aspect, and for specific implementation steps of the fourth aspect and various possible implementation manners of the fourth aspect of the embodiment of the present application and beneficial effects brought by each possible implementation manner, reference may be made to descriptions in each possible implementation manner in the first aspect, and details are not repeated here.
In a fifth aspect, an embodiment of the present application provides an information processing apparatus, which may be used in the fields of augmented reality, virtual reality, and mixed reality. The apparatus is applied to a second electronic device and includes: an acquisition module, configured to acquire a first operation instruction corresponding to a first operation; and a sending module, configured to send the first operation instruction to a first electronic device, where the first operation instruction is used to instruct the first electronic device to change the rendering effect of objects located inside a first virtual area in a display interface of the first electronic device, a plurality of objects are displayed in the display interface of the first electronic device, and the first virtual area is an area in that display interface.
In a fifth aspect of the embodiment of the present application, the information processing apparatus is further configured to execute other steps executed by the second electronic device in the second aspect, and for specific implementation steps of the fifth aspect and various possible implementation manners of the fifth aspect and beneficial effects brought by each possible implementation manner in the fifth aspect of the embodiment of the present application, reference may be made to descriptions in various possible implementation manners in the second aspect, and details are not repeated here.
In a sixth aspect, an embodiment of the present application provides an information processing apparatus, which may be used in the fields of augmented reality, virtual reality, and mixed reality. The apparatus is applied to a first electronic device, where the first electronic device is an AR device, a VR device, or an MR device, and the apparatus includes: an acquisition module, configured to acquire a second operation instruction; and a rotation module, configured to control, in response to the second operation instruction, a first object associated with a first virtual area in a display interface to rotate, where a plurality of objects are displayed in the display interface, the first virtual area is an area in the display interface, and the first object is any one of the following three: an object in the display interface intersecting the boundary of the first virtual area; an object in the display interface located inside the first virtual area; or the objects in the display interface within the first virtual area.
In a sixth aspect of the embodiment of the present application, the information processing apparatus is further configured to execute other steps executed by the first electronic device in the first aspect, and for specific implementation steps of the sixth aspect and various possible implementation manners of the sixth aspect of the embodiment of the present application and beneficial effects brought by each possible implementation manner, reference may be made to descriptions in the various possible implementation manners of the first aspect, and details are not repeated here.
In a seventh aspect, an embodiment of the present application provides an electronic device, which may include a processor coupled to a memory, where the memory stores program instructions, and when the program instructions stored in the memory are executed by the processor, the information processing method according to the first aspect, the second aspect, or the third aspect is implemented.
In an eighth aspect, embodiments of the present application provide a computer-readable storage medium, which stores therein a computer program that, when run on a computer, causes the computer to execute the information processing method according to the first aspect, or causes the computer to execute the information processing method according to the second aspect, or causes the computer to execute the information processing method according to the third aspect.
In a ninth aspect, an embodiment of the present application provides an information processing system, where the information processing system includes a first electronic device and a second electronic device, the first electronic device is an AR device, a VR device, or an MR device, and the first electronic device is connected to the second electronic device. The first electronic device is configured to implement the information processing method according to the first aspect, and the second electronic device is configured to implement the information processing method according to the second aspect.
In a tenth aspect, the present embodiments provide a circuit system, where the circuit system includes a processing circuit, and the processing circuit is configured to execute the information processing method according to the first aspect, or the processing circuit is configured to execute the information processing method according to the second aspect, or the processing circuit is configured to execute the information processing method according to the third aspect.
In an eleventh aspect, embodiments of the present application provide a computer program, which, when running on a computer, causes the computer to execute the information processing method according to the first aspect, or causes the computer to execute the information processing method according to the second aspect, or causes the computer to execute the information processing method according to the third aspect.
In a twelfth aspect, embodiments of the present application provide a chip system, where the chip system includes a processor, configured to enable a first electronic device or a second electronic device to implement functions involved in the foregoing aspects, for example, to transmit or process data and/or information involved in the foregoing methods. In one possible design, the system-on-chip further includes a memory for storing program instructions and data necessary for the server or the communication device. The chip system may be formed by a chip, or may include a chip and other discrete devices.
Drawings
FIG. 1a is a system architecture diagram of an information processing system according to an embodiment of the present application;
FIG. 1b is a schematic diagram of another system architecture of an information processing system according to an embodiment of the present application;
FIG. 1c is a schematic diagram of another system architecture of an information processing system according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of an information processing method according to an embodiment of the present application;
FIG. 3 shows two schematic diagrams of an object inside a first virtual area in an information processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of two display interfaces in the information processing method according to the embodiment of the present application;
FIG. 5 is another schematic diagram of two display interfaces in the information processing method according to the embodiment of the present application;
FIG. 6 shows two schematic diagrams illustrating a third operation in the information processing method according to the embodiment of the present application;
FIG. 7 is a schematic diagram of an application scenario of an information processing method according to an embodiment of the present application;
FIG. 8 is another schematic diagram of an application scenario of an information processing method according to an embodiment of the present application;
FIG. 9 is another schematic diagram of two display interfaces in the information processing method according to the embodiment of the present application;
FIG. 10 is another schematic diagram of two display interfaces in the information processing method according to the embodiment of the present application;
FIG. 11 is another schematic diagram of two display interfaces in the information processing method according to the embodiment of the present application;
FIG. 12 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present application;
FIG. 13 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present application;
FIG. 14 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present application;
FIG. 15 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present application;
FIG. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides an information processing method and related equipment. By changing the rendering effect of the objects located inside a first virtual area in a display interface, how much those objects obscure others can be reduced, so that a user can see an occluded object more easily and interact with it more conveniently; in addition, the color of the objects inside the first virtual area can be changed to present richer picture effects.
The terms "first," "second," and the like in the description and in the claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and are merely descriptive of the various embodiments of the application and how objects of the same nature can be distinguished. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Embodiments of the present application are described below with reference to the accompanying drawings. As can be known to those skilled in the art, with the development of technology and the emergence of new scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The embodiment of the application can be applied to various virtual scenes such as AR, VR, and MR. The information processing system in these scenes includes a first electronic device and a second electronic device. The second electronic device is used to acquire operations input by the user and send the corresponding operation instructions to the first electronic device. The second electronic device may be embodied as a handle, a mobile phone, or another device for capturing the user's gesture operations, and it may integrate sensors such as a touch sensor, a pressure sensor, a gyroscope sensor, an acceleration sensor, or a fingerprint sensor. The first electronic device is an AR device, a VR device, or an MR device; after acquiring an operation instruction sent by the second electronic device, it performs the rendering operation and displays the rendered display interface to the user, so that the user can be immersed in an augmented reality, virtual reality, or mixed reality environment.
In particular, in one implementation, the first electronic device may be a device integrating processing and display functions. As an example, a head-mounted display (HMD) may integrate both; the HMD may be the display of a VR device, an AR device, or an MR device. As another example, the first electronic device may be a Cave Automatic Virtual Environment (CAVE) projection system.
In another implementation manner, the first electronic device may consist of two independent devices: a processing device and a display device, connected through a wired or wireless connection. The wired connection may be established between the electronic devices through a High Definition Multimedia Interface (HDMI), with media data transmitted over an HDMI cable; the wireless connection may be established between the electronic devices through, for example, the Miracast protocol, with media data transmitted over a wireless local area network (Wi-Fi). As an example, a host computer with processing functions may be connected to a head-mounted display, with the head-mounted display responsible for displaying the presentation interface.
Further, in one implementation, the first electronic device and the second electronic device may be two devices that are independent of each other. As an example, the first electronic device is a head mounted display and the second electronic device is a handle; as another example, the first electronic device is a head mounted display and the second electronic device is a cell phone, for example.
In another implementation, the first electronic device and the second electronic device may also be integrated in the same device. As an example, in an AR game, a mobile phone integrates the functions of input, processing, and display; as another example, a head-mounted display with integrated processing functions may use a camera configured on the head-mounted display to capture the user's gesture input, and the like. It should be understood that the above examples are only provided for ease of understanding the application scenarios of this solution and are not intended to limit it.
To get a more intuitive feel for the application scenario of the embodiment of the present application, please refer to FIG. 1a to FIG. 1c, which illustrate three system architectures of the information processing system according to the embodiment of the present application. In FIG. 1a to FIG. 1c, the information processing system includes a first electronic device 10 and a second electronic device 20, and in each figure the first electronic device is a head-mounted display device as an example. Referring first to FIG. 1a, the first electronic device 10 and the second electronic device 20 may communicate via one or more high-speed communication protocols (e.g., USB 2.0, USB 3.0, and USB 3.1). In some cases, the first electronic device 10 may connect to the second electronic device 20 using an audio/video interface, such as a High Definition Multimedia Interface (HDMI). In some cases, the first electronic device 10 may connect to the second electronic device 20 using DisplayPort Alternate Mode over the USB Type-C standard interface; DisplayPort Alternate Mode may include a high-speed USB communication interface and DisplayPort functionality.
The cable 30 may include suitable connectors plugged into the first electronic device 10 at one end and the second electronic device 20 at the other. For example, the cable may include Universal Serial Bus (USB) connectors at both ends. The USB connectors may be of the same USB type, or each may be a different type of USB connector. The various types of USB connectors may include, but are not limited to, USB Type-A connectors, USB Type-B connectors, Micro-USB A connectors, Micro-USB B connectors, Micro-USB AB connectors, USB five-pin Mini-B connectors, USB four-pin Mini-B connectors, USB 3.0 Type-A connectors, USB 3.0 Type-B connectors, USB 3.0 Micro-B connectors, USB Type-C connectors, and the like.
FIG. 1b shows a schematic diagram of the first electronic device 10 connected to the second electronic device 20 using a wireless connection 31 without a cable (e.g., without the cable 30 shown in FIG. 1a). The first electronic device 10 may connect to the second electronic device 20 via the wireless connection 31 by implementing one or more high-speed communication protocols (e.g., Wi-Fi, Bluetooth, or Bluetooth Low Energy (LE)).
Referring to fig. 1c, the user uses the handle as the second electronic device 20, and similar to fig. 1b, the handle 20 is communicatively connected to the first electronic device 10 by a wireless connection. It should be noted that although one second electronic device 20 is shown as the control device of the first electronic device 10 in each of the examples shown in fig. 1a to 1c, two (or more) additional external apparatuses may be paired and/or interact with the first electronic device 10 in a virtual space. During operation (after pairing), the second electronic device 20 (and/or other external apparatus) may communicate with the first electronic device 10 via, for example, a wired connection, or a wireless connection such as, for example, a WIFI or bluetooth connection, or other communication modes available to both apparatuses.
With reference to the foregoing description, an information processing method provided in an embodiment of the present application may be applied to the information processing systems shown in fig. 1a to fig. 1c, please refer to fig. 2, where fig. 2 is a schematic flow diagram of the information processing method provided in the embodiment of the present application, and the information processing method provided in the embodiment of the present application may include:
201. the second electronic equipment acquires a first operation instruction corresponding to the first operation.
In the embodiment of the application, the user can input a first operation through the second electronic device, and correspondingly, the second electronic device generates a first operation instruction corresponding to the first operation. The first operation instruction is used to instruct the first electronic device to change the rendering effect of objects located inside the first virtual area in the display interface. When the user is immersed in a virtual space such as an augmented reality, virtual reality, or mixed reality environment, a plurality of virtual objects are displayed in the presentation interface of the virtual space. The presentation interface of the first electronic device may include only the portion of the virtual space within the user's field of view, or the entire virtual space in which the user is located (both within and outside the user's field of view) may be regarded as the presentation interface of the first electronic device.
Specifically, since the second electronic device may take different forms, the first operation may be acquired in different manners for each form. In one implementation manner, the second electronic device is a mobile phone; the first operation may be a long press on the phone screen exceeding a preset duration, consecutive taps on the phone screen, or a preset gesture input through the phone screen, for example, a check-mark-shaped gesture or a circle-shaped gesture, which is not exhaustively enumerated here.
In another implementation manner, the second electronic device is a handle on which a preset open key may be disposed; when the user presses the preset open key, this is regarded as input of the first operation. In another implementation manner, a virtual open key is rendered in the display interface of the virtual space, and the user inputs the first operation by clicking the virtual open key through a gesture. The virtual open key may be displayed on the display interface of the virtual space at all times, or its display may be triggered through another preceding operation. As an example, a sliding gesture "from top left to bottom right" may trigger the presentation interface to display a menu bar containing the virtual open key, and so on, which is not exhaustively enumerated here.
In another implementation manner, a virtual key is rendered in a display interface of the virtual space, and a ray is also displayed in the display interface and used for assisting a user in performing a selection operation on one or more objects in the display interface. The source of the ray may be a second electronic device or a hand of the user, for example, the source of the ray may be a mobile phone, a handle, a head-mounted display device, and so on, so that the selection operation may be performed on a virtual key in the presentation interface of the virtual space through the ray (i.e., it is considered that the user inputs the first operation through the second electronic device). Further, on the premise of starting the ray function, the initial position of the ray can be displayed on the display interface of the virtual space, and the user can adjust the direction of the ray by adjusting the posture of the second electronic device or the pointing direction of the hand of the user, so that the user can adjust the position of the end point of the ray to the key to be selected. In the process of adjusting the ray direction, if the position of the ray is changed by adjusting the posture of the second electronic device, the second electronic device may acquire the posture information of the second electronic device in real time and send the posture information to the first electronic device, so that the first electronic device may adjust the specific pixel coordinate position of the end point of the ray in the display interface of the virtual space in real time (i.e., adjust the position of the ray in the display interface of the virtual space). If the position of the ray is changed by changing the pointing direction of the hand of the user (i.e., on the premise of gesture input), the second electronic device may acquire the pointing direction of the finger of the user in real time through the camera, so as to adjust the specific pixel coordinate position of the end point of the ray in the display interface of the virtual space in real time (i.e., adjust the position of the ray in the display interface of the virtual space).
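The real-time ray adjustment described above amounts to recomputing the ray's direction from the controller's orientation on every pose update. A sketch assuming the posture information arrives as a unit quaternion and the controller's rest direction points along negative z (both assumptions, not from the patent):

```typescript
interface Vec3 { x: number; y: number; z: number; }
interface Quaternion { x: number; y: number; z: number; w: number; }

// Rotate a vector by a unit quaternion: t = 2 * cross(q_xyz, v);
// v' = v + w * t + cross(q_xyz, t).
function rotateByQuaternion(v: Vec3, q: Quaternion): Vec3 {
  const { x, y, z, w } = q;
  const tx = 2 * (y * v.z - z * v.y);
  const ty = 2 * (z * v.x - x * v.z);
  const tz = 2 * (x * v.y - y * v.x);
  return {
    x: v.x + w * tx + (y * tz - z * ty),
    y: v.y + w * ty + (z * tx - x * tz),
    z: v.z + w * tz + (x * ty - y * tx),
  };
}

// Each posture update from the second electronic device re-aims the ray.
function updateRayDirection(pose: Quaternion): Vec3 {
  const restDirection: Vec3 = { x: 0, y: 0, z: -1 }; // assumed controller "forward"
  return rotateByQuaternion(restDirection, pose);
}
```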
It should be understood that the above examples are only for convenience of understanding, and in practical applications, the input mode of the first operation can be flexibly set according to specific application situations, and the examples herein are not used to limit the present solution.
202. The second electronic device sends the first operation instruction to the first electronic device.
In this embodiment of the application, after the second electronic device generates the first operation instruction, the first operation instruction is sent to the first electronic device, and correspondingly, the first electronic device receives the first operation instruction sent by the second electronic device. Specifically, in one case, the first electronic device and the second electronic device are two independent devices, and the second electronic device sends the first operation instruction to the first electronic device through a wireless interface or a wired interface. Further, if the first electronic device is composed of two independent devices, namely, a processing device and a display device, the second electronic device may directly send the first operation instruction to the processing device in the first electronic device, or send the first operation instruction to the display device in the first electronic device, and the display device in the first electronic device forwards the first operation instruction to the processing device in the first electronic device. In another case, if the first electronic device and the second electronic device are integrated in the same device, the second electronic device sends the first operation instruction to the first electronic device through the internal interface.
203. The first electronic device changes, in response to the first operation instruction, the rendering effect of the objects located in the first virtual area in the display interface.
In this embodiment of the application, if the first electronic device and the second electronic device are two independent devices, the first electronic device acquires the first operation instruction through steps 201 and 202. If they are integrated in the same device, steps 201 and 202 need not be executed: the user inputs the first operation directly through the first electronic device, which generates the corresponding first operation instruction. As an example, the first electronic device is embodied as a head-mounted display device that acquires the first operation input by the user through an integrated camera; as another example, the first electronic device is embodied as a mobile phone integrating input, processing, and output functions, and the user may input the first operation through its screen, which is not exhaustive here.
After the first operation instruction is acquired, the first electronic device, in response to the first operation instruction, may determine a first virtual area from a display interface of the virtual space, and change a rendering effect of one or more objects located inside the first virtual area in the display interface.
In one implementation, the display interfaces on which the first electronic device changes the rendering effect include the current display interface and/or one or more display interfaces to be displayed next. In another implementation, the display interfaces on which the first electronic device changes the rendering effect do not include the current display interface and include only one or more display interfaces to be displayed next.
The first virtual area is a partial area in the display interface of the virtual space. When the first electronic device is embodied as a head-mounted display device, or includes a head-mounted display device, the first virtual area includes an area within a preset range in front of the first electronic device, where "in front of the first electronic device" refers to in front of the user, that is, in front of the user's gaze. If the first electronic device is composed of two independent devices, namely a processing device (for example, a host) and a display device (a head-mounted display device), "in front of the first electronic device" refers to in front of the display device in the first electronic device.
In the embodiment of the application, the user views the objects displayed in the display interface through the first electronic device, so the preset range in front of the first electronic device is also a preset range in front of the user's field of view; that is, the first virtual area covers the user, with the user located in the central area of the first virtual area. When the rendering effect of the objects in the first virtual area is changed, the rendering effect of the objects around the user is changed, which can increase the user's sense of immersion. Moreover, when the purpose of changing the rendering effect is to see an occluded object clearly, placing the first virtual area in front of the user's field of view makes the operation more convenient, which helps improve user stickiness.
The outer boundary of the first virtual area may be a sphere, a cylinder, a cube, a cuboid, and the like; correspondingly, the first virtual area may be a part of a sphere, a part of a cylinder, a part of a cube, a part of a cuboid, and the like. The shapes of the first virtual area are not exhaustive here.
Optionally, when the first electronic device is embodied as a head-mounted display device or includes a head-mounted display device, the first virtual area may further include an area within a preset range behind the first electronic device, that is, an area within a preset range behind the user or behind the user's gaze.
Alternatively, the first virtual area may be an area centered on the first electronic device or the center of the user's body. Further, in one implementation, the first virtual area is centered on the first electronic device or the center of the user's body and includes only the area in front of the first electronic device. For example, the first virtual area is the area in front of the first electronic device within a sphere centered on the first electronic device, whose radius may be 50 centimeters, 60 centimeters, 70 centimeters, or another length. For another example, the first virtual area is the area in front of the first electronic device within a cylinder centered on the first electronic device, whose cross-sectional radius is 50 centimeters, 60 centimeters, 70 centimeters, or another length. For another example, the first virtual area is the area in front of the first electronic device within a cube centered on the first electronic device, whose side is 100 centimeters, 120 centimeters, 140 centimeters, or another length.
In another implementation, the first virtual area is an area that is centered on the first electronic device or the center of the user's body and includes areas both in front of and behind the first electronic device. For example, the first virtual area may be embodied as a sphere centered on the first electronic device, or a sphere centered on the center of the user's body. The first virtual area may also be embodied as a cylinder centered on the center of the user's body, or as a cube centered on the first electronic device or the center of the user's body, and the like; the shapes and sizes of the first virtual area are not exhaustive here.
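For illustration only, the following Python sketch tests whether a point of the virtual space falls inside one such first virtual area: a sphere centered on the first electronic device, optionally restricted to the half in front of the device. The radius, the forward direction, and the function name are assumptions for this sketch.

import math

def inside_first_virtual_area(point, center, radius=0.5,
                              forward=(0.0, 0.0, -1.0), front_only=True):
    # Spherical first virtual area centred on the device/user; optionally
    # keep only the half in front of the device (its forward direction).
    offset = tuple(p - c for p, c in zip(point, center))
    if math.sqrt(sum(o * o for o in offset)) > radius:
        return False
    if front_only:
        dot = sum(o * f for o, f in zip(offset, forward))
        return dot >= 0.0
    return True

# A point 0.3 m straight ahead of a device at the origin (50 cm radius).
print(inside_first_virtual_area((0.0, 0.0, -0.3), (0.0, 0.0, 0.0)))  # True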
After the first electronic device determines the first virtual area from the display interface, one or more objects in the display interface may be located entirely inside the first virtual area, one or more objects may be located entirely outside the first virtual area, and, optionally, one or more objects may intersect the boundary of the first virtual area. The objects inside the first virtual area may include only the objects located completely inside the first virtual area, that is, excluding the objects intersecting the boundary. Alternatively, the objects inside the first virtual area may include both the objects located completely inside the first virtual area and, for each object intersecting the boundary, the portion of that object located inside the first virtual area.
For a more intuitive understanding of the present solution, please refer to fig. 3, which shows two schematic diagrams of objects inside the first virtual area in the information processing method according to an embodiment of the present application. Both sub-diagrams in fig. 3 illustrate the concept of "inside the first virtual area" in two dimensions; in the real virtual space the user sees a three-dimensional effect, which can be understood by analogy. In both the left and right sub-diagrams of fig. 3, the gray portion is the extent of the first virtual area. In one case, as shown in the left sub-diagram, the objects inside the first virtual area refer only to the three objects A1, A2, and A3 that are completely inside the first virtual area. In another case, as shown in the right sub-diagram, the objects inside the first virtual area cover the entire gray area: they include not only the objects located completely inside the first virtual area but also, for each object intersecting the boundary, the portion located inside the first virtual area.
Specifically, the first electronic device may change the rendering effect of the objects located inside the first virtual area in the display interface in any of the following manners. The first electronic device may control the objects inside the first virtual area to be invisible, that is, adjust their transparency to one hundred percent. Alternatively, the first electronic device may perform transparency processing on the objects inside the first virtual area, that is, adjust their transparency to a value between one percent and ninety-nine percent, so that the objects hidden behind them can be seen. Alternatively, the first electronic device may adjust the color of the objects inside the first virtual area, for example to optimize the viewing comfort of the display interface.
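For illustration only, the following Python sketch applies the three rendering changes just described to an object represented as a plain dictionary, saving the original state so that the restoration operation of steps 211 to 213 can undo the change. The dictionary layout and the "saved" convention are assumptions for this sketch.

def change_rendering_effect(obj, mode="transparent", transparency=50, color=None):
    # Remember the original state so the restoration operation of
    # steps 211-213 can undo the change later.
    obj.setdefault("saved", {"transparency": obj.get("transparency", 0),
                             "color": obj.get("color")})
    if mode == "invisible":
        obj["transparency"] = 100           # one hundred percent: not visible
    elif mode == "transparent":
        obj["transparency"] = transparency  # between 1 and 99 percent
    elif mode == "recolor":
        obj["color"] = color
    return obj

pillar = {"name": "pillar", "transparency": 0, "color": "gray"}
print(change_rendering_effect(pillar, mode="transparent", transparency=60))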
In the embodiment of the application, several different implementations by which the first electronic device changes the rendering effect of the objects inside the first virtual area are provided, which improves the implementation flexibility of the scheme.
Optionally, the first electronic device may further display the first virtual area in the display interface of the virtual space in response to the first operation instruction. Specifically, the first electronic device may make the first virtual area visible by also changing the rendering effect of the objects outside the first virtual area, where the changed rendering effect of the objects inside the first virtual area differs from the changed rendering effect of the objects outside it. "Outside the first virtual area" and "inside the first virtual area" are a pair of opposing concepts: the objects outside the first virtual area may include only the objects located completely outside the first virtual area, that is, excluding the objects intersecting the boundary; alternatively, they may include both the objects located completely outside the first virtual area and, for each object intersecting the boundary, the portion located outside the first virtual area (the portion inside the first virtual area being excluded).
In the embodiment of the application, displaying the first virtual area in the display interface of the virtual space gives the user a clearer understanding of the boundary of the first virtual area, which helps improve user stickiness.
For a more intuitive understanding of the present solution, please refer to fig. 4 and 5, which are four interface schematic diagrams of the display interface in the information processing method provided in an embodiment of the present application. Fig. 4 takes as an example controlling the objects located inside the first virtual area in the display interface to be invisible; it is drawn from a third-person perspective, and the first virtual area is a sphere centered on the first electronic device. The left sub-diagram of fig. 4 shows the interface before the rendering effect of the objects inside the first virtual area is changed; the right sub-diagram shows the interface after those objects are controlled to be invisible, with the gray area around the head-mounted display device representing the first virtual area. Comparing the two sub-diagrams, the objects denoted B1, B2, and B3 are located inside the first virtual area in the display interface of the three-dimensional virtual space, and they are no longer visible after the first electronic device changes the rendering effect.
Fig. 5 takes as an example performing transparency processing on the objects located inside the first virtual area, with the displayed image drawn from the user's first-person perspective and the first virtual area shown in the display interface. The left sub-diagram of fig. 5 shows the interface before the rendering effect is changed, and the right sub-diagram shows the interface after the transparency processing. Comparing the two sub-diagrams, C1 (the pillar in the left sub-diagram) is an object located inside the first virtual area in the display interface of the virtual space (a virtual room), and in the right sub-diagram the pillar has been rendered transparent. In addition, after the first electronic device displays the first virtual area in the display interface, the rendering effect of the objects outside the first virtual area is also changed. It should be understood that the examples in fig. 4 and 5 are only for convenience of understanding and are not intended to limit the present solution.
204. The second electronic device acquires a second operation instruction corresponding to a second operation.
In some embodiments of the application, the user may further input a second operation through the second electronic device, and correspondingly, the second electronic device may generate a second operation instruction corresponding to the second operation.
The second operation instruction is used to instruct the first electronic device to control the rotation of a first object associated with the first virtual area in the display interface; it should be noted that the second operation is different from the first operation. Further, the second operation instruction carries at least a rotation direction, and may specifically be an operation instruction for rotating leftward, rightward, upward, downward, upward-leftward, downward-leftward, upward-rightward, downward-rightward, and the like, that is, a trigger instruction that triggers the first electronic device to rotate, in the corresponding direction, the objects intersecting the first virtual area in the display interface of the virtual space.
Specifically, since the second electronic device may be embodied in different forms, the second operation may be obtained in different manners for different forms of the second electronic device. In one implementation, the second electronic device is a mobile phone: the second operation may be a sliding operation in some direction on the phone screen, and the second electronic device generates the second operation instruction for the corresponding direction. As an example, when the second electronic device receives a leftward sliding operation through the phone screen, it generates a second operation instruction for rotating leftward; as another example, when it receives a rightward sliding operation, it generates a second operation instruction for rotating rightward.
In another implementation, the second electronic device is a mobile phone, the second operation may be an operation of rotating the phone in some direction, and the second electronic device generates the second operation instruction for the corresponding direction; a sensor such as a gyroscope may be provided in the second electronic device to measure its rotation direction. As an example, if the user rotates the phone to the left, a second operation instruction for rotating leftward is generated; as another example, if the user rotates the phone upward, a second operation instruction for rotating upward is generated.
In another implementation, the second electronic device is a handle on which direction buttons are pre-configured; when the user clicks a button for some direction, the click is regarded as input of the second operation, and the second electronic device generates the second operation instruction for the corresponding direction. As an example, when the user clicks the down button on the handle, a second operation instruction for rotating downward is generated; as another example, when the user clicks the right button, a second operation instruction for rotating rightward is generated.
In another implementation, the second electronic device is a handle, the second operation may be an operation of rotating the handle in some direction, and the second electronic device generates the second operation instruction for the corresponding direction. For a specific implementation, refer to the above description for the case where the second electronic device is a mobile phone; details are not repeated here.
In another implementation manner, the input mode of the operation is gesture input, and the second electronic device acquires the gesture of the user through the camera, so as to determine the operation input by the user. The second electronic device may collect gesture operations of the user sliding in each direction through the camera, and generate second operation instructions and the like corresponding to the directions.
Optionally, the second operation instruction may carry not only the rotation direction but also a rotation duration or a rotation angle. In one implementation, a default rotation duration or default rotation angle may be preconfigured on the second electronic device, and the second operation instruction carries that default value; as an example, the default rotation duration may be 2 seconds. In another implementation, the longer the user performs the second operation, the longer the rotation duration (or the larger the rotation angle) carried in the second operation instruction.
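For illustration only, the following Python sketch builds a second operation instruction carrying a rotation direction and a rotation duration, using an assumed default of 2 seconds when the user's input duration is unavailable. The field names are assumptions for this sketch.

DEFAULT_ROTATION_SECONDS = 2.0   # assumed preconfigured default

def make_second_operation_instruction(direction, held_seconds=None):
    # The instruction carries at least a rotation direction; the duration
    # is either the preconfigured default or scales with the user's input.
    directions = {"left", "right", "up", "down",
                  "up-left", "down-left", "up-right", "down-right"}
    if direction not in directions:
        raise ValueError("unsupported rotation direction: " + direction)
    duration = DEFAULT_ROTATION_SECONDS if held_seconds is None else held_seconds
    return {"type": "rotate", "direction": direction, "duration_s": duration}

# A rightward swipe on the phone screen held for 1.5 seconds.
print(make_second_operation_instruction("right", held_seconds=1.5))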
205. The second electronic device sends the second operation instruction to the first electronic device.
In this embodiment of the application, a specific implementation manner for the second electronic device to execute step 205 may refer to the description of step 202, and the difference is that the second electronic device sends the first operation instruction to the first electronic device in step 202, and the second electronic device sends the second operation instruction to the first electronic device in step 205, which is not described herein again.
206. The first electronic device, in response to the second operation instruction, controls the first object associated with the first virtual area in the display interface to rotate.
In some embodiments of the application, after the first electronic device obtains the second operation instruction, in response to the second operation instruction, the first electronic device controls the first object associated with the first virtual area in the display interface of the virtual space to rotate according to the rotation direction carried in the second operation instruction. The first electronic device may receive the second operation instruction from the second electronic device, or may directly generate the second operation instruction, and the specific implementation manner may refer to the description in step 203, which is not described herein again.
Optionally, the first electronic device further controls the first object associated with the first virtual area in the display interface of the virtual space to rotate according to the rotation time length and/or the rotation angle carried in the second operation instruction.
The first object associated with the first virtual area is any one of the following three types: the objects in the display interface of the virtual space that intersect the boundary of the first virtual area; the objects in the display interface that are located completely inside the first virtual area; or the objects within the first virtual area, which include both the objects intersecting the boundary of the first virtual area and the objects completely inside it. Further, the minimum execution unit of the first electronic device when performing the rotation operation is a complete object.
Optionally, the first object associated with the first virtual area is any one of the following three types: the objects in the display interface that are within the user's field of view and intersect the boundary of the first virtual area; the objects in the display interface that are within the user's field of view and located completely inside the first virtual area; or the objects in the display interface that are within the user's field of view and within the first virtual area (including both of the preceding types).
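For illustration only, the following Python sketch rotates the first objects about the vertical axis through the center of the first virtual area, treating each complete object as the minimum unit of rotation as stated above. The coordinate convention and the direction-to-sign mapping are assumptions for this sketch.

import math

def rotate_about_center(objects, center, direction, angle_deg):
    # Rotate each associated object about the vertical axis through the
    # centre of the first virtual area; a complete object is the minimum
    # unit of rotation.
    sign = 1.0 if direction == "left" else -1.0
    a = math.radians(angle_deg) * sign
    cos_a, sin_a = math.cos(a), math.sin(a)
    for obj in objects:
        x = obj["pos"][0] - center[0]
        z = obj["pos"][2] - center[2]
        obj["pos"] = (center[0] + x * cos_a - z * sin_a,
                      obj["pos"][1],
                      center[2] + x * sin_a + z * cos_a)
    return objects

icons = [{"name": "icon-A", "pos": (0.0, 0.0, -1.0)}]
print(rotate_about_center(icons, (0.0, 0.0, 0.0), "left", 90.0))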
In the embodiment of the application, the first electronic device can also control, in response to the second operation instruction, the rotation of the first object associated with the first virtual area in the display interface, so that the user can rotate away objects he or she does not want to see in the display interface of the virtual space through the first virtual area, thereby seeing the occluded objects clearly and interacting with them easily.
For example, the first electronic device is a head-mounted display device and the second electronic device is a mobile phone; a virtual workbench is displayed through the display interface of the virtual space, the virtual workbench shows a plurality of three-dimensional icons, and different three-dimensional icons may correspond to different functions. The three-dimensional icons can be placed at different depths, so different icons may occlude one another. The user inputs the first operation by long-pressing the screen of the phone (the second electronic device); the phone generates the corresponding first operation instruction and sends it to the head-mounted display device (the first electronic device), which, in response, adds the first virtual area in front of the first electronic device in the display interface and controls the objects inside the first virtual area to be invisible. The user then inputs the second operation by sliding rightward on the phone screen; the phone generates the corresponding second operation instruction (an operation instruction for rotating rightward) and sends it to the head-mounted display device, which, in response, controls all objects intersecting the boundary of the first virtual area in the display interface to rotate rightward. The occluding three-dimensional icons are thereby moved away, so the user can clearly see the previously occluded icons and interact with them.
Further, the first electronic device may place frequently used three-dimensional icons close to the user and infrequently used three-dimensional icons far from the user.
It should be noted that steps 201 to 203 and steps 204 to 206 are optional. Only steps 201 to 203 may be executed, only steps 204 to 206 may be executed, both groups may be executed, or neither group may be executed. If steps 204 to 206 are performed, they may be performed one or more times until the first object associated with the first virtual area is rotated to the position desired by the user.
207. The second electronic device acquires a third operation instruction corresponding to a third operation.
In some embodiments of the application, the user may further input a third operation through the second electronic device, and correspondingly the second electronic device may generate a third operation instruction corresponding to the third operation. The third operation instruction is used to instruct the first electronic device to enlarge or reduce the first virtual area; that is, the third operation is specifically a zoom-in or zoom-out operation, and it should be noted that the third operation, the second operation, and the first operation are all different operations. Further, the third operation instruction may specifically be an enlargement instruction or a reduction instruction.
Specifically, since the second electronic device may be embodied in different forms, the third operation may be obtained in different manners for different forms of the second electronic device. In one implementation, the second electronic device is a mobile phone, and the third operation may be a "zoom out" or "zoom in" operation input through the phone screen. As an example, the "zoom out" operation may be a double-click operation and the "zoom in" operation a triple-click operation; as another example, the "zoom out" operation may be an inward sliding operation and the "zoom in" operation an outward sliding operation; the "zoom out" and "zoom in" operations may also be other gesture operations. For a more intuitive understanding, please refer to fig. 6, which shows two schematic diagrams of the third operation in the information processing method according to an embodiment of the present application: the left sub-diagram represents a "zoom out" operation and the right sub-diagram a "zoom in" operation. It should be understood that, in practice, the third operation may also be embodied as other operations, which are not exhaustive here, as long as the third operation, the second operation, and the first operation are different operations.
In another implementation, the second electronic device is a handle on which zoom-in and zoom-out buttons are pre-configured. When the user clicks the zoom-in button, the click is regarded as input of one third operation (a zoom-in operation); when the user clicks the zoom-out button, the click is regarded as input of another third operation (a zoom-out operation).
In another implementation manner, the input mode of the operation is gesture input, and the second electronic device acquires the gesture of the user through the camera, so as to determine the operation input by the user. The gesture of the user may be consistent with the above-mentioned "zoom-out" operation and "zoom-in" operation, but the difference is that in the foregoing implementation, the gesture operation of the user is acquired through a mobile phone screen, and in the present implementation, the gesture operation of the user is acquired through a camera.
In another implementation, the input mode is gesture input; the first electronic device may render a zoom-in key and a zoom-out key in the display interface of the virtual space. When the gesture collected by the second electronic device through the camera is a click on the zoom-in key, it is regarded as input of a zoom-in operation; when the gesture is a click on the zoom-out key, it is regarded as input of a zoom-out operation.
In another implementation, the first electronic device may render a zoom-in key and a zoom-out key in the display interface of the virtual space; similar to the description in step 201, a ray may also be displayed in the display interface. The source of the ray may be the second electronic device (for example, a mobile phone or a handle) or a head-mounted display device integrated with the second electronic device, and the user may perform a selection operation on the zoom-in key or the zoom-out key through the ray, and the like.
208. The second electronic device sends the third operation instruction to the first electronic device.
In this embodiment of the application, a specific implementation manner for the second electronic device to execute step 208 may refer to the description of step 202, and the difference is that the second electronic device sends the first operation instruction to the first electronic device in step 202, and the second electronic device sends the third operation instruction to the first electronic device in step 208, which is not described herein again.
209. The first electronic device, in response to the third operation instruction, enlarges or reduces the first virtual area and changes the rendering effect of the objects located in the enlarged or reduced first virtual area in the display interface.
In some embodiments of the application, after the first electronic device obtains the third operation instruction, it may enlarge or reduce the first virtual area in the display interface of the virtual space and change the rendering effect of the objects located in the enlarged or reduced first virtual area. The manner in which the first electronic device acquires the third operation instruction is similar to the way it acquires the first operation instruction; refer to the description in step 203, which is not repeated here. Correspondingly, as the first virtual area is enlarged or reduced, the set of objects located inside it changes, so the first electronic device needs to re-select, from the objects in the display interface of the virtual space, the objects located inside the resized first virtual area and change their rendering effect.
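For illustration only, the following Python sketch scales a spherical first virtual area and re-selects the objects that fall inside the resized area, making them invisible and restoring the rest. Treating the area as a sphere and the transparency convention are assumptions carried over from the earlier sketches.

import math

def resize_and_reapply(objects, center, radius, scale):
    # scale > 1.0 enlarges the first virtual area, scale < 1.0 reduces it;
    # then re-select the objects now inside and update their rendering.
    radius *= scale
    for obj in objects:
        inside = math.dist(obj["pos"], center) <= radius
        obj["transparency"] = 100 if inside else 0  # invisible inside, visible outside
    return radius

scene = [{"name": "near icon", "pos": (0.0, 0.0, -0.4), "transparency": 100},
         {"name": "far icon",  "pos": (0.0, 0.0, -1.2), "transparency": 0}]
print(resize_and_reapply(scene, (0.0, 0.0, 0.0), 0.5, 3.0), scene)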
In the embodiment of the application, the rendering effect of the objects inside the first virtual area can be changed through the first operation instruction, and the first virtual area can be enlarged or reduced through the third operation instruction, with the rendering effect of the objects inside the resized area changed correspondingly. That is, the user can flexibly adjust which objects' rendering effects are changed, which further improves the convenience of interaction between the user and the objects displayed in the display interface.
Further, in one implementation, the process of enlarging or reducing the first virtual area is continuous; in another implementation, it is discrete. As an example, the first electronic device is a head-mounted display device, the second electronic device is a mobile phone, and a virtual workbench showing a plurality of three-dimensional icons is displayed through the display interface of the virtual space, where different icons may correspond to different functions. The first electronic device divides the display interface into several layers by depth (for example, four layers, numbered first to fourth from nearest to farthest from the user) and places the three-dimensional icons inside layers of different depths. Suppose the objects inside the first layer have been transparently processed; when the user clicks, through the ray emitted from the mobile phone, the zoom-in button displayed in the display interface, the first virtual area is enlarged to the second layer, that is, the objects inside the second layer are also transparently processed. It should be understood that this example is only for convenience of understanding and is not intended to limit the scheme.
Optionally, if the first electronic device has also changed the rendering effect of the objects located outside the first virtual area in the display interface of the virtual space, then in response to the third operation instruction the first electronic device further selects, from the objects displayed in the display interface, the objects located outside the enlarged/reduced first virtual area and changes their rendering effect. The changed rendering effect of the objects inside the resized first virtual area differs from that of the objects outside it; for the specific differences, refer to the description in step 203, which is not repeated here.
It should be noted that steps 207 to 209 are optional steps, and if steps 207 to 209 are not executed, step 210 may be directly executed after step 206 is executed.
For a more intuitive understanding of the present solution, please refer to fig. 7 and fig. 8, which are two schematic diagrams of application scenarios of the information processing method according to an embodiment of the present application. Both figures take the first electronic device as a head-mounted display device and the second electronic device as a mobile phone. In fig. 7 the first electronic device is a VR device, and a virtual workbench is displayed in the display interface of the virtual space, containing windows of multiple application programs (for example, the windows of a game application, a photo-editing application, a video-playing application, a gallery application, and a browser application shown in fig. 7). The first electronic device may place the windows of different applications in the three-dimensional scene simultaneously, and windows of different applications may have different depths, so they may occlude one another; in fig. 7, for example, the window of the photo-editing application occludes the window of the browser application. The information processing method provided by the embodiment of the application can effectively move the occluding window away so that the user can interact with the occluded window.
The specific process may be as follows. The user inputs the first operation by long-pressing the screen of the phone (the second electronic device) beyond the preset duration; the phone generates the corresponding first operation instruction and sends it to the head-mounted display device (the first electronic device), which, in response, controls at least one window located inside the first virtual area in the display interface to be invisible. The user inputs the second operation by sliding leftward on the phone screen; the phone generates the corresponding second operation instruction (an operation instruction for rotating leftward) and sends it to the head-mounted display device, which, in response, controls the windows intersecting the boundary of the first virtual area in the display interface to rotate leftward, moving them out of the user's view. The user inputs a "zoom in" operation (one kind of third operation) through the phone screen; the phone generates the corresponding third operation instruction (an instruction for enlarging the first virtual area) and sends it to the head-mounted display device, which, in response, selects from the windows displayed in the display interface at least one window located inside the enlarged first virtual area and controls it to be invisible. Further, the first electronic device may place different windows at different depths, for example placing frequently used windows near the user and infrequently used windows far away or out of the user's field of view. With the scheme provided by the embodiment of the application, the user can easily switch between different windows in the virtual workbench, thereby working on multiple tasks and improving work efficiency.
Referring to fig. 8, in fig. 8 the first electronic device is an AR device or an MR device; that is, after acquiring image information of the real scene, the first electronic device renders a virtual scene onto some objects in the real-scene image, presenting a mix of virtual and real scenes in the display interface of the virtual space. Through the information processing method provided by the embodiment of the application, the user can perform transparency processing on some components of the automobile, or move some components out of the user's field of view, and so on, so as to see the internal structure of the automobile clearly and conveniently.
210. The first electronic device performs a selection operation on objects outside the first virtual area through the ray displayed in the display interface.
In some embodiments of the application, the first electronic device may further display a ray in the display interface of the virtual space, whose source may be the second electronic device or the user's hand; the user adjusts the direction of the ray by changing the posture of the second electronic device or the pointing direction of the hand. If the ray intersects an object located outside the first virtual area (including objects located completely outside it and objects intersecting its boundary) in the display interface, that object may be regarded as selected by the ray. Alternatively, after the ray intersects such an object, the user may be required to input a "confirm selection" operation before the object is regarded as selected. Further, if the ray intersects a plurality of objects located outside the first virtual area, the object that the ray intersects first is regarded as the object intersected by the ray.
Further, when the second electronic device is embodied in different forms, the manner in which the user inputs the "confirm selection" operation may differ. As an example, if the second electronic device is a mobile phone, after the ray intersects an object located outside the first virtual area in the display interface, the user may input a single-click operation through the phone screen as the "confirm selection" operation. As another example, if the second electronic device is a handle, a "confirm" button may be pre-configured on the handle, and so on; this is not exhaustive here.
In the embodiment of the application, the rendering effect of at least one object inside the first virtual area can be changed, a ray can be displayed in the display interface, and a selection operation can be performed on objects outside the first virtual area through the ray. The user can thus see the occluded objects clearly and interact with them using the ray, which further improves the convenience of interaction between the user and the objects in the virtual scene.
For a more intuitive understanding of the present solution, please refer to fig. 9, which shows two schematic diagrams of the display interface in the information processing method according to an embodiment of the present application. Fig. 9 takes the first electronic device as a head-mounted display device and the second electronic device as a mobile phone, viewed from a third-person perspective. In the left sub-diagram of fig. 9, the user performs a selection operation, through a ray whose source is the mobile phone, on an object intersecting the boundary of the first virtual area in the display interface; in the right sub-diagram, the user performs a selection operation on an object located outside the first virtual area. It should be understood that the example in fig. 9 is only for convenience of understanding and is not intended to limit the scheme.
Optionally, if the ray does not intersect any object located outside the first virtual area in the display interface of the virtual space, the intersection point of the ray and the boundary of the first virtual area may be regarded as the position of the ray's cursor in the display interface, and the object outside the first virtual area that is closest to the cursor is regarded as the object the user wants to select, thereby implementing the selection operation on that object.
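For illustration only, the following Python sketch implements this fallback: if the ray hits no object outside the first virtual area, the outside object closest to the cursor (the ray/boundary intersection point) is selected. The data layout is an assumption for this sketch.

import math

def select_with_ray(hit, cursor_pos, outside_objects):
    # If the ray already hit an object outside the first virtual area,
    # select it; otherwise select the outside object nearest the cursor.
    if hit is not None:
        return hit
    return min(outside_objects,
               key=lambda o: math.dist(o["pos"], cursor_pos),
               default=None)

objects = [{"name": "chair", "pos": (1.0, 0.0, -2.0)},
           {"name": "table", "pos": (0.2, 0.0, -1.0)}]
print(select_with_ray(None, (0.0, 0.0, -0.5), objects))  # nearest: the table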
For a more intuitive understanding of the present solution, please refer to fig. 10, which shows two schematic diagrams of the display interface in the information processing method according to an embodiment of the present application. Fig. 10 takes the first electronic device as a head-mounted display device and the second electronic device as a mobile phone, viewed from a third-person perspective. In the left sub-diagram, D1 represents the position of the ray's cursor in the display interface and D2 the object outside the first virtual area selected by the ray; in the right sub-diagram, D3 represents the cursor position and D4 the selected object. It should be understood that the example in fig. 10 is only for convenience of understanding and is not intended to limit the present solution.
Optionally, the user may also move an object outside the first virtual area through the ray in the display interface of the first electronic device. Specifically, after the user performs a selection operation on an object in the display interface through the ray, the selected object is regarded as the end point of the ray in the display interface, and the selected object moves as the position of the ray's end point moves, thereby implementing a movement operation on the object outside the first virtual area.
After the selected object is placed at the target position desired by the user in the display interface of the virtual space, the user may input a "deselect" operation through the second electronic device; the second electronic device then sends a "deselect" operation instruction to the first electronic device, and the selected object in the display interface is released.
Further, when the second electronic device is embodied in different forms, the manner in which the user inputs the "deselect" operation may differ. As an example, if the second electronic device is a mobile phone, after the movement operation is performed on an object outside the first virtual area through the ray, the user may input a single-click operation through the phone screen as the "deselect" operation. As another example, if the second electronic device is a handle, a "cancel" button may be pre-configured on the handle, and so on; this is not exhaustive here.
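For illustration only, the following Python sketch pins a selected object to the ray's end point and releases it when the "deselect" instruction arrives. The representation of the end-point track is an assumption for this sketch.

def move_with_ray(selected, endpoint_track, deselect_index):
    # The selected object follows the ray end point until the 'deselect'
    # instruction arrives, which releases it at the last position.
    for i, endpoint in enumerate(endpoint_track):
        selected["pos"] = endpoint
        if i == deselect_index:
            break
    return selected

chair = {"name": "chair", "pos": (1.0, 0.0, -2.0)}
track = [(0.8, 0.0, -2.1), (0.4, 0.0, -2.6), (0.0, 0.0, -3.0)]
print(move_with_ray(chair, track, deselect_index=2))  # released behind the table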
To more intuitively understand the present solution, please refer to fig. 11, which shows two schematic diagrams of the display interface in the information processing method according to an embodiment of the present application. Fig. 11 should be understood in conjunction with the above description of fig. 5; it takes the display interface viewed from the first-person perspective as an example. In both sub-diagram (a) and sub-diagram (b) of fig. 11, the pillar in the display interface is in a transparent state, that is, the pillar is located inside the first virtual area while the other objects in the display interface are outside it. In sub-diagram (a), E1 represents the position of the ray's end point in the display interface (the light point on the chair); that is, the user has selected the chair through the ray in the display interface of the first electronic device. After the chair is selected, the position of the ray's end point can be moved so that the selected chair moves with it. Sub-diagram (b) shows the interface after the movement operation: the chair has been moved behind the table. It should be understood that the example in fig. 11 is only for convenience of understanding and is not intended to limit the present solution.
211. The second electronic device acquires a fourth operation instruction corresponding to a fourth operation.
In some embodiments of the present application, after completing the filtering operations on the objects in the display interface of the virtual space, the user may input a fourth operation through the second electronic device, and correspondingly the second electronic device generates the corresponding fourth operation instruction. The fourth operation instruction is used to instruct the first electronic device to perform a restoration operation on the objects in the virtual space, where the restoration operation includes restoring the objects located inside the first virtual area in the display interface to the state before the rendering effect was changed. Further, an object located inside the first virtual area may be one that is currently invisible, one that is currently in a transparent state, or one whose color has been changed.
Optionally, the restoration operation may further include restoring the objects located outside the first virtual area in the display interface of the virtual space to the state before the rendering effect was changed. Optionally, if steps 204 to 206 were executed, the restoration operation further includes restoring the first object associated with the first virtual area in the virtual space to its position before rotation.
Specifically, in an implementation manner, the second electronic device is a handle, a "resume" button is pre-configured on the second electronic device, and when the user clicks the "resume" button, the user regards that the fourth operation is input. In another implementation manner, a "restore" key may be rendered in the display interface of the virtual space, a user may perform a selection operation on the "restore" key through a ray displayed in the display space to regard the selection operation as an input fourth operation, and a specific step of performing the selection operation through the ray displayed in the display interface may refer to the description in step 201, which is not described herein again.
In another implementation, the input mode is gesture input: the second electronic device collects the user's gesture through the camera to determine the operation input. A "restore" key may be rendered in the display interface of the virtual space, and a gesture of clicking the "restore" key in the display interface is regarded as input of the fourth operation, and so on.
212. The second electronic device sends the fourth operation instruction to the first electronic device.
In this embodiment of the application, for the specific implementation of step 212 by the second electronic device, refer to the description of step 202; the difference is that in step 202 the second electronic device sends the first operation instruction to the first electronic device, while in step 212 it sends the fourth operation instruction. Details are not repeated here.
213. The first electronic device, in response to the fourth operation instruction, performs a restoration operation on the objects in the virtual space, where the restoration operation includes restoring the objects located inside the first virtual area in the display interface of the virtual space to the state before the rendering effect was changed.
In some embodiments of the application, after obtaining the fourth operation instruction, the first electronic device performs the restoration operation on at least one object in the display interface of the virtual space. It should be noted that the restoration operation does not need to undo the changes made to the objects in the virtual space in step 210 (for example, movement operations). The manner in which the first electronic device acquires the fourth operation instruction is similar to the way it acquires the first operation instruction; refer to the description in step 203, which is not repeated here.
Specifically, consider the case where steps 201 to 203 are executed but steps 204 to 206 are not, or where steps 201 to 203 and steps 207 to 210 are executed but steps 204 to 206 are not. The purpose of step 213 is then to restore the objects located inside the first virtual area in the display interface of the virtual space to their state before step 203 was performed. Step 213 may include: the first electronic device restores the objects located inside the first virtual area to the state before the rendering effect was changed. Further, step 213 may include: the first electronic device controls the invisible objects located inside the first virtual area to become visible again; or the first electronic device adjusts the transparency of the transparently processed objects inside the first virtual area back to zero percent; or the first electronic device restores the color of the objects in the display interface to the initial state.
Optionally, if the rendering effect of the objects located outside the first virtual area in the display interface of the virtual space was also changed in step 203, then in step 213 the rendering effect of those objects also needs to be restored to the state before the change.
If steps 204 to 206 are executed but steps 201 to 203 and steps 207 to 210 are not, step 213 may include: the first electronic device restores the first object associated with the first virtual area in the virtual space to its position before rotation. When step 213 is performed, the first object may not be within the user's field of view, that is, it may not appear in the currently displayed interface.
If both steps 201 to 203 and steps 204 to 206 are executed, step 213 may include: the first electronic device restores the objects located inside the first virtual area in the display interface of the virtual space to their state before the rendering effect was changed, and restores the first object associated with the first virtual area in the virtual space to its position before being rotated.
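The three restoration cases above can be summarized in a short sketch, assuming each object snapshots its render state before step 203 and its yaw rotation before steps 204 to 206; the type and field names are illustrative, not taken from the embodiments.

```typescript
// Per-object snapshots allow step 213 to undo the earlier changes.
interface RenderState {
  visible: boolean;
  transparency: number; // zero percent means fully opaque, as in step 213
  color: string;
}

interface SceneObject {
  id: string;
  state: RenderState;
  originalState?: RenderState; // snapshot taken before step 203 changed it
  rotationY: number;           // yaw in radians
  originalRotationY?: number;  // snapshot taken before steps 204 to 206
}

function restoreAll(objects: SceneObject[]): void {
  for (const obj of objects) {
    // Undo the rendering change: visible again, opaque, initial color.
    if (obj.originalState !== undefined) {
      obj.state = { ...obj.originalState };
      obj.originalState = undefined;
    }
    // Undo the rotation, even if the object has since left the field of view.
    if (obj.originalRotationY !== undefined) {
      obj.rotationY = obj.originalRotationY;
      obj.originalRotationY = undefined;
    }
  }
}
```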
In the embodiments of the application, in response to an acquired first operation instruction, the rendering effect of objects located inside a first virtual area of the display interface is changed, the first virtual area being an area within the display interface. In a dense scene, changing the rendering effect of the objects in that area reduces the occlusion of objects in the display interface, so that the user can see an occluded object more easily and interact with it more conveniently. In addition, changing the rendering effect of the objects in that area can also alter, for example, their color, so that richer picture effects can be presented.
On the basis of the embodiments corresponding to fig. 1 to fig. 11, related devices for implementing the above scheme of the embodiments of the present application are provided below. Specifically, referring to fig. 12, fig. 12 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present application. The information processing apparatus 1200 is applied to a first electronic device, which is an AR device, a VR device, or an MR device. The information processing apparatus 1200 may include an obtaining module 1201 and a changing module 1202, where the obtaining module 1201 is configured to obtain a first operation instruction, and the changing module 1202 is configured to change, in response to the first operation instruction, the rendering effect of objects located inside a first virtual area in the display interface, where a plurality of objects are displayed in the display interface and the first virtual area is an area in the display interface.
In one possible design, the changing module 1202 is specifically configured to control objects located inside the first virtual area in the display interface to be invisible, or to perform transparency processing on objects located inside the first virtual area in the display interface.
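A minimal sketch of the changing module's two options follows, assuming a simple object model in which a transparency of 0 means fully opaque; the 0.5 value used for transparency processing is an arbitrary illustration, not a value fixed by the design.

```typescript
interface DisplayObject {
  visible: boolean;
  transparency: number; // 0 = fully opaque, 1 = fully transparent (assumed)
  insideFirstVirtualArea: boolean;
}

function changeRendering(
  objects: DisplayObject[],
  mode: "invisible" | "transparent",
): void {
  for (const obj of objects) {
    if (!obj.insideFirstVirtualArea) continue; // only objects inside the area
    if (mode === "invisible") {
      obj.visible = false;    // option 1: control the object to be invisible
    } else {
      obj.transparency = 0.5; // option 2: transparency processing
    }
  }
}
```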
In one possible design, referring to fig. 13, fig. 13 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present application. The information processing apparatus 1200 further includes a rotating module 1203, configured to obtain a second operation instruction and, in response to the second operation instruction, control a first object associated with the first virtual area in the display interface to rotate, where the first object is any one of the following three objects: an object in the display interface that intersects the boundary of the first virtual area; an object in the display interface located inside the first virtual area; or an object in the display interface located outside the first virtual area.
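As a sketch of the rotation the rotating module might apply, the following rotates a point about the vertical axis through the center of the first virtual area; treating the rotation as yaw-only and centered on the area is an assumption for illustration, since the design does not fix the axis.

```typescript
interface Vec3 { x: number; y: number; z: number; }

// Rotate `pos` by `angle` radians about the vertical axis through `center`.
function rotateAboutCenter(pos: Vec3, center: Vec3, angle: number): Vec3 {
  const dx = pos.x - center.x;
  const dz = pos.z - center.z;
  const c = Math.cos(angle);
  const s = Math.sin(angle);
  return {
    x: center.x + dx * c - dz * s,
    y: pos.y, // a yaw rotation leaves the height unchanged
    z: center.z + dx * s + dz * c,
  };
}

// Quarter turn of a point one unit to the side of the area center.
console.log(rotateAboutCenter({ x: 1, y: 0, z: 0 }, { x: 0, y: 0, z: 0 }, Math.PI / 2));
// ≈ { x: 0, y: 0, z: 1 }
```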
In one possible design, referring to fig. 13, the information processing apparatus 1200 further includes an updating module 1204, configured to obtain a third operation instruction and, in response to the third operation instruction, enlarge the first virtual area and change the rendering effect of objects located inside the enlarged first virtual area in the display interface; alternatively, the updating module 1204 is configured to obtain a third operation instruction and, in response to the third operation instruction, reduce the first virtual area and change the rendering effect of objects located inside the reduced first virtual area in the display interface.
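The updating module's behavior can be sketched under the assumption that the first virtual area is modeled as a sphere whose radius is scaled, after which the rendering change (hiding, in this sketch) is re-applied to whatever now lies inside the resized area.

```typescript
interface Vec3 { x: number; y: number; z: number; }
interface VirtualArea { center: Vec3; radius: number; }
interface DisplayObject { position: Vec3; visible: boolean; }

function isInside(p: Vec3, a: VirtualArea): boolean {
  const dx = p.x - a.center.x;
  const dy = p.y - a.center.y;
  const dz = p.z - a.center.z;
  return dx * dx + dy * dy + dz * dz <= a.radius * a.radius;
}

// scale > 1 corresponds to the enlargement branch, scale < 1 to the reduction
// branch; in both cases the rendering change is re-applied to the new extent.
function resizeAndReapply(area: VirtualArea, objects: DisplayObject[], scale: number): void {
  area.radius *= scale;
  for (const obj of objects) {
    obj.visible = !isInside(obj.position, area); // hide exactly what is now inside
  }
}
```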
In one possible design, a ray is also displayed in the display interface, and the ray is used for performing a selection operation on an object outside the first virtual area.
In one possible design, the first virtual area includes an area within a preset range in front of the first electronic device.
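The two designs above can be combined in one sketch: the first virtual area is approximated as a sphere a preset distance straight ahead of the device, and the displayed ray is tested against objects with a standard ray/sphere intersection so that selection targets objects outside the area. The sphere model and the default distance and radius are assumptions for illustration.

```typescript
interface Vec3 { x: number; y: number; z: number; }
interface VirtualArea { center: Vec3; radius: number; }

// Place the area a preset distance straight ahead of the device; `forward`
// is assumed to be a unit vector taken from the device pose.
function areaInFrontOfDevice(
  devicePos: Vec3,
  forward: Vec3,
  distance = 1.0,
  radius = 0.5,
): VirtualArea {
  return {
    center: {
      x: devicePos.x + forward.x * distance,
      y: devicePos.y + forward.y * distance,
      z: devicePos.z + forward.z * distance,
    },
    radius,
  };
}

// Ray/sphere hit test for the selection ray; `dir` must be normalized.
function rayHitsSphere(origin: Vec3, dir: Vec3, center: Vec3, r: number): boolean {
  const oc = { x: center.x - origin.x, y: center.y - origin.y, z: center.z - origin.z };
  const t = oc.x * dir.x + oc.y * dir.y + oc.z * dir.z;       // projection onto the ray
  const d2 = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - t * t; // squared distance to the ray
  return t >= 0 && d2 <= r * r;
}
```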
It should be noted that, the information interaction, the execution process, and other contents between the modules/units in the information processing apparatus 1200 are based on the same concept as the method embodiments corresponding to fig. 2 to 11 in the present application, and specific contents may refer to the description in the foregoing method embodiments in the present application, and are not described again here.
An information processing apparatus is further provided in an embodiment of the present application. Specifically, referring to fig. 14, fig. 14 is a schematic structural diagram of the information processing apparatus provided in the embodiment of the present application. The information processing apparatus 1400 is applied to a second electronic device and includes an obtaining module 1401 and a sending module 1402. The obtaining module 1401 is configured to obtain a first operation instruction corresponding to a first operation; the sending module 1402 is configured to send the first operation instruction to a first electronic device, which is an AR device, a VR device, or an MR device. The first operation instruction is used to instruct the first electronic device to change the rendering effect of objects located inside a first virtual area in the display interface of the first electronic device, where a plurality of objects are displayed in that display interface and the first virtual area is an area in that display interface.
In one possible design, the obtaining module 1401 is further configured to obtain a second operation instruction corresponding to a second operation, and the sending module 1402 is further configured to send the second operation instruction to the first electronic device, where the second operation instruction is used to instruct the first electronic device to control a first object associated with the first virtual area in the display interface of the first electronic device to rotate. The first object is any one of the following three objects: an object that intersects the boundary of the first virtual area in the display interface of the first electronic device; an object located inside the first virtual area in the display interface of the first electronic device; or an object located outside the first virtual area in the display interface of the first electronic device.
In a possible design, the obtaining module 1401 is further configured to obtain a third operation instruction corresponding to a third operation; the sending module 1402 is further configured to send a third operation instruction to the first electronic device, where the third operation instruction is used to instruct the first electronic device to enlarge or reduce the first virtual area.
It should be noted that, the information interaction, execution process, and other contents between the modules/units in the information processing apparatus 1400 are based on the same concept as the method embodiments corresponding to fig. 2 to fig. 11 in the present application, and specific contents may refer to the description in the foregoing method embodiments in the present application, and are not described herein again.
An information processing apparatus is further provided in an embodiment of the present application. Specifically, referring to fig. 15, fig. 15 is a schematic structural diagram of the information processing apparatus provided in the embodiment of the present application. The information processing apparatus 1500 is applied to a first electronic device, which is an AR device, a VR device, or an MR device, and includes an obtaining module 1501 and a rotation module 1502. The obtaining module 1501 is configured to obtain a second operation instruction; the rotation module 1502 is configured to control, in response to the second operation instruction, a first object associated with a first virtual area in the display interface to rotate. A plurality of objects are displayed in the display interface, the first virtual area is an area in the display interface, and the first object is any one of the following three objects: an object in the display interface that intersects the boundary of the first virtual area; an object in the display interface located inside the first virtual area; or an object in the display interface located outside the first virtual area.
In one possible design, the first virtual area includes an area within a preset range in front of the first electronic device.
It should be noted that, the information interaction, the execution process, and other contents between the modules/units in the information processing apparatus 1500 are based on the same concept as the method embodiments corresponding to fig. 2 to fig. 11 in the present application, and specific contents may refer to the description in the foregoing method embodiments in the present application, and are not described herein again.
Referring to fig. 16, fig. 16 is a schematic structural diagram of an electronic device provided in the embodiment of the present application, where an information processing apparatus 1200 described in the corresponding embodiment of fig. 12 or fig. 13 may be disposed on an electronic device 1600, and is used to implement the functions of the first electronic device in the corresponding embodiments of fig. 2 to fig. 11. Alternatively, the electronic device 1600 may be disposed with the information processing apparatus 1400 described in the embodiment corresponding to fig. 14, so as to implement the functions of the second electronic device in the embodiments corresponding to fig. 2 to fig. 11. Alternatively, the electronic device 1600 may be disposed with the information processing apparatus 1500 described in the embodiment corresponding to fig. 15, so as to implement the functions of the first electronic device in the embodiments corresponding to fig. 2 to fig. 11. Specifically, the electronic device 1600 includes: a receiver 1601, a transmitter 1602, a processor 1603 and a memory 1604 (wherein the number of processors 1603 in the electronic device 1600 may be one or more, for example one processor in fig. 16), wherein the processors 1603 may include an application processor 16031 and a communication processor 16032. In some embodiments of the present application, the receiver 1601, the transmitter 1602, the processor 1603, and the memory 1604 may be connected by a bus or other means.
The memory 1604 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1603. A portion of the memory 1604 may also include non-volatile random access memory (NVRAM). The memory 1604 stores operating instructions, executable modules, or data structures, or a subset or an extended set thereof, where the operating instructions may include various operating instructions for implementing various operations.
Processor 1603 controls the operation of the execution apparatus. In a particular application, the various components of the execution device are coupled together by a bus system that may include a power bus, a control bus, a status signal bus, etc., in addition to a data bus. For clarity of illustration, the various buses are referred to in the figures as a bus system.
The methods disclosed in the embodiments of the present application may be applied to the processor 1603 or implemented by the processor 1603. The processor 1603 may be an integrated circuit chip having a signal processing capability. During implementation, the steps of the foregoing methods may be completed by hardware integrated logic circuits in the processor 1603 or by instructions in the form of software. The processor 1603 may be a general-purpose processor, a digital signal processor (DSP), a microprocessor, or a microcontroller, and may further include an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The processor 1603 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed with reference to the embodiments of the present application may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1604, and the processor 1603 reads the information in the memory 1604 and completes the steps of the foregoing methods in combination with its hardware.
The receiver 1601 may be configured to receive input digit or character information and generate signal inputs related to setting and function control of the execution device. The transmitter 1602 may be configured to output digit or character information through a first interface; the transmitter 1602 may be further configured to send an instruction to a disk group through the first interface to modify data in the disk group; and the transmitter 1602 may further include a display device such as a display screen.
In one case, the application processor 16031 is configured to perform the functions of the information processing apparatus 1200 described in the corresponding embodiment of fig. 12 or fig. 13. It should be noted that, for the specific implementation manner and the advantageous effects of the application processor 16031 to execute the functions of the information processing apparatus 1200 described in the embodiment corresponding to fig. 12 or fig. 13, reference may be made to the descriptions in each method embodiment corresponding to fig. 2 to fig. 11, and details are not repeated here.
In one case, the application processor 16031 is used to perform the functions of the information processing apparatus 1400 described in the corresponding embodiment of fig. 14. It should be noted that, for the specific implementation manner and the advantageous effects of the application processor 16031 to execute the functions of the information processing apparatus 1400 described in the embodiment corresponding to fig. 14, reference may be made to the descriptions in the method embodiments corresponding to fig. 2 to fig. 11, and details are not repeated here.
In one case, the application processor 16031 is configured to perform the functions of the information processing apparatus 1500 described in the embodiment corresponding to fig. 15. It should be noted that, for the specific implementation manner and the advantageous effects of the application processor 16031 to execute the functions of the information processing apparatus 1500 described in the embodiment corresponding to fig. 15, reference may be made to descriptions in each method embodiment corresponding to fig. 2 to fig. 11, and details are not repeated here.
Also provided in an embodiment of the present application is a computer-readable storage medium, which stores instructions that, when executed on a computer, cause the computer to perform the steps performed by a first electronic device in the method described in the foregoing embodiment shown in fig. 2 to 11, or cause the computer to perform the steps performed by a second electronic device in the method described in the foregoing embodiment shown in fig. 2 to 11.
The embodiment of the present application further provides an information processing system, where the information processing system includes a first electronic device and a second electronic device, the first electronic device is an AR device, a VR device, or an MR device, and the first electronic device is connected to the second electronic device. The first electronic device is configured to perform the steps performed by the first electronic device in the method described in the foregoing embodiments shown in fig. 2 to 11, and the second electronic device is configured to perform the steps performed by the second electronic device in the method described in the foregoing embodiments shown in fig. 2 to 11.
Embodiments of the present application also provide a computer program product, which when run on a computer, causes the computer to execute the steps performed by the first electronic device in the method described in the foregoing embodiments shown in fig. 2 to 11, or causes the computer to execute the steps performed by the second electronic device in the method described in the foregoing embodiments shown in fig. 2 to 11.
Further provided in embodiments of the present application is a circuit system, where the circuit system includes a processing circuit, and the processing circuit is configured to execute steps executed by a first electronic device in the method described in the foregoing embodiments shown in fig. 2 to 11, or execute steps executed by a second electronic device in the method described in the foregoing embodiments shown in fig. 2 to 11.
The information processing apparatus or the electronic device provided in the embodiments of the present application may specifically be a chip. The chip includes a processing unit and a communication unit; the processing unit may be, for example, a processor, and the communication unit may be, for example, an input/output interface, a pin, or a circuit. The processing unit may execute the computer-executable instructions stored in the storage unit, so that the chip performs the steps performed by the first electronic device in the information processing methods described in the embodiments shown in fig. 2 to 11, or so that the chip in the electronic device performs the steps performed by the second electronic device in the information processing methods described in the embodiments shown in fig. 2 to 11. Optionally, the storage unit is a storage unit in the chip, such as a register or a cache; the storage unit may also be a storage unit located outside the chip in the electronic device, such as a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM).
Any of the aforementioned processors may be a general-purpose central processing unit, a microprocessor, an ASIC, or one or more integrated circuits configured to control the execution of the programs of the method of the first aspect.
It should be noted that the apparatus embodiments described above are merely illustrative. The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. In addition, in the accompanying drawings of the apparatus embodiments provided in the present application, the connection relationships between modules indicate that they have communication connections, which may be specifically implemented as one or more communication buses or signal lines.
From the foregoing description of the implementations, a person skilled in the art will clearly understand that the present application may be implemented by software plus necessary general-purpose hardware, and certainly may also be implemented by dedicated hardware, including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components, and the like. Generally, functions performed by a computer program can easily be implemented by corresponding hardware, and the specific hardware structures used to implement the same function may take various forms, such as analog circuits, digital circuits, or dedicated circuits. For the present application, however, a software program implementation is preferable in most cases. Based on such an understanding, the technical solutions of the present application may be embodied substantially in the form of a software product. The computer software product is stored in a readable storage medium, such as a floppy disk, a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to perform the methods described in the embodiments of the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When software is used for implementation, the embodiments may be implemented wholly or partially in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state disk (SSD)), or the like.

Claims (25)

1. An information processing method applied to a first electronic device, wherein the first electronic device is an Augmented Reality (AR) device, a Virtual Reality (VR) device or a Mixed Reality (MR) device, and the method comprises the following steps:
acquiring a first operation instruction;
in response to the first operation instruction, changing the rendering effect of an object located inside a first virtual area in a display interface, wherein a plurality of objects are displayed in the display interface, and the first virtual area is an area in the display interface.
2. The method of claim 1, wherein changing the rendering effect of the object in the presentation interface located inside the first virtual area comprises:
controlling an object in the presentation interface located inside the first virtual area to be invisible; or,
performing transparency processing on the object located inside the first virtual area in the presentation interface.
3. The method of claim 1, further comprising:
acquiring a second operation instruction, and controlling a first object associated with the first virtual area in the display interface to rotate in response to the second operation instruction, wherein the first object is any one of the following three objects:
an object in the presentation interface that intersects a boundary of the first virtual area;
an object in the presentation interface located inside the first virtual area; or,
an object in the presentation interface located outside the first virtual area.
4. The method of claim 1, further comprising:
acquiring a third operation instruction, and in response to the third operation instruction, enlarging the first virtual area and changing the rendering effect of an object located inside the enlarged first virtual area in the presentation interface; or,
acquiring a third operation instruction, and in response to the third operation instruction, reducing the first virtual area and changing the rendering effect of an object located inside the reduced first virtual area in the presentation interface.
5. The method according to any one of claims 1 to 4, wherein a ray is further displayed in the presentation interface, and the ray is used for performing a selection operation on an object outside the first virtual area.
6. The method according to any one of claims 1 to 4, wherein the first virtual area comprises an area within a preset range in front of the first electronic device.
7. An information processing method applied to a second electronic device, the method comprising:
acquiring a first operation instruction corresponding to a first operation;
sending the first operation instruction to a first electronic device, wherein the first electronic device is an Augmented Reality (AR) device, a Virtual Reality (VR) device or a Mixed Reality (MR) device, the first operation instruction is used for instructing the first electronic device to change the rendering effect of an object located inside a first virtual area in a display interface of the first electronic device, a plurality of objects are displayed in the display interface of the first electronic device, and the first virtual area is an area in the display interface of the first electronic device.
8. The method of claim 7, further comprising:
acquiring a second operation instruction corresponding to a second operation;
sending the second operation instruction to the first electronic device, where the second operation instruction is used to instruct the first electronic device to control a first object associated with the first virtual area in a display interface of the first electronic device to rotate, and the first object is any one of the following three objects:
an object in a display interface of the first electronic device that intersects a boundary of the first virtual area;
an object located inside the first virtual area in a presentation interface of the first electronic device; or,
an object located outside the first virtual area in the presentation interface of the first electronic device.
9. The method according to claim 7 or 8, characterized in that the method further comprises:
acquiring a third operation instruction corresponding to a third operation;
and sending the third operation instruction to the first electronic device, wherein the third operation instruction is used for instructing the first electronic device to enlarge or reduce the first virtual area.
10. An information processing method applied to a first electronic device, wherein the first electronic device is an Augmented Reality (AR) device, a Virtual Reality (VR) device or a Mixed Reality (MR) device, and the method comprises the following steps:
acquiring a second operation instruction;
responding to the second operation instruction, controlling a first object associated with a first virtual area in a display interface to rotate, wherein a plurality of objects are displayed in the display interface, the first virtual area is an area in the display interface, and the first object is any one of the following three objects:
an object in the presentation interface that intersects a boundary of the first virtual area;
an object in the presentation interface located inside the first virtual area; or,
an object in the presentation interface located outside the first virtual area.
11. The method of claim 10, wherein the first virtual area comprises an area within a preset range in front of the first electronic device.
12. An information processing apparatus, applied to a first electronic device, which is an Augmented Reality (AR) device, a Virtual Reality (VR) device, or a Mixed Reality (MR) device, the apparatus comprising:
the acquisition module is used for acquiring a first operation instruction;
and the changing module is configured to change, in response to the first operation instruction, the rendering effect of an object located inside a first virtual area in the display interface, wherein a plurality of objects are displayed in the display interface, and the first virtual area is an area in the display interface.
13. The apparatus according to claim 12, wherein the changing module is specifically configured to control that an object located inside the first virtual area in the presentation interface is not visible; or, performing transparency processing on the object located in the first virtual area in the display interface.
14. The apparatus of claim 12, further comprising:
the rotating module is used for acquiring a second operation instruction, and controlling a first object associated with the first virtual area in the display interface to rotate in response to the second operation instruction, wherein the first object is any one of the following three objects:
an object in the presentation interface that intersects a boundary of the first virtual area;
an object in the presentation interface located inside the first virtual area; or,
an object in the presentation interface located outside the first virtual area.
15. The apparatus of claim 12,
the apparatus further comprises: an updating module, configured to acquire a third operation instruction, and in response to the third operation instruction, enlarge the first virtual area and change the rendering effect of an object located inside the enlarged first virtual area in the presentation interface; or,
the apparatus further comprises: an updating module, configured to acquire a third operation instruction, and in response to the third operation instruction, reduce the first virtual area and change the rendering effect of an object located inside the reduced first virtual area in the presentation interface.
16. The apparatus according to any one of claims 12 to 15, wherein a ray is further displayed in the presentation interface, and the ray is used for performing a selection operation on an object outside the first virtual area.
17. The apparatus according to any one of claims 12 to 15, wherein the first virtual area comprises an area within a preset range in front of the first electronic device.
18. An information processing apparatus, applied to a second electronic device, comprising:
the acquisition module is used for acquiring a first operation instruction corresponding to a first operation;
a sending module, configured to send the first operation instruction to a first electronic device, where the first electronic device is an augmented reality AR device, a virtual reality VR device, or a mixed reality MR device, the first operation instruction is used to instruct the first electronic device to change a rendering effect of an object located inside a first virtual area in a display interface of the first electronic device, where a plurality of objects are displayed in the display interface of the first electronic device, and the first virtual area is an area in the display interface of the first electronic device.
19. The apparatus of claim 18,
the acquisition module is further used for acquiring a second operation instruction corresponding to a second operation;
the sending module is further configured to send the second operation instruction to the first electronic device, where the second operation instruction is used to instruct the first electronic device to control a first object associated with the first virtual area in a display interface of the first electronic device to rotate, and the first object is any one of the following three objects:
an object in a display interface of the first electronic device that intersects a boundary of the first virtual area;
an object located inside the first virtual area in a presentation interface of the first electronic device; or,
an object located outside the first virtual area in the presentation interface of the first electronic device.
20. The apparatus of claim 18 or 19,
the obtaining module is further configured to obtain a third operation instruction corresponding to a third operation;
the sending module is further configured to send the third operation instruction to the first electronic device, where the third operation instruction is used to instruct the first electronic device to enlarge or reduce the first virtual area.
21. An information processing apparatus, applied to a first electronic device, which is an Augmented Reality (AR) device, a Virtual Reality (VR) device, or a Mixed Reality (MR) device, the apparatus comprising:
the acquisition module is used for acquiring a second operation instruction;
a rotation module, configured to control, in response to the second operation instruction, a first object associated with a first virtual area in a display interface to rotate, where multiple objects are displayed in the display interface, the first virtual area is an area in the display interface, and the first object is any one of the following three types:
an object in the presentation interface that intersects a boundary of the first virtual area;
an object in the presentation interface located inside the first virtual area; or,
an object in the presentation interface located outside the first virtual area.
22. The apparatus of claim 21, wherein the first virtual area comprises an area within a preset range in front of the first electronic device.
23. An electronic device comprising a processor coupled to a memory, the memory storing program instructions that, when executed by the processor, implement the method of any of claims 1 to 6, or implement the method of any of claims 7 to 9, or implement the method of claim 10 or 11.
24. A computer-readable storage medium, characterized by comprising a program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 6, or causes the computer to perform the method of any one of claims 7 to 9, or causes the computer to perform the method of claim 10 or 11.
25. An information processing system, characterized in that the system comprises a first electronic device and a second electronic device, the first electronic device being an Augmented Reality (AR) device, a Virtual Reality (VR) device or a Mixed Reality (MR) device;
the first electronic device is connected with the second electronic device, the first electronic device is used for realizing the method of any one of claims 1 to 6, and the second electronic device is used for realizing the method of any one of claims 7 to 9.
CN202010901507.XA 2020-08-31 2020-08-31 Information processing method and related equipment Pending CN112181551A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010901507.XA CN112181551A (en) 2020-08-31 2020-08-31 Information processing method and related equipment

Publications (1)

Publication Number Publication Date
CN112181551A true CN112181551A (en) 2021-01-05

Family

ID=73924026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010901507.XA Pending CN112181551A (en) 2020-08-31 2020-08-31 Information processing method and related equipment

Country Status (1)

Country Link
CN (1) CN112181551A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113342433A (en) * 2021-05-08 2021-09-03 杭州灵伴科技有限公司 Application page display method, head-mounted display device and computer readable medium
CN116755563A (en) * 2023-07-14 2023-09-15 优奈柯恩(北京)科技有限公司 Interactive control method and device for head-mounted display equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9389703B1 (en) * 2014-06-23 2016-07-12 Amazon Technologies, Inc. Virtual screen bezel
CN106575154A (en) * 2014-07-25 2017-04-19 微软技术许可有限责任公司 Smart transparency for holographic objects
CN110456907A (en) * 2019-07-24 2019-11-15 广东虚拟现实科技有限公司 Control method, device, terminal device and the storage medium of virtual screen

Similar Documents

Publication Publication Date Title
JP6288084B2 (en) Display control device, display control method, and recording medium
JP2019079056A (en) Mobile terminal, method, program, and recording medium
WO2020063091A1 (en) Picture processing method and terminal device
US20140375587A1 (en) Method of controlling virtual object or view point on two dimensional interactive display
EP4246287A1 (en) Method and system for displaying virtual prop in real environment image, and storage medium
TWI493388B (en) Apparatus and method for full 3d interaction on a mobile device, mobile device, and non-transitory computer readable storage medium
JP7005161B2 (en) Electronic devices and their control methods
US20140075370A1 (en) Dockable Tool Framework for Interaction with Large Scale Wall Displays
WO2022161432A1 (en) Display control method and apparatus, and electronic device and medium
EP3304273B1 (en) User terminal device, electronic device, and method of controlling user terminal device and electronic device
WO2023061280A1 (en) Application program display method and apparatus, and electronic device
CN112181551A (en) Information processing method and related equipment
JP2012079065A (en) Electronic device, icon display method and program for electronic device
CN112363658B (en) Interaction method and device for video call
US10936148B1 (en) Touch interaction in augmented and virtual reality applications
EP2965164B1 (en) Causing specific location of an object provided to a device
WO2022142270A1 (en) Video playback method and video playback apparatus
WO2023236602A1 (en) Display control method and device for virtual object, and storage medium and electronic device
CN110494915B (en) Electronic device, control method thereof, and computer-readable medium
WO2020087504A1 (en) Screenshot interaction method, electronic device, and computer-readable storage medium
JP7005160B2 (en) Electronic devices and their control methods
JP2020046983A (en) Program, information processing apparatus, and method
JP6999822B2 (en) Terminal device and control method of terminal device
CN111782053A (en) Model editing method, device, equipment and storage medium
CN107077276B (en) Method and apparatus for providing user interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination