CN113608616A - Virtual content display method and device, electronic equipment and storage medium - Google Patents

Virtual content display method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN113608616A
CN113608616A (application CN202110914654.5A)
Authority
CN
China
Prior art keywords
virtual content
motion state
display mode
motion
target display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110914654.5A
Other languages
Chinese (zh)
Inventor
卢金莲
韦豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TetrasAI Technology Co Ltd
Original Assignee
Shenzhen TetrasAI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TetrasAI Technology Co Ltd filed Critical Shenzhen TetrasAI Technology Co Ltd
Priority to CN202110914654.5A priority Critical patent/CN113608616A/en
Publication of CN113608616A publication Critical patent/CN113608616A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to a method and an apparatus for displaying virtual content, an electronic device, and a storage medium, wherein the method includes: acquiring a motion state of a first device in response to a motion operation for the first device; determining a target display mode of the virtual content based on the motion state, wherein the target display mode is matched with the motion state; and displaying the virtual content according to the target display mode.

Description

Virtual content display method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for displaying virtual content, an electronic device, and a storage medium.
Background
Virtual content is virtually generated information, which can simulate the real world or content existing in the real world to provide users with a better virtual experience.
However, to interact with virtual content, users can often obtain corresponding feedback only by clicking, which is inconvenient and degrades the virtual experience.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure proposes a display scheme of virtual content.
According to an aspect of the present disclosure, there is provided a method of displaying virtual content, including:
acquiring a motion state of a first device in response to a motion operation for the first device; determining a target display mode of the virtual content based on the motion state, wherein the target display mode is matched with the motion state; and displaying the virtual content according to the target display mode.
In the embodiments of the present disclosure, the motion state of the first device may be acquired in response to a motion operation performed on the first device, and a target display mode of the virtual content that matches the motion state may be determined based on that motion state, so that the virtual content is displayed in the target display mode. Through this process, the display method and apparatus for virtual content, the electronic device, and the storage medium provided by the embodiments of the present disclosure can adjust the display mode of the virtual content based on the motion state of the first device, using the matching relationship between the target display mode and the motion state. Interaction with the virtual content thus becomes more convenient, the ways of interacting with the virtual content are enriched, and the user's virtual experience is improved.
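The three-step method described above can be sketched as a small pipeline. This is an illustrative assumption of how the steps might fit together; the function names, state representation, and mode names below are not from the disclosure.

```python
# Hypothetical sketch of the claimed method: acquire the motion state of the
# first device, determine a matching target display mode, then display the
# virtual content in that mode.

def acquire_motion_state(motion_operation):
    # In practice this would be derived from sensor data; here the motion
    # operation itself ("move", "rotate", "flick") stands in for the state.
    return {"type": motion_operation}

def determine_target_display_mode(motion_state):
    # The target display mode is matched with the motion state.
    matching = {
        "move": "dynamic_move",
        "rotate": "dynamic_rotate",
        "flick": "switch_content",
    }
    return matching.get(motion_state["type"], "static")

def display_virtual_content(content, mode):
    return f"displaying {content!r} via {mode}"

state = acquire_motion_state("flick")
mode = determine_target_display_mode(state)
result = display_virtual_content("virtual news page", mode)
```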
In one possible implementation manner, the determining a target display manner of the virtual content based on the motion state includes: determining a dynamic display mode matched with the motion state as a target display mode of the virtual content based on the motion state; and/or determining a display switching mode matched with the motion state as a target display mode of the virtual content based on the motion state.
Through the embodiments of the present disclosure, a dynamic display mode and/or a display switching mode matched with the motion state can serve as the target display mode. This enriches the types of target display modes and therefore the ways of interacting with the virtual content, improving the virtual experience.
In one possible implementation, the motion state includes: movement and/or rotation; and the determining, based on the motion state, a dynamic display mode matched with the motion state as a target display mode of the virtual content includes one or more of the following operations: determining a second acceleration for dynamically displaying the virtual content as a target display mode of the virtual content based on a first acceleration of the movement of the first device, wherein the second acceleration is matched with the first acceleration; and/or determining a second offset angle for dynamically displaying the virtual content as a target display mode of the virtual content based on a first offset angle of the rotation of the first device, wherein the second offset angle is matched with the first offset angle.
Through the embodiment of the disclosure, the dynamic display mode can be flexibly selected according to the type of the motion state, and the acceleration, the offset angle and the like of the target display mode are matched with the motion state of the first device, so that the flexibility of interaction with the virtual content is enriched, and the experience feeling in the interaction process can be further improved by utilizing the matching of the acceleration and the offset angle.
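One simple way to realize the acceleration and offset-angle matching above is a proportional mapping from the device's first acceleration and first offset angle to the content's second acceleration and second offset angle. The gain values below are illustrative assumptions, not values from the patent.

```python
# Assumed proportional matching between the first device's motion and the
# virtual content's dynamic display parameters.

ACCEL_GAIN = 0.5   # second acceleration = ACCEL_GAIN * first acceleration
ANGLE_GAIN = 2.0   # second offset angle = ANGLE_GAIN * first offset angle

def match_acceleration(first_acceleration):
    # Second acceleration of the dynamically displayed virtual content.
    return ACCEL_GAIN * first_acceleration

def match_offset_angle(first_offset_angle):
    # Second offset angle of the dynamically displayed virtual content.
    return ANGLE_GAIN * first_offset_angle
```

With these gains the content can track the device either attenuated or amplified, which is one concrete reading of "matched but not necessarily identical".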
In one possible implementation, the motion state includes: a flick state; and the determining, based on the motion state, a display switching mode matched with the motion state as a target display mode of the virtual content includes: determining a display switching mode of the virtual content as the target display mode of the virtual content based on the flick state of the first device, wherein the display switching mode is matched with the flick state.
Through the embodiments of the present disclosure, the association relationship between the flick state and the display switching mode can be flexibly set, which further enriches the flexibility of interaction with the virtual content and improves the experience during interaction.
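A minimal sketch of a flick-driven display switching mode, assumed rather than taken from the disclosure: each detected flick advances to the next piece of virtual content, and the flick direction (a hypothetical attribute of the flick state) selects the switching direction.

```python
# Assumed display switching driven by the flick state: a rightward flick
# shows the next content page, a leftward flick the previous one.

def switch_content(pages, index, flick_direction):
    # Wrap around at both ends so switching never runs out of pages.
    step = 1 if flick_direction == "right" else -1
    return (index + step) % len(pages)

pages = ["page A", "page B", "page C"]
next_index = switch_content(pages, 0, "right")
```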
In a possible implementation manner, the acquiring the motion state of the first device includes: collecting motion information of the first device; determining a motion state of the first device based on the motion information.
By collecting the motion information of the first device and determining its motion state from that information, the motion operation performed on the first device can be converted into corresponding data, from which the motion state is then calculated. The motion state of the first device can thus be determined more accurately, which in turn allows the matching target display mode to be determined more accurately and improves the accuracy of the whole method.
In a possible implementation, the collecting motion information of the first device includes: acquiring IMU data of the first device through an inertial measurement unit (IMU) in the first device to obtain the motion information of the first device.
By acquiring the IMU data of the first device through its IMU to obtain the motion information, the IMU built into the first device can be used to collect the motion information conveniently. Various postures of the first device can be determined from the IMU data, yielding a variety of different motion states that match a variety of target display modes. This in turn produces multiple ways of interacting with the virtual content, improving the naturalness, convenience, and efficiency of interaction and further enhancing the virtual experience.
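The IMU collection step might look like the following hedged sketch. `read_imu()` is a placeholder for whatever accelerometer/gyroscope API the first device actually exposes; the field names and sample values are assumptions for illustration.

```python
# Hypothetical IMU sampling: each sample carries 3-axis acceleration (m/s^2)
# and 3-axis angular velocity (rad/s), the raw motion information of the
# first device.

def read_imu():
    # Stand-in for a real sensor read; returns one fixed sample here.
    return {"accel": (0.1, 0.0, 9.8), "gyro": (0.0, 0.02, 0.0)}

def collect_motion_info(num_samples=3):
    # The motion information is a short window of IMU samples.
    return [read_imu() for _ in range(num_samples)]
```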
In one possible implementation, the determining the motion state of the first device based on the motion information includes: determining an acceleration change state of the first device as the motion state of the first device based on acceleration information in the motion information; and/or determining an offset angle of the first device as the motion state of the first device based on angular velocity information in the motion information.
By determining the acceleration change state of the first device from the acceleration information, and/or its offset angle from the angular velocity information, a more accurate motion state of the first device can be determined through multiple means. This allows the target display mode of the virtual content to be determined more accurately later on, improving the accuracy and flexibility of the whole method.
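The two derivations above can be illustrated as follows; the threshold, the "steady"/"changing" labels, and the rectangular integration are assumptions made for the sketch, not details from the disclosure.

```python
# Assumed derivation of a motion state from motion information: the
# acceleration change state from successive acceleration samples, and the
# offset angle by integrating angular-velocity samples over time.

def acceleration_change_state(accel_samples, threshold=0.5):
    # "changing" if any consecutive pair of samples differs by more than
    # the threshold, otherwise "steady".
    deltas = [abs(b - a) for a, b in zip(accel_samples, accel_samples[1:])]
    return "changing" if any(d > threshold for d in deltas) else "steady"

def offset_angle(angular_velocities, dt):
    # Simple rectangular integration of angular velocity (rad/s) over a
    # fixed sampling interval dt (s) gives the accumulated offset angle.
    return sum(w * dt for w in angular_velocities)
```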
In a possible implementation, the displaying the virtual content in the target display mode includes: controlling a second device to display the virtual content in the target display mode.
By controlling the second device to display the virtual content in the target display mode, the device that displays the virtual content can be flexibly chosen according to the actual situation. This improves the flexibility of virtual content display, further enriches the ways of interacting with the virtual content, and improves the virtual experience.
In one possible implementation, the virtual content includes AR objects for presentation in an augmented reality AR, and/or VR objects for presentation in a virtual reality VR.
By the embodiment of the disclosure, the virtual content can be flexibly displayed in scenes such as virtual reality or augmented reality, the application range of the display method is expanded, and the practicability of the display method is improved.
According to an aspect of the present disclosure, there is provided a display apparatus of virtual content, including:
a motion state acquisition module, configured to acquire the motion state of a first device in response to a motion operation for the first device; a display mode determination module, configured to determine a target display mode of the virtual content based on the motion state, wherein the target display mode is matched with the motion state; and a display module, configured to display the virtual content in the target display mode.
In one possible implementation manner, the display manner determining module is configured to: determining a dynamic display mode matched with the motion state as a target display mode of the virtual content based on the motion state; and/or determining a display switching mode matched with the motion state as a target display mode of the virtual content based on the motion state.
In one possible implementation, the motion state includes: movement and/or rotation; the display mode determination module is further configured to: determining a second acceleration of the dynamic display of the virtual content as a target display mode of the virtual content based on a first acceleration of the movement of the first device, wherein the second acceleration is matched with the first acceleration; and/or determining a second offset angle for dynamically displaying the virtual content as a target display mode of the virtual content based on a first offset angle of the first device, wherein the second offset angle is matched with the first offset angle.
In one possible implementation, the motion state includes: a flick state; and the display mode determination module is further configured to: determine a display switching mode of the virtual content as the target display mode of the virtual content based on the flick state of the first device, wherein the display switching mode is matched with the flick state.
In one possible implementation manner, the motion state obtaining module is configured to: collecting motion information of the first device; determining a motion state of the first device based on the motion information.
In one possible implementation manner, the motion state obtaining module is further configured to: and acquiring IMU data of the first equipment according to an inertial measurement unit IMU in the first equipment to obtain the motion information of the first equipment.
In one possible implementation manner, the motion state obtaining module is further configured to: determining an acceleration change state of the first device as a motion state of the first device based on acceleration information in the motion information; and/or determining the offset angle of the first equipment as the motion state of the first equipment based on the angular speed information in the motion information.
In one possible implementation, the display module is configured to: control a second device to display the virtual content in the target display mode.
In one possible implementation, the virtual content includes AR objects for presentation in an augmented reality AR, and/or VR objects for presentation in a virtual reality VR.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the present disclosure, the motion state of the first device may be acquired in response to a motion operation performed on the first device, and a target display mode of the virtual content that matches the motion state may be determined based on that motion state, so that the virtual content is displayed in the target display mode. Through this process, the display method and apparatus for virtual content, the electronic device, and the storage medium provided by the embodiments of the present disclosure can adjust the display mode of the virtual content based on the motion state of the first device, using the matching relationship between the target display mode and the motion state. Interaction with the virtual content thus becomes more convenient, the ways of interacting with the virtual content are enriched, and the user's virtual experience is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 illustrates a flowchart of a display method of virtual content according to an embodiment of the present disclosure.
Fig. 2 illustrates a flowchart of a display method of virtual content according to an embodiment of the present disclosure.
Fig. 3 illustrates a flowchart of a display method of virtual content according to an embodiment of the present disclosure.
Fig. 4 illustrates a block diagram of a display device of virtual content according to an embodiment of the present disclosure.
Fig. 5 shows a schematic diagram of an application example according to the present disclosure.
Fig. 6 shows a schematic diagram of an application example according to the present disclosure.
Fig. 7 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure.
Fig. 8 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 illustrates a flowchart of a display method of virtual content according to an embodiment of the present disclosure. The method may be applied to a display apparatus of virtual content, which may be a terminal device, a server, or another processing device. The terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like.
In some possible implementations, the method of displaying the virtual content may also be implemented by a processor calling computer readable instructions stored in a memory.
As shown in fig. 1, in one possible implementation manner, the method for displaying virtual content may include:
in step S11, in response to the moving operation for the first device, the moving state of the first device is acquired.
The virtual content may be virtually generated information content. How the virtual content is generated, and in which application scenario it is used, may be flexibly selected according to the actual situation and are not limited to the following disclosed embodiments.
In some possible implementations, the virtual content may include an Augmented Reality (AR) object for presentation, which may be fused with the real world to augment it. The implementation form of the AR object is not limited in the embodiments of the present disclosure; it may be any of various types of virtual information content, such as text, images, three-dimensional models, music, or videos, or a mixture of these types. For example, in one example, the virtual content may be a virtual news page for presentation in AR.
In some possible implementations, the virtual content may include a Virtual Reality (VR) object for presentation, which may simulate the real world or an object in the real world. The implementation form of the VR object is likewise not limited in the embodiments of the present disclosure; the various implementation forms of the AR object may be referred to, and details are not repeated here.
In some possible implementations, the virtual content may include both an AR object and a VR object, and the AR object and the VR object may be implemented in the same or different manners.
The first device may be any device that interacts with the virtual content, and the implementation manner of the first device may also be flexibly changed according to the difference of the virtual content. In a possible implementation manner, in the case that the virtual content is an AR object, the first device may be a device for scanning or capturing the real world, such as an image acquisition apparatus like a camera for scanning the real world, or a terminal device like a mobile phone, a tablet computer, or AR glasses with an AR function. In a possible implementation manner, in a case that the virtual content is a VR object, the first device may be a VR handheld terminal or a VR interaction device such as VR glasses.
In some possible implementation manners, the first device may be the same device as the display apparatus for virtual content, such as the same mobile phone or AR terminal, that implements the method provided by the embodiment of the present disclosure; in some possible implementations, the first device may also be a different device from the display apparatus of the virtual content for implementing the method provided by the embodiment of the present disclosure, for example, the first device may be an AR terminal device or a VR terminal device, and the display apparatus of the virtual content is an arithmetic device such as a server.
The motion operation for the first device may be any operation that controls the first device to move, and is not limited to the following disclosed embodiments. In some possible implementations, the motion operation for the first device may include one or more of moving, rotating, or shaking.
The motion state of the first device may be the state the first device assumes while moving under a motion operation; the motion state is therefore related to the motion operation performed on the first device. For example, moving the first device causes it to assume a moving motion state; rotating the first device causes it to assume a rotating motion state; shaking the first device causes it to assume a shaking motion state; and so on.
How to obtain the motion state of the first device can be flexibly determined according to the actual situation. In some possible implementations, images of the first device during its motion may be acquired and image recognition performed to determine the motion state. In some possible implementations, the motion state may be obtained from sensing information acquired during the motion by a sensor built into the first device. In some possible implementations, sensing information of the first device during the motion may be collected by sensors located outside the first device to obtain its motion state. Various implementations of obtaining the motion state of the first device are described in the following disclosed embodiments and are not expanded upon here.
In step S12, the target display mode of the virtual content is determined based on the motion state.
The target display mode may be the mode in which the virtual content is to be displayed, and it may change as the motion state changes. In some possible implementations, the target display mode may be matched with the motion state; the matching relationship between the two is not limited in the embodiments of the present disclosure and may be flexibly determined according to the actual situation, not being limited to the following disclosed embodiments.
In some possible implementations, the target display mode may be consistent with the motion state; for example, the virtual content may be dynamically displayed in a manner consistent with the motion state of the first device.
In some possible implementations, the target display mode may instead be similar to the motion state. For example, in a dynamic display of virtual content, the dynamic process may resemble but differ from the motion of the first device: if the first device moves at speed A, the virtual content may be dynamically displayed moving at speed B; if the first device rotates by angle C, the virtual content may be dynamically displayed rotating by angle D; and so on.
In some possible implementations, the matching between the target display mode and the motion state may be some association relationship between the two. For example, when the first device moves in a preset manner E, the target display mode may be a display mode F associated with manner E. In some possible implementations, when the motion state of the first device is a flick state, the target display mode may be set to a display content switching mode; when the moving speed of the first device reaches a preset threshold, the target display mode may be set to a blurred display; and so on. The association can be set flexibly according to the actual situation and is not limited to the following disclosed embodiments.
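The association relationships in this paragraph can be encoded as a simple lookup with a speed threshold. The state representation, mode names, and threshold value below are assumptions for illustration only.

```python
# One assumed encoding of the associations described above: a flick maps to
# display content switching, and a movement speed above a preset threshold
# maps to a blurred display; anything else displays normally.

SPEED_BLUR_THRESHOLD = 2.0  # m/s, illustrative value

def target_display_mode(motion_state):
    if motion_state.get("flick"):
        return "switch_display_content"
    if motion_state.get("speed", 0.0) >= SPEED_BLUR_THRESHOLD:
        return "blurred_display"
    return "normal_display"
```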
The implementation of determining the target display mode based on the motion state can vary flexibly with the motion state and the target display mode; it is described in detail in the following disclosed embodiments and not expanded upon here.
In some possible implementations, the target display mode of the virtual content may be determined only when the motion state reaches a certain preset state threshold, so as to reduce interference caused by accidental movement of the first device. For example, the target display mode may be determined when the moving distance of the first device exceeds a preset distance threshold, when its rotation angle exceeds a preset angle threshold, or when its swing amplitude exceeds a preset amplitude threshold. The values of these preset state thresholds, such as the preset distance threshold, the preset angle threshold, or the preset amplitude threshold, are not limited in the embodiments of the present disclosure and can be determined flexibly according to the actual situation.
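The threshold gating described above might look like the following sketch; the threshold values and the dictionary-based representation are assumptions, since the disclosure leaves them to the actual situation.

```python
# Assumed gating: the display mode is only updated once the motion exceeds
# its preset state threshold, filtering out accidental small movements.

THRESHOLDS = {
    "distance": 0.05,   # m, illustrative preset distance threshold
    "angle": 5.0,       # degrees, illustrative preset angle threshold
    "amplitude": 0.1,   # illustrative preset swing amplitude threshold
}

def motion_exceeds_threshold(kind, value):
    return value >= THRESHOLDS[kind]

def maybe_update_display_mode(kind, value, current_mode, new_mode):
    # Keep the current display mode unless the motion is large enough
    # to count as a deliberate motion operation.
    return new_mode if motion_exceeds_threshold(kind, value) else current_mode
```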
In step S13, the virtual content is displayed in the target display mode.
In step S13, the virtual content may be displayed in the target display mode determined in step S12. The display position of the virtual content may be flexibly selected according to the actual situation; for example, the virtual content may be displayed on a display interface of the first device or of another device, and is not limited to the following embodiments.
In one possible implementation, step S13 may include: and controlling the second equipment to display the virtual content according to the target display mode.
The second device may be any device for displaying virtual content. In some possible implementations, the second device may be the same device as the first device, such as a mobile phone, a tablet computer, or a terminal device with an AR function such as AR glasses mentioned in the foregoing disclosed embodiments, or a VR interaction device such as VR glasses. In some possible implementations, the second device may also be a different device from the first device. For example, in an AR scenario, the first device may be a camera for scanning a real scene, and the second device may be a display apparatus for displaying the real scene and the virtual content, such as a display screen or a projection device; or, in a VR scenario, the first device may be a VR handheld controller used for interaction, and the second device may be a VR display device for displaying the virtual content.
As described in the foregoing embodiments, the first device may be the same as or different from the display device of the virtual content, and similarly, the second device may also be the same as or different from the display device of the virtual content.
The process of controlling the second device to display the virtual content in the target display mode may be flexibly determined according to the actual situation; for example, a control command and the virtual content may be sent to the second device to instruct it to display the virtual content in the target display mode.
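A hedged illustration of the control command mentioned above: the display apparatus serializes an instruction carrying the content identifier and the target display mode and sends it to the second device. The message format and field names are entirely hypothetical.

```python
# Assumed control message instructing a second device to display virtual
# content in a given target display mode; in a real system this string would
# be sent over whatever channel links the two devices.

import json

def build_display_command(content_id, target_display_mode):
    return json.dumps({
        "command": "display_virtual_content",
        "content_id": content_id,
        "display_mode": target_display_mode,
    })

msg = build_display_command("virtual_news_page", "dynamic_move")
```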
By controlling the second device to display the virtual content in the target display manner, the display device for the virtual content can be flexibly selected according to actual conditions, which improves the flexibility of displaying the virtual content, further enriches the ways of interacting with the virtual content, and improves the virtual experience.
In the embodiments of the present disclosure, the motion state of the first device may be acquired in response to a motion operation for the first device, and a target display manner in which the virtual content matches the motion state may be determined based on the motion state, so that the virtual content is displayed in the target display manner. Through this process, the display method and apparatus of virtual content, the electronic device, and the storage medium provided by the embodiments of the present disclosure can adjust the display manner of the virtual content based on the motion state of the first device by using the matching relationship between the target display manner and the motion state, so that interaction with the virtual content can be performed more conveniently, the ways of interacting with the virtual content are enriched, and the virtual experience of the user is improved.
Fig. 2 shows a flowchart of a display method of virtual content according to an embodiment of the present disclosure. As shown in the figure, in one possible implementation manner, the acquiring of the motion state of the first device in step S11 may include:
step S111, collecting motion information of the first device;
in step S112, the motion state of the first device is determined based on the motion information.
The motion information of the first device may be motion information generated by the first device moving under a motion operation, and information content included in the motion information may be flexibly determined according to an actual situation, and is not limited to the following disclosure embodiments. In one possible implementation, the motion information may include one or more of motion velocity, acceleration, and angular velocity of the first device.
The mode of acquiring the motion information of the first device may be flexibly determined according to actual conditions, and as described in the above disclosed embodiments, the motion information may be acquired by a sensor inside or outside the first device, or determined by performing image acquisition on the motion process of the first device, and the like.
In one possible implementation, step S111 may include: collecting IMU data of the first device through an inertial measurement unit (IMU) in the first device to obtain the motion information of the first device.
An Inertial Measurement Unit (IMU) is a device for measuring the three-axis attitude angles (or angular rates) and acceleration of an object, and the data collected by the IMU may be referred to as IMU data. The data included in the IMU data may be flexibly determined according to actual situations, and may include, for example, acceleration information and/or angular velocity information. In one example, the IMU may contain three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect the acceleration signals of the object along three independent axes of the carrier coordinate system, and the gyroscopes detect the angular velocity signals of the carrier relative to the navigation coordinate system; by measuring the angular velocity and acceleration of the object in three-dimensional space, the attitude of the object can be solved.
The acquired IMU data may be used as motion information of the first device for determining the motion state of the first device in step S112.
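A minimal sketch of reducing one raw IMU sample to scalar motion information, assuming a simple six-value sample layout (all names below are illustrative, not part of the disclosure):

```python
import math
from dataclasses import dataclass

@dataclass
class ImuSample:
    ax: float  # accelerometer x (m/s^2)
    ay: float  # accelerometer y (m/s^2)
    az: float  # accelerometer z (m/s^2)
    gx: float  # gyroscope x (rad/s)
    gy: float  # gyroscope y (rad/s)
    gz: float  # gyroscope z (rad/s)

def to_motion_info(sample):
    """Reduce one IMU sample to scalar motion information: the magnitudes
    of the linear acceleration and the angular velocity."""
    accel = math.sqrt(sample.ax**2 + sample.ay**2 + sample.az**2)
    gyro = math.sqrt(sample.gx**2 + sample.gy**2 + sample.gz**2)
    return accel, gyro
```

A real implementation would also remove the gravity component from the accelerometer readings before using them as motion information.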
By collecting IMU data of the first device through the IMU in the first device to obtain the motion information of the first device, the IMU in the first device can be used to conveniently collect the motion information of the first device. Various postures of the first device can be determined based on the IMU data, so that a variety of different motion states can be determined and matched with a variety of target display manners, thereby deriving a variety of ways of interacting with the virtual content, improving the naturalness, convenience, and efficiency of the interaction, and further enhancing the virtual experience.
In step S112, since the information content included in the motion information can vary flexibly, the manner of determining the motion state of the first device based on the motion information can also vary accordingly. In some possible implementations, the motion state of the first device may be determined to be movement if the motion information reflects a change in acceleration; the motion state may be determined to be rotation if the motion information reflects a change in angular velocity; and the motion state may be determined to be a swing state if the motion information reflects changes in both acceleration and angular velocity. In some possible implementations, a more precise motion state may be obtained based on the data in the motion information.
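The coarse classification described above — movement on acceleration change, rotation on angular velocity change, and a swing state on both — might be sketched as follows (the thresholds and names are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class MotionInfo:
    acceleration_change: float      # magnitude of acceleration change (m/s^2)
    angular_velocity_change: float  # magnitude of angular velocity change (rad/s)

def classify_motion_state(info, accel_threshold=0.5, gyro_threshold=0.3):
    """Coarse classification: acceleration change alone -> movement,
    angular velocity change alone -> rotation, both -> swing."""
    accel_changed = abs(info.acceleration_change) > accel_threshold
    gyro_changed = abs(info.angular_velocity_change) > gyro_threshold
    if accel_changed and gyro_changed:
        return "swing"
    if accel_changed:
        return "move"
    if gyro_changed:
        return "rotate"
    return "static"
```

The thresholds would need tuning per device to separate deliberate motion operations from hand tremor.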
In one possible implementation, step S112 may include:
determining, based on the acceleration information in the motion information, the acceleration change state of the first device as the motion state of the first device; and/or,
based on the angular velocity information in the motion information, an offset angle of the first device is determined as a motion state of the first device.
The acceleration information in the motion information may include accelerations of the first device at a plurality of time points during the motion process, a change rate of the acceleration, and the like.
The method for determining the acceleration change state of the first device based on the acceleration information may be flexibly determined according to actual conditions. For example, an acceleration change curve of the first device may be drawn from the acceleration information of the first device at multiple moments during the motion, and the acceleration change curve may be used as the acceleration change state of the first device; or the acceleration information of the first device at multiple moments during the motion may be paired with the corresponding moments to form an acceleration change table, which may be used as the acceleration change state of the first device.
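The acceleration change table and its per-interval change rates might be built as follows (a minimal sketch; the names are illustrative):

```python
def acceleration_change_table(timestamps, accelerations):
    """Pair each sampling instant with its acceleration reading to form
    the acceleration change table described above."""
    if len(timestamps) != len(accelerations):
        raise ValueError("timestamps and accelerations must have equal length")
    return list(zip(timestamps, accelerations))

def acceleration_change_rates(timestamps, accelerations):
    """Approximate the rate of change of acceleration between
    consecutive samples (finite differences)."""
    samples = acceleration_change_table(timestamps, accelerations)
    return [
        (a2 - a1) / (t2 - t1)
        for (t1, a1), (t2, a2) in zip(samples, samples[1:])
    ]
```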
The acceleration change state of the first device may be directly used as the motion state of the first device, or it may be determined whether the first device moves or swings based on the acceleration change state, and the acceleration change state and the determination result may be used as the motion state of the first device.
The angular velocity information in the motion information may include angular velocities of the first device at a plurality of time points during the motion, angular velocities in different directions, and the like.
The method for determining the offset angle of the first device based on the angular velocity information may be flexibly determined according to actual conditions, for example, the offset angle of the first device in the three-dimensional space may be determined by a calculation method such as a trigonometric function according to the angular velocities in the plurality of directions in the IMU data.
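One simple way to estimate the offset angle from angular velocity samples is numerical integration over a fixed sampling interval; the trigonometric computation mentioned above may be more elaborate in practice, so this is only an illustrative sketch:

```python
import math

def offset_angles_deg(angular_velocity_samples, dt):
    """Integrate per-axis angular velocities (rad/s), sampled at a fixed
    interval dt (s), to estimate the device's offset angle per axis,
    returned in degrees."""
    angles = [0.0, 0.0, 0.0]
    for wx, wy, wz in angular_velocity_samples:
        angles[0] += wx * dt
        angles[1] += wy * dt
        angles[2] += wz * dt
    return tuple(math.degrees(a) for a in angles)
```

Real attitude estimation would typically fuse gyroscope and accelerometer data to limit integration drift.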
The offset angle of the first device may be directly used as the motion state of the first device, or whether the first device is rotating or swinging may be determined based on the offset angle, and the offset angle and the determination result may be used together as the motion state of the first device.
By determining the acceleration change state of the first device as the motion state based on the acceleration information, and/or determining the offset angle of the first device as the motion state based on the angular velocity information, a more accurate motion state of the first device can be determined in multiple ways, so that the target display manner of the virtual content can subsequently be determined more accurately, improving the accuracy and flexibility of the whole method.
By collecting the motion information of the first equipment, the motion state of the first equipment is determined according to the motion information, the motion operation aiming at the first equipment can be converted into corresponding information data, and then the motion state of the first equipment is calculated, so that the motion state of the first equipment can be more accurately determined, a target display mode matched with the motion state is more accurately determined, and the accuracy of the whole method is improved.
Fig. 3 shows a flowchart of a display method of virtual content according to an embodiment of the present disclosure, and as shown in the figure, in one possible implementation, step S12 may include:
step S121, determining, based on the motion state, a dynamic display mode matched with the motion state as the target display mode of the virtual content; and/or,
and step S122, determining a display switching mode matched with the motion state as a target display mode of the virtual content based on the motion state.
The dynamic display mode may be one in which the virtual content moves during display, such as moving or rotating on the display interface, or one in which its state changes, such as changing from clear to blurred or from large to small. In some possible implementations, the dynamic display mode being matched with the motion state may mean that the motion of the dynamic display is similar to or consistent with the motion state; for example, the moving speed or acceleration of the virtual content is the same as or proportional to that of the motion state, or the rotation angle of the virtual content is consistent with or proportional to the offset angle of the motion state. In some possible implementations, the dynamic display mode being matched with the motion state may also mean that the state-change manner of the dynamic display is similar to or consistent with the motion state; for example, the speed of the change from clear to blurred is consistent with or proportional to the acceleration in the motion state, or the speed at which the size changes is consistent with or proportional to the acceleration in the motion state.
The matching mode between the dynamic display mode and the motion state can be flexibly set according to actual situations, and is not limited to the embodiments of the present disclosure, and the implementation mode of step S121 may also change along with the difference of the matching modes. Some possible implementations of step S121 can be detailed in the following disclosure embodiments, which are not expanded here.
The display switching manner may be a manner in which the virtual content is switched during the display process, such as sliding, scrolling, or page switching of the virtual content. In some possible implementations, the display switching manner is matched with the motion state, and the switching process may be similar to or consistent with the motion state, for example, the speed of sliding, scrolling or page switching is the same as or proportional to the motion state, or the angle of switching during the switching process of the virtual content is consistent with or proportional to the offset angle of the motion state.
The display switching manner and the matching manner of the motion state can also be flexibly set according to the actual situation, and are not limited to the embodiments of the present disclosure, and the implementation manner of step S122 is also changed with the difference of the matching manner, which is described in the following embodiments of the present disclosure, and is not expanded here.
Through the embodiments of the disclosure, the dynamic display mode and/or the display switching mode matched with the motion state can be used as the target display mode, enriching the types of target display modes, thereby further enriching the ways of interacting with the virtual content and improving the virtual experience.
As described in the above disclosed embodiments, the motion state may include one or more of a moving, rotating, or swinging state. In one possible implementation, step S121 may include one or more of the following operations:
determining, based on the first acceleration of the movement of the first device, a second acceleration for dynamically displaying the virtual content as the target display mode of the virtual content, wherein the second acceleration is matched with the first acceleration; and/or,
and determining a second offset angle for dynamically displaying the virtual content as a target display mode of the virtual content based on the first offset angle rotated by the first device, wherein the second offset angle is matched with the first offset angle.
The first acceleration may be an acceleration generated by the first device during the movement, and in a possible implementation manner, the first acceleration may be an acceleration generated when the first device is in a moving state of movement. The first acceleration may be a fixed value or a variable value, and is not limited in the embodiment of the present disclosure. The manner of obtaining the first acceleration of the first device may be described in detail in the above embodiments, and is not described herein again.
Based on the first acceleration, a second acceleration at which the virtual content is dynamically displayed can be determined, wherein the way of dynamically displaying the virtual content can be flexibly changed, for example, the virtual content can be fixedly moved in one direction, or can be sequentially moved in multiple directions according to certain rules, or can be randomly moved in multiple directions; in some possible implementations, the dynamic display mode of the virtual content may also be not moving, but changing the state, such as changing the definition or changing the size in the above-mentioned embodiments.
The second acceleration may be an acceleration of the virtual content changing during the dynamic display process, for example, an acceleration of movement, an acceleration of state change, or the like. The second acceleration may be matched to the first acceleration, and the second acceleration may have the same value as the first acceleration, or the second acceleration may have a change pattern that is the same as the change pattern of the first acceleration.
In one example, determining, based on the first acceleration of the movement of the first device, the second acceleration for dynamically displaying the virtual content as the target display mode of the virtual content may be: determining a second change curve matched with the change curve of the first acceleration, and determining, based on the second change curve, the acceleration at which the virtual content changes from clear to blurred in the target display mode.
The first offset angle may be an angle offset by the first device during the movement, and in a possible implementation manner, may be an angle of rotation of the first device itself in a case where the first device is in a rotational movement state. The first offset angle may be a fixed value or a variable value, and is not limited in the embodiment of the present disclosure. The manner of obtaining the first offset angle of the first device may be detailed in the embodiments disclosed above, and is not described herein again.
Based on the first offset angle, a second offset angle at which the virtual content is dynamically displayed may be determined, where a manner of dynamically displaying the virtual content may be flexibly changed, for example, the virtual content may be rotated in a certain direction, or may be rotated back and forth within a certain angle range.
The second offset angle may be an angle at which the virtual content is offset during the dynamic display process, such as an angle of rotation, an angle of offset of a plane occurring in the displayed interface, or the like. The second offset angle is matched with the first offset angle, and may be the same as the first offset angle in value, or may be in a certain proportional relationship with the first offset angle.
In one example, the second offset angle for dynamically displaying the virtual content is determined based on the first offset angle of the first device rotation, and the target display mode of the virtual content may be: and determining a second offset angle in the process of rotating or offsetting the virtual content according to the first offset angle rotated by the first device.
In some possible implementations, the target display manner of the virtual content may be determined based on the first acceleration and the first offset angle, for example, the virtual content is rotated by the first offset angle while changing from clear to blurred according to the first acceleration; or rotating the virtual content according to the first offset angle, and determining a second acceleration of the rotation according to the first acceleration.
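A minimal sketch of matching the second acceleration and second offset angle to the first ones through a proportional relationship (a ratio of 1.0 reproduces the device motion exactly; all names are illustrative assumptions):

```python
def matched_display_params(first_acceleration, first_offset_angle,
                           accel_ratio=1.0, angle_ratio=1.0):
    """Derive the virtual content's second acceleration and second offset
    angle from the device's first acceleration and first offset angle,
    keeping each proportional to its counterpart."""
    return {
        "second_acceleration": first_acceleration * accel_ratio,
        "second_offset_angle": first_offset_angle * angle_ratio,
    }
```

The returned parameters could then drive either movement (e.g. rotation) or a state change (e.g. clear-to-blurred) of the virtual content, as described above.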
Through the embodiment of the disclosure, the dynamic display mode can be flexibly selected according to the type of the motion state, and the acceleration, the offset angle and the like of the target display mode are matched with the motion state of the first device, so that the flexibility of interaction with the virtual content is enriched, and the experience feeling in the interaction process can be further improved by utilizing the matching of the acceleration and the offset angle.
In one possible implementation, step S122 may include:
and determining a display switching mode of the virtual content as a target display mode of the virtual content based on the swing state of the first device, wherein the display switching mode is matched with the swing state.
The swing state may include the acceleration of the first device during the motion and/or the angular velocity of the first device during the motion. The acceleration and angular velocity in the swing state may be fixed values or variable values, which is not limited in the embodiments of the present disclosure. The manner of obtaining the acceleration and angular velocity of the swinging first device can be seen in the above embodiments and is not described here again.
Based on the swing state, the display switching manner of the virtual content may be determined, where the implementation form of the display switching manner may refer to the above disclosed embodiments and is not described here again. The display switching manner being matched with the swing state may mean that the display switching acceleration is consistent with or proportional to the acceleration in the swing state, or that the display switching direction is consistent with or proportional to the angular velocity in the swing state, and the like.
In one example, determining the display switching manner of the virtual content based on the swing state of the first device as the target display mode of the virtual content may be: switching the display page of the virtual content according to the direction in the swing state of the first device, while determining the acceleration for switching the display page based on the acceleration in the swing state.
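The page-switching example above might be sketched as follows (the direction values and names are illustrative assumptions):

```python
def switch_display_page(current_page, total_pages, swing_direction,
                        swing_acceleration, base_speed=1.0):
    """Choose the next page from the swing direction and scale the
    switching speed by the swing acceleration."""
    step = 1 if swing_direction == "down" else -1
    next_page = (current_page + step) % total_pages  # wrap around
    switch_speed = base_speed * max(swing_acceleration, 0.0)
    return next_page, switch_speed
```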
Through the embodiment of the disclosure, the incidence relation between the swing state and the display switching mode can be flexibly set, the flexibility of interaction with the virtual content is further enriched, and the experience feeling in the interaction process is improved.
The above disclosed embodiments are only some exemplary implementations, and in some possible implementations, the matching relationships between the motion states and the dynamic display modes and the display switching modes mentioned in the above disclosed embodiments may also be combined or exchanged with each other, so as to further enrich the interaction modes with the virtual content.
Fig. 4 shows a block diagram of a display apparatus 20 of virtual content according to an embodiment of the present disclosure, which, as shown in fig. 4, includes:
a motion state acquiring module 21, configured to acquire a motion state of the first device in response to a motion operation for the first device.
And a display mode determining module 22, configured to determine a target display mode of the virtual content based on the motion state, where the target display mode is matched with the motion state.
And the display module 23 is configured to display the virtual content according to the target display mode.
In one possible implementation manner, the display manner determining module is configured to: determining a dynamic display mode matched with the motion state as a target display mode of the virtual content based on the motion state; and/or determining a display switching mode matched with the motion state as a target display mode of the virtual content based on the motion state.
In one possible implementation, the motion state includes: movement and/or rotation; the display mode determination module is further configured to: determine, based on the first acceleration of the movement of the first device, a second acceleration for dynamically displaying the virtual content as a target display mode of the virtual content, wherein the second acceleration is matched with the first acceleration; and/or determine, based on the first offset angle of the rotation of the first device, a second offset angle for dynamically displaying the virtual content as a target display mode of the virtual content, wherein the second offset angle is matched with the first offset angle.
In one possible implementation, the motion state includes: a swing state; the display mode determination module is further configured to: determine, based on the swing state of the first device, a display switching manner of the virtual content as a target display mode of the virtual content, wherein the display switching manner is matched with the swing state.
In one possible implementation manner, the motion state obtaining module is configured to: collecting motion information of first equipment; based on the motion information, a motion state of the first device is determined.
In one possible implementation manner, the motion state obtaining module is further configured to: and acquiring IMU data of the first equipment according to an inertial measurement unit IMU in the first equipment to obtain the motion information of the first equipment.
In one possible implementation manner, the motion state obtaining module is further configured to: determining an acceleration change state of the first device as a motion state of the first device based on acceleration information in the motion information; and/or determining the offset angle of the first device as the motion state of the first device based on the angular velocity information in the motion information.
In one possible implementation, the display module is configured to: and controlling the second equipment to display the virtual content according to the target display mode.
In one possible implementation, the virtual content includes AR objects for presentation in an augmented reality AR, and/or VR objects for presentation in a virtual reality VR.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Application scenario example
Fig. 5 and fig. 6 are schematic diagrams illustrating an application example according to the present disclosure, and as shown in the drawings, the application example of the present disclosure proposes a method for displaying virtual content, which may be implemented in the application scenario of fig. 5 through the flow disclosed in fig. 6.
As shown in fig. 6, the method for displaying virtual content according to the application example of the present disclosure may include the following steps:
as shown in fig. 5, in an application example of the present disclosure, after the mobile phone starts the AR application, a current scene may be scanned by a camera on the mobile phone, and virtual news is displayed on the scanned current scene (table) as virtual content (the news content in the figure is subjected to mosaic processing).
When a swinging, moving, or rotating motion operation is performed on the mobile phone, the motion state of the mobile phone can be determined by collecting its IMU data, thereby determining the motion operation being performed; the display manner of the virtual news in the mobile phone then changes according to the motion state of the mobile phone, forming an interactive response to the motion operation. For example, when the mobile phone swings up and down, the virtual news can be switched and displayed according to the up-and-down swing; when the mobile phone moves rapidly, the virtual news moves along with the mobile phone and is displayed in a blurred state; and when the mobile phone rotates, the virtual news can automatically rotate to a suitable viewing angle according to the rotation angle of the mobile phone.
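The gesture-to-response mapping of this application example can be sketched as follows (the response labels are illustrative, not part of the disclosure):

```python
def news_display_response(motion_state, rotation_angle=0.0):
    """Map the phone's motion state to the virtual-news display response
    described in the application example."""
    if motion_state == "swing":
        return "switch_page"      # swing up/down -> switch the news page
    if motion_state == "move":
        return "follow_and_blur"  # rapid movement -> follow the phone, blurred
    if motion_state == "rotate":
        # rotate the news to a suitable viewing angle
        return f"rotate_to_{rotation_angle:.0f}_degrees"
    return "static"
```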
By the application example of the disclosure, different operations aiming at virtual news can be realized by controlling different gestures generated by the motion of the mobile phone, and the operations are not limited to the operation of touching the screen by fingers. Therefore, the interactive experience of the virtual contents such as the virtual news and the like is improved, the interactive mode is enriched, and the interaction with the virtual contents is more diversified.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principles and logic; due to space limitations, the details are not repeated in the present disclosure.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer readable code which, when run on a device, executes instructions for implementing a method as provided by any of the above embodiments.
Embodiments of the present disclosure also provide another computer program product for storing computer readable instructions, which when executed, cause a computer to perform the operations of the method provided by any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
The disclosure relates to the field of augmented reality, and aims to detect or identify relevant features, states, and attributes of a target object by means of various vision-related algorithms by acquiring image information of the target object in a real environment, so as to obtain an AR effect combining virtual and real content matched with specific applications. For example, the target object may relate to a face, a limb, a gesture, or an action associated with a human body, or a marker associated with an object, or a sand table, a display area, or a display item associated with a venue or a place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. Specific applications can not only relate to interactive scenes such as navigation, explanation, reconstruction, and virtual effect superposition display related to real scenes or articles, but also relate to special-effect processing related to people, such as interactive scenes including makeup beautification, body beautification, special-effect display, and virtual model display.
The detection or identification processing of the relevant characteristics, states and attributes of the target object can be realized through the convolutional neural network. The convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
Fig. 7 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like terminal.
Referring to fig. 7, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing state assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
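By way of a hedged illustration only (not part of the original disclosure), the following Python sketch shows how raw accelerometer and gyroscope samples from such a sensor assembly might be classified into coarse motion states. The function name and the threshold values are hypothetical; a real device would calibrate them empirically.

```python
import math

# Hypothetical thresholds; a real device would calibrate these empirically.
ACCEL_MOVE_THRESHOLD = 0.5   # m/s^2 beyond the gravity-compensated baseline
GYRO_ROTATE_THRESHOLD = 0.2  # rad/s

def classify_motion(accel, gyro):
    """Classify one IMU sample into a coarse motion state.

    accel and gyro are (x, y, z) tuples in m/s^2 and rad/s.
    Returns 'static', 'moving', 'rotating', or 'moving+rotating'.
    """
    accel_mag = math.sqrt(sum(a * a for a in accel))
    gyro_mag = math.sqrt(sum(g * g for g in gyro))
    # Subtract standard gravity so a device at rest reads near zero.
    linear = abs(accel_mag - 9.81)
    moving = linear > ACCEL_MOVE_THRESHOLD
    rotating = gyro_mag > GYRO_ROTATE_THRESHOLD
    if moving and rotating:
        return "moving+rotating"
    if moving:
        return "moving"
    if rotating:
        return "rotating"
    return "static"
```

For example, a device lying flat reports roughly `(0, 0, 9.81)` on the accelerometer and zero angular velocity, which this sketch classifies as `static`.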
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 8 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 8, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the Apple graphical-user-interface-based operating system (Mac OS X™), the multi-user, multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry can execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a software product, such as a software development kit (SDK), or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. A method for displaying virtual content, the method comprising:
acquiring a motion state of a first device in response to a motion operation for the first device;
determining a target display mode of the virtual content based on the motion state, wherein the target display mode is matched with the motion state;
and displaying the virtual content according to the target display mode.
2. The method of claim 1, wherein determining the target display mode of the virtual content based on the motion state comprises:
determining, based on the motion state, a dynamic display mode matched with the motion state as a target display mode of the virtual content; and/or
determining, based on the motion state, a display switching mode matched with the motion state as a target display mode of the virtual content.
3. The method of claim 2, wherein the motion state comprises: movement and/or rotation;
the determining, based on the motion state, a dynamic display mode matched with the motion state as a target display mode of the virtual content includes one or more of the following operations:
determining, based on a first acceleration of the movement of the first device, a second acceleration for dynamically displaying the virtual content as a target display mode of the virtual content, wherein the second acceleration is matched with the first acceleration; and/or
determining, based on a first offset angle of the first device, a second offset angle for dynamically displaying the virtual content as a target display mode of the virtual content, wherein the second offset angle is matched with the first offset angle.
4. The method according to claim 2 or 3, wherein the motion state comprises: a swing state;
the determining, based on the motion state, a display switching manner matched with the motion state as a target display manner of the virtual content includes:
and determining a display switching mode of the virtual content as a target display mode of the virtual content based on the swing state of the first device, wherein the display switching mode is matched with the swing state.
5. The method according to any one of claims 1 to 4, wherein the acquiring the motion state of the first device comprises:
collecting motion information of the first device;
determining a motion state of the first device based on the motion information.
6. The method of claim 5, wherein the collecting motion information of the first device comprises:
acquiring IMU data via an inertial measurement unit (IMU) in the first device to obtain the motion information of the first device.
7. The method of claim 5 or 6, wherein the determining the motion state of the first device based on the motion information comprises:
determining, based on acceleration information in the motion information, an acceleration change state of the first device as a motion state of the first device; and/or
determining, based on angular velocity information in the motion information, an offset angle of the first device as a motion state of the first device.
8. The method according to any one of claims 1 to 7, wherein the displaying the virtual content according to the target display mode comprises:
controlling a second device to display the virtual content according to the target display mode.
9. The method according to any one of claims 1 to 8, wherein the virtual content comprises an augmented reality (AR) object for presentation in AR and/or a virtual reality (VR) object for presentation in VR.
10. An apparatus for displaying virtual content, the apparatus comprising:
a motion state acquisition module configured to acquire a motion state of a first device in response to a motion operation for the first device;
a display mode determining module, configured to determine a target display mode of the virtual content based on the motion state, where the target display mode is matched with the motion state;
and a display module configured to display the virtual content according to the target display mode.
11. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method of any one of claims 1 to 9.
12. A computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1 to 9.
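By way of a hedged illustration only, the method of claims 1 to 4 can be sketched in Python as a mapping from a detected motion state to a matched target display mode. The class names, their fields, and the one-to-one "matching" rule below are assumptions made for illustration, not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class MotionState:
    # Simplified motion state per claims 3-4 (field names are hypothetical).
    acceleration: float   # first acceleration of the device's movement
    offset_angle: float   # first offset angle of the device's rotation
    swinging: bool        # whether the device is in a swing state

@dataclass
class DisplayMode:
    # Target display mode matched with the motion state (claim 2).
    content_acceleration: float  # second acceleration, matched to the first
    content_offset_angle: float  # second offset angle, matched to the first
    switch_content: bool         # display switching triggered by a swing

def determine_target_display_mode(state: MotionState) -> DisplayMode:
    """Claim 2: derive a dynamic display mode and/or a display
    switching mode that is matched with the motion state."""
    return DisplayMode(
        # One plausible "matching": the virtual content mirrors the
        # device's acceleration and offset angle one-to-one.
        content_acceleration=state.acceleration,
        content_offset_angle=state.offset_angle,
        switch_content=state.swinging,
    )
```

Under this reading, moving the device at 1.5 m/s² while rotated 30° yields a display mode that animates the virtual content with the same acceleration and offset angle, and a swing of the device toggles the displayed content.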
CN202110914654.5A 2021-08-10 2021-08-10 Virtual content display method and device, electronic equipment and storage medium Pending CN113608616A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110914654.5A CN113608616A (en) 2021-08-10 2021-08-10 Virtual content display method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113608616A (en) 2021-11-05

Family

ID=78340123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110914654.5A Pending CN113608616A (en) 2021-08-10 2021-08-10 Virtual content display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113608616A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023246307A1 (en) * 2022-06-23 2023-12-28 腾讯科技(深圳)有限公司 Information processing method and apparatus in virtual environment, and device and program product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019166005A1 (en) * 2018-03-01 2019-09-06 惠州Tcl移动通信有限公司 Smart terminal, sensing control method therefor, and apparatus having storage function
WO2019179314A1 (en) * 2018-03-22 2019-09-26 腾讯科技(深圳)有限公司 Method for displaying marker point position, electronic device, and computer readable storage medium
CN111710047A (en) * 2020-06-05 2020-09-25 北京有竹居网络技术有限公司 Information display method and device and electronic equipment
CN112764658A (en) * 2021-01-26 2021-05-07 北京小米移动软件有限公司 Content display method and device and storage medium
CN112835484A (en) * 2021-02-02 2021-05-25 北京地平线机器人技术研发有限公司 Dynamic display method and device based on operation body, storage medium and electronic equipment
CN113031781A (en) * 2021-04-16 2021-06-25 深圳市慧鲤科技有限公司 Augmented reality resource display method and device, electronic equipment and storage medium



Similar Documents

Publication Publication Date Title
CN107743604B (en) Touch screen hover detection in augmented and/or virtual reality environments
CN107209568B (en) Method, system, and storage medium for controlling projection in virtual reality space
CN110991327A (en) Interaction method and device, electronic equipment and storage medium
WO2015188614A1 (en) Method and device for operating computer and mobile phone in virtual world, and glasses using same
CN111701238A (en) Virtual picture volume display method, device, equipment and storage medium
CN110889382A (en) Virtual image rendering method and device, electronic equipment and storage medium
CN112991553B (en) Information display method and device, electronic equipment and storage medium
US20170269712A1 (en) Immersive virtual experience using a mobile communication device
US20180032152A1 (en) Mobile terminal and method for determining scrolling speed
CN109446912B (en) Face image processing method and device, electronic equipment and storage medium
WO2023051356A1 (en) Virtual object display method and apparatus, and electronic device and storage medium
CN111368114B (en) Information display method, device, equipment and storage medium
CN113806054A (en) Task processing method and device, electronic equipment and storage medium
CN113989469A (en) AR (augmented reality) scenery spot display method and device, electronic equipment and storage medium
WO2022134475A1 (en) Point cloud map construction method and apparatus, electronic device, storage medium and program
CN114067087A (en) AR display method and apparatus, electronic device and storage medium
WO2023273498A1 (en) Depth detection method and apparatus, electronic device, and storage medium
CN113608616A (en) Virtual content display method and device, electronic equipment and storage medium
CN112432636B (en) Positioning method and device, electronic equipment and storage medium
CN113611152A (en) Parking lot navigation method and device, electronic equipment and storage medium
CN114119829A (en) Material processing method and device of virtual scene, electronic equipment and storage medium
CN112837372A (en) Data generation method and device, electronic equipment and storage medium
CN114327197A (en) Message sending method, device, equipment and medium
WO2022237071A1 (en) Locating method and apparatus, and electronic device, storage medium and computer program
WO2022110777A1 (en) Positioning method and apparatus, electronic device, storage medium, computer program product, and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination