CN116841427A - Display method, display device, electronic apparatus, and readable storage medium - Google Patents
- Publication number
- CN116841427A CN116841427A CN202210288152.0A CN202210288152A CN116841427A CN 116841427 A CN116841427 A CN 116841427A CN 202210288152 A CN202210288152 A CN 202210288152A CN 116841427 A CN116841427 A CN 116841427A
- Authority
- CN
- China
- Prior art keywords
- virtual
- scene
- size
- scene model
- virtual object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
The application discloses a display method, a display apparatus, an electronic device, and a readable storage medium, and belongs to the technical field of three-dimensional modeling. The display method comprises the following steps: acquiring a scene model; acquiring a size proportion parameter, wherein the size proportion parameter comprises the ratio between the size of the scene corresponding to the scene model and the size of a first object; establishing a virtual object in the scene model according to the size proportion parameter; and displaying a virtual scene according to position information of the virtual object in the scene model.
Description
Technical Field
The application belongs to the technical field of three-dimensional modeling, and in particular relates to a display method, a display apparatus, an electronic device, and a readable storage medium.
Background
In the prior art, when a user views a property through virtual reality technology, the virtual scene inside the property can only be viewed from a preset viewpoint. As a result, different users viewing the same property model all see identical content, which weakens the user's sense of immersion when viewing the property.
Disclosure of Invention
Embodiments of the present application aim to provide a display method, a display apparatus, an electronic device, and a readable storage medium that allow users of different body sizes to browse a scene model from different viewpoints, thereby improving the user's sense of immersion when browsing the scene model.
In a first aspect, an embodiment of the present application provides a display method for a virtual reality device, the display method comprising: acquiring a scene model; acquiring a size proportion parameter, wherein the size proportion parameter comprises the ratio between the size of the scene corresponding to the scene model and the size of a first object; establishing a virtual object in the scene model according to the size proportion parameter; and displaying a virtual scene according to position information of the virtual object in the scene model.
In a second aspect, an embodiment of the present application provides a display apparatus, comprising: a first acquisition module for acquiring a scene model; a second acquisition module for acquiring a size proportion parameter, wherein the size proportion parameter comprises the ratio between the size of the scene corresponding to the scene model and the size of the first object; a building module for establishing a virtual object in the scene model according to the size proportion parameter; and a display module for displaying a virtual scene according to position information of the virtual object in the scene model.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the display method of the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the display method as in the first aspect.
In a fifth aspect, embodiments of the present application provide a chip comprising a processor and a communication interface coupled to the processor, the processor being configured to run a program or instructions to implement the steps of the display method of the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement a display method as in the first aspect.
In the embodiments of the present application, after the scene model and the size proportion parameter are acquired, a virtual object conforming to the real-world proportion is configured in the scene model according to the size proportion parameter, and the virtual object includes a preset viewpoint for viewing the scene model. From the position information of the virtual object in the scene model, the actual position of the preset viewpoint in the scene model can be determined; the scene in the scene model is rendered accordingly, and the rendered virtual scene is displayed through the virtual reality device. Because the virtual object and the preset viewpoint configured on it are set according to the size proportion parameter, the rendered virtual scene matches the real scene that the first object would actually observe.
In the embodiments of the present application, when a user browses a scene model, a virtual object is established in the scene model according to the real size proportion parameter between the user and the scene. The virtual scene is then displayed according to the position information of the virtual object in the scene model, so that users of different body sizes browse the scene model from different viewpoints, improving the user's sense of immersion when browsing the scene model.
Drawings
Fig. 1 shows a flow chart of a display method according to an embodiment of the present application;
fig. 2 is a block diagram showing a structure of a display device according to an embodiment of the present application;
fig. 3 shows a block diagram of an electronic device according to an embodiment of the present application;
fig. 4 shows a schematic hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present application; all other embodiments obtained by a person skilled in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be understood that terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of such objects is not limited; for example, the first object may be one object or a plurality of objects. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The display method, the display device, the electronic equipment and the readable storage medium provided by the embodiment of the application are described in detail below with reference to fig. 1 to 4 by means of specific embodiments and application scenes thereof.
An embodiment of the present application provides a display method for a virtual reality device. Fig. 1 shows a flow chart of the display method; as shown in Fig. 1, the display method includes:
Step 102, acquiring a scene model;
it is understood that the scene model includes, but is not limited to, a model of the inside of the property and a model of the campus where the property is located. The scene model is a model obtained by modeling in advance. The user can send a view request corresponding to the scene model through the virtual reality equipment, so that the virtual reality equipment can acquire the scene model from the server.
Step 104, acquiring a size proportion parameter, wherein the size proportion parameter comprises the ratio between the size of the scene corresponding to the scene model and the size of the first object;
Here, the size of the first object is the real size of the object for which a virtual object is to be configured in the scene model, and the size of the scene is the real size of the scene from which the scene model was captured. The size proportion parameter can be obtained from the real size of the scene and the real size of the first object.
It is worth noting that the size of the first object may be a body size parameter of the user viewing the scene model.
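As an illustrative sketch of this step (not part of the claimed method; the function and parameter names are hypothetical), the size proportion parameter can be computed from the real size of the scene and the real size of the first object:

```python
def size_proportion(scene_size_m: float, object_size_m: float) -> float:
    """Ratio between the real scene size and the real size of the
    first object (e.g. a user's height), both in metres."""
    if scene_size_m <= 0 or object_size_m <= 0:
        raise ValueError("sizes must be positive")
    return scene_size_m / object_size_m

# e.g. a 3.5 m scene dimension and a 1.75 m tall user give a ratio of 2.0
ratio = size_proportion(3.5, 1.75)
```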
Step 106, establishing a virtual object in the scene model according to the size proportion parameters;
Because the size proportion parameter is the real size ratio between the first object and the scene, establishing the virtual object in the scene model according to this parameter makes the proportion of the virtual object within the scene model match the real proportion. Establishing the virtual object involves not only modeling it in the scene model but also setting a preset viewpoint for it, so that the virtual object includes the preset viewpoint. The position of the preset viewpoint corresponds to the real viewpoint of the user viewing the scene model: the user's eyes are configured on the model of the virtual object according to the size proportion parameter, and the eye position on the virtual object serves as the preset viewpoint.
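A minimal sketch of this establishment step, under the assumption that the preset viewpoint sits at a fixed eye-height fraction of body height (the fraction, the names, and the single-dimension scaling are all illustrative assumptions, not details from the patent):

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    height: float      # height of the virtual object in scene-model units
    eye_height: float  # preset viewpoint height on the virtual object

def build_virtual_object(scene_model_size: float, proportion: float,
                         eye_fraction: float = 0.94) -> VirtualObject:
    """Scale the virtual object so that its proportion to the scene model
    matches the real size proportion, then place the preset viewpoint at
    eye level on the object."""
    height = scene_model_size / proportion
    return VirtualObject(height=height, eye_height=height * eye_fraction)
```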
Step 108, displaying the virtual scene according to the position information of the virtual object in the scene model.
Since the virtual object includes the preset viewpoint, once the position information of the virtual object in the scene model is determined, the position of the preset viewpoint relative to the scene model can be determined. The scene model is rendered according to this viewpoint position, and the virtual scene can then be displayed.
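Sketching this: the camera position used for rendering can be derived from the virtual object's position plus its preset viewpoint offset (a simplified assumption with an assumed y-up coordinate convention; the patent does not fix one):

```python
def camera_position(object_position, eye_height):
    """Position of the preset viewpoint in scene-model coordinates,
    assuming y is the vertical axis."""
    x, y, z = object_position
    return (x, y + eye_height, z)
```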
It is understood that the virtual reality device includes a head-mounted display device, through which a rendered virtual scene can be displayed.
In the embodiments of the present application, after the scene model and the size proportion parameter are acquired, a virtual object conforming to the real-world proportion is configured in the scene model according to the size proportion parameter, and the virtual object includes a preset viewpoint for viewing the scene model. From the position information of the virtual object in the scene model, the actual position of the preset viewpoint in the scene model can be determined; the scene in the scene model is rendered accordingly, and the rendered virtual scene is displayed through the virtual reality device. Because the virtual object and the preset viewpoint configured on it are set according to the size proportion parameter, the rendered virtual scene matches the real scene that the first object would actually observe.
Specifically, the scene model may be an interior model of a property, and the first object is the user viewing the interior model through the virtual reality device. A virtual object associated with the user is established in the interior model of the property according to the acquired size proportion parameter. The user can control the virtual object to move within the interior model through the virtual reality device and view the virtual scene at each position in the interior model from the virtual object's viewpoint.
In the related art, when viewing a property model through virtual reality technology, a user can only view the virtual scene inside the property from a preset viewpoint. Because different users have different body parameters, users with different body parameters all view the virtual scene of the property model from the same viewpoint, so the virtual scene differs from the real scene each user would see inside the property.
In the embodiments of the present application, when a user browses a scene model, a virtual object is established in the scene model according to the real size proportion parameter between the user and the scene. The virtual scene is then displayed according to the position information of the virtual object in the scene model, so that users of different body sizes browse the scene model from different viewpoints, improving the user's sense of immersion when browsing the scene model.
In some embodiments of the application, establishing a virtual object in the scene model according to the size proportion parameter includes: determining model parameters of the virtual object in the scene model according to the size proportion parameter; establishing the virtual object according to the model parameters; and placing the virtual object into the scene model.
In the embodiments of the present application, the size proportion parameter carries the real proportion between the first object and the scene. After the scene model is acquired, its size parameter can be determined, and the model parameters of the virtual object in the scene model can be determined from the size parameter of the scene model and the size proportion parameter. The virtual object is then modeled according to these model parameters, so that the proportion between the modeled virtual object and the scene model matches the real size proportion. After modeling is completed, the virtual object is configured into the scene model.
Specifically, where the scene model is an interior model of a property, the virtual reality device can obtain the size information of the interior model after acquiring it. Once the size proportion parameter is obtained, the virtual reality device calculates the model parameters of the virtual object from the size information of the interior model and the size proportion parameter, and models the virtual object according to those model parameters.
It is worth noting that a modeling unit is configured in the virtual reality device, and the model precision of the virtual object is set lower than that of the scene model, which reduces the computational load on the virtual reality device. During modeling, the virtual reality device can also set the preset viewpoint for the virtual object according to the size proportion parameter, and images are loaded from this preset viewpoint when the virtual scene is displayed.
According to the embodiments of the present application, the virtual reality device can accurately model the virtual object from the size proportion parameter and the size information of the scene model, ensuring that the proportion of the virtual object within the scene model is consistent with the real proportion of the first object within the scene.
In some embodiments of the present application, placing the virtual object into the scene model includes: receiving a first input; determining, in response to the first input, a preset position of the virtual object in the scene model; and configuring the virtual object at the preset position.
In the embodiments of the present application, after the virtual reality device completes modeling of the virtual object, the user can set the initial position (the preset position) of the virtual object in the scene model by operating the virtual reality device, and the modeled virtual object is then configured at that preset position in the scene model.
Specifically, after the virtual reality device completes modeling of the virtual object, it generates and displays a prompt indicating that modeling is complete. After viewing the prompt, the user can mark the preset position of the virtual object in the scene model by performing a first input on the virtual reality device. On receiving the first input, the virtual reality device configures the modeled virtual object in the scene model at the preset position corresponding to the first input.
In the embodiments of the present application, after modeling of the virtual object is completed, the user can set its initial position by operating the virtual reality device, and can therefore set the initial observation point of the scene model according to actual needs.
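The placement step might look like the following sketch, where the preset position comes from the user's first input (the class, its bounds check, and all names are illustrative assumptions):

```python
class SceneModel:
    """Toy scene model that tracks where virtual objects are configured."""
    def __init__(self, bounds):
        self.bounds = bounds  # ((xmin, xmax), (zmin, zmax)) floor-plan extent
        self.objects = {}

    def place(self, object_id, position):
        """Configure a virtual object at the preset position taken from the
        first input, rejecting positions outside the scene model."""
        (xmin, xmax), (zmin, zmax) = self.bounds
        x, z = position
        if not (xmin <= x <= xmax and zmin <= z <= zmax):
            raise ValueError("preset position outside the scene model")
        self.objects[object_id] = position
```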
In some embodiments of the present application, displaying the virtual scene according to the position information of the virtual object in the scene model includes: determining an observation viewpoint of the virtual object in the scene model according to the position information; receiving a second input; determining, in response to the second input, an observation view angle of the virtual object in the scene model; and displaying the virtual scene according to the observation viewpoint and the observation view angle.
In the embodiments of the present application, the virtual object carries the preset viewpoint once modeling is complete, and the observation viewpoint of the virtual object in the model can be determined from the position information of the virtual object in the scene model. The user can adjust the observation view angle at the current observation viewpoint by operating the virtual reality device, and the virtual reality device renders and displays the virtual scene from the observation viewpoint at the determined observation view angle.
Specifically, the virtual reality device may be a wearable device capable of collecting motion information of the user's head, the motion information being the second input. The virtual reality device determines the user's observation view angle at the observation viewpoint from the collected motion information, and displays the virtual scene at that observation viewpoint and observation view angle.
According to the embodiments of the present application, the virtual reality device can adjust the observation view angle at the current observation viewpoint in response to the user's second input, so that the virtual scene the user wants to view is displayed accurately.
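As a sketch, the second input (head-motion deltas) can update the observation view angle at the current observation viewpoint; clamping the pitch is an assumed detail, not something the patent specifies:

```python
def apply_second_input(yaw, pitch, d_yaw, d_pitch, pitch_limit=89.0):
    """Update the observation view angle (in degrees) from head-motion
    deltas: yaw wraps around, pitch is clamped to keep the view defined."""
    yaw = (yaw + d_yaw) % 360.0
    pitch = max(-pitch_limit, min(pitch_limit, pitch + d_pitch))
    return yaw, pitch
```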
In some embodiments of the present application, the display method further includes: receiving a third input; and updating, in response to the third input, the position of the virtual object in the scene model.
In the embodiments of the present application, the virtual reality device can adjust the position of the virtual object in the scene model in response to the user's third input. As the position of the virtual object in the scene model changes, the virtual scene displayed by the virtual reality device changes accordingly.
In one possible implementation, the virtual reality device is a wearable device in which a motion acquisition component is configured. The motion acquisition component collects the user's motion information, which serves as the third input, and the virtual reality device adjusts the position of the virtual object in the scene model according to this third input.
In another possible implementation, the virtual reality device includes an operating handle, through which the user can send the third input to control the movement of the virtual object in the scene model.
In the embodiments of the present application, the user controls the movement of the virtual object in the scene model through the virtual reality device, so that the position of the virtual object is updated and the displayed virtual scene can be adjusted according to the user's actual needs.
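A position update in response to the third input could be sketched as follows, with the new position clamped to the scene-model bounds (the clamping policy and names are assumptions for illustration):

```python
def apply_third_input(position, delta, bounds):
    """Update the virtual object's position in the scene model by the
    third input's movement delta, keeping each axis inside (lo, hi)."""
    return tuple(min(hi, max(lo, p + d))
                 for p, d, (lo, hi) in zip(position, delta, bounds))
```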
In some embodiments of the present application, the display method further includes: acquiring a preset movement track of the virtual object in the scene model; and updating the position of the virtual object in the scene model according to the preset movement track.
In the embodiments of the present application, each scene model is provided with a preset movement track, the preset movement tracks corresponding one-to-one to the scene models. After acquiring a scene model, the user can download the corresponding preset movement track from the server according to the identification information of the scene model. Once modeling of the virtual object is complete, the virtual reality device controls the virtual object to move through the scene model along the acquired preset movement track, and the virtual scene displayed by the virtual reality device changes as the virtual object moves.
According to the embodiments of the present application, the virtual reality device obtains the preset movement track of the virtual object and controls the virtual object to move through the scene model along it, so that the virtual scene can be viewed along the preset movement track without manual operation by the user.
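One way to realize the preset movement track is linear interpolation between waypoints downloaded with the scene model; the patent does not specify the track representation, so this is only an assumed sketch:

```python
def track_position(waypoints, t):
    """Position along the preset movement track for progress t in [0, 1],
    linearly interpolating between successive waypoints."""
    if t <= 0.0:
        return waypoints[0]
    if t >= 1.0:
        return waypoints[-1]
    seg = t * (len(waypoints) - 1)   # which segment t falls into
    i = int(seg)
    f = seg - i                      # fractional progress within the segment
    a, b = waypoints[i], waypoints[i + 1]
    return tuple(pa + f * (pb - pa) for pa, pb in zip(a, b))
```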
In some embodiments of the application, acquiring the size proportion parameter includes: acquiring a first size parameter of the first object; acquiring a second size parameter of the scene model, wherein the second size parameter includes the size of the scene; and determining the size proportion parameter from the first size parameter and the second size parameter.
In the embodiments of the present application, the first size parameter and the second size parameter are the real sizes of the first object and the scene, respectively. The size proportion parameter is obtained by calculating the ratio between the first size parameter and the second size parameter.
The first size parameter may be a real size parameter of the first object acquired by the virtual reality device, or a size parameter manually input to the virtual reality device by a user.
It should be noted that the second size parameter is a parameter stored in the server together with the scene model after modeling the scene is completed.
In some possible embodiments, the virtual reality device includes a size acquisition device, and the real size parameter of the first object can be obtained by scanning the first object through the size acquisition device.
In some other possible embodiments, the virtual reality device receives a size parameter input by a user, and takes the received size parameter as the first size parameter.
In the embodiments of the present application, the virtual reality device can determine the size proportion parameter between the first object and the scene from the real sizes of the scene and the first object, ensuring the accuracy of the obtained size proportion parameter and thereby improving the realism of the proportion between the virtual object and the scene model.
In some embodiments of the present application, the virtual reality device includes an acquisition apparatus, and acquiring the first size parameter of the first object includes: collecting the first size parameter through the acquisition apparatus; and/or receiving a fourth input and, in response to the fourth input, setting the first size parameter of the first object according to the fourth input.
In the embodiments of the present application, the virtual reality device includes an acquisition apparatus. When the first object is a user, the acquisition apparatus can collect parameters such as the user's height and width, so as to determine the first size parameter of the first object.
The user can also send a fourth input carrying the first size parameter of the first object to the virtual reality device; the virtual reality device receives the fourth input and determines the first size parameter of the first object from it.
Specifically, when the user wants to simulate their own movement through the scene, the virtual reality device can collect the user's first size parameter through the acquisition apparatus, determine the size proportion parameter from it, and model the virtual object accordingly, so that when browsing the virtual scene the user views it from a simulation of their own viewing angle. When the user wants to simulate another person's movement through the scene, a fourth input can be provided to the virtual reality device; the virtual reality device obtains the other user's first size parameter from the fourth input, determines the size proportion parameter, and models the virtual object accordingly, so that when browsing the virtual scene the user views it from a simulation of the other user's viewing angle.
In the embodiments of the present application, the virtual reality device can collect the first size parameter directly through the acquisition apparatus, or the user can set the first size parameter of the first object directly through the fourth input.
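The two acquisition paths (a scan by the acquisition apparatus, or the manual fourth input) can be combined as in this sketch; giving the fourth input precedence is an assumed policy, not stated in the patent:

```python
def first_size_parameter(scanned=None, fourth_input=None):
    """First size parameter of the first object, taken from the
    acquisition apparatus and/or the user's fourth input."""
    if fourth_input is not None:   # manual entry wins (assumed policy)
        return fourth_input
    if scanned is not None:
        return scanned
    raise ValueError("no size parameter available")
```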
In some embodiments of the present application, there are a plurality of virtual reality devices and a plurality of first objects, the first objects corresponding one-to-one to the virtual reality devices. The display method further includes: establishing a mapping relationship between a plurality of virtual objects and the plurality of virtual reality devices; and displaying different virtual scenes on the plurality of virtual reality devices according to the mapping relationship and the position information of the plurality of virtual objects in the scene model.
In the embodiments of the present application, the virtual reality devices correspond one-to-one to the first objects, so that different users can browse the same scene model at the same time, while the observation viewpoint and observation view angle of the virtual scene browsed on each user's virtual reality device are different.
Specifically, when there are a plurality of users (first objects), the users wear different virtual reality devices, the plurality of virtual reality devices access the server at the same time, and a plurality of virtual objects corresponding to the plurality of users are arranged in the scene model. Each user can then browse the virtual scene from the observation viewpoint and observation view angle of their own virtual object through their own virtual reality device.
In the embodiments of the present application, when a plurality of users need to view the scenes in the scene model at the same time, different virtual objects can be established for different users, so that the plurality of users browse the virtual scenes of the scene model from different observation viewpoints and observation view angles.
According to the display method provided by the embodiment of the application, the execution subject may be a display device. In the embodiment of the present application, a display device executing the display method is taken as an example to describe the display device provided in the embodiment of the present application.
In some embodiments of the present application, a display device is provided, fig. 2 shows a block diagram of a display device provided in an embodiment of the present application, and as shown in fig. 2, a display device 200 includes:
a first obtaining module 202, configured to obtain a scene model;
a second obtaining module 204, configured to obtain a size proportion parameter, where the size proportion parameter includes a ratio of the size of the scene corresponding to the scene model to the size of the first object;
a building module 206, configured to build a virtual object in the scene model according to the size proportion parameter;
a display module 208, configured to display the virtual scene according to the location information of the virtual object in the scene model.
In the embodiment of the application, after the scene model and the size proportion parameter are acquired, a virtual object conforming to the real proportion is configured in the scene model according to the size proportion parameter, and the virtual object includes a preset viewpoint for viewing the scene model. According to the position information of the virtual object in the scene model, the actual position of the preset viewpoint in the scene model can be determined, so that the scene in the scene model is rendered and the rendered virtual scene is displayed through the virtual reality device. Because the virtual object and its preset viewpoint are set according to the size proportion parameter, the rendered virtual scene matches the real scene that the first object would actually observe.
Specifically, the scene model is selected as an internal model of the property, and the first object is a user viewing the internal model of the property through the virtual reality device. And establishing a virtual object associated with the user in the internal model of the real estate according to the acquired size proportion parameters. The user can control the virtual object to move in the house interior model through the virtual reality device, and view the virtual scenes of all positions in the house interior model with the viewpoint of the virtual object.
In the related art, in the process of viewing the real estate model through the virtual reality technology, a user can only view a virtual scene inside the real estate according to a preset viewpoint. Because the physical parameters of different users are different, the users with different physical parameters all view the virtual scene of the house property model with the same viewpoint, so that the virtual scene is different from the real scene of the house interior viewed by the users.
In the embodiment of the application, when a user browses a scene model, a virtual object is established in the scene model according to the real size proportion parameter between the user and the scene. And displaying the virtual scene according to the position information of the virtual object in the scene model, so that users with different body types can browse the scene model at different viewpoints, and the immersion of the users browsing the scene model is improved.
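The overall flow above (acquire the scene model, acquire the size proportion parameter, build a proportionally correct virtual object, and derive its preset viewpoint) can be sketched as follows. All class names, fields, and the eye-level factor are illustrative assumptions, not part of the disclosed implementation.

```python
from dataclasses import dataclass


@dataclass
class SceneModel:
    # Real size of the scene corresponding to the model (metres); illustrative.
    width: float
    depth: float
    height: float


@dataclass
class VirtualObject:
    height: float     # scaled height of the first object in the model
    position: tuple   # (x, y) position information in the scene model
    eye_level: float  # preset viewpoint height derived from the user's size


def build_virtual_object(size_ratio: float, user_height: float,
                         position=(0.0, 0.0)) -> VirtualObject:
    """Build a virtual object whose proportion to the scene model matches
    the real proportion of the first object to the scene."""
    model_height = user_height * size_ratio
    # Assumption: the preset viewpoint sits at ~93% of body height (eye level).
    return VirtualObject(height=model_height, position=position,
                         eye_level=model_height * 0.93)


scene = SceneModel(width=8.0, depth=6.0, height=2.8)  # a house-interior model
vobj = build_virtual_object(size_ratio=1.0, user_height=1.75)
```

A taller user yields a higher preset viewpoint, which is what lets the rendered virtual scene match what that particular user would actually see in the real scene.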
In some embodiments of the present application, the display apparatus 200 includes:
the determining module is used for determining model parameters of the virtual object in the scene model according to the size proportion parameters;
the establishing module 206 is further configured to establish a virtual object according to the model parameters;
and the configuration module is used for placing the virtual object into the scene model.
The size proportion parameter includes the real proportion information of the first object and the scene model. After the scene model is acquired, the size parameter of the scene model can be determined, and the model parameters of the virtual object in the scene model can be determined according to the size parameter of the scene model and the size proportion parameter. The virtual object is modeled according to the model parameters, so that the proportion between the modeled virtual object and the scene model conforms to the real size proportion in the size proportion parameter. After modeling is completed, the virtual object is configured into the scene model.
Specifically, the scene model is a real estate internal model, and the virtual reality device can acquire the size information of the real estate internal model after acquiring the real estate internal model. After the dimension proportion parameters are obtained, the virtual reality device calculates model parameters of the virtual object according to the dimension information and the dimension proportion parameters of the internal model of the real estate, and therefore the virtual object is modeled according to the model parameters.
It is worth noting that a modeling unit is configured in the virtual reality device, and the model precision of the virtual object is set lower than that of the scene model, reducing the computational load of the virtual reality device. During modeling, the virtual reality device can also set a preset viewpoint for the virtual object according to the size proportion parameter, and load images according to this preset viewpoint when displaying the virtual scene.
According to the embodiment of the application, the virtual reality device can accurately model the virtual object according to the size proportion parameter and the size information of the scene model, and the size proportion of the virtual object in the scene model is ensured to be consistent with the real proportion of the first object in the scene.
In some embodiments of the present application, the display apparatus 200 includes:
a receiving module for receiving a first input;
the determining module is further used for responding to the first input and determining the preset position of the virtual object in the scene model;
the configuration module is also used for configuring the virtual object at a preset position.
In the embodiment of the application, after the virtual reality device completes modeling of the virtual object, the user can set the initial position (preset position) of the virtual object in the scene model by operating the virtual reality device. And configuring the virtual object with the modeling completion to a preset position in the scene model.
Specifically, after the virtual reality device completes modeling of the virtual object, prompt information indicating that modeling is complete is generated and displayed. After viewing the prompt information, the user can mark the preset position of the virtual object in the scene model by performing a first input on the virtual reality device. After receiving the first input, the virtual reality device configures the modeled virtual object in the scene model at the preset position corresponding to the first input.
In the embodiment of the application, after the modeling of the virtual object is completed, the user can set the initial position of the virtual object by operating the virtual reality equipment, so that the user can set the initial observation point of the scene model according to the actual requirement.
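As a sketch of this step (function and parameter names are assumptions; the patent does not specify how coordinates are handled), the first input can be mapped to a valid preset position by clamping the marked point into the scene model's footprint:

```python
def configure_preset_position(scene_size, first_input):
    """Place the modeled virtual object at the user-chosen initial
    (preset) position, clamped to the scene model's bounds."""
    w, d = scene_size   # footprint of the scene model
    x, y = first_input  # point marked by the user's first input
    return (min(max(x, 0.0), w), min(max(y, 0.0), d))


# A point marked outside the model is pulled back to the nearest edge.
pos = configure_preset_position((8.0, 6.0), (9.5, -1.0))
```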
In some embodiments of the present application, the determining module is further configured to determine an observation viewpoint of the virtual object in the scene model according to the location information;
the receiving module is also used for receiving a second input;
a determination module, further configured to determine an observation perspective of the virtual object in the scene model in response to the second input;
the display module 208 is further configured to display the virtual scene according to the observation point and the observation angle.
In the embodiment of the application, the virtual object is provided with the preset view point after the modeling is completed, and the observation view point of the virtual object in the model can be determined according to the position information of the virtual object in the scene model. The user can adjust the observation angle at the current observation point by operating the virtual reality device. The virtual reality device is capable of rendering and displaying a virtual scene from an observation viewpoint and an observation perspective of an observation viewpoint position.
Specifically, the virtual reality device may be a wearable device capable of collecting motion information of the user's head, the motion information being the second input. The virtual reality device determines the observation angle of the user at the observation viewpoint according to the collected motion information, and displays the virtual scene at that observation viewpoint and observation angle.
According to the embodiment of the application, the virtual reality device can adjust the observation angle at the current observation viewpoint in response to the user's second input, so that the virtual scene the user wants to view can be displayed accurately.
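One way to combine the observation viewpoint (from the position information and preset viewpoint height) with the observation angle (from the second input, e.g. head motion) into a camera for rendering is sketched below; the yaw/pitch convention and all names are assumptions for illustration:

```python
import math


def observation_ray(position, eye_level, yaw_deg, pitch_deg):
    """Return a camera origin (observation viewpoint) and view direction
    (observation angle) for rendering the virtual scene."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    origin = (position[0], position[1], eye_level)
    direction = (math.cos(pitch) * math.cos(yaw),
                 math.cos(pitch) * math.sin(yaw),
                 math.sin(pitch))
    return origin, direction


# Looking horizontally along +y from the virtual object's eye level.
origin, direction = observation_ray((2.0, 3.0), 1.63, yaw_deg=90.0, pitch_deg=0.0)
```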
In some embodiments of the application, the receiving module is further configured to receive a third input;
the display device 200 further includes:
an update module for updating the location of the virtual object in the scene model in response to the third input.
According to the embodiment of the application, the virtual reality device can adjust the position of the virtual object in the scene model in response to the user's third input. As the position of the virtual object in the scene model changes, the virtual scene displayed in the virtual reality device changes.
In a possible implementation manner, the virtual reality device includes a wearable device in which an action acquisition component is configured. Motion information of the user can be collected through the action acquisition component, the motion information being the third input, and the virtual reality device can adjust the position of the virtual object in the scene model according to the third input.
In another possible embodiment, the virtual reality device includes an operating handle through which the user can send a third input to control movement of the virtual object in the scene model.
In the embodiment of the application, the user controls the virtual object in the scene model to move through the virtual reality equipment, so that the position of the virtual object is updated, and the displayed virtual scene can be adjusted according to the actual requirement by the user.
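A minimal sketch of applying the third input as a movement delta while keeping the virtual object inside the scene model; the clamping behaviour and names are assumptions, not disclosed details:

```python
def update_position(position, third_input, scene_size):
    """Update the virtual object's position in the scene model in response
    to a third input (e.g. motion information or an operating handle)."""
    w, d = scene_size
    x = position[0] + third_input[0]
    y = position[1] + third_input[1]
    # Keep the virtual object within the scene model's bounds.
    return (min(max(x, 0.0), w), min(max(y, 0.0), d))


pos = update_position((2.0, 3.0), (0.5, -0.5), (8.0, 6.0))
```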
In some embodiments of the present application, the first obtaining module 202 is further configured to obtain a preset movement track of the virtual object in the scene model;
and the updating module is also used for updating the position of the virtual object in the scene model according to the preset moving track.
In the embodiment of the application, each scene model is provided with a preset movement track, the preset movement tracks corresponding one to one with the scene models; after acquiring a scene model, the user can download the corresponding preset movement track from the server according to the identification information of the scene model. After modeling of the virtual object is completed, the virtual reality device can control the virtual object to move in the scene model along the acquired preset movement track. As the virtual object moves, the virtual scene displayed by the virtual reality device changes accordingly.
According to the embodiment of the application, the preset moving track of the virtual object is obtained through the virtual reality equipment, and the virtual object is controlled to move in the scene model according to the preset moving track, so that the virtual scene can be checked according to the preset moving track without manual operation of a user.
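The preset movement track can be replayed by stepping between its waypoints and re-rendering the virtual scene at each position. Linear interpolation and the step size are assumptions for illustration; the patent does not specify the track format:

```python
def follow_track(track, step):
    """Yield successive positions of the virtual object along a preset
    movement track (a list of (x, y) waypoints in the scene model)."""
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        # Number of interpolation steps for this segment.
        n = max(1, int(max(abs(x1 - x0), abs(y1 - y0)) / step))
        for i in range(n):
            t = i / n
            yield (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)
    yield track[-1]


positions = list(follow_track([(0.0, 0.0), (2.0, 0.0)], step=1.0))
```

Each yielded position would drive one update of the displayed virtual scene, so the walkthrough runs without any manual operation by the user.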
In some embodiments of the present application, the second obtaining module 204 is further configured to obtain a first size parameter of the first object;
the first obtaining module 202 is further configured to obtain a second size parameter in the scene model, where the second size parameter includes a size of the scene;
the determining module is further used for determining a size proportion parameter according to the first size parameter and the second size parameter.
In the embodiment of the application, the size proportion parameter is determined according to the first size parameter of the first object and the second size parameter of the scene corresponding to the scene model, where the first size parameter and the second size parameter are the real sizes of the first object and the scene, respectively. The ratio is calculated from the first size parameter and the second size parameter to obtain the size proportion parameter.
The first size parameter may be a real size parameter of the first object acquired by the virtual reality device, or a size parameter manually input to the virtual reality device by a user.
It should be noted that the second size parameter is a parameter stored in the server together with the scene model after modeling the scene is completed.
In some possible embodiments, the virtual reality device includes a size acquisition device, and the real size parameter of the first object can be obtained by scanning the first object through the size acquisition device.
In some other possible embodiments, the virtual reality device receives a size parameter input by a user, and takes the received size parameter as the first size parameter.
In the embodiment of the application, the virtual reality equipment can determine the size proportion parameter of the first object and the scene according to the real sizes of the scene and the first object, thereby ensuring the accuracy of the obtained size proportion parameter and further improving the reality of the proportion between the virtual object and the scene model.
In some embodiments of the present application, the display apparatus 200 further includes:
the acquisition module is used for acquiring the first size parameter through the acquisition device;
the receiving module is also used for receiving a fourth input;
the setting module is used for responding to the fourth input and setting the first size parameter of the first object according to the fourth input.
In the embodiment of the application, the virtual reality device includes the acquisition device; in the case where the first object is a user, the acquisition device can acquire parameters such as the height and width of the first object, so as to determine the first size parameter of the first object.
The user is also able to send a fourth input with the first size parameter of the first object to the virtual reality device. A fourth input is received at the virtual reality device from the user, and a first size parameter of the first object is determined from the fourth input.
Specifically, when the user needs to simulate his or her own movement in the scene, the virtual reality device may collect the first size parameter of the user through the collecting device, determine the size proportion parameter according to the first size parameter, and model the virtual object according to the obtained size proportion parameter; when the user then browses the virtual scene, the virtual scene is viewed from the user's own simulated viewing angle. When the user needs to simulate another person's movement in the scene, a fourth input can be made to the virtual reality device; the virtual reality device obtains the first size parameter of the other user according to the fourth input, determines the size proportion parameter according to the first size parameter, and models the virtual object according to the obtained size proportion parameter, so that when the user browses the virtual scene, the virtual scene is viewed from the other user's simulated viewing angle.
In the embodiment of the application, the virtual reality device can acquire the first size parameter directly through the acquisition device, or the user can set the first size parameter of the first object directly through the fourth input.
In some embodiments of the present application, the virtual reality devices and the first objects are both plural, and the first objects are in one-to-one correspondence with the virtual reality devices.
The display device 200 further includes:
the setting module is used for establishing a mapping relationship between the plurality of virtual objects and the plurality of virtual reality devices;
the display module 208 is further configured to display different virtual scenes in the plurality of virtual reality devices according to the mapping relationship and the location information of the plurality of virtual objects in the scene model.
According to the embodiment of the application, the virtual reality devices correspond to the first objects one to one, so that different users can browse the same scene model at the same time, while the observation viewpoint and observation angle of the virtual scene browsed on each user's virtual reality device differ.
Specifically, when there are plural users (first objects), the plural users wear different virtual reality devices, the plural virtual reality devices can access the server at the same time, and the plural virtual objects corresponding to the plural users are arranged in the scene model. Each user can then browse the virtual scene from the observation viewpoint and observation angle of his or her own virtual object through his or her own virtual reality device.
In the embodiment of the application, under the condition that a plurality of users need to watch the scenes in the scene model at the same time, different virtual objects can be established for different users, so that the plurality of users can browse the virtual scenes of the scene model at different observation viewpoints and observation angles.
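The one-to-one mapping between virtual reality devices and the virtual objects of their users can be sketched as a dictionary keyed by device identifier; the identifiers and validation logic here are made up for illustration:

```python
def build_mapping(device_ids, user_objects):
    """Establish a one-to-one mapping from virtual reality devices to the
    virtual objects created for their users."""
    if len(device_ids) != len(set(device_ids)):
        raise ValueError("device identifiers must be unique")
    if len(device_ids) != len(user_objects):
        raise ValueError("devices and first objects must correspond one-to-one")
    return dict(zip(device_ids, user_objects))


mapping = build_mapping(["hmd-1", "hmd-2"], ["vobj-alice", "vobj-bob"])
# Each device renders the scene from its own virtual object's viewpoint.
```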
The display device in the embodiment of the application can be an electronic device or a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet device (MID), augmented reality (AR)/virtual reality (VR) device, robot, wearable device, ultra-mobile personal computer (UMPC), netbook or personal digital assistant (PDA), etc., but may also be a server, network attached storage (NAS), personal computer (PC), television (TV), teller machine or self-service machine, etc.; the embodiments of the present application are not specifically limited.
The display device in the embodiment of the application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The display device provided by the embodiment of the application can realize each process realized by the embodiment of the method, and in order to avoid repetition, the description is omitted here.
Optionally, an embodiment of the present application further provides an electronic device, fig. 3 shows a block diagram of an electronic device according to an embodiment of the present application, as shown in fig. 3, an electronic device 300 includes a processor 302 and a memory 304, where the memory 304 stores a program or an instruction that can be executed on the processor 302, and the program or the instruction when executed by the processor 302 implements each step of the foregoing method embodiment, and the same technical effects can be achieved, so that repetition is avoided and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 4 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, and processor 410.
Those skilled in the art will appreciate that the electronic device 400 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 410 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
Wherein, the network module 402 is configured to obtain a scene model;
an input unit 404, configured to obtain a size proportion parameter, where the size proportion parameter includes a proportion of a size of a scene corresponding to the scene model and a size of the first object;
a processor 410 for creating a virtual object in the scene model according to the size scale parameters;
and a display unit 406 for displaying the virtual scene according to the position information of the virtual object in the scene model.
In the embodiment of the application, when a user browses a scene model, a virtual object is established in the scene model according to the real size proportion parameter between the user and the scene. And displaying the virtual scene according to the position information of the virtual object in the scene model, so that users with different body types can browse the scene model at different viewpoints, and the immersion of the users browsing the scene model is improved.
Further, the processor 410 is configured to determine model parameters of the virtual object in the scene model according to the size scale parameters;
a processor 410 for creating a virtual object according to the model parameters;
a processor 410 for placing virtual objects into a scene model.
According to the embodiment of the application, the virtual reality device can accurately model the virtual object according to the size proportion parameter and the size information of the scene model, and the size proportion of the virtual object in the scene model is ensured to be consistent with the real proportion of the first object in the scene.
Further, a user input unit 407 for receiving a first input;
a processor 410 for determining a preset position of the virtual object in the scene model in response to the first input;
the processor 410 is configured to configure the virtual object at a preset position.
In the embodiment of the application, after the modeling of the virtual object is completed, the user can set the initial position of the virtual object by operating the virtual reality equipment, so that the user can set the initial observation point of the scene model according to the actual requirement.
Further, the processor 410 is configured to determine an observation point of the virtual object in the scene model according to the location information;
A user input unit 407 for receiving a second input;
a processor 410 for determining an observation perspective of the virtual object in the scene model in response to the second input;
and a display unit 406 for displaying the virtual scene according to the observation point and the observation angle.
According to the embodiment of the application, the virtual reality device can adjust the observation angle at the current observation viewpoint in response to the user's second input, so that the virtual scene the user wants to view can be displayed accurately.
Further, a user input unit 407 for receiving a third input;
the processor 410 is configured to update the location of the virtual object in the scene model in response to the third input.
In the embodiment of the application, the user controls the virtual object in the scene model to move through the virtual reality equipment, so that the position of the virtual object is updated, and the displayed virtual scene can be adjusted according to the actual requirement by the user.
Further, the network module 402 is configured to obtain a preset movement track of the virtual object in the scene model;
the processor 410 is configured to update the position of the virtual object in the scene model according to the preset movement track.
According to the embodiment of the application, the preset moving track of the virtual object is obtained through the virtual reality equipment, and the virtual object is controlled to move in the scene model according to the preset moving track, so that the virtual scene can be checked according to the preset moving track without manual operation of a user.
Further, an input unit 404, configured to obtain a first size parameter of the first object;
a network module 402, configured to obtain a second size parameter in the scene model, where the second size parameter includes a size of the scene;
a processor 410 for determining a size ratio parameter based on the first size parameter and the second size parameter.
In the embodiment of the application, the virtual reality equipment can determine the size proportion parameter of the first object and the scene according to the real sizes of the scene and the first object, thereby ensuring the accuracy of the obtained size proportion parameter and further improving the reality of the proportion between the virtual object and the scene model.
Further, a sensor 405, configured to acquire the first size parameter through the acquisition device;
a user input unit 407 for receiving a fourth input;
the processor 410 is configured to set a first size parameter of the first object according to the fourth input in response to the fourth input.
In the embodiment of the application, the virtual reality equipment can directly acquire the first size parameter through the acquisition device. The user can also set the first size parameter of the first object directly through the fourth input.
Further, the virtual reality devices and the first objects are both plural, and the first objects are in one-to-one correspondence with the virtual reality devices.
A processor 410, configured to establish a mapping relationship between the plurality of virtual objects and the plurality of virtual reality devices;
a display unit 406, configured to display different virtual scenes in the plurality of virtual reality devices according to the mapping relationship and the position information of the plurality of virtual objects in the scene model.
In the embodiment of the application, under the condition that a plurality of users need to watch the scenes in the scene model at the same time, different virtual objects can be established for different users, so that the plurality of users can browse the virtual scenes of the scene model at different observation viewpoints and observation angles.
It should be appreciated that in embodiments of the present application, the input unit 404 may include a graphics processor (Graphics Processing Unit, GPU) 4041 and a microphone 4042, the graphics processor 4041 processing image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 407 includes at least one of a touch panel 4071 and other input devices 4072. The touch panel 4071 is also referred to as a touch screen. The touch panel 4071 may include two parts, a touch detection device and a touch controller. Other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
Memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system and the application programs or instructions (such as a sound playing function, an image playing function, etc.) required for at least one function. Further, the memory 409 may include volatile memory or non-volatile memory, or the memory 409 may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically Erasable PROM, EEPROM), or a flash memory. The volatile memory may be random access memory (Random Access Memory, RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synclink DRAM (SLDRAM), or direct rambus RAM (DRRAM). The memory 409 in embodiments of the application includes, but is not limited to, these and any other suitable types of memory.
Processor 410 may include one or more processing units; optionally, the processor 410 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The embodiment of the application also provides a readable storage medium, and the readable storage medium stores a program or an instruction, which when executed by a processor, implements each process of the above-mentioned display method embodiment, and can achieve the same technical effects, so that repetition is avoided, and no further description is provided here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes computer readable storage medium such as computer readable memory ROM, random access memory RAM, magnetic or optical disk, etc.
The embodiment of the application further provides a chip, the chip comprises a processor and a communication interface, the communication interface is coupled with the processor, the processor is used for running programs or instructions, the processes of the embodiment of the display method can be realized, the same technical effects can be achieved, and the repetition is avoided, and the description is omitted here.
It should be understood that the chip referred to in the embodiments of the application may also be called a system-on-chip, a chip system, a system-on-a-chip, or the like.
An embodiment of the application provides a computer program product stored in a storage medium. The program product is executed by at least one processor to implement the processes of the display method embodiments described above and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," and any variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus comprising a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, the scope of the methods and apparatus in the embodiments of the application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, functions may also be performed substantially simultaneously or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is preferred. Based on such understanding, the technical solution of the application, or the part of it that contributes to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, a magnetic disk, or an optical disc) and comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods of the embodiments of the application.
The embodiments of the application have been described above with reference to the accompanying drawings, but the application is not limited to the above embodiments, which are merely illustrative and not restrictive. Those of ordinary skill in the art, inspired by the application, may devise many other forms without departing from the spirit of the application and the scope of the claims, all of which fall within the protection of the application.
Claims (12)
1. A display method for a virtual reality device, the display method comprising:
acquiring a scene model;
acquiring a size proportion parameter, wherein the size proportion parameter comprises a ratio between the size of a scene corresponding to the scene model and the size of a first object;
establishing a virtual object in the scene model according to the size proportion parameter;
and displaying the virtual scene according to the position information of the virtual object in the scene model.
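The four steps of claim 1 can be illustrated with a minimal sketch. All names here (`VirtualObject`, `size_proportion`, `build_virtual_object`) and the scalar treatment of sizes are illustrative assumptions, not identifiers or formulas from the patent.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    height: float                      # object size in scene-model units
    position: tuple = (0.0, 0.0, 0.0)  # location inside the scene model

def size_proportion(scene_size: float, first_object_size: float) -> float:
    """Size proportion parameter: ratio between the size of the scene
    corresponding to the scene model and the size of the first object
    (e.g. the user wearing the VR device)."""
    return scene_size / first_object_size

def build_virtual_object(model_scene_size: float, proportion: float) -> VirtualObject:
    """Establish a virtual object in the scene model whose size keeps the
    same object-to-scene ratio as in the real scene, so the displayed
    virtual scene stays proportionate to the user."""
    return VirtualObject(height=model_scene_size / proportion)
```

For example, a 30 m scene and a 1.5 m user give a proportion of 20; in a scene model spanning 60 units, the virtual object would then be 3 units tall.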
2. The display method according to claim 1, wherein the establishing a virtual object in the scene model according to the size proportion parameter comprises:
determining model parameters of the virtual object in the scene model according to the size proportion parameters;
establishing the virtual object according to the model parameters;
and placing the virtual object into the scene model.
3. The display method according to claim 2, wherein said placing the virtual object into the scene model comprises:
receiving a first input;
determining a preset position of the virtual object in the scene model in response to the first input;
and configuring the virtual object at the preset position.
4. The display method according to claim 1, wherein the displaying the virtual scene according to the position information of the virtual object in the scene model comprises:
determining an observation viewpoint of the virtual object in the scene model according to the position information;
receiving a second input;
determining an observation perspective of the virtual object in the scene model in response to the second input;
and displaying the virtual scene according to the observation viewpoint and the observation view angle.
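Claim 4 combines a position-derived observation viewpoint with an input-derived observation view angle. A sketch of that combination follows; the eye-height offset, the yaw/pitch representation of the view angle, and the returned dictionary layout are assumptions for illustration only.

```python
import math

def observation_viewpoint(position, eye_height=1.6):
    """Derive the observation viewpoint from the virtual object's
    position in the scene model, raised to an assumed eye level."""
    x, y, z = position
    return (x, y + eye_height, z)

def forward_vector(yaw_deg, pitch_deg):
    """Unit view direction for an observation view angle given as
    yaw/pitch in degrees (e.g. taken from the second input)."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

def display_parameters(position, yaw_deg, pitch_deg):
    """Parameters a renderer would need to display the virtual scene
    according to the observation viewpoint and view angle."""
    return {"viewpoint": observation_viewpoint(position),
            "direction": forward_vector(yaw_deg, pitch_deg)}
```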
5. The display method according to any one of claims 1 to 4, characterized by further comprising:
receiving a third input;
and in response to the third input, updating the position of the virtual object in the scene model.
6. The display method according to any one of claims 1 to 4, characterized by further comprising:
acquiring a preset moving track of the virtual object in the scene model;
and updating the position of the virtual object in the scene model according to the preset moving track.
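Claim 6's preset moving track can be sketched as interpolation along a list of waypoints. The waypoint-list representation and the normalized progress parameter `t` are assumptions for illustration; the patent does not specify how the track is encoded.

```python
def position_on_track(track, t):
    """Return the position at progress t in [0, 1] along a preset
    moving track, given as a list of (x, y, z) waypoints, using
    piecewise linear interpolation between consecutive waypoints."""
    if len(track) == 1 or t <= 0:
        return track[0]
    if t >= 1:
        return track[-1]
    seg = t * (len(track) - 1)   # fractional segment index
    i = int(seg)
    f = seg - i                  # fraction within segment i
    a, b = track[i], track[i + 1]
    return tuple(av + (bv - av) * f for av, bv in zip(a, b))
```

Updating the virtual object's position then amounts to calling this with an increasing `t` each frame.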
7. The display method according to any one of claims 1 to 4, wherein the acquiring the size proportion parameter comprises:
acquiring a first size parameter of the first object;
acquiring a second size parameter in the scene model, wherein the second size parameter comprises the size of the scene corresponding to the scene model;
and determining the size proportion parameter according to the first size parameter and the second size parameter.
8. The display method according to claim 7, wherein the virtual reality device comprises an acquisition device, and the acquiring the first size parameter of the first object comprises:
acquiring the first size parameter via the acquisition device; and/or
receiving a fourth input; and
in response to the fourth input, setting the first size parameter of the first object according to the fourth input.
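Claim 8's "and/or" means the first size parameter (e.g. the user's height) may come from the device's acquisition hardware, from a user input, or both. A small sketch follows; the precedence rule, where a user input overrides the measured value, is an assumption not stated in the claim.

```python
def first_size_parameter(measured=None, user_input=None):
    """Resolve the first size parameter from an acquisition-device
    measurement and/or a user-provided (fourth-input) value.
    Assumption: the user input takes precedence when both exist."""
    if user_input is not None:
        return float(user_input)
    if measured is not None:
        return float(measured)
    raise ValueError("no first size parameter available")
```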
9. The display method according to any one of claims 1 to 4, wherein,
there are a plurality of virtual reality devices and a plurality of first objects, and the first objects are in one-to-one correspondence with the virtual reality devices;
the display method further comprises:
establishing mapping relations between a plurality of virtual objects and the plurality of virtual reality devices;
and displaying different virtual scenes in the plurality of virtual reality devices according to the mapping relation and the position information of the plurality of virtual objects in the scene model.
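The multi-device case of claim 9 can be sketched as a one-to-one mapping from devices to virtual objects, with each device's view driven by its own object's position in the shared scene model. The device-ID and object representations below are illustrative assumptions.

```python
def build_mapping(device_ids, virtual_objects):
    """Establish a one-to-one mapping between virtual reality devices
    and virtual objects in the shared scene model."""
    if len(device_ids) != len(virtual_objects):
        raise ValueError("one-to-one correspondence requires equal counts")
    return dict(zip(device_ids, virtual_objects))

def scenes_for_devices(mapping):
    """For each device, derive the per-device display parameters from
    the position of the virtual object mapped to it, so different
    devices display different virtual scenes of the same model."""
    return {dev: {"viewpoint": obj["position"]} for dev, obj in mapping.items()}
```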
10. A display device, comprising:
the first acquisition module is used for acquiring a scene model;
the second acquisition module is used for acquiring a size proportion parameter, wherein the size proportion parameter comprises a ratio between the size of a scene corresponding to the scene model and the size of a first object;
the establishing module is used for establishing a virtual object in the scene model according to the size proportion parameter;
and the display module is used for displaying the virtual scene according to the position information of the virtual object in the scene model.
11. An electronic device, comprising:
a memory having stored thereon programs or instructions;
a processor for implementing the steps of the display method according to any one of claims 1 to 9 when executing the program or instructions.
12. A readable storage medium having stored thereon a program or instructions, which when executed by a processor, implement the steps of the display method according to any of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210288152.0A CN116841427A (en) | 2022-03-23 | 2022-03-23 | Display method, display device, electronic apparatus, and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116841427A true CN116841427A (en) | 2023-10-03 |
Family
ID=88160345
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210288152.0A Pending CN116841427A (en) | 2022-03-23 | 2022-03-23 | Display method, display device, electronic apparatus, and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116841427A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111145352A (en) | House live-action picture display method and device, terminal equipment and storage medium | |
US20170153787A1 (en) | Injection of 3-d virtual objects of museum artifact in ar space and interaction with the same | |
CN111739169B (en) | Product display method, system, medium and electronic equipment based on augmented reality | |
US11880999B2 (en) | Personalized scene image processing method, apparatus and storage medium | |
CN114387400A (en) | Three-dimensional scene display method, display device, electronic equipment and server | |
CN112927259A (en) | Multi-camera-based bare hand tracking display method, device and system | |
CN109859100A (en) | Display methods, electronic equipment and the computer readable storage medium of virtual background | |
CN111179438A (en) | AR model dynamic fixing method and device, electronic equipment and storage medium | |
CN114638939A (en) | Model generation method, model generation device, electronic device, and readable storage medium | |
CN111901518B (en) | Display method and device and electronic equipment | |
KR102063408B1 (en) | Method and apparatus for interaction with virtual objects | |
JP2874865B1 (en) | Visitor guide support apparatus and method | |
CN116841427A (en) | Display method, display device, electronic apparatus, and readable storage medium | |
CN115861579A (en) | Display method and device thereof | |
CN112328155B (en) | Input device control method and device and electronic device | |
CN114327174A (en) | Virtual reality scene display method and cursor three-dimensional display method and device | |
CN112565597A (en) | Display method and device | |
CN112529770A (en) | Image processing method, image processing device, electronic equipment and readable storage medium | |
CN114785958B (en) | Angle measuring method and device | |
CN110060355B (en) | Interface display method, device, equipment and storage medium | |
CN114332433A (en) | Information output method and device, readable storage medium and electronic equipment | |
CN117406861A (en) | Display method, display device and electronic equipment | |
KR20190043618A (en) | Image correction method and system using correction pattern analysis | |
CN116860104A (en) | Sand table model display method and device, electronic equipment and readable storage medium | |
CN116863104A (en) | House property display method and device, electronic equipment and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||