CN113112613A - Model display method and device, electronic equipment and storage medium - Google Patents

Model display method and device, electronic equipment and storage medium

Info

Publication number
CN113112613A
Authority
CN
China
Prior art keywords
model
scene
viewpoint
virtual
data
Prior art date
Legal status
Granted
Application number
CN202110438621.8A
Other languages
Chinese (zh)
Other versions
CN113112613B (en)
Inventor
白杰
李阳
Current Assignee
Seashell Housing Beijing Technology Co Ltd
Original Assignee
Beijing Fangjianghu Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Fangjianghu Technology Co Ltd
Priority to CN202110438621.8A
Publication of CN113112613A
Application granted
Publication of CN113112613B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present disclosure disclose a model display method and apparatus, an electronic device, and a storage medium. The model display method includes the following steps: displaying a first model scene in a first model, where the viewpoint of the first model scene is the user's current virtual position in the first model; determining a target virtual viewpoint matching the current virtual position from a set of virtual viewpoints included in the model data of a second model, where the first model and the second model are generated based on the same spatial structure; and in response to detecting a second model display operation for the second model, displaying a second model scene in the second model, where the viewpoint of the second model scene is the target virtual viewpoint and the viewing angle of the second model scene is the user's current virtual viewing angle. The embodiments of the present disclosure can improve the continuity of model display and, to a certain extent, improve the user's experience of browsing the model.

Description

Model display method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of virtual reality, and in particular, to a model display method and apparatus, an electronic device, and a storage medium.
Background
The object displayed by the model can be a real-world entity or a fictional object.
In the prior art, real-world entities or fictional objects are often displayed to a user by building a model.
For example, during VR (Virtual Reality) house viewing, a three-dimensional model of a real house, or a three-dimensional model of a virtual house, may be displayed to the user. At present, after a user's request to display a three-dimensional model is received, the model data is loaded by way of a link jump, rendered, and then displayed to the user. However, the speed at which three-dimensional models are displayed in this way is generally slow. Moreover, when a user browsing a certain room in the three-dimensional model of the real house jumps to the three-dimensional model of the virtual house through the link, the user lands at a preset room and a preset viewing angle; the user's viewing viewpoint and viewing angle are switched abruptly, which impairs the continuity of browsing and can, to a certain extent, produce a poor use experience.
Disclosure of Invention
The embodiments of the present disclosure provide a model display method and apparatus, an electronic device, and a storage medium, which, by displaying a second model scene at a target virtual viewpoint matching the user's current virtual position in a first model and at the user's current virtual viewing angle, can improve the continuity of model display and, to a certain extent, improve the user's experience of browsing the model.
According to a first aspect of the embodiments of the present disclosure, there is provided a model display method, including:
displaying a first model scene in a first model, wherein a viewpoint of the first model scene is a current virtual position of a user in the first model;
determining a target virtual viewpoint matching the current virtual position from a set of virtual viewpoints included in model data of a second model, wherein the first model and the second model are generated based on the same spatial structure;
in response to detecting a second model display operation for the second model, displaying a second model scene in the second model, wherein the viewpoint of the second model scene is the target virtual viewpoint and the perspective of the second model scene is the current virtual perspective of the user.
Optionally, in the method according to any embodiment of the present disclosure, the determining, from the set of virtual viewpoints included in the model data of the second model, a target virtual viewpoint matching the current virtual position includes:
determining the distance between each shooting viewpoint in a preset shooting viewpoint set and the real position corresponding to the current virtual position, to obtain a distance set, wherein the shooting viewpoints in the preset shooting viewpoint set correspond one-to-one to the virtual viewpoints in the virtual viewpoint set;
determining a preset number of distances from the distance set in ascending order; and
determining the virtual viewpoint corresponding to each distance among the preset number of distances as a target virtual viewpoint matching the current virtual position.
Optionally, in the method according to any embodiment of the present disclosure, the determining, from the set of virtual viewpoints included in the model data of the second model, a target virtual viewpoint matching the current virtual position includes:
and determining a virtual viewpoint corresponding to a shooting viewpoint, in the virtual viewpoint set included in the model data of the second model, of which the distance between the real positions corresponding to the current virtual position is smaller than or equal to a preset distance threshold value, as the target virtual viewpoint matched with the current virtual position.
Optionally, in the method of any embodiment of the present disclosure, the displaying a second model scene in the second model includes:
displaying the second model scene in the second model on an execution page of the second model display operation.
Optionally, in the method according to any embodiment of the present disclosure, before the user performs the second model display operation, a model scene of the first model is displayed on an execution page of the second model display operation; and
the displaying a second model scene in the second model on the execution page of the second model display operation comprises:
updating the model scene of the first model to the second model scene; or
displaying, on the execution page of the second model display operation, both the model scene of the first model and the second model scene.
Optionally, in the method of any embodiment of the present disclosure, before the displaying a second model scene in the second model in response to detecting a second model display operation for the second model, the method further includes:
in response to the model data of the second model including not-yet-loaded model data at the target virtual viewpoint, taking the not-yet-loaded model data as target model data and loading the target model data; and
the displaying a second model scene in the second model in response to detecting a second model display operation for the second model, comprising:
in response to detecting a second model display operation for the second model, determining model scene data for the second model scene from the target model data;
displaying the second model scene by rendering the model scene data.
Optionally, in the method of any embodiment of the present disclosure, the method further includes:
acquiring current virtual pose information of the user in real time;
and determining model data matched with the latest acquired current virtual pose information from the model data loaded to the local, and displaying the model scene in the second model by rendering the model data matched with the latest acquired current virtual pose information.
Optionally, in the method according to any embodiment of the present disclosure, the first model scene is a real scene of a house, the second model scene is a simulated scene obtained by virtually applying a decoration effect to the house, and both the first model and the second model are three-dimensional models.
According to a second aspect of the embodiments of the present disclosure, there is provided a model display apparatus including:
a first display unit configured to display a first model scene in a first model, wherein a viewpoint of the first model scene is a current virtual position of a user in the first model;
a first determination unit configured to determine a target virtual viewpoint matching the current virtual position from a set of virtual viewpoints included in model data of a second model, wherein the first model and the second model are generated based on the same spatial structure;
a second display unit configured to display a second model scene in the second model in response to detecting a second model display operation for the second model, wherein a viewpoint of the second model scene is the target virtual viewpoint and a viewing angle of the second model scene is a current virtual viewing angle of the user.
Optionally, in the apparatus according to any embodiment of the present disclosure, the first determining unit includes:
a first determining subunit, configured to determine the distance between each shooting viewpoint in a preset shooting viewpoint set and the real position corresponding to the current virtual position, to obtain a distance set, where the shooting viewpoints in the preset shooting viewpoint set correspond one-to-one to the virtual viewpoints in the virtual viewpoint set;
a second determining subunit configured to determine a preset number of distances from the distance set in ascending order;
a third determining subunit configured to determine the virtual viewpoint corresponding to each distance among the preset number of distances as a target virtual viewpoint matching the current virtual position.
Optionally, in the apparatus according to any embodiment of the present disclosure, the first determining unit includes:
a fourth determining subunit configured to determine, from the virtual viewpoint set included in the model data of the second model, a virtual viewpoint whose corresponding shooting viewpoint is at a distance less than or equal to a preset distance threshold from the real position corresponding to the current virtual position, as the target virtual viewpoint matching the current virtual position.
Optionally, in the apparatus of any embodiment of the present disclosure, the second display unit includes:
a first display subunit configured to display the second model scene in the second model on an execution page of the second model display operation.
Optionally, in the apparatus according to any embodiment of the present disclosure, before the user performs the second model display operation, a model scene of the first model is displayed on an execution page of the second model display operation; and
the first display subunit includes:
an update module configured to update a model scene of the first model to the second model scene; or
a display module configured to display, on the execution page of the second model display operation, both the model scene of the first model and the second model scene.
Optionally, in the apparatus of any embodiment of the present disclosure, the apparatus further includes:
a loading unit configured to, in response to the model data of the second model including not-yet-loaded model data at the target virtual viewpoint, take the not-yet-loaded model data as target model data and load the target model data; and
the second display unit includes:
a fifth determining subunit configured to determine, in response to detecting a second model display operation for the second model, model scene data of the second model scene from the target model data;
a second display subunit configured to display the second model scene by rendering the model scene data.
Optionally, in the apparatus of any embodiment of the present disclosure, the apparatus further includes:
an acquisition unit configured to acquire current virtual pose information of the user in real time;
a second determination unit configured to determine, from the locally loaded model data, model data matching the most recently acquired current virtual pose information, and to display the model scene in the second model by rendering that model data.
Optionally, in the apparatus according to any embodiment of the present disclosure, the first model scene is a real scene of a house, the second model scene is a simulated scene obtained by virtually applying a decoration effect to the house, and both the first model and the second model are three-dimensional models.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a memory for storing a computer program;
a processor for executing the computer program stored in the memory, wherein, when the computer program is executed, the method of any embodiment of the model display method of the first aspect of the present disclosure is implemented.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable medium storing a computer program which, when executed by a processor, implements the method of any embodiment of the model display method of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program comprising computer readable code which, when run on a device, causes a processor in the device to execute instructions for implementing the steps in the method as in any of the embodiments of the model display method of the first aspect described above.
Based on the model display method and apparatus, the electronic device, and the storage medium provided by the embodiments of the present disclosure, a first model scene in a first model may be displayed, where the viewpoint of the first model scene is the user's current virtual position in the first model; a target virtual viewpoint matching the current virtual position may then be determined from a set of virtual viewpoints included in the model data of a second model, where the first model and the second model are generated based on the same spatial structure; finally, when a second model display operation for the second model is detected, a second model scene in the second model may be displayed, where the viewpoint of the second model scene is the target virtual viewpoint and the viewing angle of the second model scene is the user's current virtual viewing angle. By displaying the second model scene at the target virtual viewpoint matching the user's current virtual position in the first model and at the user's current virtual viewing angle, the embodiments of the present disclosure can improve the continuity of model display and, to a certain extent, improve the user's experience of browsing the model.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of a first embodiment of a model display method of the present disclosure.
FIG. 2 is a flow chart of a second embodiment of a model display method of the present disclosure.
Fig. 3A and 3B are schematic diagrams of a three-dimensional model display manner in an embodiment of a model display method according to the present disclosure.
Fig. 4 is a schematic structural diagram of an embodiment of a model display device according to the present disclosure.
Fig. 5 is a block diagram of an electronic device provided in an exemplary embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those of skill in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another, and are not intended to imply any particular technical meaning or any necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure is only one kind of association relationship describing an associated object, and means that three kinds of relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The disclosed embodiments may be applied to at least one of a terminal device, a computer system, and a server, which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with at least one electronic device of a terminal device, computer system, and server include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above, and the like.
At least one of the terminal device, the computer system, and the server may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Referring to FIG. 1, a flow 100 of a first embodiment of a model display method according to the present disclosure is shown. The model display method comprises the following steps:
and 101, displaying a first model scene in the first model.
In this embodiment, an execution subject (e.g., a server, a terminal device, a VR device, a model display apparatus, etc.) of the model display method may display a first model scene in the first model. And the viewpoint of the first model scene is the current virtual position of the user in the first model.
The first model may be imagery for presenting the appearance of an object, or the internal structure of an object, to the user. By way of example, the first model may include, but is not limited to: images, three-dimensional models constructed based on point cloud data, and the like.
Here, the first model may be a model previously constructed based on a real scene or a virtual scene, or may be a model determined from a predetermined set of models according to the operation or position of the user.
For example, in constructing the model (including the first model), a point cloud data acquisition device (e.g., lidar, etc.) may be employed to capture at one or more capture viewpoints. Thereafter, a model may be constructed using the obtained point cloud data.
In addition, the first model may also be a model obtained by performing operations such as replacement, displacement, deformation, and the like on data corresponding to the indicated scene on the basis of obtaining the point cloud data.
The first model scene may be a scene rendered by all or part of the imagery in the first model.
In general, the current virtual position of the user in the model (including the first model) may be acquired via an electronic device (e.g., a cell phone, a VR device, etc.) used by the user, or may be changed via the user's manipulation (e.g., sliding) of an input-output device (e.g., a touch screen, etc.). The current virtual position of the user in the model may characterize the position of the user in the model, i.e. the viewpoint of the first model scene.
Optionally, the execution subject may also obtain the current virtual position of the user in real time, that is, the current virtual position of the user may be updated over time or by the operation of the user.
And 102, determining a target virtual viewpoint matched with the current virtual position from the virtual viewpoint set included in the model data of the second model.
In this embodiment, the execution subject may determine the target virtual viewpoint matching the current virtual position from a set of virtual viewpoints included in the model data of the second model. The first model and the second model may be generated based on the same spatial structure. For example, the first model may be a model of a real house, and the second model may be a model obtained by virtually applying a decoration effect to the house.
The second model may be a model previously constructed based on a real scene or a virtual scene, or may be a model determined from a predetermined set of models according to the operation or position of the user.
For example, in constructing the model (including the second model), a point cloud data acquisition device (e.g., lidar, etc.) may be employed to capture at one or more capture viewpoints. Thereafter, a model may be constructed using the obtained point cloud data.
In addition, the second model may also be a model obtained by performing operations such as replacement, displacement, deformation, and the like on data corresponding to the indicated scene on the basis of obtaining the point cloud data.
Here, the model data of the constructed model may include a set of virtual viewpoints. Wherein each virtual viewpoint may correspond to one photographing viewpoint.
In some optional implementations of this embodiment, the execution subject may execute step 102, namely determine the target virtual viewpoint matching the current virtual position from the set of virtual viewpoints included in the model data of the second model, in the following manner:
determining, from the virtual viewpoint set included in the model data of the second model, a virtual viewpoint whose corresponding shooting viewpoint is at a distance less than or equal to a preset distance threshold from the real position corresponding to the current virtual position, as a target virtual viewpoint matching the current virtual position.
Here, the number of target virtual viewpoints corresponding to the position information may be one or more.
The real position corresponding to the current virtual position may be a position existing in the objective world. For example, if the second model is generated based on the spatial structure of an objectively existing house, then any virtual position in the second model (including the current virtual position) corresponds to a real position in the house.
Optionally, the preset distance threshold may be determined based on at least one of: the travel speed of the user (e.g., walking speed, etc.), the network speed of the electronic device (e.g., cell phone, VR device, etc.) used by the user, and the size of the storage space occupied by the model data of the three-dimensional model.
It can be understood that, in the above alternative implementation, the target virtual viewpoint matching the current virtual position may be determined according to a preset distance threshold, so that the user's browsing range in the second model can be predetermined and the second model scene that the user needs to browse can be displayed more quickly in subsequent steps, improving the user's experience of viewing the model.
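As a rough illustration of this threshold-based matching, the sketch below filters viewpoint pairs by Euclidean distance; the Vec3 and ViewpointPair types and all function names are assumptions introduced for the example, not identifiers from the patent.

```typescript
// A minimal sketch of threshold-based viewpoint matching, under assumed types.
interface Vec3 { x: number; y: number; z: number; }

interface ViewpointPair {
  virtualViewpoint: Vec3;   // a virtual viewpoint in the second model's data
  shootingViewpoint: Vec3;  // its one-to-one corresponding real capture position
}

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

// Returns every virtual viewpoint whose corresponding shooting viewpoint lies
// within the preset distance threshold of the real position corresponding to
// the user's current virtual position; the result may hold one or more entries.
function selectViewpointsWithinThreshold(
  pairs: ViewpointPair[],
  realPosition: Vec3,
  threshold: number,
): Vec3[] {
  return pairs
    .filter(p => distance(p.shootingViewpoint, realPosition) <= threshold)
    .map(p => p.virtualViewpoint);
}
```

The threshold itself could, per the paragraph above, be derived from the user's travel speed, the device's network speed, and the size of the model data; any concrete formula for it would likewise be an assumption.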
In some optional implementations of this embodiment, the execution subject may also execute step 102, namely determine the target virtual viewpoint matching the current virtual position from the set of virtual viewpoints included in the model data of the second model, in the following manner:
First, the distance between each shooting viewpoint in a preset shooting viewpoint set and the real position corresponding to the current virtual position is determined, to obtain a distance set. The shooting viewpoints in the preset shooting viewpoint set correspond one-to-one to the virtual viewpoints in the virtual viewpoint set.
A shooting viewpoint in the preset shooting viewpoint set may be a position in a real scene (e.g., in a house) at which a point cloud data acquisition device (e.g., a laser radar) is placed. Each distance in the distance set represents the distance between a single shooting viewpoint in the preset shooting viewpoint set and the real position corresponding to the current virtual position.
Next, a preset number (e.g., 3, 5, etc.) of distances is determined from the distance set in ascending order.
Finally, the virtual viewpoint corresponding to each distance among the preset number of distances is determined as a target virtual viewpoint matching the current virtual position.
Here, since the shooting viewpoints in the preset shooting viewpoint set correspond one-to-one to the virtual viewpoints in the virtual viewpoint set, and each distance represents the distance between a single shooting viewpoint and the real position corresponding to the current virtual position, each distance corresponds to one virtual viewpoint.
It can be understood that, in the above optional implementation, a preset number of virtual viewpoints close to the real position corresponding to the current virtual position may be selected as the target virtual viewpoints, so that the user's browsing range in the second model can be predetermined based on that real position, and the second model scene that the user needs to browse can be displayed more quickly in subsequent steps, improving the user's experience of viewing the model.
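A companion sketch of this preset-number variant follows, repeating the illustrative types above for self-containment; sorting the whole distance set and slicing is one straightforward way to take the smallest distances in ascending order, not necessarily how a production implementation would do it.

```typescript
// Same illustrative types as the previous sketch, repeated for self-containment.
interface Vec3 { x: number; y: number; z: number; }
interface ViewpointPair { virtualViewpoint: Vec3; shootingViewpoint: Vec3; }
const distance = (a: Vec3, b: Vec3): number =>
  Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);

// Selects the virtual viewpoints whose one-to-one shooting viewpoints are
// nearest to the real position corresponding to the current virtual position.
function selectNearestViewpoints(
  pairs: ViewpointPair[],
  realPosition: Vec3,
  presetCount: number,  // e.g. 3 or 5, as in the examples above
): Vec3[] {
  return pairs
    .map(p => ({ vp: p.virtualViewpoint, d: distance(p.shootingViewpoint, realPosition) }))
    .sort((a, b) => a.d - b.d)   // ascending: smallest distances first
    .slice(0, presetCount)       // take the preset number of distances
    .map(e => e.vp);
}
```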
And 103, in response to detecting a second model display operation for the second model, displaying a second model scene in the second model.
In this embodiment, in a case where a second model display operation for the second model is detected, the execution subject may display a second model scene in the second model. The viewpoint of the second model scene is the target virtual viewpoint, and the viewing angle of the second model scene is the user's current virtual viewing angle.
The second model display operation may be a predetermined operation for instructing display of the second model to the user. As an example, the second model display operation may be clicking a preset virtual key in a page, or pressing a predetermined physical button.
Here, the current virtual viewing angle may be determined based on the user's current pose information (e.g., at the time step 103 is executed); in addition, the current virtual viewing angle may be updated based on the user's operations. As an example, the initial viewing angle and viewpoint of the second model scene may be consistent with the viewing angle and viewpoint of the first model scene.
The model display method provided in the above embodiment of the present disclosure may display a first model scene in a first model, where the viewpoint of the first model scene is the user's current virtual position in the first model; then determine a target virtual viewpoint matching the current virtual position from a set of virtual viewpoints included in the model data of a second model, where the first model and the second model are generated based on the same spatial structure; and finally display a second model scene in the second model when a second model display operation for the second model is detected, where the viewpoint of the second model scene is the target virtual viewpoint and the viewing angle of the second model scene is the user's current virtual viewing angle. By displaying the second model scene at the target virtual viewpoint matching the user's current virtual position in the first model and at the user's current virtual viewing angle, the embodiments of the present disclosure can improve the continuity of model display and, to a certain extent, improve the user's experience of browsing the model.
In some optional implementations of this embodiment, the execution subject may execute step 103 as follows:
In the case where a second model display operation for the second model is detected, the execution subject may display the second model scene in the second model on an execution page of the second model display operation.
It can be understood that, in the prior art, a link jump is usually used to display a new model scene on a new page. Specifically, at present, loading, rendering, and other processing of the corresponding model data only starts after the user clicks the jump link, and the new model scene is displayed on a new page different from the page the user is currently browsing. This optional implementation can display the second model scene in the second model on the execution page of the second model display operation without a page jump, which helps improve the speed at which the model scene is displayed.
In some application scenarios of the above alternative implementation, before the user performs the second model display operation, the execution page of the second model display operation displays a model scene (including but not limited to the first model scene) of the first model. On this basis, the execution subject may display the second model scene on that page in either of the following ways:
in a first mode, the model scene of the first model is updated to the second model scene. For example, the model scene of the first model is updated to the second model scene in a fade-in/fade-out manner.
In a second mode, a model scene of the first model and the second model scene are displayed on the execution page of the second model display operation.
It is to be understood that, in the first way, the model scene of the first model may be updated to the model scene of the second model; in some cases, the user may view and compare the first model scene and the second model scene by switching between them. In the second way, the first model scene and the second model scene may be displayed on the same page, making it more convenient for the user to compare them.
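As a rough illustration of the two ways just described, here is a hedged sketch that either swaps the first model's scene for the second model scene or places both on the same execution page; the SceneView interface, the page object, and all names are hypothetical stand-ins rather than structures from the patent.

```typescript
// Hypothetical scene/page abstractions; not an API from the patent.
interface Vec3 { x: number; y: number; z: number; }

interface SceneView {
  // Renders the scene at the given viewpoint and viewing angle.
  render(viewpoint: Vec3, viewAngle: Vec3): void;
}

type DisplayMode = 'replace' | 'sideBySide';

function showSecondModelScene(
  page: { views: SceneView[] },
  firstScene: SceneView,
  secondScene: SceneView,
  mode: DisplayMode,
  viewpoint: Vec3,   // the target virtual viewpoint
  viewAngle: Vec3,   // the user's current virtual viewing angle
): void {
  if (mode === 'replace') {
    // Way 1: update the first model's scene to the second model scene in place
    // (a fade-out/fade-in transition could be applied here).
    page.views = [secondScene];
  } else {
    // Way 2: show both scenes on the same execution page for comparison.
    page.views = [firstScene, secondScene];
  }
  for (const view of page.views) view.render(viewpoint, viewAngle);
}
```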
In some cases of the above application scenarios, the first model scene is a real scene of a house, and the second model scene is a simulated scene obtained by virtually applying a decoration effect to the house. Both the first model and the second model are three-dimensional models.
It can be understood that, in this situation, a simulated scene of the decoration effect matching the viewpoint and viewing angle at which the user is viewing the real scene of the house can be displayed. This improves the continuity of the model display, makes it convenient for the user to compare the two scenes at the same viewpoint and viewing angle, provides decoration suggestions, and improves the user's browsing experience to a certain extent.
In some optional implementations of this embodiment, the execution subject may further perform the following steps:
First, the user's current virtual pose information is acquired in real time.
Then, from the locally loaded model data, model data matching the most recently acquired current virtual pose information is determined, and the model scene in the second model is displayed by rendering that model data.
It can be understood that, in the above optional implementation, after the second model scene is displayed, the execution subject may further update the displayed scene as the user's current virtual pose information changes.
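The loop below is a minimal sketch of this real-time behavior, assuming a polled pose source, a 100 ms update period, and nearest-viewpoint matching as the matching rule; none of these specifics are fixed by the patent.

```typescript
// A sketch of the pose-driven update, under assumed types and helpers.
interface Vec3 { x: number; y: number; z: number; }
interface Pose { position: Vec3; viewAngle: Vec3; }
interface ModelChunk { viewpoint: Vec3; data: ArrayBuffer; }

const distance = (a: Vec3, b: Vec3): number =>
  Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);

function startPoseDrivenRendering(
  getCurrentPose: () => Pose,                           // e.g. from a VR device or touch input
  loadedChunks: ModelChunk[],                           // model data already loaded locally
  renderChunk: (chunk: ModelChunk, pose: Pose) => void, // front-end rendering callback
): ReturnType<typeof setInterval> {
  return setInterval(() => {
    if (loadedChunks.length === 0) return;
    const pose = getCurrentPose();
    // Match the most recently acquired pose to the loaded chunk whose
    // viewpoint is nearest to the current virtual position.
    let best = loadedChunks[0];
    for (const chunk of loadedChunks) {
      if (distance(chunk.viewpoint, pose.position) < distance(best.viewpoint, pose.position)) {
        best = chunk;
      }
    }
    renderChunk(best, pose);  // display the matching model scene in the second model
  }, 100);
}
```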
With further reference to fig. 2, fig. 2 is a flow chart of a second embodiment of the model display method of the present disclosure. The process 200 of the model display method includes:
and 201, displaying a first model scene in the first model.
In this embodiment, an execution subject (e.g., a server, a terminal device, a VR device, a model display apparatus, etc.) of the model display method may display a first model scene in the first model. And the viewpoint of the first model scene is the current virtual position of the user in the first model.
In this embodiment, step 201 is substantially the same as step 101 in the embodiment corresponding to fig. 1, and is not described here again.
And 202, determining a target virtual viewpoint matching the current virtual position from the set of virtual viewpoints included in the model data of the second model.
In this embodiment, the execution subject may determine the target virtual viewpoint matching the current virtual position from a set of virtual viewpoints included in the model data of the second model. Wherein the first model and the second model are generated based on the same spatial structure.
In this embodiment, step 202 is substantially the same as step 102 in the embodiment corresponding to fig. 1, and is not described herein again.
And 203, in response to the model data of the second model including not-yet-loaded model data at the target virtual viewpoint, taking the not-yet-loaded model data as target model data and loading it.
In this embodiment, when the model data of the second model includes model data at the target virtual viewpoint that has not yet been loaded, the execution subject may take that not-yet-loaded model data as target model data and load it.
And 204, in response to detecting a second model display operation for the second model, determining model scene data of the second model scene from the target model data.
In this embodiment, in a case where a second model display operation for the second model is detected, the execution subject may determine model scene data of the second model scene from the target model data. The viewpoint of the second model scene is the target virtual viewpoint, and the viewing angle of the second model scene is the user's current virtual viewing angle.
And 205, displaying the second model scene by rendering the model scene data.
In this embodiment, the execution subject may render the model scene data and thereby display the second model scene.
It should be noted that, besides the above-mentioned contents, this embodiment may further include the same or similar features and effects as the embodiment corresponding to fig. 1, which are not repeated herein.
As can be seen from fig. 2, in the process 200 of the model display method in this embodiment, before the second model display operation is detected, the model data at the target virtual viewpoint matching the real position corresponding to the current virtual position may be preloaded. Thus, when the second model display operation is detected, the model scene data of the second model scene is determined directly from the locally loaded model data, rendered, and displayed. This helps relieve network pressure and can improve the display speed of the model.
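The following sketch ties steps 203 to 205 together under assumed helpers: fetchModelData, the chunk cache, and the epsilon-based viewpoint match are illustrative inventions for this example, not elements of the patent.

```typescript
// A sketch of the preload-then-render flow of this embodiment.
interface Vec3 { x: number; y: number; z: number; }
interface Pose { position: Vec3; viewAngle: Vec3; }
interface ModelChunk { viewpoint: Vec3; data: ArrayBuffer; }

const distance = (a: Vec3, b: Vec3): number =>
  Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);

// Step 203: before any display operation, load the not-yet-loaded model data
// at the target virtual viewpoints as the target model data.
async function preloadTargetModelData(
  targetViewpoints: Vec3[],
  isLoaded: (vp: Vec3) => boolean,
  fetchModelData: (vp: Vec3) => Promise<ModelChunk>,  // hypothetical network fetch
  cache: ModelChunk[],
): Promise<void> {
  const missing = targetViewpoints.filter(vp => !isLoaded(vp));
  const chunks = await Promise.all(missing.map(vp => fetchModelData(vp)));
  cache.push(...chunks);
}

// Steps 204-205: when the second model display operation is detected, take the
// scene data straight from the local cache and render it, with no network trip.
function onSecondModelDisplayOperation(
  cache: ModelChunk[],
  targetViewpoint: Vec3,
  pose: Pose,
  renderChunk: (chunk: ModelChunk, pose: Pose) => void,
): void {
  const chunk = cache.find(c => distance(c.viewpoint, targetViewpoint) < 1e-6);
  if (chunk) renderChunk(chunk, pose);
}
```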
By way of example, continuing to refer to fig. 3A and 3B, fig. 3A and 3B are schematic diagrams of a three-dimensional model display manner in an embodiment of a model display method of the present disclosure.
In this example, a first model scene in a first model may be displayed first, where the viewpoint of the first model scene is the user's current virtual position in the first model. The current virtual position may include the user's current point location, the user's current viewing direction, and the like. Then, the viewing angle can be switched. Here, the point location and viewing angle at which the second model is initially displayed may be determined according to that viewing angle and point location. For example, as shown in fig. 3A, at the current point location the user can view the model data of the three-dimensional model of the undecorated real house, i.e., the first model scene.
Then, through the floor plan of the house, model data of the three-dimensional model of the corresponding decorated virtual scene at the current viewing angle of the real scene is obtained.
Then, the matched point location and viewing angle can be found; the user clicks a preset button (for example, a "view decoration" button) on the current page, the front end renders the model data of the decorated three-dimensional model at the current point location, and the decorated effect is switched in on the page. Here, the switch may be performed by fading out the current scene and fading in the decorated effect. For example, as shown in fig. 3B, the user may view the model data of the three-dimensional model of the decorated virtual scene at the current point location. The viewpoint and viewing angle of the scenes in fig. 3B and fig. 3A may be the same.
At present, in a VR house-viewing scenario, if a user wants to view the decorated effect, the user is often required to click a preset virtual key or link and jump to a new link to view it, and loading, rendering, and other processing of the corresponding model data only starts after that click. With the model display method in the above example, when previewing in VR, the user can switch to the decoration mode at any point location and any viewing angle, no link jump is required, and the front end renders the model to complete the switch quickly.
With further reference to fig. 4, as an implementation of the method shown in the above figures, the present disclosure provides an embodiment of a model display apparatus, which corresponds to the embodiment of the method shown in fig. 1 to 2, and which may include the same or corresponding features as the embodiment of the method shown in fig. 1 to 2, in addition to the features described below, and produce the same or corresponding effects as the embodiment of the method shown in fig. 1 to 2. The device can be applied to various electronic equipment.
As shown in fig. 4, the model display apparatus 400 of the present embodiment includes: a first display unit 401, a first determination unit 402, and a second display unit 403. The first display unit 401 is configured to display a first model scene in a first model, where a viewpoint of the first model scene is a current virtual position of a user in the first model; a first determining unit 402 configured to determine a target virtual viewpoint matching the current virtual position from a set of virtual viewpoints included in model data of a second model, wherein the first model and the second model are generated based on the same spatial structure; a second display unit 403, configured to display a second model scene in the second model in response to detecting a second model display operation for the second model, wherein the viewpoint of the second model scene is the target virtual viewpoint, and the angle of view of the second model scene is the current virtual angle of view of the user.
In this embodiment, the first display unit 401 of the model display apparatus 400 may display a first model scene in a first model, wherein a viewpoint of the first model scene is a current virtual position of a user in the first model.
In this embodiment, the first determining unit 402 may determine a target virtual viewpoint matching the current virtual position from a set of virtual viewpoints included in the model data of a second model, where the first model and the second model are generated based on the same spatial structure.
In this embodiment, the second display unit 403 may display a second model scene in the second model in response to detecting a second model display operation for the second model, where a viewpoint of the second model scene is the target virtual viewpoint and an angle of view of the second model scene is a current virtual angle of view of the user.
In some optional implementations of this embodiment, the first determining unit 402 includes:
a first determining subunit (not shown in the figure), configured to determine the distance between each shooting viewpoint in a preset shooting viewpoint set and the real position corresponding to the current virtual position, to obtain a distance set, where the shooting viewpoints in the preset shooting viewpoint set correspond one-to-one to the virtual viewpoints in the virtual viewpoint set;
a second determining subunit (not shown in the figure) configured to determine a preset number of distances from the distance set in ascending order;
a third determining subunit (not shown in the figure) configured to determine the virtual viewpoint corresponding to each distance among the preset number of distances as a target virtual viewpoint matching the current virtual position.
In some optional implementations of this embodiment, the first determining unit 402 includes:
a fourth determining subunit (not shown in the figure), configured to determine, from the virtual viewpoint set included in the model data of the second model, a virtual viewpoint whose corresponding shooting viewpoint is at a distance less than or equal to a preset distance threshold from the real position corresponding to the current virtual position, as the target virtual viewpoint matching the current virtual position.
In some optional implementations of this embodiment, the second display unit 403 includes:
a first display subunit (not shown in the figure) configured to display the second model scene in the second model on the execution page of the second model display operation.
In some optional implementations of this embodiment, before the user performs the second model display operation, a model scene of the first model is displayed on an execution page of the second model display operation; and
the first display subunit includes:
an updating module (not shown in the figure) configured to update the model scene of the first model to the second model scene; or
a display module (not shown in the figure) configured to display, on the execution page of the second model display operation, both the model scene of the first model and the second model scene.
In some optional implementations of this embodiment, the apparatus 400 further includes:
a loading unit (not shown in the figure) configured to, in response to the model data of the second model including not-yet-loaded model data at the target virtual viewpoint, take the not-yet-loaded model data as target model data and load the target model data; and
the second display unit 403 includes:
a fifth determining subunit (not shown in the drawings) configured to determine model scene data of the second model scene from the target model data in response to detection of a second model display operation for the second model;
and a second display subunit (not shown) configured to display the second model scene by rendering the model scene data.
In some optional implementations of this embodiment, the apparatus 400 further includes:
an acquisition unit (not shown in the figure) configured to acquire current virtual pose information of the user in real time;
a second determination unit (not shown in the figure) configured to determine, from the locally loaded model data, model data matching the most recently acquired current virtual pose information, and to display the model scene in the second model by rendering that model data.
In some optional implementations of this embodiment, the first model scene is a real scene of a house, the second model scene is a simulated scene obtained by virtually applying a decoration effect to the house, and both the first model and the second model are three-dimensional models.
In the model display apparatus 400 provided in the above embodiment of the present disclosure, the first display unit 401 is configured to display a first model scene in a first model, where the viewpoint of the first model scene is the user's current virtual position in the first model; the first determining unit 402 is configured to determine a target virtual viewpoint matching the current virtual position from a set of virtual viewpoints included in the model data of a second model, where the first model and the second model are generated based on the same spatial structure; and the second display unit 403 is configured to display a second model scene in the second model in response to detecting a second model display operation for the second model, where the viewpoint of the second model scene is the target virtual viewpoint and the viewing angle of the second model scene is the user's current virtual viewing angle. By displaying the second model scene at the target virtual viewpoint matching the user's current virtual position in the first model and at the user's current virtual viewing angle, the embodiments of the present disclosure can improve the continuity of model display and, to a certain extent, improve the user's experience of browsing the model.
Next, an electronic apparatus according to an embodiment of the present disclosure is described with reference to fig. 5. The electronic device may be either or both of the first device and the second device, or a stand-alone device separate from them, which may communicate with the first device and the second device to receive acquired input signals from them.
FIG. 5 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.
As shown in fig. 5, the electronic device includes one or more processors 501 and memory 502.
The processor 501 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.
Memory 502 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 501 to implement the model display methods of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device may further include: an input device 503 and an output device 504, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device is a first device or a second device, the input device 503 may be the microphone or the microphone array described above for capturing the input signal of the sound source. When the electronic device is a stand-alone device, the input means 503 may be a communication network connector for receiving the acquired input signals from the first device and the second device.
The input device 503 may also include, for example, a keyboard, a mouse, and the like. The output device 504 may output various information to the outside, including the determined distance information, direction information, and the like. The output devices 504 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the electronic device relevant to the present disclosure are shown in fig. 5, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device may include any other suitable components, depending on the particular application.
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the model display method according to various embodiments of the present disclosure described in the "exemplary methods" section above of this specification.
Program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
The description of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or to limit the disclosure to the forms disclosed. Many modifications and variations will be apparent to practitioners skilled in the art. The embodiments were chosen and described in order to best explain the principles of the disclosure and its practical application, and to enable others of ordinary skill in the art to understand the disclosure and its various embodiments with the various modifications suited to the particular use contemplated.

Claims (10)

1. A method for displaying a model, the method comprising:
displaying a first model scene in a first model, wherein a viewpoint of the first model scene is a current virtual position of a user in the first model;
determining a target virtual viewpoint matching the current virtual position from a set of virtual viewpoints included in model data of a second model, wherein the first model and the second model are generated based on the same spatial structure;
in response to detecting a second model display operation for the second model, displaying a second model scene in the second model, wherein the viewpoint of the second model scene is the target virtual viewpoint and the perspective of the second model scene is the current virtual perspective of the user.
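As a non-authoritative reading of claim 1, its three steps might be sketched in Python as follows. The model and viewer objects and the match_viewpoint callback are hypothetical stand-ins, since the claim specifies behavior rather than an API; the concrete matching strategies of claims 2 and 3 are sketched after claim 3 below.

```python
def switch_to_second_model(first_model, second_model, viewer, match_viewpoint):
    """Minimal sketch of the claim 1 flow, under assumed object shapes.

    first_model / second_model: objects exposing scene_at(...); both are
    assumed to be generated from the same spatial structure, so a position
    in one is meaningful in the other.
    viewer: tracks current_virtual_position and current_virtual_view_angle.
    match_viewpoint: callable that picks the target virtual viewpoint from
    the second model's virtual viewpoint set (see claims 2 and 3).
    """
    # Step 1: display the first model scene from the user's current
    # virtual position in the first model.
    viewer.show(first_model.scene_at(viewpoint=viewer.current_virtual_position))

    # Step 2: determine the target virtual viewpoint matching the current
    # virtual position from the second model's virtual viewpoint set.
    target = match_viewpoint(second_model.virtual_viewpoints,
                             viewer.current_virtual_position)

    # Step 3: on a second-model display operation, display the second model
    # scene from the target viewpoint while keeping the user's current
    # virtual viewing angle, so the transition feels continuous.
    viewer.show(second_model.scene_at(viewpoint=target,
                                      view_angle=viewer.current_virtual_view_angle))
```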
2. The method of claim 1, wherein determining the target virtual viewpoint matching the current virtual position from the set of virtual viewpoints included in the model data of the second model comprises:
determining a distance between each shooting viewpoint in a preset shooting viewpoint set and a real position corresponding to the current virtual position to obtain a distance set, wherein the shooting viewpoints in the preset shooting viewpoint set correspond one-to-one to the virtual viewpoints in the virtual viewpoint set;
selecting a preset number of distances from the distance set in ascending order; and
determining a virtual viewpoint corresponding to a distance among the preset number of distances as the target virtual viewpoint matching the current virtual position.
3. The method of claim 1, wherein determining the target virtual viewpoint matching the current virtual position from the set of virtual viewpoints included in the model data of the second model comprises:
and determining a virtual viewpoint corresponding to a shooting viewpoint, in the virtual viewpoint set included in the model data of the second model, of which the distance between the real positions corresponding to the current virtual position is smaller than or equal to a preset distance threshold value, as the target virtual viewpoint matched with the current virtual position.
4. The method according to any one of claims 1-3, wherein said displaying a second model scene in the second model comprises:
displaying the second model scene in the second model on an execution page of the second model display operation.
5. The method according to claim 4, wherein before the user performs the second model display operation, the execution page of the second model display operation displays the model scene of the first model; and
the displaying a second model scene in the second model on the execution page of the second model display operation comprises:
updating the model scene of the first model to the second model scene; or
displaying both the model scene of the first model and the second model scene on the execution page of the second model display operation.
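As a non-authoritative illustration of the two display modes in claim 5, the following sketch assumes a hypothetical ExecutionPage container holding the scenes currently shown on the page; neither the class nor the flag name comes from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class ExecutionPage:
    # Scenes currently shown on the page where the display operation happened.
    scenes: list = field(default_factory=list)

def show_second_scene(page, second_scene, keep_first=False):
    """Claim 5: either update the first model's scene to the second scene,
    or display the first model's scene and the second scene together."""
    if keep_first:
        page.scenes.append(second_scene)   # show both scenes side by side
    else:
        page.scenes = [second_scene]       # replace the first scene
```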
6. The method of any one of claims 1-5, wherein prior to said displaying a second model scene in the second model in response to detecting a second model display operation for the second model, the method further comprises:
in response to the model data of the second model including unloaded model data at the target virtual viewpoint, taking the unloaded model data as target model data and loading the target model data; and
the displaying a second model scene in the second model in response to detecting a second model display operation for the second model comprises:
in response to detecting a second model display operation for the second model, determining model scene data for the second model scene from the target model data;
displaying the second model scene by rendering the model scene data.
7. The method of claim 6, further comprising:
acquiring current virtual pose information of the user in real time;
and determining, from the model data loaded locally, model data matching the most recently acquired current virtual pose information, and displaying the model scene in the second model by rendering the matched model data.
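A minimal sketch of the on-demand loading of claim 6 and the pose-driven rendering of claim 7, assuming a dict-based local cache and caller-supplied fetch_model_data, pick_matching_viewpoint, and render callbacks; the patent does not specify any of these interfaces.

```python
local_model_data = {}  # viewpoint id -> model data already loaded locally

def ensure_target_data(target_viewpoint_id, fetch_model_data):
    """Claim 6: if the second model's data at the target virtual viewpoint
    is not yet loaded, treat it as the target model data and load it."""
    if target_viewpoint_id not in local_model_data:
        local_model_data[target_viewpoint_id] = fetch_model_data(target_viewpoint_id)
    return local_model_data[target_viewpoint_id]

def render_latest_pose(latest_pose, pick_matching_viewpoint, render):
    """Claim 7: among the locally loaded data, find the entry matching the
    most recently acquired virtual pose and render the scene from it."""
    viewpoint_id = pick_matching_viewpoint(latest_pose, local_model_data.keys())
    render(local_model_data[viewpoint_id])
```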
8. A model display apparatus, characterized in that the apparatus comprises:
a first display unit configured to display a first model scene in a first model, wherein a viewpoint of the first model scene is a current virtual position of a user in the first model;
a first determination unit configured to determine a target virtual viewpoint matching the current virtual position from a set of virtual viewpoints included in model data of a second model, wherein the first model and the second model are generated based on the same spatial structure;
a second display unit configured to display a second model scene in the second model in response to detecting a second model display operation for the second model, wherein a viewpoint of the second model scene is the target virtual viewpoint and a viewing angle of the second model scene is a current virtual viewing angle of the user.
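Read as software, the claim 8 apparatus decomposes into three units with one responsibility each; a sketch under assumed names (none of the identifiers below appear in the patent):

```python
class FirstDisplayUnit:
    def display(self, first_model, current_virtual_position):
        """Show the first model scene from the user's current virtual position."""
        ...

class FirstDeterminationUnit:
    def determine(self, virtual_viewpoint_set, current_virtual_position):
        """Pick, from the second model's virtual viewpoint set, the target
        viewpoint matching the current virtual position."""
        ...

class SecondDisplayUnit:
    def display(self, second_model, target_viewpoint, current_view_angle):
        """On a second-model display operation, show the second model scene
        from the target viewpoint at the user's current virtual viewing angle."""
        ...
```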
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program stored in the memory, wherein the computer program, when executed, implements the method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 7.
CN202110438621.8A 2021-04-22 2021-04-22 Model display method and device, electronic equipment and storage medium Active CN113112613B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110438621.8A CN113112613B (en) 2021-04-22 2021-04-22 Model display method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110438621.8A CN113112613B (en) 2021-04-22 2021-04-22 Model display method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113112613A true CN113112613A (en) 2021-07-13
CN113112613B CN113112613B (en) 2022-03-15

Family

ID=76719673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110438621.8A Active CN113112613B (en) 2021-04-22 2021-04-22 Model display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113112613B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060132482A1 (en) * 2004-11-12 2006-06-22 Oh Byong M Method for inter-scene transitions
CN101627410A (en) * 2007-03-15 2010-01-13 汤姆森许可贸易公司 Methods and apparatus for automated aesthetic transitioning between scene graphs
US20090033740A1 (en) * 2007-07-31 2009-02-05 Kddi Corporation Video method for generating free viewpoint video image using divided local regions
WO2012042998A1 (en) * 2010-09-28 2012-04-05 シャープ株式会社 Image processing device, image processing method, program, and recording medium
US20150302651A1 (en) * 2014-04-18 2015-10-22 Sam Shpigelman System and method for augmented or virtual reality entertainment experience
US20160093105A1 (en) * 2014-09-30 2016-03-31 Sony Computer Entertainment Inc. Display of text information on a head-mounted display
CN104378617A (en) * 2014-10-30 2015-02-25 宁波大学 Method for obtaining pixels in virtual viewpoint
CN106157354A (en) * 2015-05-06 2016-11-23 腾讯科技(深圳)有限公司 A kind of three-dimensional scenic changing method and system
CN106371218A (en) * 2016-10-28 2017-02-01 苏州苏大维格光电科技股份有限公司 Head-mounted three-dimensional display device
US20190335162A1 (en) * 2018-04-26 2019-10-31 Canon Kabushiki Kaisha System that generates virtual viewpoint image, method and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG, Hua et al.: "Smooth transition between virtual scenes based on multiple TIP models", 《光电子.激光》 (Journal of Optoelectronics·Laser) *

Also Published As

Publication number Publication date
CN113112613B (en) 2022-03-15

Similar Documents

Publication Publication Date Title
CN107678647B (en) Virtual shooting subject control method and device, electronic equipment and storage medium
CN110352446B (en) Method and apparatus for obtaining image and recording medium thereof
EP3129871B1 (en) Generating a screenshot
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
EP2814000B1 (en) Image processing apparatus, image processing method, and program
CN106846497B (en) Method and device for presenting three-dimensional map applied to terminal
CN111414225A (en) Three-dimensional model remote display method, first terminal, electronic device and storage medium
CN108776544B (en) Interaction method and device in augmented reality, storage medium and electronic equipment
US20140232748A1 (en) Device, method and computer readable recording medium for operating the same
CN114387400A (en) Three-dimensional scene display method, display device, electronic equipment and server
US10282904B1 (en) Providing augmented reality view of objects
CN111724231A (en) Commodity information display method and device
US11423366B2 (en) Using augmented reality for secure transactions
WO2019241033A1 (en) Emulated multi-screen display device
CN110227255B (en) Interactive control method and device for virtual container in VR game and electronic device
CN111429519B (en) Three-dimensional scene display method and device, readable storage medium and electronic equipment
CN113112613B (en) Model display method and device, electronic equipment and storage medium
CN116188738A (en) Method, apparatus, device and storage medium for interaction in virtual environment
KR20180058097A (en) Electronic device for displaying image and method for controlling thereof
CN116430990A (en) Interaction method, device, equipment and storage medium in virtual environment
CN115518378A (en) Method and device for displaying virtual article in game, electronic equipment and storage medium
CN110882537B (en) Interaction method, device, medium and electronic equipment
CN114463104A (en) Method, apparatus and computer program product for processing VR scenarios
CN111563956A (en) Three-dimensional display method, device, equipment and medium for two-dimensional picture
WO2015030623A1 (en) Methods and systems for locating substantially planar surfaces of 3d scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210903

Address after: Floor 101 and 102-1, Building No. 35, Courtyard No. 2, Xierqi West Road, Haidian District, Beijing 100085

Applicant after: Seashell Housing (Beijing) Technology Co.,Ltd.

Address before: 101300 room 24, 62 Farm Road, Erjie village, Yangzhen Town, Shunyi District, Beijing

Applicant before: Beijing fangjianghu Technology Co.,Ltd.

GR01 Patent grant