CN111562845A - Method, device and equipment for realizing three-dimensional space scene interaction

Info

Publication number
CN111562845A
CN111562845A
Authority
CN
China
Prior art keywords
pixel point
user
dimensional
dimensional model
footprint information
Prior art date
Legal status
Granted
Application number
CN202010401813.7A
Other languages
Chinese (zh)
Other versions
CN111562845B (en)
Inventor
白杰
姚锟
贾松林
Current Assignee
You Can See (Beijing) Technology Co., Ltd.
Original Assignee
Beike Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beike Technology Co Ltd filed Critical Beike Technology Co Ltd
Priority to CN202010401813.7A
Publication of CN111562845A
Priority to PCT/CN2021/093628 (published as WO2021228200A1)
Application granted
Publication of CN111562845B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/003: Navigation within 3D models or images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01: Indexing scheme relating to G06F3/01
    • G06F 2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science
  • Theoretical Computer Science
  • General Engineering & Computer Science
  • Software Systems
  • Physics & Mathematics
  • General Physics & Mathematics
  • Remote Sensing
  • Computer Graphics
  • Computer Hardware Design
  • Radar, Positioning & Navigation
  • Human Computer Interaction
  • User Interface Of Digital Computer
  • Processing Or Creating Images

Abstract

A method, an apparatus, a medium, and a device for realizing three-dimensional space scene interaction are disclosed. The method comprises: if it is detected that a user needs to set footprint information in a three-dimensional space scene, acquiring a first pixel point in the current visual panorama corresponding to the user's current viewing angle in the three-dimensional space scene; determining a three-dimensional model corresponding to the first pixel point; determining a model position of the user's footprint information in the three-dimensional model; and setting the user's footprint information at the model position, the footprint information being used for display to browsing users of the three-dimensional space scene. The technical solution provided by the present disclosure realizes interaction between the user and the three-dimensional space scene, helps improve the user's sense of participation, and allows the footprint information left by the user to bring a richer VR panorama experience to at least one browsing user of the scene, ultimately enhancing the commercial value of the three-dimensional space scene.

Description

Method, device and equipment for realizing three-dimensional space scene interaction
Technical Field
The present disclosure relates to virtual reality panorama technology, and in particular to a method, an apparatus, a storage medium, and an electronic device for implementing three-dimensional space scene interaction.
Background
VR (Virtual Reality) panorama technology is an emerging rich-media technology. Because it can present a three-dimensional scene to a user in a full 720-degree view with no blind spots, bringing an immersive visual experience, VR panorama technology is widely applied in fields such as online shopping malls, travel services, and real-estate services. How to make VR panorama technology bring users a richer experience is a technical problem worth attention.
Disclosure of Invention
The present disclosure is proposed to solve the above technical problem. Embodiments of the present disclosure provide a method for realizing three-dimensional space scene interaction, an apparatus for realizing three-dimensional space scene interaction, a storage medium, and an electronic device.
According to one aspect of the embodiments of the present disclosure, there is provided a method for realizing three-dimensional space scene interaction, including: if it is detected that a user needs to set footprint information in a three-dimensional space scene, acquiring a first pixel point in the current visual panorama corresponding to the user's current viewing angle in the three-dimensional space scene; determining a three-dimensional model corresponding to the first pixel point; determining a model position of the user's footprint information in the three-dimensional model; and setting the user's footprint information at the model position, the user's footprint information being used for display to browsing users of the three-dimensional space scene.
In an embodiment of the present disclosure, the footprint information includes: at least one of text, pictures, audio, video, and three-dimensional models.
In another embodiment of the present disclosure, acquiring the first pixel point in the current visual panorama corresponding to the user's current viewing angle in the three-dimensional space scene includes: acquiring a center pixel point of the current visual panorama corresponding to the user's current viewing angle in the three-dimensional space scene, the center pixel point serving as the first pixel point.
In yet another embodiment of the present disclosure, determining the three-dimensional model corresponding to the first pixel point includes: judging whether a three-dimensional model is set for the first pixel point; if a three-dimensional model is set for the first pixel point, taking the three-dimensional model set for the first pixel point as the three-dimensional model corresponding to the first pixel point; and if no three-dimensional model is set for the first pixel point, taking a three-dimensional model set for another pixel point in the current visual panorama as the three-dimensional model corresponding to the first pixel point.
In another embodiment of the present disclosure, taking the three-dimensional model set for another pixel point in the current visual panorama as the three-dimensional model corresponding to the first pixel point includes: traversing, starting from the first pixel point and according to a preset traversal rule, the other pixel points in the current visual panorama corresponding to the current viewing angle in the three-dimensional space scene; and, if a pixel point for which a three-dimensional model is set is traversed, updating the first pixel point to that pixel point, obtaining the three-dimensional model corresponding to the first pixel point, and stopping the traversal.
In another embodiment of the present disclosure, acquiring the first pixel point in the current visual panorama corresponding to the user's current viewing angle in the three-dimensional space scene includes: according to an operation by which the user sets a footprint information target position in the current visual panorama corresponding to the current viewing angle in the three-dimensional space scene, acquiring the pixel point in the current visual panorama corresponding to the footprint information target position, the pixel point serving as the first pixel point.
In yet another embodiment of the present disclosure, determining the three-dimensional model corresponding to the first pixel point includes: judging whether a three-dimensional model is set for the first pixel point; if a three-dimensional model is set for the first pixel point, taking the three-dimensional model set for the first pixel point as the three-dimensional model corresponding to the first pixel point; and if no three-dimensional model is set for the first pixel point, outputting prompt information for updating the footprint information target position and, upon judging that a three-dimensional model is set for the pixel point in the current visual panorama corresponding to the updated footprint information target position, taking that pixel point as the first pixel point to obtain the three-dimensional model corresponding to the first pixel point.
In still another embodiment of the present disclosure, determining the model position of the user's footprint information in the three-dimensional model includes: obtaining the model position of the first pixel point in the three-dimensional model, which serves as the model position of the user's footprint information in the three-dimensional model.
In yet another embodiment of the present disclosure, the method further comprises: for any browsing user of the three-dimensional space scene, determining a footprint area corresponding to the browsing user's current viewing angle in the three-dimensional space scene; determining the footprint information in the three-dimensional model that belongs to the footprint area; and displaying the footprint information belonging to the footprint area in the current visual panorama corresponding to the browsing user's current viewing angle in the three-dimensional space scene.
In another embodiment of the present disclosure, determining the footprint area corresponding to the browsing user's current viewing angle in the three-dimensional space scene includes: acquiring a center pixel point of the current visual panorama corresponding to the browsing user's current viewing angle in the three-dimensional space scene; and determining the footprint area in the current visual panorama by taking the center pixel point as the circle center and a preset length as the radius.
In another embodiment of the present disclosure, displaying the footprint information belonging to the footprint area in the current visual panorama corresponding to the browsing user's current viewing angle in the three-dimensional space scene includes: for pieces of footprint information that belong to the footprint area and have different model positions, displaying them in the current visual panorama according to their respective image positions in the current visual panorama; and for different pieces of footprint information that belong to the footprint area but share the same model position, allocating different image positions to them in the current visual panorama and displaying them according to the allocated image positions, as in the sketch below.
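The following is a minimal TypeScript sketch of one way such an allocation could work; the fan-out strategy, the spacing constant, and all names are illustrative assumptions, since the disclosure does not fix how the distinct image positions are chosen.

// Sketch (assumed strategy): footprints sharing one model position are
// stacked vertically below the shared anchor so none of them overlap.
type ImagePos = { u: number; v: number };

function allocateImagePositions(anchor: ImagePos, count: number, spacingPx: number = 24): ImagePos[] {
  const positions: ImagePos[] = [];
  for (let i = 0; i < count; i++) {
    positions.push({ u: anchor.u, v: anchor.v + i * spacingPx });
  }
  return positions;
}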
In yet another embodiment of the present disclosure, the method further comprises: determining at least one piece of footprint information in the three-dimensional model that does not belong to the footprint area or the current visual panorama; and displaying the at least one piece of footprint information in the form of a bullet screen in the current visual panorama corresponding to the browsing user's current viewing angle in the three-dimensional space scene.
According to another aspect of the embodiments of the present disclosure, there is provided an apparatus for implementing three-dimensional space scene interaction, the apparatus including: a pixel point obtaining module, configured to obtain, if it is detected that a user needs to set footprint information in a three-dimensional space scene, a first pixel point in the current visual panorama corresponding to the user's current viewing angle in the three-dimensional space scene; a three-dimensional model determining module, configured to determine a three-dimensional model corresponding to the first pixel point; a model position determining module, configured to determine a model position of the user's footprint information in the three-dimensional model; and a footprint information setting module, configured to set the user's footprint information at the model position, the user's footprint information being used for display to browsing users of the three-dimensional space scene.
In an embodiment of the present disclosure, the footprint information includes: at least one of text, pictures, audio, video, and three-dimensional models.
In another embodiment of the present disclosure, the pixel point obtaining module includes: the first sub-module is used for acquiring a center pixel point of a current visual panoramic image corresponding to a current visual angle of the user in the three-dimensional space scene, and the center pixel point is used as a first pixel point.
In yet another embodiment of the present disclosure, the three-dimensional model determining module includes: a second sub-module, configured to judge whether a three-dimensional model is set for the first pixel point; a third sub-module, configured to, if the judgment result of the second sub-module is that a three-dimensional model is set for the first pixel point, take the three-dimensional model set for the first pixel point as the three-dimensional model corresponding to the first pixel point; and a fourth sub-module, configured to, if the judgment result of the second sub-module is that no three-dimensional model is set for the first pixel point, take a three-dimensional model set for another pixel point in the current visual panorama as the three-dimensional model corresponding to the first pixel point.
In yet another embodiment of the present disclosure, the fourth sub-module is further configured to: if the judgment result of the second sub-module is that no three-dimensional model is set for the first pixel point, traverse, starting from the first pixel point and according to a preset traversal rule, the other pixel points in the current visual panorama corresponding to the current viewing angle in the three-dimensional space scene; and, if a pixel point for which a three-dimensional model is set is traversed, update the first pixel point to that pixel point, obtain the three-dimensional model corresponding to the first pixel point, and stop the traversal.
In another embodiment of the present disclosure, the pixel point obtaining module includes: a fifth sub-module, configured to obtain, according to an operation by which the user sets a footprint information target position in the current visual panorama corresponding to the current viewing angle in the three-dimensional space scene, the pixel point in the current visual panorama corresponding to the footprint information target position, the pixel point serving as the first pixel point.
In yet another embodiment of the present disclosure, the three-dimensional model determining module includes: a sixth sub-module, configured to judge whether a three-dimensional model is set for the first pixel point; a seventh sub-module, configured to, if the judgment result of the sixth sub-module is that a three-dimensional model is set for the first pixel point, take the three-dimensional model set for the first pixel point as the three-dimensional model corresponding to the first pixel point; and an eighth sub-module, configured to output prompt information for updating the footprint information target position if the judgment result of the sixth sub-module is that no three-dimensional model is set for the first pixel point, and, upon judging that a three-dimensional model is set for the pixel point in the current visual panorama corresponding to the updated footprint information target position, take that pixel point as the first pixel point to obtain the three-dimensional model corresponding to the first pixel point.
In yet another embodiment of the present disclosure, the model position determining module is further configured to: obtain the model position of the first pixel point in the three-dimensional model, which serves as the model position of the user's footprint information in the three-dimensional model.
In yet another embodiment of the present disclosure, the apparatus further includes: a footprint area determining module, configured to determine, for any browsing user of the three-dimensional space scene, a footprint area corresponding to a current view angle of the browsing user in the three-dimensional space scene; the footprint information determining module is used for determining footprint information belonging to the footprint area in the three-dimensional model; and the footprint information display module is used for displaying the footprint information belonging to the footprint area in a current visual panoramic image corresponding to the current visual angle of the browsing user in the three-dimensional space scene.
In yet another embodiment of the present disclosure, the footprint area determining module is further configured to: acquire a center pixel point of the current visual panorama corresponding to the browsing user's current viewing angle in the three-dimensional space scene; and determine the footprint area in the current visual panorama by taking the center pixel point as the circle center and a preset length as the radius.
In still another embodiment of the present disclosure, the footprint information display module is further configured to: for pieces of footprint information that belong to the footprint area and have different model positions, display them in the current visual panorama according to their respective image positions in the current visual panorama; and for different pieces of footprint information that belong to the footprint area but share the same model position, allocate different image positions to them in the current visual panorama and display them according to the allocated image positions.
In yet another embodiment of the present disclosure, the apparatus further includes a bullet screen display module, configured to: determine at least one piece of footprint information in the three-dimensional model that does not belong to the footprint area or the current visual panorama; and display the at least one piece of footprint information in the form of a bullet screen in the current visual panorama corresponding to the browsing user's current viewing angle in the three-dimensional space scene.
According to still another aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, where the storage medium stores a computer program for executing the above method for realizing three-dimensional spatial scene interaction.
According to still another aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; and the processor is used for reading the executable instructions from the memory and executing the instructions to realize the method for realizing the three-dimensional space scene interaction.
According to the method and apparatus for realizing three-dimensional space scene interaction provided by the embodiments of the present disclosure, the three-dimensional model corresponding to the first pixel point, and the position of the footprint information in that three-dimensional model, are obtained from a first pixel point in the current visual panorama of a user who needs to set footprint information, so that the footprint information set by the user can be associated with the corresponding position of the corresponding three-dimensional model. As a result, when the three-dimensional space scene is formed from the panorama based on a user's current viewing angle, the user's footprint information can be presented at the appropriate position in the scene; the user can thus express feelings about a specific part of the three-dimensional space scene and have them accurately presented at the corresponding position. The technical solution provided by the present disclosure therefore realizes interaction between the user and the three-dimensional space scene, helps improve the user's sense of participation and immersion, and allows the footprint information left by the user to bring a richer VR panorama experience to at least one browsing user of the scene, ultimately enhancing the commercial value of the three-dimensional space scene.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of one embodiment of a suitable scenario for use with the present disclosure;
FIG. 2 is a flowchart of one embodiment of a method for realizing three-dimensional space scene interaction according to the present disclosure;
FIG. 3 is a flowchart of one embodiment of determining the three-dimensional model corresponding to a first pixel point according to the present disclosure;
FIG. 4 is a flow diagram of another embodiment of the present disclosure for determining a three-dimensional model corresponding to a first pixel point;
FIG. 5 is a flow diagram of one embodiment of the present disclosure for presenting footprint information to a browsing user;
FIG. 6 is a schematic structural diagram illustrating an embodiment of an apparatus for implementing three-dimensional spatial scene interaction according to the present disclosure;
fig. 7 is a block diagram of an electronic device provided in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments according to the present disclosure will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset, not all, of the embodiments of the present disclosure, and that the present disclosure is not limited to the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those of skill in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another, and imply neither any particular technical meaning nor any necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more than two and "at least one" may refer to one, two or more than two.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Embodiments of the present disclosure may be implemented in electronic devices such as terminal devices, computer systems, and servers, which are operational with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with electronic devices such as terminal devices, computer systems, or servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in a distributed cloud computing environment. In a distributed cloud computing environment, tasks may be performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Summary of the disclosure
In the process of implementing the present disclosure, the inventors found that while a user explores a three-dimensional space scene by adjusting his or her current viewing angle, certain emotions, thoughts, and other feelings often arise. If the user could set footprint information expressing those feelings into the three-dimensional space scene, this would not only help improve the user's sense of participation, but also bring a richer VR panorama experience to other users viewing the scene.
Exemplary overview
An example of an application scenario of the technology for realizing three-dimensional space scene interaction provided by the present disclosure is described below with reference to fig. 1.
In the real-estate field, a three-dimensional space scene can be set up for a house to be rented or sold by using VR panorama technology. Any user can visit over the network and view the three-dimensional space scene of the corresponding house anytime and anywhere. While a user views the three-dimensional space scene of a house, the present disclosure allows the user to leave his or her own footprint information for the house being browsed, and can display to the user both the footprint information this user has left for the house and the footprint information other users have left for it.
In a specific example, assume a user is browsing the three-dimensional space scene of a two-bedroom, one-living-room house in a residential compound; the current visual panorama seen from the user's current viewing angle is shown in fig. 1.
The footprint information left by other users for the three-dimensional space scene of this house includes comments such as "Love this sofa set, thumbs up", "This decorative partition is nice", "This sofa looks great, very high-end", "The combination matches really well, praise", and "The special-shaped design of the tea table is very unique" (at most twenty characters each), together with the three-dimensional model 100 shown in the upper right corner of fig. 1. Presenting the footprint information other users have left for the house to a user browsing its three-dimensional space scene lets the user learn how others feel about the house, deepens the user's understanding of it, and improves the user's browsing experience.
In addition, while viewing the three-dimensional space scene of the house, the user can also publish his or her own feelings about it, that is, leave his or her own footprint information in the three-dimensional space scene. For example, the user may set footprint information such as "this pillar makes the house look more distinctive" at the position of the pillar shown in fig. 1. The footprint information set by the user can be displayed in real time in the three-dimensional space scene shown in fig. 1; that is, the user can see the footprint information he or she has just left while viewing the scene, which can improve the user's sense of participation.
In addition, any other footprint information set by users for the house that does not belong to the three-dimensional space scene shown in fig. 1 can be presented to the user in the form of a bullet screen, thereby raising the user's interest in browsing the three-dimensional space scene at other positions of the house.
The technology for realizing three-dimensional space scene interaction provided by the present disclosure can also be applied to various other scenes. For example, while browsing the three-dimensional space scene of a library, a user may set corresponding footprint information for a book, a chair, or a coffee machine in the library; the footprint information set for a book may be the user's impressions of the book, the page number the user has read up to, and so on. The scenes to which the technology provided by the present disclosure can be applied are not enumerated here one by one.
Exemplary method
Fig. 2 is a flowchart of an embodiment of a method for realizing three-dimensional spatial scene interaction according to the present disclosure. The method of the embodiment shown in fig. 2 comprises: s200, S201, S202, and S203. The following describes each step.
S200, if it is detected that the user needs to set footprint information in the three-dimensional space scene, acquiring a first pixel point in the current visual panorama corresponding to the user's current viewing angle in the three-dimensional space scene.
The three-dimensional space scene in the present disclosure may refer to a space scene having a three-dimensional stereoscopic effect presented to a user by using a preset panorama and a preset three-dimensional model. For example, the three-dimensional space scene may be a three-dimensional space scene set for a library, a three-dimensional space scene set for a house, a three-dimensional space scene set for a coffee shop, a three-dimensional space scene set for a shopping mall, or the like.
The present disclosure may determine that the user needs to set footprint information in the three-dimensional space scene when the user triggers the function of setting footprint information in the scene. For example, when the user clicks a button for setting footprint information or a corresponding option on a menu, the present disclosure detects that the user needs to set footprint information in the three-dimensional space scene. As another example, the user may trigger this function with a preset shortcut. The user's footprint information in the present disclosure may be information indicating that the user once visited the three-dimensional space scene; it may be regarded as the user's visiting-trace information.
The user's current viewing angle in the three-dimensional space scene may refer to the position and angle at which the user is currently viewing the scene. The current viewing angle generally changes with the user's operations; for example, the user may control it by dragging on a touch screen. The current viewing angle determines the content/area of the panorama the user can currently see, i.e., it determines the current visual panorama.
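As a concrete illustration, the following is a minimal TypeScript sketch of tracking the current viewing angle from a drag gesture. The yaw/pitch camera model, the sensitivity constant, and all names are illustrative assumptions rather than details prescribed by the present disclosure.

// Minimal sketch (assumed yaw/pitch model): the current viewing angle is a
// pair of rotations updated as the user drags on the touch screen.
interface ViewAngle {
  yaw: number;   // horizontal rotation in radians
  pitch: number; // vertical rotation in radians
}

const DRAG_SENSITIVITY = 0.005; // radians per dragged pixel (assumed value)

function updateViewAngle(current: ViewAngle, dx: number, dy: number): ViewAngle {
  const pitchLimit = Math.PI / 2 - 0.01; // keep the camera from flipping over
  const pitch = current.pitch + dy * DRAG_SENSITIVITY;
  return {
    yaw: current.yaw + dx * DRAG_SENSITIVITY,
    pitch: Math.max(-pitchLimit, Math.min(pitchLimit, pitch)),
  };
}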
The first pixel point in the present disclosure is a pixel point in the current visual panorama. It may be obtained according to a preset default rule; for example, the first pixel point may be a specific pixel point in the current visual panorama, or any pixel point in it.
S201, determining a three-dimensional model corresponding to the first pixel point.
A three-dimensional space scene in the present disclosure is typically formed from a plurality of three-dimensional models, although it may also be formed from a single three-dimensional model. A pixel point in the current visual panorama seen by the user may present a point in a three-dimensional model, or it may present no point of any three-dimensional model. That is, in the general case, any point of any three-dimensional model in the scene may be presented in the panorama, while not every pixel point of the panorama corresponds to a point of a three-dimensional model. Of course, the present disclosure does not exclude the possibility that some points of the three-dimensional models in the scene are not presented in the panorama.
When the first pixel point presents a point in a three-dimensional model, the three-dimensional model containing that point is the three-dimensional model corresponding to the first pixel point.
When the first pixel point presents a point that does not belong to any three-dimensional model, the present disclosure may update the first pixel point with another pixel point in the current visual panorama. Alternatively, the first pixel point may be left unchanged, in which case the three-dimensional model corresponding to the first pixel point may be the three-dimensional model corresponding to another nearby pixel point in the current visual panorama that presents a point of a three-dimensional model. That is to say, when the first pixel point presents no model point and is not updated, the present disclosure may take the three-dimensional model corresponding to another pixel point in the current visual panorama as the three-dimensional model corresponding to the first pixel point.
S202, determining the model position of the footprint information of the user in the three-dimensional model.
Because at least some pixel points in the panorama have a mapping relationship with points in the three-dimensional model, the present disclosure can obtain the position of the first pixel point (or of the other pixel point) in the three-dimensional model, namely the model position. That model position is the model position of the user's footprint information.
The three-dimensional models in the three-dimensional space scene may each have their own three-dimensional coordinate system, or may share the same three-dimensional coordinate system. The model position of the user's footprint information in the three-dimensional model may be represented as (x, y, z); that is, the footprint information in the present disclosure carries depth information.
S203, setting the footprint information of the user at the model position.
In the present disclosure, setting the user's footprint information at the model position may include: setting a three-dimensional model identifier and a three-dimensional coordinate for the user's footprint information, and storing the correspondence among the three-dimensional model identifier, the three-dimensional coordinate, and the footprint information, for example as in the sketch below.
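A minimal TypeScript sketch of this stored correspondence follows. The record layout, field names, and in-memory store are assumptions made for illustration; the disclosure only requires that the model identifier, the three-dimensional coordinate, and the footprint information be stored in correspondence.

// Sketch (assumed schema): one record per piece of footprint information,
// anchored to a model identifier and an (x, y, z) model position.
type FootprintContent =
  | { kind: "text"; value: string }
  | { kind: "picture" | "audio" | "video" | "model"; url: string };

interface FootprintRecord {
  userId: string;
  modelId: string;                                // three-dimensional model identifier
  position: { x: number; y: number; z: number };  // model position of the footprint
  content: FootprintContent;
}

const footprintStore: FootprintRecord[] = []; // in-memory stand-in for persistence

function setFootprint(record: FootprintRecord): void {
  footprintStore.push(record); // store the identifier/coordinate/footprint correspondence
}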
The user's footprint information in the present disclosure may be displayed to browsing users (e.g., all or some browsing users) of the three-dimensional space scene. The browsing users of the three-dimensional space scene may include the user who set the footprint information.
By using the first pixel point in the current visual panorama of a user who needs to set footprint information to obtain the three-dimensional model corresponding to that pixel point and the position of the footprint information in the model, the present disclosure can associate the footprint information set by the user with the corresponding position of the corresponding three-dimensional model. Thus, when the three-dimensional space scene is formed from the panorama based on a user's current viewing angle, the user's footprint information can be presented at the appropriate position in the scene, so the user can express feelings about a specific part of the scene and have them accurately presented at the corresponding position. The technical solution provided by the present disclosure therefore realizes interaction between the user and the three-dimensional space scene, helps improve the user's sense of participation and immersion as well as the time the user stays in the scene, and allows the footprint information left by the user to bring a richer VR panorama experience to at least one browsing user of the scene, ultimately enhancing the commercial value of the three-dimensional space scene.
In one optional example, the footprint information in the present disclosure includes at least one of text, pictures, audio, video, and three-dimensional models. Text may be regarded as a message in character form (e.g., words, letters, numbers, or symbols). A picture may be regarded as a message in image form (such as a photograph or an emoticon). Audio may be regarded as a message in sound form (which may also be called a voice message). Video may be regarded as a message in video form, and a three-dimensional model as a message in stereoscopic form. The user's footprint information may therefore also be called the user's message, and one piece of footprint information set by the user may include one or more of text, pictures, audio, video, and three-dimensional models. By letting footprint information include at least one of these forms, the present disclosure enriches both the representation of the user's footprint information and the ways in which the user can interact with the three-dimensional space scene.
In an optional example, one implementation of acquiring the first pixel point in the current visual panorama corresponding to the user's current viewing angle in the three-dimensional space scene may be: acquiring the center pixel point of the current visual panorama and taking it as the first pixel point. For example, if the user triggers the function of setting footprint information by clicking a button or a menu option at the current viewing angle, the present disclosure may directly take the center pixel point of the current visual panorama as the first pixel point. The center pixel point can be regarded as a default pixel point set by the present disclosure for the user's footprint information, and the user can change it by dragging and the like. In one example, the center pixel point may be considered a pixel point in the central area of the current visual panorama, and that central area may include one pixel point or several. Directly taking the center pixel point of the current visual panorama as the first pixel point not only helps obtain the first pixel point quickly, but also helps place the footprint information set by the user at a relatively prominent position in the current visual panorama.
In an optional example, another implementation manner of the present disclosure to acquire a first pixel point in a current visual panorama corresponding to a current viewing angle of a user in a three-dimensional space scene may be: according to the operation that a user sets a footprint information target position in a current visual panorama corresponding to a current visual angle in a three-dimensional space scene, a pixel point in the current visual panorama corresponding to the footprint information target position is obtained and is used as a first pixel point. That is to say, in the case where the user performs the operation of setting the footprint information target position, the present disclosure may take, as the first pixel point, a pixel point at which the footprint information target position formed in the current visual panorama by the operation is located.
Optionally, the operation of setting the footprint information target position in the present disclosure may be an operation determining the starting target position of the footprint information, an operation determining its ending target position, or an operation determining its central target position.
Optionally, the operation of setting the footprint information target position may specifically be a click, scroll, or drag operation based on a tool such as a mouse or a keyboard, or a tap or drag operation based on a touch screen. The present disclosure does not limit the specific operation used to set the footprint information target position.
Determining the first pixel point according to the user's operation of setting a footprint information target position helps place the footprint information set by the user at the position the user expects, improves the flexibility of setting footprint information, and helps make the position of the footprint information more appropriate.
Optionally, suppose that while viewing the current visual panorama based on his or her current viewing angle in the three-dimensional space scene, the user triggers the function of setting footprint information by clicking a button or a menu option. The user can then set the desired position of the footprint information in the current visual panorama by clicking the left mouse button, moving the cursor with the up/down/left/right keys of the keyboard, tapping the corresponding position on a touch screen, and the like, and the present disclosure can take the pixel point at that position as the first pixel point.
Optionally, suppose again that the user triggers the function of setting footprint information in the same way; the present disclosure may then take the center pixel point of the current visual panorama as the first pixel point. If the user does not change the first pixel point, the center pixel point serves as the final first pixel point. If the user changes it, for example by dragging with the left mouse button, moving the cursor with the up/down/left/right keys of the keyboard, or dragging a finger on the touch screen, the pixel point at the position resulting from the operation is taken as the first pixel point.
In an alternative example, an implementation of the present disclosure to determine a three-dimensional model corresponding to a first pixel point may be as shown in fig. 3.
In fig. 3, S300, a center pixel point of the current visual panorama corresponding to the user's current viewing angle in the three-dimensional space scene is obtained and taken as the first pixel point.
Optionally, the center pixel in the present disclosure may be regarded as a default pixel set by the present disclosure for the user's footprint information. In one example, assuming that the current visual panorama is an image of (2n +1) × (2m +1) (where n and m are both integers greater than 1), the present disclosure may directly take the pixel point (n +1, m +1) in the current visual panorama as the center pixel point. In another example, assuming that the current visual panorama is a 2n × 2m image (where n and m are both integers greater than 1), the present disclosure may use a pixel point (n, m), a pixel point (n +1, m), a pixel point (n, m +1), and a pixel point (n +1, m +1) in the current visual panorama as a central area of the current visual panorama, so that the present disclosure may use any one of the pixel points in the central area as the central pixel point.
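The center-pixel rule above can be expressed compactly. The TypeScript sketch below handles both the odd (2n+1)×(2m+1) case and the even 2n×2m case by picking one fixed pixel of the 2×2 central area in the latter; this is an illustrative sketch, not the only valid choice.

// Sketch: center pixel of a width x height panorama, using 0-indexed pixels.
type Pixel = { u: number; v: number };

function centerPixel(width: number, height: number): Pixel {
  // Odd dimension: the exact center. Even dimension: one pixel of the
  // central 2x2 area (any of the four would do, per the text above).
  return { u: Math.floor((width - 1) / 2), v: Math.floor((height - 1) / 2) };
}

// Example: centerPixel(2 * 3 + 1, 2 * 2 + 1) returns { u: 3, v: 2 }.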
S301, judging whether a three-dimensional model is set for the first pixel point. If a three-dimensional model is set for the first pixel point, go to S302; if not, go to S303.
Optionally, since not all pixel points in the current visual panorama correspond to points in a three-dimensional model, while the present disclosure needs to set the user's footprint information at a corresponding position in a three-dimensional model, the present disclosure needs to judge whether a three-dimensional model is set for the first pixel point, i.e., whether the first pixel point presents a corresponding point in a three-dimensional model.
S302, taking the three-dimensional model set for the first pixel point as the three-dimensional model corresponding to the first pixel point.
S303, taking a three-dimensional model set for another pixel point in the current visual panorama as the three-dimensional model corresponding to the first pixel point.
Optionally, the other pixel points referred to here are pixel points in the current visual panorama for which a three-dimensional model is set. The present disclosure can search for such pixel points according to a preset rule; in one example, the other pixel point found can be the one closest to the first pixel point in a certain direction (e.g., to the left, right, top, or bottom).
Optionally, the present disclosure may take the first pixel point as the starting point and traverse the pixel points in the current visual panorama corresponding to the current viewing angle in the three-dimensional space scene according to a preset traversal rule; once a pixel point for which a three-dimensional model is set is traversed, the three-dimensional model corresponding to the first pixel point is obtained and the traversal stops. For example, the present disclosure may traverse the pixel points leftward from the first pixel point, judging for each traversed pixel point whether a three-dimensional model is set for it; if so, the traversal stops and the three-dimensional model obtained by the traversal is taken as the three-dimensional model corresponding to the first pixel point. In addition, the first pixel point may be updated with the traversed pixel point for which the three-dimensional model is set, although it need not be.
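The TypeScript sketch below illustrates one such traversal rule: scanning outward from the first pixel point along its row, of which the leftward-only scan above is a special case. The lookup function modelIdAt is hypothetical and stands for whatever per-pixel model association the panorama carries.

// Sketch (assumed traversal rule): expand left/right along the starting row
// until a pixel for which a three-dimensional model is set is found.
type Pixel = { u: number; v: number };

function findModelPixel(
  start: Pixel,
  width: number,
  modelIdAt: (p: Pixel) => string | null // hypothetical per-pixel lookup
): { pixel: Pixel; modelId: string } | null {
  for (let offset = 0; offset < width; offset++) {
    for (const u of [start.u - offset, start.u + offset]) {
      if (u < 0 || u >= width) continue;
      const p: Pixel = { u, v: start.v };
      const modelId = modelIdAt(p);
      if (modelId !== null) {
        return { pixel: p, modelId }; // traversal stops at the first hit
      }
    }
  }
  return null; // no pixel in this row presents a point of any model
}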
By judging whether a three-dimensional model is set for the first pixel point and performing different operations according to the judgment result, the present disclosure avoids the situation in which the user's footprint information cannot be set at the corresponding model position because no three-dimensional model is set for the first pixel point. Further, by using the preset traversal rule to find another pixel point for which a three-dimensional model is set, and taking that model as the three-dimensional model corresponding to the first pixel point, the present disclosure can obtain the three-dimensional model corresponding to the first pixel point quickly.
In an alternative example, an implementation of the present disclosure to determine a three-dimensional model corresponding to a first pixel point may be as shown in fig. 4.
In fig. 4, S400, according to an operation by which the user sets a footprint information target position in the current visual panorama corresponding to the current viewing angle in the three-dimensional space scene, the pixel point in the current visual panorama corresponding to the footprint information target position is obtained and taken as the first pixel point.
Optionally, the present disclosure may allow the user to set the specific position of the footprint information (i.e., the footprint information target position) in the current visual panorama by himself or herself; for example, after triggering the function of setting footprint information, the user may set the footprint information target position in the current visual panorama by clicking, sliding, dragging, and the like on the touch screen. The footprint information target position may be the upper-left, lower-left, upper-right, or lower-right vertex of a text box, or the upper-left, lower-left, upper-right, or lower-right vertex of a picture, and so on. The footprint information target position corresponds to a pixel point in the current visual panorama, and that pixel point is the first pixel point.
S401, judging whether a three-dimensional model is set for the first pixel point. If a three-dimensional model is set for the first pixel point, go to S402; if not, go to S403.
S402, taking the three-dimensional model set for the first pixel point as the three-dimensional model corresponding to the first pixel point.
S403, outputting prompt information for updating the footprint information target position.
Optionally, the prompt information in the present disclosure is used to prompt the user to update the currently set footprint information target position; that is, it prompts the user that footprint information cannot be set at the currently set target position and that the footprint information target position should be reset. The prompt information can be output in the form of text, audio, or graphics. After outputting the prompt information, the present disclosure waits for the user's subsequent operation; if the user then triggers the function of cancelling the setting of footprint information, the flow shown in fig. 4 ends.
S404, when it is detected that the user performs the operation of updating the footprint information target position, taking the pixel point in the current visual panorama corresponding to the updated footprint information target position as the first pixel point. Return to S401.
Optionally, if the user performs the operation of updating the footprint information target position, the present disclosure obtains the footprint information target position again. The newly obtained target position likewise corresponds to a pixel point in the current visual panorama, and that pixel point is the first pixel point; in other words, the newly obtained footprint information target position similarly updates the previously obtained first pixel point.
By judging whether a three-dimensional model is present at the footprint information target position set by the user, and performing different operations according to the result, the present disclosure avoids the situation in which the user's footprint information cannot be placed at a corresponding model position because no three-dimensional model exists at the chosen target position. The loop of S401 to S404 helps ensure that the user's footprint information is ultimately set at a valid position in the three-dimensional model, which makes the placement of the footprint information more appropriate.
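As a reading aid, the loop of S400 to S404 can be sketched as below. The three callables are placeholders for the UI and model-lookup machinery that the disclosure leaves unspecified.

```python
def pick_valid_footprint_pixel(get_user_target, pixel_has_model, notify_user):
    """Sketch of S400-S404: keep asking for a target position until it
    lands on a pixel for which a three-dimensional model is set."""
    first_pixel = get_user_target()           # S400: initial target position
    while first_pixel is not None:            # None models a user cancel
        if pixel_has_model(first_pixel):      # S401: model set for this pixel?
            return first_pixel                # S402: use its model
        notify_user("Footprint information cannot be set here; "
                    "please choose another position.")   # S403: prompt
        first_pixel = get_user_target()       # S404: updated position
    return None                               # user cancelled the setting
```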
In an optional example, when a three-dimensional model is set for the first pixel point, a mapping relationship exists between the first pixel point in the current visual panorama and a point in the three-dimensional model. Based on this mapping, the present disclosure may obtain the point in the three-dimensional model corresponding to the first pixel point; that point is the model position of the first pixel point in the three-dimensional model. The present disclosure may use this model position directly as the model position of the user's footprint information in the three-dimensional model, which makes it possible to obtain that model position quickly and accurately.
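The disclosure does not fix how the pixel-to-model mapping is stored; one plausible encoding is an equirectangular depth map aligned with the panorama, as in the sketch below (the depth-map representation is an assumption).

```python
import numpy as np

def pixel_to_model_position(u, v, depth_map, pano_w, pano_h):
    """Resolve a panorama pixel to a model position by casting the
    pixel's viewing ray out to the depth stored for that pixel."""
    d = float(depth_map[v, u])
    if not np.isfinite(d) or d <= 0.0:
        return None                            # no model behind this pixel
    # Pixel -> longitude/latitude on the unit sphere.
    lon = (u / pano_w - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / pano_h) * np.pi
    # Unit ray from the capture point, scaled by the stored depth.
    direction = np.array([np.cos(lat) * np.sin(lon),
                          np.sin(lat),
                          np.cos(lat) * np.cos(lon)])
    return direction * d                       # position in the capture frame
```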
In one optional example, while a browsing user views a three-dimensional space scene, the present disclosure may present to that user the footprint information left in the scene by at least one user. One example is shown in fig. 5.
In fig. 5, S500, for any browsing user of a three-dimensional space scene, a footprint area corresponding to a current viewing angle of the browsing user in the three-dimensional space scene is determined.
Optionally, the browsing user in the present disclosure includes a user who has set footprint information in the three-dimensional space scene. The footprint area may be regarded as the area from which footprint information is to be displayed. The footprint area may be based on the current visual panorama or based on the three-dimensional model. Its size may be preset, and its shape may be rectangular, circular, triangular, or the like.
Optionally, when the footprint area is based on the current visual panorama, it may be determined as follows: first, the center pixel point of the current visual panorama corresponding to the browsing user's current viewing angle in the three-dimensional space scene is obtained; then, the footprint area in the current visual panorama is determined with that center pixel point as the circle center and a predetermined length as the radius (for example, 1.5 meters in the three-dimensional space scene, converted into a corresponding length in the current visual panorama). Because at least some pixel points in this footprint area have a mapping relationship with points in the three-dimensional model, the footprint area in the current visual panorama makes it convenient to obtain the footprint information that currently needs to be displayed. In addition, the footprint area in the current visual panorama may be regarded as a circle, i.e., it carries no depth information.
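A minimal sketch of the panorama-based footprint area follows; the meters-to-pixels conversion factor is an assumed input, since the text only notes that the 1.5-meter scene length must be converted into a panorama length.

```python
def panorama_footprint_area(pano_w, pano_h, radius_m, meters_per_pixel):
    """Circle around the centre pixel of the current visual panorama.
    Returns the centre, the pixel radius, and a membership test."""
    cx, cy = pano_w // 2, pano_h // 2            # centre pixel
    radius_px = radius_m / meters_per_pixel      # e.g. 1.5 m -> pixels

    def contains(u, v):
        return (u - cx) ** 2 + (v - cy) ** 2 <= radius_px ** 2

    return (cx, cy), radius_px, contains
```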
Optionally, when the footprint area is based on the three-dimensional model, it may be determined as follows: first, the center pixel point of the current visual panorama corresponding to the browsing user's current viewing angle in the three-dimensional space scene is obtained, and it is determined whether a three-dimensional model is set for that pixel point; if so, the model position of the center pixel point in the three-dimensional model is determined, and the footprint area in the three-dimensional model is then determined with that model position as the center and a predetermined length (for example, 1.5 meters in the three-dimensional space scene) as the radius. The footprint area may lie entirely within one three-dimensional model or may span multiple three-dimensional models. In addition, the footprint area in the three-dimensional model may be regarded as a cylinder, i.e., it carries depth information.
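The model-based variant can be sketched the same way; treating the area as a vertical cylinder means ignoring the height axis in the distance test (the y-up convention is an assumption).

```python
def model_footprint_area(center_model_pos, radius_m):
    """Cylindrical footprint area around a model position: membership
    depends only on horizontal distance, so the area has 'depth'."""
    cx, _, cz = center_model_pos                 # assume y is the up axis

    def contains(point):
        px, _, pz = point
        return (px - cx) ** 2 + (pz - cz) ** 2 <= radius_m ** 2

    return contains
```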
S501, the footprint information in the three-dimensional model that belongs to the footprint area is determined.
Optionally, when the footprint area is based on the current visual panorama, the present disclosure may traverse the pixel points in the footprint area and check whether each has a mapping relationship with a point in the three-dimensional model. If such a mapping exists, it is then checked whether the mapped point in the three-dimensional model carries footprint information; if so, that footprint information is regarded as belonging to the footprint area.
Optionally, when the footprint area is based on the three-dimensional model, the present disclosure may traverse the points in the footprint area and check whether each is provided with footprint information; if so, that footprint information is regarded as belonging to the footprint area.
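Both variants of S501 reduce to a filtering pass, sketched below; `store` is an assumed dictionary from model points to lists of footprint records, and `pixel_to_model_point` stands for the pixel-to-model mapping.

```python
def footprints_in_panorama_area(area_pixels, pixel_to_model_point, store):
    """Panorama-based S501: follow each area pixel's mapping into the
    model and collect footprint info stored at the mapped point."""
    found = []
    for px in area_pixels:
        pt = pixel_to_model_point(px)      # None when no mapping exists
        if pt is not None and pt in store:
            found.extend(store[pt])
    return found

def footprints_in_model_area(contains, store):
    """Model-based S501: keep footprints whose model position lies
    inside the (cylindrical) footprint area."""
    return [fp for pt, fps in store.items() if contains(pt) for fp in fps]
```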
S502, the footprint information belonging to the footprint area is displayed in the current visual panorama corresponding to the browsing user's current viewing angle in the three-dimensional space scene.
Optionally, the present disclosure may determine, from the model positions of the footprint information belonging to the footprint area, the corresponding positions of that footprint information in the current visual panorama, and then display each piece of footprint information at its position in the current visual panorama. During display, overlapping of different pieces of footprint information in the current visual panorama should be avoided as far as possible.
Optionally, the multiple pieces of footprint information obtained by the present disclosure may have different model positions or the same model position (i.e., a model position conflict). For multiple pieces of footprint information belonging to the footprint area that have different model positions, each piece may be displayed directly at its image position in the current visual panorama; the displayed pieces may be allowed to partially overlap, or overlap may be prevented through position control. For different pieces of footprint information belonging to the footprint area that share the same model position, different image positions in the current visual panorama may be assigned to them, and they are displayed according to the assigned image positions, thereby avoiding overlapping display of different footprint information in the current visual panorama.
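One way to realize this conflict handling is sketched here; the 30-pixel vertical step and the dictionary layout of a footprint record are illustrative choices, and model positions are assumed to be hashable tuples.

```python
from collections import defaultdict

def assign_image_positions(footprints, model_to_image):
    """Fan out footprints that share a model position so that none of
    them overlap in the current visual panorama."""
    by_model_pos = defaultdict(list)
    for fp in footprints:
        by_model_pos[fp["model_pos"]].append(fp)

    placed = []
    for model_pos, group in by_model_pos.items():
        base_x, base_y = model_to_image(model_pos)   # project to the image
        for i, fp in enumerate(group):
            # Stack conflicting entries vertically; non-conflicting
            # entries (a group of one) keep their projected position.
            placed.append((fp, (base_x, base_y + i * 30)))
    return placed
```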
Optionally, the present disclosure may display all of the footprint information belonging to the footprint area, or only part of it. For example, when the amount of footprint information belonging to the footprint area is too large (e.g., exceeds a predetermined amount), part of it may be selected according to a predetermined rule and displayed in the current visual panorama.
Alternatively, the present disclosure may randomly pick a predetermined amount of footprint information from all the footprint information belonging to the footprint area and display the randomly picked part in the current visual panorama.
Optionally, the present disclosure may preferentially select, from all the footprint information belonging to the footprint area, the footprint information set by the browsing user, high-quality footprint information, and the like, and display the selected part in the current visual panorama.
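The selection rules in the last three paragraphs might combine as follows; treating the viewer's own footprints as the priority class is one of the examples the text gives, and the record layout is assumed.

```python
import random

def select_footprints(candidates, viewer_id, limit):
    """Pick at most `limit` footprints: the browsing user's own entries
    first, then a random sample of the rest."""
    own = [fp for fp in candidates if fp["author"] == viewer_id]
    others = [fp for fp in candidates if fp["author"] != viewer_id]
    chosen = own[:limit]
    remaining = limit - len(chosen)
    if remaining > 0:
        chosen += random.sample(others, min(remaining, len(others)))
    return chosen
```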
In one optional example, the present disclosure may display, for the browsing user and in the form of a bullet screen, footprint information located outside the current visual panorama. For example, the present disclosure may determine all the footprint information in the three-dimensional model that does not belong to the current visual panorama, and display all or part of it, in bullet-screen form, in the current visual panorama corresponding to the browsing user's current viewing angle in the three-dimensional space scene.
In one optional example, the present disclosure may likewise display, in bullet-screen form, footprint information located outside the footprint area. For example, the present disclosure may determine all the footprint information in the three-dimensional model that does not belong to the footprint area, and display all or part of it, in bullet-screen form, in the current visual panorama corresponding to the browsing user's current viewing angle in the three-dimensional space scene.
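A sketch of feeding out-of-view footprint information to a bullet-screen layer follows; the lane count, item cap, and record layout are assumptions, and the scrolling layer itself is taken to exist elsewhere in the rendering stack.

```python
def bullet_screen_items(all_footprints, contains, max_items=20):
    """Collect footprints outside the footprint area and format them
    for a scrolling bullet-screen overlay."""
    outside = [fp for fp in all_footprints
               if not contains(fp["model_pos"])]
    return [{"text": fp["payload"], "lane": i % 5}   # 5 scrolling lanes
            for i, fp in enumerate(outside[:max_items])]
```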
By displaying, in bullet-screen form, the footprint information that does not belong to the footprint area or the current visual panorama, the present disclosure helps the browsing user explore other parts of the three-dimensional space scene, strengthens the browsing user's sense of immersion, and improves the browsing user's VR panoramic experience, thereby increasing the commercial value of the three-dimensional space scene.
Exemplary devices
Fig. 6 is a schematic structural diagram of an embodiment of an apparatus for implementing three-dimensional spatial scene interaction according to the present disclosure. The apparatus of this embodiment may be used to implement the method embodiments of the present disclosure described above.
As shown in fig. 6, the apparatus of the present embodiment includes: a pixel point obtaining module 600, a three-dimensional model determining module 601, a model position determining module 602, and a footprint information setting module 603. In addition, the apparatus of the present disclosure may further include: a footprint area determining module 604, a footprint information determining module 605, a footprint information display module 606, and a bullet screen display module 607.
The pixel point obtaining module 600 is configured to obtain a first pixel point in the current visual panorama corresponding to the user's current viewing angle in the three-dimensional space scene if it is detected that the user needs to set footprint information in the scene.
Optionally, the footprint information in the present disclosure may include: at least one of text, pictures, audio, video, and three-dimensional models.
Optionally, the module 600 for obtaining a pixel point may include: a first sub-module 6001. The first sub-module 6001 is configured to obtain a center pixel point of a current visual panorama corresponding to a current viewing angle of a user in a three-dimensional scene, where the center pixel point is used as a first pixel point.
Optionally, the module 600 for obtaining a pixel point may include: a fifth sub-module 6002. The fifth sub-module 6002 is configured to obtain a pixel point in the current visual panorama corresponding to the footprint information target position according to an operation of setting the footprint information target position in the current visual panorama corresponding to the current viewing angle of the user in the three-dimensional space scene, and the fifth sub-module 6002 may use the pixel point as the first pixel point.
The three-dimensional model determining module 601 is configured to determine the three-dimensional model corresponding to the first pixel point acquired by the pixel point obtaining module 600.
Optionally, in the case that the pixel point obtaining module 600 includes the first sub-module 6001, the three-dimensional model determining module 601 may include: a second submodule 6011, a third submodule 6012, and a fourth submodule 6013. The second submodule 6011 is configured to determine whether a three-dimensional model is set for the first pixel point. If it is, the third submodule 6012 takes that three-dimensional model as the three-dimensional model corresponding to the first pixel point. If it is not, the fourth submodule 6013 takes a three-dimensional model set for another pixel point in the current visual panorama as the three-dimensional model corresponding to the first pixel point. For example, the fourth submodule 6013 may traverse the other pixel points in the current visual panorama according to a preset traversal rule, starting from the first pixel point; when it reaches a pixel point for which a three-dimensional model is set, it updates the first pixel point to that pixel point, obtains the corresponding three-dimensional model, and stops the traversal.
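The disclosure does not fix the preset traversal rule; a breadth-first search outward from the first pixel point, as sketched below, is one natural choice (the horizontal wrap-around reflects an assumed equirectangular panorama).

```python
from collections import deque

def nearest_pixel_with_model(start, pixel_has_model, pano_w, pano_h):
    """Breadth-first traversal from the first pixel point until a pixel
    with a three-dimensional model is found."""
    seen = {start}
    queue = deque([start])
    while queue:
        u, v = queue.popleft()
        if pixel_has_model((u, v)):
            return (u, v)                     # becomes the new first pixel
        for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nu, nv = (u + du) % pano_w, v + dv    # wrap horizontally
            if 0 <= nv < pano_h and (nu, nv) not in seen:
                seen.add((nu, nv))
                queue.append((nu, nv))
    return None                               # no model anywhere in the view
```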
In the case that the pixel point obtaining module 600 includes the fifth sub-module 6002, the three-dimensional model determining module 601 may include: a sixth submodule 6014, a seventh submodule 6015, and an eighth submodule 6016. The sixth submodule 6014 is configured to determine whether a three-dimensional model is set for the first pixel point. If it is, the seventh submodule 6015 takes that three-dimensional model as the three-dimensional model corresponding to the first pixel point. If it is not, the eighth submodule 6016 may output prompt information for updating the footprint information target position; when the sixth submodule 6014 determines that a three-dimensional model is set for the pixel point in the current visual panorama corresponding to the updated target position, that pixel point is used as the first pixel point, and the eighth submodule 6016 obtains its corresponding three-dimensional model.
The model position determining module 602 is configured to determine the model position of the user's footprint information in the three-dimensional model determined by the three-dimensional model determining module 601. For example, the model position determining module 602 may obtain the model position of the first pixel point in the three-dimensional model and use it as the model position of the user's footprint information in the three-dimensional model.
The footprint information setting module 603 is configured to set the user's footprint information at the model position determined by the model position determining module 602. The footprint information set by the footprint information setting module 603 is used for display to browsing users of the three-dimensional space scene.
The footprint area determining module 604 is configured to determine, for any browsing user of the three-dimensional space scene, a footprint area corresponding to the browsing user's current viewing angle in the scene. For example, the footprint area determining module 604 may first obtain the center pixel point of the current visual panorama corresponding to the browsing user's current viewing angle, and then determine the footprint area in the current visual panorama with that center pixel point as the circle center and a predetermined length as the radius.
The footprint information determining module 605 is configured to determine the footprint information in the three-dimensional model that belongs to the footprint area determined by the footprint area determining module 604.
The footprint information display module 606 is configured to display the footprint information, determined by the footprint information determining module 605 as belonging to the footprint area, in the current visual panorama corresponding to the browsing user's current viewing angle in the three-dimensional space scene.
Optionally, for multiple pieces of footprint information belonging to the footprint area that have different model positions, the footprint information display module 606 may display each piece in the current visual panorama at its respective image position.
Optionally, for different pieces of footprint information belonging to the footprint area that share the same model position, the footprint information display module 606 may assign them different image positions in the current visual panorama and display them according to the assigned image positions.
The bullet screen display module 607 is configured to determine at least one piece of footprint information in the three-dimensional model that does not belong to the footprint area or the current visual panorama, and to display it, in bullet-screen form, in the current visual panorama corresponding to the browsing user's current viewing angle in the three-dimensional space scene.
The operations specifically performed by the above modules and their sub-modules are described in the method embodiments with reference to figs. 2 to 5 and are not repeated here.
Exemplary electronic device
An electronic device according to an embodiment of the present disclosure is described below with reference to fig. 7. FIG. 7 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure. As shown in fig. 7, the electronic device 71 includes one or more processors 711 and memory 712.
The processor 711 may be a Central Processing Unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 71 to perform desired functions.
Memory 712 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory, for example, may include: random Access Memory (RAM) and/or cache memory (cache), etc. The nonvolatile memory, for example, may include: read Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 711 to implement the methods for implementing three-dimensional spatial scene interaction of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 71 may further include an input device 713 and an output device 714, among other components, interconnected by a bus system and/or another form of connection mechanism (not shown). The input device 713 may include, for example, a keyboard, a mouse, and the like. The output device 714 can output various information to the outside and may include, for example, a display, speakers, a printer, and a communication network with its connected remote output devices.
Of course, for simplicity, only some of the components of the electronic device 71 relevant to the present disclosure are shown in fig. 7, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device 71 may include any other suitable components, depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method for enabling three-dimensional spatial scene interaction according to various embodiments of the present disclosure described in the "exemplary methods" section of this specification above.
The computer program product may write program code for carrying out operations for embodiments of the present disclosure in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, cause the processor to perform the steps in the method for enabling three-dimensional spatial scene interaction according to various embodiments of the present disclosure described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium may include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including but not limited to" and are used interchangeably therewith. The words "or" and "and" as used herein mean, and are used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects, and the like, will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A method for enabling three-dimensional spatial scene interaction, comprising:
if it is detected that a user needs to set footprint information in a three-dimensional space scene, acquiring a first pixel point in a current visual panoramic image corresponding to a current visual angle of the user in the three-dimensional space scene;
determining a three-dimensional model corresponding to the first pixel point;
determining a model position of the footprint information of the user in the three-dimensional model;
setting footprint information of the user at the model location;
and the footprint information of the user is used for being displayed to a browsing user of the three-dimensional space scene.
2. The method of claim 1, wherein the footprint information comprises:
at least one of text, pictures, audio, video, and three-dimensional models.
3. The method according to claim 1 or 2, wherein the obtaining a first pixel point in a current visual panorama corresponding to a current view angle of the user in the three-dimensional space scene comprises:
and acquiring a central pixel point of the current visual panoramic image corresponding to the current visual angle of the user in the three-dimensional space scene, wherein the central pixel point is used as a first pixel point.
4. The method of claim 3, wherein said determining the three-dimensional model corresponding to the first pixel point comprises:
judging whether a three-dimensional model is set for the first pixel point;
if a three-dimensional model is set for the first pixel point, taking the three-dimensional model set for the first pixel point as the three-dimensional model corresponding to the first pixel point;
and if the three-dimensional model is not set for the first pixel point, taking the three-dimensional model set for other pixel points in the current visual panoramic image as the three-dimensional model corresponding to the first pixel point.
5. An apparatus for enabling three-dimensional spatial scene interaction, wherein the apparatus comprises:
the pixel point obtaining module is used for obtaining a first pixel point in a current visual panoramic image corresponding to a current visual angle of a user in a three-dimensional space scene if it is detected that the user needs to set footprint information in the three-dimensional space scene;
a three-dimensional model determining module, configured to determine a three-dimensional model corresponding to the first pixel point;
a model position determining module for determining a model position of the footprint information of the user in the three-dimensional model;
a footprint information setting module for setting footprint information of the user at the model location;
and the footprint information of the user is used for being displayed to a browsing user of the three-dimensional space scene.
6. The apparatus of claim 5, wherein the footprint information comprises:
at least one of text, pictures, audio, video, and three-dimensional models.
7. The apparatus of claim 5 or 6, wherein the means for obtaining pixel points comprises:
the first sub-module is used for acquiring a center pixel point of a current visual panoramic image corresponding to a current visual angle of the user in the three-dimensional space scene, and the center pixel point is used as a first pixel point.
8. The apparatus of claim 7, wherein the determine three-dimensional model module comprises:
the second submodule is used for judging whether a three-dimensional model is set for the first pixel point;
a third sub-module, configured to, if the determination result of the second sub-module is that a three-dimensional model is set for the first pixel point, use the three-dimensional model set for the first pixel point as the three-dimensional model corresponding to the first pixel point;
and a fourth sub-module, configured to, if the determination result of the second sub-module is that no three-dimensional model is set for the first pixel point, use a three-dimensional model set for another pixel point in the current visual panorama as the three-dimensional model corresponding to the first pixel point.
9. A computer-readable storage medium, the storage medium storing a computer program for performing the method of any one of claims 1-4.
10. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the method of any one of claims 1-4.
CN202010401813.7A 2020-05-13 2020-05-13 Method, device and equipment for realizing three-dimensional space scene interaction Active CN111562845B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010401813.7A CN111562845B (en) 2020-05-13 2020-05-13 Method, device and equipment for realizing three-dimensional space scene interaction
PCT/CN2021/093628 WO2021228200A1 (en) 2020-05-13 2021-05-13 Method for realizing interaction in three-dimensional space scene, apparatus and device

Publications (2)

Publication Number Publication Date
CN111562845A true CN111562845A (en) 2020-08-21
CN111562845B CN111562845B (en) 2022-12-27

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190138085A1 (en) * 2017-06-20 2019-05-09 Nokia Technologies Oy Provision of Virtual Reality Content
CN108897468A (en) * 2018-05-30 2018-11-27 链家网(北京)科技有限公司 A kind of method and system of the virtual three-dimensional space panorama into the source of houses
CN109903129A (en) * 2019-02-18 2019-06-18 北京三快在线科技有限公司 Augmented reality display methods and device, electronic equipment, storage medium
CN110531847A (en) * 2019-07-26 2019-12-03 中国人民解放军军事科学院国防科技创新研究院 A kind of novel social contact method and system based on augmented reality

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340598A (en) * 2020-03-20 2020-06-26 北京爱笔科技有限公司 Method and device for adding interactive label
CN111340598B (en) * 2020-03-20 2024-01-16 北京爱笔科技有限公司 Method and device for adding interactive labels
WO2021228200A1 (en) * 2020-05-13 2021-11-18 贝壳技术有限公司 Method for realizing interaction in three-dimensional space scene, apparatus and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201026

Address after: 100085 Floor 102-1, Building No. 35, Xierqi West Road, Haidian District, Beijing

Applicant after: Seashell Housing (Beijing) Technology Co.,Ltd.

Address before: 300457, Unit 5, Room 1, 112, Room 1, Office Building C, Nangang Industrial Zone, Binhai New Area Economic and Technological Development Zone, Tianjin

Applicant before: BEIKE TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20220325

Address after: 100085 8th floor, building 1, Hongyuan Shouzhu building, Shangdi 6th Street, Haidian District, Beijing

Applicant after: As you can see (Beijing) Technology Co.,Ltd.

Address before: 100085 Floor 101 102-1, No. 35 Building, Yard No. 2, Xierqi West Road, Haidian District, Beijing

Applicant before: Seashell Housing (Beijing) Technology Co.,Ltd.

GR01 Patent grant