CN114967914A - Virtual display method, device, equipment and storage medium - Google Patents

Virtual display method, device, equipment and storage medium

Info

Publication number
CN114967914A
CN114967914A
Authority
CN
China
Prior art keywords
virtual
display
virtual space
dimensional model
displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210524846.XA
Other languages
Chinese (zh)
Inventor
马骞女
揭志伟
孙红亮
朱赟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202210524846.XA priority Critical patent/CN114967914A/en
Publication of CN114967914A publication Critical patent/CN114967914A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/003 Navigation within 3D models or images
    • G06T 19/006 Mixed reality
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The application discloses a virtual display method, apparatus, device, and storage medium. The virtual display method includes: determining a navigation theme in response to an operation instruction of a user; acquiring position data of a display device, and determining a virtual space from the navigation theme based on the position data; and displaying target content of a three-dimensional model in the virtual space based on a preset interaction rule. Through this method, the virtual space is determined from the navigation theme according to the position data of the display device, and the target content of the three-dimensional model in the virtual space is displayed.

Description

Virtual display method, device, equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a virtual display method, apparatus, device, and storage medium.
Background
Augmented Reality (AR) technology skillfully fuses virtual information with the real world. It draws on a wide range of technical means, such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing. Computer-generated virtual information such as text, images, three-dimensional models, music, and video is simulated and then applied to the real world, so that the two kinds of information complement each other and the real world is thereby augmented.
Augmented reality technology has been developing for many years, and it is expected that one day it can be used in daily life and work, bringing convenience to daily life, improving work efficiency, and so on.
However, with the rapid development of the cultural tourism industry, more and more user groups visit exhibitions, museums, and the like, which contain many viewing points. At these viewing points, cultural features are usually presented to visitors through manual interpretation by a tour guide, voice interpretation triggered by scanning a WeChat code, or a handheld audio guide. Such viewing methods lack interactivity, and the interpretation content is not comprehensive, so the expected exhibition effect is difficult to achieve.
Disclosure of Invention
In order to solve the above problems in the prior art, the present application provides a virtual display method, apparatus, device, and storage medium.
In order to solve the technical problems in the prior art, the present application provides a virtual display method, including: determining a navigation theme in response to an operation instruction of a user; acquiring position data of a display device, and determining a virtual space from the navigation theme based on the position data; and displaying target content of a three-dimensional model in the virtual space based on a preset interaction rule.
Therefore, according to the position data of the display equipment, the virtual space is determined from the navigation theme, and the target content of the three-dimensional model in the virtual space is displayed.
In an embodiment, acquiring the position data of the display device and determining the virtual space from the navigation theme based on the position data includes: acquiring a real-time image and determining position data of a target object in the real-time image; determining a relative positional relationship between the target object and the display device based on the position data of the target object and the position data of the display device; and determining the virtual space based on the relative positional relationship.
Therefore, the virtual space is determined by determining the relative position relationship between the target object and the display device, the virtual space desired by the user can be determined more accurately, and the convenience of the user in the use process is improved.
In an embodiment, displaying the target content of the three-dimensional model in the virtual space based on the preset interaction rule includes: selecting a plurality of target objects in the real-time image and calculating the distance between each target object and the display device; searching the virtual space for the three-dimensional models corresponding to the target objects; and determining the order in which the target contents corresponding to the three-dimensional models are displayed according to the distances between the target objects and the display device.
Therefore, the sequence of the target contents of the corresponding three-dimensional model is determined according to the actual distance between the target object and the display equipment, so that the target contents can be automatically displayed, the display effect is improved, and the user experience of the user in the browsing process is improved.
In an embodiment, the displaying target content of the three-dimensional model in the virtual space based on the preset interaction rule includes: acquiring a three-dimensional model of the target content to be displayed in the virtual space; displaying the identification information corresponding to each three-dimensional model in a play list; and selecting the identification information from the play list, and displaying the target content of the corresponding three-dimensional model based on the selected identification information.
Therefore, the identification information corresponding to the three-dimensional models is presented to the user in the form of a playlist, making it convenient for the user to view the target content that can be displayed and improving ease of use; meanwhile, only the target content of the three-dimensional model corresponding to the selected identification information needs to be displayed, which can improve the display effect of the target content.
In an embodiment, the displaying, in the playlist, the identification information corresponding to each of the three-dimensional models includes: acquiring the selected display times of the three-dimensional models from a database; and comparing the display times of each three-dimensional model, sequencing the identification information of the three-dimensional models based on the comparison result, and displaying the identification information in the playlist.
Therefore, the display times are used as the basis for forming the playlist, so that the user can conveniently look up the identification information of the three-dimensional model, and the experience effect of the user is improved.
In one embodiment, the method includes: searching the navigation theme for a virtual preview space of the next point based on the relative positional relationship; and displaying prompt information for the virtual preview space of the next point.
Therefore, the prompt information reminds the user of the virtual preview space of the next point; the virtual preview space can serve a local navigation role and also lets the user preview the layout of the attractions in the navigation venue in advance.
In one embodiment, the method comprises: creating a first display window and a second display window based on the virtual space and the virtual preview space; and displaying the virtual space by using the first display window, and displaying the virtual preview space by using the second display window.
Therefore, the preview effect of the virtual preview space is achieved through the second display window, and interactivity and interestingness in the navigation process are improved.
To solve the technical problems in the prior art, the present application provides a virtual display apparatus, including: the system comprises a determining module, an obtaining module and a display module, wherein the determining module is used for responding to an operation instruction of a user and determining a navigation theme; the acquisition module is used for acquiring position data of the display equipment and determining a virtual space from the navigation theme based on the position data; the display module is used for displaying the target content of the three-dimensional model in the virtual space based on a preset interaction rule.
In order to solve the technical problems in the prior art, the present application provides a virtual display device, including: a processor and a memory, wherein the memory stores a computer program, and the processor is configured to execute the computer program to implement the method described above.
To solve the technical problems in the prior art, the present application provides a computer-readable storage medium storing program instructions, which when executed by a processor implement the above-mentioned method.
Compared with the prior art, the virtual display method of the present application determines a navigation theme in response to an operation instruction of a user; acquires position data of the display device and determines a virtual space from the navigation theme based on the position data; and displays target content of the three-dimensional model in the virtual space based on a preset interaction rule. Through this method, the virtual space is determined from the navigation theme according to the position data of the display device, and the target content of the three-dimensional model in the virtual space is displayed.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart illustrating a virtual display method according to an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating an embodiment of step S102 in FIG. 1;
FIG. 3 is a flowchart illustrating an embodiment of step S103 in FIG. 1;
FIG. 4 is a flowchart illustrating an embodiment of step S103 in FIG. 1;
FIG. 5 is a flowchart illustrating an embodiment of step S402 in FIG. 4;
FIG. 6 is a flowchart illustrating an embodiment of a virtual display method provided by the present application;
FIG. 7 is a schematic structural diagram of a virtual display apparatus according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an embodiment of a virtual display apparatus provided in the present application;
FIG. 9 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be noted that the following examples are only illustrative of the present application, and do not limit the scope of the present application. Likewise, the following examples are only some examples of the present application, not all examples, and all other examples obtained by a person of ordinary skill in the art without making any creative effort fall within the protection scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Throughout the description of the present application, it is intended that the terms "mounted," "disposed," "connected," and "connected" be construed broadly and encompass, for example, fixed connections, removable connections, or integral connections unless expressly stated or limited otherwise; can be mechanically connected or electrically connected; they may be directly connected or may be connected via an intermediate medium. To one of ordinary skill in the art, the foregoing may be combined in any suitable manner with the specific meaning ascribed to the present application.
The present disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment and applying various vision-related algorithms to detect or identify its relevant features, states, and attributes, an AR effect combining the virtual and the real, matched to the specific application, is obtained. For example, the target object may be a face, limb, gesture, or action associated with a human body, or a marker associated with an object, or a sand table, display area, or display item associated with a venue or place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key-point extraction and tracking, and pose or depth detection of objects. The specific application may involve interactive scenarios related to real scenes or articles, such as navigation, explanation, reconstruction, and superimposed display of virtual effects, as well as special-effect processing related to people, such as makeup beautification, body beautification, special-effect display, and virtual model display.
The detection or identification processing of relevant characteristics, states and attributes of the target object can be realized through the convolutional neural network. The convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
Based on the above technical basis, the present application provides a virtual display method, and referring to fig. 1, fig. 1 is a schematic flow diagram of an embodiment of the virtual display method provided by the present application. Specifically, the following steps S101 to S103 may be included:
step S101: and determining the navigation subject in response to the operation instruction of the user.
The operation instruction of the user may include an operation instruction directed at a displayed model (for example, a musical instrument model); the operation instruction may be, for example, a single click, double click, drag, or slide on the model. Furthermore, different operation instructions can be distinguished according to the direction in which the user slides. The operation instruction may also be a voice operation: for example, when preset voice information is recognized, the user is considered to have input the operation instruction. Operation instructions can thus be input in different ways, making the whole interaction process convenient and fast.
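As an illustrative sketch (not part of the patent text), resolving an operation instruction to a navigation theme can be modeled as a small dispatcher. The gesture names, the slide-direction mapping, and the voice keyword table below are all assumptions introduced for illustration:

```python
# Hypothetical dispatcher from a user's operation instruction to a navigation
# theme. Gesture kinds, direction mapping, and voice keywords are invented.

VOICE_KEYWORDS = {"history": "theme_history", "interact": "theme_interactive"}

def resolve_theme(instruction):
    """Return a theme id for an operation instruction, or None if unrecognized."""
    kind = instruction.get("kind")
    if kind in ("single_click", "double_click", "drag"):
        # a gesture on a displayed model selects the theme bound to that model
        return instruction.get("target_theme")
    if kind == "slide":
        # different slide directions yield different instructions (assumption)
        return {"left": "theme_history",
                "right": "theme_interactive"}.get(instruction.get("direction"))
    if kind == "voice":
        # preset voice information: any known keyword in the utterance
        for word, theme in VOICE_KEYWORDS.items():
            if word in instruction.get("text", ""):
                return theme
    return None
```

In a real system the instruction dictionaries would come from the display device's touch and speech-recognition layers; here they are plain inputs so the dispatch logic stands alone.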
A navigation theme includes a space model, material content to be displayed, and the like. After the space model is created, material content can be placed in it using an editing application. The space model of the navigation theme can be obtained by reconstructing a dense spatial model with a visual recognition algorithm and a high-precision environment reconstruction algorithm. Specifically, pictures of a target space (such as a museum or exhibition hall) taken from different shooting angles can be acquired; image features of the space are then identified from the acquired pictures, and the virtual space is reconstructed using the visual recognition algorithm and the high-precision environment reconstruction algorithm to obtain the space model of the navigation theme. The number of captured images is greater than 1, for example 10, 20, or 50; the more images there are, the more accurate the constructed space model is, but the longer its construction takes.
There can be multiple types of material content, and one type of material content together with the constructed space model forms a navigation theme. That is, a variety of navigation themes can be formed by adding different material content to the same space model. The various navigation themes differ in emphasis: for example, a first navigation theme focuses on explaining the historical sources of the cultural relics in the exhibition hall; a second navigation theme focuses on ways of interacting with the cultural relics in the exhibition hall; and a third navigation theme combines the first with the second.
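The relationship described above, in which one space model combined with different material content yields different navigation themes, can be sketched as a small data structure; all names below are illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class GuideTheme:
    name: str
    space_model: str                      # id of the shared reconstructed space model
    materials: list = field(default_factory=list)

SPACE = "exhibition_hall_model"           # one space model, reconstructed once

themes = [
    GuideTheme("historical sources", SPACE, ["relic history narration"]),
    GuideTheme("interaction", SPACE, ["relic interaction effects"]),
    # the third theme combines the material of the first two
    GuideTheme("combined", SPACE,
               ["relic history narration", "relic interaction effects"]),
]
```

The point of the sketch is only structural: every theme references the same `space_model`, and themes differ solely in the `materials` placed into it.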
When an operation instruction of the user is received, the navigation theme that the user wants to use is selected based on the operation instruction.
Step S102: position data of the display device is acquired, and a virtual space is determined from the navigation topic based on the position data.
The display device may be any electronic device capable of supporting the AR function, including but not limited to AR glasses, a tablet computer, a smartphone, and the like. The AR effect presented by the display device can be understood as displaying, in the display device, a virtual object merged into a real scene. The presentation content of the virtual object may be rendered directly and merged with the real scene, for example presenting a group of virtual cultural relics so that they appear to be real cultural relics placed in the real scene; alternatively, a merged display picture may be presented after fusing the presentation content of the virtual object with a picture of the real scene. In this embodiment, the display device may be a mobile terminal or a similar display device.
The display device is typically carried by the user, so the user's location can be determined by determining the location of the display device. The position data of the display device can be acquired by receiving positioning data transmitted by a positioning apparatus, which may be a satellite positioning apparatus, an inertial positioning apparatus, or a combination of the two; the satellite positioning apparatus may be based on the Global Positioning System, the Galileo satellite positioning system, the GLONASS satellite positioning system, or the BeiDou positioning system.
The interior space of venues such as exhibition halls and museums is usually complex, and many cultural relics are on display, so a navigation theme formed for such a venue is usually composed of a plurality of virtual spaces. After the user selects the navigation theme, the user's spatial position needs to be determined from the positioning data, and then the virtual space corresponding to that position is determined within the navigation theme.
Step S103: and displaying target content of the three-dimensional model in the virtual space based on a preset interaction rule.
The preset interaction rule may include automatically displaying the target content once the virtual space is determined, displaying the target content after the user issues an operation instruction, and the like; the specific interaction rule may be determined according to the actual situation. The three-dimensional model in the virtual space may include a three-dimensional model of a cultural relic; illustratively, the cultural relics may include famous paintings, calligraphy, inscriptions, seals, porcelain, carvings, jade, musical instruments, lacquerware, silk embroidery, the four treasures of the study, bronze ware, imperial court artifacts, foreign clocks, and the like. The target content may include an interpretation of the historical source of the corresponding cultural relic, the music of a musical instrument, a presentation of a famous painting, and so on.
In the embodiment, the virtual space can be determined from the navigation theme according to the position data of the display device, and the target content of the three-dimensional model in the virtual space is displayed.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating an embodiment of step S102 in fig. 1. Specifically, the following steps S201 to S203 may be included:
step S201: and acquiring a real-time image, and determining the position data of the target object in the real-time image.
A camera sensor in the display device can be used to photograph the real scene to obtain a real-time image; alternatively, the real scene can be scanned by the display device to obtain a real-time image, or a frame can be selected from a video as the real-time image. The target object may be a cultural relic or another indicator; for example, it may be a famous painting, calligraphy, an inscription, a seal, a sign, or a navigation board. The position of the target object can be expressed relative to a landmark: for example, when the target object is a cultural relic, its position data can be given by the floor of the exhibition hall, the room number on that floor, and the like.
Step S202: based on the position data of the subject matter and the position data of the display device, the relative positional relationship of the subject matter and the display device is determined.
After the position data of the target object and the position data of the display device are determined, the relative positional relationship between the target object and the display device, that is, between the target object and the user, can be determined. The relative positional relationship may be the position of the target object with respect to the user; for example, the target object may be located in the direction the user's face is turned toward.
Step S203: the virtual space is determined based on the relative positional relationship.
In the embodiment, a real-time image can be obtained by shooting a real scene through a camera sensor in the display equipment; or scanning a real scene through a display device to acquire a real-time image. For example, when the display device is a mobile terminal, the camera for acquiring the real-time image may be a front camera of the mobile terminal, and the direction of the image acquired by the front camera is generally the direction in which the face of the user faces, that is, the relative position relationship between the target object and the display device is the direction in which the target object is located. After the relative position relationship is determined, a virtual space can be determined, and the above-mentioned subject matter exists in the virtual space, for example, the determined virtual space may be a virtual space corresponding to a space that a user is at a current position and can see through human eyes.
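As an illustrative sketch (not from the patent), the relative positional relationship, that is, whether a target object lies in the direction the user's face is turned toward, can be reduced to a bearing computation on 2-D floor-plan coordinates. The coordinate convention (0 degrees faces the +y axis) and the field-of-view value are assumptions:

```python
import math

def relative_bearing(device_pos, facing_deg, target_pos):
    """Angle (degrees) of the target object relative to the direction the
    user's face is turned toward. 0 means straight ahead; the result lies
    in [-180, 180). Coordinates are a 2-D floor plan (assumption)."""
    dx = target_pos[0] - device_pos[0]
    dy = target_pos[1] - device_pos[1]
    target_deg = math.degrees(math.atan2(dx, dy))
    return (target_deg - facing_deg + 180.0) % 360.0 - 180.0

def faces_target(device_pos, facing_deg, target_pos, fov_deg=60.0):
    """True if the target object falls within the camera's field of view."""
    return abs(relative_bearing(device_pos, facing_deg, target_pos)) <= fov_deg / 2.0
```

A virtual space could then be selected as the one containing the target objects for which `faces_target` is true, matching the idea that the determined virtual space corresponds to what the user can currently see.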
The virtual space is determined by determining the relative position relationship between the target object and the display device, so that the virtual space desired by the user can be more accurately determined, and the convenience of the user in the use process is improved.
Referring to fig. 3, fig. 3 is a schematic flowchart of an embodiment of step S103 in fig. 1. Specifically, the following steps S301 to S303 may be included:
step S301: and selecting a plurality of target objects in the real-time image, and respectively calculating the distance between the plurality of target objects and the display equipment.
A camera sensor in the display device can be used to photograph the real scene to obtain a real-time image, or the real scene can be scanned by the display device to obtain one. The real-time image acquired by the display device generally contains a plurality of target objects; illustratively, the target objects include a famous painting, calligraphy, and an inscription. The distance between each target object and the display device can be calculated separately: illustratively, the distance between the famous painting and the display device is a first distance, the distance between the calligraphy and the display device is a second distance, and the distance between the inscription and the display device is a third distance.
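The per-target distance computation can be sketched as plain Euclidean distance over floor-plan coordinates; all positions below are invented for illustration:

```python
import math

device = (0.0, 0.0)                      # position of the display device
targets = {                              # illustrative target-object positions
    "famous painting": (3.0, 4.0),
    "calligraphy": (6.0, 8.0),
    "inscription": (5.0, 12.0),
}

# distance between each target object and the display device
spacings = {name: math.dist(device, pos) for name, pos in targets.items()}
```

In practice the positions would come from the positioning data and the real-time image rather than hand-written tuples; the dictionary of spacings is what the ordering step below consumes.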
Step S302: a plurality of three-dimensional models corresponding to the target object are searched in the virtual space respectively.
After the target objects are selected from the real-time image, the corresponding three-dimensional models can be selected from the virtual space. For example, when the target objects are the famous painting, the calligraphy, and the inscription, the three-dimensional model corresponding to each of them needs to be searched for in the virtual space. The search may proceed by acquiring image data of the target objects from the real-time image and then matching the image data against the three-dimensional models to find the successfully matched models. Whether a match succeeds can be determined by the matching degree between the image data and the three-dimensional model; illustratively, the matching degree may be required to reach 90%, 85%, or another value.
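A minimal sketch of this matching step, assuming feature vectors as stand-ins for the image data and using the 90% matching-degree threshold from the text; the cosine similarity here is a toy substitute for a real visual matching algorithm:

```python
def similarity(a, b):
    """Toy cosine similarity between two equal-length feature vectors.
    A real system would use a visual matching algorithm instead."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0

def match_model(image_feature, model_features, threshold=0.90):
    """Return the id of the best-matching three-dimensional model if its
    matching degree reaches the threshold, otherwise None."""
    best_id, best_score = None, 0.0
    for model_id, feature in model_features.items():
        score = similarity(image_feature, feature)
        if score > best_score:
            best_id, best_score = model_id, score
    return best_id if best_score >= threshold else None
```

The threshold parameter corresponds directly to the "matching degree of 90%, 85%, or another value" described above; everything else (feature vectors, model ids) is an illustrative assumption.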
Step S303: and determining the sequence of displaying the target contents corresponding to the three-dimensional models according to the distance between the plurality of target objects and the display equipment.
Each three-dimensional model corresponds to target content, and the order in which the target contents are displayed can be determined from the distance between each target object and the display device. For example, the target content of the three-dimensional model with the smallest distance can be displayed first and the remaining target contents displayed in order of increasing distance; conversely, the target content of the three-dimensional model with the largest distance can be displayed first and the remaining target contents displayed in order of decreasing distance.
Illustratively, the target objects are a famous painting, calligraphy, and an inscription; the famous painting is spaced from the display device by a first distance, the calligraphy by a second distance, and the inscription by a third distance, where the first distance is smaller than the second and the second is smaller than the third. When the target content of the three-dimensional model with the smallest spacing is to be displayed first and the remaining target contents displayed in order of increasing spacing, the target content of the famous painting's three-dimensional model is displayed first, then that of the calligraphy's three-dimensional model, and finally that of the inscription's three-dimensional model.
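The nearest-first (or farthest-first) ordering described above is just a sort over the spacing values; a minimal sketch with invented distances:

```python
# invented target-to-device spacings for the three example relics
spacings = {"famous painting": 2.0, "calligraphy": 5.0, "inscription": 9.0}

def display_order(spacings, nearest_first=True):
    """Order in which the target contents of the corresponding
    three-dimensional models are displayed, by target-to-device spacing."""
    return sorted(spacings, key=spacings.get, reverse=not nearest_first)
```

The `nearest_first` flag covers both rules the text mentions: increasing-distance display and its decreasing-distance converse.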
The sequence of the target contents of the corresponding three-dimensional model is determined according to the actual distance between the target object and the display equipment, so that the target contents can be automatically displayed, the display effect is improved, and the user experience of the user in the browsing process is improved.
Referring to fig. 4, fig. 4 is a schematic flowchart of an embodiment of step S103 in fig. 1. Specifically, the following steps S401 to S403 may be included:
step S401: and acquiring a three-dimensional model of target content to be displayed in the virtual space.
A plurality of target contents to be displayed exist in the virtual space, and the three-dimensional models of the target content to be displayed can be determined by acquiring an operation instruction from the user. Illustratively, the virtual space contains three-dimensional models of famous paintings, calligraphy, inscriptions, seals and musical instruments; after the user's operation instruction is received, the three-dimensional models of the famous paintings, calligraphy and musical instruments are selected as the three-dimensional models of the target content to be displayed.
Step S402: and displaying the identification information corresponding to each three-dimensional model in a play list.
Each three-dimensional model corresponds to different identification information, through which the corresponding three-dimensional model can be uniquely determined. The playlist may be displayed as a small window on the display device, and its shape is not limited; it may, for example, be a circle, a box, and the like. The identification information is displayed in the playlist in sequence.
Illustratively, the three-dimensional models of the target content that needs to be displayed may include a three-dimensional model of a famous painting, a three-dimensional model of a calligraphy work, a three-dimensional model of a musical instrument, and the like. The identification information of the three-dimensional model of the famous painting may be famous painting A, that of the calligraphy may be calligraphy B, and that of the musical instrument may be musical instrument C. Famous painting A, calligraphy B and musical instrument C are then displayed in the playlist according to a certain sorting rule; illustratively, they may be sorted by the position in real space of the cultural relic corresponding to each three-dimensional model, by the degree of user attention the corresponding cultural relic receives, and the like.
Step S403: and selecting identification information from the playlist, and displaying the target content of the corresponding three-dimensional model based on the selected identification information.
The identification information can be selected in response to an operation instruction of the user, such as a click or a voice operation; alternatively, the identification information in the playlist can be selected automatically according to a basic top-to-bottom, left-to-right rule. Each piece of identification information corresponds to one three-dimensional model, so after a piece of identification information is selected, the three-dimensional model corresponding to it can be determined, and the target content corresponding to that three-dimensional model is then displayed through the display device.
Presenting the identification information corresponding to the three-dimensional models in a playlist makes it convenient for the user to see which target content can be displayed, improving convenience of use; meanwhile, displaying only the target content of the three-dimensional model corresponding to the selected identification information can improve the display effect of the target content.
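The playlist mechanism of steps S401 to S403 can be illustrated with a minimal sketch. The dictionary mapping and the `select` helper below are hypothetical names; the patent only requires that identification information uniquely determines a three-dimensional model and that selection may be explicit or follow a default rule.

```python
# Identification information → the uniquely corresponding three-dimensional model.
playlist = {
    "famous painting A": "3D model of the famous painting",
    "calligraphy B": "3D model of the calligraphy",
    "musical instrument C": "3D model of the musical instrument",
}

def select(playlist, identifier=None):
    """Return the model for the chosen identifier; with no explicit choice,
    fall back to the first entry (the basic top-to-bottom rule)."""
    if identifier is None:
        identifier = next(iter(playlist))  # automatic default selection
    return playlist[identifier]

print(select(playlist, "calligraphy B"))  # → 3D model of the calligraphy
print(select(playlist))                   # → 3D model of the famous painting
```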
Referring to fig. 5, fig. 5 is a flowchart illustrating an embodiment of step S402 in fig. 4. Specifically, the following steps S501 to S502 may be included:
step S501: and acquiring the display times of the selected three-dimensional models from the database.
After the three-dimensional models of the target content to be displayed in the virtual space are selected, the number of times each of the plurality of three-dimensional models has been selected for display may be obtained from the database. The number of times of display may be defined as the number of times, within a preset time period, a three-dimensional model is selected in response to the user's operation instruction to display its target content. The preset time period may be one week, one month, etc. The user's operation instruction may include a touch instruction such as a single click, a double click, a drag, or a slide, and may also include a voice instruction. Illustratively, when the selected three-dimensional models are those of a famous painting, a calligraphy work and a musical instrument, the numbers of times the three-dimensional models of the famous painting, the calligraphy and the musical instrument were selected for display within one week are obtained as a, b and c respectively.
Step S502: and comparing the display times of each three-dimensional model, sorting the identification information of the three-dimensional models based on the comparison result, and displaying the identification information in a play list.
The display counts of the three-dimensional models are compared. Illustratively, the numbers of times the three-dimensional models of the famous painting, the calligraphy and the musical instrument were selected for display within one week are a, b and c respectively, where a is greater than b and b is greater than c. According to the comparison result, the identification information of the three-dimensional model with more displays can be placed toward the front of the playlist, and the rest displayed in order of decreasing display count: the identification information of the famous painting is displayed first, then that of the calligraphy, and finally that of the musical instrument. In this way, the identification information of the three-dimensional models that were most popular within a given time period appears at the front of the playlist, making it convenient for the user to look up and improving the user experience.
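The comparison-and-sort of steps S501 and S502 reduces to sorting identification information by display count in descending order. A minimal sketch (the function name and the sample counts are illustrative assumptions):

```python
def order_playlist(display_counts):
    """Sort identification information by display count, most-displayed first."""
    return sorted(display_counts, key=display_counts.get, reverse=True)

# a > b > c, as in the example: famous painting most displayed in the past week.
counts = {"musical instrument C": 5, "famous painting A": 30, "calligraphy B": 12}
print(order_playlist(counts))
# → ['famous painting A', 'calligraphy B', 'musical instrument C']
```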
Referring to fig. 6, fig. 6 is a schematic flowchart of an embodiment of a virtual display method provided in the present application. Specifically, the following steps S601 to S607 may be included:
step S601: and determining the navigation subject in response to the operation instruction of the user.
Step S601 is the same as step S101, and is not described herein again.
Step S602: and acquiring a real-time image, and determining the position data of the target object in the real-time image.
Step S602 is the same as step S201, and is not described herein again.
Step S603: based on the position data of the subject matter and the position data of the display device, the relative positional relationship of the subject matter and the display device is determined.
Step S603 is the same as step S202, and is not described herein again.
Step S604: the virtual space is determined based on the relative positional relationship.
Step S604 is the same as step S203, and is not described herein again.
Step S605: and searching the virtual preview space of the next point from the navigation subject based on the relative position relation.
The relative positional relationship may be the positional relationship of the target object with respect to the user, for example, the target object being located in the direction the user's face is oriented. That is, a camera sensor in the display device shoots the real scene to obtain a real-time image, or the display device scans the real scene to acquire a real-time image. Taking a mobile terminal as the display device, the user usually faces its display screen, so the rear camera sensor faces the target object to obtain a real-time image of it; in other words, the user's face also faces the target object. Once the direction of the user's face is known, the position the user may need to go to next can be roughly determined, and the virtual preview space of the next point location is searched for in the navigation subject. The virtual preview space of the next point location may not include the current virtual space; for example, the virtual space may be channel A in a navigation center and the virtual preview space channel B, such that a user standing in channel A who walks in the direction of their face enters channel B.
Step S606: and displaying prompt information of the virtual preview space of the next point.
The prompt information may include a voice prompt, a text prompt, and the like. Illustratively, after the virtual preview space is detected, the user is reminded of the virtual preview space of the next point location through the prompt information, so that the virtual preview space plays a local navigation role and the user can anticipate the layout of the scenic spots in the navigation center in advance.
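The patent does not specify how step S605 searches for the next point location; one plausible realization, sketched below under that assumption, is to compare the user's facing direction (derived from the relative positional relationship to the target object) with direction vectors to the candidate spaces and pick the best-aligned one. All names and vectors here are illustrative.

```python
import math

def next_preview_space(facing, candidates):
    """Pick the candidate space whose direction best matches the facing vector.

    facing: (dx, dy) unit-free direction of the user's face.
    candidates: {name: (dx, dy)} direction vectors toward each candidate space.
    """
    def angle_to(v):
        dot = facing[0] * v[0] + facing[1] * v[1]
        norm = math.hypot(*facing) * math.hypot(*v)
        # Clamp to guard against floating-point drift outside [-1, 1].
        return math.acos(max(-1.0, min(1.0, dot / norm)))
    return min(candidates, key=lambda name: angle_to(candidates[name]))

# The channel A / channel B example: the user faces toward channel B.
spaces = {"channel A": (-1.0, 0.0), "channel B": (1.0, 0.1)}
print(next_preview_space((1.0, 0.0), spaces))  # → channel B
```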
In one embodiment, a virtual display method may include: creating a first display window and a second display window based on the virtual space and the virtual preview space; and displaying the virtual space by using the first display window, and displaying the virtual preview space by using the second display window. Wherein the step may be located before or after step S606.
The first display window and the second display window are both located in the display screen of the display device. Since the user is still at the position of the navigation hall corresponding to the virtual space, the first display window may be larger than the second display window, so that the presence of the second display window does not prevent the user from continuing to browse information in the first display window. In one embodiment, the first display window may occupy the entire display screen of the display device, the second display window covers a part of the first display window, and the second display window may be moved arbitrarily within the first display window under the user's operation instruction. In this way, the preview effect of the virtual preview space is realized through the second display window, improving interactivity and interest during navigation.
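The two-window layout can be sketched as follows: the first window fills the screen and shows the virtual space, while the smaller second window (the virtual preview space) is draggable but clamped to stay inside the first. The `Window` type, sizes and the clamping rule are illustrative assumptions, not specified by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Window:
    x: int
    y: int
    w: int
    h: int

def move_within(outer, inner, dx, dy):
    """Move inner by (dx, dy), keeping it fully inside outer."""
    inner.x = max(outer.x, min(inner.x + dx, outer.x + outer.w - inner.w))
    inner.y = max(outer.y, min(inner.y + dy, outer.y + outer.h - inner.h))
    return inner

first = Window(0, 0, 1920, 1080)     # virtual space, whole display screen
second = Window(1500, 50, 320, 180)  # virtual preview space, small overlay
move_within(first, second, 500, 0)   # drag right; clamped at the screen edge
print((second.x, second.y))          # → (1600, 50)
```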
Step S607: and displaying target content of the three-dimensional model in the virtual space based on a preset interaction rule.
Step S607 is the same as step S103, and is not described herein again.
According to the scheme, the virtual space is determined from the navigation subject according to the position data of the display device, and the target content of the three-dimensional model in the virtual space is displayed.
The virtual display method in this embodiment may be applied to a virtual display apparatus, and the virtual display apparatus in this embodiment may be a server, a mobile device, or a system in which a server and a mobile device cooperate with each other. Accordingly, the parts of the apparatus, for example its units, sub-units, modules and sub-modules, may all be disposed in the server, may all be disposed in the mobile device, or may be disposed in the server and the mobile device respectively.
Further, the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, software or software modules for providing distributed servers, or as a single software or software module, and is not limited herein.
In order to implement the virtual display method of the above embodiment, the present application provides a virtual display apparatus. Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of a virtual display apparatus 70 provided in the present application.
Specifically, the virtual display device 70 may include: a determination module 71, an acquisition module 72 and a display module 73.
The determination module 71 is configured to determine a navigation subject in response to an operation instruction of a user.
The obtaining module 72 is configured to obtain position data of the display device, and determine a virtual space from the navigation topic based on the position data.
The display module 73 is configured to display target content of the three-dimensional model in the virtual space based on a preset interaction rule.
According to the scheme, the virtual space is determined from the navigation subject according to the position data of the display device, and the target content of the three-dimensional model in the virtual space is displayed.
In an embodiment of the present application, each module in the virtual display apparatus 70 shown in fig. 7 may be respectively or entirely combined into one or several units to form the virtual display apparatus, or some unit(s) may be further split into multiple sub-units with smaller functions, which may implement the same operation without affecting implementation of technical effects of the embodiment of the present application. The modules are divided based on logic functions, and in practical application, the functions of one module can be realized by a plurality of units, or the functions of a plurality of modules can be realized by one unit. In other embodiments of the present application, the virtual display device 70 may also include other units, and in practical applications, these functions may also be implemented by being assisted by other units, and may be implemented by cooperation of multiple units.
The above method is applied to a virtual display device. Referring to fig. 8, fig. 8 is a schematic structural diagram of an embodiment of a virtual display device provided in the present application; the virtual display device 80 of this embodiment includes a processor 81 and a memory 82. The memory 82 stores a computer program, and the processor 81 is configured to execute the computer program to implement the virtual display method.
The processor 81 may be an integrated circuit chip having signal processing capability. Processor 81 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application, where the computer storage medium 90 of the present embodiment includes a computer program 91 that can be executed to implement the virtual display method.
The computer storage medium 90 of this embodiment may be a medium that can store program instructions, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, or may also be a server that stores the program instructions, and the server may send the stored program instructions to other devices for operation, or may self-operate the stored program instructions.
In addition, if the above functions are implemented in the form of software functions and sold or used as a standalone product, they may be stored in a storage medium readable by a mobile terminal, that is, the present application also provides a storage device storing program data, which can be executed to implement the method of the above embodiments, and the storage device may be, for example, a usb disk, an optical disk, a server, etc. That is, the present application may be embodied as a software product, which includes several instructions for causing an intelligent terminal to perform all or part of the steps of the methods described in the embodiments.
In the description of the present application, reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specified otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be viewed as implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device (e.g., a personal computer, server, network device, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions). For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
The above description is only an embodiment of the present application, and is not intended to limit the scope of the present application, and all equivalent structures or equivalent processes performed by the present application and the contents of the attached drawings, which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A virtual display method, comprising:
responding to an operation instruction of a user, and determining a navigation theme;
acquiring position data of a display device, and determining a virtual space from the navigation theme based on the position data;
and displaying target content of the three-dimensional model in the virtual space based on a preset interaction rule.
2. The method of claim 1, wherein the acquiring position data of a display device, and determining a virtual space from the navigation theme based on the position data comprises:
acquiring a real-time image, and determining position data of a target object in the real-time image;
determining a relative positional relationship of the subject matter and the display device based on the positional data of the subject matter and the positional data of the display device;
determining the virtual space based on the relative positional relationship.
3. The method according to claim 2, wherein the displaying the target content of the three-dimensional model in the virtual space based on the preset interaction rule comprises:
selecting a plurality of the target objects in the real-time image, and respectively calculating the distance between the plurality of the target objects and the display equipment;
searching the virtual space for a plurality of the three-dimensional models corresponding to the object, respectively;
and determining the sequence of displaying the target contents corresponding to the three-dimensional models according to the distance between the plurality of target objects and the display equipment.
4. The method according to claim 1, wherein the displaying the target content of the three-dimensional model in the virtual space based on the preset interaction rule comprises:
acquiring a three-dimensional model of the target content to be displayed in the virtual space;
displaying the identification information corresponding to each three-dimensional model in a play list;
and selecting the identification information from the play list, and displaying the target content of the corresponding three-dimensional model based on the selected identification information.
5. The method according to claim 4, wherein said displaying the identification information corresponding to each of the three-dimensional models in a playlist comprises:
acquiring the selected display times of the three-dimensional models from a database;
and comparing the display times of each three-dimensional model, sequencing the identification information of the three-dimensional models based on the comparison result, and displaying the identification information in the playlist.
6. The method of claim 2, wherein the method comprises:
searching a virtual preview space of a next point from the navigation subject based on the relative position relation;
and displaying prompt information of the virtual preview space of the next point location.
7. The method of claim 6, wherein the method comprises:
creating a first display window and a second display window based on the virtual space and the virtual preview space;
and displaying the virtual space by using the first display window, and displaying the virtual preview space by using the second display window.
8. A virtual display apparatus, comprising:
the determining module is used for responding to an operation instruction of a user and determining a navigation theme;
the acquisition module is used for acquiring position data of the display equipment and determining a virtual space from the navigation theme based on the position data;
and the display module is used for displaying the target content of the three-dimensional model in the virtual space based on a preset interaction rule.
9. A virtual display device, comprising: a processor and a memory, the memory having stored therein a computer program, the processor being configured to execute the computer program to implement the method of any of claims 1 to 7.
10. A computer readable storage medium having stored thereon program instructions, characterized in that the program instructions, when executed by a processor, implement the method of any of claims 1 to 7.
CN202210524846.XA 2022-05-13 2022-05-13 Virtual display method, device, equipment and storage medium Pending CN114967914A (en)

Publications (1)

Publication Number Publication Date
CN114967914A true CN114967914A (en) 2022-08-30


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117312477A (en) * 2023-11-28 2023-12-29 北京三月雨文化传播有限责任公司 AR technology-based indoor intelligent exhibition positioning method, device, equipment and medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination