CN113742507A - Method for three-dimensionally displaying an article and associated device - Google Patents

Method for three-dimensionally displaying an article and associated device

Info

Publication number
CN113742507A
Authority
CN
China
Prior art keywords
item
target
dimensional
target item
event
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111095021.2A
Other languages
Chinese (zh)
Inventor
史欣于
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202111095021.2A
Publication of CN113742507A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/54 Browsing; Visualisation therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The present disclosure provides a method of three-dimensionally displaying an item and a related apparatus. The method comprises: determining a target item; acquiring a three-dimensional model and three-dimensional display parameters corresponding to the target item; and displaying the target item three-dimensionally based on the three-dimensional model and the three-dimensional display parameters.

Description

Method for three-dimensionally displaying an article and associated device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method for three-dimensionally displaying an article and a related device.
Background
Cultural relic exhibits have high historical and aesthetic value and are of great research significance for the study of form, art, historical materials, and the like. However, a traditional exhibition hall places heavy demands on time and space when displaying cultural relics, and the details of precious relics are difficult to see. Current online exhibitions mainly use two-dimensional pictures, which greatly limits their usefulness.
Disclosure of Invention
The present disclosure provides a method for three-dimensionally displaying an article and related apparatus.
In a first aspect of the present disclosure, there is provided a method for three-dimensionally displaying an article, comprising:
determining a target item;
acquiring a three-dimensional model and three-dimensional display parameters corresponding to the target item; and
displaying the target item three-dimensionally based on the three-dimensional model and the three-dimensional display parameters.
In a second aspect of the present disclosure, a terminal device is provided, comprising one or more processors, a memory, and one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, the programs comprising instructions for performing the method according to the first aspect.
In a third aspect of the present disclosure, there is provided a system for three-dimensionally displaying an article, comprising:
the terminal device of the second aspect, configured to: send a three-dimensional item display request for a target item to a server; and
a server configured to: return, according to the three-dimensional item display request, the three-dimensional display code of the target item to the terminal device, so that the terminal device runs the three-dimensional display code to display a three-dimensional picture of the target item.
In a fourth aspect of the disclosure, a non-transitory computer-readable storage medium containing a computer program is provided, which, when executed by one or more processors, causes the processors to perform the method of the first aspect.
In a fifth aspect of the present disclosure, there is provided a computer program product comprising computer program instructions which, when run on a computer, cause the computer to perform the method of the first aspect.
According to the method and related apparatus for three-dimensionally displaying an item provided by the present disclosure, displaying an item in three dimensions presents its details more fully, makes the item easier for visitors to observe, and improves the user experience.
Drawings
In order to illustrate the technical solutions of the present disclosure or the related art more clearly, the drawings needed in the description of the embodiments or the related art are briefly introduced below. Obviously, the drawings described below show only embodiments of the present disclosure; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1A illustrates a schematic diagram of an exemplary system provided by an embodiment of the present disclosure.
FIG. 1B illustrates a schematic diagram of an exemplary venue 400, according to an embodiment of the present disclosure.
FIG. 1C illustrates a schematic diagram of an exemplary interface, in accordance with embodiments of the present disclosure.
FIG. 1D illustrates a schematic diagram of another exemplary interface according to an embodiment of the present disclosure.
FIG. 1E illustrates a schematic diagram of yet another exemplary interface, in accordance with an embodiment of the present disclosure.
Fig. 1F illustrates a schematic diagram of another exemplary interface according to an embodiment of the present disclosure.
FIG. 1G illustrates a schematic diagram of yet another exemplary interface according to an embodiment of the present disclosure.
Fig. 2 shows a flow diagram of an exemplary method provided by an embodiment of the present disclosure.
Fig. 3 shows a more specific hardware structure diagram of a terminal device according to an embodiment of the present disclosure.
Detailed Description
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
It is to be noted that technical terms or scientific terms used in the embodiments of the present disclosure should have a general meaning as understood by those having ordinary skill in the art to which the present disclosure belongs, unless otherwise defined. The use of "first," "second," and similar terms in the embodiments of the disclosure is not intended to indicate any order, quantity, or importance, but rather to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
Fig. 1A illustrates a schematic diagram of an exemplary system 100 provided by an embodiment of the present disclosure.
As shown in fig. 1A, the system 100 may include a server 200 and a plurality of terminal devices 300a to 300n. There may be one or more servers 200; when there are multiple servers, a distributed architecture may be adopted. The terminal devices 300a to 300n may be used to three-dimensionally display an item and may be of various types. For example, terminal device 300a may be a mobile terminal (e.g., a mobile phone), terminal device 300b may be a personal computer (PC), and terminal device 300n may be a notebook computer (laptop). The server 200 and the terminal devices 300a to 300n may be connected through a wired or wireless network for data interaction.
In some embodiments, the system 100 may be applied to a museum or similar venue for three-dimensional display of the cultural relics it exhibits. The terminal devices 300a to 300n may be installed inside or around the venue for use by visitors.
FIG. 1B illustrates a schematic diagram of an exemplary venue 400, according to an embodiment of the present disclosure.
As shown in FIG. 1B, items 402 to 420 are displayed in the individual exhibition halls and the central exhibition hall of the venue 400. For example, terminal device 300b may be installed in the central exhibition hall of the venue 400 for visitors, who can view a three-dimensional picture of an item exhibited in the venue 400 and related information about the item on terminal device 300b. As another example, the visitor 500 may carry terminal device 300a to view a three-dimensional picture of an item displayed in the venue 400 and its related information. Terminal device 300a may be, for example, the visitor 500's own mobile phone, an audio-guide device provided by the venue 400, or the like.
In some embodiments, the system 100 may be implemented based on three.js. For example, code implementing the method for three-dimensionally displaying an item based on three.js may be written and deployed on the server 200, so that the terminal devices 300a to 300n obtain the code from the server 200, load it, and display the item three-dimensionally through a browser. The code may include code implementing each function of the three-dimensional display, the configuration parameters each function requires, and so on. three.js is a three-dimensional engine built on WebGL: by implementing the display method with its JavaScript function library and API, complex three-dimensional computer graphics can be created and displayed directly in the browser, with no need for a traditional standalone application or plug-in, which makes the method easier to deploy.
It can be understood that, by deploying the code of the method for three-dimensionally displaying an item on the server 200, any terminal device can acquire the code from the server 200 by entering the corresponding code-acquisition address in the browser, which gives the approach good compatibility and a wide range of applications. In some scenarios, however, deployment on an individual terminal device may be required, in which case the code may be stored locally on the terminal device instead of being retrieved from the server 200.
In some embodiments, the server 200 may further include a database in which three-dimensional models of items may be stored. For example, when the system 100 is applied to a museum, the database may store a three-dimensional model of every exhibit the museum displays. The three-dimensional models can be built with three-dimensional modeling software or other tools. In some embodiments, an item may be split into a plurality of parts, with each part modeled separately to obtain a corresponding three-dimensional sub-model, and the sub-models combined into the three-dimensional model of the item. For example, a porcelain vase with four parts (body, cap, left handle, and right handle) can be modeled part by part to obtain four sub-models, which are then combined into the three-dimensional model of the vase. In this way, the item can later be displayed part by part based on the three-dimensional sub-models, so that a visitor can observe more details of the item. The resulting model may be in OBJ format, for example.
After entering the venue 400, a visitor can browse the three-dimensional pictures and related information of items 402 to 420 on terminal device 300b provided in the venue 400, or browse them in sequence along the walking route using mobile phone 300a.
In the initial state, the terminal devices 300a to 300n may not display a three-dimensional picture of any of the items 402 to 420 exhibited in the venue 400. A visitor may send a three-dimensional item display request for a target item to the server 200 by entering a corresponding address (e.g., a web address) in the browser of one of the terminal devices 300a to 300n.
The server 200 may return the three-dimensional display code (e.g., HTML code) of the target item to the corresponding terminal device according to the three-dimensional item display request, so that the terminal device runs the three-dimensional display code in its browser, thereby displaying a three-dimensional picture of the target item in the browser.
Depending on how the terminal devices are actually deployed and how their initial state is configured, entering different addresses can produce different initial screens.
For example, if terminal devices are installed only at specific locations in the venue 400 (e.g., terminal device 300b installed in the central exhibition hall of FIG. 1B), there is no one-to-one correspondence between terminal devices and exhibits. Therefore, when terminal device 300b, in its initial state, sends a three-dimensional item display request to the server 200 by entering a corresponding address (e.g., 3d.museum.com) in the browser, the initial screen displayed after the code returned by the server 200 runs in the browser of terminal device 300b is a schematic view of the venue 400. FIG. 1C illustrates a schematic diagram of an exemplary interface 600, according to an embodiment of the present disclosure. As shown in FIG. 1C, in this embodiment the initial screen shows a scene map, a picture (two-dimensional or three-dimensional) of the venue 400. The coordinates and icons of exhibits 402 to 420 can be configured in the code returned by the server 200, so that different exhibits are displayed at different positions of the initial screen, as shown in FIG. 1C.
In some embodiments, a default address may also be set in the browser as the address for accessing the server 200 to acquire the three-dimensional display code, so that simply starting the browser causes the terminal device to acquire the code and display the initial screen. In this embodiment, if the visitor wants to view the three-dimensional picture of a target item, the corresponding item must be selected in the interface 600 before the three-dimensional picture is displayed. After displaying the interface 600, terminal device 300b may monitor whether a target-item selection event occurs (for example, a visitor double-clicks an exhibit in the scene map with the mouse); if such an event is detected, the target item can be determined from it and its three-dimensional picture displayed. In some embodiments, after displaying the interface 600, terminal device 300b may also monitor whether a target-item click event occurs (for example, a visitor single-clicks an exhibit in the scene map with the mouse); if such an event is detected, the target item can be determined from it and its related information 6002 displayed in the interface 600 (the information may be obtained from the server 200), as shown in FIG. 1D. The style of the pop-up box showing the related information 6002 can be customized.
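The selection logic described above can be sketched as a plain hit test over the exhibit coordinates configured in the returned code. Everything below is illustrative: the exhibit ids and coordinates, the helper name hitTest, and the 20-pixel icon radius are assumptions, not values from the original code.

```javascript
// Hypothetical exhibit layout, mirroring the coordinates-and-icons
// configuration described for the scene map (values are made up).
const exhibits = [
  { id: 402, x: 120, y: 80 },
  { id: 404, x: 300, y: 80 },
  { id: 406, x: 210, y: 200 },
];

// Return the exhibit whose icon contains the click point, or null.
// `radius` is the assumed clickable area around each icon, in pixels.
function hitTest(clickX, clickY, items, radius = 20) {
  let best = null;
  let bestDist = Infinity;
  for (const item of items) {
    const dist = Math.hypot(clickX - item.x, clickY - item.y);
    if (dist <= radius && dist < bestDist) {
      best = item;
      bestDist = dist;
    }
  }
  return best;
}
```

A double-click handler on the scene map would call hitTest with the event coordinates and, on a hit, request the three-dimensional picture of the matched exhibit; a single-click handler could instead open the related-information pop-up.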
As another example, if the terminal devices correspond one-to-one with the exhibits, for example with one terminal device placed beside each exhibit for its three-dimensional display, then when a terminal device, in its initial state, sends a three-dimensional item display request for the target item (exhibit 1) to the server 200 by entering a corresponding address (e.g., 3d.museum.com/exhibit1) in the browser, the initial screen displayed after the code returned by the server 200 runs in the browser of terminal device 300b is a view of exhibit 1. FIG. 1E illustrates a schematic diagram of an exemplary interface 602, according to an embodiment of the present disclosure. As shown in FIG. 1E, in this embodiment a three-dimensional picture of the target item is displayed in the initial screen. In some embodiments, a default address may also be set in the browser as the address for accessing the server 200 to obtain the three-dimensional display code, so that the visitor can display the three-dimensional picture of exhibit 1 on the terminal device simply by starting the browser. It will be understood that in this embodiment each terminal device corresponds to one exhibit, so the default browser address differs per terminal device; for example, the default browser address of the terminal device corresponding to exhibit 2 may be 3d.museum.com/exhibit2.
In some embodiments, the visitor 500 may also use the mobile terminal 300a carried along to display a three-dimensional picture of an item. For example, the display cabinet of the target item may carry an identifier such as a bar code or two-dimensional code encoding the code address corresponding to the target item (e.g., 3d.museum.com/exhibit1); the visitor 500 scans the identifier with mobile terminal 300a, obtains the code address, and thereby sends a three-dimensional item display request for the target item to the server 200. The server 200 may determine the target item from the address information carried in the identifier and return the corresponding code to terminal device 300a.
As another example, terminal device 300a may enter the initial screen by entering the address corresponding to three-dimensional item display (e.g., 3d.museum.com) in the browser, automatically acquire its positioning information (e.g., GPS information, or positioning information determined by an indoor positioning method such as Bluetooth or Wi-Fi), and then send a three-dimensional item display request carrying the positioning information to the server 200. The server 200 may determine the position of the terminal device in the scene map from the positioning information, determine the target item from that position, and return the code corresponding to the target item to terminal device 300a, so that terminal device 300a displays a three-dimensional picture of the target item.
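The server-side step of determining the target item from the device's position in the scene map can be sketched as a nearest-neighbor lookup. The exhibit positions and the helper name nearestExhibit below are hypothetical, not taken from the patent.

```javascript
// Hypothetical exhibit positions in the venue's scene-map coordinates.
const exhibitPositions = [
  { id: 402, x: 10, y: 5 },
  { id: 410, x: 40, y: 5 },
  { id: 420, x: 25, y: 30 },
];

// Pick the exhibit closest to the device position carried in the
// three-dimensional item display request (simple nearest-neighbor search).
function nearestExhibit(deviceX, deviceY, items) {
  return items.reduce((best, item) => {
    const dist = Math.hypot(deviceX - item.x, deviceY - item.y);
    return dist < best.dist ? { item, dist } : best;
  }, { item: null, dist: Infinity }).item;
}
```

In a real deployment the server would first convert GPS or indoor-positioning readings into scene-map coordinates before running such a lookup.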
After acquiring the corresponding code, the terminal device 300a or 300b runs the code using the browser, and starts three-dimensional display. After determining the target item, the terminal device 300a or 300b may obtain the three-dimensional model corresponding to the target item from the database of the server 200, and perform three-dimensional display on the target item based on the three-dimensional display parameters preset in the code, as shown in fig. 1E.
In some embodiments, when the method for three-dimensionally displaying an item is implemented based on three.js, the three-dimensional display parameters can be provided as configurable items in the code. An example of a portion of this code is shown below.
[The code listing appears only as images in the original publication (figures BDA0003268908010000061 and BDA0003268908010000071) and is not reproduced here as text.]
As can be seen from the above examples, the configurable items in the code may include:
modelUrl: the model URL; the model type is determined from the file suffix, and mainstream model formats such as FBX, OBJ, and glTF are supported;
scale: scaling factor; defaults to 1 when not passed;
showAxes: whether to display the coordinate axes; hidden by default;
pointLightHelper: whether to display the auxiliary point-light-source helper;
bgColor: the background color, configured according to page requirements; defaults to the background color displayed by the p2 page;
cameraPosition: the camera position, used to quickly set the initial viewing angle;
isFixed: whether the viewing angle is fixed; when set, the user may not drag the model to observe it; by default the user can change the angle to view the model;
showIcons: whether to display icons; when enabled, coordinates and icons can be shown at the corresponding positions, and additional information can be displayed, for example clicking an icon to open a pop-up box showing related information.
It will be appreciated that the above examples are merely illustrative and that the required parameters may vary depending on the actual requirements.
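Since the original code listing survives only as images, the following is a reconstruction rather than the original: a sketch of how the configurable items listed above might be merged with their stated defaults. Field names follow the list; the default bgColor, cameraPosition, and showIcons values are assumptions, as is the helper name buildConfig.

```javascript
// Defaults reconstructed from the configurable items described above.
// Values marked "assumed" are not specified in the original text.
const DEFAULTS = {
  scale: 1,                 // defaults to 1 when not passed
  showAxes: false,          // coordinate axes hidden by default
  pointLightHelper: false,  // auxiliary point-light helper hidden (assumed)
  bgColor: '#000000',       // assumed; original defaults to the p2 page background
  cameraPosition: { x: 0, y: 0, z: 5 }, // assumed initial viewing angle
  isFixed: false,           // by default the user may drag to change the angle
  showIcons: false,         // icons hidden by default (assumed)
};

// Merge a user-supplied configuration over the defaults. `modelUrl` is
// the only required item; its file suffix determines the model type
// (FBX, OBJ, glTF, ...).
function buildConfig(userConfig) {
  if (!userConfig || !userConfig.modelUrl) {
    throw new Error('modelUrl is required');
  }
  return { ...DEFAULTS, ...userConfig };
}
```

With this shape, a page only has to pass the model address and any overrides, matching the description that all parameters except the model address may keep their defaults.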
By entering these basic parameters through the configured component, terminal device 300a or 300b can quickly present the three-dimensional model in its browser page. Except for the model address, every parameter can either keep its default value or be configured as needed.
After terminal device 300a or 300b loads and runs the code returned by the server 200, OBJLoader may be used to read the three-dimensional model data and MTLLoader to load the model material, with the renderer, scene, camera, lights, and other parameters preset. Ambient-light parameters can be preset in the scene to ensure the item displays normally, and point-light-source parameters can be preset to simulate the item's material more realistically. The initial position of the three-dimensional model of the target item may be the center point of the scene.
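A minimal sketch of this loading sequence, assuming the standard three.js example loaders and browser execution; the file paths, light positions, and camera values are illustrative, not taken from the patent:

```javascript
import * as THREE from 'three';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';
import { MTLLoader } from 'three/examples/jsm/loaders/MTLLoader.js';

// Preset renderer, scene, and camera as described above.
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  45, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(0, 1, 5); // illustrative initial viewing angle

// Ambient light guarantees the item is visible; a point light helps
// simulate the item's material more realistically.
scene.add(new THREE.AmbientLight(0xffffff, 0.6));
const pointLight = new THREE.PointLight(0xffffff, 1.0);
pointLight.position.set(2, 3, 4);
scene.add(pointLight);

// Load the material first, then the geometry, and place the model
// at the center point of the scene.
new MTLLoader().load('models/exhibit1.mtl', (materials) => {
  materials.preload();
  new OBJLoader().setMaterials(materials).load('models/exhibit1.obj', (obj) => {
    obj.position.set(0, 0, 0);
    scene.add(obj);
    renderer.render(scene, camera);
  });
});
```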
The visitor can perform further operations on the three-dimensionally displayed item on terminal device 300a or 300b.
In some embodiments, for example, the visitor may adjust the angle, size, and position of the item's three-dimensional model displayed in the interface 602. For example, the OrbitControls camera control can be introduced to perform zoom, pan, and rotate operations on the scene, so that a visitor can adjust the three-dimensional model to the desired angle and position for observation. With the requestAnimationFrame module, the scene can be re-rendered continuously so that rotation and the other adjustments appear smooth.
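A sketch of this interaction setup, assuming the stock OrbitControls addon from the three.js examples; the flag values and camera parameters are illustrative:

```javascript
import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

// Re-created here so the sketch is self-contained; in practice these
// come from the loading step.
const renderer = new THREE.WebGLRenderer();
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, 1, 0.1, 1000);
camera.position.set(0, 1, 5);
document.body.appendChild(renderer.domElement);

// OrbitControls gives the visitor zoom, pan, and rotate gestures.
const controls = new OrbitControls(camera, renderer.domElement);
controls.enableRotate = true; // would be false when isFixed is set

// requestAnimationFrame drives a continuous render loop so camera
// adjustments (and any automatic rotation) are displayed smoothly.
function animate() {
  requestAnimationFrame(animate);
  controls.update();
  renderer.render(scene, camera);
}
animate();
```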
In some embodiments, as shown in FIG. 1E, terminal device 300a or 300b may display icons at specific locations of the interface 602 to prompt the visitor to adjust the three-dimensional model through them. For example, an arc-arrow icon 6022 may be placed next to the three-dimensional model of the exhibit to prompt the visitor to adjust the model's angle by slide-adjusting the icon. Likewise, a cross-arrow icon 6024 may be placed next to the model to prompt the visitor to adjust its position, and a zoom icon 6026 to prompt the visitor to adjust its size, both by slide adjustment. The browser of terminal device 300a or 300b may listen for slide-adjustment events on a particular icon to determine whether the visitor has adjusted it.
If the browser of terminal device 300a or 300b detects a slide-adjustment event on the item, it can determine the sliding start point and end point from the event, derive the corresponding camera-parameter adjustment amount from them, and then adjust the corresponding camera parameters accordingly, thereby adjusting the angle, position, and size of the three-dimensional model.
In some embodiments, for example, the visitor may adjust the lighting of the three-dimensional model of the item displayed in interface 602.
In some embodiments, as shown in FIG. 1E, terminal device 300a or 300b may display icons at specific locations of the interface 602 to prompt the visitor to adjust the lighting of the three-dimensional model through them. For example, a horizontal-arrow icon 6028 may be placed next to the three-dimensional model of the exhibit to prompt the visitor to adjust the x-axis position of the model's point light source by slide-adjusting the icon. Similarly, a vertical-arrow icon 6030 may be used to adjust the y-axis position, a diagonal-arrow icon 6032 the z-axis position, and a further vertical-arrow icon 6034 the intensity of the point light source. The browser of terminal device 300a or 300b may listen for slide-adjustment events on a particular icon to determine whether the visitor has adjusted it. The maximum and minimum values for each arrow icon, and the amount each drag increases or decreases the value, are preset to suit the current scene.
If the browser of terminal device 300a or 300b detects a slide-adjust-lighting event, it can determine the corresponding sliding start point and end point from the event, derive the corresponding lighting-parameter adjustment amount, and then adjust the lighting parameters accordingly, thereby adjusting the x, y, and z positions and the intensity of the point light source.
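The mapping from a slide gesture to a clamped parameter change can be sketched as below. The helper name, the per-pixel step, and the 0 to 2 intensity range are assumptions, standing in for the preset minimum, maximum, and per-drag increment the description mentions.

```javascript
// Map a horizontal slide gesture on a light-adjustment icon to a new
// parameter value, clamped to preset minimum and maximum values.
// `perPixel` is the preset amount the parameter changes per pixel.
function adjustLightParam(current, slideStartX, slideEndX,
                          { min, max, perPixel }) {
  const delta = (slideEndX - slideStartX) * perPixel;
  return Math.min(max, Math.max(min, current + delta));
}

// Example preset: point-light intensity between 0 and 2, 0.01 per pixel.
const intensityRange = { min: 0, max: 2, perPixel: 0.01 };
```

The same helper would serve for the x, y, and z positions of the point light source, each with its own preset range.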
In some embodiments, taking terminal device 300b operated with a mouse as an example, the Vector2 class representing a two-dimensional vector can be introduced, e.g., const mouse = new THREE.Vector2(), to record the coordinates (x, y) of the mouse position relative to the model display area. The values in the x and y directions range from -1 to +1, computed as mouse.x = (<abscissa of the mouse relative to the visible area> / <width of the visible area>) * 2 - 1 and mouse.y = -(<ordinate of the mouse relative to the visible area> / <height of the visible area>) * 2 + 1. By listening for mouse-move events, the page element node corresponding to the model display area (assumed here to be the node corresponding to icon 6022) can be obtained, and its position relative to the browser window obtained through getBoundingClientRect(). Subtracting the horizontal position of icon 6022 relative to the browser window from the horizontal coordinate of the mouse relative to the browser page gives the abscissa of the mouse relative to the visible area; similarly, subtracting the vertical position of icon 6022 relative to the browser window from the vertical coordinate of the mouse relative to the browser page gives the ordinate of the mouse relative to the visible area. In this way the sliding start point and end point of the mouse can be calculated, and the corresponding parameter adjustment amounts derived.
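The normalization just described can be written as a small helper, where rect plays the role of the value returned by getBoundingClientRect(); the function name is an assumption.

```javascript
// Convert a mouse position (relative to the browser page) into the
// -1..+1 coordinates used for the model display area, given the
// bounding rectangle of that area (as from getBoundingClientRect()).
function toNormalizedDeviceCoords(pageX, pageY, rect) {
  const localX = pageX - rect.left; // abscissa relative to the visible area
  const localY = pageY - rect.top;  // ordinate relative to the visible area
  return {
    x: (localX / rect.width) * 2 - 1,
    y: -(localY / rect.height) * 2 + 1,
  };
}
```

The center of the display area maps to (0, 0) and its top-left corner to (-1, +1), matching the formulas above.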
In some embodiments, terminal device 300a or 300b may also display text information for a specific part of the three-dimensional model. For example, terminal device 300a or 300b may determine whether a click event (or mouse-over event) occurs on a specific part of the three-dimensional model, and thereby decide whether to display the text information for that part of the item.
If the browser of terminal device 300a or 300b detects a target-item-part selection event on a specific part of the three-dimensional model, it determines the selected part of the target item from the event, then looks up the corresponding text introduction for that part, and finally displays the text introduction at the selected part. FIG. 1F shows a schematic view of another exemplary interface 602, according to an embodiment of the present disclosure. As shown in FIG. 1F, when the body of the target item is clicked, the text introduction 6038 corresponding to that part is displayed in the interface 602.
In some embodiments, taking terminal device 300b operated with a mouse as an example, when the mouse moves into the area occupied by a specific part of the item's three-dimensional model, the mouse-move event is detected and the text introduction (or explanation) for the corresponding part is displayed. In some embodiments, an initially hidden page element named textBox may be created; when the mouse moves into the area of a specific part, the text introduction to display is looked up by the name of the corresponding three-dimensional sub-model, set as the content of the textBox, and the textBox is shown, as in FIG. 1F.
In addition, in some embodiments, if the browser of terminal device 300a or 300b detects a target-item-part selection event (or mouse-over event) on a specific part of the three-dimensional model, that part may be highlighted with a glowing edge to indicate the currently selected part of the item to the visitor. Taking terminal device 300b operated with a mouse as an example, highlighting the object under the mouse can be realized by introducing the post-processing channels of three.js; the OutlinePass contour pass can be used to add the glowing effect to the edge of the selected object. Depending on the desired display effect, the intensity, flicker frequency, and color of the edge glow can be adjusted.
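A sketch of this highlighting setup, assuming the EffectComposer, RenderPass, and OutlinePass addons from the three.js examples; the edge-strength, pulse, and color values are illustrative:

```javascript
import * as THREE from 'three';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { OutlinePass } from 'three/examples/jsm/postprocessing/OutlinePass.js';

// Minimal renderer/scene/camera, assumed created earlier in a real page.
const renderer = new THREE.WebGLRenderer();
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, 1, 0.1, 1000);

// Compose the normal render with an outline pass that adds a glowing
// edge to whatever objects are placed in `selectedObjects`.
const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));
const outlinePass = new OutlinePass(
  new THREE.Vector2(window.innerWidth, window.innerHeight), scene, camera);
outlinePass.edgeStrength = 3;                // intensity of the edge glow
outlinePass.pulsePeriod = 2;                 // flicker (pulse) period
outlinePass.visibleEdgeColor.set('#ffff00'); // glow color
composer.addPass(outlinePass);

// On mouse-over, highlight the sub-model under the cursor; on mouse-out,
// clear the target array of the glow effect.
function highlight(mesh) {
  outlinePass.selectedObjects = mesh ? [mesh] : [];
}
```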
Furthermore, when the mouse moves out, the textBox may be hidden again and the target array of the glow effect (the set of selected objects) may be cleared.
In some embodiments, the terminal device 300a or 300b may also split the three-dimensional model for presentation. For example, as shown in fig. 1E, the terminal device 300a or 300b may display a split presentation icon 6036 at a specific position of the interface 602 to prompt the visitor that the three-dimensional model can be split for presentation by clicking the icon 6036. The terminal device 300a or 300b can determine whether the visitor has issued a split display instruction for the target item by monitoring whether the icon 6036 is clicked with the mouse.
If the browser of the terminal device 300a or 300b monitors the event that the icon 6036 is clicked, the parts of the target item can be displayed in a split manner according to the three-dimensional sub-model (formed during modeling) of the parts of the target item.
In the example where the system 100 is implemented in three.js, the mesh of an item's three-dimensional model may be named after the item at modeling time, for example by adding a prefix "object_n" (where n is the item's number, a natural number assigned to each item in turn starting from 1). This makes it easy to identify in three.js which meshes belong to the item's three-dimensional model and participate in disassembly. Meanwhile, suffixes such as "_top", "_left", "_right", "_middle", and "_bottom" may be appended to name the meshes of the three-dimensional submodels of the respective parts, so that the model can be split based on these suffixes. For the porcelain shown in fig. 1E, this classification satisfies the basic requirements; for example, the suffixes "_top", "_left", "_right", and "_middle" correspond to the porcelain cover, the left handle, the right handle, and the bottle body, respectively. An array modelList may be set up to store the meshes of all models: when the model is loaded with OBJLoader, all elements in the scene are traversed recursively, and an element is added to modelList when its name carries the "object_n" prefix.
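The traversal that builds modelList can be sketched without three.js itself — here a mock scene graph of `{ name, children }` nodes stands in for Object3D, and the walk mirrors what `object3D.traverse` does:

```javascript
// Recursively visit every node in a mock scene graph; in three.js the
// same walk would be object3D.traverse(callback).
function traverse(node, callback) {
  callback(node);
  for (const child of node.children || []) traverse(child, callback);
}

// Collect every mesh whose name carries the "object_n" item prefix,
// e.g. "object_1_top"; helpers such as lights are skipped.
function buildModelList(root) {
  const modelList = [];
  traverse(root, (node) => {
    if (/^object_\d+/.test(node.name || '')) modelList.push(node);
  });
  return modelList;
}
```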
In some embodiments, two disassembly schemes can be preset in the system 100, corresponding to porcelain with a bottle cap and porcelain without a bottle cap. Therefore, before disassembling, the terminal device 300a or 300b may first determine the type of the target item, for example by checking whether there is a model mesh whose name carries the "_top" suffix.
If the type of the target item is a first type (e.g., porcelain with a bottle cap), the target item may be split into a first number of parts (e.g., upper, middle, and lower); if the type of the target item is a second type (e.g., porcelain without a bottle cap), the target item may be split into a second number of parts (e.g., upper and lower).
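A sketch of that type check, assuming the suffix naming from the modeling step (the returned part counts are the two schemes described above):

```javascript
// Decide the disassembly scheme from the mesh names in modelList: if any
// mesh name ends with "_top", the porcelain has a bottle cap and is split
// into three parts; otherwise it is split into two.
function chooseSplitScheme(modelList) {
  const hasTop = modelList.some((mesh) => mesh.name.endsWith('_top'));
  return hasTop
    ? { type: 'first', parts: 3 }   // upper, middle and lower
    : { type: 'second', parts: 2 }; // upper and lower
}
```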
In some embodiments, the final positions of the parts differ between the two splitting schemes, chosen for the best display effect in the model display area of the browser. Porcelain with a bottle cap (i.e., having a mesh with the "_top" suffix) is split evenly into upper, middle, and lower parts in the central display area, while porcelain without a bottle cap is split into upper and lower parts; the y value of each final position is adjusted according to the actual display effect.
It will be appreciated that if the target item is also of a third type, for example a covered china with handles on both sides, the handles may also be detached, which may be done according to a suffix name. The position of the corresponding model mesh in the horizontal direction can be adjusted accordingly. Fig. 1G shows a schematic view of the breaking apart of a covered porcelain with two side handles into four parts.
In the splitting process, to make the splitting action smoother, a tween animation library for JavaScript (e.g., Tween.js) can be introduced to realize a natural transition of each part's position change. The transition is implemented by encapsulating a move function in which a transition time is set, defaulting for example to 800 ms, i.e., moving from the current position to the target position takes 800 ms. The function parameters may be the object obj to be moved and the target position, in a format such as {x:10, y:10, z:10}, indicating that the object is moved to (10, 10, 10). A coordinate whose value is 0 may be omitted from the parameter, and the default original position coordinate is (0, 0, 0). For example, to move the porcelain cover up 5 units from (0, 0, 0), the position parameter passed into the move function of the terminal device 300a or 300b need only be {y:5}.
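The constant-speed interpolation behind the move function can be sketched as pure math. In the page, the tween library would drive this per frame (e.g. something like `new TWEEN.Tween(obj.position).to(target, 800).start()`); the version below just evaluates the position at a given elapsed time:

```javascript
const TRANSITION_MS = 800; // default transition time

// Interpolate from a start position toward a target at constant speed.
// An omitted coordinate is treated as 0, matching the default original
// position of (0, 0, 0).
function move(start, target, elapsedMs) {
  const t = Math.min(elapsedMs / TRANSITION_MS, 1); // clamp at the end point
  const pos = {};
  for (const axis of ['x', 'y', 'z']) {
    const from = start[axis] ?? 0;
    const to = target[axis] ?? 0;
    pos[axis] = from + (to - from) * t;
  }
  return pos;
}
```

For instance, move({}, { y: 5 }, 400) yields the halfway position of the porcelain cover on its way up by 5 units.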
In some embodiments, in the already split model, the visitor may further adjust the split parts. For example, as shown in fig. 1G, the terminal device 300a or 300b may display a corresponding icon at a specific position of the interface 602 to prompt the visitor to adjust the split parts of the three-dimensional model by sliding and adjusting the icon.
Because the split parts occupy much of the interface 602, displaying adjustment icons for every part at once would overcrowd the interface 602 and make so many icons difficult to lay out. Thus, in some embodiments, the currently selected part may be determined by monitoring whether a mouse-in event occurs on a particular part, and the corresponding icons may be displayed for that part only. For example, as shown in fig. 1G, assuming the mouse has moved into the bottle body and the terminal device 300a or 300b detects this event, the browser may present an arc-shaped arrow icon 6040 for adjusting the angle of the body part, a cross-shaped arrow icon 6042 for adjusting the position of the body part, and a zoom icon 6044 for adjusting the size of the body part.
Further, similar to the adjustment of the whole three-dimensional model, the terminal device 300a or 300b may perform corresponding adjustment on the body portion by monitoring the sliding event on the icon, which is not described herein again. Similarly, for the left-side handle, the right-side handle and the bottle cap, the specific adjustment can be realized in this way, and details are not repeated here.
In some embodiments, a Raycaster (ray caster) may be introduced to calculate which object in three-dimensional space the mouse has moved over. The ray is updated from the camera position and the mouse position, and the intersections between objects and the ray are calculated. Since, per the model disassembly step, the model meshes are stored in modelList, the intersections can be calculated against the meshes in modelList.
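Raycaster.setFromCamera expects the pointer in normalized device coordinates (NDC), so the mouse's pixel position must first be mapped into [-1, 1]. The helper below is that pure-math step; afterwards the three.js calls would be `raycaster.setFromCamera(ndc, camera)` followed by `raycaster.intersectObjects(modelList)`:

```javascript
// Map mouse pixel coordinates on a canvas of the given size to
// normalized device coordinates in [-1, 1].
function toNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -(clientY / height) * 2 + 1, // screen y grows downward, NDC y upward
  };
}
```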
In some embodiments, similar to the overall display of the three-dimensional model, the split part may also display the text introduction 6048 when a mouse-in event is detected, as shown in fig. 1G. At the same time, the split part can be highlighted. The specific working principle is similar to that of the integral display of the three-dimensional model, and is not repeated herein.
In some embodiments, the terminal device 300a or 300b may also merge and display the split three-dimensional model again. For example, as shown in fig. 1G, the terminal device 300a or 300b may display a merged presentation icon 6046 at a specific position of the interface 602 to prompt the visitor that the multiple three-dimensional submodels can be merged for presentation by clicking the icon 6046. The terminal device 300a or 300b can determine whether the visitor has issued a merged presentation instruction for the target item by monitoring whether the icon 6046 is clicked with the mouse.
If the browser of the terminal device 300a or 300b detects the event that the icon 6046 is clicked, the parts of the target item can be merged and displayed according to the initial positions of the three-dimensional submodels (formed during modeling) of the parts of the target item. For example, when merging the models, all model meshes in the modelList can be traversed and the move function called for each element with its pre-disassembly initial position, such as {x:0, y:0, z:0}, to complete the merge. The merged interface is shown in fig. 1E or fig. 1F.
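A sketch of the merge step over mock meshes; each mesh here carries a hypothetical initialPosition recorded before disassembly, and the real code would tween back via the move function instead of snapping:

```javascript
// Send every mesh in modelList back to its pre-disassembly position,
// defaulting to the origin (0, 0, 0) when none was recorded.
function mergeModels(modelList) {
  for (const mesh of modelList) {
    mesh.position = { ...(mesh.initialPosition ?? { x: 0, y: 0, z: 0 }) };
  }
  return modelList;
}
```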
As can be seen from the above embodiments, the method for three-dimensionally displaying an item and the related device provided by the embodiments of the present disclosure realize all-around display of cultural relics or precious exhibits: visitors can view the details of an exhibit from any angle, and adjust lights, scenes, and the like according to personal preference to achieve the best viewing effect. Disassembly and merging functions are also provided, so that the interior of an exhibit and the way its parts fit together can be understood in detail. In addition, a full text explanation is provided, and moving the mouse into each part displays its detailed introduction. The method and related device for three-dimensionally displaying items provided by the embodiments of the present disclosure thus enable viewers to enjoy and study exhibits online.
According to the method and related device for three-dimensionally displaying items provided by the embodiments of the present disclosure, the online three-dimensional item model presents the details of an item to the greatest extent by means of disassembling and merging the model, adjusting the scene, and the like, thereby addressing the problem that exhibitions are limited by time and space and details cannot be studied in depth. The visitor can rotate the model at will to view details; adjust the position, intensity, and color of the light as needed for the best viewing effect; use either of the two preset disassembly schemes to split the model and observe the interior of the exhibit, with configurable natural transitions during disassembly and merging; and move the mouse over different parts of the model to display their respective explanations.
The embodiment of the disclosure also provides a method for three-dimensionally displaying an article. Fig. 2 illustrates a flow diagram of an exemplary method 700 provided by an embodiment of the present disclosure. The method 700 may be implemented by the terminal devices 300 a-300 n and has the technical effect of the corresponding embodiments of the system 100 described above. As shown in fig. 2, the method 700 may include the following steps.
At step 702, a terminal device (e.g., terminal device 300b) may determine a target item.
In step 704, the terminal device 300b may obtain a three-dimensional model and three-dimensional display parameters corresponding to the target item.
In step 706, the terminal device 300b may perform three-dimensional display on the target item based on the three-dimensional model and the three-dimensional display parameters.
In some embodiments, the method is a three-dimensional item display method based on three.js, and the method 700 may further include the following steps:
monitoring whether a sliding adjustment article event occurs; wherein the slide adjust item event comprises an event that adjusts an angle, position, or size of the target item;
in response to the occurrence of a slide adjustment item event, determining a slide starting point and a slide ending point of the slide adjustment item event, and determining a corresponding camera parameter adjustment amount according to the slide starting point and the slide ending point of the slide adjustment item event; and
adjusting the camera parameters according to the camera parameter adjustment amount.
In some embodiments, the three-dimensional display parameters further include a light parameter in three.js, and the method 700 may further include the steps of:
monitoring whether a sliding light adjusting event occurs;
in response to the occurrence of a sliding light event, determining a sliding starting point and a sliding end point of the sliding light event, and determining a corresponding light parameter adjustment amount according to the sliding starting point and the sliding end point of the sliding light event; and
adjusting the light parameters according to the light parameter adjustment amount.
In some embodiments, the target item is split into a plurality of portions, the three-dimensional model includes a plurality of three-dimensional submodels, the three-dimensional submodels correspond to the portions, the method 700 may further include the steps of:
receiving a split display instruction for the target item; and
based on the split display instruction, displaying the multiple parts of the target item in a split manner according to the three-dimensional submodels of the multiple parts of the target item.
In some embodiments, splitting the multiple portions of the target item for display according to the three-dimensional submodel of the multiple portions of the target item based on the splitting display instruction further comprises:
determining a type of the target item;
responsive to the type of the target item being a first type, splitting the target item into a first number of multiple portions; or
Responsive to the type of the target item being a second type, splitting the target item into a second number of multiple portions;
wherein the first number is greater than the second number.
In some embodiments, displaying the plurality of portions of the target item in a split manner comprises:
determining a movement starting point and a movement end point of a target portion of the target item; and
calling a move function to move the target part according to the movement starting point and the movement end point, wherein a transition time is set in the move function so that the target part moves from the movement starting point to the movement end point at a constant speed over the transition time.
In some embodiments, the method is a three-dimensional item display method based on three.js, and the method 700 may further include the following steps:
monitoring whether a sliding adjustment article part event occurs; wherein the slide adjust item portion event comprises an event that adjusts an angle, position, or size of a target portion of the target item;
in response to the occurrence of a slide adjusting item portion event, determining a slide starting point and a slide ending point of the slide adjusting item portion event, and determining a corresponding camera parameter adjustment amount according to the slide starting point and the slide ending point of the slide adjusting item portion event; and
adjusting the camera parameters according to the camera parameter adjustment amount.
In some embodiments, the method 700 may further include the steps of:
monitoring whether a target object part selection event occurs or not;
in response to an occurrence of a target item location selection event, determining a target selection location on the target item based on the target item location selection event;
determining corresponding text introduction information according to the target selected part; and
displaying the text introduction information at the target selected part.
In some embodiments, determining the target item further comprises:
receiving a scene map and displaying the scene map;
monitoring whether a target object selection event occurs; and
in response to occurrence of a target item selection event, determining the target item based on the target item selection event.
It should be noted that the above describes some embodiments of the disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Based on the same inventive concept, corresponding to any of the above-described embodiments, the present disclosure further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and running on the processor, and when the processor executes the program, the method 700 described in any of the above embodiments is implemented.
Fig. 3 shows a more specific hardware structure diagram of a terminal device 800 according to an embodiment of the present disclosure. As shown in fig. 3, the apparatus 800 may include: a processor 802, a memory 804, an input/output interface 806, a communication interface 808, and a bus 810. Wherein the processor 802, memory 804, input/output interface 806, and communication interface 808 are communicatively coupled to each other within the device via a bus 810.
The processor 802 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present disclosure.
The Memory 804 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 804 may store an operating system and other application programs, and when the technical solution provided by the embodiments of the present specification is implemented by software or firmware, the relevant program codes are stored in the memory 804 and called to be executed by the processor 802.
The input/output interface 806 is used for connecting an input/output module to input and output information. The i/o module may be configured as a component in a device (not shown) or may be external to the device to provide a corresponding function. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.
The communication interface 808 is used for connecting a communication module (not shown in the figure) to implement communication interaction between the present device and other devices. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like).
Bus 810 includes a path that transfers information between various components of the device, such as processor 802, memory 804, input/output interface 806, and communication interface 808.
It should be noted that although the above-described device only shows the processor 802, the memory 804, the input/output interface 806, the communication interface 808 and the bus 810, in a specific implementation, the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above-described apparatus may also include only those components necessary to implement the embodiments of the present description, and not necessarily all of the components shown in the figures.
The electronic device of the above embodiment is used to implement the corresponding method 700 in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Based on the same inventive concept, corresponding to any of the above-described embodiment methods, the present disclosure also provides a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the method 700 as described in any of the above embodiments.
Computer-readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
The computer instructions stored in the storage medium of the above embodiment are used to enable the computer to execute the method 700 of any embodiment, and have the beneficial effects of the corresponding method embodiment, which are not described herein again.
The present disclosure also provides a computer program product comprising a computer program, corresponding to any of the embodiment methods 700 described above, based on the same inventive concept. In some embodiments, the computer program is executable by one or more processors to cause the processors to perform the method 700. Corresponding to the execution subject corresponding to each step in the embodiments of the method 700, the processor executing the corresponding step may be the corresponding execution subject.
The computer program product of the foregoing embodiment is used for enabling a processor to execute the method 700 according to any of the foregoing embodiments, and has the advantages of corresponding method embodiments, which are not described herein again.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, is limited to these examples; within the idea of the present disclosure, also technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the embodiments of the present disclosure as described above, which are not provided in detail for the sake of brevity.
In addition, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown in the provided figures for simplicity of illustration and discussion, and so as not to obscure the embodiments of the disclosure. Furthermore, devices may be shown in block diagram form in order to avoid obscuring embodiments of the present disclosure, and this also takes into account the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which the embodiments of the present disclosure are to be implemented (i.e., specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that the embodiments of the disclosure can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative instead of restrictive.
While the present disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments.
The disclosed embodiments are intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Therefore, any omissions, modifications, equivalents, improvements, and the like that may be made within the spirit and principles of the embodiments of the disclosure are intended to be included within the scope of the disclosure.

Claims (15)

1. A method of three-dimensionally displaying an article, comprising:
determining a target item;
acquiring a three-dimensional model and three-dimensional display parameters corresponding to the target item; and
performing three-dimensional display on the target item based on the three-dimensional model and the three-dimensional display parameters.
2. The method of claim 1, wherein the method is a three-dimensional item display method based on three.js, the method further comprising:
monitoring whether a sliding adjustment article event occurs; wherein the slide adjust item event comprises an event that adjusts an angle, position, or size of the target item;
in response to the occurrence of a slide adjustment item event, determining a slide starting point and a slide ending point of the slide adjustment item event, and determining a corresponding camera parameter adjustment amount according to the slide starting point and the slide ending point of the slide adjustment item event; and
adjusting the camera parameters according to the camera parameter adjustment amount.
3. The method of claim 2, wherein the three-dimensional display parameters further comprise light parameters in three.js, the method further comprising:
Monitoring whether a sliding light adjusting event occurs;
in response to the occurrence of a sliding light event, determining a sliding starting point and a sliding end point of the sliding light event, and determining a corresponding light parameter adjustment amount according to the sliding starting point and the sliding end point of the sliding light event; and
adjusting the light parameters according to the light parameter adjustment amount.
4. The method of claim 1, wherein the target item is split into a plurality of portions, the three-dimensional model comprising a plurality of three-dimensional submodels, the three-dimensional submodels corresponding to the portions, the method further comprising:
receiving a split display instruction for the target item; and
based on the split display instruction, displaying the multiple parts of the target item in a split manner according to the three-dimensional submodels of the multiple parts of the target item.
5. The method of claim 4, wherein the split-display of the plurality of portions of the target item according to the three-dimensional sub-model of the plurality of portions of the target item based on the split-display instruction further comprises:
determining a type of the target item;
responsive to the type of the target item being a first type, splitting the target item into a first number of multiple portions; or
Responsive to the type of the target item being a second type, splitting the target item into a second number of multiple portions;
wherein the first number is greater than the second number.
6. The method of claim 5, wherein splitting the plurality of portions displaying the target item comprises:
determining a movement starting point and a movement end point of a target portion of the target item; and
calling a move function to move the target part according to the movement starting point and the movement end point, wherein a transition time is set in the move function so that the target part moves from the movement starting point to the movement end point at a constant speed over the transition time.
7. The method of claim 5, wherein the method is a three-dimensional item display method based on three.js, the method further comprising:
monitoring whether a sliding adjustment article part event occurs; wherein the slide adjust item portion event comprises an event that adjusts an angle, position, or size of a target portion of the target item;
in response to the occurrence of a slide adjusting item portion event, determining a slide starting point and a slide ending point of the slide adjusting item portion event, and determining a corresponding camera parameter adjustment amount according to the slide starting point and the slide ending point of the slide adjusting item portion event; and
adjusting the camera parameters according to the camera parameter adjustment amount.
8. The method of claim 1, further comprising:
monitoring whether a target object part selection event occurs or not;
in response to an occurrence of a target item location selection event, determining a target selection location on the target item based on the target item location selection event;
determining corresponding text introduction information according to the target selected part; and
displaying the text introduction information at the target selected part.
9. The method of claim 1, wherein determining a target item further comprises:
receiving a scene map and displaying the scene map;
monitoring whether a target object selection event occurs; and
in response to occurrence of a target item selection event, determining the target item based on the target item selection event.
10. A terminal device comprising one or more processors, memory; and one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, the programs comprising instructions for performing the method of any of claims 1-9.
11. A system for three-dimensionally displaying an item, comprising:
the terminal device of claim 10, configured to: sending a three-dimensional article display request of a target article to a server; and
a server configured to: return, according to the three-dimensional article display request, the three-dimensional display code of the target article to the terminal device, so that the terminal device runs the three-dimensional display code to display a three-dimensional picture of the target article.
12. The system of claim 11, wherein the terminal device is configured to: scanning the identification of the target item; and sending a three-dimensional display item request of the target item to the server based on the identification of the target item;
the server configured to: and determining the target object according to the identification of the target object.
13. The system of claim 11, wherein the terminal device is configured to: receiving positioning information of the terminal equipment; sending a three-dimensional article display request carrying the positioning information of the terminal equipment to the server;
the server configured to: determining the position of the terminal equipment in a scene map according to the positioning information of the terminal equipment; and determining the target object according to the position of the terminal equipment in the scene map.
14. A non-transitory computer-readable storage medium containing a computer program which, when executed by one or more processors, causes the processors to perform the method of any one of claims 1-9.
15. A computer program product comprising computer program instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1-9.
CN202111095021.2A 2021-09-17 2021-09-17 Method for three-dimensionally displaying an article and associated device Pending CN113742507A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111095021.2A CN113742507A (en) 2021-09-17 2021-09-17 Method for three-dimensionally displaying an article and associated device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111095021.2A CN113742507A (en) 2021-09-17 2021-09-17 Method for three-dimensionally displaying an article and associated device

Publications (1)

Publication Number Publication Date
CN113742507A true CN113742507A (en) 2021-12-03

Family

ID=78739713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111095021.2A Pending CN113742507A (en) 2021-09-17 2021-09-17 Method for three-dimensionally displaying an article and associated device

Country Status (1)

Country Link
CN (1) CN113742507A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115664935A (en) * 2022-10-21 2023-01-31 圣名科技(广州)有限责任公司 Method and device for realizing alarm based on target framework, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
US10863168B2 (en) 3D user interface—360-degree visualization of 2D webpage content
US9437038B1 (en) Simulating three-dimensional views using depth relationships among planes of content
US11003305B2 (en) 3D user interface
US9183672B1 (en) Embeddable three-dimensional (3D) image viewer
CN111414225B (en) Three-dimensional model remote display method, first terminal, electronic device and storage medium
US9530243B1 (en) Generating virtual shadows for displayable elements
CN103472985A (en) User editing method of three-dimensional (3D) shopping platform display interface
JP7432005B2 (en) Methods, devices, equipment and computer programs for converting two-dimensional images into three-dimensional images
CN107861711B (en) Page adaptation method and device
US10623713B2 (en) 3D user interface—non-native stereoscopic image conversion
Murru et al. Practical augmented visualization on handheld devices for cultural heritage
CN114638939A (en) Model generation method, model generation device, electronic device, and readable storage medium
CN113742507A (en) Method for three-dimensionally displaying an article and associated device
CN116243831B (en) Virtual cloud exhibition hall interaction method and system
CN110990106B (en) Data display method and device, computer equipment and storage medium
US20200226833A1 (en) A method and system for providing a user interface for a 3d environment
CN107038176B (en) Method, device and equipment for rendering web graph page
CN114913277A (en) Method, device, equipment and medium for three-dimensional interactive display of object
Lu et al. Design of immersive and interactive application based on augmented reality and machine learning
CN117576359B (en) 3D model construction method and device based on Unity webpage platform
CN114935977B (en) Spatial anchor point processing method and device, electronic equipment and storage medium
Birk et al. User-position aware adaptive display of 3D data without additional stereoscopic hardware
Ansal et al. Product Design Using Virtual Reality
Murru et al. Augmented Visualization on Handheld Devices for Cultural Heritage
CN117742677A (en) XR engine low-code development platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination