CN111949904A - Data processing method and device based on browser and terminal - Google Patents

Data processing method and device based on browser and terminal

Info

Publication number
CN111949904A
Authority
CN
China
Prior art keywords
target object
behavior data
display
model
operation behavior
Prior art date
Legal status
Pending
Application number
CN201910407796.5A
Other languages
Chinese (zh)
Inventor
罗刚
肖渊
Current Assignee
Shenzhen Yayue Technology Co ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910407796.5A
Publication of CN111949904A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/95 - Retrieval from the web
    • G06F 16/957 - Browsing optimisation, e.g. caching or content distillation
    • G06F 16/9577 - Optimising the visualization of content, e.g. distillation of HTML documents
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Abstract

Embodiments of the present invention disclose a browser-based data processing method, apparatus, and terminal. The method includes: in response to an access request for a webpage identifier in a browser, acquiring an object model corresponding to the webpage identifier, collecting environmental image data with a camera, and displaying the object model and the environmental image data in augmented reality in a display page of the browser; acquiring first operation behavior data for the terminal device to which the browser belongs, switching the display area of the interior scene of the object model shown in the display page according to the first operation behavior data, and determining a target object from the display area; and acquiring second operation behavior data for the target object, and displaying the target object in the display page according to the second operation behavior data. With the embodiments of the present invention, the ways in which exhibits can be displayed are enriched and browsing efficiency is improved.

Description

Data processing method and device based on browser and terminal
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a data processing method and apparatus based on a browser, and a terminal.
Background
With the development of the internet, more and more exhibits are displayed online, so that visitors can view them anytime and anywhere.
In the prior art, an exhibit is presented online by photographing the real exhibit from every viewing angle and uploading the captured picture data together with a textual description of the exhibit, so that visitors can view the exhibit information online. However, presenting only pictures and textual descriptions makes the display form of the exhibits too limited; moreover, a visitor who wants to see the complete appearance of an exhibit and of the exhibition hall has to look through a large amount of picture data and text for both, so browsing efficiency is low.
Disclosure of Invention
Embodiments of the present invention provide a browser-based data processing method and apparatus, which can enrich the display modes of exhibits and improve browsing efficiency.
An embodiment of the present invention provides a data processing method based on a browser, including:
in response to an access request for a webpage identifier in a browser, acquiring an object model corresponding to the webpage identifier, collecting environmental image data with a camera, and displaying the object model and the environmental image data in augmented reality in a display page of the browser;
acquiring first operation behavior data for the terminal device to which the browser belongs, switching the display area of the interior scene of the object model shown in the display page according to the first operation behavior data, and determining a target object from the display area; and
acquiring second operation behavior data for the target object, and displaying the target object in the display page according to the second operation behavior data.
The acquiring first operation behavior data for the terminal device to which the browser belongs, switching the display area of the interior scene of the object model shown in the display page according to the first operation behavior data, and determining a target object from the display area includes:
acquiring first operation behavior data for the terminal device to which the browser belongs, and determining a visual orientation parameter according to the first operation behavior data;
switching to a display area of the interior scene of the object model that matches the visual orientation parameter in the display page, the display area comprising at least one object; and
determining the selected target object from the at least one object in response to a selection trigger operation for the display area.
The acquiring first operation behavior data for the terminal device to which the browser belongs, and determining the visual orientation parameter according to the first operation behavior data includes:
acquiring first operation behavior data for the terminal device to which the browser belongs, and determining displacement increment information and rotation angle information corresponding to the terminal device according to the first operation behavior data; and
determining the visual orientation parameter according to the displacement increment information and the rotation angle information.
The acquiring first operation behavior data for the terminal device to which the browser belongs, and determining the visual orientation parameter according to the first operation behavior data includes:
acquiring first operation behavior data for the terminal device to which the browser belongs, and determining a tag type selected on a terminal screen based on the first operation behavior data; and
acquiring a visual orientation parameter matching the tag type from an orientation parameter table, the orientation parameter table comprising visual orientation parameters respectively corresponding to a plurality of tag types.
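By way of illustration only (not part of the claimed subject matter), the orientation parameter table can be pictured as a simple lookup structure. The following TypeScript sketch assumes hypothetical tag names and parameter values:

```typescript
// Hypothetical orientation parameter table: each tag type selected on the terminal
// screen maps to the visual orientation parameters used to switch the display area.
interface VisualOrientation {
  position: [number, number, number]; // viewpoint position inside the object model
  yawDeg: number;                     // horizontal viewing angle, in degrees
  pitchDeg: number;                   // vertical viewing angle, in degrees
}

const orientationParameterTable: Record<string, VisualOrientation> = {
  hallTerracotta: { position: [0, 1.6, -12], yawDeg: 0,  pitchDeg: 0 },
  hallPottery:    { position: [8, 1.6, -20], yawDeg: 90, pitchDeg: 0 },
};

// Return the visual orientation parameters matching the selected tag type, if any.
function lookupOrientation(tagType: string): VisualOrientation | undefined {
  return orientationParameterTable[tagType];
}
```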
Wherein the second operation behavior data comprises a rotation operation;
the obtaining second operation behavior data for the target object, and displaying the target object in the display page according to the second operation behavior data includes:
in response to the rotation operation for the target object, acquiring an object rotation speed and an object rotation angle corresponding to the target object;
determining an object orientation parameter corresponding to the target object according to the object rotation speed and the object rotation angle; and
displaying the target object in the display page according to the object orientation parameter.
Wherein the second operation behavior data comprises a detail trigger operation;
the obtaining second operation behavior data for the target object, and displaying the target object in the display page according to the second operation behavior data includes:
in response to the detail trigger operation for the target object, invoking detailed description information corresponding to the target object; and
creating an information display window covering the target object, and displaying the detailed description information in the information display window.
Wherein the second operation behavior data comprises a disassembly trigger operation or a combination trigger operation;
the obtaining second operation behavior data for the target object, and displaying the target object in the display page according to the second operation behavior data includes:
in response to the disassembly trigger operation for the target object, creating an animation display area in the display page, and displaying a component disassembly animation corresponding to the target object in the animation display area; and
in response to the combination trigger operation for the disassembled target object, displaying a component combination animation corresponding to the target object in the animation display area.
Wherein the method further comprises:
if the object model is encapsulated as a first shared webpage identifier, sending the first shared webpage identifier to an interaction platform, so that a target terminal in the interaction platform accesses the object model through the first shared webpage identifier; and
if the target object is encapsulated as a second shared webpage identifier, sending the second shared webpage identifier to the interaction platform, so that the target terminal in the interaction platform accesses the target object through the second shared webpage identifier.
Wherein the method further comprises:
obtaining three-dimensional model data of a plurality of objects and three-dimensional model data of a container bearing the plurality of objects, and generating, with a three-dimensional engine, an object model that corresponds to the three-dimensional model data and includes the plurality of objects and the container.
The three-dimensional engine comprises a webpage three-dimensional engine and a visual three-dimensional engine;
the generating, with the three-dimensional engine, an object model that corresponds to the three-dimensional model data and includes the plurality of objects and the container includes:
in the visual three-dimensional engine, obtaining a display effect model corresponding to the three-dimensional model data, converting the display effect model into a format type corresponding to the webpage three-dimensional engine, and inputting the format-converted display effect model into the webpage three-dimensional engine; and
in the webpage three-dimensional engine, generating an object model corresponding to the display effect model based on an augmented reality control in the browser, and outputting a webpage identifier corresponding to the object model.
An embodiment of the present invention provides a data processing apparatus based on a browser, including:
the response request module is used for responding to an access request for a webpage identifier in a browser, acquiring an object model corresponding to the webpage identifier, collecting environment image data with a camera, and performing augmented reality display on the object model and the environment image data in a display page of the browser;
the object determination module is used for acquiring first operation behavior data for the terminal device to which the browser belongs, switching the display area of the interior scene of the object model shown in the display page according to the first operation behavior data, and determining a target object from the display area;
and the object display module is used for acquiring second operation behavior data for the target object and displaying the target object in the display page according to the second operation behavior data.
Wherein the object determination module comprises:
the visual orientation parameter determining unit is used for acquiring first operation behavior data for the terminal device to which the browser belongs and determining a visual orientation parameter according to the first operation behavior data;
the display area display unit is used for switching to a display area of the interior scene of the object model that matches the visual orientation parameter in the display page, the display area comprising at least one object;
and the selection operation response unit is used for responding to a selection trigger operation for the display area and determining the selected target object from the at least one object.
Wherein the visual orientation parameter determination unit comprises:
the information acquisition subunit is configured to acquire first operation behavior data for a terminal device to which the browser belongs, and determine, according to the first operation behavior data, displacement increment information and rotation angle information corresponding to the terminal device;
and the first determining subunit is used for determining the visual orientation parameter according to the displacement increment information and the rotation angle information.
Wherein the visual orientation parameter determination unit comprises:
the tag type obtaining subunit is configured to obtain first operation behavior data for a terminal device to which the browser belongs, and determine a tag type selected on a terminal screen based on the first operation behavior data;
the second determining subunit is used for acquiring the visual orientation parameter matching the tag type from the orientation parameter table; the orientation parameter table comprises visual orientation parameters respectively corresponding to a plurality of tag types.
Wherein the second operational behavior data comprises a rotation operation;
the object display module includes:
a rotation operation response unit for acquiring an object rotation speed and an object rotation angle corresponding to the target object in response to a rotation operation for the target object;
an object orientation parameter determining unit, configured to determine an object orientation parameter corresponding to the target object according to the object rotation speed and the object rotation angle;
and the target object display unit is used for displaying the target object in the display page according to the object orientation parameter.
Wherein the second operation behavior data comprises a detail trigger operation;
the object display module includes:
a detail operation response unit, configured to respond to a detail trigger operation for the target object and invoke detail description information corresponding to the target object;
and the detail description display unit is used for creating an information display window covering the target object, and displaying the detail description information in the information display window.
Wherein the second operation behavior data comprises a disassembly trigger operation or a combination trigger operation;
the object display module includes:
a disassembly operation response unit, configured to respond to a disassembly trigger operation for the target object, create an animation display area in the display page, and display an assembly disassembly animation corresponding to the target object in the animation display area;
and the combined operation response unit is used for responding to the combined trigger operation aiming at the disassembled target object and displaying the component combined animation corresponding to the target object in the animation display area.
Wherein the apparatus further comprises:
the sharing module is used for sending the first shared webpage identifier to an interaction platform if the object model is packaged into the first shared webpage identifier, so that a target terminal in the interaction platform can access the object model through the first shared webpage identifier;
the sharing module is further configured to send the second shared webpage identifier to the interaction platform if the target object is encapsulated as the second shared webpage identifier, so that the target terminal in the interaction platform accesses the target object through the second shared webpage identifier.
Wherein the apparatus further comprises:
the model generation module is used for acquiring three-dimensional model data of a plurality of objects and three-dimensional model data of containers bearing the objects, and generating an object model which comprises the objects and the containers and corresponds to the three-dimensional model data according to a three-dimensional engine.
The three-dimensional engine comprises a webpage three-dimensional engine and a visual three-dimensional engine;
the model generation module includes:
the format conversion unit is used for acquiring a display effect model corresponding to the three-dimensional model data in the visual three-dimensional engine, converting the display effect model into a format type corresponding to the webpage three-dimensional engine, and inputting the display effect model after format conversion into the webpage three-dimensional engine;
and the webpage identification output unit is used for generating an object model corresponding to the display effect model based on the augmented reality control in the browser in the webpage three-dimensional engine and outputting the webpage identification corresponding to the object model.
An embodiment of the present invention provides a terminal, including: a processor and a memory;
the processor is connected to a memory, wherein the memory is used for storing program codes, and the processor is used for calling the program codes to execute the method in one aspect of the embodiment of the invention.
An aspect of the present embodiments provides a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a processor, perform a method as in an aspect of the present embodiments.
In the embodiments of the present invention, in response to an access request for a webpage identifier in a browser, an object model corresponding to the webpage identifier is obtained, environmental image data is collected with a camera, and the object model and the environmental image data are displayed in augmented reality in a display page of the browser; by obtaining first operation behavior data of the user for the terminal device to which the browser belongs, the display area of the interior scene of the object model shown in the display page can be switched and a target object can be determined from the display area; second operation behavior data of the user for the target object can then be obtained, and the target object is displayed according to the second operation behavior data. Therefore, in the process of displaying model data (that is, a target object or an object model), augmented reality technology can be used to display the environmental image data and the model data together in the display page of the browser, the model data can be displayed in response to the user's trigger operations on it, and the display modes of the model data are thereby enriched. The user can operate directly on the three-dimensional model data of an object in the display page of the browser to view the overall appearance of the object and its detailed description information, so browsing efficiency can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention or in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and a person skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a scene of a browser-based data processing method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a browser-based data processing method according to an embodiment of the present invention;
Figs. 3a-3c are schematic interface diagrams of displaying a target object based on second operation behavior data according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of another browser-based data processing method according to an embodiment of the present invention;
Figs. 5a-5c are schematic interface diagrams of a method for obtaining an object model according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a light rendering implementation according to an embodiment of the present invention;
Fig. 7 is a flowchart of an engine implementation method according to an embodiment of the present invention;
Figs. 8a and 8b are schematic interface diagrams of a browser-based data processing method according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a browser-based data processing apparatus according to an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of a scene of a browser-based data processing method according to an embodiment of the present invention. Because building a physical exhibition hall is expensive, and visiting exhibits in a physical hall costs a remote visitor a great deal of time and money, an online museum can be created, which reduces cost and saves visitors time and money. As shown in fig. 1, the interface 10a in the terminal device 100a may be a home page of an online museum. The online museum may employ Augmented Reality (AR) technology so that a person can browse inside the museum; it may also be called an AR online museum (that is, the museum may be realized as an online scene through virtual reality technology). The user may click the browser portal 10b on the interface 10a, so that the terminal device 100a generates an access request according to the user's click operation. After generating the access request for the online museum, the terminal device 100a may send the access request to a background server corresponding to a browser (e.g., the QQ browser); the background server returns the online museum data requested by the access request to the terminal device 100a, so that the terminal device 100a can enter the display page 13a of the online museum. Of course, the user may also scan the two-dimensional code 10c in the interface 10a with other software in the terminal device 100a (such as WeChat or QQ), so that the terminal device 100a generates an access request for the online museum; the terminal device 100a may send the generated access request to the background server corresponding to that software, and after the online museum data requested by the access request is returned by that background server, the user can enter the display page 13a of the online museum from that software. The terminal device 100a may obtain the three-dimensional object model 12a in the online museum data, call the camera of the terminal device 100a through the browser, collect the environmental image data 11a (i.e., a real-time image of the scene) with the camera, superimpose the three-dimensional object model 12a onto the environmental image data 11a using augmented reality technology, and display the superimposed image on the display page 13a. AR technology calculates the position and angle of the camera image in real time and adds corresponding images, videos, and three-dimensional models; its aim is to overlay a virtual world on the real world on the screen of the terminal device for interaction, that is, to superimpose three-dimensional display information at the corresponding position of the real-time image for display.
For the three-dimensional object model 12a of the online museum, different exhibition halls (also referred to as exhibition areas) in the three-dimensional object model 12a may be indicated by different identification information. For example, the first exhibition hall of the online museum, which may be used to show the Qin Shi Huang terracotta warriors, may be indicated with the identifier 13b; the second exhibition hall of the online museum, which may be used to show Gansu painted pottery and the like, may be indicated with the identifier 13c. The device may enter an exhibition hall inside the online museum according to the position information of the terminal device 100a while the user walks (i.e., in response to the user's operation behavior data, the device can simulate the user entering an exhibition hall of the online museum along the path the user actually walks), or it may enter an exhibition hall directly through an identifier selected by the user. An exhibition hall may include not only the three-dimensional model data corresponding to the exhibits but also the design structure of the hall inside the online museum, such as the exhibition stands carrying the exhibits and the interior decoration, so the display page 13a can show the internal structure of an exhibition hall inside the online museum (including the three-dimensional models of the exhibits and the hall design structure). In addition, the display page 13a may display the three-dimensional model data of the exhibits in the exhibition hall superimposed on the environmental image data (for example, the internal structure of the exhibition hall can be made transparent so that the environmental image data is shown in the background); fig. 1 takes as an example displaying the three-dimensional model data of several exhibits in the exhibition hall with the environmental image data as the background information of the display page 13a. If the hall entered is the Qin terracotta warriors exhibition hall, a kneeling archer figurine 14a, a warrior figurine 14b, and the like may be displayed on the display page 13a. By acquiring the user's sliding operation and the like on the screen of the terminal device 100a, the target exhibit (i.e., the exhibit the user wants to view, which may also be called the target object) can be determined, for example the kneeling archer figurine 14a. The user can then operate on the target exhibit and view its corresponding information, such as its overall appearance and detailed description information. It should be noted that after the display page 13a is entered, the camera of the terminal device 100a may collect the environmental image data 11a in real time, that is, the collected environmental image data 11a is updated in real time as the user moves or operates the terminal device. In addition, if the user no longer needs to browse the currently displayed exhibition hall or target exhibit in the display page 13a, the user may click the "back" button in the display page 13a and make a new selection.
The terminal device 100a may include a mobile phone, a tablet computer, a notebook computer, a palm computer, a Mobile Internet Device (MID), a Point Of Sale (POS) machine, a wearable device (e.g., a smart watch, a smart bracelet, etc.), or other terminal devices with a browser installation function.
Fig. 2 is a schematic flowchart illustrating a data processing method based on a browser according to an embodiment of the present invention. As shown in fig. 2, the browser-based data processing method may include the steps of:
step S101, responding to an access request aiming at a webpage identifier in a browser, acquiring an object model corresponding to the webpage identifier, acquiring environment image data by adopting a camera, and performing augmented reality display on the object model and the environment image data in a display page of the browser;
Specifically, after the object model is created, a webpage identifier corresponding to the object model may be generated. A user may click or scan the webpage identifier so that the terminal device generates an access request; the terminal device may send the access request to a background server corresponding to the browser and receive the object model requested by the access request from the background server. The browser then calls the camera of the terminal device, collects the environmental image data in real time with the camera, superimposes the obtained object model on the corresponding position of the environmental image data (that is, the object model may cover part of the environmental image data), and displays the result in a display page of the browser. In other words, the display page of the browser may display both the real-time environmental image data and the virtual three-dimensional display information (i.e., the object model).
The object model may include three-dimensional model data of a plurality of objects and three-dimensional model data of a container carrying the plurality of objects. Taking an online museum as an example of the object model, the object model may include the three-dimensional model data corresponding to each of a plurality of cultural relics as well as a three-dimensional building model of the museum carrying those cultural relics. The webpage identifier may be a website address, a two-dimensional code, or both. When the webpage identifier is a website address, the user can input the address in a browser so that the terminal device generates an access request; when the webpage identifier is a two-dimensional code, the user can scan the two-dimensional code with a software application (such as an application with a two-dimensional code scanning function, e.g., WeChat) so that the terminal device generates an access request; when the webpage identifier includes both the website address and the two-dimensional code, the user can choose either to input the address in the browser or to scan the two-dimensional code with a software application, so that the terminal device generates an access request.
It should be noted that because the browser has permission to call the camera of the terminal device and an augmented reality software development kit is encapsulated in the browser's underlying implementation, after the user inputs the website address in the browser to make the terminal device generate the access request, the browser can call the camera of the terminal device and provide the augmented reality capability according to that access request. If the user scans the two-dimensional code with a software application that has no permission to call the camera of the terminal device, or whose underlying implementation does not encapsulate an augmented reality software development kit, the environmental image data cannot be collected in real time and the object model cannot be superimposed onto it; in that case only the object model data is displayed in the display page, but roaming through the interior scene of the object model can still be simulated with the gyroscope of the terminal device. For ease of description, the following description assumes that the browser has permission to call the camera of the terminal device and encapsulates an augmented reality software development kit.
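As an illustration of step S101 (not the claimed implementation), the following TypeScript sketch assumes three.js as the webpage three-dimensional engine and a glTF-packaged object model; the model URL and page elements are hypothetical:

```typescript
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

// Sketch: show the camera feed behind a transparent WebGL canvas and load the
// object model addressed by the webpage identifier, so both are shown together.
async function openObjectModel(modelUrl: string, video: HTMLVideoElement, canvas: HTMLCanvasElement) {
  // Collect environmental image data with the terminal camera (requires browser permission).
  video.srcObject = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'environment' },
  });
  await video.play();

  // A transparent renderer keeps the camera feed visible as the page background.
  const renderer = new THREE.WebGLRenderer({ canvas, alpha: true });
  renderer.setSize(window.innerWidth, window.innerHeight);
  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 1000);
  scene.add(new THREE.AmbientLight(0xffffff, 1));

  // Download the object model requested by the access request and overlay it.
  const gltf = await new GLTFLoader().loadAsync(modelUrl);
  scene.add(gltf.scene);

  renderer.setAnimationLoop(() => renderer.render(scene, camera));
}
```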
Step S102, acquiring first operation behavior data aiming at the terminal equipment to which the browser belongs, switching and displaying a display area in the inner scene of the object model in the display page according to the first operation behavior data, and determining a target object from the display area;
Specifically, after the object model and the environmental image data are displayed in augmented reality in the display page of the browser, the user may operate the terminal device to which the browser belongs, and during this operation the terminal device may obtain first operation behavior data of the user for the terminal device. The first operation behavior data may include, but is not limited to, a sliding operation on the screen of the terminal device with a finger, a stylus, or a mouse; a clicking operation on the screen with a finger, a stylus, or a mouse; a rotation operation on the terminal device; and a moving operation on the terminal device (a moving operation may refer to the user walking in a real scene while holding the terminal device; during a moving operation, a gyroscope in the terminal device may record the displacement information and angle information of the terminal device's movement as the user walks). According to the first operation behavior data, the display area of the interior scene of the object model shown in the display page can be switched, and the object the user wants to view (i.e., the target object) can be determined from the display area. For example, when the user performs a sliding operation on the screen of the terminal device, the terminal device may invoke the AR capability integrated in the browser to simulate the user walking around in the scene shown in the display page: sliding left on the screen simulates walking left in the scene, sliding right simulates walking right, and sliding up simulates walking forward. When the user rotates the terminal device, the change of the user's viewing angle in the scene shown in the display page can be simulated; the rotation angle of the terminal device can be recorded by its gyroscope, and the change of the viewing angle can be simulated according to that rotation angle. When the user moves the terminal device, the walking distance and walking direction of the user in the scene shown in the display page can be simulated according to the moving distance and moving direction of the terminal device in the real scene. Through these operations, the display area of the interior scene of the object model shown in the display page can be switched and the target object determined, that is, the process of the user entering the object model from outside it and determining the target object is simulated.
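A minimal sketch of how first operation behavior data (screen swipes and gyroscope rotation) could be mapped to the simulated viewpoint, again assuming three.js; the scaling factors and event handling are assumptions, not the patent's implementation:

```typescript
import * as THREE from 'three';

// Sketch: translate first operation behavior data into viewpoint changes.
// Swiping moves the simulated walking position; rotating the device turns the view.
function bindFirstOperationBehavior(camera: THREE.PerspectiveCamera, dom: HTMLElement) {
  let lastX = 0;
  let lastY = 0;

  dom.addEventListener('touchstart', (e) => {
    lastX = e.touches[0].clientX;
    lastY = e.touches[0].clientY;
  });

  dom.addEventListener('touchmove', (e) => {
    const dx = e.touches[0].clientX - lastX; // slide left/right: walk left/right
    const dy = e.touches[0].clientY - lastY; // slide up: walk forward
    lastX = e.touches[0].clientX;
    lastY = e.touches[0].clientY;
    camera.translateX(dx * 0.01);
    camera.translateZ(dy * 0.01);
  });

  // Gyroscope: rotation of the terminal device changes the viewing angle.
  window.addEventListener('deviceorientation', (e) => {
    if (e.alpha !== null && e.beta !== null) {
      camera.rotation.set(
        THREE.MathUtils.degToRad(e.beta - 90), // pitch
        THREE.MathUtils.degToRad(e.alpha),     // yaw
        0
      );
    }
  });
}
```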
Step S103, second operation behavior data aiming at the target object is obtained, and the target object is displayed in the display page according to the second operation behavior data.
Specifically, after the target object is determined, the target object may be displayed in the display page of the browser. The user may operate the target object displayed in the display page, and during this operation the terminal device may obtain second operation behavior data of the user for the target object. (The term "second operation behavior data" distinguishes it from the first operation behavior data in step S102: the second operation behavior data is the user's operation data for the target object, used to view and display the target object, whereas the first operation behavior data is the user's operation data for the terminal device, used to simulate the user walking in the virtual scene.) The target object is then displayed in the display page according to the second operation behavior data; for example, whether the back, the front, or a side of the target object is displayed in the display page may be determined according to the second operation behavior data.
Further, please refer to fig. 3a to fig. 3c, which are schematic interface diagrams for displaying a target object based on second operation behavior data according to an embodiment of the present invention. As shown in fig. 3a to fig. 3c, steps S201 to S206 are specific descriptions of step S103 in the embodiment corresponding to fig. 2, that is, steps S201 to S206 are specific flows for displaying a target object based on second operation behavior data according to the embodiment of the present invention.
Taking the example that the object model is an online museum, when the second operation behavior data includes a rotation operation, as shown in fig. 3a, displaying the target object based on the second operation behavior data may include the following steps:
step S201, responding to the rotation operation aiming at the target object, and acquiring the object rotation speed and the object rotation angle corresponding to the target object;
Specifically, the target object 32a may be displayed in the display page of the browser. If the user wants to view the target object 32a from all sides, the user may click the dot button in the area 31a corresponding to the target object 32a on the screen of the terminal device and, while pressing the dot button, rotate the finger to view the target object 32a with 360-degree rotation and zoom. When the user clicks the dot button in the area 31a and holds it down while rotating, the terminal device may respond to the rotation operation for the target object 32a by obtaining the object rotation speed and the object rotation angle corresponding to the target object 32a. The object rotation speed may be determined by the speed of the user's finger rotation: the faster the finger rotates, the faster the object rotates. The object rotation angle may be determined from the angle through which the user's finger rotates, and is proportional to it: if the finger rotation angle is denoted a, the object rotation angle may be expressed as k × a (where k is a value greater than 0). For example, when k is 4 and the finger rotation angle a is 22.5 degrees, the object rotation angle is 90 degrees.
Step S202, determining an object orientation parameter corresponding to the target object according to the object rotation speed and the object rotation angle, and displaying the target object in the display page according to the object orientation parameter;
Specifically, the object orientation parameter corresponding to the target object 32a may be determined according to the object rotation speed and the object rotation angle; the object orientation parameter may include the real-time rotation angle, rotation speed, and rotation direction of the target object 32a. The target object 32a may then be displayed in the display page according to the object orientation parameter. For example, if the object rotation angle is 4 times the finger rotation angle (i.e., k is 4), the target object 32a completes a full 360-degree rotation while the user's finger rotates 90 degrees. If the user's finger rotates 22.5 degrees clockwise on the screen of the terminal device, the display page can show an animation of the target object 32a rotating 90 degrees clockwise from its initial front view, so that when the finger has rotated 22.5 degrees, the side view corresponding to a 90-degree rotation of the target object 32a is displayed. If the user continues rotating until the finger rotation angle is 45 degrees, the display page can continue the animation of the target object 32a rotating clockwise from the 90-degree side view, so that when the finger has rotated 45 degrees, the back view corresponding to a 180-degree rotation of the target object 32a is displayed. Of course, if the user does not want to continue viewing the target object 32a, the user may click the "back" button in the display page to exit the display of the target object 32a.
Optionally, the object orientation parameter of the target object 32a may be preset. In that case, if the user wants to view the target object 32a from all sides, the user only needs to click the dot button in the area 31a corresponding to the target object 32a on the screen of the terminal device, and the target object 32a can then be viewed with 360-degree rotation and zoom. In other words, after the user clicks the dot button in the area 31a, the terminal device may respond to the rotation operation on the target object 32a by rotating it 360 degrees with the preset rotation angle, rotation speed, and rotation direction; that is, the 360-degree rotation animation of the target object 32a can be shown without pressing and rotating the dot button in the area 31a.
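The angle mapping described above amounts to a single multiplication; a tiny TypeScript sketch (k = 4 mirrors the worked example):

```typescript
// Object rotation angle is k times the finger rotation angle (k > 0).
function objectRotationAngle(fingerAngleDeg: number, k = 4): number {
  return k * fingerAngleDeg;
}

// Worked example from the text: with k = 4, a 22.5-degree finger rotation turns
// the target object 90 degrees, and 45 degrees of finger rotation turns it 180 degrees.
console.log(objectRotationAngle(22.5)); // 90
console.log(objectRotationAngle(45));   // 180
```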
Taking the object model as an online museum as an example, when the second operation behavior data includes the detail triggering operation, as shown in fig. 3b, displaying the target object based on the second operation behavior data may include the following steps:
step S203, responding to the detail triggering operation aiming at the target object, and calling the detail description information corresponding to the target object;
Specifically, if the user wants to know the detailed description information of the target object 32b, the user may swipe up on the area where the target object 32b is located in the display page (i.e., perform a detail trigger operation), so that the terminal device calls up the detailed description information corresponding to the target object 32b for display. When the user performs the detail trigger operation (such as an upward swipe in the area 33a), the terminal device may respond to the detail trigger operation for the target object 32b by calling up the detailed description information 34a corresponding to the target object 32b. For example, if the target object 32b is the kneeling archer among the Qin terracotta warriors, when the user swipes up on the target object 32b in the display page, the terminal device can call up the kneeling archer's detailed description, for instance: 'The kneeling archer was unearthed from the middle of the crossbowmen formation at the east end of Pit 2 of the Qin terracotta army. It wears a battle robe and armor, with a hair bun on the left side of the head; the left leg is bent in a squat and the right knee rests on the ground. The kneeling archer is modeled more finely than ordinary warrior figurines, with vivid rendering of its expression and of details such as the hair bun, armor plates, and shoe soles, and the original painted colors of the relic are exceptionally well preserved, so that the scene of the qi… .'
It should be noted that each target object has its own detailed description information, which may be prepared in advance and stored in a cloud database. After the terminal device responds to the detail trigger operation for a target object, the data corresponding to that target object may be read from the cloud database according to the identification information of the target object (e.g., its number; each target object corresponds to a unique number). Of course, the detailed description information may also be obtained from the internet in real time according to the name of the target object, which is not limited here.
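A minimal sketch of such a lookup; the endpoint and field names are hypothetical and only illustrate reading the detailed description information by the target object's unique number:

```typescript
// Detailed description information stored per target object (hypothetical shape).
interface DetailInfo {
  name: string;        // key information: name, year, size, place of unearthing...
  description: string; // detailed textual introduction
}

// Read the data corresponding to the target object from a cloud database,
// keyed by the object's unique identification number.
async function fetchDetailInfo(objectId: string): Promise<DetailInfo> {
  const resp = await fetch(`/api/exhibits/${objectId}/detail`); // hypothetical endpoint
  if (!resp.ok) {
    throw new Error(`failed to load detail info for ${objectId}`);
  }
  return resp.json() as Promise<DetailInfo>;
}
```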
Step S204, an information display window covering the target object is created, and the detailed description information is displayed in the information display window;
Specifically, after the terminal device calls up the detailed description information 34a corresponding to the target object 32b, an information display window 35a covering the target object 32b may be created on the display page, and the detailed description information 34a may be displayed in the information display window 35a. The information display window 35a may comprise two parts: an area 35b and an area 35c, where the area 35b may display the detailed introduction of the target object 32b and the area 35c may display key information of the target object 32b (such as its name, year, size, and place of unearthing). It can be understood that if the detailed introduction of the target object 32b is too long to be displayed in the area 35b of the information display window 35a, the user may turn pages in the area 35b to view the detailed description information of the target object 32b.
Taking the example that the object model is an online museum, when the second operation behavior data includes a disassembly trigger operation or a combination trigger operation, as shown in fig. 3c, displaying the target object based on the second operation behavior data may include the following steps:
step S205, responding to the disassembly triggering operation aiming at the target object, creating an animation display area in the display page, and displaying the assembly disassembly animation corresponding to the target object in the animation display area;
Specifically, for a functional target object 32c (for example, the target object 32c may be used as an oil lamp for lighting at night), the user may perform a disassembly trigger operation on the target object 32c in the display page. The disassembly trigger operation may be that the user long-presses the area where the target object 32c is located to pop up a menu and selects the disassembly option, so that the disassembly animation of the target object 32c is displayed; it may also be that the user clicks a disassembly trigger button (a disassembly trigger button is provided for every functional object) to display the disassembly animation of the target object 32c; it may also be that the user inputs a specific gesture (for example, extending a palm above the screen area of the display page where the target object 32c is located) to trigger the terminal device to display the disassembly animation of the target object 32c; or it may be any other operation for the target object (other than the rotation operation and the detail trigger operation described above), which is not limited here. When the user performs the disassembly trigger operation on the target object 32c, the terminal device may create an animation display area in the display page and display the component disassembly animation corresponding to the target object 32c in that area, that is, the group of components that make up the target object 32c, such as the component 36a, the component 36b, and the component 36c, can be obtained through three-dimensional simulation.
Step S206, responding to the combined trigger operation aiming at the disassembled target object, and displaying the component combined animation corresponding to the target object in the animation display area.
Specifically, after the component disassembly animation corresponding to the target object 32c is displayed in the animation display area, the user may perform a combination trigger operation on the disassembled target object 32c, where the combination trigger operation may be an operation performed on an area where the disassembled target object 32c is located, and the specific operation mode may refer to the description of the disassembly trigger operation, which is not described herein again. After the user performs a combination trigger operation on the disassembled target object 32c, the component combination animation of the target object 32c may be displayed in the animation display area, and the usage scenario of the target object 32c may be simulated. Through the component disassembly animation and the component combination animation, a user can intuitively know the use scene, the structural composition and the working principle of the target object 32 c. The component disassembly animation and the component combination animation corresponding to the target object 32c may be pre-made, and when the terminal device obtains the disassembly trigger operation of the user, the component disassembly animation corresponding to the target object 32c may be called and displayed in an animation display area; when the terminal device obtains the combination trigger operation of the user, the component combination animation corresponding to the target object 32c may be called and displayed in the animation display area.
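By way of illustration, playing such pre-made component animations in a web 3D engine could look like the following TypeScript sketch (three.js assumed; the clip names are hypothetical):

```typescript
import * as THREE from 'three';

// Sketch: play a pre-made "disassemble" or "assemble" animation clip for the
// target object. The patent only states that the component animations are made
// in advance and called when the corresponding trigger operation is obtained.
function playComponentAnimation(
  target: THREE.Object3D,
  clips: THREE.AnimationClip[],
  mode: 'disassemble' | 'assemble'
): THREE.AnimationMixer {
  const mixer = new THREE.AnimationMixer(target);
  const clip = THREE.AnimationClip.findByName(clips, mode);
  const action = mixer.clipAction(clip);
  action.setLoop(THREE.LoopOnce, 1);
  action.clampWhenFinished = true; // keep the final disassembled/combined pose
  action.play();
  return mixer; // the caller advances it with mixer.update(delta) in the render loop
}
```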
It should be noted that, for convenience of description, the first operation behavior data and the second operation behavior data described in the embodiments of the present invention assume that the display screen of the terminal device has a touch function. For a terminal device whose display screen does not have a touch function, any of the clicking or sliding operations described above as performed with a user's finger may instead be performed with another device (such as a mouse or a stylus), and details are not repeated here.
In the embodiments of the present invention, in response to an access request for a webpage identifier in a browser, an object model corresponding to the webpage identifier is obtained, environmental image data is collected with a camera, and the object model and the environmental image data are displayed in augmented reality in a display page of the browser; by obtaining first operation behavior data of the user for the terminal device to which the browser belongs, the display area of the interior scene of the object model shown in the display page can be switched and a target object can be determined from the display area; second operation behavior data of the user for the target object can then be obtained, and the target object is displayed according to the second operation behavior data. Therefore, in the process of displaying model data (that is, a target object or an object model), augmented reality technology can be used to display the environmental image data and the model data together in the display page of the browser, the model data can be displayed in response to the user's trigger operations on it, and the display modes of the model data are thereby enriched. The user can operate directly on the three-dimensional model data of an object in the display page of the browser to view the overall appearance of the object and its detailed description information, so browsing efficiency can be improved.
Fig. 4 is a schematic flowchart of another browser-based data processing method according to an embodiment of the present invention. As shown in fig. 4, the browser-based data processing method may include the steps of:
step S301, acquiring three-dimensional model data of a plurality of objects and three-dimensional model data of containers bearing the plurality of objects;
Specifically, for an object that exists as a real article, professional equipment can be used to acquire its three-dimensional model data directly, and the collected data is then imported into the terminal device. Fig. 5a is a schematic interface diagram of a method for obtaining an object model according to an embodiment of the present invention. As shown in fig. 5a, for the real object 42a, a professional device 43a (a device dedicated to acquiring three-dimensional model data) may be connected to the terminal device; the professional device 43a is used to acquire the three-dimensional model data of the real object 42a, the acquired data is imported into the terminal device, and the data acquired so far can be displayed in real time on the screen 41a of the terminal device. In other words, the professional device 43a scans the real object 42a from every direction and the acquired data is shown on the screen 41a in real time (for example, the three-dimensional model data 42b is part of the three-dimensional model data of the real object 42a); once the professional device 43a has scanned every direction of the real object 42a, the complete three-dimensional model data of the real object 42a is obtained. Because the three-dimensional model data acquired directly by professional equipment is too large to be displayed conveniently on a webpage, it needs to be optimized, namely by reducing the number of mesh faces of the three-dimensional model without noticeably affecting its appearance. (The mesh faces can be regarded as the way the three-dimensional model data is recorded: the more mesh faces the model has, the larger its data volume and the more detail it contains; the fewer mesh faces it has, the smaller its data volume and the less detail it contains.) Please refer to fig. 5b, which is a schematic interface diagram of a method for obtaining an object model according to an embodiment of the present invention. As shown in fig. 5b, the three-dimensional model 42c is the original three-dimensional model data acquired directly by professional equipment (data size 255M), and the three-dimensional model 42d is the three-dimensional model data obtained by optimizing the three-dimensional model 42c (data size 635K). To compare the original and optimized three-dimensional model data, the same portion of the three-dimensional model 42c (the portion 44a) and of the three-dimensional model 42d (the portion 44b) may be enlarged; it is apparent that the number of mesh faces of the portion 44a in the three-dimensional model 42c is greater than the number of mesh faces of the portion 44b in the three-dimensional model 42d, which achieves the effect of reducing the data volume.
For an object without a real object (i.e., an object that no longer exists and for which only text and picture data remain) and for the container carrying the plurality of objects, a three-dimensional model can be generated by the three-dimensional software Maya (a three-dimensional animation software). Maya may construct a three-dimensional model using polygon modeling, which is a common modeling approach; the modeling process is implemented by editing and modifying the various sub-objects of a polygon object. An editable polygon object may include sub-objects such as Vertex, Edge, and Polygon (face). In principle, the larger the number of polygon faces, the larger the data amount of the three-dimensional model. Because the final purpose of building the three-dimensional model is to display it on a web page and provide the user with real-time downloading and browsing on the web page, based on the current network download speed and the current test results, the data packet containing all the three-dimensional models needs to be controlled at about 20M (i.e., the download time is controlled within the waiting time acceptable to the user); therefore, the three-dimensional models need to be built under the constraint of a 20M data volume, so as to prevent the data volume of the three-dimensional models from being too large.
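For illustration only, the following sketch (which is not part of the embodiment; the download speed, waiting time, model count, and bytes-per-face values are all assumptions) shows how the roughly 20M data budget and a per-model mesh-face budget can be derived from an assumed network download speed and an acceptable waiting time:

```typescript
// Sketch: derive a total model-package budget and a per-model face budget.
// All constants below are illustrative assumptions, not values from the patent.
const downloadSpeedBytesPerSec = 2 * 1024 * 1024; // assumed average download speed: 2 MB/s
const acceptableWaitSeconds = 10;                 // assumed acceptable waiting time

// Total package budget (~20 MB under the assumptions above).
const packageBudgetBytes = downloadSpeedBytesPerSec * acceptableWaitSeconds;

// Split the budget across models and convert it into a rough face-count budget.
const modelCount = 30;   // assumed number of exhibits plus the container model
const bytesPerFace = 64; // rough assumption: vertices, normals, UVs per triangle
const perModelBudgetBytes = packageBudgetBytes / modelCount;
const perModelFaceBudget = Math.floor(perModelBudgetBytes / bytesPerFace);

console.log(`package budget ~= ${(packageBudgetBytes / (1024 * 1024)).toFixed(1)} MB`);
console.log(`per-model face budget ~= ${perModelFaceBudget} faces`);
```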
Taking an online museum as an example, the three-dimensional model data of the objects may refer to the three-dimensional model data corresponding to a plurality of cultural relics. For cultural relics for which a real object exists, the professional equipment of the museum may be used to directly acquire the three-dimensional model data of the cultural relic, and the original three-dimensional model data directly acquired by the professional equipment is optimized in Maya (i.e., the model mesh surfaces are reduced); for lost cultural relics, the lost cultural relic is restored with three-dimensional digitization technology according to its text introduction and picture data, that is, a three-dimensional model corresponding to the lost cultural relic is constructed. After the three-dimensional model data corresponding to all the cultural relics in the online museum is acquired, Maya is also used to generate the three-dimensional building model of the museum carrying all the cultural relics (namely, the container carrying the objects). In the process of constructing the three-dimensional building model, a wire-frame model of the museum building can be drawn first; a wire-frame model is a visual representation of an object in three-dimensional computer graphics that represents a geometric shape by the edges and vertices of the shape and thus reflects the outline of the museum. A wire-frame model cannot be colored, have its hidden lines and hidden surfaces removed, or be rendered. Surfaces can then be added on the basis of the wire-frame model to obtain a surface model (also called a prime model) of the museum; on the surface model, coloring, hidden-line and hidden-surface removal, and rendering can be performed, so that the visual effect of the real museum can be obtained. Fig. 5c is a schematic interface diagram of a method for obtaining an object model according to an embodiment of the present invention. As shown in fig. 5c, if the wire-frame model 45a represents the wire-frame model corresponding to the museum, the wire-frame model 45a reflects the outline of the museum through edges and vertices; a prime model 45b can be obtained by adding surfaces to the wire-frame model 45a, and on the prime model 45b, hidden lines and hidden surfaces in the model can be eliminated, and coloring and rendering can be further performed.
Step S302, in the visual three-dimensional engine, obtaining a display effect model corresponding to the three-dimensional model data, converting the display effect model into a format type corresponding to the webpage three-dimensional engine, and inputting the display effect model after format conversion into the webpage three-dimensional engine;
Specifically, after the three-dimensional model data of the plurality of objects and the three-dimensional model data of the container carrying the plurality of objects have been generated through Maya (the three-dimensional model data directly acquired by the professional equipment also needs to be optimized in Maya), the three-dimensional model data generated by Maya can be exported to a three-dimensional engine, development is carried out on the three-dimensional model data in the three-dimensional engine, the augmented reality capability encapsulated in the browser is integrated, and an online object exhibition can be realized by combining Web3D (network three-dimensional) and AR technology. The three-dimensional engine is a set of easy-to-use and efficient core components, developed on the basis of a graphics device interface, for rendering three-dimensional model data; Web3D is a technology, based on virtual reality, that performs virtual three-dimensional stereoscopic display of tangible objects in the real world and allows interactive browsing of them through the Internet. Compared with pictures and animations, the current mainstream display modes on the Internet, the Web3D technology gives users a feeling of autonomy while browsing: they can observe from their own angle, and many virtual special effects and interactive operations are available.
For the three-dimensional model data exported by Maya, light rendering can be carried out on the three-dimensional model data. Because rendering in the three-dimensional engine is real-time rendering, the fidelity of the lighting is not high and global illumination is not supported; therefore, a film-level renderer such as Arnold (rendering software) can be used to create the lighting, the lighting information is baked onto each map, and finally the light maps are called in the three-dimensional engine to simulate a realistic rendering effect. Fig. 6 is a schematic view of a light rendering implementation according to an embodiment of the present invention. As shown in fig. 6, taking an online museum as an example, a wire-frame model 51a may be generated in Maya, and surfaces may be added to the wire-frame model 51a to obtain a prime model 52a. The rough visual effect of the online museum is reflected in the prime model 52a; the prime model 52a is lit by a renderer 53a (e.g., Arnold), the lighting information is baked onto each map, and a light map 54a is output. In the three-dimensional engine, the light map 54a can be called to display the three-dimensional engine effect 55a.
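For illustration only, the following fragment shader sketch (embedded in a TypeScript string; the uniform and varying names are assumptions and this is not the actual shader code of the laya engine) shows how a baked light map can be combined with a base color texture at run time to simulate the offline lighting:

```typescript
// Illustrative fragment shader: multiply the base color by the baked light map.
// The uniform/varying names (u_baseMap, u_lightMap, v_uv, v_uv2) are assumed for this sketch.
const lightmapFragmentShader = `
  precision mediump float;
  uniform sampler2D u_baseMap;   // diffuse/albedo texture
  uniform sampler2D u_lightMap;  // light map baked offline by a film-level renderer
  varying vec2 v_uv;             // UV set for the base texture
  varying vec2 v_uv2;            // second UV set reserved for the light map
  void main() {
    vec4 baseColor = texture2D(u_baseMap, v_uv);
    vec3 bakedLight = texture2D(u_lightMap, v_uv2).rgb;
    gl_FragColor = vec4(baseColor.rgb * bakedLight, baseColor.a);
  }
`;

export { lightmapFragmentShader };
```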
To better achieve the effect of object display, the three-dimensional engine may include a web page three-dimensional engine (e.g., laya, an engine running on the web page side) and a visualization three-dimensional engine (e.g., unity, a game engine that provides a visual interface on which the three-dimensional model data exported from Maya can be organized into files and given its display effects). In other words, the terminal device may input the three-dimensional model data exported from Maya into the visualization three-dimensional engine, and perform file organization and effect creation on the three-dimensional model data by calling the light maps output by Arnold. In the visualization three-dimensional engine, each time one operation is executed (for example, a certain parameter is modified), the corresponding effect can be displayed on the screen of the terminal device; that is, during development, the effects of the three-dimensional model can be created according to the actually displayed effect, and a display effect model corresponding to the three-dimensional model data is obtained. Because the format of the display effect model in the visualization three-dimensional engine cannot be used directly by the web page three-dimensional engine, an export plug-in for the visualization three-dimensional engine, layaairunityplugin_beta.unitypackage (a unity engine plug-in export tool officially provided by the laya engine), can be used to convert the display effect model in the visualization three-dimensional engine into the format type corresponding to the web page three-dimensional engine, and the display effect model after format conversion is input into the web page three-dimensional engine.
Step S303, generating an object model corresponding to the display effect model based on an augmented reality control in the browser in the webpage three-dimensional engine, and outputting a webpage identifier corresponding to the object model;
Specifically, because the augmented reality software development kit is developed and encapsulated at the bottom layer of the browser, the web page three-dimensional engine can, during development, call the augmented reality software development kit (i.e., the augmented reality control) in the browser and the gyroscope of the terminal device, generate the object model corresponding to the display effect model, and output the web page identifier corresponding to the object model.
It should be noted that the object model may refer to an object model corresponding to a physical object, generated by shooting and collecting panoramic images of a real scene; it may also refer to an object model of an object without a real counterpart, constructed with three-dimensional production software.
Fig. 7 is a schematic flow chart of an engine implementation method according to an embodiment of the present invention. As shown in fig. 7, outputting the three-dimensional model 61a by the three-dimensional production software may refer to exporting the created three-dimensional models and the optimized three-dimensional models from Maya; the exported three-dimensional models are input into the unity engine (i.e., the visualization three-dimensional engine), and file organization 62a is performed on the three-dimensional models by the unity engine. In the process of the file organization 62a, shader development 63a is required (a shader is essentially a program, written in a specific programming language, that is executed on the graphics processor). In other words, when the unity engine performs the file organization 62a, programs need to be written in the shader programming language GLSL; after the unity engine completes the file organization 62a and the effect creation of the three-dimensional models, the three-dimensional model data written in GLSL and given its effects (i.e., the display effect model) can be converted in the three-dimensional engine shader step 64a, that is, converted into a format that the three-dimensional engine laya can read. The display effect model after format conversion is input into the laya engine for three-dimensional engine interactive development 65a. In the three-dimensional engine interactive development 65a, the laya engine may call the browser-integrated augmented reality capability 66a and the gyroscope 68a in the terminal device, and generate the object model corresponding to the display effect model; that is, the gyroscope 68a may be used to record the change of the lens when the terminal device is rotated and the sliding of the user's finger on the screen, so as to simulate the user roaming and walking in the object model, and a web site or a two-dimensional code 69a (which may be referred to as a web page identifier) is output. The user may conduct a browser augmented reality view 70a through the web site or two-dimensional code 69a, or simulate the augmented reality view 71a on another platform (a platform that does not encapsulate an augmented reality software development kit). The browser-integrated augmented reality capability 66a is implemented by the augmented reality software development kit 67a encapsulated by the browser's underlying development.
Step S304, responding to an access request aiming at a webpage identifier in a browser, acquiring an object model corresponding to the webpage identifier, acquiring environment image data by adopting a camera, and performing augmented reality display on the object model and the environment image data in a display page of the browser;
for a specific implementation process of step S304, reference may be made to the description of step S101 in the embodiment corresponding to fig. 2, which is not described herein again.
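For illustration only, the following sketch (using standard browser APIs rather than the encapsulated augmented reality software development kit; the element ids are assumptions) shows one way the environment image data collected by the camera can be displayed behind a transparent rendering canvas so that the object model appears overlaid on the real environment:

```typescript
// Sketch: show the camera's environment image data behind a transparent canvas
// on which the object model is rendered. Element ids are assumed for illustration.
async function startEnvironmentBackdrop(): Promise<void> {
  const video = document.getElementById('environment-video') as HTMLVideoElement;
  const canvas = document.getElementById('model-canvas') as HTMLCanvasElement;

  // Request the rear camera; the browser asks the user for permission.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'environment' },
    audio: false,
  });
  video.srcObject = stream;
  await video.play();

  // A transparent WebGL context lets the camera feed show through,
  // so the object model appears overlaid on the real environment.
  const gl = canvas.getContext('webgl', { alpha: true });
  if (gl) {
    gl.clearColor(0, 0, 0, 0); // fully transparent clear color
    gl.clear(gl.COLOR_BUFFER_BIT);
  }
}

startEnvironmentBackdrop().catch((err) => console.error('camera unavailable', err));
```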
Step S305, acquiring first operation behavior data of the terminal equipment to which the browser belongs, and determining the type of the selected tag in the terminal screen based on the first operation behavior data;
Specifically, in the display page of the browser, the object model and the environment image data may be displayed, and on the object model, the tag type corresponding to each display area of the object model may be displayed, such as the identifier 13b and the identifier 13c shown in fig. 1; the user may click the identifier 13b or the identifier 13c on the screen of the terminal device to which the browser belongs. When the user clicks the identifier 13b, the terminal device may obtain the click operation (i.e., the first operation behavior data) for the terminal device, and determine, according to the click operation, the tag type corresponding to the identifier 13b selected on the terminal screen; the tag type corresponding to the identifier 13b may be, for example, "Qin Shi Huang Terracotta Warriors".
Step S306, acquiring, from an orientation parameter table, the visual orientation parameter matched with the tag type; the orientation parameter table comprises visual orientation parameters respectively corresponding to a plurality of tag types;
Specifically, the terminal device may obtain the visual orientation parameter matched with the tag type from the stored orientation parameter table. The visual orientation parameter refers to an orientation parameter from the current display direction of the object model in the display page to the direction of a specific inner-scene display area of the object model; the orientation parameter table is a correspondence table between tag types and visual orientation parameters, that is, each tag type corresponds to one visual orientation parameter. For example, tag type 1 may correspond to the left hall entrance of the object model, and tag type 2 may correspond to the right hall entrance of the object model; the correspondence between the tag types and the visual orientation parameters is preset.
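For illustration only, the following sketch (the tag type names, parameter fields, and values are assumptions) shows the lookup of a visual orientation parameter from an orientation parameter table:

```typescript
// Sketch: map a selected tag type to a visual orientation parameter.
// Field names and sample values are assumed for illustration.
interface VisualOrientationParameter {
  yawDegrees: number;    // horizontal direction toward the target display area
  pitchDegrees: number;  // vertical direction toward the target display area
  areaId: string;        // identifier of the inner-scene display area to switch to
}

const orientationParameterTable = new Map<string, VisualOrientationParameter>([
  ['left-hall-entrance',  { yawDegrees: -45, pitchDegrees: 0, areaId: 'hall-left' }],
  ['right-hall-entrance', { yawDegrees: 45,  pitchDegrees: 0, areaId: 'hall-right' }],
]);

function lookupOrientation(tagType: string): VisualOrientationParameter | undefined {
  // Returns undefined when the tag type has no preset entry in the table.
  return orientationParameterTable.get(tagType);
}
```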
Optionally, the visual orientation parameter may also be determined according to a position change of the terminal device, which may be expressed as: acquiring the first operation behavior data for the terminal device to which the browser belongs, and determining displacement increment information and rotation angle information corresponding to the terminal device according to the first operation behavior data; and determining the visual orientation parameter according to the displacement increment information and the rotation angle information. In other words, the user may simulate entering the inner scene of the object model from outside the object model by moving or rotating the terminal device. When the user moves the terminal device (for example, walks with the terminal device in the real scene, or moves the terminal device by hand) or rotates the terminal device, the terminal device may obtain the movement or rotation operation (i.e., the first operation behavior data); according to the movement or rotation operation, the gyroscope may record the displacement increment information and the rotation angle information corresponding to the terminal device, and the visual orientation parameter may further be determined according to the displacement increment information and the rotation angle information. In brief, the orientation parameter with which the user enters the inner-scene display area of the object model can be determined from the displacement increment information and the rotation angle information corresponding to the terminal device.
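For illustration only, the following sketch (using the standard device orientation event of the browser; the accumulation logic is an assumption, and the actual embodiment obtains this information through the gyroscope called by the browser's augmented reality software development kit) shows how rotation angle information can be recorded and turned into a coarse visual orientation parameter:

```typescript
// Sketch: accumulate rotation angle information from the device's orientation sensor
// and turn it into a coarse visual orientation parameter. The logic is an assumption.
interface RotationAngles { alpha: number; beta: number; gamma: number; }

let lastAngles: RotationAngles | null = null;
let accumulatedYaw = 0; // horizontal rotation accumulated since the page was opened

window.addEventListener('deviceorientation', (event: DeviceOrientationEvent) => {
  const angles: RotationAngles = {
    alpha: event.alpha ?? 0, // rotation around the z-axis (compass-like heading)
    beta: event.beta ?? 0,   // front/back tilt
    gamma: event.gamma ?? 0, // left/right tilt
  };
  if (lastAngles) {
    accumulatedYaw += angles.alpha - lastAngles.alpha; // rotation angle increment
  }
  lastAngles = angles;
});

function currentVisualOrientation(): { yawDegrees: number } {
  // The accumulated yaw is used as a simple stand-in for the visual orientation parameter.
  return { yawDegrees: accumulatedYaw };
}
```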
Step S307, switching and displaying a display area matched with the visual orientation parameter in the inner scene of the object model in the display page; the display area comprises at least one object;
Specifically, after the visual orientation parameter is determined, the display page may be switched to display the inner-scene display area of the object model that matches the visual orientation parameter, and at least one object may be displayed in that display area. For example, if the object model is a teaching building, the visual orientation parameter may be expressed as the specific location information of a classroom, and the display area may be the classroom; entering the teaching building and finding the classroom door according to the visual orientation parameter then corresponds to standing at the classroom door, where at least one desk (each desk may be regarded as an object) is displayed.
Step S308, responding to the selection triggering operation aiming at the display area, and determining a selected target object from the at least one object;
Specifically, the user may select an object of interest as the target object from the plurality of objects displayed in the display area. When the user clicks the area where an object is located on the screen of the terminal device, or slides on the screen to circle an object, the terminal device may, in response to the selection trigger operation of the user for the display area (which may include the above click operation or slide operation), determine the target object selected by the user from the plurality of objects in the display area. For example, again taking the display area as a classroom: the student may directly designate the third desk in the third row (which may be regarded as the target object) as his or her seat, or may walk directly to the third desk in the third row and sit down, thereby determining that the third desk in the third row is the student's seat.
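For illustration only, the following sketch (the object list and the screen-space bounding rectangles are assumptions; a real web page three-dimensional engine would typically resolve the selection by ray casting against the three-dimensional scene) shows how a selection trigger operation can be resolved into a target object by a simple screen-space hit test:

```typescript
// Sketch: pick the target object whose screen-space bounding rectangle contains the tap point.
// The DisplayObject shape and the objects array are assumed for illustration.
interface DisplayObject {
  id: string;
  // Screen-space bounding rectangle of the object's current projection, in CSS pixels.
  rect: { x: number; y: number; width: number; height: number };
}

function pickTargetObject(objects: DisplayObject[], tapX: number, tapY: number): DisplayObject | null {
  for (const obj of objects) {
    const { x, y, width, height } = obj.rect;
    if (tapX >= x && tapX <= x + width && tapY >= y && tapY <= y + height) {
      return obj; // the first object hit by the tap becomes the target object
    }
  }
  return null; // the tap did not land on any object in the display area
}

document.addEventListener('click', (event: MouseEvent) => {
  const objectsInArea: DisplayObject[] = []; // would be filled from the current display area
  const target = pickTargetObject(objectsInArea, event.clientX, event.clientY);
  if (target) console.log('selected target object:', target.id);
});
```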
Step S309, acquiring second operation behavior data aiming at the target object, and displaying the target object in the display page according to the second operation behavior data;
the specific implementation process of step S309 may refer to the description of step S103 in the embodiment corresponding to fig. 2, or may refer to the description of step S201 to step S206 in the embodiments corresponding to fig. 3a to fig. 3c, which is not described herein again.
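For illustration only, the following sketch (the degrees-per-pixel scale and the damping factor are assumptions) shows how a rotation operation in the second operation behavior data can be translated into an object rotation angle and an object rotation speed, with simple inertia after the finger is lifted:

```typescript
// Sketch: translate a finger drag into an object rotation angle and rotation speed.
// The degrees-per-pixel scale and the damping factor are illustrative assumptions.
const DEGREES_PER_PIXEL = 0.25; // assumed scale from finger movement to rotation angle
const DAMPING = 0.95;           // assumed decay of the rotation speed after release

let objectYaw = 0;              // current object rotation angle (degrees)
let rotationSpeed = 0;          // object rotation speed (degrees per frame)
let lastPointerX: number | null = null;

document.addEventListener('pointerdown', (e: PointerEvent) => { lastPointerX = e.clientX; });
document.addEventListener('pointermove', (e: PointerEvent) => {
  if (lastPointerX === null) return; // only rotate while the finger is down
  const delta = (e.clientX - lastPointerX) * DEGREES_PER_PIXEL;
  objectYaw += delta;
  rotationSpeed = delta;
  lastPointerX = e.clientX;
});
document.addEventListener('pointerup', () => { lastPointerX = null; });

function onFrame(): void {
  if (lastPointerX === null) {
    objectYaw += rotationSpeed;  // inertia after the finger is lifted
    rotationSpeed *= DAMPING;
  }
  // objectYaw would be written into the target object's orientation parameter here.
  requestAnimationFrame(onFrame);
}
requestAnimationFrame(onFrame);
```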
Step S310, if the object model is packaged into a first shared webpage identifier, the first shared webpage identifier is sent to an interaction platform, so that a target terminal in the interaction platform accesses the object model through the first shared webpage identifier; and if the target object is packaged as a second shared webpage identifier, sending the second shared webpage identifier to the interaction platform so that the target terminal in the interaction platform accesses the target object through the second shared webpage identifier.
Specifically, when viewing a target object of particular interest, the user can share the target object or the object model to an interaction platform (such as a friend circle, a personal space, a microblog, or another platform). Please refer to fig. 8a and fig. 8b together, which are schematic interface diagrams of a browser-based data processing method according to an embodiment of the present invention. As shown in fig. 8a, for the target object 81a, when the user long-presses the area where the target object 81a is located, a menu window may be displayed in the area 82a of the display page, and the user may choose to share or favorite the target object 81a. When the user selects the sharing option in the area 82a, the terminal device may package the target object 81a as a first shared web page identifier and send the first shared web page identifier to the interaction platform, so that a friend on the interaction platform can view the target object 81a according to the first shared web page identifier; when the user selects the favorite option, the terminal device may store the first shared web page identifier corresponding to the target object 81a in the local storage of the browser. As shown in fig. 8b, for the object model 83a, when the user long-presses the area where the object model 83a is located, a menu window may be displayed in the area 84a of the display page, and the user may choose to share or favorite the object model 83a. When the user selects the sharing option in the area 84a, the terminal device may package the object model 83a as a second shared web page identifier and send the second shared web page identifier to the interaction platform, so that a friend on the interaction platform can view the object model 83a according to the second shared web page identifier; when the user selects the favorite option, the terminal device may store the second shared web page identifier corresponding to the object model 83a in the local storage of the browser.
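For illustration only, the following sketch (the URL shape and the storage key are assumptions; the actual embodiment sends the shared web page identifier to the interaction platform through the platform's own interface) shows how the sharing and favoriting behavior can be expressed with standard browser capabilities:

```typescript
// Sketch: package the target object as a shared web page identifier (here simply a URL),
// share it, or store it in the browser's local storage as a favorite.
// The URL shape and the storage key are illustrative assumptions.
function buildSharedWebPageId(objectId: string): string {
  return `https://example.com/exhibit?object=${encodeURIComponent(objectId)}`;
}

async function shareTargetObject(objectId: string, title: string): Promise<void> {
  const url = buildSharedWebPageId(objectId);
  if (typeof navigator.share === 'function') {
    // Web Share API: hands the link to the platform's share sheet.
    await navigator.share({ title, url });
  } else {
    console.log('share not supported; link:', url);
  }
}

function favoriteTargetObject(objectId: string): void {
  const key = 'favorited-shared-web-page-ids';
  const stored: string[] = JSON.parse(localStorage.getItem(key) ?? '[]');
  stored.push(buildSharedWebPageId(objectId));
  localStorage.setItem(key, JSON.stringify(stored));
}
```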
In the embodiment of the present invention, in response to an access request for a web page identifier in a browser, the object model corresponding to the web page identifier is obtained, environment image data is collected with the camera, and the object model and the environment image data are displayed in augmented reality in the display page of the browser; by obtaining the first operation behavior data of the user for the terminal device to which the browser belongs, a display area in the inner scene of the object model can be switched and displayed in the display page, and a target object can be determined from the display area; second operation behavior data of the user for the target object can then be obtained, and the target object is displayed according to the second operation behavior data. Therefore, in the process of displaying model data (namely, the target object or the object model), augmented reality technology can be adopted to display the environment image data and the model data together in the display page of the browser, the trigger operation of the user on the model data can be responded to, and the model data can be displayed correspondingly, so that the display modes of the model data are enriched; the user can directly operate the three-dimensional model data of an object in the display page of the browser to view the overall appearance of the object and its detailed description information, so that browsing efficiency can be improved; and the object model or the target object can be shared on the interaction platform, which facilitates the propagation of the object model.
Fig. 9 is a schematic structural diagram of a data processing apparatus based on a browser according to an embodiment of the present invention. As shown in fig. 9, the browser-based data processing apparatus 1 may include: a response request module 10, an object determination module 20, an object display module 30;
a response request module 10, configured to respond to an access request for a web page identifier in a browser, obtain an object model corresponding to the web page identifier, acquire environment image data by using a camera, and perform augmented reality display on the object model and the environment image data in a display page of the browser;
an object determining module 20, configured to obtain first operation behavior data for a terminal device to which the browser belongs, switch and display a display area in an internal view of the object model in the display page according to the first operation behavior data, and determine a target object from the display area;
and the object display module 30 is configured to acquire second operation behavior data for the target object, and display the target object in the display page according to the second operation behavior data.
The specific functional implementation manners of the response request module 10, the object determination module 20, and the object display module 30 may refer to steps S101 to S103 in the embodiment corresponding to fig. 2, which is not described herein again.
Referring to fig. 9, the browser-based data processing apparatus 1 may further include: a model generation module 40, a sharing module 50;
the model generation module 40 is configured to obtain three-dimensional model data of a plurality of objects and three-dimensional model data of containers carrying the plurality of objects, and generate, according to a three-dimensional engine, an object model including the plurality of objects and the containers, which corresponds to the three-dimensional model data;
the sharing module 50 is configured to send the first shared webpage identifier to an interaction platform if the object model is encapsulated as the first shared webpage identifier, so that a target terminal in the interaction platform accesses the object model through the first shared webpage identifier;
the sharing module 50 is further configured to send the second shared webpage identifier to the interaction platform if the target object is encapsulated as the second shared webpage identifier, so that the target terminal in the interaction platform accesses the target object through the second shared webpage identifier.
The specific functional implementation manner of the model generating module 40 may refer to steps S301 to S303 in the embodiment corresponding to fig. 4, and the specific functional implementation manner of the sharing module 50 may refer to step S310 in the embodiment corresponding to fig. 4, which is not described herein again.
Referring also to fig. 9, the object determination module 20 may include: a visual orientation parameter determining unit 201, a display area displaying unit 202, and a selection operation responding unit 203;
a visual orientation parameter determining unit 201, configured to acquire first operation behavior data for a terminal device to which the browser belongs, and determine a visual orientation parameter according to the first operation behavior data;
a display area display unit 202, configured to switch and display a display area, which is matched with the visual orientation parameter, in the inner scene of the object model in the display page; the display area comprises at least one object;
a selecting operation responding unit 203, configured to determine a selected target object from the at least one object in response to a selecting triggering operation for the presentation area.
The specific functional implementation manners of the visual orientation parameter determining unit 201, the display area displaying unit 202, and the selection operation responding unit 203 may refer to steps S305 to S308 in the embodiment corresponding to fig. 4, which is not described herein again.
Referring to fig. 9, the object display module 30 may include: a rotation operation response unit 301, an object orientation parameter determination unit 302, a target object display unit 303, a detailed operation response unit 304, a detailed description display unit 305, a disassembly operation response unit 306, and a combination operation response unit 307;
a rotation operation response unit 301, configured to obtain an object rotation speed and an object rotation angle corresponding to the target object in response to a rotation operation on the target object;
an object orientation parameter determining unit 302, configured to determine an object orientation parameter corresponding to the target object according to the object rotation speed and the object rotation angle;
a target object display unit 303, configured to display the target object in the display page according to the object orientation parameter;
a detail operation response unit 304, configured to respond to a detail trigger operation for the target object, and invoke the detail description information corresponding to the target object;
a detailed description display unit 305 configured to create an information presentation window covering the target object, and display the detailed description information in the information presentation window;
a disassembling operation responding unit 306, configured to respond to a disassembling trigger operation for the target object, create an animation display area in the display page, and display a component disassembling animation corresponding to the target object in the animation display area;
a combination operation responding unit 307, configured to respond to a combination trigger operation for the disassembled target object, and display a component combination animation corresponding to the target object in the animation display area.
For specific functional implementation manners of the rotation operation response unit 301, the object orientation parameter determining unit 302, the target object display unit 303, the detail operation response unit 304, the detailed description display unit 305, the disassembling operation responding unit 306, and the combination operation responding unit 307, reference may be made to steps S201 to S206 in the embodiments corresponding to fig. 3a to 3c, which are not described herein again. When the rotation operation response unit 301, the object orientation parameter determining unit 302, and the target object display unit 303 are performing their corresponding operations, the detail operation response unit 304, the detailed description display unit 305, the disassembling operation responding unit 306, and the combination operation responding unit 307 suspend performing operations; when the detail operation response unit 304 and the detailed description display unit 305 are performing their corresponding operations, the rotation operation response unit 301, the object orientation parameter determining unit 302, the target object display unit 303, the disassembling operation responding unit 306, and the combination operation responding unit 307 all suspend performing operations; and when the disassembling operation responding unit 306 and the combination operation responding unit 307 are performing their corresponding operations, the rotation operation response unit 301, the object orientation parameter determining unit 302, the target object display unit 303, the detail operation response unit 304, and the detailed description display unit 305 all suspend performing operations.
Referring also to fig. 9, the model generation module 40 may include: a format conversion unit 401, a display area webpage identifier output unit 402;
a format conversion unit 401, configured to obtain, in the visual three-dimensional engine, a display effect model corresponding to the three-dimensional model data, convert the display effect model into a format type corresponding to the web page three-dimensional engine, and input the display effect model after format conversion into the web page three-dimensional engine;
a web page identifier output unit 402, configured to generate, in the web page three-dimensional engine, an object model corresponding to the display effect model based on the augmented reality control in the browser, and output a web page identifier corresponding to the object model.
The specific functional implementation manners of the format conversion unit 401 and the display area webpage identifier output unit 402 may refer to steps S302 to S303 in the embodiment corresponding to fig. 4, which are not described herein again.
Referring to fig. 9 together, the visual orientation parameter determining unit 201 may include: an information obtaining subunit 2011, a first determining subunit 2012, a tag type obtaining subunit 2013, and a second determining subunit 2014;
an information obtaining subunit 2011, configured to obtain first operation behavior data for a terminal device to which the browser belongs, and determine, according to the first operation behavior data, displacement increment information and rotation angle information corresponding to the terminal device;
a first determining subunit 2012, configured to determine a visual orientation parameter according to the displacement increment information and the rotation angle information;
a tag type obtaining subunit 2013, configured to obtain first operation behavior data for a terminal device to which the browser belongs, and determine a tag type selected in a terminal screen based on the first operation behavior data;
a second determining subunit 2014, configured to obtain the visual orientation parameter matching the tag type from the orientation parameter table; the orientation parameter table comprises visual orientation parameters corresponding to a plurality of label types respectively.
For specific functional implementation manners of the information obtaining subunit 2011, the first determining subunit 2012, the tag type obtaining subunit 2013, and the second determining subunit 2014, reference may be made to steps S305 to S306 in the embodiment corresponding to fig. 4, which is not described herein again. When the information acquiring subunit 2011 and the first determining subunit 2012 execute the corresponding operation, the tag type acquiring subunit 2013 and the second determining subunit 2014 both suspend executing the operation; when the tag type acquiring subunit 2013 and the second determining subunit 2014 are executing corresponding operations, the information acquiring subunit 2011 and the first determining subunit 2012 suspend executing the operations.
In the embodiment of the present invention, in response to an access request for a web page identifier in a browser, the object model corresponding to the web page identifier is obtained, environment image data is collected with the camera, and the object model and the environment image data are displayed in augmented reality in the display page of the browser; by obtaining the first operation behavior data of the user for the terminal device to which the browser belongs, a display area in the inner scene of the object model can be switched and displayed in the display page, and a target object can be determined from the display area; second operation behavior data of the user for the target object can then be obtained, and the target object is displayed according to the second operation behavior data. Therefore, in the process of displaying model data (namely, the target object or the object model), augmented reality technology can be adopted to display the environment image data and the model data together in the display page of the browser, the trigger operation of the user on the model data can be responded to, and the model data can be displayed correspondingly, so that the display modes of the model data are enriched; the user can directly operate the three-dimensional model data of an object in the display page of the browser to view the overall appearance of the object and its detailed description information, so that browsing efficiency can be improved; and the object model or the target object can be shared on the interaction platform, which facilitates the propagation of the object model.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a terminal according to an embodiment of the present invention. As shown in fig. 10, the terminal 1000 may include: a processor 1001, a network interface 1004, and a memory 1005; in addition, the terminal 1000 may further include: a user interface 1003 and at least one communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display (Display) and a keyboard (Keyboard), and optionally may also include a standard wired interface and a standard wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory, or may be a non-volatile memory (e.g., at least one disk memory). The memory 1005 may optionally also be at least one storage device located remotely from the processor 1001. As shown in fig. 10, the memory 1005, which is a computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the terminal 1000 shown in fig. 10, the network interface 1004 may provide a network communication function; the user interface 1003 is an interface for providing a user with input; the processor 1001 may be configured to call a device control application stored in the memory 1005, so as to implement the description of the browser-based data processing method in the embodiment corresponding to any one of fig. 2, fig. 3a to fig. 3c, and fig. 4, which is not described herein again.
It should be understood that the terminal 1000 described in the embodiment of the present invention may perform the description of the browser-based data processing method in the embodiment corresponding to any one of the foregoing fig. 2, fig. 3a to fig. 3c, and fig. 4, and may also perform the description of the browser-based data processing apparatus 1 in the embodiment corresponding to the foregoing fig. 9, which is not described herein again. In addition, the beneficial effects of the same method are not described in detail.
Further, here, it is to be noted that: an embodiment of the present invention further provides a computer-readable storage medium, where a computer program executed by the aforementioned browser-based data processing apparatus 1 is stored in the computer-readable storage medium, and the computer program includes program instructions, and when the processor executes the program instructions, the description of the browser-based data processing method in the embodiment corresponding to any one of fig. 2, fig. 3a to fig. 3c, and fig. 4 can be executed, so that details are not repeated here. In addition, the beneficial effects of the same method are not described in detail. For technical details not disclosed in the embodiments of the computer-readable storage medium according to the present invention, reference is made to the description of the method embodiments of the present invention.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure is only a preferred embodiment of the present invention, and certainly cannot be used to limit the scope of the claims of the present invention; equivalent changes made in accordance with the claims of the present invention still fall within the scope of the present invention.

Claims (15)

1. A data processing method based on a browser is characterized by comprising the following steps:
responding to an access request for a webpage identifier in a browser, acquiring an object model corresponding to the webpage identifier, acquiring environmental image data by adopting a camera, and performing augmented reality display on the object model and the environmental image data in a display page of the browser;
acquiring first operation behavior data of terminal equipment to which the browser belongs, switching and displaying a display area in the inner scene of the object model in the display page according to the first operation behavior data, and determining a target object from the display area;
and acquiring second operation behavior data aiming at the target object, and displaying the target object in the display page according to the second operation behavior data.
2. The method according to claim 1, wherein the obtaining first operation behavior data for a terminal device to which the browser belongs, switching and displaying a display area in the inner scene of the object model in the display page according to the first operation behavior data, and determining a target object from the display area comprises:
acquiring first operation behavior data of terminal equipment to which the browser belongs, and determining a visual orientation parameter according to the first operation behavior data;
switching and displaying a display area matched with the visual orientation parameter in the inner scene of the object model in the display page; the display area comprises at least one object;
and determining the selected target object from the at least one object in response to a selection trigger operation aiming at the display area.
3. The method according to claim 2, wherein the obtaining first operation behavior data for a terminal device to which the browser belongs, and determining the visual orientation parameter according to the first operation behavior data comprises:
acquiring first operation behavior data of terminal equipment to which the browser belongs, and determining displacement increment information and rotation angle information corresponding to the terminal equipment according to the first operation behavior data;
and determining visual orientation parameters according to the displacement increment information and the rotation angle information.
4. The method according to claim 2, wherein the obtaining first operation behavior data for a terminal device to which the browser belongs, and determining the visual orientation parameter according to the first operation behavior data comprises:
acquiring first operation behavior data of terminal equipment to which the browser belongs, and determining a tag type selected in a terminal screen based on the first operation behavior data;
acquiring visual orientation parameters matched with the label types from an orientation parameter table; the orientation parameter table comprises visual orientation parameters corresponding to a plurality of label types respectively.
5. The method of claim 1, wherein the second operational behavior data comprises a rotation operation;
the obtaining second operation behavior data for the target object, and displaying the target object in the display page according to the second operation behavior data includes:
responding to the rotating operation aiming at the target object, and acquiring an object rotating speed and an object rotating angle corresponding to the target object;
determining an object orientation parameter corresponding to the target object according to the object rotation speed and the object rotation angle;
and displaying the target object in the display page according to the object orientation parameter.
6. The method of claim 1, wherein the second operational behavior data comprises detail triggering operations;
the obtaining second operation behavior data for the target object, and displaying the target object in the display page according to the second operation behavior data includes:
responding to the detail triggering operation aiming at the target object, and calling detail description information corresponding to the target object;
and creating an information display window covering the target object, and displaying the detailed description information in the information display window.
7. The method of claim 1, wherein the second operational behavior data comprises a tear down trigger operation or a combine trigger operation;
the obtaining second operation behavior data for the target object, and displaying the target object in the display page according to the second operation behavior data includes:
responding to disassembly triggering operation aiming at the target object, creating an animation display area in the display page, and displaying component disassembly animation corresponding to the target object in the animation display area;
and responding to the combined trigger operation aiming at the disassembled target object, and displaying the component combined animation corresponding to the target object in the animation display area.
8. The method of claim 1, further comprising:
if the object model is packaged into a first shared webpage identifier, sending the first shared webpage identifier to an interaction platform so that a target terminal in the interaction platform can access the object model through the first shared webpage identifier;
and if the target object is packaged as a second shared webpage identifier, sending the second shared webpage identifier to the interaction platform so that the target terminal in the interaction platform accesses the target object through the second shared webpage identifier.
9. The method of claim 1, further comprising:
the method comprises the steps of obtaining three-dimensional model data of a plurality of objects and three-dimensional model data of a container bearing the plurality of objects, and generating, according to a three-dimensional engine, an object model which corresponds to the three-dimensional model data and comprises the plurality of objects and the container.
10. The method of claim 9, wherein the three-dimensional engine comprises a web three-dimensional engine and a visualization three-dimensional engine;
generating an object model corresponding to the three-dimensional model data and including the plurality of objects and the container according to the three-dimensional engine, including:
in the visual three-dimensional engine, obtaining a display effect model corresponding to the three-dimensional model data, converting the display effect model into a format type corresponding to the webpage three-dimensional engine, and inputting the display effect model after format conversion into the webpage three-dimensional engine;
and in the webpage three-dimensional engine, generating an object model corresponding to the display effect model based on an augmented reality control in the browser, and outputting a webpage identifier corresponding to the object model.
11. A browser-based data processing apparatus, comprising:
the response request module is used for responding to an access request aiming at a webpage identifier in a browser, acquiring an object model corresponding to the webpage identifier, acquiring environment image data by adopting a camera, and performing augmented reality display on the object model and the environment image data in a display page of the browser;
the object determination module is used for acquiring first operation behavior data aiming at the terminal equipment to which the browser belongs, switching and displaying a display area in the internal scene of the object model in the display page according to the first operation behavior data, and determining a target object from the display area;
and the object display module is used for acquiring second operation behavior data aiming at the target object and displaying the target object in the display page according to the second operation behavior data.
12. The apparatus of claim 11, wherein the object determination module comprises:
the visual orientation parameter determining unit is used for acquiring first operation behavior data aiming at the terminal equipment to which the browser belongs and determining a visual orientation parameter according to the first operation behavior data;
the display area display unit is used for switching and displaying a display area matched with the visual orientation parameter in the inner scene of the object model in the display page; the display area comprises at least one object;
and the selection operation response unit is used for responding to the selection trigger operation aiming at the display area and determining the selected target object from the at least one object.
13. The apparatus of claim 12, wherein the visual orientation parameter determining unit comprises:
the information acquisition subunit is configured to acquire first operation behavior data for a terminal device to which the browser belongs, and determine, according to the first operation behavior data, displacement increment information and rotation angle information corresponding to the terminal device;
and the first determining subunit is used for determining the visual orientation parameter according to the displacement increment information and the rotation angle information.
14. A terminal, comprising: a processor and a memory;
the processor is connected to the memory, wherein the memory is used for storing a computer program, and the processor is used for calling the computer program to execute the method according to any one of claims 1-10.
15. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions which, when executed by a processor, perform the method according to any one of claims 1-10.
CN201910407796.5A 2019-05-16 2019-05-16 Data processing method and device based on browser and terminal Pending CN111949904A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910407796.5A CN111949904A (en) 2019-05-16 2019-05-16 Data processing method and device based on browser and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910407796.5A CN111949904A (en) 2019-05-16 2019-05-16 Data processing method and device based on browser and terminal

Publications (1)

Publication Number Publication Date
CN111949904A true CN111949904A (en) 2020-11-17

Family

ID=73335884

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910407796.5A Pending CN111949904A (en) 2019-05-16 2019-05-16 Data processing method and device based on browser and terminal

Country Status (1)

Country Link
CN (1) CN111949904A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020158905A1 (en) * 2001-03-14 2002-10-31 Giovanni Bazzoni System for the creation, visualisation and management of three-dimensional objects on web pages and a relative method
CN106803283A (en) * 2016-12-29 2017-06-06 东莞新吉凯氏测量技术有限公司 Interactive three-dimensional panorama multimedium virtual exhibiting method based on entity museum
CN108572772A (en) * 2018-03-27 2018-09-25 麒麟合盛网络技术股份有限公司 Image content rendering method and device
CN108553889A (en) * 2018-03-29 2018-09-21 广州汉智网络科技有限公司 Dummy model exchange method and device
CN109508090A (en) * 2018-11-06 2019-03-22 燕山大学 A kind of augmented reality display board system having interactivity

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419471A (en) * 2020-11-19 2021-02-26 腾讯科技(深圳)有限公司 Data processing method and device, intelligent equipment and storage medium
CN112419471B (en) * 2020-11-19 2024-04-26 腾讯科技(深圳)有限公司 Data processing method and device, intelligent equipment and storage medium
CN114564645A (en) * 2022-02-28 2022-05-31 北京字节跳动网络技术有限公司 Encyclopedic information display method, encyclopedic information display device, encyclopedic information display equipment and encyclopedic information display medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221114

Address after: 1402, Floor 14, Block A, Haina Baichuan Headquarters Building, No. 6, Baoxing Road, Haibin Community, Xin'an Street, Bao'an District, Shenzhen, Guangdong 518000

Applicant after: Shenzhen Yayue Technology Co.,Ltd.

Address before: 518057 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 floors

Applicant before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

TA01 Transfer of patent application right