CN118227000A - Product display method, device and medium based on AR technology - Google Patents

Product display method, device and medium based on AR technology

Info

Publication number
CN118227000A
CN118227000A (application number CN202410254239.5A)
Authority
CN
China
Prior art keywords
model
target model
marker
target
judging whether
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410254239.5A
Other languages
Chinese (zh)
Inventor
吴建
张平
姚路
肖潇
王镇
包忠聪
张辉
陈帅蒙
陶德明
赵鸿鑫
刘靖球
周根长
余果
韩英秋
陈恩泽
高鑫媛
陈铭琪
林旭
林淑娟
陈碧霞
黄杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou Gulou District Cultural Tourism Investment Development Co ltd
Fuzhou Survey Institute Co ltd
Original Assignee
Fuzhou Gulou District Cultural Tourism Investment Development Co ltd
Fuzhou Survey Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou Gulou District Cultural Tourism Investment Development Co ltd, Fuzhou Survey Institute Co ltd filed Critical Fuzhou Gulou District Cultural Tourism Investment Development Co ltd
Priority to CN202410254239.5A priority Critical patent/CN118227000A/en
Publication of CN118227000A publication Critical patent/CN118227000A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a product display method based on AR technology, comprising the following steps: loading the model resources required by an applet; scanning and identifying an object and judging whether the current object is a marker, entering step 3 if so and otherwise ending the flow; returning the ID of the marker, finding the corresponding target model among the model resources according to the marker's ID to serve as a virtual object, and rendering and displaying the target model on the applet page in real time; moving the target model in real time according to the tracked position and posture of the marker; adjusting the placement position, size and rotation angle of the target model on the mobile phone screen according to the condition of the target model; and realizing interaction between the user and the target model through clicks on the mobile phone screen. The invention also provides an electronic device and a computer-readable storage medium. The method injects new vitality into the inheritance and development of culture and is expected to promote the continued innovation and development of the cultural and creative industries.

Description

Product display method, device and medium based on AR technology
Technical Field
The invention relates to the technical field of product display, in particular to a product display method, device and medium based on an AR technology.
Background
Augmented reality technology (Augmented Reality, abbreviated AR below), a bridge between the virtual and the real world, has developed over several decades. With the advent of smartphones and AR glasses, the AR market entered a stage of explosive growth, and AR is now widely used across industries. In the geographic information industry, however, AR applications have concentrated mainly on map navigation; in fields such as map display and cultural tourism, AR technology still holds high application value. To better expand the dimensional space and industrial potential of cultural and creative products, the application of AR technology to such products needs to be further developed and perfected.
Chinese invention publication CN117234627A discloses an AR applet platform comprising a client and a server that exchange data through a network interface. The client contains an AR engine interface layer that calls the underlying functions of ARKit, ARCore and AREngine through a unified interface, shielding their development-interface differences, namely differences in development language, interface specification and calling method. The platform integrates the three AR engines on top of the Unity 3D engine and constructs an AR platform layer, so that upper-layer applications need not consider the differences between engines, reducing development difficulty and improving work efficiency. That solution, however, mainly addresses the interface differences among ARKit, ARCore and AREngine; it is not applied to cultural and creative products and does not expand their dimensional space or industrial potential.
Disclosure of Invention
Therefore, the invention aims to provide a product display method based on AR technology that injects new technological elements into cultural and creative products, improving their interactivity, interest, educational value and personalization while retaining their original cultural connotation, thereby injecting new vitality into the inheritance and development of culture and, it is hoped, promoting the continued innovation and development of the cultural and creative industries.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
the invention provides a product display method based on AR technology, which comprises the following steps:
Step 1, loading a plurality of model resources required by an applet;
Step 2, scanning and identifying an object, judging whether the current object is a marker, if so, entering step 3; otherwise, ending the flow;
Step 3, returning the ID of the marker, finding the corresponding target model among the model resources according to the marker's ID to serve as a virtual object, rendering the target model on the applet page in real time, and displaying the target model;
step 4, the target model moves in real time according to the tracked position and posture of the marker;
Step 5, according to the condition of the target model, adjusting the placement position, the size and the rotation angle of the target model on the screen of the mobile phone;
and 6, realizing interaction between the user and the target model by clicking a screen of the mobile phone.
Further, the step 1 specifically includes:
Step 11, installing an applet on a mobile phone, and listing a plurality of model resources required by the applet, wherein each model resource comprises a model number, model basic information, a video, logo and a picture;
step 12, judging whether the type of the model resource is gltf or glb format, if yes, entering step 13; if not, go to step 14;
Step 13, optimizing the model resource through the xr-frame-tool, and entering step 14;
step 14, loading the model resources related to the applet when the applet is started, and configuring a unique model ID for each model resource according to the model number corresponding to each loaded model resource;
And 15, displaying the loading condition of each model resource through a progress bar.
Further, the step 2 specifically includes:
Step 21, storing the identification picture for identifying the marker into a cloud;
step 22, opening a camera of the mobile phone, and scanning and identifying the object aimed at by the camera;
Step 23, framing and screenshot the scanned image;
step 24, sequentially uploading the screenshot of each frame to the cloud;
Step 25, the cloud matches the screenshot of each frame with the identification picture one by one, judges whether the current object is consistent with the marker, if so, identifies the current object as the marker, and enters step 3; otherwise, the flow is ended.
Further, the step 3 specifically includes:
Step 31, returning the ID of the marker to the target model; each marker ID has a matching model ID, and the matching model ID is found according to the ID of the marker;
step 32, determining corresponding model resources as a target model according to the model ID;
Step 33, rendering based on the rendering system in xr-frame, whose bottom layer provides a set of customizable RenderGraph to organize the whole rendering pipeline; xr-frame sorts the cameras by depth and, after sorting, culls the meshes and lights of each frame of images according to the visible angle and range of the cameras;
and step 34, rendering a target model serving as a virtual object on the applet page in real time by utilizing the geometric data and the materials of the grid, and displaying the target model.
Further, the step 4 specifically includes:
Step 41, the AR tracker tracks the position and the gesture of the marker in real time according to the ID of the marker, and returns coordinate axis information of the marker to the target model in real time;
and 42, calculating and executing corresponding coordinate axis information under the AR scene according to the returned coordinate axis information of the marker by the target model to obtain the position and the posture of the target model consistent with the position and the posture of the marker, so that the target model moves along with the marker.
Further, the step 5 specifically includes:
step 51, judging whether the target model is self-driven, if so, entering step 52, and otherwise, entering step 53;
Step 52, automatically creating an Animator component under the current element, adding the animation clips in the GLTF model to the Animator component, and entering step 55;
step 53, judging whether the target model has frame animation, if so, entering step 54; otherwise, go to step 55;
Step 54, changing the position attribute, scale attribute and rotation attribute of the transform component under the element of the frame animation, and entering step 55;
and 55, rendering the target model on a screen of the mobile phone and displaying the animation effect.
Further, the step 6 specifically includes:
step 61, if the user needs gesture interaction, the step 62 is entered; otherwise, ending the flow;
step 62, judging whether a screen clicking operation exists, if so, executing different operations on the target model according to different screen clicking operations, otherwise, ending the flow;
Step 63, automatically adapting the mesh and GLTF model under the element, and creating a model contour according to the shape of the mesh or the shape of the GLTF model, wherein the model contour covers the surface of the target model;
Step 64, judging from clicks on the model outline whether the target model has been clicked; if so, an introduction window and a navigation button for the target model pop up on the mobile phone screen, and the introduction window moves with the target model; otherwise, ending the flow;
Step 65, detecting whether the navigation button is clicked; if so, jumping to the map interface and automatically planning a route to the destination; otherwise, ending the flow;
step 66, judging the state of the target model, and if the current state of the target model is a display state and needs to be switched to a hidden state, clicking a display/hidden button to hide the target model; and if the current state of the target model is a hidden state and needs to be switched to a display state, clicking a display/hidden button to display the target model.
Further, the step 62 specifically includes:
Step 621, judging whether a screen clicking operation exists, if yes, entering step 622, otherwise, ending the flow;
Step 622, judging whether a single finger touches the screen, if so, performing a rotation operation on the target model; otherwise, go to step 623;
Step 623, judging whether the screen is touched by double fingers, if yes, performing scaling operation on the target model; otherwise, ending the flow;
step 624, judging whether to click a reset button, if so, returning to the initial state of the target model, otherwise, ending the flow.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the product display method based on the AR technology when executing the program.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a product display method based on AR technology as described above.
By adopting the technical scheme, compared with the prior art, the invention has the beneficial effects that:
1. Interactivity: with AR technology the model can interact with the user, who can thereby experience and understand the product more deeply. Before purchasing, the user can preview the product's creative points, production method, functions, uses and meaning, and learn more about its history and cultural background through the applet. This interaction lets the user experience and understand the stories behind the product across different times and spaces.
2. Interest: through the applet the user can randomly generate different scenic spots; this vivid and interesting mode of interaction makes the product more lively. The user can also learn more interesting stories and historical legends about the product through the applet, further increasing its appeal.
3. Educational value: by clicking on the randomly appearing attractions, the user obtains virtual presentations and educational content about the product's history, culture and stories. The user learns the related knowledge through interaction and experience, improving cultural literacy.
4. Convenience: through the applet, the user can directly scan an image or physical instance of the cultural and creative product with a mobile phone or tablet computer to obtain related information and services. This greatly improves the efficiency of information acquisition and lets the user learn a product's details and related services more conveniently. The applet also provides convenient services such as online purchase and personalized customization, making it easier to buy and use the products.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required by the embodiments or the description of the prior art are briefly described below. The drawings in the following description are obviously only some embodiments of the invention; other drawings may be obtained from them by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a flowchart illustrating an execution of a product display method based on AR technology according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a computer readable storage medium according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is specifically noted that the following examples are only for illustrating the present invention, but do not limit the scope of the present invention. Likewise, the following examples are only some, but not all, of the examples of the present invention, and all other examples, which a person of ordinary skill in the art would obtain without making any inventive effort, are within the scope of the present invention.
Referring to fig. 1, the product display method based on AR technology of the present invention includes the following steps:
Step 1, loading the model resources required by the applet. The model resources are preloaded onto the mobile phone so that they can be called directly later, without additional downloads, which improves the efficiency of calling, rendering and display.
In this embodiment, the step 1 specifically includes:
Step 11, installing the applet on a mobile phone and listing the model resources it requires, each comprising a model number, basic model information, a video, a logo and a picture; the model number, basic model information and video are data resources, while the logo (trademark mark) and picture are static resources;
Step 12, judging whether the model resource is in gltf or glb format; if so, it is a three-dimensional model whose data volume is relatively large and which therefore needs optimization, so step 13 is entered; if not, step 14 is entered;
Step 13, optimizing the model resource through the xr-frame-tool, which reduces loading time and memory use and relieves the loading pressure of the model resources; step 14 is then entered;
Step 14, loading the model resources related to the applet when the applet is started (when the user opens the applet), and configuring a unique model ID for each model resource according to its model number, for example: model number 1 receives model ID model1, and model number 2 receives model ID model2. At this point the models have only been downloaded to the handset but have not yet been rendered, i.e. the models are not displayed on the screen.
And 15, displaying the loading condition of each model resource through a progress bar, and intuitively observing the loading progress through the progress bar.
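Steps 11 to 15 can be sketched in plain JavaScript as follows. The function and field names (loadModelResources, needsOptimization, etc.) are illustrative, not from the patent, and the xr-frame-tool optimization step is represented only by a flag since the real tool runs outside the applet:

```javascript
// Decide whether a resource is a gltf/glb three-dimensional model (step 12).
function needsOptimization(resource) {
  const ext = resource.file.split(".").pop().toLowerCase();
  return ext === "gltf" || ext === "glb";
}

// Register the applet's model resources, assign unique model IDs from the
// model numbers (step 14), and report loading progress (step 15).
function loadModelResources(resources, onProgress) {
  const loaded = {};
  resources.forEach((res, i) => {
    if (needsOptimization(res)) {
      // Step 13: a real applet would run the resource through xr-frame-tool.
      res.optimized = true;
    }
    const modelId = "model" + res.number; // e.g. model number 1 -> model1
    loaded[modelId] = res;
    if (onProgress) onProgress((i + 1) / resources.length);
  });
  return loaded;
}
```

The returned map is what later steps consult when a marker ID must be resolved to a preloaded model.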
Step 2, scanning and identifying an object and judging whether the current object is a marker; if so, entering step 3, otherwise ending the flow. The object refers to an image corresponding to a cultural and creative product, and the marker refers to such an image bearing a special mark (for example, a picture or physical instance of a tea-picking scene tray). Through recognition of the cultural and creative product, the corresponding model can be found, and the product introduction information can be displayed and obtained through the model.
In this embodiment, the step 2 specifically includes:
Step 21, storing the identification picture for identifying the marker into a cloud;
step 22, opening a camera of the mobile phone, and scanning and identifying the object aimed at by the camera;
Step 23, framing and screenshot the scanned image;
step 24, sequentially uploading the screenshot of each frame to the cloud;
Step 25, the cloud matches the screenshot of each frame against the identification pictures one by one and judges whether the current object is consistent with a marker; if so, the current object is identified as the marker and step 3 is entered; otherwise, the flow ends. Matching frame by frame allows the similarities and differences between the screenshots and the identification pictures to be identified accurately, so the marker is found more quickly and reliably.
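The matching loop of steps 21 to 25 can be sketched as below. The similarity function is injected because the actual comparison (feature matching on images) happens in the cloud; identifyMarker and the threshold value are illustrative assumptions:

```javascript
// Match each captured frame against the stored marker pictures (step 25)
// and return the marker ID on the first hit, or null to end the flow.
function identifyMarker(frames, markerPictures, similarity, threshold = 0.9) {
  for (const frame of frames) {             // step 24: frames uploaded in order
    for (const marker of markerPictures) {  // step 25: one-by-one comparison
      if (similarity(frame, marker.picture) >= threshold) {
        return marker.id;                   // current object is this marker
      }
    }
  }
  return null;                              // no marker recognized
}
```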
Step 3, returning the ID of the marker, finding the corresponding target model among the model resources according to the marker's ID to serve as a virtual object, rendering the target model on the applet page in real time, and displaying the target model;
in this embodiment, the step 3 specifically includes:
Step 31, returning the ID of the marker to the target model; each marker ID has a matching model ID, and the matching model ID is found according to the ID of the marker;
step 32, determining corresponding model resources as a target model according to the model ID;
Step 33, rendering based on the rendering system in xr-frame, whose bottom layer provides a set of customizable RenderGraph to organize the whole rendering pipeline; xr-frame sorts the cameras by depth and, after sorting, culls the meshes and lights of each frame of images according to the visible angle and range of the cameras;
And step 34, rendering the target model as a virtual object on the applet page in real time using the geometric data and materials of the mesh, and displaying it. When the marker is scanned, its ID is obtained, the corresponding model ID is looked up, and the corresponding model is rendered and displayed on the applet page, where the user can watch the target model change dynamically in real time.
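The lookup in steps 31 and 32 is a two-stage table lookup, sketched here with an illustrative marker-to-model table (the table contents and findTargetModel name are assumptions, not from the patent):

```javascript
// Each marker ID has exactly one matching model ID (step 31).
const markerToModel = { "marker-1": "model1", "marker-2": "model2" };

// Resolve a marker ID to the preloaded target model (step 32), or null if
// the marker has no matching model or the model was never loaded.
function findTargetModel(markerId, loadedModels) {
  const modelId = markerToModel[markerId];
  if (!modelId) return null;
  return loadedModels[modelId] || null;
}
```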
Step 4, the target model moves in real time according to the tracked position and posture of the marker;
in this embodiment, the step 4 specifically includes:
Step 41, the AR tracker tracks the position and the gesture of the marker in real time according to the ID of the marker, and returns coordinate axis information of the marker to the target model in real time;
And 42, calculating and executing corresponding coordinate axis information under the AR scene according to the returned coordinate axis information of the marker by the target model to obtain the position and the posture of the target model consistent with the position and the posture of the marker, so that the target model moves along with the marker. Thus, the synchronization of the target model and the marker can be realized, and the interactivity is improved.
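Steps 41 and 42 amount to copying the tracked marker pose onto the target model each frame. The pose shape below (position plus Euler-angle rotation) is an assumed minimal form; a real xr-frame tracker supplies its own transform objects:

```javascript
// Keep the target model's position and posture consistent with the marker's
// (step 42), so the model moves along with the marker in the AR scene.
function syncModelToMarker(model, markerPose) {
  model.position = { ...markerPose.position }; // translation returned by tracker
  model.rotation = { ...markerPose.rotation }; // orientation returned by tracker
  return model;
}
```

In practice this runs once per tracked frame, with the AR tracker (step 41) supplying a fresh markerPose.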
Step 5, adjusting the placement position, the size and the rotation angle of the target model on the screen of the mobile phone according to the condition of the target model, so as to obtain an accurate and attractive layout;
In this embodiment, the step 5 specifically includes:
step 51, judging whether the target model is self-driven, if so, entering step 52, and otherwise, entering step 53;
Step 52, automatically creating an Animator component under the current element (the GLTF or GLB model currently being displayed) and adding the animation clips in the GLTF model to the Animator component; to play a model's animation it must be added to the Animator, after which an anim-autoplay attribute is added to the tag so that the animation in the GLTF model plays automatically. Step 55 is then entered;
step 53, judging whether the target model has frame animation, if so, entering step 54; otherwise, go to step 55;
Step 54, changing the position attribute, scale attribute and rotation attribute of the transform component under the element of the frame animation; these three parameters control the placement position, size and rotation angle at which the moving target model is displayed on the screen of the mobile phone. Step 55 is then entered;
and 55, rendering the target model on a screen of the mobile phone and displaying the animation effect.
Both self-contained animation and frame animation are forms of animation, i.e. the target model carries either a self-contained animation or a frame animation. A self-contained animation is an animation built into the three-dimensional model itself, while a frame animation is written in code inside the applet. For a self-contained animation, the anim-autoplay parameter must be set when the model is rendered so that the animation plays automatically. A frame animation is defined by a JSON-format frame-animation resource, which is referenced when the animation is played.
And 6, realizing interaction between the user and the target model by clicking a screen of the mobile phone.
In this embodiment, the step 6 specifically includes:
step 61, if the user needs gesture interaction, the step 62 is entered; otherwise, ending the flow;
step 62, judging whether a screen clicking operation exists, if so, executing different operations on the target model according to different screen clicking operations, otherwise, ending the flow;
in this embodiment, the step 62 specifically includes:
Step 621, judging whether a screen clicking operation exists, if yes, entering step 622, otherwise, ending the flow;
Step 622, judging whether a single finger touches the screen, if so, performing a rotation operation on the target model; otherwise, go to step 623;
Step 623, judging whether the screen is touched by double fingers, if yes, performing scaling operation on the target model; otherwise, ending the flow;
step 624, judging whether to click a reset button, if so, returning to the initial state of the target model, otherwise, ending the flow.
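Steps 621 to 624 form a gesture dispatch: one finger rotates the model, two fingers scale it, and a reset button restores the initial state. The event shape below (touches, deltaX, pinch, reset) is an assumed simplification of the applet's real touch events:

```javascript
// Map touch input to operations on the target model (steps 621-624).
function handleGesture(event, model) {
  if (event.reset) {                 // step 624: reset button returns the
    model.rotation = 0;              // model to its initial state
    model.scale = 1;
    return "reset";
  }
  if (event.touches === 1) {         // step 622: single finger rotates
    model.rotation += event.deltaX || 0;
    return "rotate";
  }
  if (event.touches === 2) {         // step 623: two fingers scale
    model.scale *= event.pinch || 1;
    return "scale";
  }
  return "none";                     // no recognized gesture: end the flow
}
```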
Step 63, automatically adapting to the mesh and GLTF model under the element and creating a model outline according to the shape of the mesh or of the GLTF model, the outline covering the surface of the target model so that its shape substantially conforms to the GLTF model or mesh. A click directly on the model itself cannot be detected; instead, a model outline is first created and laid over the model surface, and whether the model has been clicked is judged by clicks on the outline: if the finger's touch point lies within the range of the outline, the model is considered clicked.
Step 64, judging from clicks on the model outline whether the target model has been clicked; if so, an introduction window and a navigation button for the target model pop up on the mobile phone screen, and the introduction window moves with the target model; otherwise, ending the flow;
Step 65, detecting whether the navigation button is clicked; if so, jumping to the map interface and automatically planning a route to the destination; otherwise, ending the flow;
step 66, judging the state of the target model, and if the current state of the target model is a display state and needs to be switched to a hidden state, clicking a display/hidden button to hide the target model; and if the current state of the target model is a hidden state and needs to be switched to a display state, clicking a display/hidden button to display the target model.
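The outline-based hit test of steps 63 and 64 can be sketched with the outline approximated as a 2-D bounding box around the rendered model; makeOutline and the model field names are illustrative, and a real outline would conform more closely to the mesh shape:

```javascript
// Step 63: build a contour covering the rendered model's screen area.
function makeOutline(model) {
  return { x: model.x, y: model.y, w: model.width, h: model.height };
}

// Step 64: the tap counts as a click on the model if the touch point lies
// within the range of the model outline.
function isModelClicked(outline, tapX, tapY) {
  return tapX >= outline.x && tapX <= outline.x + outline.w &&
         tapY >= outline.y && tapY <= outline.y + outline.h;
}
```

A hit would then trigger the introduction window and navigation button described in steps 64 and 65.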
By utilizing the AR technology, brand new technological elements are injected into the cultural creative product, so that the characteristics of interactivity, interestingness, education, individuation and the like are improved on the basis of retaining the original cultural connotation. Injecting new vitality for the inheritance and development of culture, and simultaneously hopefully promoting the continuous innovation and development of culture creative industry.
As shown in fig. 2, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the product display method based on AR technology when executing the program.
As shown in fig. 3, an embodiment of the present invention further provides a computer readable storage medium having a computer program stored thereon, which when executed by a processor implements an AR technology-based product display method as described above.
In addition, each functional unit in each embodiment of the present invention may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing description is only a partial embodiment of the present invention, and is not intended to limit the scope of the present invention, and all equivalent devices or equivalent processes using the descriptions and the drawings of the present invention or directly or indirectly applied to other related technical fields are included in the scope of the present invention.

Claims (10)

1. The product display method based on the AR technology is characterized by comprising the following steps of:
Step 1, loading a plurality of model resources required by an applet;
Step 2, scanning and identifying an object, judging whether the current object is a marker, if so, entering step 3; otherwise, ending the flow;
Step 3, returning the ID of the marker, finding the corresponding target model among the model resources according to the marker's ID to serve as a virtual object, rendering the target model on the applet page in real time, and displaying the target model;
step 4, the target model moves in real time according to the tracked position and posture of the marker;
Step 5, according to the condition of the target model, adjusting the placement position, the size and the rotation angle of the target model on the screen of the mobile phone;
and 6, realizing interaction between the user and the target model by clicking a screen of the mobile phone.
2. The AR technology-based product display method as set forth in claim 1, wherein the step 1 specifically includes:
Step 11, installing an applet on a mobile phone, and listing a plurality of model resources required by the applet, wherein each model resource comprises a model number, model basic information, a video, logo and a picture;
step 12, judging whether the type of the model resource is gltf or glb format, if yes, entering step 13; if not, go to step 14;
Step 13, optimizing the model resource through the xr-frame-tool, and entering step 14;
step 14, loading the model resources related to the applet when the applet is started, and configuring a unique model ID for each model resource according to the model number corresponding to each loaded model resource;
And 15, displaying the loading condition of each model resource through a progress bar.
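Steps 11 through 15 of claim 2 can be sketched as a short routine. This is an illustrative sketch only, not the patented implementation: the function and field names (`needsOptimization`, `loadModelResources`, `resource.file`, `resource.number`) are hypothetical, and the `optimized` flag stands in for whatever the xr-frame-tool actually produces.

```javascript
// Sketch of claim 2: decide which resources need xr-frame-tool
// optimization (step 12/13) and give each loaded resource a unique
// model ID derived from its model number (step 14).
function needsOptimization(resource) {
  // Step 12: only gltf/glb models go through the optimizer.
  return /\.(gltf|glb)$/i.test(resource.file);
}

function loadModelResources(resources) {
  const loaded = new Map(); // model ID -> prepared resource
  for (const res of resources) {
    const prepared = needsOptimization(res)
      ? { ...res, optimized: true } // Step 13: stand-in for xr-frame-tool output
      : { ...res };                 // non-gltf/glb resources pass through
    const modelId = `model-${res.number}`; // Step 14: unique ID from model number
    loaded.set(modelId, prepared);
  }
  return loaded;
}
```

In the real applet the resource list would be loaded at startup and the loading progress of each entry reported to a progress bar (step 15).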
3. The AR-technology-based product display method as set forth in claim 2, wherein step 2 specifically comprises:
Step 21, storing the identification pictures used to identify markers in the cloud;
Step 22, opening the camera of the mobile phone, and scanning and identifying the object aimed at by the camera;
Step 23, taking a frame-by-frame screenshot of the scanned image;
Step 24, uploading the screenshot of each frame to the cloud in sequence;
Step 25, matching, by the cloud, the screenshot of each frame against the identification pictures one by one, and judging whether the current object is consistent with a marker; if yes, identifying the current object as the marker and entering step 3; otherwise, ending the flow.
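The matching loop of claim 3, steps 23 through 25, can be sketched as follows. This is an assumption-laden illustration: `matchMarker` and its `similarity` callback are hypothetical names, and the callback stands in for the cloud's actual image-matching service, whose algorithm the patent does not specify.

```javascript
// Sketch of claim 3: compare each frame screenshot against the stored
// identification pictures; return the first marker whose similarity
// clears the threshold, or null if no marker is recognized.
function matchMarker(frames, markers, similarity, threshold = 0.8) {
  for (const frame of frames) {           // Step 24: frames arrive in sequence
    for (const marker of markers) {       // Step 25: one-by-one matching
      if (similarity(frame, marker.picture) >= threshold) {
        return marker.id;                 // current object identified as marker
      }
    }
  }
  return null;                            // no marker: end the flow
}
```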
4. The AR-technology-based product display method as set forth in claim 3, wherein step 3 specifically comprises:
Step 31, returning the ID of the marker to the target model, wherein each marker ID has a matching model ID, and finding the matching model ID according to the ID of the marker;
Step 32, determining the corresponding model resource as the target model according to the model ID;
Step 33, based on the rendering system in xr-frame, whose bottom layer uses a set of customizable RenderGraphs to organize the whole rendering pipeline: xr-frame sorts the cameras by depth and, after sorting, culls the meshes and lights of each frame of image according to the visible angle and range of each camera;
Step 34, rendering the target model serving as a virtual object on the applet page in real time using the geometric data and materials of the meshes, and displaying the target model.
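The camera ordering and culling of claim 4, step 33, can be sketched in a few lines. This is only an illustration of the idea: the function names are hypothetical, and a simple distance test stands in for the visibility culling that xr-frame's RenderGraph actually performs against each camera's view frustum.

```javascript
// Sketch of claim 4, step 33: order cameras by depth, then cull meshes
// that fall outside a camera's range (distance culling as a stand-in
// for the real visible-angle-and-range test).
function sortCamerasByDepth(cameras) {
  return [...cameras].sort((a, b) => a.depth - b.depth);
}

function cullMeshes(camera, meshes) {
  const dist = (m) => Math.hypot(m.x - camera.x, m.y - camera.y);
  return meshes.filter((m) => dist(m) <= camera.range); // keep visible meshes
}
```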
5. The AR-technology-based product display method as set forth in claim 4, wherein step 4 specifically comprises:
Step 41, tracking, by the AR tracker, the position and posture of the marker in real time according to the ID of the marker, and returning the coordinate-axis information of the marker to the target model in real time;
Step 42, calculating and applying, by the target model, the corresponding coordinate-axis information in the AR scene according to the returned coordinate-axis information of the marker, so that the position and posture of the target model are consistent with those of the marker and the target model moves along with the marker.
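The pose-following of claim 5 reduces to copying the tracked marker's coordinate-axis information onto the model each frame. A minimal sketch, with hypothetical names (`applyMarkerPose`, the `position`/`rotation` fields), not the actual xr-frame tracker API:

```javascript
// Sketch of claim 5, step 42: apply the marker's tracked position and
// posture to the target model so it moves along with the marker.
function applyMarkerPose(model, marker) {
  model.position = { ...marker.position }; // copy tracked position
  model.rotation = { ...marker.rotation }; // copy tracked posture
  return model;
}
```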
6. The AR-technology-based product display method as set forth in claim 1, wherein step 5 specifically comprises:
Step 51, judging whether the target model carries its own animation; if yes, entering step 52; otherwise, entering step 53;
Step 52, automatically creating an Animator component under the current element, adding the animation clips in the GLTF model to the Animator component, and entering step 55;
Step 53, judging whether the target model has a frame animation; if yes, entering step 54; otherwise, entering step 55;
Step 54, animating the element of the frame animation by changing the position, scale and rotation attributes of its transform component, and entering step 55;
Step 55, rendering the target model on the screen of the mobile phone and displaying the animation effect.
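The branching of claim 6, steps 51 through 54, is a three-way dispatch. A sketch under assumed names (`chooseAnimationPath`, `gltfClips`, `frameAnimation` are illustrative, not the patent's or xr-frame's actual fields):

```javascript
// Sketch of claim 6: pick the animation path for a model — its own GLTF
// animation clips (Animator component), a frame animation driven by
// transform attributes, or no animation at all.
function chooseAnimationPath(model) {
  if (model.gltfClips && model.gltfClips.length > 0) {
    return "animator";  // Step 52: add the clips to an Animator component
  }
  if (model.frameAnimation) {
    return "transform"; // Step 54: animate position/scale/rotation attributes
  }
  return "static";      // straight to step 55: render without animation
}
```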
7. The AR-technology-based product display method as set forth in claim 1, wherein step 6 specifically comprises:
Step 61, judging whether the user needs gesture interaction; if yes, entering step 62; otherwise, ending the flow;
Step 62, judging whether there is a screen-tap operation; if yes, performing different operations on the target model according to the different tap operations; otherwise, ending the flow;
Step 63, automatically adapting the mesh and GLTF model under the element, and creating a model outline according to the shape of the mesh or of the GLTF model, the model outline covering the surface of the target model;
Step 64, judging whether the target model is tapped according to the tap state of the model outline; if yes, popping up an introduction window and a navigation button of the target model on the screen of the mobile phone, the introduction window moving along with the target model; otherwise, ending the flow;
Step 65, detecting whether the navigation button is tapped; if yes, jumping to the map interface and automatically planning a route to the destination; otherwise, ending the flow;
Step 66, judging the state of the target model: if the current state of the target model is the display state and needs to be switched to the hidden state, tapping the display/hide button hides the target model; if the current state is the hidden state and needs to be switched to the display state, tapping the display/hide button displays the target model.
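The display/hide switch of claim 7, step 66, is a plain state toggle. A minimal sketch with a hypothetical `toggleVisibility` helper and `visible` flag:

```javascript
// Sketch of claim 7, step 66: the display/hide button toggles the target
// model between the display state and the hidden state.
function toggleVisibility(model) {
  model.visible = !model.visible;
  return model.visible ? "display" : "hidden";
}
```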
8. The AR-technology-based product display method as set forth in claim 7, wherein step 62 specifically comprises:
Step 621, judging whether there is a screen-tap operation; if yes, entering step 622; otherwise, ending the flow;
Step 622, judging whether a single finger touches the screen; if yes, performing a rotation operation on the target model; otherwise, entering step 623;
Step 623, judging whether two fingers touch the screen; if yes, performing a scaling operation on the target model; otherwise, ending the flow;
Step 624, judging whether the reset button is tapped; if yes, returning the target model to its initial state; otherwise, ending the flow.
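The gesture branch of claim 8 can be sketched as a dispatch on the touch count. Illustrative only: `dispatchGesture` and its parameters are hypothetical, and in the real applet the touch count would come from the mini program's touch events rather than a plain argument.

```javascript
// Sketch of claim 8: route touch input to rotation (one finger),
// scaling (two fingers), or a reset to the model's initial state.
function dispatchGesture(touchCount, resetPressed) {
  if (resetPressed) return "reset";      // Step 624: reset button tapped
  if (touchCount === 1) return "rotate"; // Step 622: single-finger rotation
  if (touchCount === 2) return "scale";  // Step 623: two-finger scaling
  return "none";                         // Step 621: no tap, end the flow
}
```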
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the AR-technology-based product display method as claimed in any one of claims 1 to 8.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the AR-technology-based product display method as claimed in any one of claims 1 to 8.
CN202410254239.5A 2024-03-06 2024-03-06 Product display method, device and medium based on AR technology Pending CN118227000A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410254239.5A CN118227000A (en) 2024-03-06 2024-03-06 Product display method, device and medium based on AR technology


Publications (1)

Publication Number Publication Date
CN118227000A true CN118227000A (en) 2024-06-21

Family

ID=91505798


Country Status (1)

Country Link
CN (1) CN118227000A (en)


Legal Events

Date Code Title Description
PB01 Publication