CN114463517A - AR-based physics force analysis method and device - Google Patents

AR-based physics force analysis method and device

Info

Publication number
CN114463517A
CN114463517A (application CN202111615427.9A)
Authority
CN
China
Prior art keywords
model
stress
target object
interface
setting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111615427.9A
Other languages
Chinese (zh)
Inventor
张腾飞
彭飞
邓竹立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing 58 Information Technology Co Ltd
Original Assignee
Beijing 58 Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing 58 Information Technology Co Ltd filed Critical Beijing 58 Information Technology Co Ltd
Priority to CN202111615427.9A priority Critical patent/CN114463517A/en
Publication of CN114463517A publication Critical patent/CN114463517A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B23/00 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
    • G09B23/06 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for physics
    • G09B23/08 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for physics for statics or dynamics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The embodiment of the invention provides an AR-based physics force analysis method and device, wherein the method comprises the following steps: acquiring images shot from all directions of a target object; acquiring a 3D model of the target object generated according to the images, and displaying the 3D model on an AR interface; setting stress parameters for the 3D model in response to user operations on the 3D model; and controlling the 3D model to simulate the motion state of the target object according to the stress parameters, and displaying the motion track of the target object on the AR interface. Common force analysis and simulated object motion are converted from mathematical diagrams into 3D models for display, so that abstract scenes are made concrete, which facilitates learning and understanding of forces, reduces the difficulty of understanding, and improves students' interest in learning.

Description

AR-based physics force analysis method and device
Technical Field
The invention relates to the technical field of AR, and in particular to an AR-based physics force analysis method and a corresponding device.
Background
In existing physics teaching, force analysis of the motion or state of a complex object is usually carried out through drawing or numerical calculation. The representation is usually a plane diagram in which the direction and magnitude of each force on the object are marked, and the final motion direction or interaction of the object is obtained through mathematical calculation. However, this planar method can only analyze the stress state of the object; the forms of certain forces (such as magnetic fields and flows) cannot be represented well, which does not help students understand the forces on the object to be analyzed, and in some scenes it is difficult to imagine how the object moves after its forces have been analyzed.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide an AR-based physics force analysis method and a corresponding apparatus that overcome or at least partially solve the above problems.
In order to solve the above problems, an embodiment of the present invention discloses an AR-based physics force analysis method, which is applied to a terminal, where the terminal at least includes an AR interface, and the method includes:
acquiring an image shot in all directions aiming at a target object;
acquiring a 3D model for the target object generated according to the image, and displaying the 3D model on the AR interface;
setting stress parameters for the 3D model in response to user operation for the 3D model;
and controlling the 3D model to simulate the motion state of the target object according to the stress parameter, and displaying the motion track of the target object on the AR interface.
Optionally, the method further comprises:
and adding and displaying corresponding interface elements in the 3D model according to the stress parameters of the 3D model.
Optionally, the method further comprises:
acquiring a motion state of the target object determined according to the image;
and adding a corresponding rigid body to the 3D model according to the motion state of the target object.
Optionally, the stress parameters comprise the type, magnitude and direction of force, and the setting of stress parameters for the 3D model in response to user operation on the 3D model includes:
setting a stress type in response to an operation of setting the stress type for the 3D model;
setting the stress size in response to an operation of setting the stress size for the 3D model;
and setting the stress direction in response to the operation of setting the stress direction for the 3D model.
Optionally, the acquiring a 3D model for the target object generated from the image includes:
sending the omni-directional shot image to a server so that the server generates a corresponding 3D model according to the omni-directional shot image and stores the 3D model as a URL link;
and receiving the URL link sent by the server, and acquiring the 3D model of the target object according to the URL link.
Optionally, the controlling the 3D model to move according to the force parameter includes:
and when the AR interface displays the 3D models corresponding to the target objects, controlling the 3D models to move according to the stress parameters of the 3D models.
The embodiment of the invention also discloses an AR-based physics force analysis device, which is applied to a terminal, wherein the terminal at least comprises an AR interface, and the device comprises:
the acquisition module is used for acquiring an image shot in all directions aiming at a target object;
a first obtaining module, configured to obtain a 3D model for the target object generated according to the image, and display the 3D model on the AR interface;
the setting module is used for responding to the user operation aiming at the 3D model and setting the stress parameters aiming at the 3D model;
and the control module is used for controlling the 3D model to simulate the motion state of the target object according to the stress parameter and displaying the motion track of the target object on the AR interface.
Optionally, the apparatus further comprises:
and the first adding module is used for adding and displaying corresponding interface elements in the 3D model according to the stress parameters of the 3D model.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring the motion state of the target object determined according to the image;
and the second adding module is used for adding a corresponding rigid body to the 3D model according to the motion state of the target object.
Optionally, the setting module further includes:
a first setting submodule for setting a kind of stress in response to an operation of setting a kind of stress for the 3D model;
the second setting submodule is used for setting the stress size in response to the operation of setting the stress size aiming at the 3D model;
and the third setting submodule is used for setting the stress direction in response to the operation of setting the stress direction aiming at the 3D model.
Optionally, for the acquiring of the 3D model of the target object generated from the image, the apparatus further comprises:
the sending module is used for sending the omni-directional shot image to a server so that the server generates a corresponding 3D model according to the omni-directional shot image and stores the 3D model as a URL link;
and the receiving module is used for receiving the URL link sent by the server and acquiring the 3D model of the target object according to the URL link.
Optionally, the control module further comprises:
and the control sub-module is used for controlling the plurality of 3D models to move according to the stress parameters of the plurality of 3D models when the AR interface displays the 3D models corresponding to the plurality of target objects.
The embodiment of the invention also discloses an electronic device, which comprises: a processor, a memory and a computer program stored on the memory and capable of running on the processor, which computer program, when executed by the processor, implements the steps of the AR-based physics force analysis method as described above.
The embodiment of the invention also discloses a computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, and when the computer program is executed by a processor, the steps of the AR-based physics force analysis method described above are realized.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, images shot from all directions of a target object are collected, a 3D model of the target object generated according to the images is obtained, and the 3D model is displayed on an AR interface; in response to user operations on the 3D model, the stress parameters of the 3D model of the target object are set; and the 3D model is controlled to simulate the motion state of the target object according to the set stress parameters, and the motion track of the target object is displayed on the AR interface. Common force analysis and simulated object motion are converted from mathematical diagrams into 3D models for display, so that abstract scenes are made concrete, which facilitates learning and understanding of forces, reduces the difficulty of understanding, and improves students' interest in learning. The method can also be applied to real life, to judge the collision of objects in different scenes and to predict the motion track of an object after a collision.
Drawings
FIG. 1 is a flow chart illustrating the steps of an AR-based physics force analysis method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating the steps of another AR-based physics force analysis method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the setting of stress parameters on the AR interface according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an option bar for force types on the AR interface according to an embodiment of the present invention;
fig. 5 is a block diagram of an AR-based physics force analysis apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
In physics teaching, the motion of an object over time may take many different forms, and the external forces applied to it are various. Simple planar force analysis can only analyze the stress state of the object, and the forms of certain forces (such as magnetic fields and flows) cannot be represented well in 2D.
The invention aims to solve the above technical problems and provides an AR-based physics force analysis method. Its core conception is as follows: the target object is photographed from all directions and the pictures are uploaded to a server; the server generates a 3D model according to the omnidirectional pictures of the target object; the user downloads and stores the 3D model locally, opens an AR page, loads the 3D model, and adds the type, magnitude and direction of forces to the 3D model; after the addition is finished, the motion track and stress direction of the model can be displayed. In this way, 2D planar force analysis is expressed through a 3D model, and the stress condition and motion state of the target object can be viewed from all directions.
Referring to fig. 1, a flowchart of the steps of an AR-based physics force analysis method provided in an embodiment of the present invention is shown. The method is applied to a terminal device, and the embodiment of the invention is explained by taking a mobile terminal as an example. The method specifically includes the following steps:
Step 101, acquiring an image shot in all directions for a target object.
The terminal device may be a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and the like, and the specific type of the terminal device is not limited by the present invention.
In the embodiment of the invention, the mobile terminal shoots the target object and acquires the omnibearing image of the target object. The target object is an object to be subjected to force analysis, and the object to be analyzed may be a moving object or a stationary object, which may be determined according to an actual situation.
Illustratively, the mobile terminal takes pictures of the object to be analyzed from multiple angles to acquire omnidirectional images of it. Any number of pictures may be taken, provided that the images capture all features of the object to be analyzed; the invention is not limited herein.
Step 102, obtaining a 3D model for the target object generated from the image, and displaying the 3D model on the AR interface.
And acquiring a 3D model generated according to the omnibearing image of the target object, and displaying the 3D model of the target object in an AR interface.
Exemplarily, the mobile terminal may acquire a 3D model of an object to be analyzed generated according to a photographed image, and load the 3D model on an opened AR interface, where the 3D model on the AR interface may have the same proportional size as the object to be analyzed, so that a user has a real viewing experience.
And 103, responding to the user operation aiming at the 3D model, and setting stress parameters aiming at the 3D model.
After the 3D model of the target object is displayed on the AR interface, the stress parameters of the 3D model are set in response to the user's operation on the AR interface. In a specific implementation, after the user sees the 3D model of the object to be analyzed on the AR interface, the user performs a touch operation on the 3D model, and the mobile terminal sets the stress parameters of the 3D model in response to that touch operation.
For example, a user may trigger to open an AR page in a mobile APP, an AR scene may be initialized when the page is opened, specifically, an AR view under a corresponding AR engine may be initialized, and a 3D model is loaded, and the user may set a stress parameter of the 3D model by touch on the AR interface.
And 104, controlling the 3D model to simulate the motion state of the target object according to the stress parameter, and displaying the motion track of the target object on the AR interface.
After the stress parameters of the 3D model are set, the mobile terminal controls the 3D model to simulate the motion state of the target object according to the stress parameters, and displays the motion track of the target object on the AR interface.
After the user touches the AR interface to determine the stress state of the 3D model and clicks Start, the AR interface demonstrates the track and direction of the motion of the object to be analyzed under the applied forces. Through the AR interface, the user can perform corresponding control operations on the 3D model of the object to be analyzed as presented in the real world.
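As an illustration of how the demonstrated trajectory might be computed, the following is a minimal Euler-integration sketch in Python. This is not code from the patent; the function name and parameters are hypothetical, and a constant force vector is assumed:

```python
def simulate_trajectory(mass, force, v0, dt=0.01, steps=100):
    """Advance a body of the given mass under a constant 2D force vector.

    Returns the list of (x, y) positions, i.e. the motion track that
    would be drawn on the AR interface.
    """
    ax, ay = force[0] / mass, force[1] / mass  # a = F / m
    x, y = 0.0, 0.0
    vx, vy = v0
    path = []
    for _ in range(steps):
        vx += ax * dt  # update velocity from acceleration
        vy += ay * dt
        x += vx * dt   # update position from velocity
        y += vy * dt
        path.append((x, y))
    return path
```

For example, a 1 kg body launched horizontally under gravity alone traces the familiar parabolic fall over one simulated second.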
In the embodiment of the invention, images shot from all directions of the target object are collected, the 3D model of the target object generated according to the images is obtained, and the 3D model is displayed on the AR interface; in response to user operations on the 3D model, the stress parameters of the 3D model of the target object are set; and the 3D model is controlled to simulate the motion state of the target object according to the set stress parameters, and the motion track of the target object is displayed on the AR interface. Common force analysis and simulated object motion are converted from mathematical diagrams into 3D models for display, so that abstract scenes are made concrete, which facilitates learning and understanding of forces, reduces the difficulty of understanding, and improves students' interest in learning.
Referring to fig. 2, a flowchart of the steps of another AR-based physics force analysis method provided in an embodiment of the present invention is shown, applied to a terminal device, where the method specifically includes the following steps:
step 201, acquiring an image shot in all directions for a target object.
Step 202, sending the omni-directional shot image to a server so that the server generates a corresponding 3D model according to the omni-directional shot image and stores the 3D model as a URL link.
Specifically, the mobile terminal carries out all-dimensional shooting on the object to be analyzed, the shot image is sent to the server, and the server can generate a 3D model with the same proportion size as the object to be analyzed according to the shot image and stores the 3D model as a URL link.
Illustratively, after the server acquires the omnidirectionally shot pictures, it creates a 3D object from them using the API newly provided by the macOS system. Creating the 3D object includes: creating a photogrammetry session, where first a URL is created that points to the output location of the generated USDZ-format file with the required level of detail, and then the session is created using a request together with a URL pointing to the directory containing the images. RealityKit monitors and provides state updates related to the object-creation process in the background; process(requests:) is called to start the session, RealityKit processes the pictures in the background to generate a 3D model of the photographed object, and a notification is sent when processing completes or fails. The completed process yields a 3D model of the object, stored as a URL link.
Step 203, receiving the URL link sent by the server, and obtaining the 3D model of the target object according to the URL link.
And receiving a URL link returned by the server, and acquiring the 3D model of the target object through the URL link. In a specific implementation, the server may return the generated URL link to the user, and the user downloads and stores the 3D model locally through the URL link, opens an AR page in the application program of the mobile terminal, and loads the 3D model file stored locally.
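The download-and-store step can be sketched as a small caching routine; this is an illustrative Python fragment, not the patent's implementation, and the fetch function and cache structure are hypothetical (the downloader is injected so the logic can be exercised without a network):

```python
def get_model(url, cache, fetch):
    """Return the 3D model bytes for `url`, downloading at most once.

    `cache` maps URL -> model bytes (the local store);
    `fetch` performs the actual network download.
    """
    if url not in cache:
        cache[url] = fetch(url)  # download and store the model locally
    return cache[url]
```

Subsequent openings of the AR page can then load the locally stored model file without contacting the server again.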
And 204, acquiring the motion state of the target object determined according to the image.
For example, a motion state detection model may be utilized to perform static and dynamic object determination on an object in a captured picture to determine a motion state of the object to be analyzed.
And step 205, adding a corresponding rigid body to the 3D model according to the motion state of the target object.
In a specific implementation, a corresponding rigid body may be added to the 3D model according to the motion state of the object to be analyzed, giving the model physical properties. The rigid bodies can be divided into static rigid bodies and dynamic rigid bodies, the static rigid bodies do not move along with the change of external force, and the dynamic rigid bodies move along with the model according to the stress condition. If the object to be analyzed is in a static state, adding a static rigid body to the 3D model; and if the object to be analyzed is in a motion state, adding a dynamic rigid body to the 3D model.
For example, the user may perform a touch input operation on the AR page, add a rigid body to the 3D model, and generate a corresponding collision boundary. The collision boundary may be the size of the original model, or the size of the boundary may be adjusted by a user through manual drag control, which may be determined according to the actual situation, and the embodiment of the present invention is not limited to this.
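The static/dynamic distinction and the adjustable collision boundary can be sketched as a small data structure; the following Python is illustrative only, with hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class RigidBody:
    kind: str                    # "static" or "dynamic"
    boundary_scale: float = 1.0  # user-adjustable collision-boundary size

    def responds_to_force(self) -> bool:
        # Only dynamic rigid bodies move under applied forces;
        # static rigid bodies do not move as external forces change.
        return self.kind == "dynamic"

def rigid_body_for(motion_state: str) -> RigidBody:
    # A stationary object gets a static rigid body, a moving one a dynamic body
    kind = "static" if motion_state == "stationary" else "dynamic"
    return RigidBody(kind)
```

The default `boundary_scale` of 1.0 corresponds to a collision boundary the size of the original model, which the user may then resize by dragging.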
Step 206, responding to the user operation aiming at the 3D model, and setting stress parameters aiming at the 3D model; the stress parameters comprise the type, the size and the direction of the force.
In response to a user operation on the 3D model of the object to be analyzed, the magnitude and direction of a force are added to the 3D model, and the force type parameter is set. In a specific implementation, as shown in fig. 3, a schematic diagram of the setting of stress parameters on the AR interface according to an embodiment of the present invention is shown, where the circle in the figure is the 3D model and the square outside the circle is its physical rigid body. The user sees the 3D model of the object to be analyzed on the AR interface and performs a touch operation on it, and the mobile terminal sets the stress parameters of the 3D model in response to that touch operation.
In the embodiment of the present invention, the method may further include: setting a stress type in response to an operation of setting the stress type for the 3D model; setting the stress size in response to an operation of setting the stress size for the 3D model; and setting the stress direction in response to the operation of setting the stress direction for the 3D model.
Exemplarily, as shown in fig. 4, a schematic diagram of the option bar for force types on the AR interface according to an embodiment of the present invention is shown, where the circle in the figure is the 3D model and the square outside the circle is its physical rigid body. The AR interface where the 3D model is located may include an option bar providing force types; when the user operation is a click or input operation on the option bar, the type and magnitude of the force are determined, and when the user operation is a sliding operation on the 3D model, the stress direction of the 3D model is determined.
For example, the option bar of the AR interface may include gravity, pressure, electric field force, magnetic field force, and the like. The user may click the option bar to select the type of force according to the actual situation, determine the magnitude of the force by touch input, and select a 3D model by clicking on it. The stress direction of the 3D model is controlled according to the user's screen-sliding gesture.
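The three settings (type, magnitude, direction) might be collected into a single parameter record, with the swipe gesture reduced to vector normalization. The Python below is a hypothetical sketch, not the patent's data model:

```python
import math
from dataclasses import dataclass

@dataclass
class StressParameter:
    kind: str          # e.g. "gravity", "pressure", "magnetic field force"
    magnitude: float   # entered via touch input
    direction: tuple   # unit vector derived from the swipe gesture

def direction_from_swipe(dx: float, dy: float) -> tuple:
    # Normalize the on-screen swipe vector into a unit force direction
    n = math.hypot(dx, dy)
    return (dx / n, dy / n)
```

A downward swipe of any length, for instance, yields the unit direction (0, -1), which could then be paired with a selected type and entered magnitude.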
It should be understood by those skilled in the art that the above-described user operation manner is merely an example of the present invention, and those skilled in the art may set the user operation manner by using other user operation methods, and the embodiment of the present invention is not limited herein.
In the embodiment of the present invention, the method may further include: and adding and displaying corresponding interface elements in the 3D model according to the stress parameters of the 3D model.
In order to distinguish the type, the size and the direction of the stress of the object to be analyzed, corresponding interface elements are added to the 3D according to specific stress parameters of the 3D model and are displayed on an AR interface. For example, the user adds the set force to the 3D model through a model composed of dotted lines, sets parameters such as damping coefficients, and simulates the object motion and collision effect.
As an example, the interface elements corresponding to different kinds of forces are different. The different interface elements may be arrows of different shapes or colors, straight lines, curves, or a combination thereof, and the user may select different interface elements to represent different types of force on the 3D model; for example, the user may select a blue arrow to represent the gravity on the 3D model and a red arrow to represent the pressure on it.
For example, in physics teaching, when the force on a moving charge in a magnetic field needs to be analyzed, the user can click the interface element corresponding to the magnetic field in the option bar of the AR interface, add a blue magnetic field to the 3D model of the moving charge, and set the magnetic field strength and direction; then click the interface element corresponding to gravity, add red gravity to the 3D model, enter the gravity and the speed of the moving charge by touch, and click and slide the 3D model to give it a velocity direction. The motion track of the moving charge in the magnetic field can then be simulated, and the forces on the moving charge can be analyzed more intuitively, which helps students learn and understand better.
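The underlying physics of that example, the Lorentz force plus gravity, can be checked with a short numeric sketch. This is standard mechanics written in Python for illustration, not code from the patent:

```python
def cross(a, b):
    # Cross product of two 3D vectors
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def step_charge(q, m, v, B, g=(0.0, 0.0, 0.0), dt=1e-3):
    """One Euler step for a charge q of mass m with velocity v
    in magnetic field B: F = q (v x B) + m g, so v += (F / m) dt."""
    f = cross(v, B)
    return tuple(v[i] + (q * f[i] / m + g[i]) * dt for i in range(3))
```

With gravity off and a uniform field perpendicular to the velocity, the magnetic force only curves the path: the speed stays (approximately) constant while the direction rotates, which is exactly the circular track the AR demonstration would display.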
It should be understood by those skilled in the art that the form of the interface element described above is merely an example of the present invention, and those skilled in the art may set other forms of interface elements, which may be determined according to practical situations, and the embodiment of the present invention is not limited to this.
And step 207, when the AR interface displays the 3D models corresponding to the target objects, controlling the 3D models to move according to the stress parameters of the 3D models.
After the stress parameters of the 3D model are set, the mobile terminal can control the 3D model to simulate the motion state of the target object according to the stress parameters, and the motion trail of the target object is displayed on the AR interface.
In the embodiment of the invention, the AR interface can display the 3D models of the target objects, the mobile terminal can control the 3D models to move according to the stress parameters of the 3D models, and the interactive pictures of the 3D models are displayed on the AR interface.
For example, in physics teaching, when the forces and motion of two colliding objects need to be displayed, the colliding object and the struck object are first shot from multiple angles in all directions, and the images are uploaded to a server; the server generates 3D models of the two objects according to the images. After the 3D models of the colliding object and the struck object are loaded on the AR interface, stress parameters are set and interface elements are added for each 3D model respectively, and then the forces and motion tracks of the two 3D models are displayed on the AR interface.
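For a head-on collision, the simulated post-collision velocities would follow the standard conservation laws. The one-dimensional elastic case is sketched below in Python for illustration; the patent does not specify its collision solver:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Post-collision velocities of two bodies in a 1D elastic collision,
    from conservation of momentum and kinetic energy."""
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p
```

A familiar check: two equal masses exchange velocities, which is the behavior the two 3D models would exhibit on the AR interface after the collision.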
In addition, the 3D model of the object to be analyzed can also simulate a scene in which an external force is newly added and the object is then struck. Images of another object can be shot from all directions again to generate a 3D model of that other object; the 3D model of the object to be analyzed and the 3D model of the other object are displayed on the same AR interface; the stress parameters of the 3D model of the other object are set, and a new acting force is added to the 3D model of the object to be analyzed. The AR interface can then display the process in which the motion track of the object to be analyzed changes under the newly added force and the process of its interaction with the 3D model of the other object.
It should be understood by those skilled in the art that the above-mentioned manner of representing the motion trajectories of the plurality of 3D models is only an example of the present invention, and those skilled in the art may use other representation methods to represent the motion trajectories, and the present invention is not limited herein.
In the embodiment of the invention, images shot from all directions of the target object are collected, the 3D model of the target object generated according to the images is obtained, and the 3D model is displayed on the AR interface; in response to user operations on the 3D model, the stress parameters of the 3D model of the target object are set; and the 3D model is controlled to simulate the motion state of the target object according to the set stress parameters, and the motion track of the target object is displayed on the AR interface. Common force analysis and simulated object motion are converted from mathematical diagrams into 3D models for display, so that abstract scenes are made concrete, which facilitates learning and understanding of forces, reduces the difficulty of understanding, and improves students' interest in learning. The method can also be applied to real life, to judge the collision of objects in different scenes and to predict the motion track of an object after a collision.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts described, as some steps may, in accordance with the present invention, be performed in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts involved are not necessarily required by the present invention.
Referring to fig. 5, a structural block diagram of an AR-based force analysis apparatus according to an embodiment of the present invention is shown, which may specifically include the following modules:
an acquisition module 301, configured to acquire an image shot in all directions for a target object;
a first obtaining module 302, configured to obtain a 3D model for the target object generated according to the image, and display the 3D model on the AR interface;
a setting module 303, configured to set a stress parameter for the 3D model in response to a user operation for the 3D model;
a control module 304, configured to control the 3D model to simulate a motion state of the target object according to the stress parameter, and to display a motion trajectory of the target object on the AR interface.
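As an illustrative sketch only (all function names and data shapes are our assumptions, not taken from the patent), the four modules above can be read as one pipeline: acquire the images, obtain and display the model, set the stress parameters, then simulate:

```python
# Hypothetical end-to-end pipeline for modules 301-304; the patent does
# not prescribe these names or data structures.
def acquire_images(target):                          # acquisition module 301
    return [f"{target}-view-{i}" for i in range(4)]

def obtain_model(images):                            # first obtaining module 302
    return {"mesh": f"model-from-{len(images)}-views", "stress": None}

def set_stress(model, kind, magnitude, direction):   # setting module 303
    model["stress"] = {"kind": kind, "magnitude": magnitude,
                       "direction": direction}
    return model

def simulate(model):                                 # control module 304
    s = model["stress"]
    return f"trajectory: {s['kind']} {s['magnitude']} N at {s['direction']} deg"

model = set_stress(obtain_model(acquire_images("ball")),
                   "applied force", 10.0, 30.0)
```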
In one embodiment of the invention, the apparatus further comprises:
a first adding module, configured to add and display corresponding interface elements in the 3D model according to the stress parameters of the 3D model.
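One plausible form of such an interface element, assumed here for illustration (the patent does not specify the rendering), is a force arrow whose length scales with the force magnitude and whose angle follows the force direction:

```python
# Sketch: turn a stress parameter into a drawable arrow descriptor for
# the AR interface overlay. The scale factor is an illustrative choice.
import math

def force_arrow(magnitude_n, direction_deg, pixels_per_newton=4.0):
    """Return an arrow descriptor {dx, dy, label} for the overlay."""
    length = magnitude_n * pixels_per_newton
    rad = math.radians(direction_deg)
    return {
        "dx": length * math.cos(rad),
        "dy": length * math.sin(rad),
        "label": f"{magnitude_n:g} N",
    }
```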
In one embodiment of the invention, the apparatus further comprises:
a second obtaining module, configured to obtain the motion state of the target object determined according to the image;
a second adding module, configured to add a corresponding rigid body to the 3D model according to the motion state of the target object.
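One plausible mapping (our assumption; the patent does not fix the rule) mirrors the static/dynamic rigid-body types common to physics engines such as SceneKit or Bullet: a stationary target receives a static body, a moving target a dynamic one.

```python
# Sketch: map the detected motion state to a rigid-body type. Names are
# illustrative; the patent does not define this enumeration.
from enum import Enum

class MotionState(Enum):
    STATIONARY = "stationary"
    MOVING = "moving"

def rigid_body_type(state: MotionState) -> str:
    """Choose the rigid body attached to the 3D model."""
    return "static" if state is MotionState.STATIONARY else "dynamic"
```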
In an embodiment of the present invention, the setting module 303 may include:
a first setting sub-module, configured to set a stress type in response to an operation of setting the stress type for the 3D model;
a second setting sub-module, configured to set a stress magnitude in response to an operation of setting the stress magnitude for the 3D model;
a third setting sub-module, configured to set a stress direction in response to an operation of setting the stress direction for the 3D model.
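A minimal sketch of the three setting sub-modules (hypothetical API; the patent specifies no interface): each user operation updates one field of the model's stress parameters.

```python
# Sketch: three setters mirroring the first/second/third setting
# sub-modules, each responding to one kind of user operation.
from dataclasses import dataclass

@dataclass
class StressParameters:
    kind: str = "none"       # e.g. "gravity", "applied force", "friction"
    magnitude: float = 0.0   # newtons
    direction: float = 0.0   # degrees in the AR interface plane

class StressSetting:
    def __init__(self):
        self.params = StressParameters()

    def on_set_type(self, kind):          # first setting sub-module
        self.params.kind = kind

    def on_set_magnitude(self, newtons):  # second setting sub-module
        self.params.magnitude = newtons

    def on_set_direction(self, degrees):  # third setting sub-module
        self.params.direction = degrees
```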
In one embodiment of the invention, the apparatus further comprises:
a sending module, configured to send the omnidirectionally shot image to a server, so that the server generates a corresponding 3D model from the omnidirectionally shot image and stores the 3D model at a URL link;
a receiving module, configured to receive the URL link sent by the server, and obtain the 3D model of the target object according to the URL link.
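The client/server exchange can be sketched as follows (an in-memory stand-in for the real server; the endpoint, storage, and model format are illustrative assumptions, not taken from the patent): the client uploads the images, the server builds and stores the 3D model, and replies with a URL the client then resolves.

```python
# Sketch: server stores each generated 3D model under a URL; the client
# fetches the model through that URL. _STORE stands in for server storage.
_STORE = {}  # url -> 3D model

def server_generate_model(images):
    """Server side: build a 3D model from the images, return a URL link."""
    model = {"mesh": f"reconstructed-from-{len(images)}-views"}
    url = f"https://example.invalid/models/{len(_STORE)}"
    _STORE[url] = model
    return url

def client_fetch_model(url):
    """Client side: resolve the URL link received from the server."""
    return _STORE[url]
```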
In an embodiment of the present invention, the control module 304 may include:
a control sub-module, configured to, when the AR interface displays 3D models corresponding to a plurality of target objects, control the plurality of 3D models to move according to their respective stress parameters.
In the embodiment of the present invention, an image of a target object shot in all directions is collected, a 3D model of the target object generated from that image is obtained, and the 3D model is displayed on an AR interface; stress parameters are set for the 3D model of the target object in response to a user operation on the 3D model; and the 3D model of the target object is controlled to simulate the motion state of the target object according to the set stress parameters, with the motion trajectory of the target object displayed on the AR interface. Common force analysis and simulated object motion are thus converted from mathematical diagrams into a displayed 3D model, so that abstract scenes become concrete, which facilitates learning and understanding of forces, reduces the difficulty of understanding, and increases students' interest in learning. The method can also be applied in real life, to judge how objects collide in different scenes and to predict the motion trajectory of an object after a collision.
For the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.
An embodiment of the present invention further provides an electronic device, including: a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements each process of the above AR-based force analysis method embodiment and can achieve the same technical effect, which is not described here again to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements each process of the above AR-based force analysis method embodiment and can achieve the same technical effect, which is not described here again to avoid repetition. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The AR-based force analysis method and apparatus provided by the present invention are described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method and its core concept. Meanwhile, for those skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. An AR-based force analysis method, applied to a terminal, wherein the terminal comprises at least an AR interface, the method comprising:
acquiring an image shot in all directions aiming at a target object;
acquiring a 3D model for the target object generated according to the image, and displaying the 3D model on the AR interface;
setting stress parameters for the 3D model in response to user operation for the 3D model;
and controlling the 3D model to simulate the motion state of the target object according to the stress parameter, and displaying the motion track of the target object on the AR interface.
2. The method of claim 1, further comprising:
and adding and displaying corresponding interface elements in the 3D model according to the stress parameters of the 3D model.
3. The method of claim 1, further comprising:
acquiring a motion state of the target object determined according to the image;
and adding a corresponding rigid body to the 3D model according to the motion state of the target object.
4. The method of claim 1, wherein the setting of stress parameters for the 3D model in response to a user operation on the 3D model, the stress parameters comprising a type, a magnitude, and a direction of force, comprises:
setting the stress type in response to an operation of setting the stress type for the 3D model;
setting the stress magnitude in response to an operation of setting the stress magnitude for the 3D model;
and setting the stress direction in response to an operation of setting the stress direction for the 3D model.
5. The method of claim 1, wherein the obtaining the 3D model for the target object generated from the image comprises:
sending the omnidirectionally shot image to a server, so that the server generates a corresponding 3D model from the omnidirectionally shot image and stores the 3D model at a URL link;
and receiving the URL link sent by the server, and acquiring the 3D model of the target object according to the URL link.
6. The method of claim 1, wherein the controlling the 3D model to simulate the motion state of the target object according to the stress parameter comprises:
and when the AR interface displays the 3D models corresponding to the target objects, controlling the 3D models to move according to the stress parameters of the 3D models.
7. An AR-based force analysis apparatus, applied to a terminal comprising at least an AR interface, the apparatus comprising:
the acquisition module is used for acquiring an image shot in all directions aiming at a target object;
an obtaining module, configured to obtain a 3D model for the target object generated according to the image, and display the 3D model on the AR interface;
the setting module is used for responding to the user operation aiming at the 3D model and setting the stress parameters aiming at the 3D model;
and the control module is used for controlling the 3D model to simulate the motion state of the target object according to the stress parameter and displaying the motion track of the target object on the AR interface.
8. The apparatus of claim 7, further comprising:
an adding module, configured to add and display corresponding interface elements in the 3D model according to the stress parameters of the 3D model.
9. An electronic device, comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the AR-based force analysis method as claimed in any one of claims 1 to 6.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the AR-based force analysis method as claimed in any one of claims 1 to 6.
CN202111615427.9A 2021-12-27 2021-12-27 AR-based physical and physical analysis method and device Pending CN114463517A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111615427.9A CN114463517A (en) 2021-12-27 2021-12-27 AR-based physical and physical analysis method and device


Publications (1)

Publication Number Publication Date
CN114463517A true CN114463517A (en) 2022-05-10

Family

ID=81407725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111615427.9A Pending CN114463517A (en) 2021-12-27 2021-12-27 AR-based physical and physical analysis method and device

Country Status (1)

Country Link
CN (1) CN114463517A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115098207A (en) * 2022-06-23 2022-09-23 北京字跳网络技术有限公司 Image display method, image display device, electronic device, and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination