CN111158480A - Scene model sharing method and system, augmented reality device and storage medium - Google Patents

Scene model sharing method and system, augmented reality device and storage medium

Info

Publication number
CN111158480A
CN111158480A (application CN201911382795.6A); granted publication CN111158480B
Authority
CN
China
Prior art keywords
scene
model
space
sharable
carrier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911382795.6A
Other languages
Chinese (zh)
Other versions
CN111158480B (en)
Inventor
于斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201911382795.6A priority Critical patent/CN111158480B/en
Publication of CN111158480A publication Critical patent/CN111158480A/en
Application granted granted Critical
Publication of CN111158480B publication Critical patent/CN111158480B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a scene model sharing method and system, an augmented reality device and a storage medium. A sharable carrier model created by an authoring user based on a first scene space is acquired; model source information is obtained from the sharable carrier model, the sharable carrier model and the model source information are packaged into a scene model, and the scene model is uploaded to a server; a second scene space is acquired; and when the coincidence degree of the second scene space and the first scene space is greater than a preset threshold, the scene model is received and displayed to a browsing user, thereby alleviating the restrictions that limit scene model sharing in the related art.

Description

Scene model sharing method and system, augmented reality device and storage medium
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to a method and a system for sharing a scene model, an augmented reality device, and a storage medium.
Background
With the popularization of intelligent terminals, applications of Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) technologies are increasingly widespread, and users can experience them by installing VR, AR or MR software on an intelligent terminal, such as a wearable intelligent terminal. For example, the general working principle of AR technology is as follows: the intelligent terminal captures images or records video through a camera, then identifies target objects in the captured images or video; the target object is tracked; AR virtual content related to the target object is obtained, the image frame is rendered with the AR virtual content superimposed on the target object, and the result is finally displayed on the intelligent terminal. In the related art, VR, AR or MR experiences can generally be shared only on a single machine, within a same-scene local area network (i.e., same time and place), or within a same-time, shared-view local area network, so that scene model sharing is heavily restricted.
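The general AR loop described above (capture a frame, identify and track a target object, overlay virtual content, display the result) can be sketched as follows. This is a minimal illustration, not the patent's implementation; all function names and data shapes are hypothetical placeholders.

```python
# Illustrative sketch of the capture -> detect -> track -> render AR loop.
# "Frames" are stand-in dicts; a real system would use camera images.

def detect_target(frame):
    """Pretend detector: returns the target's bounding box if present, else None."""
    return frame.get("target_box")  # e.g. (x, y, w, h)

def render_overlay(frame, box, virtual_content):
    """Pretend renderer: attaches AR virtual content to the tracked region."""
    return {"frame": frame, "overlay_at": box, "content": virtual_content}

def ar_loop(frames, virtual_content):
    rendered = []
    for frame in frames:            # 1. camera captures image frames
        box = detect_target(frame)  # 2. identify / track the target object
        if box is not None:         # 3. fetch AR content for the target
            rendered.append(render_overlay(frame, box, virtual_content))
    return rendered                 # 4. frames with overlaid content, for display

frames = [{"target_box": None}, {"target_box": (40, 60, 120, 80)}]
out = ar_loop(frames, virtual_content="label: museum exhibit")
print(len(out))  # only the frame containing the target is rendered
```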
No effective solution has yet been proposed for the problem that scene model sharing is heavily restricted in the related art.
Disclosure of Invention
In view of the problem that scene model sharing is heavily restricted in the related art, the invention provides a scene model sharing method and system, an augmented reality device and a storage medium that at least solve this problem.
According to an aspect of the present invention, there is provided a method of scene model sharing, the method comprising:
acquiring a sharable carrier model created by an authoring user based on a first scene space;
obtaining model source information according to the sharable carrier model, packaging the sharable carrier model and the model source information into a scene model, and uploading the scene model to a server;
acquiring a second scene space;
and receiving the scene model and displaying the scene model to a browsing user when the coincidence degree of the second scene space and the first scene space is greater than a preset threshold.
In one embodiment, before obtaining the shareable carrier model produced by the authoring user based on the first scene space, the method comprises:
acquiring the first scene space, and performing scene verification according to the first scene space; wherein the scene check includes: acquiring the geometric characteristics of the first scene space for verification; or, acquiring a feature code arranged in the first scene space for verification;
instructing the authoring user to create the shareable carrier model if the scene check passes.
In one embodiment, after receiving and displaying the scene model in the case that the coincidence degree of the second scene space and the first scene space is greater than a preset threshold, the method includes:
acquiring first interaction information added by the browsing user according to the scene model, and uploading the scene model and the first interaction information to the server; wherein the first interaction information comprises: feedback information and operational information based on the scene model.
In one embodiment, acquiring the shareable carrier model created by the authoring user based on the first scene space comprises:
establishing a first space model according to the first scene space, acquiring a three-dimensional model created by the authoring user based on the first scene space, and combining the three-dimensional model and the first space model into the sharable carrier model.
In one embodiment, receiving the scene model and displaying it to the browsing user when the coincidence degree of the second scene space and the first scene space is greater than a preset threshold includes:
establishing a second space model according to the second scene space, comparing the first space model and the second space model, and obtaining a goodness of fit according to their matching degree;
determining the scene model within a preset range according to the goodness-of-fit result when the goodness of fit is greater than a preset threshold; wherein the preset range includes a preset space, a preset time, or a preset service-related rule;
and receiving the scene model and displaying the scene model to the browsing user.
In one embodiment, before obtaining the shareable carrier model produced by the authoring user based on the first scene space, the method comprises:
according to the first scene space, acquiring the interactive operation of the authoring user on the first scene space; displaying second interactive information according to the interactive operation; the second interaction information comprises service information and associated information of the first scene space.
In one embodiment, after the acquiring the second scene space, the method further includes:
receiving the scene model containing a scene tag and displaying it to the browsing user when the scene tag matches a preset tag; wherein the scene tag is a tag contained in the second scene space and used for indexing, and the preset tag is a tag calibrated on site by the authoring user.
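The tag-based retrieval in this embodiment can be sketched as a simple index lookup: each scene model is indexed under the preset tags its author calibrated on site, and the browsing user's scene tag is matched against that index. Field names and the index structure are assumptions for illustration only.

```python
# Hypothetical sketch of matching a scene tag against authors' preset tags.

def models_for_tag(scene_tag, tag_index):
    """Return the scene models whose preset tags include the given scene tag."""
    return [model for model, preset_tags in tag_index.items()
            if scene_tag in preset_tags]

# Preset tags calibrated by authoring users for each on-site location.
tag_index = {
    "model_supermarket": {"supermarket", "shop"},
    "model_school": {"school"},
}

print(models_for_tag("supermarket", tag_index))  # ['model_supermarket']
```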
According to another aspect of the present invention, there is provided a system for sharing a scene model, the system comprising a terminal and a server:
the terminal acquires a sharable carrier model created by an authoring user based on a first scene space;
the terminal acquires model source information according to the sharable carrier model, packages the sharable carrier model and the model source information into a scene model, and uploads the scene model to the server;
the terminal acquires a second scene space scanned by a browsing user;
and the terminal receives the scene model sent by the server and displays it to the browsing user when the coincidence degree of the second scene space and the first scene space is greater than a preset threshold.
According to another aspect of the present invention, there is provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any of the methods described above when executing the computer program.
According to another aspect of the invention, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of any of the methods described above.
By adopting the scene model sharing method and system, augmented reality device and storage medium according to the invention, a sharable carrier model created by an authoring user based on a first scene space is acquired; model source information is obtained from the sharable carrier model, the sharable carrier model and the model source information are packaged into a scene model, and the scene model is uploaded to a server; a second scene space is acquired; and when the coincidence degree of the second scene space and the first scene space is greater than a preset threshold, the scene model is received and displayed to a browsing user, thereby alleviating the restrictions on scene model sharing.
Drawings
FIG. 1 is a diagram illustrating a scene model sharing application scenario according to an embodiment of the present invention;
FIG. 2 is a first flowchart of a scene model sharing method according to an embodiment of the present invention;
FIG. 3 is a flowchart II of a scene model sharing method according to an embodiment of the present invention;
FIG. 4 is a flowchart III of a scene model sharing method according to an embodiment of the present invention;
FIG. 5 is a fourth flowchart of a scene model sharing method according to an embodiment of the present invention;
FIG. 6 is a fifth flowchart of a scene model sharing method according to an embodiment of the present invention;
FIG. 7 is a sixth flowchart of a scene model sharing method according to an embodiment of the present invention;
FIG. 8 is a block diagram of a scene model sharing system according to an embodiment of the present invention;
FIG. 9 is a block diagram of the internal structure of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The terms "first", "second" and "third" in the embodiments of the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one of the feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In the present embodiment, a scene model sharing method is provided. FIG. 1 is a schematic diagram of a scene model sharing application scenario according to an embodiment of the present invention. As shown in FIG. 1, a terminal 12 communicates with a server 14 through a network. The terminal 12 acquires a sharable carrier model created by an authoring user based on a first scene space; the terminal obtains model source information from the sharable carrier model, packages the sharable carrier model and the model source information into a scene model, and uploads the scene model to the server 14; the terminal 12 acquires a second scene space scanned by a browsing user; and when the coincidence degree of the second scene space and the first scene space is greater than a preset threshold, the terminal 12 receives and displays the scene model sent by the server 14. The terminal 12 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer or a portable wearable device, and the server 14 may be implemented as an independent server or as a server cluster composed of a plurality of servers.
In this embodiment, a method for sharing a scene model is provided, and fig. 2 is a first flowchart of a method for sharing a scene model according to an embodiment of the present invention, as shown in fig. 2, the method includes the following steps:
step S202, a sharable carrier model manufactured by an authoring user based on a first scene space is obtained; wherein, the authoring user makes (such as adding dialog boxes or buttons, doodling, writing, picture frames, arrows, etc.) the sharable carrier model based on the first scene space, references and modifies (contents modeled in advance and the authoring user can do secondary processing) the sharable carrier model in the terminal 12, or builds and imports a more complex model, and after adjusting parameters such as size, color, position, direction, additional description, etc. of the sharable carrier model, the sharable carrier model is confirmed to be placed in the first scene space; the shareable carrier model may be a three-dimensional model.
Step S204, obtaining model source information from the sharable carrier model, packaging the sharable carrier model and the model source information into a scene model, and uploading the scene model to the server 14. The terminal 12 automatically stores the model source information of the sharable carrier model, which includes the content, size, color (including transparency, etc.), position, orientation and additional description of the sharable carrier model, as well as the longitude and latitude, altitude, orientation information and plane feature content of the first scene space. The sharable carrier model and the model source information are then packaged into the scene model according to the binding relationship between the plane features of the first scene space and the scene model.
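The packaging in step S204 can be sketched as bundling two records into one uploadable payload. This is a minimal illustration under assumptions: the field names and types below follow the attributes listed in the text (content, size, color, position, orientation, description, longitude/latitude, altitude, plane features) but are not specified by the patent.

```python
# Hypothetical sketch of packaging a sharable carrier model with its
# model source information into a single scene-model payload.
from dataclasses import dataclass, field, asdict

@dataclass
class ModelSource:
    size: float
    color: str                  # may include transparency, e.g. "#ff0000cc"
    position: tuple             # placement within the first scene space
    direction: float            # orientation in degrees
    description: str            # additional description
    latitude: float
    longitude: float
    altitude: float
    plane_features: list = field(default_factory=list)

@dataclass
class SceneModel:
    carrier_model: dict         # the sharable carrier model content
    source: ModelSource         # automatically stored by the terminal

def package_scene_model(carrier_model, source):
    """Bundle the carrier model and its source info into one upload payload."""
    return asdict(SceneModel(carrier_model=carrier_model, source=source))

payload = package_scene_model(
    {"mesh": "arrow.obj"},
    ModelSource(1.0, "#ff0000", (0.0, 0.0, 0.0), 90.0, "exit sign",
                31.23, 121.47, 4.5, ["desk_plane"]),
)
print(sorted(payload))  # ['carrier_model', 'source']
```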
Step S206, acquiring a second scene space. The second scene space may be acquired as follows: the browsing user goes to an area near the second scene space and scans it through the terminal 12; alternatively, the authoring user visits various areas, shoots them on site and marks them with tags (for example, labeling each on-site coordinate with tags such as supermarket or school), and the browsing user directly designates one or more tags to acquire the second scene space. The browsing user may be an authoring user performing secondary authoring, and there may be a plurality of browsing users.
Step S208, receiving the scene model and displaying it to the browsing user when the coincidence degree of the second scene space and the first scene space is greater than a preset threshold. The terminal 12 automatically reads updates from the server 14, or the server 14 pushes all scene models created within a certain range of the current geographic position and altitude to the terminal 12. The browsing user scans the second scene space through the terminal 12; if the coincidence degree is greater than the preset threshold, the second scene space closely coincides with the plane on which the created scene model is located, and the scene model is displayed. The preset threshold is a comprehensive matching degree of the second scene space and the first scene space set by the user, and may be, for example, 80%.
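The threshold test in step S208 can be sketched as follows. The 80% figure comes from the text; the coincidence computation itself (here, a Jaccard-style overlap of plane-feature sets) is a placeholder assumption, since the patent does not define how the coincidence degree is computed.

```python
# Toy sketch of the coincidence-degree check gating scene-model display.

PRESET_THRESHOLD = 0.80  # comprehensive matching degree set by the user

def coincidence(features_a, features_b):
    """Placeholder coincidence degree: fraction of shared plane features."""
    a, b = set(features_a), set(features_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def should_display(first_space, second_space):
    """Display the scene model only when coincidence exceeds the threshold."""
    return coincidence(first_space, second_space) > PRESET_THRESHOLD

first = ["desk", "floor", "wall_n", "wall_e", "shelf"]
second = ["desk", "floor", "wall_n", "wall_e", "shelf", "chair"]
print(should_display(first, second))  # 5/6 ≈ 0.83 > 0.80, so True
```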
In the related art, scene model sharing is generally realized through a single machine or a local area network. In the embodiment of the invention, through steps S202 to S208, a sharable carrier model created by the authoring user based on the first scene space is acquired; the sharable carrier model and the model source information are packaged into a scene model and uploaded to a server; a second scene space scanned by the browsing user is acquired; and the scene model is received and displayed when the coincidence degree of the second scene space and the first scene space is greater than a preset threshold. The scene model can thus be seen from different viewpoints at different times, which solves the problem that scene model sharing is heavily restricted.
In an embodiment, a method for sharing a scene model is provided, and fig. 3 is a flowchart of a second method for sharing a scene model according to an embodiment of the present invention, as shown in fig. 3, the method includes the following steps:
step S302, a first scene space is obtained, and scene verification is carried out according to the first scene space; wherein, this scene check includes: acquiring geometric features of the first scene space (including a single plane) for verification; or, acquiring the feature code arranged in the first scene space for verification.
The first scene space must be selected through the scene check performed by the terminal 12. When the scene check uses the geometric features of the first scene space, the first scene space must have sufficient geometric features, or its spatial features must be sufficiently distinctive, to pass. For example, when a desk in the first scene space is scanned, the terminal 12 may segment and binarize the first scene space containing the scanned plane of the desk to obtain the geometric features of the desk; because the desk has sufficient geometric features, the first scene space passes the scene check. Alternatively, the scene check may be performed through a feature code set in the first scene space: for example, a feature code is arranged in the area of the first scene space, and the first scene space passes the scene check once the terminal 12 confirms that the feature code is contained in it.
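The geometric-feature branch of the scene check can be sketched as binarizing a grayscale patch and counting intensity transitions as a crude stand-in for "sufficient geometric features". The thresholds and the feature measure here are assumptions for illustration; the patent names segmentation and binarization but does not specify the algorithm.

```python
# Hypothetical sketch: binarize an image patch, then count edge transitions
# to decide whether a scene has enough geometric features to pass the check.

def binarize(gray, threshold=128):
    """Map each pixel to 1 (bright) or 0 (dark) against a fixed threshold."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

def count_transitions(binary):
    """Count horizontal 0/1 transitions as a rough feature measure."""
    return sum(1 for row in binary
               for a, b in zip(row, row[1:]) if a != b)

def scene_check(gray, min_features=4):
    """Pass when the binarized patch shows enough transitions (assumed rule)."""
    return count_transitions(binarize(gray)) >= min_features

desk_patch = [            # high-contrast edges, e.g. a desk against the floor
    [0, 255, 0, 255, 0],
    [255, 0, 255, 0, 255],
]
blank_wall = [[200] * 5, [200] * 5]  # featureless: uniform brightness

print(scene_check(desk_patch), scene_check(blank_wall))
```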
Step S304, instructing the authoring user to make the sharable carrier model if the scene check passes.
Through steps S302 to S304, scene verification is performed on the first scene space, using either its geometric features or a feature code, so that the display position of the scene model is positioned more accurately and stably.
In an embodiment, a method for sharing a scene model is provided, and fig. 4 is a flowchart three of a method for sharing a scene model according to an embodiment of the present invention, as shown in fig. 4, the method includes the following steps:
step S402, acquiring first interaction information added by the browsing user according to the scene model, and uploading the scene model and the first interaction information to the server 14; the first interaction information is bound with the scene model through a binding correspondence, for example, the first interaction information includes three-dimensional coordinate information, a spatial coordinate system is established with the terminal 12 as an origin, and the first interaction information is added to the scene model according to the three-dimensional coordinate information.
The first interaction information further includes feedback information and operation information based on the scene model. Through certain interactions, the browsing user can view the additional description (such as a jump link), creation time and ownership information of the scene model, and can also give positive feedback, give negative feedback, or hide the content from display. The user's feedback and interactive behavior toward the scene model's content serve as evaluation material for the scene model and are transmitted back to the server 14; as this evaluation material accumulates, it influences the display time, occasions and restricted range of the scene model.
For example, the scene model is a model created by the authoring user based on a supermarket and includes commodity information of the supermarket. The browsing user can read the commodity information of the scene model, add comments on it, or perform operations such as liking and disliking, and these evaluation materials are uploaded to the server 14. Meanwhile, when negative comments on a commodity accumulate to a certain value, the commodity may be taken off the shelf and its information deleted from the scene model. Alternatively, the scene model further includes a dialog box or button created by the authoring user, and when the browsing user operates the dialog box or button, the terminal 12 plays the associated video or audio to the browsing user.
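The feedback-accumulation behavior in the supermarket example can be sketched as follows. The data layout and the negative-comment limit are assumptions made for the sketch; the patent only says negative comments "accumulate to a certain value" before the commodity information is removed.

```python
# Hypothetical sketch: accumulate feedback on a commodity; once negative
# feedback reaches a limit, delete its information from the scene model.

NEGATIVE_LIMIT = 3  # assumed value for "accumulate to a certain value"

def apply_feedback(scene_model, commodity, feedback):
    """Record one piece of feedback; remove the commodity past the limit."""
    entry = scene_model["commodities"][commodity]
    entry["feedback"].append(feedback)
    negatives = sum(1 for f in entry["feedback"] if f == "negative")
    if negatives >= NEGATIVE_LIMIT:
        del scene_model["commodities"][commodity]  # taken off the shelf
    return scene_model

model = {"commodities": {"milk": {"feedback": []}}}
for _ in range(3):
    apply_feedback(model, "milk", "negative")
print("milk" in model["commodities"])  # removed after 3 negative comments
```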
Through the step S402, the first interaction information added by the browsing user according to the scene model is obtained, and the scene model and the first interaction information are uploaded to the server 14, so that interactivity between users in scene model sharing is enhanced, and interaction information of the scene model is enriched.
In an embodiment, a method for sharing a scene model is provided, and fig. 5 is a fourth flowchart of a method for sharing a scene model according to an embodiment of the present invention, as shown in fig. 5, the method includes the following steps:
step S502, establishing a first space model according to the first scene space, acquiring a three-dimensional model manufactured by the authoring user based on the first scene space, and forming the three-dimensional model and the first space model into the sharable carrier model; wherein the first space model is determined by one or more plane features in the space and the incidence relation among the plane features; before the authoring user authors or refers to modify the three-dimensional model, the terminal 12 performs real-time spatial modeling based on the first scene space, and places the three-dimensional model when the spatial modeling is completed.
Through the step S502, the first space model is established in real time according to the first scene space, and then the three-dimensional model is placed on the first space model, so that the spatial information of the sharable carrier model is more accurately recorded, and the sharing accuracy of the scene model is further improved.
In an embodiment, a method for sharing a scene model is provided, and fig. 6 is a flowchart of a method for sharing a scene model according to an embodiment of the present invention, as shown in fig. 6, the method includes the following steps:
step S602, establishing a second space model according to the second scene space, comparing the matching degree of the first space model and the second space model, and obtaining the goodness of fit according to the matching degree; and in the view angle of a browsing user, performing real-time space modeling based on the second scene space in the whole time and the whole process, and comparing whether the matching degree of the second space model is close to the matching degree of the first space model, wherein the comparison can be performed through local models of longitude and latitude, height and orientation position within a certain phase difference range.
Step S604, determining the scene model within a preset range according to the goodness-of-fit result when the goodness of fit is greater than a preset threshold; wherein the preset range includes a preset space or a preset time. For example, the scene model may be restricted to a certain area or a certain validity period, or to models whose model ID falls within a certain numerical range; the preset range may also be defined by other conditions in the business associated with the scene model. Meanwhile, in addition to matching by goodness of fit, a matching scene model can be determined by comparing a scene tag with a preset tag: the scene tag is a tag contained in the second scene space and used for indexing, and the preset tag is a tag calibrated on site by the authoring user, which improves the efficiency of the scene model sharing system.
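The preset-range filtering in step S604 can be sketched as restricting candidate scene models to a spatial radius and a validity window before they are sent to the terminal. All field names, the distance metric and the range values below are assumptions for the sketch; the patent only states that the range may be a preset space, time, or service-related rule.

```python
# Hypothetical sketch: keep only scene models inside a preset spatial
# radius (in meters) and a preset validity window (in seconds).

def in_preset_range(model, now, center, radius_m, max_age_s):
    """True when the model lies within the spatial radius and time window."""
    dx = model["x"] - center[0]
    dy = model["y"] - center[1]
    close_enough = (dx * dx + dy * dy) ** 0.5 <= radius_m
    fresh_enough = (now - model["created_at"]) <= max_age_s
    return close_enough and fresh_enough

now = 1_000_000
models = [
    {"id": 1, "x": 0, "y": 0, "created_at": now - 100},
    {"id": 2, "x": 500, "y": 0, "created_at": now - 100},    # too far away
    {"id": 3, "x": 10, "y": 10, "created_at": now - 9_999},  # expired
]
selected = [m["id"] for m in models
            if in_preset_range(m, now, (0, 0), 50, 3_600)]
print(selected)  # [1]
```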
Step S606, receiving and displaying the scene model; wherein the scene model is transmitted via the server 14.
Through steps S602 to S606, a second space model is established according to the second scene space, a goodness of fit between the first space model and the second space model is obtained from their matching degree, the scene model within the preset range is determined when the goodness of fit is high, and the terminal 12 receives and displays the scene model, which further improves the accuracy of scene model sharing; meanwhile, restricting the scene model to a preset range improves the efficiency of the comparison.
In one embodiment, a method for scene model sharing is provided, the method comprising the steps of:
acquiring the authoring user's interactive operation on the first scene space, and displaying second interaction information according to the interactive operation; the second interaction information includes service information and associated information of the first scene space. The second interaction information can also be added to the sharable carrier model according to the three-dimensional coordinate information it carries. The authoring user may interact with the first scene space through the terminal 12 while creating the sharable carrier model; for example, the terminal 12 displays a dialog box to the authoring user in the first scene space so that the user can select and browse related services, or plays the associated video or audio when the authoring user clicks a play button.
Through the above steps, the authoring user's interactive operation on the first scene space is acquired and interaction information is displayed accordingly, which provides interactivity during the creation of the sharable carrier model and enriches its interaction information.
In this embodiment, a method for sharing a scene model is provided, and fig. 7 is a sixth flowchart of a method for sharing a scene model according to an embodiment of the present invention, as shown in fig. 7, the method includes the following steps:
step S702, receiving a scene model uploaded by the terminal 12; the terminal 12 obtains a sharable carrier model created by an authoring user based on a first scene space, obtains model source information according to the sharable carrier model, and packages the sharable carrier model and the model source information into the scene model.
Step S704, sending the scene model to the terminal 12 when the coincidence degree between the second scene space and the first scene space is greater than a preset threshold; wherein the terminal 12 displays the scene model to the browsing user.
Through steps S702 to S704, the scene model uploaded by the terminal 12 is received and sent back to the terminal 12 for display when the coincidence degree between the second scene space and the first scene space is high. The server 14 thus stores the scene model, multiple users can conveniently interact based on it, and the problem that scene model sharing is heavily restricted is solved.
It should be understood that, although the steps in the flowcharts of FIGS. 2 to 7 are shown in the sequence indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise, there is no strict order restriction on these steps, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 2 to 7 may include multiple sub-steps or stages that are not necessarily completed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially; they may be performed in turn or alternately with other steps or with at least some sub-steps or stages of other steps.
In this embodiment, a system for sharing a scene model is provided, and fig. 8 is a block diagram of a structure of a system for sharing a scene model according to an embodiment of the present invention, as shown in fig. 8, the system includes a terminal 12 and a server 14;
the terminal 12 acquires a sharable carrier model made by an authoring user based on a first scene space;
the terminal 12 obtains model source information according to the sharable carrier model, packages the sharable carrier model and the model source information into a scene model, and uploads the scene model to the server 14;
the terminal 12 acquires a second scene space scanned by the browsing user;
when the coincidence degree of the second scene space and the first scene space is greater than the preset threshold value, the terminal 12 receives and displays the scene model sent by the server 14.
With the above embodiment, the terminal 12 obtains a sharable carrier model created by an authoring user based on a first scene space, packages the sharable carrier model and the model source information into a scene model, and uploads the scene model to the server 14. The terminal 12 then acquires a second scene space scanned by the browsing user and, when the degree of coincidence between the second scene space and the first scene space exceeds the preset threshold, receives and displays the scene model from the server 14. The scene model can thus be viewed from different users' perspectives at different times, which eases the restrictions that otherwise limit scene model sharing.
In one embodiment, the terminal 12 is further configured to acquire the first scene space and perform a scene check on it. The scene check includes: acquiring the geometric features of the first scene space for verification; or acquiring a feature code arranged in the first scene space for verification.
If the scene check passes, the terminal 12 instructs the authoring user to make the sharable carrier model.
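The two scene-check branches can be illustrated as follows. The function names, the dimension-based geometry test, and the tolerance value are all hypothetical; the embodiment only requires that either geometric features of the space or a feature code placed in it be verified before the authoring user is instructed to make the sharable carrier model.

```python
import math


def check_by_geometry(scanned_dims, registered_dims, tol=0.05):
    """Compare scanned room dimensions (w, d, h in metres) against the
    dimensions registered for the first scene space, within a relative tolerance."""
    return all(
        math.isclose(s, r, rel_tol=tol)
        for s, r in zip(scanned_dims, registered_dims)
    )


def check_by_feature_code(detected_code, registered_codes):
    """Accept the scene if a feature code detected in it (e.g. a marker placed
    in the room) matches one registered for the first scene space."""
    return detected_code in registered_codes


def scene_check(scanned_dims=None, detected_code=None,
                registered_dims=None, registered_codes=()):
    """Pass if either branch succeeds; only then would the terminal instruct
    the authoring user to make the sharable carrier model."""
    if scanned_dims and registered_dims and check_by_geometry(scanned_dims, registered_dims):
        return True
    return check_by_feature_code(detected_code, registered_codes)
```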
In one embodiment, the terminal 12 is further configured to obtain first interaction information added by the browsing user according to the scene model, and upload the scene model and the first interaction information to the server 14; wherein the first interactive information comprises: feedback information and operational information based on the scene model.
In one embodiment, the terminal 12 is further configured to establish a first space model according to the first scene space, obtain a three-dimensional model produced by the authoring user based on the first scene space, and combine the three-dimensional model with the first space model to form the sharable carrier model.
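A minimal sketch of this embodiment, under heavily simplified assumptions: the first space model keeps only anchor points, the authoring user's three-dimensional model is an opaque dict, and the packaged scene model is serialized as JSON. None of these representation choices come from the patent.

```python
import json
import time


def build_space_model(scene_space):
    """Reduce a scanned scene space to a first space model; here only its
    anchor identifiers are kept, whereas a real system would retain planes,
    meshes, and feature points."""
    return {"anchors": sorted(scene_space)}


def compose_carrier_model(space_model, three_d_model):
    """Combine the authoring user's 3D content with the space model so the
    content stays anchored to the first scene space."""
    return {"space": space_model, "content": three_d_model}


def package_scene_model(carrier_model, author):
    """Bundle the sharable carrier model with model source information into
    a scene model ready for upload to the server."""
    source_info = {"author": author, "created_at": int(time.time())}
    return json.dumps({"carrier": carrier_model, "source": source_info})
```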
In one embodiment, the terminal 12 is further configured to establish a second space model according to the second scene space, compare the first space model with the second space model, and obtain a goodness of fit from the resulting matching degree.
When the goodness of fit is greater than a preset threshold, the terminal 12 determines the scene models within a preset range according to the goodness-of-fit result; the preset range covers a preset space, a preset time, or a preset service-related rule.
The terminal 12 then receives the scene model sent by the server 14 and displays it to the browsing user.
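The matching step can be sketched as follows. The patent does not define the matching-degree metric, so this illustration assumes both space models are reduced to 3-D anchor points and takes the goodness of fit to be the fraction of first-model anchors that have a nearby counterpart in the second model.

```python
import math


def goodness_of_fit(first_anchors, second_anchors, max_dist=0.1):
    """first_anchors / second_anchors: lists of (x, y, z) anchor points from
    the first and second space models. Returns a value in [0, 1]."""
    if not first_anchors:
        return 0.0
    matched = sum(
        1 for p in first_anchors
        if any(math.dist(p, q) <= max_dist for q in second_anchors)
    )
    return matched / len(first_anchors)


def should_display(first_anchors, second_anchors, threshold=0.8):
    """Display the scene model only when the goodness of fit exceeds the
    preset threshold, as in the embodiment above."""
    return goodness_of_fit(first_anchors, second_anchors) > threshold
```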
In one embodiment, the terminal 12 is further configured to capture, in the first scene space, the authoring user's interactive operations on that space, and to display second interaction information according to those operations; the second interaction information includes service information and associated information of the first scene space.
In one embodiment, a computer device is provided, which may be the terminal 12. Fig. 9 is an internal block diagram of a computer device according to an embodiment of the present invention. As shown in fig. 9, the computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor provides computing and control capabilities. The memory comprises a non-volatile storage medium and internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides the environment in which they run. The network interface communicates with an external terminal over a network connection. When executed by the processor, the computer program implements a method of sharing a scene model. The display screen may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, trackball, or touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of part of the structure related to the present solution and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, an augmented reality device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor executes the computer program to implement the steps in the scene model sharing method provided in the foregoing embodiments.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the scene model sharing method provided by the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments can be implemented by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus DRAM (RDRAM), direct Rambus DRAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any combination of them that contains no contradiction should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and while their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for scene model sharing, the method comprising:
acquiring a sharable carrier model manufactured by an authoring user based on a first scene space;
obtaining model source information according to the sharable carrier model, packaging the sharable carrier model and the model source information into a scene model, and uploading the scene model to a server;
acquiring a second scene space;
and receiving the scene model and displaying the scene model to a browsing user under the condition that the coincidence degree of the second scene space and the first scene space is greater than a preset threshold value.
2. The method of claim 1, wherein prior to obtaining the sharable carrier model produced by the authoring user based on the first scene space, the method comprises:
acquiring the first scene space, and performing scene verification according to the first scene space; wherein the scene check includes: acquiring the geometric characteristics of the first scene space for verification; or, acquiring a feature code arranged in the first scene space for verification;
instructing the authoring user to produce the sharable carrier model if the scene check passes.
3. The method according to claim 1, wherein after receiving and displaying the scene model to a browsing user in a case that the coincidence degree of the second scene space and the first scene space is greater than a preset threshold value, the method comprises:
acquiring first interaction information added by the browsing user according to the scene model, and uploading the scene model and the first interaction information to the server; wherein the first interaction information comprises: feedback information and operational information based on the scene model.
4. The method of claim 1, wherein obtaining a sharable carrier model produced by an authoring user based on a first scene space comprises:
and establishing a first space model according to the first scene space, acquiring a three-dimensional model manufactured by the authoring user based on the first scene space, and forming the three-dimensional model and the first space model into the sharable carrier model.
5. The method according to claim 4, wherein the receiving and displaying the scene model to a browsing user in the case that the coincidence degree of the second scene space and the first scene space is greater than a preset threshold value comprises:
establishing a second space model according to the second scene space, comparing the matching degree of the first space model and the second space model, and acquiring the goodness of fit according to the matching degree;
determining the scene model in the preset range according to the result of the goodness of fit when the goodness of fit is greater than a preset threshold; wherein the preset range includes: presetting a space, a time or a preset rule related to the service;
and receiving the scene model and displaying the scene model to the browsing user.
6. The method of claim 1, wherein prior to obtaining the sharable carrier model produced by the authoring user based on the first scene space, the method comprises:
according to the first scene space, acquiring the interactive operation of the authoring user on the first scene space; displaying second interactive information according to the interactive operation; the second interaction information comprises service information and associated information of the first scene space.
7. The method of claim 1, wherein after the obtaining the second scene space, the method further comprises:
under the condition that a scene label is matched with a preset label, receiving the scene model containing the scene label, and displaying the scene model to the browsing user; the scene tags are tags contained in the second scene space and used for indexing, and the preset tags are tags calibrated by the authoring user according to the site.
8. A system for sharing a scene model, characterized in that the system comprises a terminal and a server;
the terminal acquires a sharable carrier model manufactured by an authoring user based on a first scene space;
the terminal acquires model source information according to the sharable carrier model, packages the sharable carrier model and the model source information into a scene model, and uploads the scene model to the server;
the terminal acquires a second scene space;
and the terminal receives the scene model sent by the server and displays the scene model to a browsing user under the condition that the coincidence degree of the second scene space and the first scene space is greater than a preset threshold value.
9. An augmented reality device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method of any one of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN201911382795.6A 2019-12-27 2019-12-27 Scene model sharing method and system, augmented reality device and storage medium Active CN111158480B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911382795.6A CN111158480B (en) 2019-12-27 2019-12-27 Scene model sharing method and system, augmented reality device and storage medium

Publications (2)

Publication Number Publication Date
CN111158480A true CN111158480A (en) 2020-05-15
CN111158480B CN111158480B (en) 2022-11-25

Family

ID=70558717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911382795.6A Active CN111158480B (en) 2019-12-27 2019-12-27 Scene model sharing method and system, augmented reality device and storage medium

Country Status (1)

Country Link
CN (1) CN111158480B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101303773A (en) * 2008-06-10 2008-11-12 中国科学院计算技术研究所 Method and system for generating virtual scene
US20140082018A1 (en) * 2011-03-22 2014-03-20 Baidu Online Network Technology (Beijing) Co., Ltd Device and Method for Obtaining Shared Object Related to Real Scene
CN103929479A (en) * 2014-04-10 2014-07-16 惠州Tcl移动通信有限公司 Method and system for simulating real scene through mobile terminal to achieve user interaction
CN108109464A (en) * 2017-12-26 2018-06-01 佛山市道静科技有限公司 A kind of shared bicycle learning system based on VR technologies
CN109783914A (en) * 2018-12-29 2019-05-21 河北德冠隆电子科技有限公司 A kind of pretreatment dynamic modelling method and device based on virtual reality emulation
CN110096814A (en) * 2019-05-05 2019-08-06 广西路桥工程集团有限公司 A kind of digitlization bridge construction system based on BIM model
CN110211227A (en) * 2019-04-30 2019-09-06 深圳市思为软件技术有限公司 A kind of method for processing three-dimensional scene data, device and terminal device

Also Published As

Publication number Publication date
CN111158480B (en) 2022-11-25

Similar Documents

Publication Publication Date Title
CN108830692B (en) Remote panoramic house-viewing method and device, user terminal, server and storage medium
CN109726647B (en) Point cloud labeling method and device, computer equipment and storage medium
US10147399B1 (en) Adaptive fiducials for image match recognition and tracking
US20130278633A1 (en) Method and system for generating augmented reality scene
US20090161963A1 Method, apparatus and computer program product for utilizing real-world affordances of objects in audio-visual media data to determine interactions with the annotations to the objects
CN111031293B (en) Panoramic monitoring display method, device and system and computer readable storage medium
US20140082018A1 (en) Device and Method for Obtaining Shared Object Related to Real Scene
CN112330819B (en) Interaction method and device based on virtual article and storage medium
CN107084740B (en) Navigation method and device
CN111240543A (en) Comment method and device, computer equipment and storage medium
US11817129B2 (en) 3D media elements in 2D video
CN112181250A (en) Mobile terminal webpage screenshot method, device, equipment and storage medium
CN114387400A (en) Three-dimensional scene display method, display device, electronic equipment and server
CN105447534A (en) Imaged-based information presenting method and device
CN110880139A (en) Commodity display method, commodity display device, terminal, server and storage medium
CN110990700A (en) Comment information display method, device, client, server and system
US20220189075A1 (en) Augmented Reality Display Of Commercial And Residential Features During In-Person Real Estate Showings/Open Houses and Vacation Rental Stays
KR20180029690A (en) Server and method for providing and producing virtual reality image about inside of offering
CN116524088B (en) Jewelry virtual try-on method, jewelry virtual try-on device, computer equipment and storage medium
CN111158480B (en) Scene model sharing method and system, augmented reality device and storage medium
CN110990106B (en) Data display method and device, computer equipment and storage medium
KR101947553B1 (en) Apparatus and Method for video edit based on object
KR101909994B1 (en) Method for providing 3d animating ar contents service using nano unit block
TW201923549A (en) System of digital content as in combination with map service and method for producing the digital content
US20140258829A1 (en) Webform monitoring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant