CN114004953A - Method and system for implementing an augmented reality picture, and cloud server

Info

Publication number: CN114004953A
Application number: CN202010737541.8A
Authority: CN (China)
Prior art keywords: picture, terminal, cloud server, real, live
Other languages: Chinese (zh)
Inventors: 刘晓军, 唐宏, 武娟, 徐晓青
Assignee (current and original): China Telecom Corp Ltd
Application filed by: China Telecom Corp Ltd
Priority date and filing date: 2020-07-28 (priority to CN202010737541.8A)
Publication date: 2022-02-01 (publication of CN114004953A)
Legal status: Pending

Classifications

    • G - PHYSICS
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 19/00 - Manipulating 3D models or images for computer graphics
                    • G06T 19/006 - Mixed reality
            • G06F - ELECTRIC DIGITAL DATA PROCESSING
                • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
                    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
                        • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
                        • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
                            • G06F 3/0481 - based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
                                • G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present disclosure provides a method and a system for implementing an augmented reality picture, and a cloud server, and relates to the technical field of augmented reality. The method includes: a cloud server receives real-scene information uploaded by a terminal, where the real-scene information includes at least one of position information of the terminal and an identification of a real-scene object in a real-scene picture captured by the terminal; the cloud server determines a virtual-scene picture corresponding to the real-scene object according to the real-scene information; and the cloud server delivers the virtual-scene picture to the terminal, so that the terminal superimposes the real-scene picture and the virtual-scene picture to form an augmented reality picture.

Description

Method and system for implementing an augmented reality picture, and cloud server
Technical Field
The present disclosure relates to the field of augmented reality (AR) technology, and in particular to a method and a system for implementing an augmented reality picture, and a cloud server.
Background
AR technology fuses virtual information with the real world. It draws on technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing, and applies simulated virtual information such as text, images, three-dimensional models, music, and video to the real world, thereby "augmenting" it.
Because it can superimpose intuitive, multidimensional information on a real scene and display three-dimensional images there, AR technology has promising prospects in many fields, such as gaming, education, and industrial manufacturing.
Disclosure of Invention
The inventors have noticed that, in the related art, implementing an AR picture requires the terminal itself to compute the virtual picture and then fuse it with the real picture, so the delay before the terminal displays the AR picture is large.
In order to solve the above problem, the embodiments of the present disclosure propose the following solutions.
According to one aspect of the embodiments of the present disclosure, a method for implementing an augmented reality picture is provided, including: a cloud server receives real-scene information uploaded by a terminal, where the real-scene information includes at least one of position information of the terminal and an identification of a real-scene object in a real-scene picture captured by the terminal; the cloud server determines a virtual-scene picture corresponding to the real-scene object according to the real-scene information; and the cloud server delivers the virtual-scene picture to the terminal, so that the terminal superimposes the real-scene picture and the virtual-scene picture to form an augmented reality picture.
In some embodiments, the terminal comprises a plurality of terminals, and the real-scene information uploaded by each terminal includes at least one of position information of that terminal and an identification of the same real-scene object in the real-scene picture captured by that terminal. In this case, the cloud server delivering the virtual-scene picture to the terminal includes: the cloud server delivers the virtual-scene picture to each terminal, so that each terminal superimposes the real-scene picture it captured with the virtual-scene picture to form an augmented reality picture.
In some embodiments, the cloud server constructs parallel operating environments for the real-scene information uploaded by the plurality of terminals, so that the operations of determining the virtual-scene picture corresponding to the real-scene object according to the real-scene information and delivering the virtual-scene picture to each terminal are executed in parallel.
In some embodiments, the cloud server determining the virtual-scene picture corresponding to the real-scene object according to the real-scene information includes: the cloud server looks up a correspondence between real-scene information and virtual-scene pictures according to the real-scene information; and the cloud server determines the virtual-scene picture corresponding to the real-scene information according to the correspondence.
In some embodiments, the cloud server determining the virtual-scene picture corresponding to the real-scene information according to the correspondence includes: the cloud server determines a script corresponding to the correspondence; and the cloud server executes the script to call up the virtual-scene picture corresponding to the real-scene information.
In some embodiments, the method further includes: the cloud server acquires the real-scene information of the real-scene object in advance; the cloud server creates the virtual-scene picture corresponding to the real-scene information of the real-scene object; and the cloud server associates the real-scene information of the real-scene object with the corresponding virtual-scene picture to obtain the correspondence.
In some embodiments, the cloud server acquiring the real-scene information of the real-scene object in advance includes: the cloud server acquires, in advance, a real-scene video containing the real-scene object; and the cloud server analyzes the real-scene video to obtain the real-scene information of the real-scene object. After the cloud server obtains the correspondence, it deletes the real-scene video and stores the real-scene information of the real-scene object.
In some embodiments, the real-scene information further includes tilt angle information of the terminal, and the cloud server determining the virtual-scene picture corresponding to the real-scene object according to the real-scene information includes: the cloud server determines an initial virtual-scene picture corresponding to the real-scene object according to at least one of the position information of the terminal and the identification of the real-scene object in the real-scene picture captured by the terminal; and the cloud server adjusts the tilt angle of the initial virtual-scene picture according to the tilt angle information, so as to obtain the virtual-scene picture corresponding to the real-scene object.
In some embodiments, the cloud server delivering the virtual-scene picture to each terminal includes: the cloud server presents a plurality of display modes for the virtual-scene picture, each display mode including at least one of: an appearance style of the virtual-scene picture, and a relative position between the virtual-scene picture and the real-scene picture; and, in response to a selection by the user of each terminal, the virtual-scene picture in the display mode selected by that user is delivered to the corresponding terminal.
According to another aspect of the embodiments of the present disclosure, a cloud server is provided, including: a receiving module configured to receive real-scene information uploaded by a terminal, where the real-scene information includes at least one of position information of the terminal and an identification of a real-scene object in a real-scene picture captured by the terminal; a determining module configured to determine a virtual-scene picture corresponding to the real-scene object according to the real-scene information; and a delivering module configured to deliver the virtual-scene picture to the terminal, so that the terminal superimposes the real-scene picture and the virtual-scene picture to form an augmented reality picture.
According to another aspect of the embodiments of the present disclosure, an apparatus for implementing an augmented reality picture is provided, including: a memory; and a processor coupled to the memory, the processor configured to perform the method of any of the above embodiments based on instructions stored in the memory.
According to another aspect of the embodiments of the present disclosure, a system for implementing an augmented reality picture is provided, including: the cloud server according to any one of the above embodiments; and a terminal configured to upload the real-scene information of a real-scene object contained in a captured real-scene picture, and to superimpose the real-scene picture and the virtual-scene picture to form an augmented reality picture.
According to a further aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method according to any one of the embodiments described above.
In the embodiments of the present disclosure, the cloud server determines the virtual-scene picture corresponding to the real-scene object according to the real-scene information uploaded by the terminal, and then delivers the virtual-scene picture to the terminal for the subsequent superimposition of the real-scene picture and the virtual-scene picture. Because the cloud server determines the virtual-scene picture and simply delivers it to the terminal, the processing speed of the augmented reality picture is increased and its delay is reduced. In addition, the terminal only needs to upload the real-scene information, so the capability requirements on the terminal are low, which broadens the range of terminals that can support AR.
The technical solution of the present disclosure is described in further detail below with reference to the accompanying drawings and embodiments.
Drawings
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of a method for implementing an augmented reality picture according to some embodiments of the present disclosure;
Fig. 2 is a flow chart of a method for implementing an augmented reality picture according to further embodiments of the present disclosure;
Fig. 3 is a schematic structural diagram of a cloud server according to some embodiments of the present disclosure;
Fig. 4 is a schematic structural diagram of a cloud server according to further embodiments of the present disclosure;
Fig. 5 is a schematic structural diagram of a system for implementing an augmented reality picture according to some embodiments of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art from the disclosed embodiments without creative effort fall within the protection scope of the present disclosure.
The relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Fig. 1 is a flow chart of a method for implementing an augmented reality picture according to some embodiments of the present disclosure.
In step 102, the cloud server receives the real-scene information uploaded by the terminal. Here, the real-scene information includes at least one of position information of the terminal and an identification of a real-scene object in a real-scene picture captured by the terminal.
After the terminal captures the real-scene picture, it can determine at least one of its own position information and the identification of the real-scene object contained in the picture, based on the terminal's positioning data and the recognition result for the real-scene picture. The AR service request may be initiated by opening an AR service client on the terminal, through an AR plug-in built into another client, by scanning a real object that provides an AR service with the camera, and so on.
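For illustration, the real-scene information uploaded in step 102 might be modeled as the following structure. This is a minimal sketch: the field names and the JSON serialization are assumptions, since the disclosure does not specify a wire format.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class RealSceneInfo:
    """Real-scene information uploaded by a terminal (hypothetical schema)."""
    terminal_id: str                   # identifies the uploading terminal
    latitude: Optional[float] = None   # position information (e.g., GPS)
    longitude: Optional[float] = None
    altitude: Optional[float] = None
    object_id: Optional[str] = None    # identification of the real-scene object
    tilt_deg: Optional[float] = None   # optional tilt angle of the terminal

def to_upload_payload(info: RealSceneInfo) -> str:
    """Serialize the real-scene information for upload to the cloud server."""
    # At least one of position information and object identification is required.
    if info.latitude is None and info.object_id is None:
        raise ValueError("real-scene info needs position or an object identification")
    return json.dumps(asdict(info))

payload = to_upload_payload(RealSceneInfo("term-01", 39.9, 116.4, object_id="earth-page-12"))
```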
In step 104, the cloud server determines a virtual-scene picture corresponding to the real-scene object according to the real-scene information.
For example, the cloud server looks up the correspondence between real-scene information and virtual-scene pictures according to the real-scene information uploaded by the terminal, and then determines the virtual-scene picture corresponding to that real-scene information according to the correspondence found.
In some embodiments, the cloud server may determine a script corresponding to the correspondence found, and execute the script to call up the virtual-scene picture corresponding to the real-scene information. In this way, the virtual-scene picture corresponding to the real-scene object can be determined more quickly.
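As an illustration of this lookup-then-execute flow, the sketch below keeps the correspondence as an in-memory table mapping an object identification to a callable that plays the role of the script. The names, the dict representation, and the asset paths are assumptions; the disclosure does not prescribe a data structure.

```python
from typing import Callable, Dict

# Correspondence between real-scene information (here reduced to an object
# identification) and the script that calls up the virtual-scene picture.
correspondence: Dict[str, Callable[[], str]] = {
    "earth-page-12": lambda: "assets/earth_3d.mp4",  # hypothetical asset path
}

def determine_virtual_scene(object_id: str) -> str:
    """Step 104: look up the correspondence for the uploaded real-scene
    information, then execute the associated script."""
    script = correspondence.get(object_id)
    if script is None:
        raise KeyError(f"no correspondence registered for {object_id!r}")
    return script()  # executing the script yields the virtual-scene picture

print(determine_virtual_scene("earth-page-12"))
```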
In step 106, the cloud server delivers the virtual-scene picture to the terminal, so that the terminal superimposes the real-scene picture and the virtual-scene picture to form an augmented reality picture.
In some implementations, the terminal superimposes the two pictures with the real-scene picture as the bottom layer and the virtual-scene picture as the top layer. It should be understood that the embodiments of the present disclosure are not limited to this; in other implementations, the terminal may instead use the virtual-scene picture as the bottom layer and the real-scene picture as the top layer.
For example, after determining the virtual-scene picture, the cloud server renders it to fix its position of appearance, its display form, and so on, and then delivers it to the terminal as an encoded video stream. Correspondingly, after receiving the video stream, the terminal decodes it to obtain the virtual-scene picture, superimposes the real-scene picture and the virtual-scene picture to form the augmented reality picture, and displays the result.
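The terminal-side superimposition might look like the following sketch, which composites one decoded virtual-scene frame over one captured real-scene frame using Pillow. Treating both pictures as RGBA images is an assumption made for illustration, and the video decoding step is elided.

```python
from PIL import Image

def superimpose(real_frame: Image.Image, virtual_frame: Image.Image) -> Image.Image:
    """Overlay the virtual-scene picture (top layer) on the real-scene
    picture (bottom layer) to form one augmented reality frame."""
    base = real_frame.convert("RGBA")
    overlay = virtual_frame.convert("RGBA").resize(base.size)
    # Alpha compositing keeps the real scene visible wherever the
    # virtual-scene frame is transparent.
    return Image.alpha_composite(base, overlay)
```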
In the above embodiment, the cloud server determines the virtual-scene picture corresponding to the real-scene object according to the real-scene information uploaded by the terminal, and then delivers the virtual-scene picture to the terminal for the subsequent superimposition of the real-scene picture and the virtual-scene picture. Because the cloud server determines the virtual-scene picture and simply delivers it to the terminal, the processing speed of the augmented reality picture is increased and its delay is reduced. In addition, the terminal only needs to upload the real-scene information, so the capability requirements on the terminal are low, which broadens the range of terminals that can support AR.
In some implementations, the cloud server may establish the correspondence between real-scene information and virtual-scene pictures as follows.
First, the cloud server acquires the real-scene information of a real-scene object in advance.
For example, the cloud server obtains, in advance, a real-scene video containing the real-scene object, and then analyzes the video to obtain the real-scene information of the object. The real-scene video may be uploaded by a terminal or captured on the cloud server side.
For another example, the cloud server may obtain the real-scene information of the real-scene object directly, without obtaining a real-scene video of it; that is, the real-scene information of the object is already stored on the cloud server.
Then, the cloud server creates the virtual-scene picture corresponding to the real-scene information of the real-scene object.
The virtual-scene picture corresponding to the real-scene information of a real-scene object can be determined according to the service to be provided to the user. For example, suppose the service is: the real-scene picture captured by the user shows a flat earth, the displayed virtual-scene picture is a three-dimensional earth, and the augmented reality picture shows the flat earth and the three-dimensional earth at the same time. In this case, the virtual-scene picture to create is the three-dimensional earth. Any missing resource materials for the virtual-scene picture, such as maps, pixel art, virtual characters, sound effects, and animation effects, can be supplemented accordingly.
Next, the cloud server associates the real-scene information of the real-scene object with the corresponding virtual-scene picture to obtain the correspondence.
For example, the real-scene information of the real-scene object is associated with the corresponding virtual-scene picture by means of a script, so that the virtual-scene picture corresponding to the real-scene information can later be called up by executing the script, as the sketch below illustrates.
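Continuing the earlier lookup sketch, registration might bind each real-scene object to a closure that serves as the script. The helper name and asset paths are hypothetical.

```python
from typing import Callable, Dict

# The same kind of table as in the step-104 sketch above.
correspondence: Dict[str, Callable[[], str]] = {}

def register_correspondence(object_id: str, virtual_asset: str) -> None:
    """Associate a real-scene object with its virtual-scene picture; the
    stored closure is the 'script' executed later to call the picture up."""
    correspondence[object_id] = lambda: virtual_asset

# Build the correspondence once, ahead of any terminal request.
register_correspondence("earth-page-12", "assets/earth_3d.mp4")
register_correspondence("museum-statue-7", "assets/statue_story.mp4")
```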
In this way, the virtual-scene pictures corresponding to the real-scene information of multiple real-scene objects can be determined, and the real-scene information of each real-scene object can be associated with its corresponding virtual-scene picture to obtain the correspondence. When a terminal later uploads the real-scene information of one of these real-scene objects, the corresponding virtual-scene picture can be determined according to the correspondence.
In some embodiments, the cloud server analyzes the pre-acquired real-scene video to obtain the real-scene information of the real-scene object and, after obtaining the correspondence, deletes the real-scene video and stores only the real-scene information. This reduces the storage pressure on the cloud server.
The inventors also noticed that, in a multi-terminal application scenario, the delays with which different terminals display the augmented reality picture differ, because the processing capabilities of the terminals differ. How an augmented reality picture is implemented in a multi-terminal scenario is described below with reference to Fig. 2.
Fig. 2 is a flow chart of a method for implementing an augmented reality picture according to further embodiments of the present disclosure. Only the differences from the embodiment shown in Fig. 1 are described below; for everything else, refer to the description above.
In step 202, the cloud server receives the real-scene information uploaded by each terminal of a plurality of terminals.
Here, the real-scene information uploaded by each terminal includes at least one of the position information of that terminal and an identification of the same real-scene object in the real-scene picture captured by that terminal.
In step 204, the cloud server determines the virtual-scene picture corresponding to the real-scene object according to the real-scene information.
Because the real-scene information uploaded by the different terminals describes the same real-scene object, the cloud server ultimately determines the same virtual-scene picture for all of them.
In step 206, the cloud server delivers the virtual-scene picture to each terminal, so that each terminal superimposes the real-scene picture it captured with the virtual-scene picture to form an augmented reality picture.
For example, the cloud server may present multiple display modes for the virtual-scene picture, each display mode including at least one of: an appearance style of the virtual-scene picture, and a relative position between the virtual-scene picture and the real-scene picture. In response to a selection by the user of each terminal, the virtual-scene picture in the display mode selected by that user is delivered to the corresponding terminal. In this way, the personalized needs of different users can be met while the processing speed is still improved.
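Per-user display-mode selection might be modeled as below. The two attributes mirror the ones named above, while the concrete style names and positions are invented for illustration.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass(frozen=True)
class DisplayMode:
    style: str     # appearance style of the virtual-scene picture
    position: str  # relative position between virtual- and real-scene pictures

# Hypothetical catalogue the cloud server offers to every user.
DISPLAY_MODES: Dict[str, DisplayMode] = {
    "cartoon-left": DisplayMode(style="cartoon", position="left-of-object"),
    "photoreal-overlay": DisplayMode(style="photorealistic", position="over-object"),
}

def select_mode(terminal_id: str, chosen: str) -> Tuple[str, DisplayMode]:
    """Record which rendering of the virtual-scene picture to deliver to
    the terminal whose user made this selection."""
    return terminal_id, DISPLAY_MODES[chosen]

print(select_mode("term-01", "cartoon-left"))
```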
In the above embodiment, the cloud server determines the virtual-scene picture and delivers it to every terminal, so no terminal has to compute the virtual picture itself. On the one hand, this increases the processing speed of the augmented reality picture and reduces the delay with which each terminal displays it; on the other hand, it reduces the delay deviation between the augmented reality pictures displayed by different terminals. This approach makes multi-user AR interaction within the same virtual scene possible.
In some embodiments, the cloud server constructs parallel operating environments for the real-scene information uploaded by the plurality of terminals, so that the operations of steps 204 and 206, determining the virtual-scene picture corresponding to the real-scene object and delivering it to each terminal, are executed in parallel. For example, based on central processing unit / graphics processing unit (CPU/GPU) virtualization technology, the cloud server may construct mutually isolated cloud operating environments, that is, parallel operating environments, for the AR service requests of different terminals. Each parallel operating environment includes, for example, independent CPU, GPU, storage, and network resources. A sketch of the parallel dispatch follows.
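The sketch below models only the parallel, isolated handling of per-terminal requests with OS worker processes; a real deployment would build the isolation on CPU/GPU virtualization as described above, and the lookup is reduced to a placeholder.

```python
from concurrent.futures import ProcessPoolExecutor
from typing import Tuple

def handle_request(terminal_id: str, object_id: str) -> Tuple[str, str]:
    """Steps 204 and 206 for one terminal: determine the virtual-scene
    picture and (conceptually) deliver it back to that terminal."""
    virtual_asset = f"assets/{object_id}.mp4"  # placeholder for the real lookup
    return terminal_id, virtual_asset

if __name__ == "__main__":
    terminals = ["term-01", "term-02", "term-03"]
    objects = ["earth-page-12"] * 3  # every terminal shot the same object
    # One worker process per request stands in for the mutually isolated
    # parallel operating environments.
    with ProcessPoolExecutor(max_workers=3) as pool:
        for terminal, asset in pool.map(handle_request, terminals, objects):
            print(terminal, "->", asset)
```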
In the above embodiment, the cloud server can deliver the virtual-scene picture to each terminal more quickly, which further increases the processing speed of the augmented reality picture and further reduces both the display delay at each terminal and the delay deviation between the augmented reality pictures displayed by different terminals. In addition, the resources of the cloud server can be fully reused.
In some embodiments, the real-scene information uploaded by the terminal further includes tilt angle information of the terminal. How the cloud server determines the virtual-scene picture corresponding to the real-scene object in this case is described below.
First, the cloud server determines an initial virtual-scene picture corresponding to the real-scene object according to at least one of the position information of the terminal and the identification of the real-scene object in the real-scene picture captured by the terminal.
For example, the initial virtual-scene picture corresponding to the real-scene object is determined according to the position information of the terminal. The position information may be Global Positioning System (GPS) information, such as longitude and latitude, altitude, and angle. As an example, if the distance between the terminal and a certain real-scene object is within a preset range, it can be concluded that this real-scene object appears in the real-scene picture the terminal is currently capturing, and that the real-scene information uploaded by the terminal is the real-scene information of this object. It should be understood that the cloud server stores the position information of multiple real-scene objects; the matching real-scene object is determined from the position information of the terminal together with these stored positions.
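The proximity test just described could be implemented as a great-circle distance check, as in the sketch below; the stored object table, the coordinates, and the 50 m preset range are assumptions for illustration.

```python
import math
from typing import Dict, Optional, Tuple

# Hypothetical stored positions of real-scene objects: id -> (lat, lon) in degrees.
OBJECT_POSITIONS: Dict[str, Tuple[float, float]] = {
    "museum-statue-7": (39.9163, 116.3972),
    "earth-globe-1": (39.9087, 116.3975),
}

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two points on the earth."""
    r = 6371000.0  # mean earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_object(lat: float, lon: float, preset_range_m: float = 50.0) -> Optional[str]:
    """Return a stored real-scene object within the preset range, if any."""
    for object_id, (olat, olon) in OBJECT_POSITIONS.items():
        if haversine_m(lat, lon, olat, olon) <= preset_range_m:
            return object_id
    return None

print(match_object(39.9164, 116.3970))  # close to the statue -> matched
```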
For another example, the initial virtual-scene picture corresponding to the real-scene object is determined according to the identification of the real-scene object. The identification may be a specific marker such as a school grade, a subject, or a page number; for example, a particular subject identifies the textbook of that subject.
For yet another example, the initial virtual-scene picture corresponding to the real-scene object is determined according to both the position information of the terminal and the identification of the real-scene object. In this way, the initial virtual-scene picture can be determined more accurately.
Then, the cloud server adjusts the tilt angle of the initial virtual-scene picture according to the tilt angle information of the terminal, so as to obtain the virtual-scene picture corresponding to the real-scene object.
Adjusting the tilt angle of the initial virtual-scene picture according to the tilt angle information of the terminal adapts the picture to the real-scene object, so that the virtual-scene picture is presented better.
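As a sketch, a planar tilt correction could simply rotate the initial virtual-scene frame by the reported angle. Treating the tilt as a single in-plane roll angle is a simplifying assumption; a full implementation would apply a 3D transform during rendering.

```python
from PIL import Image

def adjust_tilt(initial_frame: Image.Image, terminal_tilt_deg: float) -> Image.Image:
    """Rotate the initial virtual-scene picture so it stays aligned with the
    real-scene object as seen from a tilted terminal (in-plane roll only)."""
    # Rotating by the negative angle counteracts the terminal's tilt;
    # expand=False keeps the frame size so the overlay geometry is unchanged.
    return initial_frame.rotate(-terminal_tilt_deg, resample=Image.BICUBIC, expand=False)
```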
Each embodiment in this specification is described progressively, with the emphasis on its differences from the other embodiments; for what the embodiments have in common, they can be read against one another. Since the cloud server embodiments essentially correspond to the method embodiments, their description is brief; for the relevant details, refer to the description of the method embodiments.
Fig. 3 is a schematic structural diagram of a cloud server according to some embodiments of the present disclosure.
As shown in Fig. 3, the cloud server includes a receiving module 301, a determining module 302, and a delivering module 303.
The receiving module 301 is configured to receive the real-scene information uploaded by the terminal, where the real-scene information includes at least one of position information of the terminal and an identification of a real-scene object in a real-scene picture captured by the terminal. The determining module 302 is configured to determine a virtual-scene picture corresponding to the real-scene object according to the real-scene information. The delivering module 303 is configured to deliver the virtual-scene picture to the terminal, so that the terminal superimposes the real-scene picture and the virtual-scene picture to form an augmented reality picture.
Fig. 4 is a schematic structural diagram of a cloud server according to further embodiments of the present disclosure.
As shown in Fig. 4, the cloud server 400 includes a memory 401 and a processor 402 coupled to the memory 401; the processor 402 is configured to execute the method of any of the foregoing embodiments based on instructions stored in the memory 401.
The memory 401 may include, for example, a system memory, a fixed non-volatile storage medium, and the like. The system memory may store, for example, an operating system, application programs, a Boot Loader (Boot Loader), and other programs.
The cloud server 400 may further include an input-output interface 403, a network interface 404, a storage interface 405, and so on. The interfaces 403, 404, and 405, the memory 401, and the processor 402 may be connected, for example, by a bus 406. The input-output interface 403 provides a connection interface for input and output devices such as a display, a mouse, a keyboard, or a touch screen. The network interface 404 provides a connection interface for various networking devices. The storage interface 405 provides a connection interface for external storage devices such as an SD card or a USB flash drive.
Fig. 5 is a schematic structural diagram of a system for implementing an augmented reality picture according to some embodiments of the present disclosure.
As shown in Fig. 5, the system for implementing an augmented reality picture includes the cloud server of any of the above embodiments (for example, the cloud server 400) and one or more terminals 501.
Each terminal 501 is configured to upload the real-scene information of a real-scene object contained in the real-scene picture it captures, and to superimpose the real-scene picture and the virtual-scene picture to form an augmented reality picture.
The disclosed embodiments also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of the above embodiments.
Thus, various embodiments of the present disclosure have been described in detail. Some details that are well known in the art have not been described in order to avoid obscuring the concepts of the present disclosure. It will be fully apparent to those skilled in the art from the foregoing description how to practice the presently disclosed embodiments.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that the functions specified in one or more of the flows in the flowcharts and/or one or more of the blocks in the block diagrams can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. It will be understood by those skilled in the art that various changes may be made in the above embodiments or equivalents may be substituted for elements thereof without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (13)

1. A method for implementing an augmented reality picture, comprising:
a cloud server receiving real-scene information uploaded by a terminal, wherein the real-scene information comprises at least one of position information of the terminal and an identification of a real-scene object in a real-scene picture captured by the terminal;
the cloud server determining a virtual-scene picture corresponding to the real-scene object according to the real-scene information; and
the cloud server delivering the virtual-scene picture to the terminal, so that the terminal superimposes the real-scene picture and the virtual-scene picture to form an augmented reality picture.
2. The method according to claim 1, wherein the terminal comprises a plurality of terminals, and the real-scene information uploaded by each terminal comprises at least one of position information of that terminal and an identification of the same real-scene object in a real-scene picture captured by that terminal;
wherein the cloud server delivering the virtual-scene picture to the terminal so that the terminal superimposes the real-scene picture and the virtual-scene picture to form an augmented reality picture comprises:
the cloud server delivering the virtual-scene picture to each terminal, so that each terminal superimposes the real-scene picture it captured with the virtual-scene picture to form an augmented reality picture.
3. The method according to claim 2, wherein the cloud server constructs parallel operating environments for the real-scene information uploaded by the plurality of terminals, so that the operations of determining the virtual-scene picture corresponding to the real-scene object according to the real-scene information and delivering the virtual-scene picture to each terminal are executed in parallel.
4. The method according to claim 1, wherein the cloud server determining the virtual-scene picture corresponding to the real-scene object according to the real-scene information comprises:
the cloud server looking up a correspondence between real-scene information and virtual-scene pictures according to the real-scene information; and
the cloud server determining the virtual-scene picture corresponding to the real-scene information according to the correspondence.
5. The method according to claim 4, wherein the cloud server determining the virtual-scene picture corresponding to the real-scene information according to the correspondence comprises:
the cloud server determining a script corresponding to the correspondence; and
the cloud server executing the script to call up the virtual-scene picture corresponding to the real-scene information.
6. The method according to claim 4, further comprising:
the cloud server acquiring the real-scene information of the real-scene object in advance;
the cloud server creating the virtual-scene picture corresponding to the real-scene information of the real-scene object; and
the cloud server associating the real-scene information of the real-scene object with the corresponding virtual-scene picture to obtain the correspondence.
7. The method according to claim 6, wherein the cloud server acquiring the real-scene information of the real-scene object in advance comprises:
the cloud server acquiring, in advance, a real-scene video containing the real-scene object;
the cloud server analyzing the real-scene video to obtain the real-scene information of the real-scene object; and
the cloud server, after obtaining the correspondence, deleting the real-scene video and storing the real-scene information of the real-scene object.
8. The method according to any one of claims 1-7, wherein the real-scene information further comprises tilt angle information of the terminal;
wherein the cloud server determining the virtual-scene picture corresponding to the real-scene object according to the real-scene information comprises:
the cloud server determining an initial virtual-scene picture corresponding to the real-scene object in the real-scene picture captured by the terminal, according to at least one of the position information of the terminal and the identification of the real-scene object in the real-scene picture captured by the terminal; and
the cloud server adjusting the tilt angle of the initial virtual-scene picture according to the tilt angle information, so as to obtain the virtual-scene picture corresponding to the real-scene object.
9. The method according to any one of claims 2 to 7, wherein the cloud server delivering the virtual-scene picture to each terminal comprises:
the cloud server presenting a plurality of display modes for the virtual-scene picture, each display mode comprising at least one of: an appearance style of the virtual-scene picture, and a relative position between the virtual-scene picture and the real-scene picture; and
in response to a selection by the user corresponding to each terminal, delivering the virtual-scene picture in the display mode selected by that user to the corresponding terminal.
10. A cloud server, comprising:
a receiving module configured to receive real-scene information uploaded by a terminal, wherein the real-scene information comprises at least one of position information of the terminal and an identification of a real-scene object in a real-scene picture captured by the terminal;
a determining module configured to determine a virtual-scene picture corresponding to the real-scene object according to the real-scene information; and
a delivering module configured to deliver the virtual-scene picture to the terminal, so that the terminal superimposes the real-scene picture and the virtual-scene picture to form an augmented reality picture.
11. A cloud server, comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the method of any of claims 1-9 based on instructions stored in the memory.
12. A system for implementing an augmented reality picture, comprising:
the cloud server of any one of claims 10-11; and
a terminal configured to upload real-scene information of a real-scene object contained in a captured real-scene picture, and to superimpose the real-scene picture and the virtual-scene picture to form an augmented reality picture.
13. A computer-readable storage medium having computer program instructions stored thereon, wherein the instructions, when executed by a processor, implement the method of any one of claims 1-9.
Application CN202010737541.8A, filed 2020-07-28 (priority date 2020-07-28), published as CN114004953A (status: pending): Method and system for implementing an augmented reality picture, and cloud server.

Priority Applications (1)

Application number: CN202010737541.8A
Priority date: 2020-07-28
Filing date: 2020-07-28
Title: Method and system for implementing an augmented reality picture, and cloud server

Publications (1)

Publication number: CN114004953A
Publication date: 2022-02-01

Family

ID: 79920396

Family Applications (1)

Application number: CN202010737541.8A (pending)
Priority date: 2020-07-28
Filing date: 2020-07-28
Title: Method and system for implementing an augmented reality picture, and cloud server

Country Status (1)

Country: CN, publication CN114004953A

Cited By (1)

(* cited by examiner, † cited by third party)

CN114900545A * (priority date 2022-05-10, published 2022-08-12, China Telecom Corp Ltd): Augmented reality implementation method and system and cloud server


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination