CN112714305A - Presentation method, presentation device, presentation equipment and computer-readable storage medium - Google Patents


Info

Publication number
CN112714305A
CN112714305A (application CN202011566877.9A)
Authority
CN
China
Prior art keywords
special effect
performance
virtual special
virtual
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011566877.9A
Other languages
Chinese (zh)
Inventor
马辉
刘畅
程松
栾青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority claimed from application CN202011566877.9A
Publication of CN112714305A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/363Image reproducers using image projection screens
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09FDISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F9/00Indicating arrangements for variable information in which the information is built-up on a support by selection or combination of individual elements
    • G09F9/30Indicating arrangements for variable information in which the information is built-up on a support by selection or combination of individual elements in which the desired character or characters are formed by combining individual elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the disclosure provide a display method, apparatus, device, and computer-readable storage medium. The method includes: acquiring a real scene image of a performance site, where a display device is arranged between a performance area and a viewing area in the performance site; identifying scene information of the performance site based on the real scene image; determining, based on the scene information, at least one piece of virtual special effect data matched with the scene information; and displaying, through a transparent display screen of the display device, an augmented reality effect in which a virtual special effect image rendered from the at least one piece of virtual special effect data is superimposed on the real scene image. The disclosure thereby automates the augmented reality (AR) presentation of a performance and improves the display effect.

Description

Presentation method, presentation device, presentation equipment and computer-readable storage medium
Technical Field
The present disclosure relates to image processing technologies, and in particular, to a display method, apparatus, device, and computer-readable storage medium.
Background
At present, staging an artistic performance generally requires building a stage and related facilities such as lighting, a backdrop, and scenery in advance; to achieve the intended effect, staff must adjust the stage lighting, backdrop, scenery, and so on in real time according to the actual progress of the performance, so the degree of automation is poor.
Disclosure of Invention
The embodiments of the disclosure provide a display method, a display apparatus, a display device, and a computer-readable storage medium, which realize automatic augmented reality (AR) presentation of a performance and improve the display effect.
The technical scheme of the disclosure is realized as follows:
the embodiment of the disclosure provides a display method, which includes:
acquiring a real scene image of a performance site, where a display device is arranged between a performance area and a viewing area in the performance site; identifying scene information of the performance site based on the real scene image; determining, based on the scene information of the performance site, at least one piece of virtual special effect data matched with the scene information; and displaying, through a transparent display screen of the display device, an augmented reality effect in which a virtual special effect image rendered from the at least one piece of virtual special effect data is superimposed on the real scene image.
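The four claimed steps can be sketched as one iteration of a display loop. This is a hypothetical illustration only: the names (`Camera`, `identify_scene`, `match_effects`, `render`) and the dict-based image representation are stand-ins invented for this example, not APIs defined in the disclosure.

```python
class Camera:
    """Stand-in image acquisition device facing the performance area."""
    def capture(self):
        return {"pixels": "...", "timestamp": 0}  # placeholder real scene image

def identify_scene(image):
    # Step 2: derive scene information (here a fixed example) from the image.
    return {"performance_type": "dance", "performer_pose": "spin"}

def match_effects(scene_info, effect_library):
    # Step 3: look up virtual special effect data matching the scene information.
    return [effect_library.get(scene_info["performance_type"], "default_effect")]

def render(effects, image):
    # Step 4: render the effect data into a virtual special effect image and
    # superimpose it on the real scene image (both represented here as dicts).
    return {"background": image, "overlay": effects}

effect_library = {"dance": "ribbon_particles", "magic": "sparkle_burst"}
frame = Camera().capture()          # step 1: acquire real scene image
scene = identify_scene(frame)
overlay = render(match_effects(scene, effect_library), frame)
print(overlay["overlay"])           # → ['ribbon_particles']
```

In a real system the loop would run per video frame, with the overlay pushed to the transparent display screen.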
In the above method, the scene information includes at least one of the following information: attribute information of the current performance content in the performance site; attribute information of a performance object located in a performance area; attribute information of a viewing object located in the viewing zone.
In the above method, the acquiring the real scene image of the performance site includes: acquiring, by a first image acquisition device, a first real scene image corresponding to the performance area; and/or acquiring, by a second image acquisition device, a second real scene image corresponding to the viewing area; and taking the first real scene image and/or the second real scene image as the real scene image.
In the above method, the determining, based on scene information of the performance site, at least one piece of virtual special effect data matched with the scene information includes at least one of: determining, based on the attribute information of the current performance content, first animation special effect data matched with that attribute information, the first animation special effect data being used as virtual special effect data; determining, based on the attribute information of the current performance object, second animation special effect data matched with that attribute information, the second animation special effect data being used as virtual special effect data; and determining, based on the attribute information of the viewing object, third animation special effect data matched with that attribute information, the third animation special effect data being used as virtual special effect data.
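The three matching branches above can be illustrated with simple lookup tables. The tables, keys, and effect names below are invented examples; the disclosure only specifies that each kind of attribute information is matched to its own animation special effect data.

```python
# Hypothetical mapping tables: one per attribute-information branch.
CONTENT_EFFECTS = {"singing": "lyric_subtitles", "dance": "stage_lighting_wave"}
PERFORMER_EFFECTS = {"spin": "swirl_trail", "jump": "burst_of_stars"}
VIEWER_EFFECTS = {"applause": "confetti_rain", "smile": "floating_hearts"}

def match_virtual_effects(scene_info):
    effects = []
    if "content_type" in scene_info:      # first animation special effect data
        effects.append(CONTENT_EFFECTS.get(scene_info["content_type"]))
    if "performer_pose" in scene_info:    # second animation special effect data
        effects.append(PERFORMER_EFFECTS.get(scene_info["performer_pose"]))
    if "viewer_action" in scene_info:     # third animation special effect data
        effects.append(VIEWER_EFFECTS.get(scene_info["viewer_action"]))
    return [e for e in effects if e is not None]

print(match_virtual_effects({"content_type": "dance", "performer_pose": "spin"}))
# → ['stage_lighting_wave', 'swirl_trail']
```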
In the above method, the displaying, through the transparent display screen of the display device, an augmented reality effect in which a virtual special effect image rendered from the at least one piece of virtual special effect data is superimposed on the real scene image includes: determining target virtual special effect data according to the at least one piece of virtual special effect data; rendering according to the target virtual special effect data to obtain the virtual special effect image; and displaying, through the transparent display screen of the display device, the augmented reality effect in which the virtual special effect image is superimposed on the real scene image.
In the above method, when there are at least two pieces of the virtual special effect data, determining the target virtual special effect data according to the at least one piece of virtual special effect data includes: determining, according to the priority of each piece of virtual special effect data, the virtual special effect data with the highest priority as the target virtual special effect data.
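A minimal sketch of the priority rule described above: when two or more pieces of virtual special effect data match, the highest-priority one becomes the target. The `(priority, name)` tuple representation is an assumption made for this example, not a format given in the disclosure.

```python
def select_target_effect(candidates):
    """candidates: list of (priority, effect_name); higher number = higher priority."""
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[0])[1]

print(select_target_effect([(1, "ambient_glow"), (3, "fireworks"), (2, "confetti")]))
# → fireworks
```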
In the above method, the rendering according to the target virtual special effect data to obtain a virtual special effect image includes: determining the display position of the virtual special effect image according to the real-time position of the performance object in the first real scene image; and rendering the target virtual special effect data in real time at the display position of the virtual special effect image to obtain the virtual special effect image.
In the above method, when there are at least two pieces of the virtual special effect data, the rendering based on the at least one piece of virtual special effect data to obtain a virtual special effect image includes: rendering each piece of the virtual special effect data to obtain at least two sub virtual special effect images; and superimposing and rendering, in real time, the at least two sub virtual special effect images at their corresponding display positions according to the real-time position of the performance object in the first real scene image, to obtain the virtual special effect image.
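The superposition of sub virtual special effect images at positions tied to the performer can be sketched as follows. This is a deliberately simplified illustration — images are modeled as 2-D character grids rather than pixel buffers, and all names are invented; a real implementation would alpha-blend textures on a GPU.

```python
def blank_canvas(w, h):
    # A w×h "image" of background characters standing in for transparent pixels.
    return [[" "] * w for _ in range(h)]

def superimpose(canvas, sub_image, x, y):
    """Paste sub_image onto canvas with its top-left corner at (x, y), clipping at edges."""
    for dy, row in enumerate(sub_image):
        for dx, ch in enumerate(row):
            if 0 <= y + dy < len(canvas) and 0 <= x + dx < len(canvas[0]):
                canvas[y + dy][x + dx] = ch

performer_x, performer_y = 3, 1        # real-time position taken from the first real scene image
canvas = blank_canvas(10, 4)
superimpose(canvas, ["**", "**"], performer_x, performer_y)      # sub effect 1 at the performer
superimpose(canvas, ["~~~"], performer_x - 1, performer_y + 1)   # sub effect 2 trailing below
print("\n".join("".join(r) for r in canvas))
```

Re-running the two `superimpose` calls each frame with updated coordinates gives the real-time tracking behavior the claim describes.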
In the above method, the virtual special effect image includes at least one of: a virtual light effect graph; a virtual animation effect graph; a virtual auxiliary performance effect graph; a virtual scene effect graph.
In the method, the attribute information of the current show content includes at least one of: the performance duration of the current performance content; a performance time node of the current performance content; the type of content of the current show.
In the method, the attribute information of the show object includes at least one of:
the pose of the show object; the category of the show object; the position of the show object.
In the above method, the attribute information of the viewing object includes at least one of:
an expression of the viewing object; an action of the viewing object; a category of the viewing object.
The disclosed embodiment provides a display device, including:
the acquisition module is configured to acquire a real scene image of a performance site, where a display device is arranged between a performance area and a viewing area in the performance site;
the identification module is used for identifying scene information of the performance site based on the real scene image;
the determining module is used for determining at least one piece of virtual special effect data matched with the scene information based on the scene information of the performance field;
and the display module is configured to display, through the transparent display screen of the display device, an augmented reality effect in which the virtual special effect image rendered from the at least one piece of virtual special effect data is superimposed on the real scene image.
In the above apparatus, the scene information includes at least one of the following information: attribute information of the current performance content in the performance site; attribute information of a performance object located in a performance area; attribute information of a viewing object located in the viewing zone.
In the device, the acquisition module is further configured to acquire, by a first image acquisition device, a first real scene image corresponding to the performance area; and/or acquire, by a second image acquisition device, a second real scene image corresponding to the viewing area; and take the first real scene image and/or the second real scene image as the real scene image.
In the device, the determining module is further configured to determine, based on the attribute information of the current performance content, first animation special effect data matched with that attribute information, the first animation special effect data being used as virtual special effect data; determine, based on the attribute information of the current performance object, second animation special effect data matched with that attribute information, the second animation special effect data being used as virtual special effect data; and determine, based on the attribute information of the viewing object, third animation special effect data matched with that attribute information, the third animation special effect data being used as virtual special effect data.
In the above apparatus, the display module is further configured to determine target virtual special effect data according to the at least one piece of virtual special effect data; render according to the target virtual special effect data to obtain the virtual special effect image; and display, through the transparent display screen of the display device, the augmented reality effect in which the virtual special effect image is superimposed on the real scene image.
In the above apparatus, the display module is further configured to determine, according to the priority of the at least one virtual special effect data, that the virtual special effect data with the highest priority is the target virtual special effect data.
In the device, the display module is further configured to determine a display position of the virtual special effect image according to a real-time position of the performance object in the first real scene image; and rendering the target virtual special effect data in real time at the display position of the virtual special effect image to obtain the virtual special effect image.
In the above apparatus, the display module is further configured to render each piece of the at least one piece of virtual special effect data to obtain at least two sub virtual special effect images; and superimpose and render, in real time, the at least two sub virtual special effect images at their corresponding display positions according to the real-time position of the performance object in the first real scene image, to obtain the virtual special effect image.
In the above apparatus, the virtual special effect image includes at least one of:
a virtual light effect graph; a virtual animation effect graph; a virtual auxiliary performance effect graph; a virtual scene effect graph.
In the apparatus, the attribute information of the current show content includes at least one of:
the performance duration of the current performance content; a performance time node of the current performance content; the type of content of the current show.
In the apparatus, the attribute information of the show object includes at least one of:
the pose of the show object; the category of the show object; the position of the show object.
In the above apparatus, the attribute information of the viewing object includes at least one of:
an expression of the viewing object; an action of the viewing object; a category of the viewing object.
An embodiment of the present disclosure provides a display device, including:
the transparent display screen is configured to display an augmented reality effect in which the virtual special effect image is superimposed on the real scene image;
a memory for storing a computer program;
and a processor configured to, when executing the computer program stored in the memory, implement the display method described above in combination with the transparent display screen.
The embodiment of the disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the display method described above.
The embodiment of the disclosure has the following beneficial effects:
the display device can determine a matched virtual special effect image from a real scene image of the performance site and display that image on the transparent display screen. An audience member therefore sees, through the transparent display screen, both the real scene of the performance site and the virtual special effect image displayed on the screen; that is, the audience sees an augmented reality (AR) effect in which the real scene image and the virtual special effect image are superimposed. This realizes automatic AR presentation and improves the display effect.
Drawings
FIG. 1 is a schematic structural diagram of an alternative display system architecture provided by an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an optional application scenario provided by the embodiment of the present disclosure;
FIG. 3 is a flow chart of an alternative presentation method provided by the embodiments of the present disclosure;
fig. 4 is a schematic diagram of an optional application scenario provided by the embodiment of the present disclosure;
FIG. 5 is a flow chart of an alternative presentation method provided by the embodiments of the present disclosure;
fig. 6 is a schematic diagram of an alternative method for acquiring an image of a real scene according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of an alternative method for acquiring an image of a real scene according to an embodiment of the present disclosure;
FIG. 8 is a flow chart of an alternative presentation method provided by the embodiments of the present disclosure;
FIG. 9 is a flow chart of an alternative presentation method provided by the embodiments of the present disclosure;
FIG. 10 is a flow chart of an alternative presentation method provided by the embodiments of the present disclosure;
fig. 11 is a schematic structural diagram illustrating a display device according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a display apparatus according to an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present disclosure; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
Augmented Reality (AR) technology fuses virtual information with the real world: through an AR device, a user can view virtual objects superimposed on a real scene, for example a virtual tree superimposed on a real campus playground, or a virtual bird flying in the sky. A key problem is how to fuse such virtual objects, like the virtual tree and the virtual bird, naturally with the real scene, so that the virtual objects are convincingly presented in the augmented reality scene.
The display method provided by the embodiments of the present disclosure is applied to a display device; an exemplary application of the display device is described below. The display device may be implemented as any of various terminals having a display screen, such as AR glasses, a notebook computer, a tablet computer, a desktop computer, a set-top box, a transparent display screen, or a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, or a portable game device). In the embodiments of the disclosure, the display device includes a transparent display screen, through which the real scene behind the screen can be seen. The transparent display screen may be implemented as tiled OLED transparent panels, so that display screens of different sizes can be assembled to fit stages of different sizes.
An exemplary application in which the presentation device is implemented as a terminal is explained next. When implemented as a terminal, the display device can acquire a real scene image of a performance site, where the display device is arranged between the performance area and the viewing area; identify scene information of the performance site based on the real scene image; determine at least one piece of virtual special effect data matched with the scene information; and display, through its transparent display screen, an augmented reality effect in which a virtual special effect image rendered from the at least one piece of virtual special effect data is superimposed on the real scene image. The terminal can also interact with a cloud server to obtain virtual special effect data pre-stored there. In the following display scenario, the display system is explained with the terminal acquiring virtual special effect data interactively from the server, taking the presentation of an augmented reality (AR) image as an example.
Referring to fig. 1, fig. 1 is an alternative architecture diagram of a display system 100 provided by the embodiment of the present disclosure, in order to support a display application, a terminal 400 is connected to a server 200 through a network 300, and the network 300 may be a wide area network or a local area network, or a combination of the two. In a real stage presentation, the terminal 400 may be a presentation device with a camera; the terminal 400 is configured to acquire a real scene image through a camera; acquiring a real scene image of a performance site, wherein display equipment is arranged between a performance area and a watching area in the performance site; identifying scene information of a performance site based on the real scene image; determining at least one piece of virtual special effect data matched with the scene information based on the scene information of the performance site; displaying an augmented reality effect in which a virtual special effect image rendered from at least one piece of virtual special effect data is superimposed with a real scene image on a graphical interface 410 of the terminal 400; the graphical interface 410 may be an OLED transparent display screen of the terminal.
For example, when the terminal 400 is implemented as a mobile phone, a preset display application on the phone may be started, which calls the camera to obtain a real scene image of the performance site, where a display device is disposed between the performance area and the viewing area. The terminal identifies scene information of the performance site based on the real scene image and sends a data request to the server 200 based on that scene information; after receiving the request, the server 200 determines at least one corresponding piece of virtual special effect data from a pre-populated database 500 and returns it to the terminal 400. After obtaining the at least one piece of virtual special effect data fed back by the server, the terminal 400 renders it to obtain a virtual special effect image, and displays on its graphical interface 410 an augmented reality effect in which the virtual special effect image and the real scene image are superimposed.
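The terminal-to-server exchange above can be sketched as a request/response pair. The payload fields and response shape here are invented for illustration; the disclosure only states that scene information is sent and virtual special effect data is returned.

```python
import json

def build_effect_request(scene_info):
    """Package the identified scene information as a JSON request body."""
    return json.dumps({"scene": scene_info, "want": "virtual_special_effect_data"})

def parse_effect_response(body):
    """Extract the list of virtual special effect data from a (mocked) server reply."""
    return json.loads(body).get("effects", [])

req = build_effect_request({"content_type": "magic", "time_node": "0:25"})
mock_response = '{"effects": ["sparkle_burst"]}'  # stand-in for server 200's reply
print(parse_effect_response(mock_response))
# → ['sparkle_burst']
```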
In some embodiments, the server 200 may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present disclosure is not limited thereto.
Fig. 2 is a schematic diagram of an optional application scene provided in the embodiment of the present disclosure, and as shown in fig. 2, the display device includes a transparent display screen 201, where the transparent display screen 201 may be disposed between a stage and an audience, and a rear camera of the transparent display screen is used to capture an image of a performance scene on the stage; the audience can see the performance scene through the transparent display screen 201, and meanwhile, the display equipment displays the virtual special effect image matched with the performance scene on the transparent display screen 201; the virtual special effect image may include a virtual character, a virtual scene, virtual lighting, and the like. In other embodiments, the transparent display screen is further provided with a front camera, the viewing scene of the audience is shot through the front camera, and the display device can also display the virtual special effect image matched with the viewing scene on the transparent display screen 201, so that the augmented reality AR effect of superimposing the virtual special effect image and the performance scene is realized, and a rich stage effect is presented.
A terminal device may refer to a terminal, an access terminal device, a subscriber unit, a subscriber station, a mobile station, a remote terminal device, a mobile device, a User Equipment (UE), a wireless communication device, a User agent, or a User Equipment. The terminal device may be a server, a mobile phone, a tablet computer, a laptop computer, a palmtop computer, a personal digital assistant, a portable media player, an intelligent sound box, a navigation device, a display device, a wearable device such as an intelligent bracelet, a Virtual Reality (VR) device, an Augmented Reality (AR) device, a pedometer, a digital TV, a desktop computer, or the like.
An embodiment of the present disclosure provides a display method, and referring to fig. 3, fig. 3 is a flowchart of an alternative display method provided by the embodiment of the present disclosure, which will be described with reference to the steps shown in fig. 3.
S101, acquiring a real scene image of a performance site, wherein display equipment is arranged between a performance area and a watching area in the performance site.
In the disclosed embodiment, the display device is arranged between the performance area and the viewing area; the audience in the viewing area can see, through the display device, the display effect corresponding to the scene of the performance site.
In the embodiment of the disclosure, the display device can acquire the real scene image of the performance field through the image acquisition device; the image acquisition device can be a camera device, and the real scene image is shot through the camera device.
In the embodiment of the disclosure, the camera device may be a camera device carried by the display apparatus; the display equipment acquires a real scene image acquired by the camera device through communication connection; the embodiments of the present disclosure are not limited with respect to the settings of the image pickup apparatus.
In the embodiment of the disclosure, the display device may acquire a real scene image through one camera device, or may acquire a real scene image through a plurality of camera devices, so as to acquire a larger scene image of a real scene; the number of the image capturing devices may be set as needed, and the embodiments of the present disclosure are not limited thereto.
In the embodiment of the present disclosure, the transparent display screen may be a complete display screen of a preset size; the display screen can also be formed by splicing small-size display screens; the disclosed embodiments are not limited in this respect.
And S102, identifying scene information of the performance site based on the real scene image.
In the embodiment of the disclosure, after the display device acquires the real scene image of the performance site, the scene information of the performance site can be acquired from the real scene image.
Wherein the scene information may include at least one of the following information: attribute information of current performance content in the performance site; attribute information of a performance object located in a performance area; attribute information of a viewing object located in the viewing zone.
In the embodiment of the present disclosure, the attribute information of the current performance content may be at least one of a performance duration of the current performance content, a performance time node of the current performance content, and a type of the current performance content.
The time node of the current performance content may be the actual time point at the performance site, or the time node of the current content within the complete show; the disclosed embodiments are not limited in this respect.
It should be noted that the display device acquires a real scene image of the performance site in real time; the presentation apparatus may use the actual time point of each acquired real scene image as a time node of the current performance content.
Illustratively, if the actual time at which the display device acquires the real scene image is 10:00, the display device may use 10:00 as the time node of the current performance content. Alternatively, the display device may take the start time of the show as time zero, so that a real scene image acquired 25 seconds into the show corresponds to the time node 0:25.
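The two time-node conventions in the example above can be computed as follows. The helper names are invented for this sketch; only the "10:00" and "0:25" conventions come from the text.

```python
from datetime import datetime, timedelta

def wall_clock_node(now):
    # Convention 1: use the actual acquisition time as the time node.
    return now.strftime("%H:%M")

def elapsed_node(show_start, now):
    # Convention 2: measure from the start of the show, formatted as M:SS.
    seconds = int((now - show_start).total_seconds())
    return f"{seconds // 60}:{seconds % 60:02d}"

start = datetime(2020, 12, 25, 10, 0, 0)
print(wall_clock_node(start))                              # → 10:00
print(elapsed_node(start, start + timedelta(seconds=25)))  # → 0:25
```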
In the embodiment of the disclosure, each corresponding performance content in a normal performance can be preset, and thus, after the display device obtains the real scene image corresponding to the current performance area, the display device can identify the current performance content, and further determine the attribute information of the current performance content.
For example, the presentation device may identify the content of the current performance according to the performance object in the scene of the current performance, and the position, the motion, the expression and the like of the performance object; then, the display device may determine attribute information corresponding to the current performance content according to preset performance contents.
In an embodiment of the present disclosure, the types of the current performance content may include: comedy sketches, dance, singing, magic, and the like; the type of the current performance content may also be the performance style of the current performance content, such as: fresh style, classical style, modern fashion style, cartoon style, and the like, to which the disclosed embodiments are not limited.
In an embodiment of the present disclosure, the attribute information of the performance object in the performance area may include at least one of: the posture of the performance object, the category of the performance object, and the position of the performance object.
In the embodiment of the present disclosure, the performance object may be a performance character or a performance prop, which is not limited in the embodiment of the present disclosure.
In this embodiment of the present disclosure, when the performance object is a performance character, the posture of the performance object may include: the limb actions of the performance character and the expression of the performance character; the category of the performance object may include: the age of the performance character, the identity of the performance character, and the gender of the performance character.
In this embodiment of the disclosure, when the performance object is a performance prop, the posture of the performance object may include: the height and width of the performance prop; the category of the performance object may be a function type, a color type, a material type, or the like, which is not limited in this disclosure.
In an embodiment of the present disclosure, the attribute information of the viewing object of the viewing zone may include at least one of: an expression of the viewing object, an action of the viewing object, a category of the viewing object.
Wherein the category of the viewing object may include at least one of: age of the viewing object, gender of the viewing object, etc., to which the disclosed embodiments are not limited.
In some embodiments of the present disclosure, the age of the viewing object is divided into stages such as child, youth, middle-aged, and elderly; after the display device identifies the viewing object, the age stage of the viewing object can be determined.
When there are two or more viewing objects, the display device determines, among the identified age stages of the viewing objects, the age stage with the largest number of viewers as the age of the viewing object.
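The majority rule above amounts to a most-common vote over the identified age stages; a sketch with illustrative stage labels:

```python
from collections import Counter

AGE_STAGES = ("child", "youth", "middle-aged", "elderly")  # illustrative labels

def audience_age(identified_stages):
    """Determine the audience age as the age stage with the most viewers."""
    stage, _count = Counter(identified_stages).most_common(1)[0]
    return stage
```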
In the embodiment of the disclosure, after acquiring the real scene image, the display device may identify at least one piece of scene information from the real scene image.
S103, determining at least one piece of virtual special effect data matched with the scene information based on the scene information of the performance site.
In the embodiment of the disclosure, after acquiring scene information of a performance site, a display device may determine at least one piece of virtual special effect data corresponding to the scene information.
In some embodiments of the present disclosure, different scene information may correspond to different virtual special effect data; after the display device identifies the at least one scene information, at least one virtual special effect data may be determined.
In an embodiment of the disclosure, each of the at least one virtual special effect data may include: at least one of virtual lighting data, virtual auxiliary performance data, virtual animation data, and virtual set data.
Illustratively, the scene information includes: the action of the performance character, the expression of the performance character, and the time node of the current performance content; the action of the performance character corresponds to the virtual special effect data 1, the expression of the performance character corresponds to the virtual special effect data 2, and the time node of the current performance content corresponds to the virtual special effect data 3; the virtual special effect data 1 comprises first auxiliary performance data, the virtual special effect data 2 comprises virtual lighting data and second auxiliary performance data, and the virtual special effect data 3 comprises virtual animation data.
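A correspondence like the one in this example can be held as a lookup table; the keys and effect names below are hypothetical stand-ins for the recognized scene information and its matched virtual special effect data:

```python
# Hypothetical mapping from recognized scene information to the virtual
# special effect data that matches it; every name here is illustrative.
EFFECT_TABLE = {
    ("performer_action", "martial_arts"): {"aux_performance": "first_aux_data"},
    ("performer_expression", "smile"): {"lighting": "warm",
                                        "aux_performance": "second_aux_data"},
    ("time_node", "10:00"): {"animation": "opening_animation"},
}

def match_effects(scene_info):
    """Return one piece of virtual special effect data per matched item."""
    return [EFFECT_TABLE[item] for item in scene_info if item in EFFECT_TABLE]
```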
In the embodiment of the present disclosure, the virtual light data includes the brightness of the virtual light, the illumination area of the virtual light, the color of the virtual light, the illumination angle of the virtual light, and the like, and the virtual light data may be set as needed, which is not limited in the embodiment of the present disclosure.
Illustratively, the scene information is the action of a performance character: when the face of the performance character faces straight ahead, the illumination angle of the corresponding virtual light is 45 degrees in the lateral direction; when the performance character makes a jumping action, the corresponding virtual light brightness is the maximum brightness.
In the embodiment of the present disclosure, the virtual auxiliary performance data includes virtual character data, virtual animal data, and cartoon image data.
In some embodiments of the present disclosure, when the virtual auxiliary performance data is virtual character data, the method may include: motion data of the virtual character, expression data of the virtual character, makeup data of the virtual character, and the like.
Illustratively, the scene information is the expression of a performance character; when the expression of the performance character is neutral, the action data of the virtual character is preset dance action data; when the expression of the performance character is crying, the corresponding action data of the virtual character is action data of the virtual character hugging the real character.
In the embodiment of the present disclosure, the virtual animation data is preset animation data, and the preset animation data includes an identifier of a preset animation, a preset animation start time, a preset animation end time, and the like.
Illustratively, the scene information is a time point of the content of the current performance; when the time point is the scene time 10:00, the corresponding virtual special effect data is first preset animation data; when the time point is the scene time 10:15, the corresponding virtual special effect data is second preset animation data.
In the embodiment of the present disclosure, the virtual setting data includes virtual weather data, a position of a virtual article, an attribute of the virtual article, virtual expression data, and the like, and the virtual setting data may be set as needed, which is not limited in the embodiment of the present disclosure.
S104, displaying, through a transparent display screen of the display device, an augmented reality effect in which a virtual special effect image rendered from the at least one piece of virtual special effect data is superimposed on the real scene image.
In the embodiment of the present disclosure, after obtaining at least one piece of virtual special effect data, the display device performs rendering based on the at least one piece of virtual special effect data, so as to obtain a virtual special effect image.
In an embodiment of the disclosure, the virtual special effect image comprises at least one of: a virtual lighting effect image, a virtual animation effect image, a virtual auxiliary performance effect image, and a virtual set effect image.
In the embodiment of the present disclosure, when the virtual special effect data includes virtual lighting data, a virtual lighting effect image can be obtained after rendering.
In the embodiment of the present disclosure, when the virtual special effect data includes virtual animation data, a virtual animation effect image can be obtained after rendering.
In the embodiment of the present disclosure, when the virtual special effect data includes virtual auxiliary performance data, a virtual auxiliary performance effect image can be obtained after rendering.
In the embodiment of the present disclosure, when the virtual special effect data includes virtual set data, a virtual set effect image can be obtained after rendering.
Illustratively, the at least one piece of virtual special effect data comprises virtual special effect data 1 and virtual special effect data 2; the virtual special effect data 1 comprises virtual auxiliary performance data and first virtual lighting data; the virtual special effect data 2 comprises virtual animation data and second virtual lighting data; the virtual special effect image obtained by the display device rendering the virtual special effect data 1 comprises a virtual auxiliary performance effect image and a first virtual lighting effect image; the virtual special effect image obtained by the display device rendering the virtual special effect data 2 comprises a virtual animation effect image and a second virtual lighting effect image.
In the embodiment of the present disclosure, the display device may render each virtual special effect data of the at least one virtual special effect data to obtain a virtual special effect image; or selecting part of virtual special effect data from at least one piece of virtual special effect data according to a preset rule to render to obtain a virtual special effect image; the disclosed embodiments are not limited in this respect.
In the embodiment of the disclosure, after the display device obtains the virtual special effect image, the virtual special effect image and the real scene image are superimposed and displayed on the transparent display screen of the display device.
Illustratively, as shown in fig. 4, an actor 202 is giving a martial arts performance on a stage behind a transparent display screen 201; the display device determines, according to the action of the actor 202, that the virtual special effect data is virtual character data; after rendering the virtual character data, a virtual character 2011 performing martial arts moves against the actor 202 is obtained, and the stage effect of the actor 202 and the virtual character 2011 fighting each other is displayed on the transparent display screen 201.
In some embodiments of the present disclosure, the display screen of the display device may be a tiled OLED transparent display screen. By tiling OLED transparent display panels, a transparent display screen of a target size can be formed, which increases the flexibility of setting the display screen size.
It should be noted that the real scene image is acquired in real time, and the scene information identified from the real scene image is real-time scene information, so that the virtual special effect image determined based on the scene information matches the real scene image in real time.
In the embodiment of the disclosure, the display device can determine the matched virtual special effect image according to the real scene image of the performance site and display the virtual special effect image on the transparent display screen; in this way, an audience watching the performance sees, through the transparent display screen, both the real scene of the performance site and the virtual special effect image displayed on the screen, that is, an augmented reality (AR) effect in which the real scene image and the virtual special effect image are superimposed, thereby realizing automatic AR presentation and improving the display effect.
In some embodiments of the present disclosure, the obtaining of the real scene image of the performance site in S101, as shown in fig. 5, may include: S201-S202.
S201, acquiring a first real scene image corresponding to a performance area through a first image acquisition device; and/or acquiring a second real scene image corresponding to the watching area by a second image acquisition device.
In an embodiment of the present disclosure, the display device includes a first image acquisition device and a second image acquisition device; the display device can acquire a real scene image of the performance area through the first image acquisition device, that is, capture the real performance scene as a first real scene image; and/or acquire a real scene image of the viewing area through the second image acquisition device, that is, capture the real viewing scene as a second real scene image.
For example, fig. 6 is a schematic diagram of an alternative method for capturing an image of a real scene; as shown in fig. 6, a display device implemented as a transparent display screen 201 is placed between the viewing area and the performance area, a first image acquisition device configured for the transparent display screen 201 is a rear camera 2021, the performance area is shot by the rear camera 2021, and a first real scene image of a real performance scene of an actor performance is acquired; FIG. 7 is a schematic diagram of an alternative method for capturing an image of a real scene; as shown in fig. 7, the display apparatus is implemented as a transparent display 201, the second image capturing device configured for the transparent display 201 is a front camera 2031, and the front camera 2031 captures a viewing area to obtain a second real scene image of a real viewing scene of the performance viewed by the viewer.
S202, taking the first real scene image and/or the second real scene image as real scene images.
In the embodiment of the present disclosure, the display device may acquire a first real scene image as a real scene image, and identify scene information from the first real scene image; or, the display device may acquire a second real scene image as the real scene image, and identify scene information from the second real scene image; alternatively, the display device may acquire the first real scene image and the second real scene image together as the real scene image, and identify the scene information from the first real scene image and the second real scene image, which is not limited in the embodiment of the present disclosure.
It can be understood that the display device can acquire a plurality of real scene images through a plurality of image acquisition devices, so that the display device can acquire more scene information from the plurality of real scene images, further determine more comprehensive virtual special effect data and obtain better display effect.
In some embodiments of the present disclosure, the display device may present, on the display screen, an augmented reality AR effect in which the virtual special effect image is superimposed on the first real scene image.
It can be understood that the display device identifies at least one piece of scene information based on the first real scene image and the second real scene image, and after at least one virtual special effect image is determined, the virtual special effect image is superposed with the first real scene image in the real scene image to obtain an Augmented Reality (AR) effect; in this way, the viewer can see the AR effect on the performance on the transparent display screen; in addition, because the virtual special effect image is determined according to the first real scene image and/or the second real scene image, the interaction with actors and/or audiences is added, and the display effect is improved.
In some embodiments of the present disclosure, after acquiring the attribute information of the current performance content, the display device may determine, based on the attribute information of the current performance content, first animation special effect data that matches the attribute information of the current performance content; the first animated special effect data serves as a virtual special effect data.
In the embodiment of the disclosure, after obtaining a time node of a current performance content, a display device may determine first animation special effect data corresponding to the current time node, render the first animation special effect data to obtain a first animation, and play the first animation on a transparent display screen; in this way, the corresponding first animation can be played on each time node.
In the embodiment of the disclosure, the display device stores the corresponding relationship between the preset time node and the first animation data; the display device can play the first animation corresponding to the preset time node when the current time point reaches the preset time node.
Illustratively, the whole performance includes 10 programs, corresponding first animation data is set for each program, and the corresponding first animation is played according to the time node of the current program, so that the audience can watch the performance of actors and the superposed first animation through the transparent display screen, thereby showing a rich stage effect.
In some embodiments of the present disclosure, after acquiring the attribute information of the current performance object, the display device may determine, based on the attribute information of the current performance object, second animation special effect data that matches the attribute information of the current performance object; the second animated special effect data serves as a virtual special effect data.
It should be noted that the attribute information of different current show objects may correspond to different second animation special effect data.
In the embodiment of the disclosure, the display device stores the corresponding relationship between the attribute information of the preset performance object and the second animation data; after the display equipment identifies the attribute of the current performance object, the matched second animation special effect data can be determined, and the second animation special effect data is rendered to obtain a second animation.
Illustratively, when the display device identifies that the expression of the current performance object is crying, the second animation displayed on the transparent display screen may be a hugging expression to show comfort; when the expression of the current performance object is identified as smiling, the second animation displayed on the transparent display screen may be an animation of flowers blooming.
In some embodiments of the present disclosure, after obtaining the attribute information of the viewing object, the display device may determine, based on the attribute information of the viewing object, third animation special effect data that matches the attribute information of the viewing object; the third animated special effect data serves as a virtual special effect data.
In the embodiment of the present disclosure, the display device stores a preset corresponding relationship between attribute information of a viewing object and third animation data; after the attribute of the watching object is identified by the display equipment, the matched third animation special effect data can be determined, and the third animation special effect data is rendered to obtain a third animation.
For example, when the display device recognizes that the motion of the viewing object is clapping, the third animation displayed on the display screen may be a firework animation obtained after rendering the firework data; when the display equipment determines that the age of the watching object is children, the firework with the cartoon style can be displayed on the transparent display screen; the display device can display fashionable fireworks on the transparent display screen when determining that the age of the watching object is young.
It can be understood that the display device may determine the corresponding third animation special effect data according to the attribute information of the viewing object, and further superimpose the third animation obtained by rendering the third animation special effect data on the transparent display screen, so that interactivity with the viewing object in the performance process is increased, and the display effect is improved.
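As a toy sketch of this viewer-driven selection (the action gate, style names, and default style are assumptions, not from the disclosure):

```python
# Hypothetical firework styles keyed by the viewer's age stage.
FIREWORK_STYLES = {"child": "cartoon_fireworks", "youth": "fashion_fireworks"}

def third_effect_for_viewer(action: str, age_stage: str) -> str:
    """Pick third animation special effect data from viewer attributes:
    clapping triggers fireworks, styled by the viewer's age stage."""
    if action != "clapping":
        return "no_effect"
    return FIREWORK_STYLES.get(age_stage, "default_fireworks")
```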
In some embodiments of the present disclosure, in S104, the displaying, through a transparent display screen of a display device, an implementation of an augmented reality effect in which a virtual special effect image rendered by at least one piece of virtual special effect data is superimposed on an image of a real scene may include, as shown in fig. 8: S301-S303.
S301, determining target virtual special effect data according to at least one piece of virtual special effect data.
In the embodiment of the present disclosure, after obtaining at least one piece of virtual special effect data, the display device needs to determine target virtual special effect data according to the at least one piece of virtual special effect data.
In some embodiments of the present disclosure, the at least one virtual special effect data comprises one virtual special effect data, which the presentation apparatus may determine as the target virtual special effect data.
In some embodiments of the present disclosure, the at least one piece of virtual special effect data includes two or more pieces of virtual special effect data, and the presentation device may determine, according to a priority of the at least one piece of virtual special effect data, a piece of virtual special effect data with a highest priority as the target virtual special effect data.
In the embodiment of the present disclosure, the priorities of different virtual special effect data are different; after the display device obtains at least one piece of virtual special effect data, the virtual special effect data with the highest priority can be determined, and that virtual special effect data is determined as the target virtual special effect data.
Illustratively, the number of pieces of virtual special effect data obtained by the display device is 3, namely: first animation special effect data, second animation special effect data, and third animation special effect data; the first animation special effect data has the highest priority, the second animation special effect data has a lower priority than the first, and the third animation special effect data has a lower priority than the second; the first animation special effect data comprises light data in which the light illuminates obliquely leftward at 45 degrees; the second animation special effect data comprises light data in which the light illuminates obliquely rightward at 45 degrees; the third animation special effect data comprises firework set data; the display device may use the light data in the first animation special effect data, in which the light illuminates obliquely leftward at 45 degrees, as the target virtual special effect data.
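The selection rule of S301 can be sketched as follows, assuming each piece of virtual special effect data carries a numeric priority (smaller meaning higher, an illustrative convention):

```python
def pick_target_effect(effects):
    """Select the target virtual special effect data.

    A single candidate is used directly; otherwise the candidate with the
    highest priority wins (here, a smaller number means higher priority).
    """
    if len(effects) == 1:
        return effects[0]
    return min(effects, key=lambda e: e["priority"])
```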
S302, rendering according to the target virtual special effect data to obtain a virtual special effect image.
S303, displaying, through the transparent display screen of the display device, the augmented reality effect in which the virtual special effect image is superimposed on the first real scene image.
In the embodiment of the disclosure, after obtaining the target virtual special effect data, the display device renders according to the target virtual special effect data, so as to obtain a virtual special effect image; and overlapping and displaying the virtual special effect image and the first real scene image on the transparent display screen.
It can be understood that after obtaining a plurality of pieces of virtual special effect data, the display device may select the virtual special effect data with the highest priority from the plurality of pieces of virtual special effect data, and use the virtual special effect data with the highest priority as the target virtual special effect data; the target virtual special effect data is rendered to obtain a virtual special effect image, and the virtual special effect image and the performance scene image are displayed on the transparent display screen in an overlapping mode, so that the automation of augmented reality AR display is increased, and the display effect is improved.
In some embodiments of the present disclosure, the implementation of rendering according to the target virtual special effect data in S302 to obtain the virtual special effect image, as shown in fig. 9, may include: S401-S402.
S401, determining the display position of the virtual special effect image according to the real-time position of the performance object in the first real scene image.
In the embodiment of the disclosure, after the display device acquires the first real scene image, the feature point of the first real scene image may be matched with the three-dimensional scene model of the first real scene image, so that the model position of the performance object in the three-dimensional scene model is determined according to the display position of the performance object in the first real scene image; and then determining the real-time display position of the performance object on the display screen according to the conversion relation between the three-dimensional scene model coordinates and the display screen coordinates.
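The conversion between three-dimensional scene model coordinates and display screen coordinates can be sketched as a homogeneous transform; the 4x4 matrix below stands in for a calibration that the disclosure does not specify:

```python
def model_to_screen(point, transform):
    """Map a 3-D model coordinate to a 2-D screen coordinate using a
    4x4 homogeneous transform (perspective divide included)."""
    x, y, z = point
    v = (x, y, z, 1.0)
    # Multiply the transform matrix by the homogeneous point.
    sx, sy, _sz, w = (sum(row[i] * v[i] for i in range(4)) for row in transform)
    w = w or 1.0  # guard against a degenerate zero w
    return (sx / w, sy / w)
```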
In the embodiment of the present disclosure, the three-dimensional scene model may be established according to the collected first real scene image during the performance; or can be pre-established before the performance begins; the embodiment of the present disclosure is not limited to the establishment of a three-dimensional scene model.
In the embodiment of the present disclosure, after the real-time display position of the performance object is obtained, the display device may determine the display position of the virtual special effect image according to a position correspondence between the display position of the performance object and the display position of the virtual special effect image.
In the embodiment of the present disclosure, the position correspondence may be preset; for example, the display position of the virtual special effect image is on the left side of the display position of the show object, and the distance between the display position of the virtual special effect image and the display position of the show object is a first preset distance; after the display device acquires the display position of the performance object, the display device may determine that the display position of the virtual special effect image is on the left side of the display position of the performance object, and the distance between the virtual special effect image and the performance object is a first preset distance; therefore, when the real-time position of the performance object changes, the real-time position of the virtual special effect image also changes, so that the preset position corresponding relation between the virtual special effect image and the performance object is maintained.
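A minimal sketch of this position correspondence, assuming screen pixel coordinates and an illustrative value for the first preset distance:

```python
FIRST_PRESET_DISTANCE = 120  # pixels; illustrative value, not from the disclosure

def effect_position(performer_xy):
    """Place the virtual special effect image to the left of the performance
    object's real-time display position, at the first preset distance."""
    x, y = performer_xy
    return (x - FIRST_PRESET_DISTANCE, y)
```

Because the function is re-evaluated from the performer's real-time position each frame, the effect keeps the preset offset as the performer moves.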
In some embodiments of the present disclosure, different categories of virtual special effect images may correspond to different presentation positions. The display device can determine the display position of the virtual special effect image according to the category of the virtual special effect image.
Illustratively, when the virtual special effect image is a virtual animation effect image, such as a hugging-expression effect image, the display position of the corresponding virtual special effect image is on the right side of the display position of the performance object, and the distance between the two display positions is a second preset distance; when the virtual special effect image is a virtual set effect image, such as a virtual sky, the display position of the corresponding virtual special effect image is at a display position other than the display position of the performance object, so that the virtual sky can serve as a background without occluding the performance object.
The first preset distance and the second preset distance may be set as needed, and the embodiment of the present disclosure is not limited thereto.
In some embodiments of the present disclosure, different virtual special effect images in the same category of virtual special effect images may correspond to different presentation positions. The display device can determine different display positions for different virtual special effect images.
Illustratively, the virtual special effect image is a virtual set effect image; if the virtual set effect image is a virtual sky, the display position of the virtual special effect image is at a display position other than the display position of the performance object and occupies the upper third of the transparent display screen; in this way, the virtual sky can be displayed at the top of the transparent display screen without occluding the performance object; if the virtual set effect image is a virtual sea, the display position of the virtual special effect image is at a display position other than the display position of the performance object and occupies the lower third of the transparent display screen; in this way, the virtual sea can be displayed at the bottom of the transparent display screen without occluding the performance object.
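The per-image layout in this example can be expressed as fixed screen regions; the screen dimensions and the (x, y, w, h) convention below are assumptions:

```python
def set_effect_region(kind, screen_w, screen_h):
    """Return the (x, y, w, h) region for a virtual set effect image:
    the sky takes the upper third of the screen, the sea the lower third."""
    third = screen_h // 3
    if kind == "sky":
        return (0, 0, screen_w, third)
    if kind == "sea":
        return (0, screen_h - third, screen_w, third)
    raise ValueError(f"unknown set effect: {kind}")
```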
In the embodiment of the present disclosure, the real-time display position of the performance object on the display screen may change along with the motion of the performance object, and then the display position of the virtual special effect image may change along with the real-time display position of the performance object.
S402, rendering the target virtual special effect data in real time at the display position of the virtual special effect image to obtain the virtual special effect image.
In the embodiment of the disclosure, the display device renders the target virtual special effect data in real time at the display position of the virtual special effect image to obtain the virtual special effect image.
In some embodiments of the disclosure, when the number of pieces of the at least one piece of virtual special effect data is two or more, the implementation of rendering based on the at least one piece of virtual special effect data in S104 to obtain the virtual special effect image, as shown in fig. 10, may include: S501-S502.
S501, rendering each virtual special effect data of at least one piece of virtual special effect data to obtain at least two sub virtual special effect images.
In this disclosure, if the number of the at least one piece of virtual special effect data is two or more, the display device may render each piece of virtual special effect data in the at least one piece of virtual special effect data to obtain at least two corresponding sub-virtual special effect images.
S502, superimposing and rendering the at least two sub-virtual special effect images in real time at the corresponding display positions according to the real-time position of the performance object in the first real scene image, to obtain the virtual special effect image.
In the embodiment of the disclosure, the display device determines at least two sub-display positions corresponding to at least two sub-virtual special effect images according to the real-time position of the performance object; and performing real-time superposition and real-time rendering on at least two sub-virtual special effect images at corresponding sub-display positions to obtain virtual special effect images.
In the embodiment of the present disclosure, in the position correspondence, the display position of the performance object may correspond to the display positions of a plurality of virtual special effect images; in this way, in the case where the number of pieces of the at least one piece of virtual special effect data is two or more, the display device may select at least two sub-display positions from the display positions of the plurality of virtual special effect images.
In some embodiments of the disclosure, the display device may arbitrarily select at least two display positions from the display positions of the plurality of virtual special effect images as the at least two sub-display positions.
In some embodiments of the present disclosure, the display device may determine the at least two corresponding sub-display positions from the display positions of the plurality of virtual special effect images according to the correspondence between each of those display positions and a sub-virtual special effect image.
Illustratively, the display positions of the plurality of virtual special effect images include display position 1, display position 2, display position 3 and display position 4. Display position 1 and display position 2 correspond to the virtual animation effect graph; display position 3 corresponds to the virtual auxiliary performance effect graph; display position 4 corresponds to the virtual scenery effect graph. Display position 1 is 45 degrees to the upper left of the display position of the performance object and 20 centimeters away from the performance object; display position 2 is 45 degrees to the upper right of the display position of the performance object and 20 centimeters away from the performance object; display position 3 is to the right of the display position of the performance object and 3 meters away from the performance object; display position 4 is at a position other than the display position of the performance object. In this way, in the case where the obtained at least two sub-virtual images include a virtual sky and an animated expression, the display device determines display position 4 of the virtual scenery effect graph as the display position of the virtual sky, and selects one of display position 1 and display position 2 of the virtual animation effect graph as the display position of the animated expression; the display device may thus determine that the two sub-display positions are, for example, display position 1 and display position 4.
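The selection rule in the example above can be sketched as a lookup from effect category to candidate positions. This is a hypothetical illustration; the category names, position labels, and first-free assignment policy are assumptions, not the patented scheme.

```python
# Hypothetical sketch: candidate display positions keyed by effect category,
# mirroring the example (positions 1-2: animation, 3: performance, 4: scenery).
CANDIDATES = {
    "animation": ["position_1", "position_2"],   # upper-left / upper-right of performer
    "performance": ["position_3"],               # to the right of the performer
    "scenery": ["position_4"],                   # away from the performer
}


def pick_positions(sub_effects):
    """Assign each sub-effect image the first unused candidate position of its category."""
    used, chosen = set(), {}
    for name, category in sub_effects:
        for pos in CANDIDATES[category]:
            if pos not in used:
                used.add(pos)
                chosen[name] = pos
                break
    return chosen


chosen = pick_positions([("animated_expression", "animation"),
                         ("virtual_sky", "scenery")])
```

For the sky-plus-expression case this yields position 1 for the expression and position 4 for the sky, matching the two sub-display positions in the example.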
The embodiment of the present disclosure provides a display device, fig. 11 is an optional schematic structural diagram of the display device provided in the embodiment of the present disclosure, and as shown in fig. 11, the display device 20 includes:
an obtaining module 2001, configured to obtain a real scene image of a performance venue, where a display device is disposed between a performance area and a viewing area in the performance venue;
an identifying module 2002, configured to identify scene information of the performance venue based on the real scene image;
a determining module 2003, configured to determine, based on scene information of the performance site, at least one piece of virtual special effect data matching the scene information;
a display module 2004, configured to display, through a transparent display screen of the display device, an augmented reality effect in which a virtual special effect image rendered from the virtual special effect data is superimposed on the real scene image.
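The four modules above form an acquire-identify-match-display pipeline. The stub functions below are a hypothetical end-to-end sketch of that data flow only; every function body, field name, and return value is a placeholder assumption, not the patented implementation.

```python
# Hypothetical pipeline sketch mirroring modules 2001-2004:
# acquire a scene image, identify scene information, match effect data, display.
def acquire_image():
    # Stand-in for the obtaining module (2001).
    return {"source": "performance_area"}


def identify_scene(image):
    # Stand-in for the identifying module (2002).
    return {"content_type": "dance", "performer_pose": "spin"}


def match_effects(scene):
    # Stand-in for the determining module (2003):
    # one piece of effect data per recognized scene attribute.
    return [f"effect_for_{k}={v}" for k, v in sorted(scene.items())]


def display(image, effects):
    # Stand-in for the display module (2004): overlay effects on the real image.
    return {"base": image, "overlay": effects}


image = acquire_image()
frame = display(image, match_effects(identify_scene(image)))
```

Each stage consumes only the previous stage's output, which is why the disclosure can describe the modules independently.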
In some embodiments of the present disclosure, the scene information includes at least one of the following information: attribute information of the current performance content in the performance site; attribute information of a performance object located in the performance area; attribute information of a viewing object located in the viewing area.
In some embodiments of the present disclosure, the obtaining module 2001 is further configured to acquire, through a first image acquisition device, a first real scene image corresponding to the performance area; and/or acquire, through a second image acquisition device, a second real scene image corresponding to the viewing area; and take the first real scene image and/or the second real scene image as the real scene image.
In some embodiments of the present disclosure, the determining module 2003 is further configured to determine, based on the attribute information of the current performance content, first animation special effect data matching the attribute information of the current performance content, the first animation special effect data being used as virtual special effect data; determine, based on the attribute information of the current performance object, second animation special effect data matching the attribute information of the current performance object, the second animation special effect data being used as virtual special effect data; and determine, based on the attribute information of the viewing object, third animation special effect data matching the attribute information of the viewing object, the third animation special effect data being used as virtual special effect data.
In some embodiments of the present disclosure, the display module 2004 is further configured to determine target virtual special effect data from the at least one piece of virtual special effect data; perform rendering according to the target virtual special effect data to obtain the virtual special effect image; and display, through the transparent display screen of the display device, the augmented reality effect in which the virtual special effect image is superimposed on the real scene image.
In some embodiments of the present disclosure, the display module 2004 is further configured to determine, according to the priority of the at least one piece of virtual special effect data, the virtual special effect data with the highest priority as the target virtual special effect data.
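The priority-based selection can be sketched in one line. This is a hypothetical illustration; the candidate names and the `priority` field are assumptions introduced for the example.

```python
# Hypothetical sketch: among several matched pieces of effect data, pick the
# one with the highest priority as the target virtual special effect data.
def pick_target(effect_data):
    return max(effect_data, key=lambda e: e["priority"])


candidates = [
    {"name": "light_effect", "priority": 1},
    {"name": "animation_effect", "priority": 3},
    {"name": "scenery_effect", "priority": 2},
]
target = pick_target(candidates)
```

Only the target is rendered in this branch; the lower-priority candidates are discarded for the current frame.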
In some embodiments of the present disclosure, the presentation module 2004 is further configured to determine a presentation position of the virtual special effect image according to a real-time position of a show object in the first real scene image; and rendering the target virtual special effect data in real time at the display position of the virtual special effect image to obtain the virtual special effect image.
In some embodiments of the present disclosure, the display module 2004 is further configured to render each piece of the at least one piece of virtual special effect data to obtain at least two sub-virtual special effect images; and, according to the real-time position of the performance object in the first real scene image, perform real-time superposition and real-time rendering of the at least two sub-virtual special effect images at their corresponding display positions to obtain the virtual special effect image.
In some embodiments of the present disclosure, the virtual special effects image comprises at least one of:
a virtual light effect graph; a virtual animation effect graph; a virtual auxiliary performance effect graph; a virtual scenery effect graph.
In some embodiments of the disclosure, the attribute information of the current show content includes at least one of:
the performance duration of the current performance content; a performance time node of the current performance content; the type of content of the current show.
In some embodiments of the disclosure, the attribute information of the show object includes at least one of:
a posture of the performance object; a category of the performance object; a position of the performance object.
In some embodiments of the present disclosure, the attribute information of the viewing object includes at least one of:
an expression of the viewing object; an action of the viewing object; a category of the viewing object.
Fig. 12 is a schematic diagram of an optional constituent structure of the display apparatus provided in the embodiment of the present disclosure, and as shown in fig. 12, the display apparatus 21 includes:
a transparent display screen 2101 configured to display an augmented reality effect in which the virtual special effect image and the real scene image are superimposed;
a memory 2102 for storing a computer program;
the processor 2103 is configured to, when executing the computer program stored in the memory 2102, implement the steps of the display method provided in the foregoing embodiment in combination with the transparent display 2101.
The display apparatus 21 further comprises: a communication bus 2104. The communication bus 2104 is configured to enable connection communications between these components.
The Memory 2102 is configured to store computer programs and applications executed by the processor 2103, and may also cache data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 2103 and modules in the presentation apparatus, and may be implemented by a FLASH Memory (FLASH) or a Random Access Memory (RAM).
The processor 2103, when executing the program, performs the steps of any of the presentation methods described above. Processor 2103 generally controls the overall operation of display device 21.
The Processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It is understood that the electronic device implementing the above processor functions may also be another device; the embodiments of the present disclosure are not limited in this respect.
The computer-readable storage medium/Memory may be a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface Memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM), and the like; it may also be any of various terminals, such as a mobile phone, computer, tablet device, or personal digital assistant, that includes one or any combination of the above-mentioned memories.
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present disclosure, reference is made to the description of the embodiments of the method of the present disclosure.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present disclosure, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure. The above-mentioned serial numbers of the embodiments of the present disclosure are merely for description and do not represent the merits of the embodiments.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present disclosure.
In addition, all the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Alternatively, the integrated unit of the present disclosure may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a device to perform all or part of the methods according to the embodiments of the present disclosure. The aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The methods disclosed in the several method embodiments provided in this disclosure may be combined arbitrarily without conflict to arrive at new method embodiments.
The features disclosed in the several method or apparatus embodiments provided in this disclosure may be combined in any combination to arrive at a new method or apparatus embodiment without conflict.
The above description is only an embodiment of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (15)

1. A display method, comprising:
acquiring a real scene image of a performance site, wherein display equipment is arranged between a performance area and a watching area in the performance site;
identifying scene information of the performance site based on the real scene image;
determining at least one piece of virtual special effect data matched with the scene information based on the scene information of the performance field;
and displaying, through a transparent display screen of the display device, an augmented reality effect in which a virtual special effect image rendered from the at least one piece of virtual special effect data is superimposed on the real scene image.
2. The method of claim 1, wherein the scene information comprises at least one of the following information: attribute information of the current performance content in the performance site; attribute information of a performance object located in the performance area; attribute information of a viewing object located in the viewing area.
3. The method of claim 2, wherein capturing the image of the real scene of the show venue comprises:
acquiring, through a first image acquisition device, a first real scene image corresponding to the performance area; and/or,
acquiring a second real scene image corresponding to the watching area through a second image acquisition device;
taking the first real scene image and/or the second real scene image as the real scene image.
4. The method of claim 3, wherein determining at least one virtual special effect data that matches the scene information based on the scene information for the show venue comprises at least one of:
determining first animation special effect data matched with the attribute information of the current performance content based on the attribute information of the current performance content; the first animation special effect data is used as virtual special effect data;
determining second animation special effect data matched with the attribute information of the current performance object based on the attribute information of the current performance object; the second animation special effect data is used as virtual special effect data;
determining third animation special effect data matched with the attribute information of the viewing object based on the attribute information of the viewing object; the third animation special effect data is used as virtual special effect data.
5. The method according to claim 3 or 4, wherein the displaying, through a transparent display screen of the display device, the augmented reality effect in which the virtual special effect image rendered from the at least one virtual special effect data is superimposed on the real scene image comprises:
determining target virtual special effect data according to the at least one piece of virtual special effect data;
rendering is carried out according to the target virtual special effect data to obtain the virtual special effect image;
and displaying, through the transparent display screen of the display device, the augmented reality effect in which the virtual special effect image is superimposed on the first real scene image.
6. The method of claim 5, wherein when the number of pieces of the at least one piece of virtual special effect data is two or more, the determining target virtual special effect data from the at least one piece of virtual special effect data comprises:
and determining the virtual special effect data with the highest priority as the target virtual special effect data according to the priority of the at least one piece of virtual special effect data.
7. The method of claim 6, wherein the rendering according to the target virtual special effect data to obtain a virtual special effect image comprises:
determining the display position of the virtual special effect image according to the real-time position of the performance object in the first real scene image;
and rendering the target virtual special effect data in real time at the display position of the virtual special effect image to obtain the virtual special effect image.
8. The method according to any one of claims 3 to 7, wherein when the number of pieces of the at least one piece of virtual special effect data is two or more, the performing rendering based on the at least one piece of virtual special effect data to obtain a virtual special effect image comprises:
rendering each virtual special effect data of the at least one virtual special effect data to obtain at least two sub virtual special effect images;
and according to the real-time position of the performance object in the first real scene image, performing real-time superposition and real-time rendering of the at least two sub-virtual special effect images at their corresponding display positions to obtain the virtual special effect image.
9. The method of any of claims 2-8, wherein the virtual special effects image comprises at least one of:
a virtual light effect graph; a virtual animation effect graph; a virtual auxiliary performance effect graph; a virtual scenery effect graph.
10. The method of any of claims 2-8, wherein the attribute information of the current show content includes at least one of:
the performance duration of the current performance content; a performance time node of the current performance content; the type of content of the current show.
11. The method of any of claims 2-8, wherein the attribute information of the performance object comprises at least one of:
a posture of the performance object; a category of the performance object; a position of the performance object.
12. The method according to any one of claims 2 to 8, wherein the attribute information of the viewing object includes at least one of:
an expression of the viewing object; an action of the viewing object; a category of the viewing object.
13. A display device, comprising:
the device comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a real scene image of a performance site, and display equipment is arranged between a performance area and a watching area in the performance site;
the identification module is used for identifying scene information of the performance site based on the real scene image;
the determining module is used for determining at least one piece of virtual special effect data matched with the scene information based on the scene information of the performance field;
and the display module is used for displaying the augmented reality effect superposed by the virtual special effect image rendered by the virtual special effect data and the real scene image through the transparent display screen of the display equipment.
14. A display apparatus, comprising:
the transparent display screen is used for displaying an augmented reality effect of the virtual special effect image and the real scene image which are superposed;
a memory for storing a computer program;
a processor for implementing the method of any one of claims 1 to 12 in conjunction with the transparent display screen when executing a computer program stored in the memory.
15. A computer-readable storage medium, storing a computer program which, when executed by a processor, implements the method of any one of claims 1 to 12.
CN202011566877.9A 2020-12-25 2020-12-25 Presentation method, presentation device, presentation equipment and computer-readable storage medium Pending CN112714305A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011566877.9A CN112714305A (en) 2020-12-25 2020-12-25 Presentation method, presentation device, presentation equipment and computer-readable storage medium


Publications (1)

Publication Number Publication Date
CN112714305A true CN112714305A (en) 2021-04-27

Family

ID=75546733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011566877.9A Pending CN112714305A (en) 2020-12-25 2020-12-25 Presentation method, presentation device, presentation equipment and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN112714305A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113625872A (en) * 2021-07-30 2021-11-09 深圳盈天下视觉科技有限公司 Display method, system, terminal and storage medium
CN114003092A (en) * 2021-10-29 2022-02-01 深圳康佳电子科技有限公司 Intelligent display equipment for virtual reality and virtual reality method
CN115022666A (en) * 2022-06-27 2022-09-06 北京蔚领时代科技有限公司 Interaction method and system for virtual digital person
CN115766312A (en) * 2022-10-25 2023-03-07 深圳绿米联创科技有限公司 Scene linkage demonstration method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018188088A1 (en) * 2017-04-14 2018-10-18 广州千藤玩具有限公司 Clay toy system based on augmented reality and digital image processing and method therefor
CN110737338A (en) * 2019-10-20 2020-01-31 南京象皮尼科技有限公司 Holographic projection method and system for special effects to automatically follow roles
CN110858491A (en) * 2018-08-07 2020-03-03 深圳市宝业恒实业股份有限公司 Intelligent scene multimedia informatization management control equipment based on Internet of things
CN111643900A (en) * 2020-06-08 2020-09-11 浙江商汤科技开发有限公司 Display picture control method and device, electronic equipment and storage medium
CN111897431A (en) * 2020-07-31 2020-11-06 北京市商汤科技开发有限公司 Display method and device, display equipment and computer readable storage medium
CN112040141A (en) * 2019-06-03 2020-12-04 南京风船云聚信息技术有限公司 Video production method based on three-dimensional reconstruction and virtual reality




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210427