CN112684893A - Information display method and device, electronic equipment and storage medium - Google Patents

Information display method and device, electronic equipment and storage medium

Info

Publication number
CN112684893A
Authority
CN
China
Prior art keywords
media resource
sand table
target display
head
display object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011623777.5A
Other languages
Chinese (zh)
Inventor
钱广璞
陈罡
徐欣
毕航
杨顺超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Electric Group Corp
Original Assignee
Shanghai Electric Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Electric Group Corp filed Critical Shanghai Electric Group Corp
Priority to CN202011623777.5A priority Critical patent/CN112684893A/en
Publication of CN112684893A publication Critical patent/CN112684893A/en
Pending legal-status Critical Current

Abstract

The application provides an information display method and apparatus, an electronic device, and a storage medium. The sand table display is realized through the cooperation of a head-mounted mixed reality device and an actual scene. During the display, different target display objects can be identified automatically, the virtual resources of each target object are presented through the head-mounted mixed reality device, and the user can control the displayed content through human-computer interaction, thereby improving the effect of information promotion and publicity.

Description

Information display method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of virtual reality technologies, and in particular, to an information display method and apparatus, an electronic device, and a storage medium.
Background
In traditional exhibitions and advertising campaigns, exhibitors mostly rely on physical models, brochures, display boards, or audio and video presentations.
Among these display methods, if a physical model is used as the object, the cost of manufacturing, transporting, and exhibiting the model is high, while the content presented on display boards, in brochures, or through audio and video is relatively monotonous and has a poor publicity effect. Therefore, how to improve the display and publicity effect is a problem to be solved.
Disclosure of Invention
The embodiments of the present application provide an information display method and apparatus, an electronic device, and a storage medium, aiming to solve the problem of the poor display and publicity effect of conventional exhibition methods.
In one aspect, an embodiment of the present application provides an information displaying method, including:
in response to a target display object of the sand table being scanned, controlling the head-mounted mixed reality device to display the media resource corresponding to the target display object;
and in response to a gesture operation, performing corresponding control on the media resource.
In some embodiments, the method further comprises:
determining that the target display object is scanned according to the following method:
if the mark of the target display object is scanned, determining that the target display object is scanned; or,
if the position information of the target display object is scanned, determining that the target display object is scanned.
In some embodiments, if the media resource is video or audio, the control items supported by the media resource include starting playback, fast-forward, pause, rewind, and media resource switching;
if the media resource is a picture, the control items supported by the media resource include picture zooming and media resource switching;
if the media resource is a dynamic animation of a three-dimensional virtual model and the media resource includes multiple operation modes of the three-dimensional virtual model, the control items supported by the media resource include selecting any operation mode for playback, switching between different operation modes, playing at least one operation mode in a loop, and media resource switching;
if the media resource includes a plurality of virtual components, the control items supported by the media resource include assembling a virtual device from at least two of the virtual components, controlling the operation of the virtual device, and media resource switching;
if the media resource is the structure of a three-dimensional virtual model, the control items supported by the media resource include displaying the shell of the three-dimensional virtual model, hiding the shell to display the internal structure of the three-dimensional virtual model, and media resource switching.
In some embodiments, if the sand table comprises a physical sand table, the method further comprises:
in response to a gesture operation for controlling the physical sand table, controlling the physical sand table to change;
or, in response to the control performed on the media resource, controlling the physical sand table to change.
In some embodiments, the method further comprises:
in response to a gesture operation for zooming the content, controlling the head-mounted mixed reality device to zoom the displayed content;
and/or,
in response to a gesture operation for adjusting the height, controlling the head-mounted mixed reality device to adjust the height of the displayed content.
In some embodiments, the operation object of the gesture operation is a virtual button displayed by the head-mounted mixed reality device; the method further comprises the following steps:
and in response to a position adjustment operation, controlling the head-mounted mixed reality device to adjust the area where the virtual button is located.
In some embodiments, if the sand table includes a physical sand table that can be seen through the head-mounted mixed reality device, the controlling the head-mounted mixed reality device to display the media resource corresponding to the target display object includes:
controlling the media resource to be displayed in a designated area with the physical sand table as a reference, or displaying the media resource at the position of the target display object.
In a second aspect, the present application further provides an information display apparatus, the apparatus comprising:
the media display module is used for, in response to a target display object of the sand table being scanned, controlling the head-mounted mixed reality device to display the media resource corresponding to the target display object;
and the interaction module is used for, in response to a gesture operation, performing corresponding control on the media resource.
In some embodiments, the apparatus further comprises:
a target display object determination module, configured to determine that the target display object is scanned according to the following method:
if the mark of the target display object is scanned, determining that the target display object is scanned; or,
if the position information of the target display object is scanned, determining that the target display object is scanned.
In some embodiments, if the media resource is video or audio, the control items supported by the media resource include starting playback, fast-forward, pause, rewind, and media resource switching;
if the media resource is a picture, the control items supported by the media resource include picture zooming and media resource switching;
if the media resource is a dynamic animation of a three-dimensional virtual model and the media resource includes multiple operation modes of the three-dimensional virtual model, the control items supported by the media resource include selecting any operation mode for playback, switching between different operation modes, playing at least one operation mode in a loop, and media resource switching;
if the media resource includes a plurality of virtual components, the control items supported by the media resource include assembling a virtual device from at least two of the virtual components, controlling the operation of the virtual device, and media resource switching;
if the media resource is the structure of a three-dimensional virtual model, the control items supported by the media resource include displaying the shell of the three-dimensional virtual model, hiding the shell to display the internal structure of the three-dimensional virtual model, and media resource switching.
In some embodiments, if the sand table comprises a physical sand table, the interaction module is further configured to:
in response to a gesture operation for controlling the physical sand table, controlling the physical sand table to change;
or, in response to the control performed on the media resource, controlling the physical sand table to change.
In some embodiments, the interaction module is further to:
in response to a gesture operation for zooming the content, controlling the head-mounted mixed reality device to zoom the displayed content;
and/or,
in response to a gesture operation for adjusting the height, controlling the head-mounted mixed reality device to adjust the height of the displayed content.
In some embodiments, the operation object of the gesture operation is a virtual button displayed by the head-mounted mixed reality device; the interaction module is further to:
and in response to a position adjustment operation, controlling the head-mounted mixed reality device to adjust the area where the virtual button is located.
In some embodiments, if the sand table includes a physical sand table that can be seen through the head-mounted mixed reality device, the media display module is configured to:
control the media resource to be displayed in a designated area with the physical sand table as a reference, or display the media resource at the position of the target display object.
In a third aspect, another embodiment of the present application further provides an electronic device, including at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute any information presentation method provided by the embodiment of the application.
In a fourth aspect, another embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a computer program, and the computer program is used to enable a computer to execute any information presentation method in the embodiments of the present application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the embodiments are briefly described below. The drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a schematic view of an application scenario of an information presentation method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an information displaying method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a template image provided in accordance with an embodiment of the present application;
fig. 4 is a schematic structural diagram of an information display apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Hereinafter, some terms in the embodiments of the present application are explained to facilitate understanding by those skilled in the art.
(1) In the embodiments of the present application, the term "plurality" means two or more, and other terms are similar thereto.
(2) "and/or" describes an association between associated objects and means that three relationships are possible; for example, "A and/or B" can mean: A alone, both A and B, or B alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
In view of the poor effect of conventional information display methods, AR technology can be used to display content. However, if an AR exhibition relies only on a traditional mobile phone or tablet for presentation, the stereoscopic effect is poor, information is delivered to the audience one-way, and the audience cannot understand it well. In the embodiments of the present application, the virtual scene is mapped onto the real scene and presented directly in front of the audience, and interaction between a person and the virtual scene, and even between the person and the real scene, can be realized through gesture operations, helping users find the content they are interested in. With this technology, the publicity effect of exhibitions and advertising campaigns can be greatly improved.
To facilitate understanding of the information display method provided by the embodiments of the present application, the following description is made with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an application scenario according to an embodiment of the present application.
As shown in fig. 1, the application scenario may include, for example, a storage system 10, a server 20, and a head-mounted mixed reality device 30. The head-mounted mixed reality device 30 may be any device capable of network access, including but not limited to VR glasses, VR headsets, and the like.
The sand table referred to in the embodiment of the present application may be a physical sand table (i.e., a physical sand table) or an electronic sand table. The sand table may be a building model or an equipment model.
The storage system 10 shown in fig. 1 can store the information of the sand table. The server 20 interacts with the head-mounted mixed reality device 30 to implement the complete information display service.
In practice, taking an exhibition as an example, a spectator can wear the head-mounted mixed reality device and move around the venue. Different display objects on the physical sand table in the venue can be provided with marks, and when a mark is recognized, the head-mounted device displays the virtual reality image of the corresponding display object. The virtual display image and the physical sand table can be shown to the audience in a superimposed manner. When spectators are interested in a virtual display image, they can interact with it by performing gesture operations in the air. For example, a spectator can zoom the virtual display image, search for content of interest and play it, and view the shell and internal structure of the sand table through the virtual display image. In addition, spectators can control the physical sand table through mid-air gestures, for example turning the lights of a building sand table on or off, or changing the content shown on a screen on the sand table. Meanwhile, the display object can be explained to the audience by combining the virtual display image with audio.
The processing of these virtual display images may be performed by the server 20 and transmitted to the head-mounted mixed reality device 30, or may be performed by the head-mounted mixed reality device 30 itself. The information display method provided by the embodiments of the present application may thus be executed partly by the head-mounted mixed reality device 30 and partly by the server 20.
In the application scenario shown in fig. 1, the head-mounted mixed reality devices 30 (e.g., 30_1, 30_2, ..., 30_N) may also communicate with each other via the network 40. The network 40 may be a network for information transfer in a broad sense and may include one or more communication networks such as a wireless communication network, the Internet, a private network, a local area network, a metropolitan area network, a wide area network, or a cellular data network.
Only a single server and a single head-mounted mixed reality device are described in detail in the present application, but those skilled in the art will understand that the single server 20, head-mounted mixed reality device 30, and storage system 10 shown are intended to represent the operation of the technical solutions of the present application involving such devices, servers, and storage systems. This is for convenience of illustration and does not imply a limitation on the number, type, or location of head-mounted mixed reality devices and servers. It should be noted that the underlying concepts of the example embodiments of the present application are not altered if additional modules are added to or removed from the illustrated environment. In addition, although fig. 1 shows a bidirectional arrow from the storage system 10 to the server 20 for convenience of explanation, those skilled in the art will understand that the above data transmission and reception may also be realized through the network 40.
The server 20 may be a server, a server cluster composed of several servers, or a cloud computing center. The server 20 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform.
In the information display method provided by the embodiments of the present application, the user can learn about the displayed content through the head-mounted mixed reality device, which allows the user not only to view virtual reality pictures but also to see through to the physical sand table and view the real model.
In practice, so that users of different heights and operating habits can view the displayed information comfortably, in the embodiments of the present application the user can adjust the height of, and enlarge or reduce, the content displayed by the head-mounted mixed reality device through gesture operations. For example, the height can be adjusted by performing a drag gesture in the air, and the displayed content can be enlarged or reduced by sliding both hands in opposite directions.
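As an illustrative sketch (not part of the patent text), the two-hand zoom gesture above can be reduced to a ratio of hand separations; the coordinates below are hypothetical hand positions in metres:

```python
import math

def scale_factor(prev_left, prev_right, cur_left, cur_right):
    """Two-hand zoom sketch: scale the displayed content by the ratio of
    the current hand separation to the previous hand separation."""
    prev_sep = math.dist(prev_left, prev_right)
    cur_sep = math.dist(cur_left, cur_right)
    return cur_sep / prev_sep

# Hands slide apart from 0.2 m to 0.4 m, so the content doubles in size.
print(scale_factor((0.0, 0.0), (0.2, 0.0), (-0.1, 0.0), (0.3, 0.0)))  # 2.0
```

A real headset runtime would feed tracked hand positions into such a ratio every frame and clamp the result to a sensible zoom range.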
In another embodiment, different areas of the sand table may have different display requirements. For example, one component in the sand table may provide an audio explanation service covering the component's performance, design concept, parameters, and so on, while another part of the sand table allows the user to operate it, with the effect of the operation displayed using virtual reality technology. Therefore, in the embodiments of the present application, the content of different areas can be displayed to the user automatically: the sand table can be divided into a plurality of target display objects, each of which can be automatically identified and have its corresponding content displayed through the head-mounted virtual reality device. In view of this, fig. 2 is a schematic flow chart of the information display method provided by the embodiments of the present application, which includes the following steps:
in step 201, in response to the target display object scanned to the sand table, controlling the head-mounted mixed reality device to display a media resource corresponding to the target display object;
in some embodiments, in order to achieve accurate positioning of different target display objects, different target display objects may be configured with different marks in the embodiments of the present application. For example, different two-dimensional codes and image codes are configured on different target display objects in the physical sand table, and different colors can be configured so as to uniquely identify one target display object by adopting different marks. Of course, any mark capable of distinguishing different target display objects in the implementation is applicable to the embodiment of the present application, and the present application does not limit this. Therefore, when the user uses the head-mounted virtual reality device, if the mark of the target display object is scanned, the target display object is determined to be scanned.
In practice, different target display objects may be identified based on machine vision. For example, the head-mounted virtual reality device may be equipped with an image capture device; after an image is captured, it can be analyzed for the presence of a target display object's mark. In practice, a neural network model may be used to classify the image, so as to determine whether a mark of a target display object exists and, if so, which target display object it belongs to.
Of course, in another embodiment, template images of the marks of different target display objects may also be constructed, and template matching may be used to determine which target display object's mark is present in the captured image. For example, fig. 3 shows a template image of a target display object's mark, in which the pattern uniquely identifies one target display object; in practice, the template image is matched against the captured image, and if the matching succeeds, the target display object is determined to be scanned. At design time, the simpler the template image, the better, which improves recognition efficiency and accuracy.
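The template-matching idea can be sketched with a minimal sum-of-squared-differences search; the pattern, frame, and zero-error threshold below are assumptions for illustration (a real system would typically use a library routine such as OpenCV's normalized template matching on camera images):

```python
def find_marker(frame, template, max_ssd=0):
    """Slide `template` over `frame` (both lists of rows) and return the
    (row, col) of the best sum-of-squared-differences match, or None if
    even the best match exceeds max_ssd."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(len(frame) - th + 1):
        for c in range(len(frame[0]) - tw + 1):
            ssd = sum(
                (frame[r + i][c + j] - template[i][j]) ** 2
                for i in range(th) for j in range(tw)
            )
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos if best <= max_ssd else None

# A 4x4 high-contrast "marker" pattern standing in for a template image.
template = [[0, 0, 0, 0],
            [0, 9, 9, 0],
            [0, 9, 9, 0],
            [0, 0, 0, 0]]

# A 12x12 "camera frame" with the marker embedded at row 5, column 6.
frame = [[0] * 12 for _ in range(12)]
for i in range(4):
    for j in range(4):
        frame[5 + i][6 + j] = template[i][j]

print(find_marker(frame, template))  # (5, 6)
```

The simpler the template, the fewer cells each comparison needs, which mirrors the efficiency point made above.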
When the marks of a plurality of target display objects appear in one image, one target display object can be selected according to the order in which they enter the line of sight, or the user can select one through human-computer interaction. Of course, all identified objects can also be taken as target display objects for subsequent processing.
When the mark of the target display object is a two-dimensional code, a graphic code or a bar code, a specific target display object can be identified through code scanning.
Of course, in another embodiment, the mark of different target display objects may also be the respective position information of each target display object. The virtual reality device can switch the visual angle when changing the position or the direction in the real space. Different target presentation objects can be identified based on the position information of the object viewed from different perspectives.
For example, a special initialized spatial coordinate system may be established, and then the identification point positions of the different target display objects in this initialized coordinate system are written into the controller of the head-mounted virtual reality device, so that different target display objects can be identified based on the initialized space and their identification points.
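One way to sketch this position-based identification is a nearest-identification-point lookup in the initialized coordinate system; the object names, coordinates, and distance threshold below are hypothetical, not from the patent:

```python
import math

# Hypothetical registry: identification points of each target display
# object, in the initialized sand-table coordinate system (metres).
IDENTIFICATION_POINTS = {
    "turbine_hall": (0.0, 0.0, 0.2),
    "control_room": (1.5, 0.0, 0.3),
    "cooling_tower": (0.8, 1.2, 0.5),
}

def identify_target(gaze_point, max_distance=0.4):
    """Return the registered object whose identification point is nearest
    to the viewed point, or None if nothing is close enough."""
    best_name, best_dist = None, max_distance
    for name, point in IDENTIFICATION_POINTS.items():
        dist = math.dist(gaze_point, point)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

print(identify_target((1.4, 0.1, 0.3)))  # close to the control room
print(identify_target((5.0, 5.0, 5.0)))  # outside every radius -> None
```

In a real controller, the gaze point would come from the headset's pose tracking rather than a hard-coded tuple.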
Similarly, when a plurality of target display objects are identified, one target display object may be selected according to the sequence of the different target display objects entering the sight line, or one target display object may be selected according to the user operation. Of course, all the identified target display objects may also be used as target display objects to be processed subsequently.
Based on marking different target display objects, different areas in the sand table can be configured and controlled independently, and the objects can be displayed to users comprehensively and accurately.
In some embodiments, different target display objects may be configured with different program functions to enable the display of different media resources. That is, the marks of different target objects are associated with different program functions, and based on an identified mark, the corresponding media resource can be displayed by calling the associated program function.
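The mark-to-function association can be sketched as a dispatch table; the marker identifiers and handler functions below are hypothetical illustrations, not APIs from the patent:

```python
# Hypothetical program functions, one per kind of media resource.
def show_video(obj_id):
    return f"playing introduction video for {obj_id}"

def show_model(obj_id):
    return f"rendering 3D model of {obj_id}"

# Dispatch table: each marker identifier maps to its program function.
MARKER_HANDLERS = {
    "qr:pump-01": show_video,
    "qr:boiler-02": show_model,
}

def on_marker_scanned(marker_id):
    """Call the program function associated with the scanned mark."""
    handler = MARKER_HANDLERS.get(marker_id)
    if handler is None:
        return "no media resource registered for this marker"
    return handler(marker_id.split(":", 1)[1])

print(on_marker_scanned("qr:pump-01"))
```

New target display objects can then be added by registering one more entry, without touching the scanning code.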
When the media resource is displayed, in order to help the user understand the displayed content, in the embodiments of the present application the media resource may be displayed at the position of the target display object on the sand table. Alternatively, to let the user see both the overall appearance of the sand table and the content of the target display object, the media resource may be displayed in a designated area with the sand table as a reference.
After the media resource of the target display object is displayed, in step 202, corresponding control may be performed on the media resource in response to a gesture operation.
In one embodiment, the media resource may be any form of virtual resource, such as audio, video, pictures (which may include logos), animations, three-dimensional virtual models, and so on. To make it convenient for the user to control the virtual resource and thereby understand the display object, in the embodiments of the present application the user is allowed to perform gesture operations for human-computer interaction while the media resource is displayed. For example:
1) If the media resource is video or audio, the control items supported by the media resource include starting playback, fast-forward, pause, rewind, media resource switching, and the like.
2) If the media resource is a picture, the control items supported by the media resource include picture zooming, media resource switching, and the like.
3) If the media resource is a dynamic animation of a three-dimensional virtual model and includes multiple operation modes of the model, the control items supported by the media resource include selecting any operation mode for playback, switching between different operation modes, playing at least one operation mode in a loop, media resource switching, and the like.
For example, buttons can be added to the virtual video playing screen to realize playback control of the media resource, including "play", "pause", a progress bar, and the like, achieving manual control of media playback. For another example, by pointing in the air at a resource switching button, the user can switch to another media resource for viewing; by pointing at a next-level button, the user can view the next level of introduction, so that the media resources can be presented as a multi-level menu.
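The virtual-button playback controls described above can be sketched as a small player state machine; the button names, the 10-second seek step, and the file names are assumptions for illustration:

```python
class MediaPlayerState:
    """Minimal sketch of the playback control items: play, pause,
    fast-forward, rewind, and media resource switching."""

    def __init__(self, playlist):
        self.playlist = playlist
        self.index = 0         # current media resource
        self.position = 0.0    # playback position in seconds
        self.playing = False

    def press(self, button):
        """Apply one virtual-button press and return the new state."""
        if button == "play":
            self.playing = True
        elif button == "pause":
            self.playing = False
        elif button == "fast_forward":
            self.position += 10.0
        elif button == "rewind":
            self.position = max(0.0, self.position - 10.0)
        elif button == "switch":
            # Move to the next media resource and reset playback.
            self.index = (self.index + 1) % len(self.playlist)
            self.position, self.playing = 0.0, False
        return self.playlist[self.index], self.position, self.playing

player = MediaPlayerState(["intro.mp4", "assembly.mp4"])
player.press("play")
player.press("fast_forward")
print(player.press("switch"))
```

A headset runtime would map the recognized air-tap on each virtual button to one `press` call.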
4) If the media resource includes a plurality of virtual components, the control items supported by the media resource include assembling a virtual device from at least two of the virtual components, controlling the operation of the virtual device, media resource switching, and the like.
For example, for smart street lamps, different street lamps can be laid along a road and turned on or off at different times according to changes in the weather. The user can select a road on the displayed virtual three-dimensional map as required and configure smart street lamps at different positions along it. After the configuration is completed, different weather change scenarios can be selected to demonstrate how the street lamps are switched on and off under different weather conditions.
For another example, a device may be assembled from different components, and the user can select components of different models and performance to assemble a virtual device according to his own needs. In the embodiments of the present application, the performance parameters of different components and their matching relationships with other components can be defined, guiding the user to select suitable components to assemble the virtual device. After the virtual device is assembled, the user can control its operation through gestures and observe its operating effect and performance in different operating states.
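The component matching relationships mentioned above can be sketched as a declared compatibility relation that is checked before the virtual device is accepted; the component names and the pairwise-chain rule are hypothetical simplifications:

```python
# Hypothetical compatibility relation between virtual components.
COMPATIBLE = {
    ("motor_A", "gearbox_X"),
    ("gearbox_X", "pump_1"),
}

def can_assemble(components):
    """Accept an assembly only if it has at least two components and
    every consecutive pair is declared compatible (in either order)."""
    if len(components) < 2:
        return False  # a virtual device needs at least two components
    return all(
        (a, b) in COMPATIBLE or (b, a) in COMPATIBLE
        for a, b in zip(components, components[1:])
    )

print(can_assemble(["motor_A", "gearbox_X", "pump_1"]))  # True
print(can_assemble(["motor_A", "pump_1"]))               # False
```

Guiding the user then amounts to offering only those components compatible with the last one placed.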
5) If the media resource is the structure of a three-dimensional virtual model, the control items supported by the media resource include displaying the shell of the three-dimensional virtual model, hiding the shell to display its internal structure, and media resource switching.
That is, the user can observe the appearance of the three-dimensional virtual model and also see through to its internal structure. In the embodiments of the present application, show and hide attributes can be configured for the shell of the three-dimensional virtual model: when the user needs to view the shell, the attribute is set to show and the shell is displayed; when the user needs to view the internal structure, the shell attribute is set to hide and the internal structure is displayed for viewing. In practice, the shells of different components can be shown and hidden individually by the controller.
For example, in a building model, the internal supporting facilities of the different functional areas serve as their internal structures, and the user may choose to hide the shell of one or more functional areas.
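The per-component show/hide shell attribute can be sketched as a visibility flag on each model component; the component name is hypothetical:

```python
class ModelComponent:
    """Sketch of a 3D model part whose shell can be shown or hidden;
    hiding the shell exposes the component's internal structure."""

    def __init__(self, name):
        self.name = name
        self.shell_visible = True  # shell shown by default

    def visible_layer(self):
        return "shell" if self.shell_visible else "internal structure"

boiler = ModelComponent("boiler")
print(boiler.visible_layer())   # shell
boiler.shell_visible = False    # user asks to see inside
print(boiler.visible_layer())   # internal structure
```

A controller holding one such flag per functional area can hide any subset of shells independently, as described above.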
In one embodiment, virtual buttons may be included in the content displayed by the head mounted virtual reality device. The gesture operation of the user may be implemented based on a virtual button. For example, a different virtual button may be selected by gesture operation to control the media asset.
The position of the virtual buttons may be adjustable for ease of user operation. For example, in response to a user-triggered position adjustment operation, the head-mounted mixed reality device is controlled to adjust the region where the virtual button is located, such as moving the virtual buttons to the upper-left or lower-right area of the field of view, helping the user arrange the buttons according to their own preferences and needs.
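The position-adjustment operation above can be sketched as rewriting the screen-space region in which a button is rendered. The region presets and coordinates below are illustrative assumptions:

```python
# Hypothetical sketch: named region presets map to normalized screen
# coordinates; a position-adjustment gesture moves the button's region.
PRESETS = {
    "upper_left": (0.0, 0.0),
    "lower_right": (0.8, 0.9),
}


class VirtualButton:
    def __init__(self, label, region="upper_left"):
        self.label = label
        self.region = region

    def handle_position_adjust(self, target_region):
        """Move the button to a preset region; return its new coordinate."""
        if target_region not in PRESETS:
            raise ValueError(f"unknown region: {target_region}")
        self.region = target_region
        return PRESETS[target_region]


btn = VirtualButton("switch media")
print(btn.handle_position_adjust("lower_right"))  # (0.8, 0.9)
```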
In another embodiment, the user is able to control not only the virtual scene but also the real scene. For example, if the sand table includes a physical sand table, the physical sand table may be controlled to change in response to a gesture operation for controlling it: the user may turn a light on a physical building model on or off, or control an airplane model to open and close its cabin door, take off, land, and so on.
Alternatively, in another embodiment, the physical sand table may also be controlled to change in response to control performed on the media resource. For example, when the user controls a device to start, if a light should turn on when the device starts, the light in the corresponding device model is controlled to turn on.
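The linkage from media-resource control to the physical sand table might be sketched as a controller forwarding media events to the matching physical model. The event names and light interface are assumptions for illustration:

```python
# Hypothetical sketch: a controller maps media events (e.g. starting a
# virtual device) to changes on the corresponding physical model's lights.
class PhysicalModel:
    def __init__(self, name):
        self.name = name
        self.light_on = False

    def set_light(self, on):
        self.light_on = on


class SandTableController:
    def __init__(self):
        self.models = {}

    def register(self, model):
        self.models[model.name] = model

    def on_media_event(self, event, target):
        # when the virtual device starts, light the corresponding model
        model = self.models[target]
        if event == "device_start":
            model.set_light(True)
        elif event == "device_stop":
            model.set_light(False)


ctrl = SandTableController()
ctrl.register(PhysicalModel("turbine"))
ctrl.on_media_event("device_start", "turbine")
print(ctrl.models["turbine"].light_on)  # True
```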
In addition, in the embodiment of the application, the target display object may further include interaction activities between the merchant and the user, such as lottery events, prize quiz events, product trial experiences, and the like.
For ease of understanding, the information display method provided by the embodiment of the present application is described below by taking a physical sand table of an ark as an example.
First, corresponding marks can be configured for the different target display objects in the physical sand table; the marks can be parts with distinctive shapes, two-dimensional codes, and the like.
The relevant marks are then registered in the control program of the head-mounted mixed reality device, and media resources are configured for the different marks.
After the user puts on the head-mounted mixed reality device, the device's camera can capture a picture and transmit it to the server or processor for recognition. If a registered mark is recognized, the corresponding program function is triggered to control the head-mounted mixed reality device to display the media resource of the marked object. The media resources include, but are not limited to, multimedia materials such as a three-dimensional virtual model of a product, control buttons, video, audio, pictures, and logos. For example, when the main camera of the HoloLens scans the mark identification point of the intelligent ark sand table, a three-dimensional model of the virtual sand table appears on the display of the HoloLens glasses, together with operation buttons (switch target, open/close the cabin, start and stop), a promotional video, and a parameter introduction.
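The mark-registration flow above might be sketched as a registry mapping mark identifiers to the media resources they trigger; the recognition step itself is stubbed out, and all identifiers are illustrative assumptions:

```python
# Hypothetical sketch: registered marks map to media resources; when a
# camera frame yields a recognized, registered mark, the associated media
# resources are returned for display.
class MarkRegistry:
    def __init__(self):
        self._media = {}

    def register(self, mark_id, media_resources):
        self._media[mark_id] = media_resources

    def on_frame(self, recognized_marks):
        """Given the mark ids recognized in a camera frame, return the media
        resources to display (empty list if no registered mark was seen)."""
        for mark in recognized_marks:
            if mark in self._media:
                return self._media[mark]
        return []


registry = MarkRegistry()
registry.register("ark_sand_table", ["3d_model", "control_buttons", "promo_video"])
print(registry.on_frame(["unknown", "ark_sand_table"]))
# ['3d_model', 'control_buttons', 'promo_video']
```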
The user performs gesture operations in mid-air in front of the eyes with the fingers, touching the virtual buttons to control the head-mounted mixed reality device to perform the corresponding operation. For example, the buttons can be dragged to different positions, so that the buttons in the control area can be placed at a comfortable position according to the height and habits of each user.
The model can also be dragged and stretched, and the three-dimensional virtual models of the different items in the sand table can be controlled to perform related actions according to different instructions.
Through the buttons on the virtual video playback screen, the user can control video and audio playback, including "play", "pause", the progress bar, and the like, thereby manually controlling the playback of the promotional video.
In addition, tapping the corresponding floating virtual button in mid-air with a finger can pop up a secondary screen, including pictures of the corresponding steam turbine in the sand table, text or audio introducing the related parameters, and steam turbine video and audio materials. The user can hide the secondary screen introducing the steam turbine by tapping the picture again.
In another embodiment, tapping the related floating virtual button with a finger can also switch media resources, open or close the shell of the three-dimensional virtual model, start or stop the three-dimensional virtual model of the steam turbine, and the like.
In addition, the system can also be configured with human-machine interaction functions for items such as an offshore wind turbine and an assembled building, and may also include a lottery module to interact with the user.
In practice, the related programs can be packaged and installed on the HoloLens, and the user wears the HoloLens glasses to operate in cooperation with the physical sand table.
As shown in fig. 4, based on the same inventive concept, an information presentation apparatus 400 is provided, which includes:
the media display module 401 is configured to, in response to a target display object of the sand table being scanned, control the head-mounted mixed reality device to display the media resource corresponding to the target display object;
the interaction module 402 is configured to, in response to a gesture operation, perform corresponding control on the media resource.
In some embodiments, the apparatus further comprises:
a target display object determination module, configured to determine that the target display object is scanned according to the following method:
if the mark of the target display object is scanned, determining that the target display object is scanned; or
if the position information of the target display object is scanned, determining that the target display object is scanned.
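The two detection routes above can be sketched as a single predicate: the target display object counts as scanned if either its mark or its position information is recognized in the current frame. The frame and target structures below are assumptions for illustration:

```python
# Hypothetical sketch: a frame carries the mark ids and position ids
# recognized by the camera; a target is scanned if either matches.
def target_scanned(frame, target):
    """frame: {'marks': set of mark ids, 'positions': set of position ids}
    target: {'mark': mark id or None, 'position': position id or None}"""
    if target.get("mark") in frame.get("marks", set()):
        return True
    if target.get("position") in frame.get("positions", set()):
        return True
    return False


frame = {"marks": {"qr_17"}, "positions": set()}
print(target_scanned(frame, {"mark": "qr_17", "position": "zone_3"}))  # True
print(target_scanned(frame, {"mark": "qr_99", "position": "zone_3"}))  # False
```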
In some embodiments, if the media resource is video or audio, the control items supported by the media resource include starting playback, fast-forwarding, pausing, rewinding, and switching media resources;
if the media resource is a picture, the control items supported by the media resource include zooming the picture and switching media resources;
if the media resource is a dynamic animation of a three-dimensional virtual model and the media resource includes multiple operation modes of the three-dimensional virtual model, the control items supported by the media resource include selecting any operation mode for playing, switching between different operation modes, playing at least one operation mode in a loop, and switching media resources;
if the media resource includes a plurality of virtual components, the control items supported by the media resource include assembling a virtual device from at least two of the virtual components, controlling the virtual device to operate, and switching media resources;
if the media resource is the structure of a three-dimensional virtual model, the control items supported by the media resource include displaying the shell of the three-dimensional virtual model, hiding the shell to display the internal structure of the three-dimensional virtual model, and switching media resources.
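The per-type control items above amount to a lookup table from media-resource type to supported control items; an interaction layer would only expose gestures that appear in the table for the current type. The key and item names below are illustrative, not defined by the application:

```python
# Hypothetical sketch: map each media-resource type to its control items.
CONTROL_ITEMS = {
    "video":      ["start", "fast_forward", "pause", "rewind", "switch"],
    "audio":      ["start", "fast_forward", "pause", "rewind", "switch"],
    "picture":    ["zoom", "switch"],
    "animation":  ["select_mode", "switch_mode", "loop_modes", "switch"],
    "components": ["assemble", "run_device", "switch"],
    "structure":  ["show_shell", "hide_shell", "switch"],
}


def supported_controls(media_type):
    """Return the control items for a media type (empty if unknown)."""
    return CONTROL_ITEMS.get(media_type, [])


print(supported_controls("picture"))  # ['zoom', 'switch']
```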
In some embodiments, if the sand table comprises a physical sand table, the interaction module is further configured to:
responding to the gesture operation for controlling the physical sand table, and controlling the physical sand table to change;
or, in response to the control performed on the media asset, controlling the physical sandbox to change.
In some embodiments, the interaction module is further to:
controlling the head-mounted mixed reality device to zoom the displayed content in response to a gesture operation for zooming the content;
and/or,
in response to the gesture operation of adjusting the height, controlling the head-mounted mixed reality device to adjust the height of the displayed content.
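The zoom and height-adjustment gestures above can be sketched as mapping each gesture to a change in the displayed content's scale or vertical offset. The gesture names and transform fields are assumptions for illustration:

```python
# Hypothetical sketch: gestures mutate a simple transform on the displayed
# content (uniform scale for zoom, vertical offset for height adjustment).
class DisplayedContent:
    def __init__(self):
        self.scale = 1.0
        self.height = 0.0  # metres relative to the default anchor

    def apply_gesture(self, gesture, amount):
        if gesture == "zoom":
            # multiplicative zoom, clamped to avoid vanishing content
            self.scale = max(0.1, self.scale * amount)
        elif gesture == "adjust_height":
            self.height += amount
        else:
            raise ValueError(f"unsupported gesture: {gesture}")


content = DisplayedContent()
content.apply_gesture("zoom", 2.0)
content.apply_gesture("adjust_height", 0.3)
print(content.scale, content.height)  # 2.0 0.3
```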
In some embodiments, the operation object of the gesture operation is a virtual button displayed by the head-mounted mixed reality device; the interaction module is further to:
and responding to the position adjusting operation, and controlling the head-mounted mixed reality equipment to adjust the position area where the virtual button is located.
In some embodiments, if the sand table includes a physical sand table viewable through the head-mounted mixed reality device, the media display module is configured to:
control the media resource to be displayed in a designated area relative to the physical sand table, or display the media resource at the position of the target display object.
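The two placement strategies above might be sketched as a single position function: either a designated area defined as an offset from the physical sand table, or the target display object's own position. The coordinate convention and offset are illustrative assumptions:

```python
# Hypothetical sketch: compute the world position at which to render the
# media resource, anchored either to the sand table or to the target object.
def media_position(sand_table_origin, target_position=None, offset=(0.0, 0.5, 0.0)):
    """If target_position is given, display at the target object itself;
    otherwise display in a designated area offset from the sand table."""
    if target_position is not None:
        return target_position
    return tuple(o + d for o, d in zip(sand_table_origin, offset))


print(media_position((1.0, 0.0, 2.0)))                             # (1.0, 0.5, 2.0)
print(media_position((1.0, 0.0, 2.0), target_position=(3, 0, 4)))  # (3, 0, 4)
```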
For the implementation and beneficial effects of the operations in the information display apparatus, reference is made to the description of the foregoing method, which is not repeated here.
Having described the information presentation method and apparatus according to the exemplary embodiments of the present application, an electronic device according to another exemplary embodiment of the present application is described next.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or program product. Accordingly, various aspects of the present application may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, all of which may generally be referred to herein as a "circuit," "module," or "system."
In some possible implementations, an electronic device according to the present application may include at least one processor, and at least one memory. The memory stores program code, and the program code, when executed by the processor, causes the processor to execute the steps of the information presentation method according to the various exemplary embodiments of the present application described above in the present specification. For example, the processor may perform steps as in an information presentation method.
The electronic device 130 according to this embodiment of the present application is described below with reference to fig. 5. The electronic device 130 shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 5, the electronic device 130 is represented in the form of a general electronic device. The components of the electronic device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 that connects the various system components (including the memory 132 and the processor 131).
Bus 133 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a processor or local bus using any of a variety of bus architectures.
The memory 132 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The electronic device 130 may also communicate with one or more external devices 134 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with the electronic device 130, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 130 to communicate with one or more other electronic devices. Such communication may occur via input/output (I/O) interfaces 135. Also, the electronic device 130 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 136. As shown, network adapter 136 communicates with other modules for electronic device 130 over bus 133. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 130, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In some possible embodiments, aspects of an information presentation method provided by the present application may also be implemented in the form of a program product including program code for causing a computer device to perform the steps of an information presentation method according to various exemplary embodiments of the present application described above in this specification when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for information presentation of the embodiments of the present application may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on an electronic device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of remote electronic devices, the remote electronic devices may be connected to the consumer electronic device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external electronic device (e.g., through the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing device to cause a series of operational steps to be performed on the computer or other programmable device to produce a computer implemented process such that the instructions which execute on the computer or other programmable device provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. An information presentation method, the method comprising:
in response to a target display object of the sand table being scanned, controlling the head-mounted mixed reality device to display a media resource corresponding to the target display object; and
in response to a gesture operation, performing corresponding control on the media resource.
2. The method of claim 1, further comprising:
determining that the target display object is scanned according to the following method:
if the mark of the target display object is scanned, determining that the target display object is scanned; or
if the position information of the target display object is scanned, determining that the target display object is scanned.
3. The method of claim 1,
if the media resource is video or audio, the control items supported by the media resource comprise starting playback, fast-forwarding, pausing, rewinding, and switching media resources;
if the media resource is a picture, the control items supported by the media resource comprise zooming the picture and switching media resources;
if the media resource is a dynamic animation of a three-dimensional virtual model and the media resource comprises multiple operation modes of the three-dimensional virtual model, the control items supported by the media resource comprise selecting any operation mode for playing, switching between different operation modes, playing at least one operation mode in a loop, and switching media resources;
if the media resource comprises a plurality of virtual components, the control items supported by the media resource comprise assembling a virtual device from at least two of the virtual components, controlling the virtual device to operate, and switching media resources;
if the media resource is the structure of a three-dimensional virtual model, the control items supported by the media resource comprise displaying the shell of the three-dimensional virtual model, hiding the shell to display the internal structure of the three-dimensional virtual model, and switching media resources.
4. The method of claim 1, wherein if the sand table comprises a physical sand table, the method further comprises:
responding to the gesture operation for controlling the physical sand table, and controlling the physical sand table to change;
or, in response to the control performed on the media asset, controlling the physical sandbox to change.
5. The method of claim 1, further comprising:
controlling the head-mounted mixed reality device to zoom the displayed content in response to a gesture operation for zooming the content;
and/or,
in response to the gesture operation of adjusting the height, controlling the head-mounted mixed reality device to adjust the height of the displayed content.
6. The method according to any one of claims 1-5, wherein the operation object of the gesture operation is a virtual button displayed by the head-mounted mixed reality device; the method further comprises the following steps:
and responding to the position adjusting operation, and controlling the head-mounted mixed reality equipment to adjust the position area where the virtual button is located.
7. The method according to any one of claims 1 to 5, wherein if the sand table includes a physical sand table, the physical sand table is viewable from the head-mounted mixed reality device, and the controlling the head-mounted mixed reality device to display the media resource corresponding to the target presentation object includes:
and controlling the media resources to be displayed in a designated area with the physical sand table as a reference, or displaying the media resources at the position of the target display object.
8. An information presentation device, the device comprising:
the media display module, configured to, in response to a target display object of the sand table being scanned, control the head-mounted mixed reality device to display the media resource corresponding to the target display object; and
the interaction module, configured to, in response to a gesture operation, perform corresponding control on the media resource.
9. An electronic device comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A computer storage medium, characterized in that the computer storage medium stores a computer program for causing a computer to perform the method of any one of claims 1-7.
CN202011623777.5A 2020-12-31 2020-12-31 Information display method and device, electronic equipment and storage medium Pending CN112684893A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011623777.5A CN112684893A (en) 2020-12-31 2020-12-31 Information display method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011623777.5A CN112684893A (en) 2020-12-31 2020-12-31 Information display method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112684893A true CN112684893A (en) 2021-04-20

Family

ID=75453978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011623777.5A Pending CN112684893A (en) 2020-12-31 2020-12-31 Information display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112684893A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123013A (en) * 2017-03-01 2017-09-01 阿里巴巴集团控股有限公司 Exchange method and device under line based on augmented reality
CN107506037A (en) * 2017-08-23 2017-12-22 三星电子(中国)研发中心 A kind of method and apparatus of the control device based on augmented reality
CN107767355A (en) * 2016-08-18 2018-03-06 深圳市劲嘉数媒科技有限公司 The method and apparatus of image enhaucament reality
CN109257855A (en) * 2018-08-28 2019-01-22 上海宽创国际文化科技股份有限公司 A kind of spectators and wait entity building sand table gesture interaction experiencing system and method
CN110209285A (en) * 2019-06-19 2019-09-06 哈尔滨拓博科技有限公司 A kind of sand table display systems based on gesture control
CN110456901A (en) * 2019-08-16 2019-11-15 上海电气集团股份有限公司 Control method, system, electronic equipment and the storage medium that object is shown in exhibition


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114063785A (en) * 2021-11-23 2022-02-18 Oppo广东移动通信有限公司 Information output method, head-mounted display device, and readable storage medium
WO2023093329A1 (en) * 2021-11-23 2023-06-01 Oppo广东移动通信有限公司 Information output method, head-mounted display device and readable storage medium
CN114356089A (en) * 2021-12-30 2022-04-15 Oppo广东移动通信有限公司 Augmented reality glasses control method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
US20230079307A1 (en) Defining, displaying and interacting with tags in a three-dimensional model
US11678004B2 (en) Recording remote expert sessions
US11043031B2 (en) Content display property management
Grasset et al. Image-driven view management for augmented reality browsers
US9652046B2 (en) Augmented reality system
US20140267598A1 (en) Apparatus and method for holographic poster display
US10963140B2 (en) Augmented reality experience creation via tapping virtual surfaces in augmented reality
KR20140082610A (en) Method and apaaratus for augmented exhibition contents in portable terminal
US10868977B2 (en) Information processing apparatus, information processing method, and program capable of adaptively displaying a video corresponding to sensed three-dimensional information
CN106464773B (en) Augmented reality device and method
TWI795762B (en) Method and electronic equipment for superimposing live broadcast character images in real scenes
US20190244431A1 (en) Methods, devices, and systems for producing augmented reality
Khan et al. Rebirth of augmented reality-enhancing reality via smartphones
CN112684893A (en) Information display method and device, electronic equipment and storage medium
WO2019016820A1 (en) A METHOD FOR PLACING, TRACKING AND PRESENTING IMMERSIVE REALITY-VIRTUALITY CONTINUUM-BASED ENVIRONMENT WITH IoT AND/OR OTHER SENSORS INSTEAD OF CAMERA OR VISUAL PROCCESING AND METHODS THEREOF
JP2022507502A (en) Augmented Reality (AR) Imprint Method and System
WO2014189840A1 (en) Apparatus and method for holographic poster display
US20120327114A1 (en) Device and associated methodology for producing augmented images
US20220350650A1 (en) Integrating overlaid digital content into displayed data via processing circuitry using a computing memory and an operating system memory
KR102443049B1 (en) Electric apparatus and operation method thereof
US20230334790A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
US20230334792A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
US20230334791A1 (en) Interactive reality computing experience using multi-layer projections to create an illusion of depth
CA3139068C (en) System and method for quantifying augmented reality interaction
Huang A method of evaluating user visual attention to moving objects in head mounted virtual reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination