CN111131904B - Video playing method and head-mounted electronic equipment - Google Patents


Info

Publication number
CN111131904B
CN111131904B (application CN201911418185.7A)
Authority
CN
China
Prior art keywords
target
story
user
head
video
Prior art date
Legal status
Active
Application number
CN201911418185.7A
Other languages
Chinese (zh)
Other versions
CN111131904A (en)
Inventor
郝磊 (Hao Lei)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201911418185.7A priority Critical patent/CN111131904B/en
Publication of CN111131904A publication Critical patent/CN111131904A/en
Application granted granted Critical
Publication of CN111131904B publication Critical patent/CN111131904B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4882Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention disclose a video playing method and a head-mounted electronic device, relate to the field of communication technology, and aim to solve the problem that the ways of presenting sight-spot information in the prior art are tedious. The method comprises: displaying N identifiers on a virtual screen, where the N identifiers indicate M story videos associated with a target object of a first sight spot, and the first sight spot is the sight spot at the current location of the head-mounted electronic device; receiving a first input from a user on a target identifier among the N identifiers; and, in response to the first input, playing the target story video indicated by the target identifier on the virtual screen. The target object comprises at least one of a building and natural scenery; N and M are positive integers.

Description

Video playing method and head-mounted electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a video playing method and a head-mounted electronic device.
Background
With the improvement of living standards, tourism has become a popular topic and pursuit: by traveling, people experience the distinct cultures of different scenic spots, adding color to their lives.
At present, there are two main ways of presenting sight-spot information to a user: the first is to present the information on a signboard; the second is to provide a two-dimensional code indicating the information, which the user must scan with an electronic device to trigger the display of an interface containing the corresponding sight-spot information.
However, both ways of presenting sight-spot information are tedious, and it is difficult for the user to truly engage with the culture of the sight spot and resonate with it.
Disclosure of Invention
The embodiment of the invention provides a video playing method and a head-mounted electronic device, which can solve the problem that the ways of providing sight-spot information in the prior art are tedious.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present invention provides a video playing method applied to a head-mounted electronic device. The method comprises: displaying N identifiers on a virtual screen, where the N identifiers indicate M story videos associated with a target object of a first sight spot, the first sight spot being the sight spot at the current location of the head-mounted electronic device; receiving a first input from a user on a target identifier among the N identifiers; and, in response to the first input, playing the target story video indicated by the target identifier on the virtual screen. The target object comprises at least one of a building and natural scenery; N and M are positive integers.
In a second aspect, an embodiment of the present invention provides a head-mounted electronic device comprising a display module, a receiving module, and a playing module. The display module is configured to display N identifiers on a virtual screen, where the N identifiers indicate M story videos associated with a target object of a first sight spot, the first sight spot being the sight spot at the current location of the head-mounted electronic device. The receiving module is configured to receive a first input from a user on a target identifier among the N identifiers displayed by the display module. The playing module is configured to play, in response to the first input received by the receiving module, the target story video indicated by the target identifier on the virtual screen. The target object comprises at least one of a building and natural scenery; N and M are positive integers.
In a third aspect, an embodiment of the present invention provides a head-mounted electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the video playing method in the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the video playing method as in the first aspect.
In the embodiment of the invention, the head-mounted electronic device can display N identifiers on a virtual screen, where the N identifiers indicate M story videos associated with a target object of a first sight spot, the first sight spot being the sight spot at the current location of the head-mounted electronic device; receive a first input from a user on a target identifier among the N identifiers; and, in response to the first input, play the target story video indicated by the target identifier on the virtual screen, where the target object comprises at least one of a building and natural scenery, and N and M are positive integers. Through this scheme, when the head-mounted electronic device is located at the first sight spot, it can display the N identifiers indicating the M story videos associated with the target object of that sight spot, and by an input on a target identifier among the N identifiers the user can trigger the device to play the indicated target story video on the virtual screen. The target story video can present the sight-spot culture to the user from many aspects, so that by watching it the user can truly engage with that culture and resonate with it.
Drawings
FIG. 1 is a block diagram of a possible operating system according to an embodiment of the present invention;
fig. 2 is a flowchart of a video playing method according to an embodiment of the present invention;
fig. 3 is a schematic view of an interface of a video playing method according to an embodiment of the present invention;
fig. 4 is a second schematic view of an interface of a video playing method according to an embodiment of the present invention;
fig. 5 is a second flowchart of a video playing method according to an embodiment of the present invention;
fig. 6 is a third schematic view of an interface of a video playing method according to an embodiment of the present invention;
fig. 7 is a third flowchart of a video playing method according to an embodiment of the present invention;
fig. 8 is a fourth schematic view of an interface of a video playing method according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a head-mounted electronic device according to an embodiment of the present invention;
fig. 10 is a hardware schematic diagram of a head-mounted electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The term "and/or" herein describes an association relationship between associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The symbol "/" herein denotes an "or" relationship between the associated objects; for example, "A/B" denotes A or B.
The terms "first," "second," "third," and "fourth," etc. in the description and in the claims of the present application are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first input, the second input, the third input, the fourth input, etc. are used to distinguish between different inputs, rather than to describe a particular order of inputs.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to serve as examples or illustrations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as preferred or more advantageous than other embodiments or designs. Rather, the words "exemplary" and "for example" are intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of processing units means two or more processing units; plural elements means two or more elements, and the like.
In an embodiment of the present invention, the head-mounted electronic device may be an electronic device having a Virtual Reality (VR) function, an Augmented Reality (AR) function, or a Mixed Reality (MR) function, such as a VR headset, VR glasses, a VR helmet, AR glasses, an AR helmet, MR glasses, or an MR helmet.
The virtual screen in the embodiment of the present invention may be any carrier that can be used to display content projected by a projection device when AR technology (or VR technology or MR technology) is used to display the content. The projection device may be a projection device using AR technology (or VR technology or MR technology), such as an AR headset (or VR headset or MR headset) in an embodiment of the present invention.
The following description will exemplarily describe displaying content on a virtual screen by using AR technology.
When displaying content on the virtual screen by using the AR technology, the projection device may project a virtual scene acquired by (or internally integrated with) the projection device, or a virtual scene and a real scene onto the virtual screen, so that the virtual screen may display the content, thereby showing an effect of superimposing the real scene and the virtual scene to a user.
In connection with the different application scenarios of AR technology, the virtual screen may generally be any feasible carrier, such as the display screen of an electronic device (e.g., a mobile phone), a lens of AR glasses, the windshield of a car, or a wall of a room.
The following describes an exemplary process of displaying content on a virtual screen by using AR technology, by taking the virtual screen as a display screen of an electronic device, a lens of AR glasses, and a windshield of an automobile as examples.
In one example, when the virtual screen is the display screen of an electronic device, the projection device may be that electronic device. The electronic device can capture the real scene of the area where it is located through its camera and show it on its display screen; it can then project an acquired (or internally integrated) virtual scene onto the display screen, so that the virtual scene is superimposed on the real scene and the user sees the combined effect through the display screen.
In another example, when the virtual screen is a lens of AR glasses, the projection device may be the AR glasses. When the user wears the glasses, the user can see the real scene in the area where the user is located through the lenses of the AR glasses, and the AR glasses can project the acquired (or internally integrated) virtual scene onto the lenses of the AR glasses, so that the user can see the display effect of the real scene and the virtual scene after superposition through the lenses of the AR glasses.
In yet another example, when the virtual screen is a windshield of an automobile, the projection device may be any electronic device. When the user is located in the automobile, the user can see the real scene in the area where the user is located through the windshield of the automobile, and the projection device can project the acquired (or internally integrated) virtual scene onto the windshield of the automobile, so that the user can see the display effect of the real scene and the virtual scene after superposition through the windshield of the automobile.
Of course, in the embodiment of the present invention, the specific form of the virtual screen may not be limited, for example, it may be a non-carrier real space. In this case, when the user is located in the real space, the user can directly see the real scene in the real space, and the projection device can project the acquired (or internally integrated) virtual scene into the real space, so that the user can see the display effect of the real scene and the virtual scene after superposition in the real space.
The embodiment of the invention provides a video playing method in which a head-mounted electronic device can display N identifiers on a virtual screen, where the N identifiers indicate M story videos associated with a target object of a first sight spot, the first sight spot being the sight spot at the current location of the head-mounted electronic device; receive a first input from a user on a target identifier among the N identifiers; and, in response to the first input, play the target story video indicated by the target identifier on the virtual screen, where the target object comprises at least one of a building and natural scenery, and N and M are positive integers. Through this scheme, when the head-mounted electronic device is located at the first sight spot, it can display the N identifiers indicating the M story videos associated with the target object of that sight spot, and by an input on a target identifier among the N identifiers the user can trigger the device to play the indicated target story video on the virtual screen. The target story video can present the sight-spot culture to the user from many aspects, so that by watching it the user can truly engage with that culture and resonate with it.
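The display → input → play flow described above can be outlined as a minimal sketch. All class, method, and field names here are illustrative assumptions for exposition; they are not part of the patent.

```python
# Minimal sketch of the claimed flow: display N identifiers for the
# current sight spot, accept the user's first input selecting one,
# then play the indicated target story video. Names are assumptions.

class HeadMountedDevice:
    def __init__(self, sight_spot, story_videos):
        # story_videos: mapping of identifier -> story video title
        self.sight_spot = sight_spot
        self.story_videos = story_videos
        self.playing = None

    def display_identifiers(self):
        """Step 201: show the N identifiers on the virtual screen."""
        return list(self.story_videos)

    def receive_input(self, target_identifier):
        """Step 202: receive the first input on a target identifier."""
        if target_identifier not in self.story_videos:
            raise ValueError("identifier not displayed")
        return target_identifier

    def play(self, target_identifier):
        """Step 203: play the indicated target story video."""
        self.playing = self.story_videos[target_identifier]
        return self.playing


device = HeadMountedDevice(
    sight_spot="first sight spot",
    story_videos={"tag 1": "story video 1", "tag 2": "story video 2"},
)
shown = device.display_identifiers()    # N = 2 identifiers
chosen = device.receive_input("tag 2")  # first input
print(device.play(chosen))              # -> story video 2
```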
The head-mounted electronic device in the embodiment of the invention can be a head-mounted electronic device with an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
The following describes the software environment to which the video playing method according to an embodiment of the present invention applies, taking a possible operating system as an example.
Fig. 1 is a schematic diagram of a possible operating system according to an embodiment of the present invention. In fig. 1, the architecture of the operating system includes 4 layers, respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application layer comprises various application programs (including system application programs and third-party application programs) in an operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes a library (also referred to as a system library) and an operating system runtime environment. The library mainly provides various resources required by the operating system. The operating system runtime environment is used to provide a software environment for the operating system.
The kernel layer is the lowest layer of the operating system software. Based on the Linux kernel, it provides kernel system services and hardware-related drivers for the operating system.
In the embodiment of the present invention, a developer may develop, based on the system architecture of the operating system shown in fig. 1, a software program implementing the video playing method provided by the embodiment of the present invention, so that the video playing method can run on the operating system shown in fig. 1. That is, the processor or the head-mounted electronic device may implement the video playing method provided by the embodiment of the present invention by running the software program in the operating system.
The execution subject of the video playing method provided by the embodiment of the present invention may be the above head-mounted electronic device, or a functional module and/or functional entity in the head-mounted electronic device capable of implementing the method, as determined by actual usage requirements; the embodiment of the present invention is not limited. The following exemplarily describes the video playing method provided by the embodiment of the present invention, taking the head-mounted electronic device as an example.
Taking the electronic device with a VR function as an example, the following cases are possible in the embodiment of the present invention. In a first possible case, the head-mounted electronic device is a VR head-mounted device, which may receive the user's voice input (and possibly a touch input in the display area of the VR head-mounted device) to control the VR head-mounted device. In a second possible case, the head-mounted electronic device includes a display device in addition to the VR head-mounted device; it may receive the user's voice input (and possibly a touch input in the display area of the VR head-mounted device) through the VR head-mounted device, and receive the user's touch input, voice input, and the like through the display device, to control the VR head-mounted device. In a third possible case, the head-mounted electronic device includes a VR head-mounted device and may simultaneously be communicatively connected to another electronic device that includes a display screen; the VR head-mounted device may receive the user's voice input (and possibly a touch input in its display area) to control itself, while the other electronic device may receive the user's touch input and transmit a corresponding touch instruction to the VR head-mounted device to control it. Other cases are also possible; the embodiment of the invention is not limited.
A VR headset works by magnifying the image produced by a small two-dimensional display through an optical system. Specifically, light emitted from the small display passes through a convex lens, and refraction makes the image appear farther away; this effect enlarges a near object so that it is viewed as if distant, achieving a so-called holographic view (hologram). The image of a liquid crystal display (small cathode-ray tubes were used early on, and organic electroluminescent displays have recently been applied) is made to resemble a large-screen picture by passing it through an eccentric free-form-surface lens. Since the eccentric free-form-surface lens is a slanted concave lens, it acts not only as a lens but also as a free-form-surface prism. When the generated image enters the prism surface, it is totally reflected to the concave mirror surface opposite the viewer's eyes. That mirror surface carries a reflective coating; the light is reflected and simultaneously magnified back to the prism surface, where its inclination is corrected before it reaches the viewer's eyes.
Meanwhile, the left and right screens display the left-eye and right-eye images through the left and right lenses respectively; once the eyes acquire this parallax information, the brain forms a stereoscopic image. Put differently, the VR headset builds a virtual-reality visual field in the user's visual system from the different views and depth cues in the local space. The primary determinant of this virtual field of view is the lens, not the user's pupil.
Therefore, to obtain a wider field of view, it is necessary either to shorten the distance between the user's eyeball and the lens or to increase the size of the lens.
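The trade-off just stated can be illustrated with simple geometry: treating the lens as the limiting aperture, the field of view widens as eye relief shrinks or lens diameter grows. The thin-aperture model and the sample dimensions below are assumptions for illustration, not figures from the patent.

```python
from math import atan, degrees

# Simplified geometric illustration of the stated trade-off:
# FOV = 2 * atan(lens_diameter / (2 * eye_to_lens_distance)).
# The model and the sample dimensions are assumptions.

def field_of_view_deg(lens_diameter_mm: float, eye_to_lens_mm: float) -> float:
    """Full angular field of view, in degrees, for a circular lens
    viewed from the given eye-to-lens distance."""
    return degrees(2 * atan(lens_diameter_mm / (2 * eye_to_lens_mm)))

print(round(field_of_view_deg(40, 20), 1))  # baseline: 90.0 degrees
print(round(field_of_view_deg(40, 12), 1))  # shorter eye relief -> wider FOV
print(round(field_of_view_deg(55, 20), 1))  # larger lens        -> wider FOV
```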
Optionally, in the embodiment of the present invention, the head-mounted electronic device may provide two viewing modes for the user: one is a story-video viewing mode; the other is a scenery viewing mode (in which the head-mounted electronic device is effectively equivalent to sunglasses), which lets the user conveniently view the scenery along the way while walking with the device on.
Referring to fig. 2, an embodiment of the present invention provides a video playing method applied to a head-mounted electronic device, and the method may include steps 201 to 203 described below.
Step 201, the head-mounted electronic device displays N identifiers on a virtual screen.
The N identifiers are used to indicate M story videos associated with a target object of a first sight spot, where the first sight spot is the sight spot at the current location of the head-mounted electronic device (i.e., the first sight spot is determined from position information of the device's current location). The target object comprises at least one of a building and natural scenery; it may also be other content, and the embodiment of the invention is not limited. N and M are positive integers.
It will be appreciated that each of the M story videos associated with the target object is associated with the target object and presents a story related to it.
It is to be understood that, in the embodiment of the present invention, each of the N identifiers may indicate one story video, in which case the N identifiers indicate N story videos (M = N); or any two of the N identifiers may jointly indicate one story video, in which case the N identifiers indicate N(N-1)/2 story videos, with N an integer greater than 1; or any three of the N identifiers may jointly indicate one story video, in which case the N identifiers indicate N(N-1)(N-2)/6 story videos, with N an integer greater than 2. Other cases are possible; the embodiment of the present invention is not limited. Each identifier may be the name of the corresponding story video, a keyword of the corresponding story video, or the like, which is not limited in the embodiment of the present invention.
In the embodiment of the invention, multiple modes of indicating story videos by identifiers are provided, which can meet different requirements of users.
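The video counts for the indicator modes above are binomial coefficients, which can be checked with a short sketch (the grouping rule comes from the text; the function itself is an illustrative assumption):

```python
from math import comb

# Number M of story videos indicated by N identifiers, when each
# group of `group_size` identifiers jointly indicates one video.

def videos_indicated(n: int, group_size: int) -> int:
    """M = C(n, group_size)."""
    return comb(n, group_size)

print(videos_indicated(6, 1))  # one identifier per video:  M = 6
print(videos_indicated(6, 2))  # any two identifiers:  M = 6*5/2 = 15
print(videos_indicated(6, 3))  # any three identifiers:  M = 6*5*4/6 = 20
```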
Optionally, each of the N identifiers indicates one story video, and the N identifiers are ordered according to a target parameter, where the target parameter includes at least one of: an occurrence time parameter of the corresponding story video, a play frequency parameter of the corresponding story video, and a type parameter of the corresponding story video.
In the embodiment of the present invention, the occurrence time parameter of the corresponding story video may be a specific occurrence time of the corresponding story video, may also be an occurrence era of the corresponding story video, and may also be other feasible contents, which is not limited in the embodiment of the present invention. Further, the N identifications may be ordered from front to back (or back to front) according to the occurrence time parameter of the corresponding story video.
In the embodiment of the present invention, the play frequency parameter of the corresponding story video may be the number of times the story video was played over a period of time, which may also be understood as its click-through rate over that period. Further, the N identifiers may be ordered from high to low (or from low to high) according to the play frequency parameter of the corresponding story video.
In the embodiment of the present invention, the type parameter of the corresponding story video may be a historical story type, a celebrity story type, a mythology story type, or a folk-custom type; other feasible types are also possible, and the embodiment of the invention is not limited. Further, the N identifiers may be ordered as historical story type, celebrity story type, mythology story type, folk-custom type; or as mythology story type, folk-custom type, celebrity story type, historical story type; or in other orders, which the embodiment of the present invention does not limit.
In the embodiment of the invention, ordering the N identifiers according to the target parameter gives the user a preliminary understanding of each story video, making it convenient to select the story video to explore further.
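The three orderings described above can be sketched with standard sorting; the field names and the sample data are assumptions for illustration:

```python
# Sketch of ordering the N identifiers by each target parameter.
# Field names, sample data, and the type-order list are assumptions.

TYPE_ORDER = ["historical", "celebrity", "mythology", "folk custom"]

identifiers = [
    {"name": "tag 1", "occurrence_year": 1368, "play_count": 120, "type": "mythology"},
    {"name": "tag 2", "occurrence_year": 618,  "play_count": 300, "type": "historical"},
    {"name": "tag 3", "occurrence_year": 1912, "play_count": 45,  "type": "celebrity"},
]

# Occurrence-time parameter: earliest story first (front to back).
by_time = sorted(identifiers, key=lambda i: i["occurrence_year"])

# Play-frequency parameter: most-played (highest click rate) first.
by_plays = sorted(identifiers, key=lambda i: i["play_count"], reverse=True)

# Type parameter: historical, celebrity, mythology, folk-custom order.
by_type = sorted(identifiers, key=lambda i: TYPE_ORDER.index(i["type"]))

print([i["name"] for i in by_time])   # ['tag 2', 'tag 1', 'tag 3']
print([i["name"] for i in by_plays])  # ['tag 2', 'tag 1', 'tag 3']
print([i["name"] for i in by_type])   # ['tag 2', 'tag 3', 'tag 1']
```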
Optionally, any P identifiers in the N identifiers are used to indicate a story video, each identifier is a keyword in the corresponding story video, and P is a positive integer less than or equal to N.
It is understood that the N identifiers indicate C(N, P) = N(N-1)(N-2)⋯(N-P+1) / P! story videos; that is, each group of P keywords indicates one story video.
Illustratively, with N = 6, as shown in (a) of fig. 3, the 6 identifiers are tag 1, tag 2, tag 3, tag 4, tag 5, and tag 6. As shown in (b) of fig. 3, tag 1 and tag 2 jointly indicate story 1.
In the embodiment of the invention, indicating one story video by P keywords gives the user a preliminary understanding of each story video, making it convenient to select the story video to explore further.
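The "any P identifiers indicate one story video" mode amounts to mapping each P-sized keyword set to a video, as in the fig. 3 example where tag 1 and tag 2 jointly indicate story 1. The construction of the mapping below is an illustrative assumption:

```python
from itertools import combinations

# Map each 2-keyword combination of the N = 6 tags to one story video,
# so C(6, 2) = 15 videos in total. The numbering scheme is assumed.

tags = ["tag 1", "tag 2", "tag 3", "tag 4", "tag 5", "tag 6"]
P = 2

story_for = {
    frozenset(pair): f"story {k}"
    for k, pair in enumerate(combinations(tags, P), start=1)
}

print(len(story_for))                            # 15
print(story_for[frozenset({"tag 1", "tag 2"})])  # story 1
```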
Optionally, the step 201 may be specifically implemented by the following step 201 a.
Step 201a, in case that the target condition is satisfied, displaying the N identifiers on the virtual screen.
Optionally, the target condition includes at least one of: detecting that the distance between the head-mounted electronic device and the target object is less than or equal to a preset threshold; receiving a target input in which the user enters the target object; detecting the target object in received voice information of the user; and determining, based on gaze tracking, that the user's gaze is focused on the target object.
It can be understood that when the target condition is that the distance between the head-mounted electronic device and the target object is detected to be less than or equal to the preset threshold, the head-mounted electronic device may determine that the user is interested in the sight culture related to the target object, and therefore, N identifiers indicating M story videos associated with the target object may be displayed for the user to select. The value of the preset threshold may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
Illustratively, the head-mounted electronic device determines a target object according to the position where the head-mounted electronic device (user) is currently located, and displays an identification for indicating a story video associated with the target object for the user.
It can be understood that when the target condition is that a target input used by the user to input the target object is received, it can be determined that the user is interested in the sight culture related to the target object, and therefore, N identifiers for indicating the M story videos associated with the target object can be displayed for the user to select. The target input may be an operation of the user entering the target object in an input box, an operation of the user selecting the target object from a plurality of objects, or another feasible operation, and the embodiment of the present invention is not limited.
It can be understood that when the target condition is that the target object is detected from the received voice information of the user, it can be determined that the user is interested in the sight culture related to the target object, and therefore, N identifiers for indicating M story videos associated with the target object can be displayed for the user to select.
For example, a user may have a voice conversation with a head-mounted electronic device to learn sight culture about a target sight. The head-mounted electronic equipment can determine a target object according to the voice information of the user and recommend an identifier for indicating a story video associated with the target object for the user.
It can be understood that when the target condition is that the user's gaze is determined, based on the gaze tracking technique, to be focused on the target object, it can be determined that the user is interested in the sight culture related to the target object, and therefore, N identifiers for indicating the M story videos associated with the target object can be displayed for the user to select. For the gaze tracking technique, reference may be made to any related technology, which is not described herein again.
It can be understood that when the target condition is that an image containing the target object is detected in the user's field of view while the user wears the VR headset, it can be determined that the user is interested in the sight culture related to the target object, and therefore, N identifiers for indicating the M story videos associated with the target object can be displayed for the user to select. The VR headset can acquire images in the user's field of view through a camera, and then detect, through an image recognition technology, whether those images contain an image of the target object. For the specific image recognition technology, reference may be made to any related technology, and the embodiment of the present invention is not limited.
Illustratively, an object on which the user's line of sight is focused is determined as a target object, and an identification indicating a story video associated with the target object is displayed for the user.
In the embodiment of the invention, various target conditions are provided, namely, under the condition that the user is positioned in the first scenic spot, various modes of recommending the story video for the user are provided, and different requirements of the user can be met.
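The kinds of target conditions above can be combined into a single predicate. The following is a minimal sketch only; the field names, sensing sources, and the 5-metre threshold are illustrative assumptions, not details from the embodiment:

```python
from dataclasses import dataclass

@dataclass
class SensedContext:
    distance_m: float   # measured distance from headset to the target object
    typed_text: str     # text the user entered, "" if none
    speech_text: str    # recognised speech from the user, "" if none
    gazed_object: str   # object the gaze tracker reports, "" if none

def target_condition_met(ctx: SensedContext, target: str,
                         threshold_m: float = 5.0) -> bool:
    """True if at least one of the optional target conditions holds."""
    return (ctx.distance_m <= threshold_m        # close to the target object
            or target in ctx.typed_text          # target input by the user
            or target in ctx.speech_text         # target heard in user speech
            or ctx.gazed_object == target)       # gaze focused on the target

# Example: the user mentions the pagoda in conversation while far from it.
ctx = SensedContext(distance_m=40.0, typed_text="",
                    speech_text="tell me about the pagoda", gazed_object="")
print(target_condition_met(ctx, "pagoda"))  # True
```

When any branch fires, the device would proceed to display the N identifiers on the virtual screen.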
In the embodiment of the present invention, the location information may be longitude and latitude information, or may be other location information that can indicate a current location of the head-mounted electronic device, which is not limited in the embodiment of the present invention.
Optionally, the head-mounted electronic device may obtain the position information of its current position through positioning manners such as Global Positioning System (GPS) positioning, base station positioning, Wireless Fidelity (Wi-Fi) positioning, Internet Protocol (IP) address positioning, Radio Frequency Identification (RFID) positioning, two-dimensional code positioning, and Bluetooth positioning; for the specific positioning technology implementing the above positioning manners, reference may be made to any related technology, which is not repeated in the embodiments of the present invention.
In step 202, the head-mounted electronic device receives a first input of a target identifier of the N identifiers from a user.
The target identifier is at least one of the N identifiers, i.e., the target identifier is one or more of the N identifiers.
Optionally, the first input may be a click input of the user on the target identifier, a slide input of the user on the target identifier, or other feasibility inputs, which is not limited in the embodiment of the present invention.
Illustratively, the click input may be a click input with any number of clicks or fingers, such as a single-click input, a double-click input, a triple-click input, a two-finger click input, or a three-finger click input; the slide input may be a slide input in any direction or a multi-finger slide input, for example, an upward, downward, leftward, or rightward slide input, a two-finger slide input, or a three-finger slide input.
It can be understood that, in the case where each of the N identifiers indicates one story video, if the target identifier is one of the N identifiers, the target identifier indicates one story video, and if the target identifier is multiple of the N identifiers, the target identifier indicates multiple story videos; under the condition that any P marks in the N marks indicate one story video, if the target mark is P marks in the N marks, the target mark indicates one story video, and if the target mark is S marks in the N marks (S is a positive integer less than P), the target mark indicates a plurality of story videos.
Step 203, the head-mounted electronic device responds to the first input, and plays the target story video indicated by the target identification on the virtual screen.
The head-mounted electronic device responds to the first input, and plays the target story video indicated by the target identification through the head-mounted electronic device.
Optionally, in this embodiment of the present invention, the target story video may include one story video or may include a plurality of story videos, and this embodiment of the present invention is not limited.
Illustratively, in the case that the target story video includes at least two story videos, the step 203 may be specifically implemented by the step 203a described below.
And 203a, sequentially playing the at least two story videos according to a preset sequence on the virtual screen.
Optionally, in the embodiment of the present invention, the preset sequence may be a selection sequence of a user, or may also be an arrangement sequence of at least two story videos, which is not limited in the embodiment of the present invention.
In the embodiment of the invention, when the user selects a plurality of story videos at one time, the head-mounted electronic equipment plays the story videos for the user in sequence according to the preset sequence, so that the user operation can be simplified, and the user time can be saved.
In the embodiment of the present invention, when the head-mounted electronic device is an electronic device with a VR function, the head-mounted electronic device plays the target story video and, based on the different visual fields and different picture depth perceptions created in a local space, forms a virtual-reality visual field in the user's brain visual system, so that the user can feel personally on the scene and immersed in the story.
Illustratively, as shown in fig. 4, while the user watches the target story video through a VR headset, the user (indicated by mark "2") watches the content of the whole story video as a passer-by character. A virtual tour guide character (indicated by mark "1") is also arranged in the story video to provide additional explanation for the user, so that the user can realistically feel the various situations occurring in the story, become invested in the whole story, follow the plot of the story characters (a story character is indicated by mark "3") to understand the various relationships in the story, and better understand the scenic spot culture explained by the story video.
In the embodiment of the invention, before the target story video is played, if the head-mounted electronic equipment detects that the user does not wear the head-mounted electronic equipment, the head-mounted electronic equipment can prompt the user to wear the head-mounted electronic equipment so as to be convenient for watching the target story video.
In the embodiment of the invention, the head-mounted electronic equipment can be connected with a server (cloud server), acquire the story video from the server and play the story video; the head-mounted electronic equipment can also be connected with a server (cloud server) through other head-mounted electronic equipment, and the story video is acquired from the server through other head-mounted electronic equipment and played; the present invention may be implemented in other ways, and the embodiments of the present invention are not limited.
In the embodiment of the invention, the user can also select the M story videos to be played in sequence according to the preset sequence.
In the embodiment of the present invention, the story video may be a story film shot with VR synthesis technology. The story video may be shot in 360 degrees by a camera rig equipped with lenses facing four or all directions. Because the camera lens has a fixed focus and lacks the close-ups and tracking shots of a traditional film, the camera's movement track needs to be set in advance; closed shooting is then carried out, and post-production synthesis outputs a story video that presents the complete story line to the user.
In the embodiment of the present invention, the head-mounted electronic device can acquire the story video from a server (for example, a cloud server) by adopting, for the virtual scene, technologies such as a progressive downloading mechanism, lightweight processing technology, intelligent caching technology, and a large-scale three-dimensional virtual scene progressive downloading and transmission mechanism, so that the story video can be played online.
To resolve the contradiction between, on the one hand, the large scenes, numerous building and tree models, and large data volume of a virtual tourism scene and, on the other, the small storage space of a head-mounted electronic device, roaming through the virtual scene adopts a progressive downloading mechanism: the models are first built with multi-resolution hierarchical modeling, and then model data of different resolutions are downloaded on demand according to factors such as the distance between an object and the viewpoint and the deviation angle between the object and the line of sight, without degrading the roaming visual effect, thereby reducing the network delay of scene downloading.
Due to limits on internet speed and bandwidth, instant downloading of (ultra-)large-scale WebVR virtual scenes and three-dimensional models over the internet has always been a bottleneck; for a virtual tourism scene constructed from three-dimensional models to play smoothly on a Web page, the polygon count and textures of the scene must be reduced, and a reuse mechanism adopted when making the scene models. Lightweight processing technology: 3D roaming of the virtual scene is realized through lightweight modeling → WebVR script programming → background management and support framework → lightweight engine scheduling; a plug-in-free simulation engine based on WebGL is designed to efficiently schedule scenes at the million-patch level, making the user's virtual roaming experience on the network smoother.
As for the intelligent caching technology: caching is the most effective means for the head-mounted electronic device to deliver three-dimensional performance. The simulation engine divides the network three-dimensional information service system into several independent physical layers, so that a multi-layer cache mechanism can be implemented effectively and an efficient cache system framework built. A three-dimensional model data pre-fetching mechanism obtains part of the data from the database service in advance and stores it in memory, reducing the frequency with which the three-dimensional service component accesses the three-dimensional spatial database; SQL (Structured Query Language) queries on the relational database can quickly extract spatially similar three-dimensional elements in batches, avoiding repeated accesses to the database. Network front-end caching: data at a certain proportion of scales is acquired step by step and spliced at the client, and data for a region that has been visited once is cached locally on the client, reducing the amount of network transmission and the number of interactions between networks.
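The client-side caching and pre-fetching idea can be sketched as follows; the `region_id` keying, payloads, and capacity are placeholders for illustration, not the engine's actual interfaces:

```python
from collections import OrderedDict

class TileCache:
    """Minimal client-side cache sketch: keeps recently visited region
    data locally so repeat visits avoid another network round trip."""
    def __init__(self, capacity: int = 64):
        self.capacity = capacity
        self._store = OrderedDict()   # insertion order doubles as LRU order
        self.fetches = 0              # counts simulated network accesses

    def _fetch_from_server(self, region_id):
        self.fetches += 1
        return f"model-data-for-{region_id}"  # placeholder payload

    def get(self, region_id):
        if region_id in self._store:
            self._store.move_to_end(region_id)  # refresh LRU position
            return self._store[region_id]
        data = self._fetch_from_server(region_id)
        self._store[region_id] = data
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)     # evict least recently used
        return data

    def prefetch(self, region_ids):
        # Pre-fetching mechanism: pull likely-needed regions in advance.
        for rid in region_ids:
            self.get(rid)
```

A usage pattern would be to `prefetch` the regions along the user's planned roaming path, so subsequent `get` calls are served from local memory.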
The large-scale three-dimensional virtual scene progressive downloading and transmission mechanism: using a layered LOD (level-of-detail) loading mode, the three-dimensional engine can decide, according to the user's viewpoint distance, which models and maps to load. The model levels are divided into three levels, LOD 1, LOD 2, and LOD 3, according to mesh count, and the model maps are likewise divided into 1024 × 1024, 512 × 512, and 256 × 256 sizes. If the user's viewpoint distance exceeds 10 m, the engine calls the LOD 3 model and the 256 × 256 texture; in this way, models with a low mesh count and small materials allow more models to be loaded faster.
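The distance-based choice can be sketched in a few lines. Only the over-10 m rule (LOD 3 with a 256 × 256 texture) comes from the text above; the nearer cut-off is an assumption added for illustration:

```python
def select_lod(viewpoint_distance_m: float):
    """Map viewpoint distance to a (model LOD level, texture size) pair."""
    if viewpoint_distance_m > 10.0:
        return "lod3", 256    # far: coarsest mesh, smallest texture (from text)
    if viewpoint_distance_m > 5.0:
        return "lod2", 512    # assumed intermediate threshold
    return "lod1", 1024       # near: finest mesh, largest texture
```

The engine would call such a selector per object each frame, requesting only the chosen resolution from the server.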
The embodiment of the present invention provides a video playing method. A head-mounted electronic device can display N identifiers on a virtual screen, where the N identifiers are used to indicate M story videos associated with a target object of a first scenic spot, and the first scenic spot is the scenic spot at the current position of the head-mounted electronic device; receive a first input of a user to a target identifier among the N identifiers; and, in response to the first input, play the target story video indicated by the target identifier on the virtual screen; where the target object includes at least one of: a building, natural scenery; and N and M are positive integers. Through this scheme, the head-mounted electronic device can, when located at the first scenic spot, display the N identifiers indicating the M story videos associated with the target object of the first scenic spot, and the user can, by an input on a target identifier among the N identifiers, trigger the head-mounted electronic device to play the target story video indicated by the target identifier on the virtual screen. The target story video can show the scenic spot culture to the user from many aspects, so that by watching it the user can truly immerse in the scenic spot culture and resonate with it.
Optionally, in this embodiment of the present invention, the head-mounted electronic device may display the profile (i.e., the story outline) of each story video while displaying the N identifiers, so that the user can further understand each story video, and thus can better select the story video in which the user is interested.
Optionally, after step 201, the user may trigger the head-mounted electronic device to display a brief description of each story video (which may also be referred to as a brief outline of the story video) by inputting, or, the user may trigger the head-mounted electronic device to play a voice containing the brief description of each story video by talking with the head-mounted electronic device, so that the user can know each story video by the brief description and then select a story video of more interest for viewing.
For example, after the step 201 and before the step 203, the video playing method provided by the embodiment of the present invention may further include the following steps 204 to 205.
Step 204, the head-mounted electronic device receives a third input of the user.
Optionally, the third input may be a click input or a slide input of the user on the first target control or the first target option (e.g., a display story video profile control or option), and the like, which are not limited in the embodiments of the present invention.
For example, reference may be made to the description of the click input and the slide input in the above description of the first input in step 202, and details of the description are not repeated here.
Alternatively, the third input may also be a voice input by the user to the head-mounted electronic device, for example, asking the head-mounted electronic device to introduce a brief summary of each story video.
In step 205, the head-mounted electronic device displays a brief description of each story video in response to the third input, or plays a voice containing the brief description of each story video.
In this embodiment of the present invention, the head-mounted electronic device may display the brief description of a story video within a preset distance of the identifier indicating that story video; other display manners are also possible, and the embodiment of the present invention is not limited.
In the embodiment of the invention, the brief introduction of each story video is displayed, or the voice containing the brief introduction of each story video is played, so that the user can better know the story video and then select the target story video which is interested by the user.
Optionally, after the target story video is played, the head-mounted electronic device may continue to play, for the user, story videos other than the target story video in the M story videos. For example, the head-mounted electronic device may play each story video in turn in the order of the M story videos selected by the user.
Optionally, after the target story video is played, the user may perform an AI conversation with a character in the story video, so as to better understand the sight culture corresponding to the story video.
Optionally, in the embodiment of the present invention, each story video may include at least one story character, may not include other characters (at this time, a preset voice may be played to explain a story for a user), and may also include a character to watch a story and a character to explain a story, which is not limited in the embodiment of the present invention.
Illustratively, the target story video includes: a first character to view a story, a second character to be a story explanation, and at least one story character; referring to fig. 2, as shown in fig. 5, after step 203, the video playing method provided in the embodiment of the present invention may further include steps 206 to 207 described below.
And step 206, the head-mounted electronic device receives a second input of the user after the target story video playing is finished.
The second input is a voice input of the user.
Step 207, the head-mounted electronic device responds to the second input, controls the first character to execute a first preset action corresponding to the voice content according to the voice content corresponding to the second input, and determines a target question corresponding to the voice content; and plays the voice answer corresponding to the target question and controls the target character to execute a second preset action corresponding to the voice answer.
Wherein the target character comprises at least one of: the second character, any one of the at least one story character.
Optionally, the target character may be a preset character, or may be a character corresponding to the second input in the target story video (that is, a character specified by the user through the second input).
It is to be understood that the target character may be the second character, and the target character may also be any one of the at least one story character in the story video, and the embodiment of the present invention is not limited thereto.
Optionally, the first character and the second character may be virtual persons or real persons. In the process of playing the target story video, the user can imagine the first character as himself or herself, and imagine the second character as a tour guide character or an onlooker character telling the story. Each story character may be a real character; for example, if the type of the story video is the historical story type, each story character is a real historical character. Each story character may also be a virtual character; for example, if the type of the story video is the mythology story type, each story character is a virtual mythological character.
In the embodiment of the present invention, the target question may include related content such as character information, historical background, and important event nodes.
In the embodiment of the invention, the head-mounted electronic equipment can analyze the voice information input by the user through a semantic analysis technology to determine the target problem. Any relevant technology can be referred to for the semantic analysis technology, and the embodiment of the invention is not repeated.
In the embodiment of the present invention, the head-mounted electronic device can realize the conversation between the user and the target character through an AI voice intelligent dialogue system. The AI voice intelligent dialogue system comprises a terminal voice system named AIVoice, which includes a core logic engine, a voice processing engine, and a character VR engine. The core logic engine comprises six core modules: recording, voice recognition, semantic processing, function execution, virtual character display, and broadcast. The recording module is responsible for acquiring audio data from various input devices and sending it as output to the voice recognition module; the voice recognition module converts the input recording data into text data and outputs it to the semantic processing module; the semantic processing module converts the input text data into structured data and outputs it to the function execution module and the voice broadcast module behind it; the function execution module is responsible for calling the local function interface, and the result is finally shown to the user in the form of a VR virtual avatar. The voice processing engine comprises Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-To-Speech (TTS); it provides both an abstract interface and concrete implementations, the abstract interface being used to realize the concrete ASR, NLP, and TTS and also being exposed for the core logic engine to call.
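The module chain can be sketched as a single dialog turn, with the stage callables standing in for the concrete ASR/NLP/TTS implementations. All names here are illustrative, not the AIVoice API:

```python
def run_dialog_turn(audio_data, asr, nlp, execute, tts):
    """One turn through the chain: recording -> voice recognition ->
    semantic processing -> function execution -> broadcast/display."""
    text = asr(audio_data)      # recording data -> text data
    intent = nlp(text)          # text data -> structured data
    result = execute(intent)    # call the local function interface
    speech = tts(result)        # result -> audio for the broadcast module
    return result, speech       # result is also shown via the VR avatar

# Example with stub stages:
result, speech = run_dialog_turn(
    b"raw-pcm-audio",
    asr=lambda audio: "play story one",
    nlp=lambda text: {"intent": "play", "story": 1},
    execute=lambda intent: f"playing story {intent['story']}",
    tts=lambda reply: f"<speech:{reply}>",
)
print(result)  # playing story 1
```

Structuring the engine around such an abstract interface is what lets concrete ASR/NLP/TTS providers be swapped without changing the core logic.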
In the embodiment of the present invention, a database can be designed that contains a large number of questions, the voice content corresponding to each question, the preset action corresponding to each piece of voice content, the voice answer corresponding to each question, and the preset action corresponding to each voice answer. According to the collected voice content of the user, the head-mounted electronic device can first look up in the database the first preset action corresponding to the voice content, the target question corresponding to the voice content, the voice answer corresponding to the target question, and the second preset action corresponding to the voice answer; it can then control the first character to execute the first preset action corresponding to the voice content, play the voice answer corresponding to the target question, and control the target character to execute the second preset action corresponding to the voice answer.
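A toy version of such a lookup table follows; the key, action names, and answer text are invented for illustration, and a real system would also do fuzzy semantic matching rather than exact string keys:

```python
# Hypothetical rows of the question/answer database described above.
DIALOG_DB = {
    "when did this event happen": {
        "first_action": "turn_to_character",    # first preset action
        "answer": "It occurred in 19xx.",       # voice answer to play
        "target_action": "nod_while_speaking",  # second preset action
    },
}

def handle_question(voice_text: str):
    """Look up the target question for the user's voice content and return
    (first preset action, voice answer, second preset action)."""
    key = voice_text.strip().lower().rstrip("?")
    entry = DIALOG_DB.get(key)
    if entry is None:
        return None   # a real system would fall back to a default reply
    return entry["first_action"], entry["answer"], entry["target_action"]
```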
Illustratively, as shown in fig. 6, mark "1" indicates the tour guide character, mark "2" indicates the user, mark "3" indicates story character 1, mark "4" indicates story character 2, and mark "5" indicates story character 3. After viewing the target story video, the user can ask story character 1 a question such as "At what historical time did this event occur?", and story character 1 may then answer "It occurred in 19xx."
In the embodiment of the present invention, for a question the user asks the target character, the head-mounted electronic device can control the first character to execute the first preset action corresponding to the question, play the voice answer to the question, and control the target character to execute the second preset action corresponding to the voice answer. Through this interaction with the characters in the story video, the user can better and more comprehensively understand the whole story line, thereby gaining a more interesting travel experience and acquiring more scenic spot cultural knowledge.
Optionally, after the target story video is played, the head-mounted electronic device may recommend information of a second sight spot associated with the story video for the user, so that the user may serially browse related sight spots and related story videos, and thus better understand sight spot culture and culture between sight spots.
Illustratively, in conjunction with fig. 2, as shown in fig. 7, after the step 203, the video playing method provided by the embodiment of the present invention may further include the following steps 208 to 210.
And step 208, after the target story video is played, the head-mounted electronic device displays information of a second sight spot associated with the target story video.
The information of the second sight includes: a name of the second sight, a geographic location of the second sight, and an identification of a story video associated with the second sight. The information of the second attraction may also include other content, and the embodiment of the present invention is not limited.
It can be understood that the user can learn which sight is associated with the target story video from the name of the second sight, learn where the second sight is located from its geographic location, and learn which story videos are associated with the second sight from their identifiers. The user may then determine, based on the information of the second sight, whether he or she is interested in the second sight, wants to visit it, and wants to learn its sight culture.
Step 209, the head-mounted electronic device receives a fourth input from the user.
Optionally, the fourth input may be a click input or a slide input of the user on the second target control or the second target option, and may also be other feasibility inputs, which is not limited in the embodiment of the present invention. For example, reference may be made to the description of the click input and the slide input in the above description of the first input in step 202, and details of the description are not repeated here.
In this embodiment of the present invention, the second target control or the second target option may be a favorite control (or option), a navigation control (or option), or a video playing control (or option), and may also be another control (or option), which is not limited in this embodiment of the present invention.
Step 210, the head-mounted electronic device, in response to the fourth input, performs any one of the following operations: and collecting information of the second scenic spot, displaying navigation path information, and playing a story video associated with the second scenic spot.
Wherein the navigation path information is used to indicate a full path or a partial path from the first attraction to the second attraction.
The full path is the entire path from the first sight to the second sight. The partial path may be a path from the first sight to a first intermediate position, to a second intermediate position, …, or to an nth intermediate position, where the distances from the first sight to the first, second, …, and nth intermediate positions increase in turn.
Specifically, since the database includes the distances between a plurality of positions, when the head-mounted electronic device or the server plans the navigation path, it can calculate the distance of the full path from the first scenic spot to the second scenic spot, and then choose, according to that distance, whether to provide the user with the full path or a partial path; the specific choice may be determined according to the actual situation, and the embodiment of the present invention is not limited. For example, the final navigation path information may be determined according to which preset range the distance of the full path from the first scenic spot to the second scenic spot falls within.
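One way to read this: compare the full-path distance against a preset range and fall back to a partial path. The 2 km cut-off and data shapes below are assumptions for illustration only:

```python
def plan_navigation(full_distance_m, intermediates, full_path_limit_m=2000.0):
    """Return the whole path when its distance is within the assumed limit,
    otherwise only the leg to the first intermediate position (the
    intermediates are ordered by increasing distance from the first sight)."""
    if full_distance_m <= full_path_limit_m:
        return {"kind": "full", "via": intermediates + ["second sight"]}
    return {"kind": "partial", "via": intermediates[:1]}

print(plan_navigation(1500.0, ["gate", "bridge"])["kind"])   # full
print(plan_navigation(5200.0, ["gate", "bridge"])["via"])    # ['gate']
```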
In the embodiment of the present invention, any related technology may be referred to in the method for acquiring navigation path information by a head-mounted electronic device, and details are not described herein.
Optionally, the head-mounted electronic device may acquire and display a plurality of pieces of navigation path information for the user to select, which may improve the user experience. The head-mounted electronic equipment can also acquire and display the shortest navigation path information, and only one piece of navigation path information is relatively simple to display and is convenient for a user to understand.
It will be appreciated that if the user is interested in the second sight spot, but is not currently able to visit the second sight spot, the user may trigger the head-mounted electronic device to collect information about the second sight spot via a fourth input, such that the second sight spot is taken as an alternative on the next trip. If the user is interested in the second sight spot and wants to visit the second sight spot immediately, the user can trigger the head-mounted electronic device to display the navigation path information to the second sight spot through the fourth input. If the user is interested in the second scenic spot, the user can select to watch the story video associated with the second scenic spot first under the condition that the user cannot visit the second scenic spot, so that the scenic spot culture of the second scenic spot can be further known, and the user can have a deeper experience on the target story video by watching the story video associated with the second scenic spot.
In the embodiment of the present invention, the head-mounted electronic device recommends to the user the information of the second scenic spot associated with the target story video, which better provides the user with multiple options, so that the user can choose better and smarter scenic spot information. Moreover, through the playing of story videos, the head-mounted electronic device links the cultures of different scenic spots, guides the user to visit more of them, and brings the user varied travel experiences, enriching the traditional offline tourism mode and combining the advantages of online and offline resources.
Optionally, after the target story video finishes playing, the user may, through the head-mounted electronic device, post a viewing impression of the target story video (as text or voice input), view the impressions (text or voice) left by users who watched it earlier, and record his or her own impressions within the story video.
In the embodiment of the invention, watching story videos through a VR headset provides a more engaging and interactive form of touring, replaces the rigid "check-in" style of sightseeing, and lets the user derive more enjoyment from the trip.
Optionally, before step 201, the head-mounted electronic device may provide a plurality of tour routes for the user, and obtain the location information according to the target route selected by the user, so as to display the identifier of the corresponding story video according to the location information.
For example, before step 201, the video playing method provided by the embodiment of the present invention may further include the following steps 211 to 212.
Step 211, the head-mounted electronic device displays at least one route.
Step 212, in a case where an input of the user selecting a target route from the at least one route is received, the head-mounted electronic device acquires the position information.
The target route comprises a first sight spot.
Each route covers every sight spot of the scenic area, but the order in which the sight spots are arranged differs from route to route.
Illustratively, as shown in fig. 8 (a), the head-mounted electronic device displays a scenic-area browsing map that includes four routes. When the user selects route A, the head-mounted electronic device displays the path information of route A as shown in fig. 8 (b): start point, sight 1, sight 2, sight 3, sight 4, end point.
In the embodiment of the invention, after the user is detected to enter the scenic spot, the head-mounted electronic equipment provides a plurality of routes for the user to select, and after the user selects the target route, the head-mounted electronic equipment acquires the position information of the user in real time and recommends the story video for the user according to the position information and the like, so that the travel experience of the user can be improved.
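Steps 211-212 above can be sketched as follows. This is a minimal illustrative sketch only; the class and method names, and the index-based stand-in for real-time positioning, are assumptions and do not come from the patent:

```python
# Sketch of steps 211-212: display routes, receive the user's route
# selection, then report sight spots along the selected route.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    sights: list  # sight spots in this route's visiting order

class TourGuide:
    def __init__(self, routes):
        self.routes = {r.name: r for r in routes}
        self.target_route = None

    def display_routes(self):                 # step 211: show at least one route
        return list(self.routes)

    def select_route(self, name):             # step 212: input selecting a target route
        self.target_route = self.routes[name]
        return self.target_route.sights

    def sight_at(self, position_index):
        # stand-in for real-time positioning (GPS / indoor beacons)
        return self.target_route.sights[position_index]

routes = [Route("Route A", ["Sight 1", "Sight 2", "Sight 3", "Sight 4"]),
          Route("Route B", ["Sight 4", "Sight 3", "Sight 2", "Sight 1"])]
guide = TourGuide(routes)
path = guide.select_route("Route A")
```

In a real device the `sight_at` lookup would be replaced by actual position acquisition, which is what the patent describes step 212 doing in real time.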
It should be noted that the user may also tour freely, ignoring the series of recommendations made by the head-mounted electronic device in the above embodiments.
The drawings in the embodiments of the present invention each illustrate an independent embodiment by way of example; in a specific implementation, any drawing may also be combined with any other combinable drawing, and the embodiments of the present invention are not limited in this respect. For example, referring to fig. 5, after step 207, the video playing method provided by the embodiment of the present invention may further include steps 208 to 210 described above.
As shown in fig. 9, an embodiment of the present invention provides a head-mounted electronic device 120, where the head-mounted electronic device 120 includes: a display module 121, a receiving module 122 and a playing module 123; the display module 121 is configured to display N identifiers on a virtual screen, where the N identifiers are used to indicate M story videos associated with a target object of a first scenic spot, and the first scenic spot is a scenic spot at a current position of the head-mounted electronic device; the receiving module 122 is configured to receive a first input of a target identifier from the N identifiers displayed by the displaying module 121; the playing module 123, configured to play, on the virtual screen, a target story video indicated by the target identifier in response to the first input received by the receiving module 122; wherein the target object comprises at least one of: buildings, natural scenery; n, M is a positive integer.
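The display (121) → receive (122) → play (123) flow of the modules above can be sketched as follows. This is an illustrative sketch only; the class, method names, and sample video titles are assumptions, not the patent's implementation:

```python
class HeadMountedDevice:
    """Sketch of the display (121) -> receive (122) -> play (123) module flow."""
    def __init__(self, story_videos):
        self.story_videos = story_videos   # {identifier: story video title}

    def display_identifiers(self):         # display module 121: N identifiers
        return sorted(self.story_videos)

    def receive_first_input(self, target_identifier):   # receiving module 122
        return self.play(target_identifier)

    def play(self, target_identifier):     # playing module 123
        return f"now playing: {self.story_videos[target_identifier]}"

device = HeadMountedDevice({"ID1": "Legend of the Old Bridge",
                            "ID2": "The Founding of the Temple"})
shown = device.display_identifiers()
```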
Optionally, the display module 121 is specifically configured to display the N identifiers on the virtual screen when a target condition is met; wherein the target condition comprises at least one of: detecting that the distance between the head-mounted electronic equipment and the target object is smaller than or equal to a preset threshold value; receiving target input of a user for inputting the target object; detecting the target object from the received voice information of the user; based on the gaze tracking technique, it is determined that the user's gaze is focused on the target object.
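The four target conditions above can be sketched as a single predicate. This is an illustrative sketch; the threshold value, parameter names, and the substring check standing in for real voice recognition are assumptions:

```python
PRESET_DISTANCE = 10.0  # metres; the threshold value is an assumption

def should_display_identifiers(target_object, distance=None, typed_text=None,
                               voice_text=None, gaze_target=None):
    """Return True when at least one of the four target conditions holds."""
    if distance is not None and distance <= PRESET_DISTANCE:
        return True                                   # proximity to target object
    if typed_text == target_object:
        return True                                   # explicit target input
    if voice_text and target_object.lower() in voice_text.lower():
        return True                                   # detected in voice information
    if gaze_target == target_object:
        return True                                   # gaze focused on target object
    return False
```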
In the embodiment of the invention, various target conditions are provided, namely, under the condition that the user is positioned in the first scenic spot, various modes of recommending the story video for the user are provided, different requirements of the user can be met, and the user experience is improved.
Optionally, the target story video includes: a first character representing the viewer of the story, a second character serving as the story narrator, and at least one story character. The head-mounted electronic device 120 further includes a processing module 124. The receiving module 122 is further configured to receive a second input from the user after the target story video finishes playing. The processing module 124 is configured to, in response to the second input received by the receiving module 122, control the first character to perform a first preset action corresponding to the voice content of the second input, determine a target question corresponding to that voice content, play a voice answer corresponding to the target question, and control a target character to perform a second preset action corresponding to the voice answer; wherein the target character comprises at least one of: the second character, or any of the at least one story character.
In the embodiment of the invention, for a question the user poses to the target character, the head-mounted electronic device can control the first character to perform the first preset action corresponding to that question, play the voice answer to the question, and control the target character to perform the second preset action corresponding to the voice answer. Through this interaction with the characters in the story video, the user can understand the whole story line better and more comprehensively, obtain a more engaging travel experience, and acquire more cultural knowledge about the sight spot.
Optionally, each of the N identifiers is used to indicate one story video, and the N identifiers are ordered according to a target parameter, where the target parameter of the corresponding story video includes at least one of: a story occurrence-time parameter, a play-count parameter, and a type parameter. Alternatively, any P identifiers among the N identifiers are used to indicate one story video, each of the P identifiers corresponding to a keyword in that story video, where P is a positive integer less than or equal to N.
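Ordering identifiers by a target parameter can be sketched as below. This is illustrative only; the metadata field names and sample values are assumptions, not from the patent:

```python
videos = [  # hypothetical metadata for the M story videos
    {"id": "V1", "year": 1420, "plays": 1520, "type": "legend"},
    {"id": "V2", "year": 1368, "plays": 980,  "type": "history"},
    {"id": "V3", "year": 1644, "plays": 430,  "type": "folk tale"},
]

# occurrence-time parameter: earliest story first
by_story_time = sorted(videos, key=lambda v: v["year"])
# play-count parameter: most-played first
by_play_count = sorted(videos, key=lambda v: v["plays"], reverse=True)

time_order = [v["id"] for v in by_story_time]
play_order = [v["id"] for v in by_play_count]
```

The same pattern extends to the type parameter, or to a composite key combining several parameters.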
In the embodiment of the invention, ordering the N identifiers by the target parameter gives the user a preliminary understanding of each story video and makes it easier to select the one worth exploring further; using P keywords to indicate a single story video serves the same purpose. Providing multiple ways for identifiers to indicate story videos can meet different user needs and improves the human-computer interaction.
Optionally, the playing module 123 is specifically configured to, when the target story video includes at least two story videos, sequentially play the at least two story videos on the virtual screen according to a preset sequence.
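Sequential playback of several selected videos can be sketched as below. This is illustrative only: the ordering rule (ascending story time) and the log-string stand-in for real playback are assumptions:

```python
def play_in_sequence(selected_videos):
    """Play each selected story video one after another, in a preset order
    (here assumed to be ascending story occurrence time)."""
    queue = sorted(selected_videos, key=lambda v: v["year"])
    return [f"playing {v['id']}" for v in queue]   # stand-in for actual playback

chosen = [{"id": "V3", "year": 1644}, {"id": "V1", "year": 1420}]
playback_log = play_in_sequence(chosen)
```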
In the embodiment of the invention, when the user selects a plurality of story videos at one time, the head-mounted electronic equipment plays the story videos for the user in sequence according to the preset sequence, so that the user operation can be simplified, the user time can be saved, and the man-machine interaction performance can be improved.
Optionally, the receiving module 122 is further configured to receive a third input from the user after the display module 121 displays the N identifiers and before receiving the first input of the user to the target identifier among the N identifiers; the display module 121 is further configured to display a brief description of each story video in response to the third input received by the receiving module 122; alternatively, the playing module 123 is further configured to play a voice containing the brief description of each story video in response to the third input received by the receiving module 122.
In the embodiment of the invention, the brief introduction of each story video is displayed, or the voice containing the brief introduction of each story video is played, so that the user can better know the story video and then select the target story video which is interested by the user.
Optionally, the head-mounted electronic device 120 further includes: an execution module 125; the display module 121 is further configured to display information of a second sight spot associated with the target story video after the target story video is played, where the information of the second sight spot includes: a name of the second sight spot, a geographic location of the second sight spot, and an identification of a story video associated with the second sight spot; the receiving module 122 is further configured to receive a fourth input from the user; the executing module 125, configured to, in response to the fourth input received by the receiving module 122, perform any one of the following operations: collecting information of a second scenic spot, displaying navigation path information, and playing a story video associated with the second scenic spot; wherein the navigation path information is used to indicate a full path or a partial path from the first attraction to the second attraction.
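The three alternative operations triggered by the fourth input can be sketched as a small dispatcher. This is illustrative only; the action names, the dictionary fields, and the string results are assumptions, not from the patent:

```python
def handle_fourth_input(action, second_sight, favorites):
    """Dispatch the three optional responses to the user's fourth input."""
    if action == "collect":
        favorites.append(second_sight["name"])   # save as a candidate for a later trip
        return f"collected {second_sight['name']}"
    if action == "navigate":                     # full or partial path to the sight
        return f"route: {second_sight['path_from_first_sight']}"
    if action == "play":                         # associated story video
        return f"playing {second_sight['video']}"
    raise ValueError(f"unknown action: {action}")

sight = {"name": "East Pavilion",
         "path_from_first_sight": "Gate -> Bridge -> East Pavilion",
         "video": "The Pavilion Legend"}
favorites = []
```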
In the embodiment of the invention, the head-mounted electronic device recommends to the user the information of the second sight spot associated with the target story video, which offers the user multiple options and supports a better, more intelligent choice of sight-spot information. Moreover, by playing story videos the device links the cultures of different sight spots, guiding the user to visit more of them. This brings the user a different travel experience, enriches the traditional offline tourism model, and combines the advantages of online and offline resources.
It should be noted that, as shown in fig. 9, modules that are necessarily included in the head-mounted electronic device 120 are indicated by solid line boxes, such as the display module 121, the receiving module 122, and the playing module 123; modules that may or may not be included in the head-mounted electronic device 120 are illustrated with dashed boxes, such as processing module 124 and execution module 125.
The head-mounted electronic device provided in the embodiment of the present invention can implement each process shown in any one of fig. 2 to fig. 8 in the above method embodiments, and details are not repeated here to avoid repetition.
The embodiment of the invention provides a head-mounted electronic device, which can display N identifiers on a virtual screen, wherein the N identifiers are used for indicating M story videos associated with a target object of a first sight spot, and the first sight spot is the sight spot at the current position of the head-mounted electronic device; receive a first input of a user to a target identifier among the N identifiers; and, in response to the first input, play the target story video indicated by the target identifier on the virtual screen; wherein the target object comprises at least one of: a building, natural scenery; and N and M are positive integers. Through this scheme, when the head-mounted electronic device is located at the first sight spot, it can display the N identifiers indicating the M story videos associated with the target object of the first sight spot, and by an input on a target identifier among the N identifiers the user can trigger the device to play the indicated target story video on the virtual screen. The target story video can present the sight spot's culture to the user from many aspects, so that by watching it the user can truly immerse in that culture and resonate with it.
Fig. 10 is a hardware structure diagram of a head-mounted electronic device implementing various embodiments of the present application. As shown in fig. 10, the head-mounted electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the configuration shown in fig. 10 does not constitute a limitation of the head-mounted electronic device, which may include more or fewer components than those shown, combine some components, or arrange the components differently. In the embodiment of the present invention, the head-mounted electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted device, a wearable device, a pedometer, and the like.
The display unit 106 is configured to display N identifiers on a virtual screen, where the N identifiers are used to indicate M story videos associated with a target object of a first sight spot, and the first sight spot is a sight spot at a current location of the head-mounted electronic device; a user input unit 107 for receiving a first input of a target identifier of the N identifiers by a user; a processor 110, configured to play a target story video indicated by the target identifier on the virtual screen in response to the first input; wherein the target object comprises at least one of: buildings, natural scenery; n, M is a positive integer.
According to the head-mounted electronic device provided by the embodiment of the invention, the device can display N identifiers on a virtual screen, the N identifiers being used to indicate M story videos associated with a target object of a first sight spot, the first sight spot being the sight spot at the current position of the head-mounted electronic device; receive a first input of a user to a target identifier among the N identifiers; and, in response to the first input, play the target story video indicated by the target identifier on the virtual screen; wherein the target object comprises at least one of: a building, natural scenery; and N and M are positive integers. Through this scheme, when the head-mounted electronic device is located at the first sight spot, it can display the N identifiers indicating the M story videos associated with the target object of the first sight spot, and by an input on a target identifier among the N identifiers the user can trigger the device to play the indicated target story video on the virtual screen. The target story video can present the sight spot's culture to the user from many aspects, so that by watching it the user can truly immerse in that culture and resonate with it.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The head-mounted electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in emailing, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the head-mounted electronic apparatus 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or other storage medium), or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101.
The head-mounted electronic device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the head-mounted electronic device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the head-mounted electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer and tapping); the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the head-mounted electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 1071 with a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, picks up the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick; these are not described in detail here.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 10, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the head-mounted electronic device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the head-mounted electronic device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the head-mounted electronic apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the head mounted electronic apparatus 100 or may be used to transmit data between the head mounted electronic apparatus 100 and an external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the head-mounted electronic device, connects various parts of the whole head-mounted electronic device by using various interfaces and lines, and performs various functions of the head-mounted electronic device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the head-mounted electronic device. Processor 110 may include one or more processing units; alternatively, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The head-mounted electronic device 100 may further include a power supply 111 (such as a battery) for supplying power to various components, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the head-mounted electronic device 100 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides a head-mounted electronic device, which may include the processor 110 shown in fig. 10, the memory 109, and a computer program stored in the memory 109 and capable of being executed on the processor 110, where when the computer program is executed by the processor 110, each process of the video playing method shown in any one of fig. 2 to fig. 8 in the foregoing method embodiments is implemented, and the same technical effect can be achieved, and details are not described here to avoid repetition.
Optionally, in an embodiment of the present invention, the head-mounted electronic device in the above-described embodiment may be a VR head-mounted device (or an AR head-mounted device or an MR head-mounted device, etc.). Specifically, when the head-mounted electronic device in the above-described embodiment (for example, the head-mounted electronic device shown in fig. 10) is a VR head-mounted device (or an AR head-mounted device or an MR head-mounted device, etc.), the VR head-mounted device (or the AR head-mounted device or the MR head-mounted device, etc.) may include all or part of the functional modules in the above-described head-mounted electronic device. Of course, the VR headset (or AR headset or MR headset, etc.) may further include functional modules that are not included in the above-mentioned electronic device.
It is to be understood that, in the embodiment of the present invention, when the head-mounted electronic device in the above-described embodiment is a VR head-mounted device (or an AR head-mounted device or an MR head-mounted device, etc.), the head-mounted electronic device may be an electronic device integrated with VR technology (or AR technology or MR technology).
VR technology integrates computer graphics, computer simulation, sensor, display, and other technologies, and creates a virtual information environment in a multi-dimensional information space that gives the user a sense of immersion, provides full interaction with the environment, and helps inspire the imagination. It may be said that immersion, interaction, and imagination are the three basic characteristics of a VR environment system. AR technology applies virtual information to the real world through computer technology, so that the real environment and virtual objects coexist, superimposed in real time, in the same picture or space. By augmenting human visual perception, AR technology lets people experience the combination of a real scene and a virtual scene and thereby gain a stronger sense of being on the spot. MR technology creates a new visualization environment by merging the real and virtual worlds, in which virtual objects and real objects are difficult to distinguish; physical and digital objects coexist in this environment and interact in real time.
Because VR presents a purely virtual scene, VR headsets are mostly used for interaction between the user and that virtual scene. The principle of a VR headset is to magnify the image produced by a small two-dimensional display through an optical system. Specifically, light emitted by the small display passes through a convex lens, refracting the image so that it appears to come from farther away. Meanwhile, the left-eye and right-eye screens display the left-eye and right-eye images through the respective lenses; once the eyes acquire this parallax information, the brain fuses it into a stereoscopic image. Put another way, the VR headset builds a virtual-reality visual field in the user's visual system from differing viewing angles and depth cues within a confined space. The primary determinant of this virtual-reality field of view is the lens, not the user's pupil; therefore, to obtain a wider field of view, one must either shorten the distance between the user's eyeball and the lens or increase the size of the lens.
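The geometric relationship above — a wider field of view from a shorter eye-lens distance or a larger lens — can be illustrated with a simple thin-lens approximation. The formula and sample dimensions are illustrative assumptions, not figures from the patent:

```python
import math

def field_of_view_deg(lens_diameter_mm, eye_to_lens_mm):
    """Approximate angular field of view for a simple magnifier-style HMD:
    twice the half-angle subtended by the lens edge at the eye."""
    half_angle = math.atan((lens_diameter_mm / 2) / eye_to_lens_mm)
    return 2 * math.degrees(half_angle)

baseline    = field_of_view_deg(40, 50)  # 40 mm lens, 50 mm eye-lens distance
closer_eye  = field_of_view_deg(40, 30)  # shorter eye-lens distance -> wider view
bigger_lens = field_of_view_deg(60, 50)  # larger lens -> wider view
```

Both modifications enlarge the angle the lens subtends at the eye, which is why HMD designs trade off eye relief against lens size.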
Because AR and MR combine real and virtual scenes, AR and MR head-mounted devices generally require a camera: virtual content is presented and interacted with on top of the pictures the camera captures. Taking AR glasses as an example of an AR device, the scene the wearer views is generated through AR processing; that is, a virtual scene can be superimposed on and displayed within the real scene. When the user operates on the content displayed by the AR glasses, the glasses appear to peel back the real scene, revealing a fuller view to the user. For example, looking at a carton with the naked eye, a user can observe only its outer case, but wearing AR glasses the user can directly observe the carton's internal structure.
In the embodiment of the present invention, when a head-mounted electronic device (a VR, AR, or MR head-mounted device, etc.) is located at a first sight spot, it may display the N identifiers indicating the M story videos associated with the target object of the first sight spot, and by an input on a target identifier among the N identifiers the user may trigger the device to play the indicated target story video on the virtual screen. The target story video can present the sight spot's culture to the user from many aspects, so that by watching it the user can truly immerse in that culture and resonate with it.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the video playing method shown in any one of fig. 2 to 8 in the foregoing method embodiments, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and certainly may also be implemented by hardware, though in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for causing a head-mounted electronic device to execute the methods according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (14)

1. A video playing method is applied to a head-mounted electronic device, and is characterized by comprising the following steps:
displaying N identifiers on a virtual screen, wherein the N identifiers are used for indicating M story videos associated with a target object of a first sight spot, and the first sight spot is a sight spot at the current position of the head-mounted electronic device;
receiving a first input of a user to a target identifier in the N identifiers;
in response to the first input, playing a target story video indicated by the target identifier on the virtual screen; the target story video comprises: a first character for viewing the story, a second character for explaining the story, and at least one story character; the method further comprises the following steps:
receiving a second input of the user after the playing of the target story video is finished;
in response to the second input, controlling the first character to execute a first preset action corresponding to the voice content of the second input, and determining a target question corresponding to the voice content;
playing a voice answer corresponding to the target question, and controlling a target character to execute a second preset action corresponding to the voice answer;
wherein the target character comprises at least one of: the second character and any of the at least one story character; the target object comprises at least one of: a building and natural scenery; and N and M are positive integers.
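The post-playback voice interaction of claim 1 can be sketched as follows: the voice content of the second input triggers a preset action of the first character, is matched to a target question, and the corresponding voice answer plays while the target character performs its own preset action. The question bank, matching rule, and action names below are invented for illustration:

```python
# Hypothetical question bank: normalized question -> (voice answer, target
# character's second preset action).
QA_BANK = {
    "who built this tower": ("It was built in the Ming dynasty.", "point_at_tower"),
    "what is the legend here": ("A phoenix is said to have landed here.", "spread_arms"),
}

def handle_second_input(voice_content):
    # First preset action corresponding to the voice content, e.g. the
    # first character turns toward the speaker and listens.
    first_character_action = "listen_gesture"

    # Determine the target question (here by normalized exact match; a real
    # device would use speech recognition and semantic matching).
    key = voice_content.lower().rstrip("?").strip()
    match = QA_BANK.get(key)
    if match is None:
        return first_character_action, None, None
    voice_answer, target_character_action = match
    return first_character_action, voice_answer, target_character_action
```

For example, `handle_second_input("Who built this tower?")` yields the listening gesture, the matched voice answer, and the target character's pointing action.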
2. The method of claim 1, wherein displaying N identifiers on the virtual screen comprises:
displaying the N identifiers on the virtual screen under the condition that a target condition is met;
wherein the target condition comprises at least one of:
detecting that the distance between the head-mounted electronic device and the target object is smaller than or equal to a preset threshold;
receiving a target input in which the user enters the target object;
detecting the target object from the received voice information of the user;
determining that the user's gaze is focused on the target object based on gaze tracking techniques.
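The target-condition check of claim 2 is disjunctive: the N identifiers are displayed as soon as any one of the four conditions holds. A minimal sketch, in which the state keys and the 50 m threshold are hypothetical:

```python
PRESET_THRESHOLD_M = 50.0  # hypothetical preset distance threshold

def target_condition_met(state):
    """Return True if any of claim 2's four target conditions holds."""
    return any([
        # Distance to the target object at or below the preset threshold.
        state.get("distance_to_target_m", float("inf")) <= PRESET_THRESHOLD_M,
        # A target input in which the user entered the target object.
        state.get("target_object_entered", False),
        # The target object detected in the user's voice information.
        state.get("target_in_voice", False),
        # Gaze tracking determined the user's gaze is on the target object.
        state.get("gaze_on_target", False),
    ])
```

Because the conditions are alternatives, a device could evaluate them lazily in any order; `any` short-circuits once one holds.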
3. The method of claim 1, wherein each of the N identifiers indicates one story video, and the N identifiers are ordered by target parameters, the target parameters comprising at least one of the following parameters of the corresponding story video: an occurrence time parameter, a play count parameter, and a type parameter;
or any P identifiers of the N identifiers indicate one story video, each of the P identifiers corresponds to a keyword in that story video, and P is a positive integer smaller than or equal to N.
4. The method of claim 3, wherein in the case that the target story video includes at least two story videos, playing the target story video indicated by the target identifier on the virtual screen comprises:
and sequentially playing the at least two story videos according to a preset sequence on the virtual screen.
5. The method of claim 1, wherein after displaying the N identifiers and before receiving a first user input of a target identifier of the N identifiers, the method further comprises:
receiving a third input of the user;
in response to the third input, displaying a brief description of each story video, or playing a voice containing the brief description of each story video.
6. The method according to any one of claims 1 to 5, further comprising:
after the target story video is played, displaying information of a second sight spot associated with the target story video, wherein the information of the second sight spot comprises: a name of the second sight spot, a geographic location of the second sight spot, and an identification of a story video associated with the second sight spot;
receiving a fourth input from the user;
in response to the fourth input, performing any one of the following operations: collecting the information of the second sight spot, displaying navigation path information, and playing a story video associated with the second sight spot;
wherein the navigation path information is used to indicate a full path or a partial path from the first sight spot to the second sight spot.
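Claim 6's follow-up can be sketched as: after the target story video ends, information of an associated second sight spot is shown, and the fourth input selects one of three operations. The sight-spot record and return values below are illustrative assumptions only:

```python
# Hypothetical record for the second sight spot associated with the
# target story video (claim 6: name, geographic location, and the
# identifier of its own story video).
SECOND_SIGHT_SPOT = {
    "name": "East Pavilion",
    "location": (31.23, 121.47),
    "story_video_id": "s7",
}

def handle_fourth_input(choice, favorites):
    """Perform one of claim 6's three operations in response to the fourth input."""
    if choice == "collect":
        # Collect (bookmark) the second sight spot's information.
        favorites.append(SECOND_SIGHT_SPOT["name"])
        return favorites
    if choice == "navigate":
        # Navigation path info: a full or partial path from the first
        # sight spot to the second sight spot.
        return ("path_to", SECOND_SIGHT_SPOT["location"])
    if choice == "play":
        # Play the story video associated with the second sight spot.
        return SECOND_SIGHT_SPOT["story_video_id"]
```

The three branches are mutually exclusive, matching the claim's "performing any one of" wording.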
7. A head-mounted electronic device, comprising: the device comprises a display module, a receiving module, a processing module and a playing module;
the display module is used for displaying N identifiers on a virtual screen, wherein the N identifiers are used for indicating M story videos associated with a target object of a first sight spot, and the first sight spot is a sight spot at the current position of the head-mounted electronic device;
the receiving module is used for receiving a first input of a target identifier in the N identifiers displayed by the display module by a user;
the playing module is used for responding to the first input received by the receiving module, and playing the target story video indicated by the target identifier on the virtual screen; the target story video comprises: a first character for viewing the story, a second character for explaining the story, and at least one story character;
the receiving module is further used for receiving a second input of the user after the target story video is played;
the processing module is used for responding to the second input received by the receiving module, controlling the first character to execute a first preset action corresponding to the voice content of the second input, and determining a target question corresponding to the voice content; and playing a voice answer corresponding to the target question, and controlling a target character to execute a second preset action corresponding to the voice answer;
wherein the target character comprises at least one of: the second character and any of the at least one story character; the target object comprises at least one of: a building and natural scenery; and N and M are positive integers.
8. The head-mounted electronic device according to claim 7, wherein the display module is specifically configured to display the N identifiers on the virtual screen if a target condition is met;
wherein the target condition comprises at least one of:
detecting that the distance between the head-mounted electronic device and the target object is smaller than or equal to a preset threshold;
receiving a target input of a user for inputting the target object;
detecting the target object from the received voice information of the user;
determining that the user's gaze is focused on the target object based on gaze tracking techniques.
9. The head-mounted electronic device of claim 7, wherein each of the N identifiers indicates one story video, and the N identifiers are ordered by target parameters, the target parameters comprising at least one of the following parameters of the corresponding story video: an occurrence time parameter, a play count parameter, and a type parameter;
or any P identifiers of the N identifiers indicate one story video, each of the P identifiers corresponds to a keyword in that story video, and P is a positive integer smaller than or equal to N.
10. The head-mounted electronic device according to claim 9, wherein the playing module is specifically configured to, in a case where the target story video includes at least two story videos, sequentially play the at least two story videos in a preset order on the virtual screen.
11. The head-mounted electronic device according to claim 7, wherein the receiving module is further configured to receive a third input from the user after the displaying module displays the N identifiers and before the receiving the first input from the user to the target identifier of the N identifiers;
the display module is further used for responding to the third input received by the receiving module and displaying the brief description of each story video; or, the playing module is further configured to play a voice containing the profile of each story video in response to the third input received by the receiving module.
12. The head-mounted electronic device according to any one of claims 7 to 11, further comprising: an execution module;
the display module is further configured to display information of a second sight spot associated with the target story video after the target story video is played, where the information of the second sight spot includes: a name of the second sight spot, a geographic location of the second sight spot, and an identification of a story video associated with the second sight spot;
the receiving module is further used for receiving a fourth input of the user;
the execution module is configured to, in response to the fourth input received by the receiving module, perform any one of the following operations: collecting the information of the second sight spot, displaying navigation path information, and playing a story video associated with the second sight spot;
wherein the navigation path information is used to indicate a full path or a partial path from the first sight spot to the second sight spot.
13. A head-mounted electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the video playing method according to any one of claims 1 to 6.
14. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of the video playing method according to any one of claims 1 to 6.
CN201911418185.7A 2019-12-31 2019-12-31 Video playing method and head-mounted electronic equipment Active CN111131904B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911418185.7A CN111131904B (en) 2019-12-31 2019-12-31 Video playing method and head-mounted electronic equipment


Publications (2)

Publication Number Publication Date
CN111131904A CN111131904A (en) 2020-05-08
CN111131904B true CN111131904B (en) 2022-03-22

Family

ID=70506860

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911418185.7A Active CN111131904B (en) 2019-12-31 2019-12-31 Video playing method and head-mounted electronic equipment

Country Status (1)

Country Link
CN (1) CN111131904B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111541889B (en) * 2020-07-10 2020-10-20 南京新研协同定位导航研究院有限公司 Method for using sight line triggering content by MR glasses
CN113490000B (en) * 2020-09-11 2023-04-14 青岛海信电子产业控股股份有限公司 Intelligent device and control method thereof
CN113534959A (en) * 2021-07-27 2021-10-22 咪咕音乐有限公司 Screen display method, screen display device, virtual reality equipment and program product
CN114237390A (en) * 2021-12-07 2022-03-25 福建神旅科技有限公司 Multi-scene-area AR interaction method, device, equipment and storage medium based on script killer
CN114422843B (en) * 2022-03-10 2024-03-26 北京达佳互联信息技术有限公司 video color egg playing method and device, electronic equipment and medium
CN115175004B (en) * 2022-07-04 2023-12-08 闪耀现实(无锡)科技有限公司 Method and device for video playing, wearable device and electronic device
CN115538346B (en) * 2022-10-31 2024-04-26 武汉理工大学 Sound barrier with noise-photovoltaic combined power generation function

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103777901A (en) * 2013-12-27 2014-05-07 一派视觉(北京)数字科技有限公司 History reappearing method and system used for scenic spot visiting
CN105718547A (en) * 2016-01-18 2016-06-29 传成文化传媒(上海)有限公司 Tour guide method and system based on scenic spot label
CN107403395A (en) * 2017-07-03 2017-11-28 深圳前海弘稼科技有限公司 Intelligent tour method and intelligent tour device
CN107613457A (en) * 2017-09-01 2018-01-19 深圳市盛路物联通讯技术有限公司 Routing information handles method and apparatus
CN207181824U (en) * 2017-09-14 2018-04-03 呼伦贝尔市瑞通网络信息咨询服务有限公司 Explain AR equipment in scenic spot
CN109218982A (en) * 2018-07-23 2019-01-15 Oppo广东移动通信有限公司 Sight spot information acquisition methods, device, mobile terminal and storage medium
CN110211222A (en) * 2019-05-07 2019-09-06 谷东科技有限公司 A kind of AR immersion tourism guide method, device, storage medium and terminal device
CN110531849A (en) * 2019-08-16 2019-12-03 广州创梦空间人工智能科技有限公司 A kind of intelligent tutoring system of the augmented reality based on 5G communication

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IT201700058961A1 (en) * 2017-05-30 2018-11-30 Artglass S R L METHOD AND SYSTEM OF FRUITION OF AN EDITORIAL CONTENT IN A PREFERABLY CULTURAL, ARTISTIC OR LANDSCAPE OR NATURALISTIC OR EXHIBITION OR EXHIBITION SITE
CN110488975B (en) * 2019-08-19 2021-04-13 深圳市仝智科技有限公司 Data processing method based on artificial intelligence and related device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Key Technology Research of a Smart Cultural Tourism System Based on MAR; Ma Jun; China Master's Theses Full-text Database, Information Science and Technology; 20170615; I138-874 *

Also Published As

Publication number Publication date
CN111131904A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN111131904B (en) Video playing method and head-mounted electronic equipment
US11727625B2 (en) Content positioning in extended reality systems
US11386629B2 (en) Cross reality system
EP3666352B1 (en) Method and device for augmented and virtual reality
TWI581178B (en) User controlled real object disappearance in a mixed reality display
US8275834B2 (en) Multi-modal, geo-tempo communications systems
CN114616534A (en) Cross reality system with wireless fingerprint
JP6462059B1 (en) Information processing method, information processing program, information processing system, and information processing apparatus
CN115398314A (en) Cross reality system for map processing using multi-resolution frame descriptors
KR20150126938A (en) System and method for augmented and virtual reality
US20230206912A1 (en) Digital assistant control of applications
US20200294265A1 (en) Information processing apparatus, method for processing information, and computer program
CN111353299B (en) Dialog scene determining method based on artificial intelligence and related device
US20200234477A1 (en) Conversion of 2d diagrams to 3d rich immersive content
CN115176285A (en) Cross reality system with buffering for positioning accuracy
CN112911356B (en) Virtual reality VR video playing method and related equipment
US20240095877A1 (en) System and method for providing spatiotemporal visual guidance within 360-degree video
US20240020920A1 (en) Incremental scanning for custom landmarkers
KR20230101177A (en) History story cartoon providing system using metaverse
CN117631904A (en) Information interaction method, device, electronic equipment and storage medium
CN117994284A (en) Collision detection method, collision detection device, electronic equipment and storage medium
CN117742555A (en) Control interaction method, device, equipment and medium
CN118105689A (en) Game processing method and device based on virtual reality, electronic equipment and storage medium
CN116206090A (en) Shooting method, device, equipment and medium based on virtual reality space

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant