CN116339559A - Scene display method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN116339559A
Authority
CN
China
Prior art keywords
scene, current, text information, music, information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310217518.XA
Other languages
Chinese (zh)
Inventor
张栋
吕峰
郭元
李飞
陈尔展
戴宏昌
姜皓
李晓妍
李昊阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Netease Cloud Music Technology Co Ltd
Original Assignee
Hangzhou Netease Cloud Music Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Hangzhou Netease Cloud Music Technology Co Ltd filed Critical Hangzhou Netease Cloud Music Technology Co Ltd
Priority to CN202310217518.XA
Publication of CN116339559A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/205 Parsing
    • G06F40/216 Parsing using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/284 Lexical analysis, e.g. tokenisation or collocates
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present disclosure relate to a scene display method, a scene display device, a storage medium, and an electronic device, in the technical field of virtual scene display. The scene display method includes: in response to a scene trigger operation in a music application, displaying a virtual atmosphere scene on an interface of the music application; and displaying, in the virtual atmosphere scene, current text information matched with the theme and/or current music type of the virtual atmosphere scene, to obtain a target scene. The present disclosure improves the user experience.

Description

Scene display method and device, storage medium and electronic equipment
Technical Field
Embodiments of the present disclosure relate to the field of virtual scene display technologies, and more particularly, to a scene display method, a scene display device, a computer-readable storage medium, and an electronic device.
Background
This section is intended to provide a background or context for the embodiments of the disclosure recited in the claims; the description herein is not admitted to be prior art merely by inclusion in this section.
In the related art, some platforms (applications, software) provide scene functions that present a particular scene, such as a rain-streaked window or a fireplace bonfire, by combining audio with video imagery.
Disclosure of Invention
However, the scenes displayed by existing scene functions are fixed and bear no relation to the audio being played, so users are prone to visual fatigue and the content cannot interact with the user.
For this reason, an improved scene display method is needed, one that strengthens interaction with the user and reduces visual fatigue.
In this context, embodiments of the present disclosure desirably provide a scene showing method, a scene showing apparatus, a computer-readable storage medium, and an electronic device.
According to a first aspect of the present disclosure, there is provided a scene showing method, including:
in response to a scene trigger operation in a music application, displaying a virtual atmosphere scene on an interface of the music application; and
displaying, in the virtual atmosphere scene, current text information matched with the theme and/or current music type of the virtual atmosphere scene, to obtain a target scene; the current music type is the type of the music currently playing under the theme of the virtual atmosphere scene.
In one embodiment, the displaying, in the virtual atmosphere scene, the current text information matched with the theme and/or music type of the virtual atmosphere scene includes: based on a preset corresponding relation between the theme and/or music type of the virtual atmosphere scene and the text information, determining current text information matched with the theme and/or current music type of the current virtual atmosphere scene according to the theme and/or current music type of the current virtual atmosphere scene; and displaying the current text information in the virtual atmosphere scene.
In one embodiment, the correspondence between the theme and/or music type of the virtual atmosphere scene and the text information is determined by the following procedure: constructing a first feature library according to the theme of the virtual atmosphere scene and/or keywords of the music type; constructing a second feature library according to the keywords of the text information; establishing a mapping relation between the first feature library and the second feature library according to the similarity between features in the first feature library and the second feature library; and determining the mapping relation as the corresponding relation.
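The two keyword feature libraries and the similarity-based mapping described in this embodiment can be sketched as follows. This is a minimal illustration only: Jaccard similarity over keyword sets, the 0.2 threshold, and all sample themes and text items are assumptions, since the disclosure does not fix a particular similarity measure.

```python
# Sketch of the keyword-library mapping step. Jaccard similarity and the
# threshold value are assumptions; the disclosure leaves the measure open.

def jaccard(a: set, b: set) -> float:
    """Similarity of two keyword sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def build_correspondence(first_library: dict, second_library: dict,
                         threshold: float = 0.2) -> dict:
    """Map each theme/music-type entry to the text items with similar keywords."""
    mapping = {}
    for theme, theme_keywords in first_library.items():
        mapping[theme] = [text for text, text_keywords in second_library.items()
                          if jaccard(theme_keywords, text_keywords) >= threshold]
    return mapping

# First feature library: keywords of scene themes / music types.
first = {"rainy night": {"rain", "night", "calm"}}
# Second feature library: keywords of candidate text information.
second = {"review A": {"rain", "calm", "piano"}, "news B": {"tour", "pop"}}
correspondence = build_correspondence(first, second)
```

Querying `correspondence` with the current theme then yields the text information to display; a learned similarity model could replace `jaccard` without changing the surrounding flow.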
In one embodiment, the displaying the current text information in the virtual atmosphere scene includes: determining a current front-end style component matched with the current text information according to the current text information based on a corresponding relation between preset text information and the front-end style component; and in the virtual atmosphere scene, displaying the current text information on a current front-end style component matched with the current text information.
In one embodiment, the correspondence between the text information and the front-end style component is determined by: constructing a third feature library according to the category of the text information; determining the front-end style component with the category matched with the characteristics in the third characteristic library according to the characteristics in the third characteristic library; establishing a mapping relation between the third feature library and the front-end style component according to the text information and the category of the front-end style component; and determining the mapping relation between the third feature library and the front-end style component as the corresponding relation between the text information and the front-end style component.
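The resulting correspondence between text-information categories and front-end style components might reduce to a lookup like the sketch below; the category and component names are purely hypothetical, as the disclosure names no concrete components.

```python
# Hypothetical category-to-component table; all names are illustrative only.
STYLE_COMPONENTS = {
    "lyrics": "scrolling_banner",
    "review": "bubble_card",
    "news": "ticker_bar",
}

def component_for(text_category: str, default: str = "plain_card") -> str:
    """Return the front-end style component matched to a text category."""
    return STYLE_COMPONENTS.get(text_category, default)

component = component_for("review")
```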
In one embodiment, the method further comprises: and displaying scene information related to the personalized information of the current login user in the virtual atmosphere scene.
In one embodiment, the personalized information of the current login user includes a music preference tag of the current login user, the scene information includes text information, and the displaying the scene information related to the personalized information of the current login user includes: based on a preset corresponding relation between the music preference tag and the text information, determining current text information matched with the music preference tag of the current login user according to the music preference tag of the current login user; displaying the current text information matched with the music preference tag of the current login user; the current text information includes at least one of singer information, song information, and popular-comment information.
In one embodiment, the correspondence between the music preference tags and the text information is determined by: determining the music preference of the current login user according to the historical play information of the current login user; constructing a music preference tag library based on the music preference of the current login user; acquiring a label of text information matched with the music preference label according to the music preference label to obtain a text information label library; determining a mapping relation between the music preference tag and the text information according to the similarity of the tags in the music preference tag library and the text information tag library; and determining the mapping relation between the music preference label and the text information as the corresponding relation between the music preference label and the text information.
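The first step of this procedure, building a music-preference tag library from the user's play history, might look like the following sketch; counting plays per genre and keeping the top N is an assumption, since the disclosure does not state how preferences are derived from history.

```python
# Sketch: derive preference tags by counting genre plays; top-N cutoff assumed.
from collections import Counter

def build_preference_tags(play_history, top_n=3):
    """Return the user's most-played genres as a preference tag library."""
    counts = Counter(play_history)
    return [genre for genre, _ in counts.most_common(top_n)]

history = ["folk", "folk", "rock", "folk", "classical", "rock"]
tags = build_preference_tags(history)
```

The resulting tag library would then be matched against the text-information tag library by similarity, as described above.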
In one embodiment, the method further comprises: and in the virtual atmosphere scene, displaying scene information in the current environment where the current login user is located.
In one embodiment, the current environment where the current login user is located includes a current time, the scene information includes an environmental special effect and holiday information, and the displaying the scene information in the current environment where the current login user is located includes: determining a corresponding environment special effect according to the time period of the current time, and displaying the environment special effect; the environmental effects include at least one of day effects, night effects; or determining the current date according to the current time, and displaying festival information corresponding to the current date.
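The two branches above, a time-period environmental effect and a date-based holiday lookup, can be sketched as below. The 6:00–18:00 day/night boundary and the holiday table are assumptions; the disclosure only states that the effects include day and night variants.

```python
# Sketch of selecting scene information from the current time; the day/night
# boundary hours and the holiday table are assumptions.
import datetime

def effect_for_time(now):
    """Pick the day or night special effect from the hour of day."""
    return "day_effect" if 6 <= now.hour < 18 else "night_effect"

def holiday_for_date(now, holidays):
    """Look up festival information for the current date, if any."""
    return holidays.get((now.month, now.day))

now = datetime.datetime(2023, 12, 25, 21, 0)
effect = effect_for_time(now)
holiday = holiday_for_date(now, {(12, 25): "Christmas"})
```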
In one embodiment, the current environment where the current login user is located includes a current time and a current location, the scene information includes weather information, and the displaying the scene information in the current environment where the current login user is located includes: determining current weather information according to the current time and the current location, and displaying the weather information.
According to a second aspect of the present disclosure, there is provided a scene showing device comprising:
a first display module configured to display a virtual atmosphere scene on an interface of a music application in response to a scene trigger operation in the music application;
the second display module is configured to display current text information matched with the theme and/or the current music type of the virtual atmosphere scene in the virtual atmosphere scene to obtain a target scene; the current music type is the type of currently playing music under the theme of the virtual atmosphere scene.
In one embodiment, the second display module is configured to: based on a preset corresponding relation between the theme and/or music type of the virtual atmosphere scene and the text information, determining current text information matched with the theme and/or current music type of the current virtual atmosphere scene according to the theme and/or current music type of the current virtual atmosphere scene; and displaying the current text information in the virtual atmosphere scene.
In one embodiment, the correspondence between the theme and/or music type of the virtual atmosphere scene and the text information is determined by the following procedure: constructing a first feature library according to the theme of the virtual atmosphere scene and/or keywords of the music type; constructing a second feature library according to the keywords of the text information; establishing a mapping relation between the first feature library and the second feature library according to the similarity between features in the first feature library and the second feature library; and determining the mapping relation as the corresponding relation.
In one embodiment, the second display module is configured to: determining a current front-end style component matched with the current text information according to the current text information based on a corresponding relation between preset text information and the front-end style component; and in the virtual atmosphere scene, displaying the current text information on a current front-end style component matched with the current text information.
In one embodiment, the correspondence between the text information and the front-end style component is determined by: constructing a third feature library according to the category of the text information; determining the front-end style component with the category matched with the characteristics in the third characteristic library according to the characteristics in the third characteristic library; establishing a mapping relation between the third feature library and the front-end style component according to the text information and the category of the front-end style component; and determining the mapping relation between the third feature library and the front-end style component as the corresponding relation between the text information and the front-end style component.
In one embodiment, the apparatus further comprises a third display module configured to: and displaying scene information related to the personalized information of the current login user in the virtual atmosphere scene.
In one embodiment, the personalized information of the current login user includes a music preference tag of the current login user, the scene information includes text information, and the third display module is configured to: based on a preset corresponding relation between the music preference tag and the text information, determine current text information matched with the music preference tag of the current login user according to the music preference tag of the current login user; and display the current text information matched with the music preference tag of the current login user; the current text information includes at least one of singer information, song information, and popular-comment information.
In one embodiment, the correspondence between the music preference tags and the text information is determined by: determining the music preference of the current login user according to the historical play information of the current login user; constructing a music preference tag library based on the music preference of the current login user; acquiring a label of text information matched with the music preference label according to the music preference label to obtain a text information label library; determining a mapping relation between the music preference tag and the text information according to the similarity of the tags in the music preference tag library and the text information tag library; and determining the mapping relation between the music preference label and the text information as the corresponding relation between the music preference label and the text information.
In one embodiment, the apparatus further comprises a fourth display module configured to: and in the virtual atmosphere scene, displaying scene information in the current environment where the current login user is located.
In one embodiment, the current environment in which the current login user is located includes a current time, the scene information includes an environmental special effect and holiday information, and the fourth display module is configured to: determining a corresponding environment special effect according to the time period of the current time, and displaying the environment special effect; the environmental effects include at least one of day effects, night effects; or determining the current date according to the current time, and displaying festival information corresponding to the current date.
In one embodiment, the current environment in which the current login user is located includes a current time and a current location, and the fourth display module is configured to: and determining current weather information according to the current time and the current position, and displaying the weather information.
According to a third aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the methods described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided an electronic device, comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform any of the methods described above via execution of the executable instructions.
According to the scene display method, the scene display device, the computer-readable storage medium, and the electronic device of the present disclosure, a virtual atmosphere scene is displayed on an interface of a music application in response to a scene trigger operation in the music application, and current text information matched with the theme and/or current music type of the virtual atmosphere scene is displayed in the virtual atmosphere scene to obtain a target scene. Text information matched with the theme and/or current music type can therefore be shown within the virtual atmosphere scene, which solves the problem in the related art that the audio and video are unrelated to the text information, and improves the user experience.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
FIG. 1 illustrates a schematic diagram of a scene showing flow architecture in an embodiment of the disclosure;
FIG. 2 shows a flow chart of a scenario presentation method in an embodiment of the present disclosure;
FIG. 3 shows a flow chart for presenting current text information in an embodiment of the present disclosure;
FIG. 4 shows a flow chart of constructing a correspondence in an embodiment of the present disclosure;
FIG. 5 shows a flow chart for presenting current text information in an embodiment of the present disclosure;
FIG. 6 shows a flow chart of constructing a correspondence in an embodiment of the present disclosure;
FIG. 7 shows a flow chart illustrating scenario information in an embodiment of the present disclosure;
FIG. 8 shows a flow chart of constructing a correspondence in an embodiment of the present disclosure;
fig. 9 is a schematic structural view of a scene showing device according to an embodiment of the present disclosure;
fig. 10 shows a schematic structural diagram of an electronic device in an embodiment of the disclosure.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present disclosure will be described below with reference to several exemplary embodiments. It should be understood that these embodiments are presented merely to enable one skilled in the art to better understand and practice the present disclosure and are not intended to limit the scope of the present disclosure in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Those skilled in the art will appreciate that embodiments of the present disclosure may be implemented as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the following forms, namely: complete hardware, complete software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to an embodiment of the disclosure, a scene showing method, a scene showing device, a computer-readable storage medium and an electronic device are provided.
Any number of elements in the figures are for illustration and not limitation, and any naming is used for distinction only, and not for any limiting sense.
The principles and spirit of the present disclosure are described in detail below with reference to several representative embodiments thereof.
Summary of The Invention
In the related art, some platforms (applications and software) provide scene functions that present a particular scene, such as a rain-streaked window or a fireplace bonfire, by combining audio with video imagery. The scene content is of a single kind, so users are prone to visual fatigue, and the content cannot interact with the user.
In view of the foregoing, the present disclosure provides a scene display method, a scene display device, a computer-readable storage medium, and an electronic device, which display a virtual atmosphere scene on an interface of a music application in response to a scene trigger operation in the music application, and display, in the virtual atmosphere scene, current text information matched with the theme and/or current music type of the virtual atmosphere scene to obtain a target scene. In this way, text information matched with the theme and/or current music type can be shown within the virtual atmosphere scene, solving the problem in the related art that the audio and video are unrelated to the text information, and improving the user experience.
Having described the basic principles of the present disclosure, various non-limiting embodiments of the present disclosure are specifically described below.
Application scene overview
It should be noted that the following application scenarios are only shown for facilitating understanding of the spirit and principles of the present disclosure, and embodiments of the present disclosure are not limited in this respect. Rather, embodiments of the present disclosure may be applied to any scenario where applicable.
The present disclosure may be applied to any situation where a client needs to perform scene presentation. For example, when a user plays music related to a theme, the client matches the theme of the virtual atmosphere scene to text information and a front-end style component, and displays the text information in the matched front-end style component to obtain the target scene.
Exemplary method
The system architecture and application scenario of the operating environment of the present exemplary embodiment are described below in conjunction with fig. 1.
Fig. 1 shows a schematic diagram of a system architecture; the system architecture 100 may include a terminal 110 and a server 120. The terminal 110 may be a smartphone, a tablet computer, a personal computer, and so on, and may receive a user's input or a designated trigger operation. The server 120 broadly refers to the background system that provides scene presentation (e.g., a scene display system), and may be a single server or a server cluster. The terminal 110 and the server 120 may be connected through a wired or wireless communication link for data interaction.
Exemplary embodiments of the present disclosure first provide a scene showing method, which may include:
in response to a scene trigger operation in a music application, displaying a virtual atmosphere scene on an interface of the music application; and
displaying, in the virtual atmosphere scene, current text information matched with the theme and/or current music type of the virtual atmosphere scene, to obtain a target scene; the current music type is the type of the music currently playing under the theme of the virtual atmosphere scene.
Fig. 2 shows an exemplary flow of the scene showing method, and each step in fig. 2 is specifically described below.
Referring to fig. 2, in step S210, a virtual atmosphere scene is displayed at an interface of a music application in response to a scene trigger operation in the music application.
The scene trigger operation is an operation, set by the system, for displaying a corresponding virtual atmosphere scene. In one embodiment, the scene trigger operation may be a click operation, a slide operation, a remote-control operation, and so on, which is not limited herein. For example, if the system sets a click operation as the scene trigger operation, the corresponding virtual atmosphere scene is triggered and displayed by clicking; as another example, if the system sets a mid-air fist-clenching gesture as the scene trigger operation, performing that gesture triggers and displays the virtual atmosphere scene corresponding to the position of the fist's center point in the music application. The scene trigger operation may be triggered by a guest, by a logged-in user, or by a machine, which is not limited herein.
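As a rough illustration of how a system-configured trigger operation might be bound to a scene, consider the sketch below; the operation and scene names are hypothetical and not taken from the disclosure.

```python
# Hypothetical binding of trigger operations to virtual atmosphere scenes.
SCENE_TRIGGERS = {
    "click": "christmas_tree_scene",
    "slide": "rainy_window_scene",
}

def handle_operation(operation):
    """Return the virtual atmosphere scene bound to a trigger operation, if any."""
    return SCENE_TRIGGERS.get(operation)

scene = handle_operation("click")
```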
The virtual atmosphere scene comprises audio and video corresponding to the audio, wherein the virtual atmosphere scene can be understood as an atmosphere space and is an immersive scene combined by the video and the audio; such as: christmas tree scenes may be combined with Christmas related video and audio.
With continued reference to fig. 2, in step S220, in the virtual atmosphere scene, current text information matching the theme and/or the current music type of the virtual atmosphere scene is displayed, so as to obtain a target scene.
The current music type is the type of currently playing music under the theme of the virtual atmosphere scene.
The theme of the virtual atmosphere scene is preset and displayed on the music application interface, and each virtual atmosphere scene theme may include a plurality of pieces of music matching that theme. For example, a focused-study theme may include several pieces of warm, soothing light music.
The music type is the type of the pieces of music included under the theme of the virtual atmosphere scene. For example, the pieces included under a focused-study theme may be of the light-music type, the classical-music type, and so on.
The text information is a set of network text resources collected from the network according to the theme and/or music type of the virtual atmosphere scene, that is, text related to that theme and/or music type. The text information may be sentences, paragraphs, articles, and so on, such as singer-related information, music reviews, or a singer's recent news, which is not limited herein. Accordingly, the current text information is the sentences, paragraphs, or articles matching the theme and/or current music type of the virtual atmosphere scene.
In one embodiment, the current text information may be determined by the theme of the virtual atmosphere scene and/or the correspondence between the music type and the text information, and referring to fig. 3, step S220 may further include the following steps:
step S310, based on the preset corresponding relation between the theme and/or music type of the virtual atmosphere scene and the text information, determining the current text information matched with the theme and/or the current music type of the current virtual atmosphere scene according to the theme and/or the current music type of the current virtual atmosphere scene.
The correspondence between the theme and/or music type of the virtual atmosphere scene and the text information can be a mapping relationship between the theme and/or music type of the virtual atmosphere scene and the text information established through keywords, or a correspondence between the theme and/or music type of the virtual atmosphere scene and the text information established through a machine learning model; here, the theme and/or music type of one virtual atmosphere scene may correspond to one text message, or may correspond to a plurality of text messages, which is not limited herein.
Where the theme and/or music type of a virtual atmosphere scene corresponds to one piece of text information, the displayed text changes as the theme and/or music type changes. Where it corresponds to a plurality of pieces of text information, those pieces may be played in a loop, or only one of them may be played, while that theme and/or music type is active, consistent with the setting of the music application, which is not limited herein.
Once the correspondence between the theme and/or music type of the virtual atmosphere scene and the text information is determined, whether it is the keyword-based mapping relationship or the correspondence established through a machine learning model, the matching text information can be obtained by querying that mapping or correspondence with the theme and/or current music type of the current virtual atmosphere scene. In this way, different text information is displayed under different themes and/or music types, which improves the user experience.
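Purely as an illustration of the query described above (not part of the disclosed embodiment), the correspondence can be sketched as a table lookup; all theme names, music types, and text entries below are hypothetical:

```python
# Hypothetical correspondence table: (theme, music type) -> text information.
# In practice this table would be built via keyword mapping or a learned model.
CORRESPONDENCE = {
    ("deep study time", "light music"): ["Light music to help you focus."],
    ("deep study time", "classical"): ["A note on tonight's composer."],
}

def current_text_info(theme, music_type):
    """Return text information matched to the current theme and music type."""
    entries = CORRESPONDENCE.get((theme, music_type), [])
    # One theme/type may map to several entries; here we simply take the
    # first, consistent with "play only one of them" in the description.
    return entries[0] if entries else None
```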
In one embodiment, referring to fig. 4, the correspondence between the theme and/or music type of the virtual atmosphere scene and the text information may be determined by the following procedure:
step S410, a first feature library is constructed according to the theme of the virtual atmosphere scene and/or keywords of the music type.
The keywords of the theme and/or music type of the virtual atmosphere scene may be extracted by TF-IDF algorithm, or may be extracted by TextRank algorithm, which is not limited herein.
The first feature library comprises a plurality of keywords, and further can comprise two types of keywords, wherein one type of keywords is a keyword of a theme of the virtual atmosphere scene, and the other type of keywords is a keyword of a music type; such as: keywords of the theme of the virtual atmosphere scene are saved as a column, and keywords of the music type are saved as a column.
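The TF-IDF extraction step mentioned above can be sketched as follows; this is a bare-bones stdlib illustration over pre-tokenized documents (a real system would use a proper tokenizer and a library implementation), and all document contents are made up:

```python
import math
from collections import Counter

def tfidf_keywords(docs, top_k=3):
    """Return the top_k TF-IDF keywords for each tokenized document."""
    n = len(docs)
    # Document frequency: in how many documents each word appears.
    df = Counter(w for doc in docs for w in set(doc))
    results = []
    for doc in docs:
        tf = Counter(doc)
        # Words that occur in every document get idf = log(1) = 0.
        scores = {w: (tf[w] / len(doc)) * math.log(n / df[w]) for w in tf}
        results.append(sorted(scores, key=scores.get, reverse=True)[:top_k])
    return results
```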
Step S420, constructing a second feature library according to the keywords of the text information.
The keywords of the text information, like the keywords of the theme and/or music type of the virtual atmosphere scene, may be extracted by the TF-IDF algorithm or by the TextRank algorithm, which is not limited herein.
The second feature library includes a plurality of keywords, and the keywords are keywords of text information, i.e., keywords extracted from sentences, paragraphs, articles, and the like.
Step S430, establishing a mapping relation between the first feature library and the second feature library according to the similarity between the features in the first feature library and the second feature library.
Wherein, the similarity can be determined by Euclidean distance, Manhattan distance, cosine similarity, the Pearson correlation coefficient, and the like. A mapping relationship between the theme of the virtual atmosphere scene and the music type within the first feature library may be established first, followed by a mapping relationship between the keywords of the first feature library and those of the second; alternatively, a mapping relationship between the keywords of the theme or music type in the first feature library and the keywords in the second feature library may be established first, followed by a mapping relationship between the keywords in the second feature library and the keywords of the music type or theme in the first feature library, which is not limited herein.
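As one hedged illustration of establishing the mapping by similarity, the sketch below scores each first-library entry against every second-library entry with cosine similarity over bag-of-words keyword vectors; the library contents are invented for the example:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two keyword lists (bag-of-words)."""
    va, vb = Counter(a), Counter(b)
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_mapping(first_lib, second_lib):
    """Map each first-library feature to its most similar second-library one."""
    return {
        name: max(second_lib, key=lambda t: cosine(kws, second_lib[t]))
        for name, kws in first_lib.items()
    }
```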
Step S440, the mapping relation is determined as a corresponding relation.
The correspondence refers to the correspondence between the theme and/or music type of the virtual atmosphere scene and the text information in step S310.
Step S320, displaying the current text information in the virtual atmosphere scene.
Here, the current text information is displayed in the video picture matched with the current music type in the virtual atmosphere scene.
In one embodiment, the displaying of the current text information in the virtual atmosphere scene is implemented by a front-end style component, and specifically, referring to fig. 5, the step S320 may further include the following steps:
step S510, determining a current front-end style component matched with the current text information according to the current text information based on the corresponding relation between the preset text information and the front-end style component.
The correspondence between the text information and the front-end style components may be a mapping relationship established by category, or a correspondence established through a machine learning model. One front-end style component may correspond to one piece of text information or to several, which is not limited herein; such as: the news ticker component corresponds to news-type text information, and the billboard component corresponds to text information of types such as automobiles and food.
Once the correspondence between the text information and the front-end style components is determined, whether it is the category-based mapping relationship or the correspondence established through a machine learning model, the matching front-end style component can be obtained by querying that mapping or correspondence with the current text information.
In one embodiment, referring to fig. 6, the correspondence between the text information and the front-end style component of this step may be determined by the following procedure:
step S610, a third feature library is constructed according to the category of the text information.
The category of the text information can be determined by classifying and labeling the text information; that is, the text information is first classified, and the classes are then labeled. The category of the text information can thus be understood as a category label, i.e., labeling information.
The third feature library includes several category labels (labeling information).
Step S620, determining, according to the features in the third feature library, a front-end style component whose category matches those features.
Wherein, the features in the third feature library are the category labels described above.
The front-end style components componentize the front-end presentation modes corresponding to different virtual atmosphere scenes; such as: for a Christmas-tree virtual atmosphere scene, "Christmas" and "tree" can be built as two components, namely a Christmas-element component and a tree component. A front-end style component can be mapped through the category of the text information; such as: if the category of the text information is advertisement, the mapped component is a billboard component; if the category is news, a news ticker component; if the category is concert, a concert large-screen component; if the category is concert hand banner, a concert hand-banner component.
Step S630, according to the text information and the category of the front end style component, a mapping relation between the third feature library and the front end style component is established.
Where the category of the text information and the category served by a front-end style component are highly similar, a mapping relationship between the two is established; such as: advertisement-type text information maps to the billboard component, and news-type text information maps to the news ticker component. One front-end style component may correspond to one piece of text information or to several, which is not limited herein.
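The advertisement-to-billboard and news-to-ticker examples above amount to a category-keyed component registry; a minimal sketch follows, where the component names are illustrative placeholders rather than the actual front-end API:

```python
# Hypothetical category -> front-end style component registry.
COMPONENT_BY_CATEGORY = {
    "advertisement": "BillboardComponent",
    "news": "NewsTickerComponent",
    "concert": "ConcertScreenComponent",
}

def component_for(category, default="PlainTextComponent"):
    """Resolve the front-end style component for a text-information category.

    Unknown categories fall back to a default so the scene still renders.
    """
    return COMPONENT_BY_CATEGORY.get(category, default)
```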
Step S640, determining the mapping relation between the third feature library and the front-end style component as the corresponding relation between the text information and the front-end style component.
The correspondence refers to the correspondence between the text information and the front-end style component in the step S510.
And step S520, in the virtual atmosphere scene, displaying the current text information on the current front-end style component matched with the current text information.
Here, in the virtual atmosphere scene, the current text information is displayed on the current front-end style component matched with the current text information.
The above-described content is applicable to both guest users and logged-in users.
In one embodiment, in order to enrich the scene, improve the linkage property and user experience with the user, some information of the user may be displayed in the virtual atmosphere scene, and specifically, the scene display method may further include the following steps:
and in the virtual atmosphere scene, displaying scene information related to the personalized information of the current login user.
The personalized information of the currently logged-in user includes the user's profile; such as: music interest preferences like a preferred genre, a favorite singer, or a favorite language. When the personalized information is a favorite singer, one or more of the singer, the singer's work information, the singer's background information, and the like are displayed in the virtual atmosphere scene; when it is a favorite language, one or more of the hot songs in that language, the hot comments on those songs, the singer information for those songs, and the like are displayed; when it is a preferred genre, one or more of the hot songs of that genre, the hot comments on those songs, and the like are displayed in the virtual scene.
In one embodiment, the personalized information of the current login user includes a music preference tag of the current login user, the scene information includes text information, and further, referring to fig. 7, the steps may include the steps of:
step S710, determining current text information matched with the music preference label of the current login user according to the music preference label of the current login user based on the corresponding relation between the preset music preference label and the text information.
The correspondence between the music preference tags and the text information may be a mapping relationship established by category, or a correspondence established through a machine learning model. One music preference tag may correspond to one piece of text information or to several; conversely, one piece of text information may correspond to several music preference tags, which is not limited herein.
Where one music preference tag corresponds to a single piece of text information, the displayed text changes as the tag changes. Where one tag corresponds to multiple pieces of text information, those pieces may be played in a loop, or only one of them may be played, while the music corresponding to the tag is playing, consistent with the setting of the music application, which is not limited herein.
Once the correspondence between the music preference tags and the text information is determined, whether it is the category-based mapping relationship or the correspondence established through a machine learning model, the matching text information can be obtained by querying that mapping or correspondence with the music preference tag of the currently logged-in user. In this way, different text information is displayed under the themes and/or music types of different virtual atmosphere scenes, which improves the linkage with the user and the user experience.
In one embodiment, referring to fig. 8, the correspondence between the music preference tag and the text information in this step may be determined by the following procedure:
Step S810, determining the music preference of the current login user according to the historical play information of the current login user.
The historical play information of the currently logged-in user can be understood as the user's play history. In one embodiment, the user's music preference can be determined from the proportion of each music type in the full play history, from the proportion of each music type in the recent play history, or from the most recent play record; "recent" may mean one week, one month, or three months, which is not limited herein.
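One way to realize the "proportion of music types in the recent play history" variant is sketched below; the record format (type, timestamp) and the 30-day window are assumptions for illustration:

```python
from collections import Counter
from datetime import datetime, timedelta

def music_preference(history, now, window_days=30):
    """Return the most-played music type within a recent window, or None."""
    cutoff = now - timedelta(days=window_days)
    recent = [music_type for music_type, played_at in history if played_at >= cutoff]
    if not recent:
        return None
    # most_common(1) yields [(type, count)] for the dominant type.
    return Counter(recent).most_common(1)[0][0]
```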
Step S820, a music preference tag library is constructed based on the music preference of the current login user.
The music preference tag library includes a plurality of music preference tags. The music preference tags can be determined by classifying and labeling the preferred music; that is, the preferred music is first classified, and the categories are then labeled. A music preference tag can thus be understood as a category label, i.e., labeling information.
Step S830, according to the music preference label, obtaining the label of the text information matched with the music preference label, and obtaining a text information label library.
The text information is a set of network text information resources acquired through the music preference tags; such as: a sentence, a paragraph, a chapter, etc., or a set of two or more thereof, is not limited herein.
The text information tag library comprises a plurality of text information tags, wherein the text information tags can be obtained by labeling text information which is acquired on a network through the music preference tags and is similar to the music preference tags; here, one music preference tag may correspond to one text information tag or may correspond to a plurality of text information tags, which is not limited herein.
Step S840, according to the similarity of the labels in the music preference label library and the text information label library, determining the mapping relation between the music preference labels and the text information.
Wherein, the similarity can be determined by Euclidean distance, Manhattan distance, cosine similarity, the Pearson correlation coefficient, and the like. The mapping relationship may be established between a music preference tag and a text information tag whose similarity exceeds a certain threshold, or only between tags that are identical, which is not limited herein.
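As a hedged illustration of the threshold rule above, the sketch below maps each music preference tag to every text-information tag whose token-set Jaccard similarity meets a threshold; the tags and the 0.5 threshold are invented for the example:

```python
def jaccard(a, b):
    """Jaccard similarity between the token sets of two tag strings."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def tag_mapping(preference_tags, text_tags, threshold=0.5):
    """Map each preference tag to the text tags meeting the threshold."""
    return {
        p: [t for t in text_tags if jaccard(p, t) >= threshold]
        for p in preference_tags
    }
```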
Step S850, determining the mapping relationship between the music preference tag and the text information as the correspondence relationship between the music preference tag and the text information.
The correspondence refers to the correspondence between the music preference tag and the text information in step S710.
Step S720, displaying the current text information matched with the music preference label of the current login user.
Wherein, the current text information includes at least one of singer information, song information, and hot-comment information.
Here, the current text information is displayed in the video picture matched with the current music type in the virtual atmosphere scene.
In an embodiment, in order to further improve the user experience, the environment information where the user is located may be displayed in the virtual atmosphere scene, that is, the scene display method may further include the following steps:
and in the virtual atmosphere scene, displaying scene information in the current environment where the current login user is located.
The current environment of the currently logged-in user includes the external environment; such as: weather, time of day, festivals, network IP location, and the like, which is not limited herein. When the current environment is weather, the current weather details (temperature, pollution level, wind level, weather forecast, and the like) are displayed in the virtual atmosphere scene; when it is a festival, information about the current festival (for example, Dragon Boat Festival details or Children's Day details) is displayed. One or more items of scene information may be displayed; such as: the weather-related scene information alone, or the weather-related and festival-related scene information together, which is not limited herein.
In one embodiment, the current environment where the current logged-in user is located includes a current time, the scene information includes an environmental special effect and holiday information, and the steps may further include the steps of:
Determining a corresponding environmental special effect according to the time period of the current time, and displaying the environmental special effect; the environmental effects include at least one of day effects and night effects; or determining the current date according to the current time, and displaying the festival information corresponding to the current date.
The environmental special effects are used to further create a virtual atmosphere scene matched to the user; daytime effects use rich colors and high brightness, while night effects use a limited palette, mainly black, and low brightness.
The festival information includes the festival's name, date, taboos, and the like; such as: the Dragon Boat Festival, the origin of the Dragon Boat Festival, the taboos of the Dragon Boat Festival, and the like.
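The day/night branch and the date-keyed festival lookup described above can be sketched as follows; the hour boundaries and the festival table are illustrative assumptions (festivals such as the Dragon Boat Festival follow the lunar calendar and would need a proper calendar lookup):

```python
from datetime import datetime, date

# Hypothetical fixed-date festival table keyed by (month, day).
FESTIVALS = {(6, 1): "Children's Day", (12, 25): "Christmas"}

def environment_effect(now):
    """Pick a day or night special effect from the current hour."""
    return "day_effect" if 6 <= now.hour < 18 else "night_effect"

def festival_info(today):
    """Return the festival name for today's date, if any."""
    return FESTIVALS.get((today.month, today.day))
```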
In one embodiment, the current environment where the current login user is located includes a current time and a current location, and the scene information includes weather information, where the step of "displaying the scene information in the current environment where the current login user is located" may include the steps of:
and determining current weather information according to the current time and the current position, and displaying the weather information.
The weather information includes temperature, pollution level, wind level, weather forecast, and the like; such as: sunny, light pollution, a breeze, and the like.
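Finally, the weather display can be sketched as formatting a few fields obtained from a provider; `get_weather` below is a hypothetical stub returning sample data, not a real weather API:

```python
def get_weather(current_time, location):
    """Hypothetical provider stub; a real system would query a weather
    service using the current time and the IP-derived location."""
    return {"condition": "sunny", "temperature_c": 22,
            "pollution": "light", "wind": "breeze"}

def weather_caption(current_time, location):
    """Format weather details for display in the virtual atmosphere scene."""
    w = get_weather(current_time, location)
    return (f"{w['condition']}, {w['temperature_c']} C, "
            f"pollution: {w['pollution']}, wind: {w['wind']}")
```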
Exemplary apparatus
Having described a scene showing method of an exemplary embodiment of the present disclosure, next, a scene showing apparatus of an exemplary embodiment of the present disclosure will be described with reference to fig. 9.
Referring to fig. 9, a scene showing apparatus 900 includes:
a first display module configured to display a virtual atmosphere scene at an interface of a music application in response to a scene triggering operation in the music application;
the second display module is configured to display current text information matched with the theme and/or the current music type of the virtual atmosphere scene in the virtual atmosphere scene to obtain a target scene; the current music type is the type of currently playing music under the theme of the virtual atmosphere scene.
In one embodiment, the second display module is configured to: based on a preset corresponding relation between the theme and/or music type of the virtual atmosphere scene and the text information, determining current text information matched with the theme and/or current music type of the current virtual atmosphere scene according to the theme and/or current music type of the current virtual atmosphere scene; and displaying the current text information in the virtual atmosphere scene.
In one embodiment, the correspondence between the theme and/or music genre of the virtual atmosphere scene and the text information is determined by the following procedure: constructing a first feature library according to the theme of the virtual atmosphere scene and/or keywords of the music type; constructing a second feature library according to the keywords of the text information; establishing a mapping relation between the first feature library and the second feature library according to the similarity between the features in the first feature library and the second feature library; and determining the mapping relation as a corresponding relation.
In one embodiment, the second display module is configured to: determining a current front end style component matched with the current text information according to the current text information based on a corresponding relation between the preset text information and the front end style component; in the virtual atmosphere scene, the current text information is displayed on a current front-end style component matched with the current text information.
In one embodiment, the correspondence between text information and front-end style components is determined by: constructing a third feature library according to the category of the text information; determining a front-end style component with the category matched with the characteristics in the third characteristic library according to the characteristics in the third characteristic library; establishing a mapping relation between a third feature library and the front-end style component according to the text information and the category of the front-end style component; and determining the mapping relation between the third feature library and the front-end style component as the corresponding relation between the text information and the front-end style component.
In one embodiment, the apparatus further comprises a third display module configured to: and in the virtual atmosphere scene, displaying scene information related to the personalized information of the current login user.
In one embodiment, the personalized information of the current login user includes a music preference tag of the current login user, the scene information includes text information, and the third display module is configured to: based on a preset corresponding relation between the music preference label and the text information, determining current text information matched with the music preference label of the current login user according to the music preference label of the current login user; displaying current text information matched with the music preference label of the current login user; the current text information includes at least one of singer information, song information, and critique information.
In one embodiment, the correspondence between the music preference tags and the text information is determined by: determining the music preference of the current login user according to the historical play information of the current login user; constructing a music preference tag library based on the music preference of the current login user; according to the music preference label, obtaining a label of text information matched with the music preference label, and obtaining a text information label library; determining a mapping relation between the music preference tag and the text information according to the similarity of the tags in the music preference tag library and the text information tag library; and determining the mapping relation between the music preference label and the text information as the corresponding relation between the music preference label and the text information.
In one embodiment, the apparatus further comprises a fourth display module configured to: and in the virtual atmosphere scene, displaying scene information in the current environment where the current login user is located.
In one embodiment, the current environment in which the current logged-in user is located includes a current time, the scene information includes an environmental special effect and holiday information, and the fourth display module is configured to: determining a corresponding environmental special effect according to the time period of the current time, and displaying the environmental special effect; the environmental effects include at least one of day effects and night effects; or determining the current date according to the current time, and displaying the festival information corresponding to the current date.
In one embodiment, the current environment in which the current logged-in user is located includes a current time and a current location, and the fourth display module is configured to: and determining current weather information according to the current time and the current position, and displaying the weather information.
Exemplary storage Medium
A storage medium according to an exemplary embodiment of the present disclosure is described below.
In the present exemplary embodiment, the above-described method may be implemented by a program product that includes program code; the program product may be stored on a portable compact disc read-only memory (CD-ROM) and run on a device such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Exemplary electronic device
An electronic device of an exemplary embodiment of the present disclosure is described with reference to fig. 10.
The electronic device 1000 shown in fig. 10 is merely an example and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in fig. 10, the electronic device 1000 is embodied in the form of a general purpose computing device. Components of electronic device 1000 may include, but are not limited to: at least one processing unit 1010, at least one memory unit 1020, a bus 1030 connecting the various system components (including the memory unit 1020 and the processing unit 1010), and a display unit 1040.
Wherein the storage unit stores program code that is executable by the processing unit 1010 such that the processing unit 1010 performs steps according to various exemplary embodiments of the present disclosure described in the above section of the present specification. For example, the processing unit 1010 may perform the method steps shown in fig. 1, etc.
The memory unit 1020 may include volatile memory units such as a random access memory unit (RAM) 1021 and/or a cache memory unit 1022, and may further include a read only memory unit (ROM) 1023.
Storage unit 1020 may also include a program/utility 1024 having a set (at least one) of program modules 1025, such program modules 1025 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 1030 may include a data bus, an address bus, and a control bus.
The electronic device 1000 may also communicate with one or more external devices 2000 (e.g., keyboard, pointing device, bluetooth device, etc.) via an input/output (I/O) interface 1050. The electronic device 1000 also includes a display unit 1040 that is connected to an input/output (I/O) interface 1050 for displaying. Also, electronic device 1000 can communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 1060. As shown, the network adapter 1060 communicates with other modules of the electronic device 1000 over the bus 1030. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with the electronic device 1000, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
It should be noted that although several modules or sub-modules of the apparatus are mentioned in the detailed description above, such partitioning is merely exemplary and not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more units/modules described above may be embodied in a single unit/module; conversely, the features and functions of one unit/module described above may be further divided and embodied by a plurality of units/modules.
Furthermore, although the operations of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
While the spirit and principles of the present disclosure have been described with reference to several particular embodiments, it is to be understood that the disclosure is not limited to the particular embodiments disclosed; nor does the division into aspects, which is made only for convenience of description, imply that features in these aspects cannot be combined to advantage. The disclosure is intended to cover the various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

1. A scene showing method, comprising:
in response to a scene triggering operation in a music application, displaying a virtual atmosphere scene on an interface of the music application;
displaying current text information matched with the theme and/or the current music type of the virtual atmosphere scene in the virtual atmosphere scene to obtain a target scene; the current music type is the type of currently playing music under the theme of the virtual atmosphere scene.
2. The scene showing method according to claim 1, wherein the showing of the current text information matching the theme and/or music type of the virtual atmosphere scene in the virtual atmosphere scene includes:
based on a preset corresponding relation between the theme and/or music type of the virtual atmosphere scene and the text information, determining current text information matched with the theme and/or current music type of the current virtual atmosphere scene according to the theme and/or current music type of the current virtual atmosphere scene;
and displaying the current text information in the virtual atmosphere scene.
3. The scene presentation method according to claim 2, characterized in that the correspondence between the theme and/or music type of the virtual atmosphere scene and the text information is determined by:
constructing a first feature library according to the theme of the virtual atmosphere scene and/or keywords of the music type;
constructing a second feature library according to the keywords of the text information;
establishing a mapping relation between the first feature library and the second feature library according to the similarity between features in the first feature library and the second feature library;
and determining the mapping relation as the corresponding relation.
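The mapping step of claim 3 — linking a feature library of theme/music-type keywords to a feature library of text-information keywords by similarity — can be illustrated with a minimal sketch. This is not the patent's implementation; the themes, texts, keywords, and the choice of Jaccard similarity are all illustrative assumptions.

```python
# Illustrative sketch of claim 3: two keyword "feature libraries" are
# linked by a similarity score, and the best match forms the mapping.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two keyword sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# First feature library: keywords for scene themes / music types (assumed).
theme_features = {
    "rainy-night": {"rain", "night", "calm"},
    "campfire": {"fire", "warm", "folk"},
}

# Second feature library: keywords for candidate text information (assumed).
text_features = {
    "Let the rain wash the day away": {"rain", "calm", "evening"},
    "Songs around a warm fire": {"fire", "warm", "acoustic"},
}

# Map each theme to the text whose keyword set is most similar;
# this mapping then serves as the preset corresponding relation.
mapping = {
    theme: max(text_features, key=lambda t: jaccard(kw, text_features[t]))
    for theme, kw in theme_features.items()
}
print(mapping["rainy-night"])  # -> Let the rain wash the day away
```

In practice the similarity measure could equally be an embedding-based cosine similarity; the claim only requires that a mapping be established from feature similarity.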
4. The scene showing method according to claim 2, wherein the showing the current text information in the virtual atmosphere scene includes:
determining a current front-end style component matched with the current text information according to the current text information based on a corresponding relation between preset text information and the front-end style component;
and in the virtual atmosphere scene, displaying the current text information on a current front-end style component matched with the current text information.
5. The scene showing method according to claim 4, wherein the correspondence between the text information and the front-end style component is determined by:
constructing a third feature library according to the category of the text information;
determining the front-end style component with the category matched with the characteristics in the third characteristic library according to the characteristics in the third characteristic library;
establishing a mapping relation between the third feature library and the front-end style component according to the text information and the category of the front-end style component;
and determining the mapping relation between the third feature library and the front-end style component as the corresponding relation between the text information and the front-end style component.
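Claims 4 and 5 describe matching text information to a front-end style component by category and then rendering the text on that component. A minimal sketch, assuming hypothetical categories ("lyric", "greeting", "song_info") and component names ("bubble", "banner", "card") not taken from the patent:

```python
# Illustrative sketch of claims 4-5: a third feature library keyed by
# text category is mapped to front-end style components of the same
# category, and matched text is rendered on its component.

# Third feature library: category of text information -> style component.
text_categories = {
    "lyric": "bubble",      # lyric snippets shown in a speech bubble
    "greeting": "banner",   # greetings shown in a banner
    "song_info": "card",    # song metadata shown on a card
}

def render(text: str, category: str) -> str:
    """Render text on the style component matched to its category."""
    component = text_categories.get(category, "plain")  # fallback component
    return f"<{component}>{text}</{component}>"

print(render("Good evening", "greeting"))  # -> <banner>Good evening</banner>
```

The dictionary stands in for the "mapping relation between the third feature library and the front-end style component"; a real client would dispatch to actual UI widgets rather than emit markup strings.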
6. The scene showing method according to claim 1, characterized in that the method further comprises:
and displaying scene information related to the personalized information of the current login user in the virtual atmosphere scene.
7. The scene showing method according to claim 6, wherein the personalized information of the current login user includes a music preference tag of the current login user, the scene information includes text information, and the showing of the scene information related to the personalized information of the current login user includes:
based on a preset corresponding relation between the music preference label and the text information, determining current text information matched with the music preference label of the current login user according to the music preference label of the current login user;
displaying current text information matched with the music preference tag of the current login user; the current text information includes at least one of singer information, song information, and popularity rating information.
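Claims 6 and 7 select personalized scene text from the logged-in user's music preference tag via a preset correspondence. A minimal sketch, with the tags, texts, and fallback string all assumed for illustration:

```python
# Illustrative sketch of claims 6-7: a preset correspondence between
# music preference tags and text information drives the personalized
# scene text for the current login user.

# Preset correspondence (assumed): preference tag -> scene text.
preference_texts = {
    "jazz": "Trending jazz pick: 'Blue in Green' - Miles Davis",
    "rock": "Hot rock track this week, rated 9.2 by listeners",
}

def personalized_text(user_tags: list[str]) -> str:
    """Return the text for the first preference tag that has a match."""
    for tag in user_tags:
        if tag in preference_texts:
            return preference_texts[tag]
    return "Enjoy the music!"  # fallback when no tag matches

print(personalized_text(["jazz", "pop"]))
```

A production system would draw the tag-to-text correspondence from the recommendation backend rather than a literal dictionary, but the lookup shape is the same.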
8. A scene showing device, comprising:
a first display module configured to display a virtual atmosphere scene at an interface of a music application in response to a scene triggering operation in the music application;
a second display module configured to display, in the virtual atmosphere scene, current text information matched with the theme and/or the current music type of the virtual atmosphere scene to obtain a target scene; the current music type is the type of currently playing music under the theme of the virtual atmosphere scene.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method of any one of claims 1-7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1-7 via execution of the executable instructions.
CN202310217518.XA 2023-03-01 2023-03-01 Scene display method and device, storage medium and electronic equipment Pending CN116339559A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310217518.XA CN116339559A (en) 2023-03-01 2023-03-01 Scene display method and device, storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN116339559A 2023-06-27

Family

ID=86883241

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310217518.XA Pending CN116339559A (en) 2023-03-01 2023-03-01 Scene display method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116339559A (en)

Similar Documents

Publication Publication Date Title
US9438850B2 (en) Determining importance of scenes based upon closed captioning data
US10299010B2 (en) Method of displaying advertising during a video pause
JP6623500B2 (en) Similar video search method, apparatus, equipment and program
US8966372B2 (en) Systems and methods for performing geotagging during video playback
US20120099760A1 (en) Associating information with a portion of media content
US11748408B2 (en) Analyzing user searches of verbal media content
US20110035382A1 (en) Associating Information with Media Content
US20140164921A1 (en) Methods and Systems of Augmented Reality on Mobile Devices
US20130076788A1 (en) Apparatus, method and software products for dynamic content management
US20110022589A1 (en) Associating information with media content using objects recognized therein
CN104602128A (en) Video processing method and device
US20180211286A1 (en) Digital content generation based on user feedback
CN111263186A (en) Video generation, playing, searching and processing method, device and storage medium
US20200250369A1 (en) System and method for transposing web content
US20170287000A1 (en) Dynamically generating video / animation, in real-time, in a display or electronic advertisement based on user data
CN104102683A (en) Contextual queries for augmenting video display
WO2023016349A1 (en) Text input method and apparatus, and electronic device and storage medium
CN112989104A (en) Information display method and device, computer readable storage medium and electronic equipment
WO2023174073A1 (en) Video generation method and apparatus, and device, storage medium and program product
US11275803B2 (en) Contextually related sharing of commentary for different portions of an information base
CN116339559A (en) Scene display method and device, storage medium and electronic equipment
CN115209211A (en) Subtitle display method, subtitle display apparatus, electronic device, storage medium, and program product
US11532111B1 (en) Systems and methods for generating comic books from video and images
US20140297285A1 (en) Automatic page content reading-aloud method and device thereof
CN115328364A (en) Information sharing method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination