CN115481598A - Document display method and device - Google Patents
- Publication number: CN115481598A
- Application number: CN202211169934.9A
- Authority
- CN
- China
- Prior art keywords: text, target, document, information, weather
- Prior art date: 2022-09-22
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/103—Formatting, i.e. changing of presentation of documents
- G06F40/106—Display of layout of documents; Previewing
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application discloses a document display method and device, belonging to the field of communication technology. The method comprises: receiving a first input for starting a target document; and, in response to the first input, displaying a background image and the text of the target document on a display interface of the target document, wherein the text of the target document is displayed floating over the background image, and the background image is determined based on at least one of weather information and emotion type information associated with the text of the target document.
Description
Technical Field
The application belongs to the technical field of communication, and particularly relates to a document display method and device.
Background
With the development of science and technology, document processing has become an essential function of electronic devices. Electronic devices are generally provided with various document applications, such as note-taking software, which integrate functions such as document scanning and speech-to-text conversion, making the applications convenient for users.
However, when a user reviews the content of a previously saved document, it is difficult to recall, from the recorded content alone, the emotion and state in which the document was recorded; the emotional connection between the user and the recorded document is missing, so the experience of reviewing previously saved documents is poor.
Disclosure of Invention
Embodiments of the present application aim to provide a document display method and a document display apparatus that solve the problem of the missing emotional connection between a user and a recorded document when the user reviews a previous document.
In a first aspect, an embodiment of the present application provides a document display method, where the method includes:
receiving a first input for launching a target document;
in response to the first input, displaying a background image and the text of the target document on a display interface of the target document, wherein the text of the target document is displayed floating over the background image, and the background image is determined based on at least one of weather information and emotion type information associated with the text of the target document.
In a second aspect, an embodiment of the present application provides a document display apparatus, including:
a first receiving module for receiving a first input for starting a target document;
a first display module for displaying, in response to the first input, a background image and the text of the target document on a display interface of the target document, wherein the text of the target document is displayed floating over the background image, and the background image is determined based on at least one of weather information and emotion type information associated with the text of the target document.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiments of the present application, the background image in the display interface of a target document is generated from at least one of the weather information and the emotion type information associated with the text of the target document, so that the background image reflects that weather and emotion information to some extent. The text is displayed floating over the background image in the display interface of the target document, so when the user reads the text, the background image helps the user recall the content of the target document and the emotion and state in which the document was recorded. An emotional connection thus forms between the user and the document, which effectively improves the user's document reading experience.
Drawings
FIG. 1 is a flowchart illustrating a document display method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a display interface of a target document according to an embodiment of the present application;
FIG. 3 is a second schematic view of a display interface of a target document according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a second input operation provided in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a document display apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein fall within the scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular order or sequence. It should be understood that the terms so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Moreover, "first", "second", and the like are generally used generically and do not limit the number of objects; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates that the related objects before and after it are in an "or" relationship.
The document display method and apparatus provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 is a schematic flowchart of a document display method provided in an embodiment of the present application. As shown in fig. 1, the method includes the following steps.

Step 110: receiving a first input for starting a target document.

Specifically, the target document described in the embodiments of the present application may be a document previously edited and saved by the user, such as a log document, a note document, or another document carrying text; the target document may include text.
In the embodiments of the present application, the electronic device may receive a first input for opening the target document. The electronic device may be a smartphone, a tablet computer, a notebook computer, or the like, or another electronic device having a document display function.
The first input described in the embodiments of the present application may specifically be a touch input in which the user taps an identifier of the target document.
In other embodiments, the first input may be a voice command input by a user to open the target document.
In other embodiments, the first input may also be provided through an input device such as a mouse, for example a double click on the target document with the mouse.
Step 120: in response to the first input, displaying a background image and the text of the target document on a display interface of the target document, wherein the text of the target document is displayed floating over the background image, and the background image is determined based on at least one of weather information and emotion type information associated with the text of the target document.
After responding to the first input, the electronic device opens the target document and displays its content in the display interface of the target document; for example, the text of the target document is displayed in that interface.
The display interface of the target document described in the embodiments of the present application is the interface that displays the content of the target document; it is displayed on the screen of the electronic device.
The text of the target document described in the embodiments of the present application may specifically include content such as words, emoticons, hyperlinks, and drawings, written by the user in advance through a document editing interface; the text may be entered manually by the user or pasted in.
The background image described in the embodiments of the present application refers to the background image of the display interface of the target document. The size of the background image may equal that of the display interface; in other embodiments, it may also be smaller. Either way, the background image is displayed within the display interface, so the user necessarily sees it when viewing the interface. The background image may be a static image or a dynamic image.
In the embodiments of the present application, since the background image is determined based on at least one of the weather information and the emotion type information associated with the text, and the texts of different target documents may differ, the background images displayed when different target documents are opened may also differ.
Fig. 2 is a schematic diagram of a display interface of a target document provided in an embodiment of the present application. As shown in fig. 2, in the display interface 21 of the target document, the text 22 is displayed floating over the background image 23; that is, the background image is embedded in a layer below the text as the text's background, and the background image does not block the text.
More specifically, in the embodiments of the present application, the weather information associated with the text may be weather information determined from the text content or from the text recording time, while the emotion type information associated with the text is determined from the text content and may reflect the user emotion expressed by the text or the user's emotion when the text was edited.
More specifically, the user emotion type information may include a happy emotion type, a calm emotion type, a sad emotion type, and the like.
The background image described in the embodiments of the present application is determined from at least one of the weather information and the emotion type information associated with the text; that is, the background image may be determined in any of the following manners.

First, determining the background image from the weather information associated with the text:

After the weather information is determined, an image corresponding to the weather information is selected from a preset gallery whose images carry weather information tags. For example, if the weather information is determined to be heavy snow, a picture whose weather information tag is "heavy snow" can be selected from the gallery as the background image.

Optionally, when multiple images carry the weather information tag corresponding to the weather information, an image selection interface may be displayed so that the user can select an image meeting their needs, or the most frequently used image may be selected automatically.
Second, determining the background image from the emotion type information associated with the text:

After the emotion type information is determined, an image corresponding to the emotion type information is selected as the background image from preset images carrying emotion type tags. Optionally, when multiple images carry the emotion type tag corresponding to one piece of emotion type information, an image selection interface may be displayed so that the user can select an image meeting their needs, or the most frequently used image may be selected automatically.
Third, determining the background image from both the weather information and the emotion type information associated with the text:

After the weather information is determined, images corresponding to the weather information are selected from preset images that carry both an emotion type tag and a weather tag. When the weather information corresponds to multiple images, those images are further screened by emotion type tag, and the image corresponding to the emotion type information finally serves as the background image.
In the embodiments of the present application, the background image in the display interface of a target document is generated from at least one of the weather information and the emotion type information associated with the text of the target document, so that the background image reflects that information to some extent. The text is displayed floating over the background image, so when the user reads the text, the background image helps the user recall the content of the target document and the emotion and state in which it was recorded; an emotional connection forms between the user and the document, which effectively improves the user's document reading experience.

Optionally, before the receiving of the first input for starting the target document, the method further includes:
taking the weather content information as weather information associated with the text under the condition that the weather content information exists in the text content of the text;
under the condition that weather content information does not exist in the text content of the text, acquiring time content information in the text, and taking the weather information corresponding to the time content information as the weather information associated with the text;
and under the condition that the weather content information and the time content information do not exist in the text content of the text, acquiring the text recording time of the text, and taking the weather information corresponding to the text recording time as the weather information associated with the text.
Specifically, in the embodiments of the present application, before the target document is opened, the text content of the text in the target document may be analyzed, so that the weather information associated with the text is effectively obtained and the background image corresponding to the target document can be determined from that weather information. The text content may specifically be literal text content or link content.
The step of acquiring the weather information associated with the text may be specifically performed in a process of saving the target document, or may be performed in a process of editing the target document by the user.
The text content analysis performed on the text in the embodiment of the present application may specifically be performed by analyzing the text content according to a time text format rule and a weather text format rule.
In the embodiments of the present application, a weather text library may be provided, in which common texts describing weather, such as "sunny", "cloudy", "light rain", "heavy snow", and "fog", are stored in advance; the text in the target document is analyzed against the weather text library to screen out the weather content information in the text.
In some embodiments, the text may be further analyzed by a weather text content analysis model, and weather content information described in the text may be output.
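As a rough sketch of the weather text library screening just described (the word list and helper name are illustrative, not taken from the application):

```python
# Illustrative weather text library; a real library would be much larger
# and would also cover the original language of the document.
WEATHER_TEXT_LIBRARY = ("sunny", "cloudy", "light rain", "heavy rain",
                        "heavy snow", "fog")

def find_weather_content(text: str):
    """Return the first weather word that appears in the text, or None."""
    lowered = text.lower()
    for word in WEATHER_TEXT_LIBRARY:
        if word in lowered:
            return word
    return None
```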
The time text format rules in the embodiments of the present application may specifically be rules covering information such as "week", "time", "date", "X o'clock", "X minutes", "X seconds", and clock strings of the form "AA:AA" or "BB:BB:BB". The text is analyzed using the time text format rules, combined with semantic analysis technology, to obtain the time content information in the text.
In some embodiments, the text may also be analyzed by a time text content analysis model that outputs the time content information described in the text.
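A minimal sketch of the time text format rules, assuming simple regular expressions stand in for the rules quoted above; real rules would be combined with semantic analysis, as the description notes:

```python
import re

# Illustrative patterns for the time text format rules: weekday words,
# short numeric dates, and clock strings of the "AA:AA" / "BB:BB:BB" form.
TIME_PATTERNS = (
    re.compile(r"\b(monday|tuesday|wednesday|thursday|friday|saturday|sunday)\b", re.I),
    re.compile(r"\b\d{1,2}[/-]\d{1,2}\b"),           # e.g. "8/15" for August 15
    re.compile(r"\b\d{1,2}:\d{2}(?::\d{2})?\b"),     # "AA:AA" or "BB:BB:BB"
)

def find_time_content(text: str):
    """Return the first time-like fragment found in the text, or None."""
    for pattern in TIME_PATTERNS:
        match = pattern.search(text)
        if match:
            return match.group(0)
    return None
```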
In the embodiments of the present application, when weather content information exists in the text content, the text directly describes weather-related content; to stay closest to the text content, the weather content information described in the text is used directly as the weather information associated with the text.
More specifically, when no weather content information exists in the text content, the weather is not directly recorded in the text and cannot be read from it. In that case, the time-related content described in the text can be considered instead, and the time content information in the text content is acquired.
After the time content information is acquired in the embodiments of the present application, the weather corresponding to that time can be obtained. For example, if the time content information is August 15, the weather information for August 15 is acquired as the weather information associated with the text.
In the embodiments of the present application, when neither weather content information nor time content information exists in the text content, no relevant valid information can be obtained directly from the text content; the time when the user recorded the text may then be considered instead.
The text recording time described in the embodiment of the present application may specifically be a time when a user inputs a text, or may be a time when the user saves the text.
In the embodiment of the application, the weather information corresponding to the text recording time can be used as the weather information associated with the text.
In the embodiments of the present application, the text content is analyzed, and the weather information associated with the text is effectively acquired from the weather content information or time content information in the text content, or from the text recording time of the text, which facilitates generating a background image that helps the user recall the target document.
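Putting the three conditions together, the fallback chain could look like the sketch below. It reuses the illustrative find_weather_content and find_time_content helpers from the previous sketches, and weather_service stands in for whatever weather lookup the device actually uses; it is not a real API.

```python
from datetime import datetime

def weather_for_text(text: str, record_time: datetime, weather_service):
    """Three-tier fallback: weather words in the text, then a date in the
    text, then the text recording time (all helper names are illustrative)."""
    weather = find_weather_content(text)        # 1. weather written in the text
    if weather is not None:
        return weather
    time_fragment = find_time_content(text)     # 2. a date/time written in the text
    if time_fragment is not None:
        return weather_service.lookup(time_fragment)   # assumed lookup interface
    return weather_service.lookup(record_time)  # 3. fall back to the recording time
```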
Optionally, before the receiving of the first input for starting the target document, the method further includes:
analyzing each statement in the text to obtain emotion type information of each statement;
and taking target emotion type information in each piece of emotion type information as emotion type information associated with the text, wherein the target emotion type information is the emotion type information with the largest repetition frequency in each piece of emotion type information.
Specifically, in the embodiments of the present application, each sentence may be analyzed by a natural language processing model that outputs the emotion type information of each sentence. The natural language processing model may be trained in advance on text training samples carrying emotion type information labels; the embodiments of the present application do not limit the natural language processing model.
In other embodiments, a dedicated emotion word library may be established in advance, storing emotion words such as "happy", "sad", "delighted", and "disappointed". Each sentence is then subjected to emotion analysis based on the emotion words appearing in it; correspondingly, a sentence containing no emotion words may involve no emotional content, and its corresponding emotion type information may be null.
In other embodiments, semantic analysis may be performed on each sentence through a text semantic analysis model, so as to obtain emotion type information of each sentence.
The emotion type information described in the embodiments of the present application may specifically include: "happy emotion type", "depressed emotion type", "sad emotion type", and the like.
In the embodiments of the present application, after the emotion type information of each sentence is obtained, repeated emotion type information may exist, that is, the emotion type information of some sentences may be the same. The number of sentences corresponding to each piece of emotion type information can then be counted; this count is the number of repetitions described in the embodiments of the present application.
The emotion type information with the most repetitions represents the strongest emotional tendency across the whole text and can best characterize the overall emotion type information of the text.
In the embodiments of the present application, text emotion analysis is performed on each sentence in the text to acquire the emotion type information associated with the text, so that a background image that helps the user recall the related content of the target document is effectively generated; when the user reopens the target document, this helps the user establish an emotional connection with it.
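A small sketch of the emotion word library variant, assuming a toy lexicon and a naive sentence splitter (both illustrative); the most repeated emotion type wins, as described above:

```python
import re
from collections import Counter

# Illustrative emotion word library mapping words to emotion type labels.
EMOTION_WORDS = {
    "happy": "happy emotion type",
    "glad": "happy emotion type",
    "sad": "sad emotion type",
    "disappointed": "depressed emotion type",
}

def emotion_for_text(text: str):
    """Label each sentence, then return the most repeated emotion type."""
    sentences = [s for s in re.split(r"[.!?]", text) if s.strip()]
    labels = []
    for sentence in sentences:
        lowered = sentence.lower()
        for word, emotion in EMOTION_WORDS.items():
            if word in lowered:
                labels.append(emotion)
                break                    # one label per sentence
    if not labels:
        return None                      # no emotional content found
    return Counter(labels).most_common(1)[0][0]
```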
Optionally, the display interface of the target document further includes N first sub-interfaces, the first sub-interfaces being displayed floating over the background image, where the display content of the first sub-interfaces includes preset information acquired based on the text recording time of the text, and N is a positive integer.
Specifically, the first sub-interface described in this embodiment may specifically be a sub-interface in a form of a bubble frame, or may also be a sub-interface in a form of a rectangular frame, which is not limited in this embodiment.
More specifically, fig. 3 is a second schematic diagram of a display interface of a target document provided in an embodiment of the present application. As shown in fig. 3, the N first sub-interfaces 31 described in the embodiments of the present application may be displayed at blank positions of the display interface 21, that is, positions of the display interface 21 of the target document where the text 22 is not displayed. The N first sub-interfaces can be tiled when displayed and do not block one another.
In some embodiments, the N first sub-interfaces may be displayed in the display interface of the target document in the form of a bullet screen.
The preset information described in the embodiment of the present application may specifically refer to local information and network information obtained at the time of text recording.
The local information described in the embodiments of the present application refers to information generated by the electronic device itself at the text recording time, for example, information that can be read from the device's own data, such as call record information, chat record information, motion track information, music playing history, and schedule information in the electronic device.
The network information described in the embodiments of the present application may specifically include information that must be acquired via the network, such as current news information and document download record information.
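The sketch below shows one way such preset information might be gathered around the text recording time. Everything here is an assumption made for illustration: local_store and network_client are stand-ins for device- and network-facing data sources, and none of the calls name a real API.

```python
from datetime import datetime, timedelta

def collect_preset_info(record_time: datetime, local_store, network_client):
    """Collect local and network records near the text recording time.

    local_store.query and network_client.fetch are hypothetical interfaces;
    an actual device would read call logs, music history, news, etc. through
    its own platform APIs.
    """
    window = (record_time - timedelta(hours=1), record_time + timedelta(hours=1))
    info = []
    info.extend(local_store.query("call_records", window))         # assumed local read
    info.extend(local_store.query("music_history", window))        # assumed local read
    info.extend(network_client.fetch("news", record_time.date()))  # assumed network fetch
    return info
```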
In other embodiments, the user may also click on the first sub-interface, and the electronic device may display the details in the first sub-interface in response to an input from the user clicking on the first sub-interface.
For example, if the first sub-interface includes current news information, a specific news content page of the current news information is displayed.
In the embodiments of the present application, the first sub-interfaces displayed in the display interface of the target document effectively record the information corresponding to the text recording time, help the user remember the emotion and ideas at the time the document was edited, help the user form an emotional connection with the document, and effectively improve the user's document reading experience.
Optionally, before the receiving of the first input for starting the target document, the method further includes:
displaying M first sub-interfaces on a document editing interface of a target document;
receiving a second input for moving a first target sub-interface of the M first sub-interfaces to target text in the target document;
and responding to the second input, adjusting the display parameters of the target text based on the emotion type information corresponding to the target text, and canceling the display of the first target sub-interface.
Specifically, the document editing interface described in the embodiments of the present application is the interface used when editing the text of the target document; the user may type text in this interface.
The M first sub-interfaces can be displayed simultaneously in the document editing interface, so that the user learns more information of potential interest while editing the document; the M first sub-interfaces can be displayed at positions that do not block the text editing area.
In the embodiment of the application, the user can adjust the position of the first sub-interface in the text editing interface through specific input, and can add or delete the first sub-interface through the specific input.
More specifically, the first target sub-interface in the embodiments of the present application refers to the first sub-interface selected by the user from the M first sub-interfaces, and the target text is the text selected by the user from the target document. The target text may specifically be a selected whole sentence or a selected word. For example, if the text in the target document includes "The weather is really nice today! Let's go out and play.", the target text selected by the user may be "The weather is really nice today!", or it may be "today".
The second input in this embodiment of the application is used to establish an association between the first target sub-interface and the target text, and specifically may be an input for moving the first target sub-interface to the target text in the target document.
In some alternative embodiments, the second input may also be an input dragging the target text into the first target sub-interface.
Fig. 4 is a schematic diagram of a second input operation provided in an embodiment of the present application. As shown in fig. 4, the second input includes user input on the first target sub-interface 41 and the target text 42 in the target document.
The electronic device responds to the second input by establishing an association between the first target sub-interface and the target text. After the association is established, the first target sub-interface can be attached to the target text and hidden; from that point, the first target sub-interface is no longer displayed in the document editing interface.
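One way to model the association record is sketched below; the SubInterfaceAnchor structure and the character-offset anchoring are assumptions made for illustration, not the application's data model.

```python
from dataclasses import dataclass

@dataclass
class SubInterfaceAnchor:
    """Association between a hidden first sub-interface and a text span."""
    interface_id: int
    start: int    # character offset where the target text begins
    end: int      # character offset where the target text ends

def attach_interface(anchors, interface_id, start, end):
    """Record the association; the caller hides the sub-interface and
    restyles the target text span in response."""
    anchors.append(SubInterfaceAnchor(interface_id, start, end))
    return {"hide_interface": interface_id, "restyle_span": (start, end)}
```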
More specifically, to help the user later recognize, when viewing the target document, which target text is associated with a first target sub-interface, the display parameters of the target text may be further adjusted.

The display parameters of the target text may be adjusted in the following ways:
firstly, the target text is displayed in a bold mode, or the font of the target text is changed.
Secondly, the character color of the target text can be changed according to the emotion type information of the target text, for example, a warm tone is used when the emotion type information is "happy" and "excited", and a cool tone is used when the emotion type information is "disappointed" and "depressed".
The display effect of the target text after the display parameters are adjusted may refer to a hyperlink display effect in the text editing.
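A minimal sketch of the display parameter adjustment, assuming an illustrative warm/cool color table (the hex values and the dictionary shape are inventions for the example):

```python
# Warm tones for positive emotion types, cool tones for negative ones.
EMOTION_COLORS = {
    "happy emotion type": "#E8590C",         # warm orange
    "excited emotion type": "#D9480F",       # warm red-orange
    "disappointed emotion type": "#1864AB",  # cool blue
    "depressed emotion type": "#2B6CB0",     # cool blue-gray
}

def style_target_text(emotion_type: str):
    """Return display parameters for a target text span with an association."""
    return {
        "bold": True,
        "color": EMOTION_COLORS.get(emotion_type, "#000000"),
        "underline": True,   # hyperlink-like effect, as noted above
    }
```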
In the embodiments of the present application, while editing a document, the user can connect first sub-interfaces containing specific information to target text as needed, so that the information can be reviewed conveniently when the target text is later consulted.
Optionally, after displaying the background image and the text of the target document on the display interface of the target document in response to the first input, the method further includes:
receiving a third input to the target text;
in response to the third input, displaying the first target sub-interface in a display interface of the target document.
The third input described in the embodiments of the present application is an input that triggers display of the first target sub-interface associated with the target text; it may specifically be an input tapping the target text, or an input double-tapping or long-pressing the target text.
In the embodiments of the present application, the electronic device may display the first target sub-interface above the target text in response to the third input, and the first target sub-interface may be displayed differently according to its associated information.
When the first target sub-interface includes playable audio information, the audio information may be played automatically while the first target sub-interface is displayed, helping to create an immersive environment for the user.
In the embodiments of the present application, the third input helps the user further recall the information they expected to associate when composing the target text, helps the user recall the scene in which the document was composed, and improves the user experience.
In the document display method provided by the embodiments of the present application, the execution subject may be a document display apparatus. In the embodiments of the present application, the document display apparatus executing the document display method is taken as an example to describe the document display apparatus provided herein.
Fig. 5 is a schematic structural diagram of a document display apparatus provided in an embodiment of the present application. As shown in fig. 5, the apparatus includes a first receiving module 510 and a first display module 520. The first receiving module 510 is configured to receive a first input for starting the target document. The first display module 520 is configured to display, in response to the first input, a background image and the text of the target document on the display interface of the target document, where the text of the target document is displayed floating over the background image, and the background image is determined based on at least one of weather information and emotion type information associated with the text of the target document.
Optionally, the apparatus further comprises:
the first association module is used for taking the weather content information as the weather information associated with the text under the condition that the weather content information exists in the text content of the text;
under the condition that weather content information does not exist in the text content of the text, acquiring time content information in the text, and taking the weather information corresponding to the time content information as the weather information associated with the text;
and under the condition that the weather content information and the time content information do not exist in the text content of the text, acquiring the text recording time of the text, and taking the weather information corresponding to the text recording time as the weather information associated with the text.
Optionally, the apparatus further comprises:
the first analysis module is used for analyzing each statement in the text to obtain the emotion type information of each statement;
and a second association module for taking target emotion type information among the pieces of emotion type information as the emotion type information associated with the text, wherein the target emotion type information is the emotion type information with the most repetitions among the pieces of emotion type information.
Optionally, the display interface of the target document further includes: n first sub-interfaces, the first sub-interfaces are displayed on the background image in a floating manner, and the display content of the first sub-interfaces comprises: and preset information is acquired based on the text recording time of the text, wherein N is a positive integer.
Optionally, the apparatus further comprises:
the second display module is used for displaying M first sub-interfaces on a document editing interface of the target document;
a second receiving module, configured to receive a second input for moving a first target sub-interface in the M first sub-interfaces to a target text in the target document, where M is a positive integer;

an adjusting module, configured to, in response to the second input, adjust the display parameters of the target text based on the emotion type information corresponding to the target text, and cancel the display of the first target sub-interface.
In the embodiments of the present application, the background image in the display interface of a target document is generated from at least one of the weather information and the emotion type information associated with the text of the target document, so that the background image reflects that information to some extent. The text is displayed floating over the background image, so when the user reads the text, the background image helps the user recall the content of the target document and the emotion and state in which it was recorded; an emotional connection forms between the user and the document, which effectively improves the user's document reading experience.

The document display apparatus in the embodiments of the present application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA); it may also be a server, a Network Attached Storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like. The embodiments of the present application are not specifically limited in this respect.
The document display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The document display device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 4, and is not described herein again to avoid repetition.
Optionally, fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 6, an embodiment of the present application further provides an electronic device 600 including a processor 601 and a memory 602, where the memory 602 stores a program or instructions executable on the processor 601. When the program or instructions are executed by the processor 601, the steps of the document display method embodiments described above are implemented with the same technical effects; the details are not repeated here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power supply (e.g., a battery) for powering the various components; the power supply may be logically coupled to the processor 710 via a power management system, so that charging, discharging, and power consumption management are handled by the power management system. The electronic device structure shown in fig. 7 does not limit the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange components differently. Details are omitted here.
Wherein the user input unit 707 is configured to receive a first input for launching the target document;
the display unit 706 is configured to display, in response to the first input, a background image and text of the target document on a display interface of the target document, where the text of the target document is displayed in a floating manner on the background image, and determine the background image based on at least one of weather information and emotion type information associated with the text of the target document.
The processor 710 is configured to, in a case that weather content information exists in text content of the text, take the weather content information as weather information associated with the text;
under the condition that weather content information does not exist in the text content of the text, acquiring time content information in the text, and taking the weather information corresponding to the time content information as weather information associated with the text;
and under the condition that the weather content information and the time content information do not exist in the text content of the text, acquiring the text recording time of the text, and taking the weather information corresponding to the text recording time as the weather information associated with the text.
The processor 710 is configured to analyze each sentence in the text to obtain emotion type information of each sentence;
and taking target emotion type information in each piece of emotion type information as emotion type information associated with the text, wherein the target emotion type information is the emotion type information with the largest repetition frequency in each piece of emotion type information.
The display unit 706 is configured to display M first sub-interfaces on a document editing interface of a target document;
the user input unit 707 is configured to receive a second input for moving a first target sub-interface of the M first sub-interfaces to target text in the target document;
the display unit 706 is configured to adjust a display parameter of the target text based on the emotion type information corresponding to the target text in response to the second input, and cancel displaying the first target sub-interface.
In the embodiments of the present application, the background image in the display interface of a target document is generated from at least one of the weather information and the emotion type information associated with the text of the target document, so that the background image reflects that information to some extent. The text is displayed floating over the background image in the display interface of the target document, so when the user reads the text, the background image helps the user recall the content of the target document and the emotion and state in which the document was recorded; an emotional connection forms between the user and the document, which effectively improves the user's document reading experience.
It should be understood that in the embodiment of the present application, the input Unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the Graphics Processing Unit 7041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes at least one of a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts of a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, where the first storage area may store an operating system and the application programs or instructions required for at least one function (such as a sound playing function and an image playing function). Further, the memory 709 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchlink DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 709 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 710 may include one or more processing units; optionally, the processor 710 integrates an application processor, which primarily handles operations related to the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned document display method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above document display method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, or a system-on-chip.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing document display method embodiments, and achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but possibly also other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed substantially simultaneously or in a different order, so the described methods may be performed in an order other than that described, and steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application or portions thereof that contribute to the prior art may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A document display method, comprising:
receiving a first input for launching a target document;
in response to the first input, displaying a background image and text of the target document on a display interface of the target document, the text of the target document being displayed floating over the background image, the background image being determined based on at least one of weather information and emotion type information associated with the text of the target document.
2. The method of claim 1, further comprising, prior to said receiving a first input to launch a target document:
taking the weather content information as weather information associated with the text under the condition that the weather content information exists in the text content of the text;
under the condition that weather content information does not exist in the text content of the text, acquiring time content information in the text, and taking the weather information corresponding to the time content information as weather information associated with the text;
and under the condition that the weather content information and the time content information do not exist in the text content of the text, acquiring the text recording time of the text, and taking the weather information corresponding to the text recording time as the weather information associated with the text.
3. The document display method according to claim 1, further comprising, before said receiving a first input for starting a target document:
analyzing each sentence in the text to obtain emotion type information of each sentence;
and taking target emotion type information in each piece of emotion type information as emotion type information associated with the text, wherein the target emotion type information is the emotion type information with the most repetition times in each piece of emotion type information.
4. The document display method according to claim 1, wherein the display interface of the target document further comprises: N first sub-interfaces, the first sub-interfaces being displayed floating over the background image, and the display content of the first sub-interfaces comprising preset information acquired based on the text recording time of the text, where N is a positive integer.
5. The document display method of claim 4, further comprising, prior to said receiving a first input to launch a target document:
displaying M first sub-interfaces on a document editing interface of a target document;
receiving a second input for moving a first target sub-interface of the M first sub-interfaces to target text in the target document;
and responding to the second input, adjusting the display parameters of the target text based on the emotion type information corresponding to the target text, and canceling the display of the first target sub-interface.
6. A document display apparatus, comprising:
a first receiving module for receiving a first input for starting a target document;
a first display module for displaying, in response to the first input, a background image and text of the target document on a display interface of the target document, wherein the text of the target document is displayed floating over the background image, and the background image is determined based on at least one of weather information and emotion type information associated with the text of the target document.
7. The document display apparatus according to claim 6, wherein said apparatus further comprises:
the first association module is used for taking the weather content information as the weather information associated with the text under the condition that the weather content information exists in the text content of the text;
under the condition that weather content information does not exist in the text content of the text, acquiring time content information in the text, and taking the weather information corresponding to the time content information as weather information associated with the text;
and under the condition that the weather content information and the time content information do not exist in the text content of the text, acquiring the text recording time of the text, and taking the weather information corresponding to the text recording time as the weather information associated with the text.
8. The document display apparatus according to claim 6, wherein said apparatus further comprises:
the first analysis module is used for analyzing each statement in the text to obtain the emotion type information of each statement;
and a second association module for taking target emotion type information among the pieces of emotion type information as the emotion type information associated with the text, wherein the target emotion type information is the emotion type information with the most repetitions among the pieces of emotion type information.
9. The document display apparatus according to claim 6, wherein the display interface of the target document further comprises: n first sub-interfaces, the first sub-interfaces are displayed on the background image in a floating manner, and the display content of the first sub-interfaces comprises: and preset information is acquired based on the text recording time of the text, wherein N is a positive integer.
10. The document display apparatus according to claim 9, wherein said apparatus further comprises:
the second display module is used for displaying M first sub-interfaces on a document editing interface of the target document;
a second receiving module, configured to receive a second input for moving a first target sub-interface of the M first sub-interfaces to target text in the target document;
and the adjusting module is used for responding to the second input, adjusting the display parameters of the target text based on the emotion type information corresponding to the target text, and canceling the display of the first target sub-interface.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211169934.9A CN115481598A (en) | 2022-09-22 | 2022-09-22 | Document display method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115481598A true CN115481598A (en) | 2022-12-16 |
Family
ID=84393511
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211169934.9A Pending CN115481598A (en) | 2022-09-22 | 2022-09-22 | Document display method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115481598A (en) |
- 2022-09-22: CN application CN202211169934.9A filed; published as CN115481598A (status: pending)
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |