CN113641836A - Display method and related device


Info

Publication number: CN113641836A
Application number: CN202110961076.0A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: displayed, character, display, text, characters
Inventors: 储德宝, 王晓斐, 王帆
Current and original assignee: Anhui Toycloud Technology Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application filed by Anhui Toycloud Technology Co Ltd, with priority to CN202110961076.0A


Classifications

    • G06F 16/40: Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/44: Browsing; visualisation therefor
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0487: Interaction techniques based on GUI using specific features provided by the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/16: Sound input; sound output

Landscapes

  • Engineering & Computer Science
  • Theoretical Computer Science
  • General Engineering & Computer Science
  • Physics & Mathematics
  • General Physics & Mathematics
  • Human Computer Interaction
  • Health & Medical Sciences
  • Audiology, Speech & Language Pathology
  • General Health & Medical Sciences
  • Multimedia
  • Data Mining & Analysis
  • Databases & Information Systems
  • User Interface of Digital Computer

Abstract

The application discloses a display method and a related device. The method comprises the following steps: after a display device receives a text display request triggered by a user, requesting display of text to be displayed, the device first acquires device display data of the text to be displayed and a mimetic character corresponding to that text; it then displays both the device display data and the mimetic character, so that the mimetic character changes as the device display data is displayed. The display device thereby simulates the mimetic character introducing the text to be displayed, which improves the text display effect and the user experience.

Description

Display method and related device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a display method and related devices.
Background
Currently, for some display devices (e.g., a scanning pen with a display screen), after certain text is acquired (e.g., "Quiet Night Thought"), the device usually displays the text, or content related to it (e.g., the ancient poem titled "Quiet Night Thought"), directly to the user in plain text form so that the user can read it from the display device.
However, this text display manner has drawbacks that make the display effect poor, and the user experience suffers accordingly.
Disclosure of Invention
The embodiments of the present application mainly aim to provide a display method and a related device that improve the text display effect and thereby help improve the user experience.
An embodiment of the present application provides a display method, applied to a display device, comprising the following steps:
receiving a text display request triggered by a user, the text display request requesting display of text to be displayed;
acquiring device display data of the text to be displayed and a mimetic character corresponding to the text to be displayed;
displaying the mimetic character corresponding to the text to be displayed and the device display data of the text to be displayed, wherein the mimetic character changes with the display process of the device display data.
In one possible implementation, the device display data includes audio display data, and the mimetic character changes as the audio display data is played.
In one possible implementation, acquiring the mimetic character corresponding to the text to be displayed comprises:
determining an audio playing time sequence and a character parameter change sequence according to the audio display data;
generating a mimetic character display sequence corresponding to the text to be displayed according to a preset initial character and the character parameter change sequence;
and displaying the mimetic character corresponding to the text to be displayed and the device display data comprises:
displaying, while the audio display data is played, the mimetic character display sequence according to the audio playing time sequence.
In one possible implementation, the character parameter change sequence includes at least one of: a parameter change sequence of at least one body part, a parameter change sequence of at least one character action, a parameter change sequence of at least one wearing object, a parameter change sequence of at least one handheld object, and a parameter change sequence of the character background.
In one possible implementation, the device display data further comprises a text display sequence;
the text display sequence is acquired by:
determining an audio playing time sequence according to the audio display data;
determining the text display sequence according to the audio display data, the audio playing time sequence, and the text to be displayed;
and displaying the device display data comprises:
displaying, while the audio display data is played, the text display sequence according to the audio playing time sequence.
In one possible implementation, the number of characters in the text to be displayed reaches a preset threshold;
and displaying the text display sequence according to the audio playing time sequence comprises:
updating and displaying the text display sequence according to the audio playing time sequence and a preset device display text update rule.
An embodiment of the present application further provides a display apparatus, comprising:
a receiving unit configured to receive a text display request triggered by a user, the request requesting display of text to be displayed;
an acquisition unit configured to acquire device display data of the text to be displayed and a mimetic character corresponding to the text to be displayed;
a display unit configured to display the mimetic character and the device display data, wherein the mimetic character changes with the display process of the device display data.
An embodiment of the present application further provides a device comprising a processor, a memory, and a system bus;
the processor and the memory are connected through the system bus;
the memory stores one or more programs comprising instructions which, when executed by the processor, cause the processor to execute any implementation of the display method provided by the embodiments of the present application.
An embodiment of the present application further provides a computer-readable storage medium storing instructions which, when run on a terminal device, cause the terminal device to execute any implementation of the display method provided by the embodiments of the present application.
An embodiment of the present application further provides a computer program product which, when run on a terminal device, causes the terminal device to execute any implementation of the display method provided by the embodiments of the present application.
The above technical solutions have the following beneficial effects:
after a display device receives a text display request triggered by a user, requesting display of text to be displayed, the device first acquires device display data of the text to be displayed and a mimetic character corresponding to that text, then displays both on the display device so that the mimetic character changes as the device display data is displayed. The display device thereby simulates the mimetic character introducing the text to be displayed, which improves the text display effect and the user experience.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a display method according to an embodiment of the present disclosure;
fig. 2 is a schematic view of a display interface of a display device according to an embodiment of the present disclosure;
fig. 3 is a schematic view of a display interface of another display device according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of expressions of a mimetic character provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a display device according to an embodiment of the present application.
Detailed Description
The inventors found, in studying display devices (e.g., a scanning pen with a screen), that the text display manner described in the Background section (i.e., displaying only text) has the following drawback: because it presents the content in plain text form only, the display manner is monotonous, and the displayed content lacks interest and vividness. As a result, users may struggle to understand the displayed content, which leads to a poor user experience.
Based on these findings, and to solve the technical problems noted in the Background section, an embodiment of the present application provides a display method: after a display device receives a text display request triggered by a user, requesting display of text to be displayed, the device first acquires device display data of the text to be displayed and a mimetic character corresponding to that text; it then displays both, so that the mimetic character changes as the device display data is displayed. The display device thereby simulates the mimetic character introducing the text to be displayed, which improves the text display effect and the user experience.
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below with reference to the drawings. The described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present application.
Method embodiment one
Referring to fig. 1, the figure is a flowchart of a display method provided in an embodiment of the present application.
The display method provided by the embodiment of the present application comprises the following steps S1-S3:
S1: the display device receives a text display request triggered by a user.
The display device is a terminal device with an information display function. The embodiments of the present application do not limit the display device; for example, it may be a smartphone, a computer, a personal digital assistant (PDA), a tablet computer, a scanning pen with a display screen, or a learning-assistance device with a display screen.
"User" refers to a user of the display device.
The text display request requests display of the text to be displayed. The embodiments of the present application do not limit how the text display request is triggered. For example, if the display device is a scanning pen with a display screen, the request may be triggered as follows: after the user finishes scanning the target text (e.g., "Quiet Night Thought") with the scanning pen, the user presses a preset button on the pen; that button is used to trigger the text display request.
"Text to be displayed" refers to the textual content the display device needs to show in response to the text display request. The embodiments of the present application do not limit how it is determined; for ease of understanding, three examples follow.
Example 1: the textual content carried by the text display request is directly used as the text to be displayed.
Example 2: when the text display request carries a word to be used, the text to be displayed is found by searching a preset database for text data matching that word (as sketched below).
The "word to be used" is the textual content carried by the text display request. The embodiments of the present application do not limit how it is obtained; for example, it may be textual content input by the user through an input component of the display device (e.g., content captured by a scanning component).
The preset database may be configured in advance or determined from an operation the user performs on the display device (e.g., when the user selects the poetry search function, the poetry database is used as the preset database).
"Text data matching the word to be used" refers to text data recorded in the preset database that is associated with the word to be used. For example, when the preset database is a database of ancient poems and the word to be used is "Quiet Night Thought", the matching text data may be the ancient poem titled "Quiet Night Thought" (such as the poem shown in fig. 2). As another example, when the preset database is a word interpretation database and the word to be used is "beauty", the matching text data may be the interpretation of "beauty" (such as the content shown in fig. 3).
Example 3: when the text display request carries a word to be used, the text to be displayed is obtained by applying preset processing to that word.
The preset processing is a predetermined procedure applied to the word to be used. The embodiments of the present application do not limit it; for example, it may be a translation procedure or a word-correction procedure.
The embodiments of the present application also do not limit how the preset processing is determined. For example, it may be determined from the functions of the display device (e.g., if the display device is a scanning pen with a Chinese-English translation function, the preset processing may include a Chinese-English or an English-Chinese translation procedure).
In some cases the display device has many functions, and to determine the preset processing more accurately, the user's operation behavior on the device may additionally be referred to. The embodiments of the present application therefore provide another possible implementation for determining the preset processing: the preset processing is determined according to at least one function of the display device and at least one operation the user triggered on the device within a time period to be used.
"At least one function of the display device" indicates the word-processing functions of the display device.
The "time period to be used" is a period close to the moment the display device receives the text display request; it may be expressed as [the time the request is received - a preset duration, the time the request is received], where the preset duration may be configured in advance.
"At least one operation" represents the operation behavior the user triggered on the display device during the time period to be used. The embodiments of the present application do not limit it; it may be a single operation (e.g., clicking a translate-to-English button) or a sequence of operations (e.g., click the translation function → select a translation direction from multiple candidates).
Based on Example 3: after the user inputs the word to be used and triggers the text display request, the display device extracts the word from the request, applies the preset processing to it, obtains the corresponding processing result, and uses that result as the text to be displayed. The device thereby achieves the purpose of displaying the processing result corresponding to the word to be used.
In summary of S1: when the user wants a display device (e.g., a scanning pen with a display screen) to display certain textual information (e.g., the ancient poem related to "Quiet Night Thought"), the user triggers a text display request on the device carrying the relevant content (e.g., "Quiet Night Thought"). After receiving the request, the display device determines the text to be displayed (e.g., the ancient poem titled "Quiet Night Thought") from the content carried by the request, so that it can subsequently meet the user's needs by displaying that text and/or display data related to it (e.g., the device display data and the mimetic character described below).
S2: the display device acquires the device display data of the text to be displayed and the mimetic character corresponding to the text to be displayed.
"Device display data of the text to be displayed" refers to multimedia data, related to the text to be displayed, that is shown on the display device.
The embodiments of the present application do not limit this data; for example, it may include at least one of text display data, audio display data, picture display data, and video display data of the text to be displayed.
The text display data expresses the text to be displayed in textual form. The embodiments of the present application do not limit it; for example, it may be text data for displaying the text statically, or text data for displaying it dynamically (e.g., the "text display sequence" described below).
The audio display data expresses the text to be displayed in audio form. The embodiments of the present application do not limit how it is presented on the display device. For example, it may be shown at a first position of the display interface, so that the user can play and pause it by tapping it. Alternatively, it may be hidden, so that it does not appear on the display interface, but the user can play and pause it by tapping any position on the interface.
The picture display data expresses the text to be displayed in picture form. The embodiments of the present application do not limit how it is presented; for example, it may be displayed as the text background, or at a second position of the display interface.
The video display data expresses the text to be displayed in video form. The embodiments of the present application do not limit how it is presented; for example, it may be displayed at a third position of the display interface, so that the user can play and pause it by tapping it.
The embodiments of the present application also do not limit how the device display data is acquired. For example, any existing or future method that can determine multimedia data related to the text to be displayed from the text itself may be used. As another example, the determination process shown in Method Embodiment Two below may be used.
The "mimicry character corresponding to the character to be displayed" refers to a virtual character (such as the mimicry character shown in fig. 2) that needs to be displayed simultaneously when the "device display data of the character to be displayed" is displayed by the display device, so that the "mimicry character corresponding to the character to be displayed" serves as a voice announcer during the display process of the "device display data of the character to be displayed".
In addition, the embodiment of the present application is not limited to the "mimetic character corresponding to the character to be presented", for example, the "mimetic character corresponding to the character to be presented" may refer to a statically displayed avatar model (e.g., a three-dimensional data model). For example, the "mimetic character corresponding to the character to be displayed" may also refer to a dynamically displayed virtual character model, and the expression manner of the "dynamically displayed virtual character model" is not limited in the embodiment of the present application, and may be expressed by a "mimetic character display sequence" shown below.
The embodiments of the present application do not limit how the mimetic character corresponding to the text to be displayed is determined; for ease of understanding, an example follows.
As an example, when the device display data includes audio display data of the text to be displayed, the mimetic character may be determined through steps 11-12:
Step 11: determine an audio playing time sequence and a character parameter change sequence according to the audio display data of the text to be displayed.
The audio playing time sequence describes the playing time information of each frame of audio data in the audio display data.
The embodiments of the present application do not limit this sequence. For example, if the audio display data includes J frames of audio data, the audio playing time sequence may include J time-point entries, where the j-th entry represents the playing time characteristic of the j-th audio frame (e.g., the time at which that frame is played). Here j and J are positive integers with j ≤ J.
The "character parameter change sequence" is used to describe the character configuration parameters (e.g., facial expressions, character motions, character backgrounds, etc.) that the "mimic character corresponding to the character to be shown" has and can change with the playing process of the "audio display data of the character to be shown".
In addition, the embodiment of the present application is not limited to the "character parameter variation sequence", for example, if the "audio display data of the character to be displayed" includes J frames of audio data, the "character parameter variation sequence" may include J pieces of character parameter description data, and the J-th piece of character parameter description data in the "character parameter variation sequence" refers to a character configuration parameter of the "mimetic character corresponding to the character to be displayed" when the J-th frame of audio data in the "audio display data of the character to be displayed" is played. Wherein J is a positive integer, J is less than or equal to J, and J is a positive integer.
In addition, the embodiment of the present application does not limit the "human parameter variation sequence", and for example, it may specifically include at least one of a parameter variation sequence of at least one body part, a parameter variation sequence of at least one human action, a parameter variation sequence of at least one wearing object, a parameter variation sequence of at least one handheld object, and a parameter variation sequence of a human background.
The "parameter change sequence of at least one body part" is used for describing the change of at least one body part of the "mimicry character corresponding to the character to be shown" along with the playing process of the "audio display data of the character to be shown"; furthermore, the embodiment of the present application does not limit "the variation sequence of the parameter of at least one body part", and for example, it may include a variation sequence of a face parameter of a person, a variation sequence of a mouth parameter of a person, and a variation sequence of a head parameter of a person.
The character face parameter change sequence is used for describing the change of the facial expression of the mimic character corresponding to the character to be displayed along with the playing process of the audio display data of the character to be displayed; furthermore, the embodiment of the present application is not limited to the above "facial expressions", and for example, as shown in fig. 4, the "mimicry character corresponding to the character to be displayed" may have facial expressions such as serious, pleasure, cold sweat, and the like.
The "character mouth parameter change sequence" is used to describe the change (for example, opening → closing → … … → closing) of the mouth of the "mimicry character corresponding to the character to be shown" along with the playing process of the "audio display data of the character to be shown", so that the mouth of the "mimicry character corresponding to the character to be shown" can be matched with the playing process of the "audio display data of the character to be shown", and thus the process of broadcasting the audio information carried by the "audio display data of the character to be shown" by the "mimicry character corresponding to the character to be shown" can be simulated.
The "character head parameter change sequence" is used to describe the change (e.g., shaking the head, nodding the head, shaking the head, etc.) of the head of the "mimic character corresponding to the character to be displayed" along with the playing process of the "audio display data of the character to be displayed".
The parameter change sequence of at least one character action is used for describing the change of the body action of the simulated character corresponding to the character to be shown along with the playing process of the audio display data of the character to be shown; also, the embodiments of the present application do not limit "body motion", and for example, it may include turning, hitting a blackboard, hitting a table, or stunning.
The "parameter change sequence of at least one wearing object" is used to describe the change of the wearing object of the "mimic character corresponding to the character to be shown" along with the playing process of the "audio display data of the character to be shown" (for example, the state change process of the wearing object, the color change process of the wearing object, the type change process of the wearing object, and the like); also, the embodiments of the present application are not limited to "clothing", and may include, for example, hats (such as the figure 2 and the doctor's hat shown in figure 3), coats, eyes, shoes, and the like.
The "parameter change sequence of at least one handheld object" is used to describe the change of the handheld object of the "mimicry character corresponding to the character to be displayed" along with the playing process of the "audio display data of the character to be displayed" (for example, the state change process of the handheld object, the color change process of the handheld object, the type change process of the handheld object, and the like); furthermore, the embodiment of the present application is not limited to the "handheld object", and for example, the handheld object may be a ring, a baton, a virtual book, a virtual computer, or the like.
The 'parameter change sequence of character background' is used for describing the change of the environment where the 'mimicry character corresponding to the character to be shown' is located along with the playing process of the 'audio display data of the character to be shown'; the "environment" is not limited in the embodiment of the application, and for example, the environment may include a classroom simulation scene, a mimicry or a mimicry cartoon character table-knocking scene, a night sky simulation scene, a sunny scene with high sun light, a natural weather scene such as rain and snow, and the like, which are preset 3D simulation scenes.
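One entry of the character parameter change sequence can be sketched as a small record type. The field names and value types below are illustrative assumptions, since the patent only enumerates the kinds of parameters:

```python
from dataclasses import dataclass, field

@dataclass
class CharacterFrameParams:
    """Character configuration for one frame of audio data (one entry of the
    character parameter change sequence). All fields are optional: a frame
    may change only some aspects of the mimetic character."""
    body_parts: dict[str, str] = field(default_factory=dict)  # e.g. {"face": "pleased", "mouth": "open"}
    action: str | None = None                                 # e.g. "nod", "tap_blackboard"
    wearing: dict[str, str] = field(default_factory=dict)     # e.g. {"hat": "doctor_hat"}
    handheld: str | None = None                                # e.g. "baton", "virtual_book"
    background: str | None = None                              # e.g. "classroom", "night_sky"

# The character parameter change sequence for J audio frames is then simply
# a list of J such entries: list[CharacterFrameParams].
```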
In addition, the embodiment of the present application does not limit the determination process of the "character parameter change sequence", for example, the character parameter change sequence is generated according to a preset character parameter configuration rule corresponding to the text to be displayed and the "audio playing time sequence". The preset character parameter configuration rule corresponding to the character to be displayed refers to the character characteristics of the character which is simulated and explained by the virtual character in advance.
Based on the related content in step 11, after the audio display data of the text to be displayed is obtained, an audio playing time sequence may be determined according to the audio display data, so that the audio playing time sequence can accurately represent the playing time information of each frame of audio data in the audio display data, and then a character parameter change sequence is generated based on the audio display data and the audio playing time sequence, so that the character parameter change sequence can accurately represent the character characteristics of the mimicry character corresponding to the text to be displayed in the playing process of the audio display data of the text to be displayed, thereby achieving the purpose of simulating and broadcasting the text to be displayed by the mimicry character corresponding to the text to be displayed.
Step 12: and generating a mimic character display sequence corresponding to the characters to be displayed according to the preset initial characters and the character parameter change sequence.
The 'preset initial character' refers to a virtual character model with default configuration parameters (especially, a virtual character model preset for characters to be displayed); in addition, the acquiring process of the "preset initial character" is not limited in the embodiment of the application, and for example, the acquiring process may be preset by a technician, or may be determined according to the personalized setting of the user for the virtual character model.
The mimic character display sequence corresponding to the character to be displayed is used for describing the change process of the mimic character corresponding to the character to be displayed along with the playing process of the audio display data of the character to be displayed.
In addition, the embodiment of the present application does not limit the "mimetic character display sequence corresponding to the character to be displayed", for example, if the "audio display data of the character to be displayed" includes J frames of audio data, the "mimetic character display sequence corresponding to the character to be displayed" includes J virtual characters, and the jth virtual character in the "mimetic character display sequence corresponding to the character to be displayed" refers to the mimetic character displayed by the display device when the display device plays the jth frame of audio data in the "audio display data of the character to be displayed". Wherein J is a positive integer, J is less than or equal to J, and J is a positive integer.
Based on the related contents in the above steps 11 to 12, after the audio display data of the text to be displayed is obtained, an audio playing time sequence and a character parameter change sequence may be determined according to the audio display data; and then, carrying out character parameter configuration processing on a preset initial character according to the character parameter change sequence to obtain a mimic character display sequence corresponding to the character to be displayed, so that the mimic character display sequence corresponding to the character to be displayed can represent a mimic character which dynamically changes along with the playing process of the audio display data of the character to be displayed.
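Steps 11-12 can be sketched as follows, building on the hypothetical CharacterFrameParams above. This is a minimal illustration under assumptions: the audio timeline is derived from a fixed frame duration, and applying one frame's parameters is reduced to merging fields (a real renderer would configure a 2D/3D avatar model):

```python
from dataclasses import replace

FRAME_SECONDS = 0.04  # assumed fixed frame duration of the audio display data

def audio_playing_time_sequence(num_frames: int) -> list[float]:
    """Step 11 (first half): playing time of each of the J audio frames."""
    return [j * FRAME_SECONDS for j in range(num_frames)]

def mimetic_character_display_sequence(
    initial: CharacterFrameParams,
    param_changes: list[CharacterFrameParams],
) -> list[CharacterFrameParams]:
    """Step 12: apply each frame's parameter changes to the preset initial
    character, yielding one fully configured character per audio frame."""
    sequence = []
    current = initial
    for change in param_changes:
        current = replace(
            current,
            body_parts={**current.body_parts, **change.body_parts},
            action=change.action or current.action,
            wearing={**current.wearing, **change.wearing},
            handheld=change.handheld or current.handheld,
            background=change.background or current.background,
        )
        sequence.append(current)
    return sequence
```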
In addition, to further improve the text display effect, the corresponding textual content can be shown synchronously while the mimetic character simulates broadcasting the text to be displayed. The embodiments of the present application therefore provide another possible implementation of the device display data: besides the audio display data, it may further include a "text display sequence of the text to be displayed".
The text display sequence describes how the textual content shown on the display device changes as the audio display data is played.
For example, if the audio display data includes J frames of audio data, the text display sequence may include J entries of device display text data, where the j-th entry is the textual content the display device needs to show while it plays the j-th audio frame (j and J positive integers, j ≤ J).
The embodiments of the present application do not limit the j-th entry of device display text data; for example, it may include the textual information carried by the j-th audio frame.
In addition, the embodiment of the present application does not limit the determination process of the "text display sequence of the text to be displayed", for example, the determination process may specifically include steps 21 to 22:
step 21: and determining an audio playing time sequence according to the audio display data of the characters to be displayed.
It should be noted that the relevant content of the "audio playing time sequence" refers to the relevant content of step 11 above.
Step 22: and determining the character display sequence of the characters to be displayed according to the audio display data, the audio playing time sequence and the characters to be displayed of the characters to be displayed.
In the embodiments of the present application, after the audio display data and its audio playing time sequence are obtained, all characters of the text to be displayed are first collected, following the text broadcasting direction, into a word set to be used together with its arrangement order. Then, for each frame j = 1, 2, ..., J in turn: the words matching the textual content carried by the j-th audio frame are searched for in the word set according to its arrangement order and taken as the j-th device display text data of the text display sequence; the matched words are then deleted from the word set, and its arrangement order is updated so that it no longer includes the sequence numbers of the words already matched.
The "text broadcasting direction" is the order in which the audio display data plays through the text to be displayed.
In summary of steps 21-22: after the audio display data is obtained, the audio playing time sequence is determined from it; then, with reference to the audio display data, the time sequence, and the text to be displayed, the text display sequence is determined, so that it accurately reflects how the displayed textual content changes while the display device plays the audio display data.
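The per-frame matching loop of step 22 can be sketched as follows, assuming each audio frame is already annotated with the words it carries (how that annotation is produced, e.g. by forced alignment, is outside the sketch):

```python
def text_display_sequence(frame_words: list[str], text_to_display: str) -> list[str]:
    """Step 22 sketch: for each audio frame j, find the words of the text to
    be displayed that match the frame's carried content, emit them as the
    j-th device display text data, and remove them from the remaining pool so
    the arrangement order stays consistent with the broadcasting direction.

    frame_words[j] is the textual content carried by the j-th audio frame
    (an assumption about the input representation); here a "word" is one
    character, as in Chinese text."""
    remaining = list(text_to_display)      # word set to be used, in broadcast order
    sequence = []
    for carried in frame_words:
        matched = []
        for ch in carried:
            if ch in remaining:
                remaining.remove(ch)       # delete matched words, update the order
                matched.append(ch)
        sequence.append("".join(matched))  # j-th device display text data
    return sequence

# Example: three frames carrying the opening of the poem, one character each.
# text_display_sequence(["床", "前", "明"], "床前明月光") -> ["床", "前", "明"]
```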
The embodiments of the present application do not limit how the device display data in S2 is acquired. For example, the display device may determine it directly from the text to be displayed. As another example, the display device may first send the text to be displayed (or the text display request) to a server, which determines the device display data and feeds it back to the display device.
Likewise, the embodiments of the present application do not limit how the mimetic character in S2 is acquired. The display device may determine it directly from the text to be displayed, or send the text (or the request) to a server, which determines the mimetic character from the text (or the request) and the device display data and feeds it back to the display device.
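When the server-assisted path is used, the exchange can be sketched as below. The endpoint, payload, and response field names are all illustrative assumptions; the patent does not specify a protocol.

```python
import json
import urllib.request

def fetch_display_assets(text_to_display: str,
                         server: str = "http://server.example") -> dict:
    """Ask a server for the device display data and the mimetic character of
    the text to be displayed (hypothetical endpoint and response shape)."""
    payload = json.dumps({"text": text_to_display}).encode("utf-8")
    req = urllib.request.Request(
        f"{server}/display-assets", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Expected (assumed) keys: "device_display_data", "mimetic_character"
        return json.load(resp)
```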
In summary of S2: after obtaining the text to be displayed, the display device acquires, through a preset approach, the device display data of that text and the corresponding mimetic character, so that it can display them subsequently. The preset approach may mean that the display device generates them autonomously, or that it obtains them with the help of a server.
S3: the display device displays the mimetic character corresponding to the text to be displayed and the device display data of the text to be displayed, where the mimetic character changes with the display process of the device display data.
In the embodiments of the present application, after acquiring the mimetic character and the device display data, the display device displays both, so that the mimetic character changes with the display process of the device display data. For ease of understanding, three examples follow.
Example 1: when the device display data includes the audio display data and the mimetic character is given by the mimetic character display sequence, S3 may specifically include: while playing the audio display data, the display device shows the mimetic character display sequence according to the audio playing time sequence. The mimetic character thus changes dynamically with playback, simulating the character broadcasting the text to be displayed, which helps improve the text display effect.
Example 2: when the device display data includes the audio display data and the text display sequence, S3 may specifically include: while playing the audio display data, the display device shows the text display sequence according to the audio playing time sequence, so that the displayed text stays synchronized with the audio content being played.
Example 3: when the device display data includes the audio display data and the text display sequence, and the mimetic character is given by the mimetic character display sequence, S3 may specifically include: while playing the audio display data, the display device shows both the mimetic character display sequence and the text display sequence according to the audio playing time sequence, so that the displayed text remains consistent with what the mimetic character is broadcasting (a playback-loop sketch follows these examples).
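Example 3's synchronized display can be sketched as a playback loop driven by the audio playing time sequence. The three output routines are placeholders for the device's actual audio and rendering paths:

```python
import time

def play_audio_frame(frame: bytes) -> None:
    pass  # placeholder: hand the frame to the device's audio output

def render_character(character) -> None:
    pass  # placeholder: draw the mimetic character for this frame

def render_text(text: str) -> None:
    pass  # placeholder: draw the device display text for this frame

def play_synchronized(times: list[float],
                      audio_frames: list[bytes],
                      character_frames: list,
                      text_frames: list[str]) -> None:
    """At each entry of the audio playing time sequence, play the j-th audio
    frame and show the j-th mimetic character and device display text, so all
    three stay in step (S3, Example 3)."""
    start = time.monotonic()
    for t, audio, character, text in zip(times, audio_frames,
                                         character_frames, text_frames):
        time.sleep(max(0.0, t - (time.monotonic() - start)))  # wait for frame time
        play_audio_frame(audio)
        render_character(character)
        render_text(text)
```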
In some cases a long text cannot be shown on the display device in its entirety. To improve the display effect, it can be shown as a horizontally scrolling subtitle, a vertically scrolling subtitle, or a carousel subtitle. The embodiments of the present application therefore provide another possible implementation of S3: when the device display data includes the audio display data and the text display sequence, and the number of characters in the text to be displayed reaches a preset threshold, the display device, while playing the audio display data, updates and displays the text display sequence according to the audio playing time sequence and a preset device display text update rule. The displayed textual content is thus updated dynamically with playback and always includes the content currently being played, so that the mimetic character can simulate broadcasting the long text.
The preset device display text update rule may be configured in advance; for example, it realizes the effect of displaying a long text as a horizontally scrolling, vertically scrolling, or carousel subtitle on the display device (one such rule is sketched below).
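One concrete form of the update rule, a scrolling window over a long text, can be sketched like this. The window size and the centering rule are assumptions; the patent leaves the rule to configuration:

```python
def scrolled_window(text_to_display: str, current_index: int, width: int = 5) -> str:
    """Return the part of a long text that should currently be on screen.

    Keeps the character being broadcast (current_index) inside the window, so
    the displayed content always includes what the audio is playing."""
    start = max(0, min(current_index - width // 2, len(text_to_display) - width))
    return text_to_display[start:start + width]

# As the audio advances through "床前明月光疑是地上霜" (10 characters), the window slides:
# scrolled_window("床前明月光疑是地上霜", 0)  -> "床前明月光"
# scrolled_window("床前明月光疑是地上霜", 7)  -> "疑是地上霜"
```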
In summary of S3: after acquiring the mimetic character and the device display data, the display device displays them synchronously in a preset display mode, achieving the purpose of having the mimetic character explain the text to be displayed. For example, as shown in fig. 2, the display device shows the ancient poem "Quiet Night Thought" together with a mimetic character, and the character dynamically broadcasts the poem by voice. As another example, as shown in fig. 3, the display device shows the interpretation of "beauty" together with a mimetic character, and the character dynamically broadcasts that interpretation by voice.
Based on S1-S3: in the display method provided by the embodiments of the present application, after the display device receives a text display request triggered by a user, requesting display of text to be displayed, it first acquires the device display data of that text and the corresponding mimetic character, then displays both, so that the mimetic character changes as the device display data is displayed. The display device thereby simulates the mimetic character introducing the text to be displayed, which improves the text display effect and the user experience.
Method embodiment two
In order to further improve the expression effect of the above "device display data of the characters to be displayed", the embodiment of the present application further provides an implementation manner of determining the "device display data of the characters to be displayed", which may specifically include steps 31 to 33:
step 31: and performing character element analysis on the characters to be displayed to obtain at least one character element of the characters to be displayed.
The characters to be displayed refer to the character content that the display device needs to display; the embodiment of the present application does not limit the obtaining process of the "text to be displayed". For example, the "text to be displayed" may be input by a user through an input device (e.g., a keyboard, a scanning device, etc.) of the display device. For another example, the process of acquiring the "text to be displayed" may also include: after a user triggers a text selection instruction on the display device, the display device may obtain the text to be displayed according to the text selection instruction (for example, if the text selection instruction carries the text to be displayed, the display device may directly extract it from the instruction).
The character element analysis is used for carrying out semantic component analysis processing on the character content; the present embodiment is not limited to the "character element analysis", and may be performed according to a preset character element analysis rule, for example. Here, the "semantic component" refers to a semantic influence factor to be referred to when composing semantic information of one character content.
The term "at least one character element of a character to be displayed" refers to a semantic influence factor that should be referred to when constituting semantic information of the character to be displayed, so that the "at least one character element of the character to be displayed" can completely represent the semantic information of the character to be displayed.
In addition, the embodiment of the present application does not limit "text elements", and for example, when the "text to be presented" belongs to a story class, the "text elements" may include at least one of a time element, a character element, a place element, an event element, and an environment element.
The time element refers to the time description information carried by the characters to be displayed; in addition, the embodiment of the present application does not limit the "time element". For example, the "time element" may be accurate time description information (e.g., 13:20), fuzzy time description information (e.g., noon, dusk, etc.), or environmental time description information (e.g., moonlight, hot sun).
The character element refers to character description information carried by the character to be displayed; moreover, the embodiment of the present application is not limited to "person elements", and for example, the "person element" may be a person name (e.g., xiaoming) or a person relationship (e.g., dad, good friend, etc.).
The 'place element' refers to the place description information carried by the 'text to be displayed'; in addition, the present embodiment does not limit the "location element," and for example, the "location element" may be accurate geographical location description information (for example, XXX prefecture XXX, etc.) or marker location description information (for example, XXX mountain, XXX tower, etc.).
The 'event element' refers to event description information carried by 'characters to be displayed'; furthermore, the embodiments of the present application do not limit "event elements," and for example, "event elements" may be introduction information of the complete event development process or may be substitute event description information (e.g., XXX events).
The environment element refers to environment description information carried by the character to be displayed; also, the embodiments of the present application do not limit "environmental elements," and for example, "environmental elements" may be natural environment description information, living environment description information, era background environment description information, biological environment description information (for example, animal description information, plant description information, etc.).
In addition, the embodiment of the present application does not limit the expression manner of the "at least one text element of the text to be displayed", for example, each text element of the text to be displayed may be expressed by using at least one vocabulary in the text to be displayed. For another example, each character element of the character to be displayed can be represented by using preset text data corresponding to each character element.
The determining process of the at least one text element of the text to be displayed is not limited in the embodiment of the present application, and for example, the determining process may specifically include: and according to a preset character element analysis rule, performing character element analysis on the character to be displayed to obtain at least one character element of the character to be displayed. The "word element analysis rule" is used to specify how word element analysis processing should be performed for one text data.
In fact, semantic influence factors of different types of text contents may be different, so in order to improve accuracy of text element analysis, text element analysis may be performed by using different text element analysis rules for different types of text contents. Based on this, the embodiment of the present application further provides another possible implementation manner of determining "at least one text element of a text to be displayed," which may specifically include: firstly, identifying the text content type to which the text to be displayed belongs from at least one candidate text content type; and then according to a character element analysis rule corresponding to the character content type of the character to be displayed, performing character element analysis on the character to be displayed to obtain at least one character element of the character to be displayed.
The "at least one candidate text content type" refers to a plurality of preset text content types (e.g., a story type, a word introduction type, a knowledge explanation type, etc.).
The embodiment of the present application does not limit the identification process of the text content type to which the text to be displayed belongs, for example, the text content type to be displayed may be identified by using a preset identification rule of each candidate text content type, so as to obtain the text content type to which the text to be displayed belongs. The "identification rule of the candidate text content type" may be used to describe information features of text contents belonging to the candidate content type.
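For ease of understanding, the following is a minimal Python sketch of the two-stage procedure just described, assuming a purely rule-based implementation: first identify the content type of the text to be displayed using per-type marker words, then apply that type's element analysis rule. The type names, marker tables, and cue words are illustrative assumptions.

```python
# Rule-based text element analysis: content-type identification followed
# by per-type cue-word extraction.

STORY_MARKERS = {"once", "one day", "suddenly"}
WORD_INTRO_MARKERS = {"means", "refers to", "definition"}

ELEMENT_RULES = {
    # For story-type text, look for time / person / place cue words.
    "story": {
        "time": {"noon", "dusk", "night", "morning"},
        "person": {"dad", "friend", "xiaoming"},
        "place": {"mountain", "tower", "river"},
    },
    # For word-introduction text, the headword itself is the key element.
    "word_intro": {"headword": set()},
}

def identify_content_type(text: str) -> str:
    """Pick the candidate content type whose markers match the text."""
    lowered = text.lower()
    if any(m in lowered for m in STORY_MARKERS):
        return "story"
    if any(m in lowered for m in WORD_INTRO_MARKERS):
        return "word_intro"
    return "story"  # fall back to the most general rule set

def analyze_text_elements(text: str) -> dict[str, list[str]]:
    """Apply the identified content type's rule to extract text elements."""
    rules = ELEMENT_RULES[identify_content_type(text)]
    lowered = text.lower()
    return {name: [w for w in cues if w in lowered]
            for name, cues in rules.items()}

print(analyze_text_elements("One day at dusk, Xiaoming climbed the mountain."))
```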
Based on the related content of the step 31, after the to-be-displayed text is obtained, text element analysis may be performed on the to-be-displayed text to obtain at least one text element of the to-be-displayed text, so that the text elements can show semantic information of the to-be-displayed text, and a multimedia object capable of expressing the to-be-displayed text (in particular, capable of expressing the semantic information of the to-be-displayed text) can be automatically retrieved from a large number of multimedia objects based on the text elements, thereby being beneficial to achieving the purpose of performing diversified display on the to-be-displayed text.
Step 32: and searching the multimedia object to be used matched with the at least one text element from the at least one candidate multimedia object.
Wherein the "at least one candidate multimedia object" may comprise at least one picture and/or at least one video; moreover, the embodiment of the present application does not limit the manner of obtaining the "at least one candidate multimedia object", for example, the "at least one candidate multimedia object" may be read from a pre-constructed multimedia database (e.g., a picture database and/or a video database, etc.), may also be read from a storage space in which a large amount of multimedia data (e.g., pictures and/or videos) is preset to be stored, and may also be obtained from an internet resource by a preset means (e.g., a crawler processing method).
The "multimedia object to be used" means a multimedia object that matches the above-mentioned "at least one text element" so that the "multimedia object to be used" can represent the above-mentioned "at least one text element" by means of multimedia.
In addition, the embodiment of the present application does not limit the "multimedia object to be used," and may specifically include a picture to be used and/or a video to be used, for example. Here, the "picture to be used" refers to picture data matched with the "at least one character element" so that the "picture to be used" can represent the "at least one character element" by means of a picture form. The "video to be used" means video data matched with the above-mentioned "at least one text element" so that the "video to be used" can represent the above-mentioned "at least one text element" by means of a video form.
In addition, the embodiment of the present application does not limit the acquisition process of the "multimedia object to be used", and for convenience of understanding, the following description is made in conjunction with three cases.
Case 1: if the "at least one candidate multimedia object" includes at least one picture, the acquiring process of the "to-be-used multimedia object" may include: and searching the picture to be used matched with the at least one character element from the at least one picture, and determining the multimedia object to be used according to the picture to be used (for example, the picture to be used can be directly determined as the multimedia object to be used).
Case 2: if the "at least one candidate multimedia object" includes at least one video, the acquiring process of the "to-be-used multimedia object" may include: and searching the video to be used matched with the at least one text element from the at least one video, and determining the multimedia object to be used according to the video to be used (for example, the video to be used can be determined as the multimedia object to be used).
Case 3: if the "at least one candidate multimedia object" includes at least one picture and at least one video, the acquiring process of the "to-be-used multimedia object" may include: firstly, searching a picture to be used matched with at least one character element from the 'at least one picture', and searching a video to be used matched with at least one character element from the 'at least one video'; and then, determining the multimedia object to be used according to the picture to be used and the video to be used (for example, the picture to be used and the video to be used may be aggregated to obtain the multimedia object to be used).
The embodiment of the present application is not limited to the implementation of step 32, for example, in a possible implementation, step 32 may specifically include steps 321 to 323:
step 321: determining the coverage of each candidate multimedia object to at least one text element.
The coverage of the "at least one text element" by the nth candidate multimedia object is used for representing the degree to which the information carried by the nth candidate multimedia object covers the content of the "at least one text element". Here, n is a positive integer, n is less than or equal to N, and N represents the number of candidate multimedia objects.
In addition, the embodiment of the present application does not limit "the nth candidate multimedia object," for example, the nth candidate multimedia object may be a picture or a video.
In addition, the embodiment of the present application does not limit the determination process of the coverage of at least one text element by the nth candidate multimedia object, for example, it may specifically include steps 41 to 42:
step 41: and matching the object description information of the nth candidate multimedia object with at least one character element to obtain a matching result of the nth candidate multimedia object.
The "object description information of the nth candidate multimedia object" is used to represent information carried by the nth candidate multimedia object; furthermore, the embodiment of the present application does not limit "the object description information of the nth candidate multimedia object", for example, it may specifically include an object description text of the nth candidate multimedia object and/or at least one object description vocabulary of the nth candidate multimedia object.
The "object description text of the nth candidate multimedia object" is used for representing the information carried by the nth candidate multimedia object in text form; furthermore, the embodiment of the present application does not limit the expression of the "object description text of the nth candidate multimedia object". For example, it may be text data such as "the nth candidate multimedia object shows the sea, with a chair on the beach beside the sea and a person lying on the chair".
In addition, the embodiment of the present application does not limit the process of acquiring the object description text of the nth candidate multimedia object, and for example, the process may be implemented in a manner of manual annotation. As another example, any existing or future process that can perform object description text conversion on multimedia data (e.g., pictures or videos) may be used.
"at least one object description vocabulary for an nth candidate multimedia object" is used to represent information carried by the nth candidate multimedia object by means of a plurality of vocabularies (e.g., sea, chair, person, etc.); furthermore, the embodiment of the present application does not limit "at least one object description vocabulary of the nth candidate multimedia object," which may refer to tag information of the nth candidate multimedia object, for example.
"tag information of the nth candidate multimedia object" is used to indicate what exists in the nth candidate multimedia object; moreover, the embodiment of the present application does not limit the determination process of the tag information of the nth candidate multimedia object, and for example, the determination process may be implemented by manually labeling the tag information, or, for example, may also be implemented by using any existing or future processing process that can perform target identification processing on multimedia data (e.g., pictures or videos).
The "matching result of the nth candidate multimedia object" refers to a matching result between the "object description information of the nth candidate multimedia object" and the "at least one text element" so that the "matching result of the nth candidate multimedia object" can indicate a content coverage degree reached by the "object description information of the nth candidate multimedia object" with respect to the "at least one text element".
In addition, the embodiment of the present application does not limit the determination process of the "matching result of the nth candidate multimedia object", and for the convenience of understanding, the following description is made with reference to three examples.
Example 1, if the "object description information of the nth candidate multimedia object" includes the object description text of the nth candidate multimedia object, the determining process of the "matching result of the nth candidate multimedia object" may specifically include: the object description text can be firstly subjected to word segmentation processing to obtain at least one word segmentation; and then, matching the at least one word segmentation with the at least one character element to obtain a matching result of the nth candidate multimedia object.
Example 2, if the "object description information of the nth candidate multimedia object" includes at least one object description vocabulary of the nth candidate multimedia object, the determining process of the "matching result of the nth candidate multimedia object" may specifically include: and matching at least one object description vocabulary of the nth candidate multimedia object with at least one character element to obtain a matching result of the nth candidate multimedia object.
Example 3, if the "object description information of the nth candidate multimedia object" includes the object description text of the nth candidate multimedia object and at least one object description vocabulary of the nth candidate multimedia object, the determining process of the "matching result of the nth candidate multimedia object" may specifically include steps 51 to 54:
step 51: and performing word segmentation processing on the object description text of the nth candidate multimedia object to obtain at least one word segmentation of the object description text.
Step 52: and matching at least one word segmentation of the object description text with at least one word element to obtain a matching result of the object description text, so that the 'matching result of the object description text' is used for representing the matching result between at least one word segmentation and the at least one word element in the object description text of the nth candidate multimedia object.
Step 53: and matching at least one object description vocabulary of the nth candidate multimedia object with at least one word element to obtain a matching result of the at least one object description vocabulary, so that the 'matching result of the at least one object description vocabulary' is used for expressing the matching result between the at least one object description vocabulary of the nth candidate multimedia object and the at least one word element.
Step 54: and aggregating the matching result of the object description text and the matching result of the at least one object description vocabulary to obtain the matching result of the nth candidate multimedia object.
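For ease of understanding, the following is a minimal Python sketch of steps 51 to 54, assuming that "matching" means exact string equality and that a whitespace split stands in for a real word segmentation step; all names are illustrative.

```python
# Steps 51-54 as set operations: segment the description text, then match
# both the segmented words and the description vocabulary against the
# text elements, and aggregate the two results.

def match_description(description_text: str,
                      description_vocab: list[str],
                      elements: list[str]) -> dict[str, set[str]]:
    # Step 51: word segmentation (a whitespace split stands in here).
    tokens = set(description_text.lower().split())
    element_set = {e.lower() for e in elements}
    # Step 52: match the segmented words against the text elements.
    text_matches = tokens & element_set
    # Step 53: match the description vocabulary against the text elements.
    vocab_matches = {v.lower() for v in description_vocab} & element_set
    # Step 54: aggregate both into the candidate object's matching result.
    return {"text_matches": text_matches, "vocab_matches": vocab_matches}
```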
Based on the related content of step 41, for the nth candidate multimedia object, the object description information of the nth candidate multimedia object may be matched with at least one text element to obtain the matching result of the nth candidate multimedia object, so that the matching result of the nth candidate multimedia object can indicate the content coverage degree of the "object description information of the nth candidate multimedia object" for the "at least one text element".
Step 42: and determining the coverage of the nth candidate multimedia object to at least one text element according to the matching result of the nth candidate multimedia object.
To facilitate an understanding of step 42, three examples are described below.
Example 1, if the "matching result of the nth candidate multimedia object" includes a matching result between at least one word segmentation in the object description text of the nth candidate multimedia object and the "at least one word element", step 42 may specifically include: firstly, determining the number of first matched word pairs according to a matching result between the at least one participle and the at least one character element; and determining the coverage of the nth candidate multimedia object on at least one character element according to the number of the first matching word pairs (for example, directly determining the number of the matching word pairs as the coverage of the nth candidate multimedia object on at least one character element). The "first matching word pair" includes a participle and a text element having a matching relationship.
Example 2, if the "matching result of the nth candidate multimedia object" includes a matching result between at least one object description vocabulary of the nth candidate multimedia object and the "at least one text element", the step 42 may specifically include: firstly, determining the number of second matched word pairs according to the matching result between the at least one object description vocabulary and the at least one character element; and determining the coverage of the nth candidate multimedia object on at least one character element according to the number of the second matching word pairs (for example, directly determining the number of the matching word pairs as the coverage of the nth candidate multimedia object on at least one character element). The "second matching word pair" includes an object description word and a word element having a matching relationship.
Example 3, if the "matching result of the nth candidate multimedia object" includes a matching result between at least one participle in the object description text of the nth candidate multimedia object and the "at least one text element" and a matching result between at least one object description vocabulary of the nth candidate multimedia object and the "at least one text element", step 42 may specifically include steps 61-65:
step 61: and determining the number of the first matching word pairs according to the matching result between at least one participle in the object description text of the nth candidate multimedia object and the at least one character element.
Step 62: and determining the matching score of the object description text according to the number of the first matching word pairs.
For example, step 62 may specifically include: and directly determining the number of the first matching word pairs as the matching score of the object description text.
And step 63: and determining the number of the second matching word pairs according to the matching result between at least one object description word of the nth candidate multimedia object and the at least one character element.
Step 64: and determining the matching score of at least one object description vocabulary according to the number of the second matching word pairs.
For example, step 64 may specifically include: and directly determining the number of the second matching word pairs as the matching score of at least one object description word.
Step 65: and performing first statistical analysis processing on the matching score of the object description text and the matching score of the at least one object description word to obtain the coverage of the nth candidate multimedia object to the at least one character element.
Here, the "first statistical analysis processing" may be set in advance, and may specifically be, for example, addition processing, averaging processing, maximum value processing, or minimum value processing.
Based on the related content of step 42, for the nth candidate multimedia object, after the matching result of the nth candidate multimedia object is obtained, the coverage of the nth candidate multimedia object on at least one text element may be determined by referring to the matching result of the nth candidate multimedia object. The matching result of the nth candidate multimedia object can represent the content coverage degree of the object description information of the nth candidate multimedia object aiming at the at least one text element, and the object description information of the nth candidate multimedia object can accurately represent the information carried by the nth candidate multimedia object, so that the matching result of the nth candidate multimedia object can accurately represent the content coverage degree of the information carried by the nth candidate multimedia object aiming at the at least one text element.
Based on the related contents of the above steps 41 to 42, for the "at least one candidate multimedia object" including the multimedia object to be processed, the coverage of the multimedia object to be processed on the at least one text element may be determined by means of the matching result between the object description information of the multimedia object to be processed and the above "at least one text element", so that the coverage of the at least one text element by the multimedia object to be processed "can accurately represent the content coverage degree of the information carried by the multimedia object to be processed on the" at least one text element ". Wherein, the "multimedia object to be processed" is used to represent each candidate multimedia object in the "at least one candidate multimedia object" mentioned above.
Based on the related content of step 321, for the "at least one candidate multimedia object", the coverage of at least one text element by each candidate multimedia object may be determined, so that the coverage of at least one text element by each candidate multimedia object can accurately represent the content coverage of the information carried by each candidate multimedia object for the "at least one text element".
Step 322: and searching a target multimedia object meeting a first condition from at least one candidate multimedia object according to the coverage degree of each candidate multimedia object to at least one character element.
The "first condition" may be preset, and for example, it may specifically include a picture with the maximum coverage; and/or, a video with maximum coverage.
"target multimedia object" is used to represent candidate multimedia objects satisfying a first condition; moreover, the embodiment of the present application is not limited to the "target multimedia object", and for example, the "target multimedia object" may specifically include a picture (e.g., a "target picture" shown below) and/or a video (e.g., a "target video" shown below).
The embodiment of the present application does not limit the implementation of step 322, and for ease of understanding, the following description is made with reference to three examples.
Example 1, when the "at least one candidate multimedia object" includes at least one picture, and the first condition includes a picture with a maximum coverage, step 322 may specifically include: firstly, comparing the coverage of at least one picture to at least one character element to obtain a first maximum coverage, so that the first maximum coverage is the maximum value in the coverage of at least one picture to at least one character element; and then, determining the target multimedia object according to the picture with the first coverage maximum value (for example, the picture with the first coverage maximum value can be directly determined as the target multimedia object).
Example 2, when the "at least one candidate multimedia object" includes at least one video, and the first condition includes a video with a maximum coverage, step 322 may specifically include: firstly, comparing the coverage of at least one text element by at least one video to obtain a second maximum coverage, so that the second maximum coverage is the maximum value in the coverage of at least one text element by at least one video; and then, determining the target multimedia object according to the video with the second coverage maximum value (for example, the video with the second coverage maximum value can be directly determined as the target multimedia object).
Example 3, when the "at least one candidate multimedia object" includes at least one picture and at least one video, and the first condition includes a picture with a maximum coverage and a video with a maximum coverage, step 322 may specifically include steps 3221-3223:
step 3221: comparing the coverages of the at least one text element by the at least one picture to obtain a first maximum coverage, so that the first maximum coverage refers to the maximum value among the coverages of the at least one text element by the at least one picture.
Step 3222: comparing the coverages of the at least one text element by the at least one video to obtain a second maximum coverage, so that the second maximum coverage refers to the maximum value among the coverages of the at least one text element by the at least one video.
Step 3223: determining the target multimedia object according to the picture with the first maximum coverage and the video with the second maximum coverage.
As an example, step 3223 may specifically be: aggregating the picture with the first maximum coverage and the video with the second maximum coverage to obtain the target multimedia object.
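For ease of understanding, the following is a minimal Python sketch of step 322 for example 3: select the picture and the video with the maximum coverage and aggregate them into the target multimedia object. The coverage_of interface is an assumed stand-in for the coverage computed in step 321.

```python
# Step 322, example 3: pick the picture and the video with the maximum
# coverage and aggregate them into the target multimedia object.

from typing import Callable

def find_target(pictures: list[str], videos: list[str],
                coverage_of: Callable[[str], int]) -> dict[str, str]:
    target = {}
    if pictures:
        # Step 3221: the picture with the first maximum coverage.
        target["picture"] = max(pictures, key=coverage_of)
    if videos:
        # Step 3222: the video with the second maximum coverage.
        target["video"] = max(videos, key=coverage_of)
    # Step 3223: the aggregate of both is the target multimedia object.
    return target

# Toy coverage function (string length) just to make the sketch runnable.
print(find_target(["p1", "pic22"], ["v1"], coverage_of=len))
```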
Based on the related content of the step 322, after the coverage of at least one text element by at least one candidate multimedia object is obtained, the coverage of at least one text element by the candidate multimedia objects may be referred to, and a target multimedia object satisfying a first condition may be searched from the candidate multimedia objects, so that the target multimedia object can express the "at least one text element" in a multimedia form.
Step 323: and determining the multimedia object to be used according to the target multimedia object.
In the embodiment of the application, after the target multimedia object is obtained, the target multimedia object may be directly determined as the multimedia object to be used, so that the multimedia object to be used can express the "at least one text element" in a multimedia form, and thus the multimedia object to be used can express the "text to be displayed" in a multimedia form.
In addition, in order to ensure that the multimedia object to be used can cover the "at least one text element" as completely as possible, it may be checked whether the information carried by the target multimedia object covers the "at least one text element" completely. Based on this, the present application provides another possible implementation manner of step 323, which may specifically include: and if the similarity between the target multimedia object and the at least one character element is higher than a preset threshold value, determining the target multimedia object as a multimedia object to be used.
The "similarity between the target multimedia object and the at least one text element" refers to a degree of similarity between information carried by the target multimedia object and the at least one text element, so that the "similarity between the target multimedia object and the at least one text element" can indicate a possibility that the information carried by the target multimedia object can completely cover the "at least one text element".
In addition, the meaning denoted by "similarity between the target multimedia object and the at least one text element" may specifically be: if the similarity between the target multimedia object and the at least one character element is higher, the possibility that the information carried by the target multimedia object can completely cover the at least one character element is higher; if the similarity between the target multimedia object and the at least one character element is smaller, the probability that the information carried by the target multimedia object can completely cover the at least one character element is smaller.
The preset threshold value can be preset according to the application scene; moreover, the embodiment of the present application does not limit the "preset threshold", and for example, the preset threshold may specifically include a picture threshold and/or a video threshold.
To facilitate understanding of the above "another possible implementation of step 323", the following description is made in conjunction with three examples.
Example 1, when the "target multimedia object" includes a target picture and the "preset threshold" includes a picture threshold, step 323 may specifically include: and if the similarity between the target picture and the at least one character element is higher than the picture threshold value, determining the target picture as the multimedia object to be used. Wherein, the "target picture" may refer to the above "picture with the first coverage maximum".
Example 2, when the "target multimedia object" includes a target video and the "preset threshold" includes a video threshold, step 323 may specifically include: and if the similarity between the target video and the at least one character element is higher than the video threshold value, determining the target video as the multimedia object to be used. Wherein "target video" may refer to "video with second coverage maximum" above.
Example 3, when the "target multimedia object" includes a target picture and a target video, and the "preset threshold" includes a picture threshold and a video threshold, step 323 may specifically include steps 3231 to 3233:
step 3231: if the similarity between the target picture and the at least one character element is higher than the picture threshold value, and the similarity between the target video and the at least one character element is higher than the video threshold value, the target picture and the target video are collected to obtain the multimedia object to be used.
Step 3232: and if the similarity between the target picture and the at least one character element is determined to be not higher than the picture threshold value and the similarity between the target video and the at least one character element is determined to be higher than the video threshold value, determining the target video as the multimedia object to be used.
Step 3233: and if the similarity between the target picture and the at least one character element is determined to be higher than the picture threshold value, and the similarity between the target video and the at least one character element is determined not to be higher than the video threshold value, determining the target picture as the multimedia object to be used.
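For ease of understanding, the following is a minimal Python sketch of steps 3231 to 3233, assuming the similarities and thresholds are floats in [0, 1]; the names and default values are illustrative.

```python
# Steps 3231-3233: keep only the media whose similarity to the text
# elements clears its threshold.

def select_to_use(pic_sim: float, vid_sim: float,
                  pic_threshold: float = 0.6,
                  vid_threshold: float = 0.6) -> dict[str, bool]:
    pic_ok = pic_sim > pic_threshold
    vid_ok = vid_sim > vid_threshold
    # Steps 3231-3233: both, only the video, or only the picture.
    # If neither passes, steps 324/325 (prompt + pool update) apply instead.
    return {"use_picture": pic_ok, "use_video": vid_ok}

print(select_to_use(0.8, 0.5))  # -> picture only (step 3233)
```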
Based on the related content of the above "another possible implementation manner of step 323", after the target multimedia object is obtained, it is first determined whether the similarity between the target multimedia object and the at least one text element is higher than a preset threshold value, so as to obtain a determination result; and then, determining the multimedia object to be used according to the judgment result, thereby being beneficial to improving the accuracy of the multimedia object to be used.
Based on the related contents in the steps 321 to 323, after the "at least one text element" is obtained, the coverage of at least one candidate multimedia object to the "at least one text element" may be referred to first, and a target multimedia object satisfying a first condition may be searched from the candidate multimedia objects; and then, referring to the target multimedia object to determine the multimedia object to be used, so that the multimedia object to be used can express the 'at least one character element' in a multimedia form.
In some cases, no multimedia object to be used can be found. In order to improve the user experience in such cases, the embodiment of the present application further provides another possible implementation manner of step 32, in which step 32 includes, in addition to the above steps 321 to 322 and the above "another possible implementation manner of step 323", step 324:
step 324: and if the similarity between the target multimedia object and the at least one character element is not higher than the preset threshold value, determining the character display information of the character to be displayed according to the character to be displayed and the preset prompt information, so that the character display information is displayed by the display equipment.
The preset prompt information is used for expressing that a multimedia object capable of expressing characters to be displayed cannot be automatically retrieved.
The character display information of the characters to be displayed refers to the information that needs to be displayed when the display device displays the characters to be displayed. In addition, the embodiment of the present application does not limit the determining process of the "character display information of the characters to be displayed"; for example, it may specifically include: combining the characters to be displayed and the preset prompt information according to a first combination manner to obtain the character display information of the characters to be displayed. In this way, when the display device displays the character display information, the characters to be displayed are shown in text form, and the preset prompt information is shown according to a preset display manner (e.g., in text form, picture form, voice form, etc.), so that the user learns from the display device that no multimedia object capable of expressing the characters to be displayed could be automatically retrieved, and can take corresponding processing measures (e.g., uploading a picture or video matching the characters to be displayed).
Based on the related content of step 324, after it is determined that the similarity between the target multimedia object and the at least one text element is not higher than the preset threshold value, it may be concluded that the at least one candidate multimedia object contains no multimedia object capable of completely covering the at least one text element, and hence none capable of accurately expressing the characters to be displayed. Therefore, in order to improve the user experience, the character display information of the characters to be displayed may be determined directly according to the characters to be displayed and the preset prompt information, so that when the display device displays the character display information, the characters to be displayed are not only shown in text form, but the user is also notified that no multimedia object matching the characters to be displayed could be automatically retrieved, so that the user can take corresponding measures in response to the prompt information.
In some cases, in order to improve the effect of automatic retrieval of a multimedia object, the present application embodiment further provides another possible implementation manner of step 32, where step 32 includes, in addition to the above-mentioned steps 321 to 322 and the above "another possible implementation manner of step 323" (or, the above-mentioned steps 321 to 322, the above "another possible implementation manner of step 323", and the step 324), step 325:
step 325: if the similarity between the target multimedia object and the at least one character element is determined not to be higher than the preset threshold, after the matching multimedia object of the characters to be displayed is obtained, the matching multimedia object is utilized to update the at least one candidate multimedia object, so that the updated at least one candidate multimedia object comprises the matching multimedia object.
The "matching multimedia object of the text to be displayed" refers to multimedia data (e.g., pictures and/or videos) capable of expressing the text to be displayed in a multimedia form; furthermore, the embodiment of the present application does not limit "the matching multimedia object of the text to be presented", for example, it may refer to multimedia data (e.g., a picture and/or a video) that is manually uploaded by the user with respect to the "text to be presented", or may refer to multimedia data (e.g., a picture and/or a video) that is designed and uploaded by the relevant person with respect to the text to be presented.
Based on the related content of step 325, after determining that "the similarity between the target multimedia object and the at least one text element" is not higher than the preset threshold, the matching multimedia object of the text to be displayed may be obtained by a preset obtaining method (for example, a method of design and upload by a technician, etc.); and then, updating at least one candidate multimedia object by utilizing the matched multimedia object so that the updated 'at least one candidate multimedia object' comprises the matched multimedia object, thereby being convenient for automatically retrieving the matched multimedia object according to the characters to be displayed in the subsequent automatic retrieval process, and being beneficial to improving the automatic retrieval effect of the characters to be displayed.
Based on the related content in the step 32, for the text to be displayed, after the at least one text element of the text to be displayed is obtained, the multimedia object to be used that is matched with the at least one text element may be searched from a large number of candidate multimedia objects, so that the multimedia object to be used can express the text to be displayed in a multimedia form (for example, a form of a drawing and/or a video), so that the multimedia object to be used can be referred to later to generate the device display data of the text to be displayed.
Step 33: and determining equipment display data of the characters to be displayed according to the multimedia objects to be used.
Wherein, the device presentation data of the characters to be presented is used for expressing the characters to be presented in various forms (such as texts, pictures, videos and the like).
In addition, the embodiment of the present application does not limit the determination process of the "device display data of the characters to be displayed", and for convenience of understanding, the following description is made in combination with three cases.
In case 1, when the "multimedia object to be used" includes a picture to be used, the determining process of the "device presentation data of characters to be presented" may specifically include: and synthesizing the picture to be used and the character to be displayed to obtain the equipment display data of the character to be displayed.
Wherein the "synthesis treatment" may be set in advance; furthermore, the embodiment of the present application is not limited to "synthesis processing", and for example, it may specifically include: and taking the picture to be used as a background picture of the character to be displayed.
Based on the above related content of "case 1", after the picture to be used is obtained, the picture to be used may be synthesized as the background picture of the text to be displayed, so as to obtain the device display data of the text to be displayed, so that the picture to be used is always displayed as the background picture when the display device displays the device display data, and the text to be displayed in the text form may be displayed on the picture to be used according to the preset display mode.
It should be noted that the "preset display mode" may be preset, and for example, may specifically be: the characters to be displayed are displayed in a rolling mode along with the playing progress of the audio playing data of the characters to be displayed, until the playing of the audio playing data of the characters to be displayed is finished, the characters to be displayed are completely displayed on the preset position of the picture to be used, and the characters to be displayed and the picture to be used can be displayed on the display equipment at the same time.
In case 2, when the "to-be-used multimedia object" includes a to-be-used video, the determining process of the "to-be-displayed text device display data" may specifically include: and determining the equipment display data of the characters to be displayed according to the video to be used.
As can be seen, after the video to be used is acquired, the device display data of the characters to be displayed may be determined according to the video to be used (for example, the video to be used may be directly determined as the device display data of the characters to be displayed), so that the device display data may express the characters to be displayed in a video form (that is, in a picture form + in a text form + in an audio form).
In addition, the "text to be displayed" may be expressed by only a certain portion of the video to be used, and in this case, in order to further improve the display effect of the "device display data", the "device display data" may be determined by directly using the certain portion of the "video to be used". Based on this, the embodiment of the present application further provides another possible implementation manner of determining "device display data of characters to be displayed", which may specifically include steps 71 to 73:
step 71: and performing preset division processing on the video to be used to obtain at least one candidate video segment.
The preset division processing can be preset according to an application scene; in addition, the embodiment of the present application is not limited to the "preset dividing process", and for example, the preset dividing process may be performed according to a preset dividing interval. The "preset division interval" refers to a distance between two adjacent division positions.
"at least one candidate video segment" refers to each video segment extracted from the video to be used; moreover, the embodiment of the present application does not limit "at least one candidate video segment", for example, the information carried by the "at least one candidate video segment" can completely cover the information carried by the video to be used.
Step 72: and searching a target video segment meeting a second condition from at least one candidate video segment according to the similarity between each candidate video segment and at least one text element.
The similarity between the mth candidate video segment and the at least one text element refers to the degree of similarity between the information carried by the mth candidate video segment and the at least one text element. Here, m is a positive integer, m is less than or equal to M, and M represents the number of candidate video segments in the "at least one candidate video segment".
In addition, the embodiment of the present application does not limit the determination process of "similarity between the mth candidate video segment and at least one text element", for example, it may specifically include steps 81 to 82:
step 81: and extracting picture information to be processed, character information to be processed and audio information to be processed from the mth candidate video segment.
The "to-be-processed picture information" refers to all video information represented in picture form in the mth candidate video segment.
"to-be-processed text information" refers to all video information represented in text form in the mth candidate video segment.
"audio information to be processed" refers to all video information represented in audio form in the m-th candidate video segment.
Step 82: and carrying out weighted summation on the similarity between the picture information to be processed and the at least one character element, the similarity between the text information to be processed and the at least one character element and the similarity between the audio information to be processed and the at least one character element to obtain the similarity between the mth candidate video segment and the at least one character element.
The "similarity between the picture information to be processed and at least one text element" is used to indicate the degree of similarity between the picture information carried by the mth candidate video segment and the at least one text element.
The "similarity between the text information to be processed and at least one text element" is used to indicate the degree of similarity between the character information carried by the mth candidate video segment and the at least one text element.
The "similarity between the audio information to be processed and at least one text element" is used to indicate the degree of similarity between the audio information carried by the mth candidate video segment and the at least one text element.
The weights of the terms involved in the "weighted sum" in step 82 may be preset according to the application scenario.
Based on the related contents of the above steps 81 to 82, for each video segment to be processed in the "at least one candidate video segment", the similarity between the video segment to be processed and the at least one text element can be determined with reference to the similarity between the picture information carried by the segment (i.e., the picture information to be processed) and the at least one text element, the similarity between the character information carried by the segment (i.e., the text information to be processed) and the at least one text element, and the similarity between the audio information carried by the segment (i.e., the audio information to be processed) and the at least one text element, so that the resulting similarity accurately represents the degree of similarity between the information carried by the video segment and the at least one text element. Here, the "video segment to be processed" is used to represent each candidate video segment in the "at least one candidate video segment".
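For ease of understanding, the following is a minimal Python sketch of the weighted summation in step 82, i.e. sim(segment) = w_picture * sim_picture + w_text * sim_text + w_audio * sim_audio. The three per-modality similarities are assumed to be precomputed floats in [0, 1], and the weights shown are illustrative; in practice they would be preset according to the application scenario.

```python
# Step 82: weighted sum of the picture, text, and audio similarities of
# one candidate video segment against the text elements.

def segment_similarity(sim_picture: float, sim_text: float, sim_audio: float,
                       w_picture: float = 0.5, w_text: float = 0.3,
                       w_audio: float = 0.2) -> float:
    return (w_picture * sim_picture
            + w_text * sim_text
            + w_audio * sim_audio)

print(segment_similarity(0.9, 0.6, 0.4))  # -> 0.71
```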
The "second condition" may be set in advance; furthermore, the embodiment of the present application does not limit the "second condition", and for example, it may specifically be: the candidate video segment with the specific maximum similarity.
The "target video segment" refers to a candidate video segment satisfying the second condition.
Based on the related content of step 72, after the at least one candidate video segment is obtained, a maximum similarity value may be determined from the similarities between the candidate video segments and the at least one text element; the candidate video segment with that maximum similarity is then determined as the target video segment, so that the similarity between the target video segment and the at least one text element is greater than the similarity between any other candidate video segment and the at least one text element.
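For ease of understanding, the following is a minimal Python sketch of steps 71 and 72: divide the video to be used at a fixed preset interval into candidate segments, modeled here as (start, end) second pairs, then select the segment with the maximum similarity. The similarity_of interface stands in for the steps 81 to 82 computation sketched above.

```python
# Steps 71-72: fixed-interval division of the video to be used, then
# selection of the candidate segment with the maximum similarity.

from typing import Callable

Segment = tuple[float, float]

def split_video(duration_s: float, interval_s: float = 10.0) -> list[Segment]:
    """Step 71: preset division at a fixed interval."""
    starts = range(0, int(duration_s), int(interval_s))
    return [(float(s), min(s + interval_s, duration_s)) for s in starts]

def pick_target_segment(duration_s: float,
                        similarity_of: Callable[[Segment], float]) -> Segment:
    """Step 72: the candidate segment with the maximum similarity."""
    return max(split_video(duration_s), key=similarity_of)

# Toy similarity that prefers segments near the start of the video.
print(pick_target_segment(25.0, similarity_of=lambda seg: -seg[0]))
# -> (0.0, 10.0)
```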
Step 73: and determining equipment display data of the characters to be displayed according to the target video segment.
In the embodiment of the application, after the target video segment is acquired, the target video segment can be directly determined as the device display data of the characters to be displayed, so that the device display data can express the characters to be displayed in a video form (that is, a text form, a picture form and an audio form) at the same time.
In addition, the text content displayed in the "target video segment" may not be consistent with the text to be displayed, so in order to further improve the expression effect of the "device display data", the embodiment of the present application further provides another possible implementation manner of step 73, which specifically includes: and replacing the text content in the target video segment by the text to be displayed to obtain the equipment display data of the text to be displayed.
It should be noted that, the embodiment of the present application is not limited to the above-mentioned "replacement" implementation, for example, the text content in the target video segment may be deleted first, and then the text to be displayed may be added to the target video segment.
In addition, the audio information recorded in the audio data in the "target video segment" may not be consistent with the text to be displayed, so in order to further improve the expression effect of the "device display data", the embodiment of the present application further provides another possible implementation manner of step 73, which may specifically include: after the audio playing data of the characters to be displayed are acquired, replacing the audio data in the target video segment with the audio playing data of the characters to be displayed to obtain the equipment display data of the characters to be displayed.
It should be noted that the embodiment of the present application is not limited to the above-mentioned "alternative" implementation, for example, the audio playing data of the text to be displayed may be added to the target video segment after the target video segment is muted.
Based on the related contents of the above steps 71 to 73, in some cases, the video segment in the video to be used, which is most suitable for the above "at least one text element", may be used to determine the device display data of the text to be displayed, so that the device display data can better express the text to be displayed.
Based on the related content in the above case 2, after the video to be used is obtained, the device display data of the characters to be displayed may be determined by using the video to be used or a certain portion of the video to be used, so that the device display data may express the characters to be displayed in a video form (i.e., in a text form, a picture form, and an audio form at the same time), so that the device display data may be played on the display device in the following period.
In case 3, when the "to-be-used multimedia object" includes the to-be-used picture and the to-be-used video, the determining process of the "to-be-displayed text device display data" may specifically include steps 331 to 333:
step 331: and determining the character display picture of the characters to be displayed according to the picture to be used.
As an example, step 331 may specifically include: and determining the picture to be used as a character display picture of the characters to be displayed.
Step 332: and determining the character display video of the characters to be displayed according to the video to be used.
It should be noted that, step 332 may be implemented by any one of the embodiments shown in "case 2" in step 33, and only "device display data" in any one of the embodiments shown in "case 2" in step 33 needs to be replaced by "text display video".
Step 333: and combining the characters to be displayed, the character display pictures of the characters to be displayed and the character display videos of the characters to be displayed according to a preset combination mode to obtain the equipment display data of the characters to be displayed.
Wherein, the 'preset combination mode' can be preset; moreover, the embodiment of the present application does not limit the "preset combination manner," and for example, the preset combination manner may specifically include: the method comprises the steps of taking a character display picture of characters to be displayed as a background picture, placing a character display video of the characters to be displayed on a first position (such as the upper left corner) of the background picture for displaying, and placing the characters to be displayed on a second position (such as the middle position) of the background picture in a text form for displaying.
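For ease of understanding, the following is a minimal Python sketch of one such "preset combination manner" for step 333, expressed as a plain layout description that a rendering layer could consume; the field names and file names are illustrative assumptions.

```python
# One possible preset combination manner: background picture, video at a
# first position (top left), and the text at a second position (center).

layout = {
    "background": "text_display_picture.png",    # character display picture
    "video": {"source": "text_display_video.mp4",
              "position": "top_left"},           # first position
    "text": {"content": "characters to be displayed",
             "position": "center"},              # second position
}
print(layout)
```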
Based on the related content in the above case 3, after the picture to be used and the video to be used are obtained, the characters to be displayed, the picture to be used, and the video to be used may be referred to in order to determine the device display data of the characters to be displayed, so that the device display data can express the characters to be displayed in text form, picture form, and video form, which is beneficial to improving the display effect of the characters to be displayed.
Based on the related contents of the above steps 31 to 33, in the display method provided in the embodiment of the present application, after the characters to be displayed are obtained, character element analysis is first performed on the characters to be displayed to obtain at least one character element of the characters to be displayed; a multimedia object to be used that matches the at least one character element is then searched from at least one candidate multimedia object; and finally, the device display data of the characters to be displayed is determined according to the multimedia object to be used. Because the multimedia object to be used expresses information through multiple media forms (e.g., text, image, and video), the device display data determined based on it can also express the characters to be displayed through multiple media forms. When the display device shows the device display data, the characters to be displayed are therefore presented in multiple media forms, which improves the character display effect and the user experience.
In addition, in order to further improve the display effect of the text to be displayed, an embodiment of the present application provides another possible implementation of the display method. In this implementation, the display method includes not only steps 31 to 32 above but also steps 34 to 35:
Step 34: generate audio playing data of the text to be displayed.
The audio playing data of the text to be displayed is used to express the text to be displayed in audio form. The embodiment of the present application does not limit how this audio playing data is generated; one possible approach is sketched below.
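As one example of a generation process (an assumption on our part; this application does not mandate any particular TTS engine), the audio playing data could be synthesized offline with a library such as pyttsx3:

```python
import pyttsx3

def synthesize_audio(text_to_display: str, out_path: str = "tts.wav") -> str:
    engine = pyttsx3.init()
    engine.save_to_file(text_to_display, out_path)  # queue synthesis of the text
    engine.runAndWait()                             # run the engine's event loop
    return out_path                                 # path of the audio playing data
```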
Step 35: and determining the equipment display data of the characters to be displayed according to the multimedia objects to be used and the audio playing data of the characters to be displayed.
To facilitate the understanding of step 35, the following description is made in conjunction with three cases.
In case 1, when the "multimedia object to be used" includes a picture to be used, step 35 may specifically include: combining, according to a second combination manner, the picture to be used, the text to be displayed, and the audio playing data of the text to be displayed, to obtain the device display data of the text to be displayed.
The "second combination manner" may be set in advance. For example: the picture to be used serves as a background picture of the text to be displayed, the audio playing data of the text to be displayed is placed at a preset position (for example, the lower right corner) of the background picture, and the text to be displayed is scrolled into view in step with the playing progress of the audio playing data, so that the text is completely displayed at the preset position of the picture to be used by the time the audio finishes playing.
Based on the above description of case 1, after the picture to be used and the audio playing data of the text to be displayed are obtained, they may be combined with the text to be displayed to obtain the device display data of the text to be displayed, so that when a display device presents this device display data, the picture to be used is always shown as the background picture and the text to be displayed is shown on it according to the display manner described above. A sketch of the progress-synchronized scrolling follows.
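This is a minimal sketch of the scrolling rule described above, assuming a simple linear mapping from playback progress to the number of revealed characters; the actual scrolling rule is left open by this application.

```python
def visible_text(text: str, elapsed_s: float, audio_duration_s: float) -> str:
    """Return the prefix of `text` that should currently be on screen."""
    if audio_duration_s <= 0:
        return text
    progress = min(elapsed_s / audio_duration_s, 1.0)  # clamp to [0, 1]
    return text[:round(progress * len(text))]
```

At elapsed_s == audio_duration_s the whole text is visible, matching the requirement that the text be completely displayed when playback finishes.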
In case 2, when the "multimedia object to be used" includes a video to be used, step 35 may specifically include steps 91 to 93:
Step 91: determine a to-be-processed video object according to the video to be used.
It should be noted that step 91 may be implemented by any of the embodiments shown under "case 2" of step 33; it suffices to replace "device display data" in those embodiments with "to-be-processed video object".
Step 92: replace the audio data of the to-be-processed video object with the audio playing data of the text to be displayed, to obtain a replaced video object.
In this embodiment of the application, after the audio playing data of the text to be displayed and the to-be-processed video object are acquired, the audio playing data can be substituted for the audio data of the to-be-processed video object. The resulting replaced video object carries the audio playing data of the text to be displayed and therefore expresses the text to be displayed more effectively.
Step 93: determine the device display data of the text to be displayed according to the replaced video object.
In this embodiment of the application, after the replaced video object is obtained, the device display data of the text to be displayed can be determined from it (for example, by directly taking the replaced video object as the device display data of the text to be displayed).
Based on the above description of case 2, after the video to be used and the audio playing data of the text to be displayed are acquired, the device display data of the text to be displayed may be determined with reference to both, so that the device display data expresses the text to be displayed more effectively. A sketch of the audio-replacement step follows.
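Step 92 amounts to muxing the synthesized audio over the video stream. The following sketch does this with the ffmpeg command-line tool (an assumed dependency; this application does not name any tool): the video stream of the to-be-processed video object is kept, and the audio playing data of the text to be displayed is swapped in.

```python
import subprocess

def replace_audio(video_in: str, audio_in: str, video_out: str) -> None:
    subprocess.run([
        "ffmpeg", "-y",
        "-i", video_in,                 # to-be-processed video object
        "-i", audio_in,                 # audio playing data of the text
        "-map", "0:v", "-map", "1:a",   # video from input 0, audio from input 1
        "-c:v", "copy",                 # do not re-encode the video stream
        "-shortest",                    # stop at the shorter of the two streams
        video_out,                      # replaced video object
    ], check=True)
```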
In case 3, when the "multimedia object to be used" includes a picture to be used and a video to be used, step 35 may specifically include steps 351 to 354:
Step 351: determine a text display picture of the text to be displayed according to the picture to be used.
It should be noted that, for the relevant content of step 351, refer to step 331 above.
Step 352: determine a text display audio of the text to be displayed according to the audio playing data of the text to be displayed.
As an example, step 352 may specifically include: determining the audio playing data of the text to be displayed as the text display audio of the text to be displayed.
Step 353: determine a text display video of the text to be displayed according to the video to be used.
The embodiment of the present application does not limit the implementation of step 353. For example, step 353 may be implemented by any of the embodiments shown under "case 2" of step 33, with "device display data" in those embodiments replaced by "text display video". As another example, step 353 may be implemented by any of the embodiments shown under "case 2" of step 35, again with "device display data" replaced by "text display video".
Step 354: combine, according to a third combination manner, the text to be displayed, the text display audio of the text to be displayed, the text display picture of the text to be displayed, and the text display video of the text to be displayed, to obtain the device display data of the text to be displayed.
The "third combination manner" may be set in advance, and this embodiment of the application does not limit its form. For example, it may specifically include: using the text display picture of the text to be displayed as a background picture, placing the text display video of the text to be displayed at a first position (for example, the upper left corner) of the background picture, placing the text to be displayed, in text form, at a second position (for example, the middle) of the background picture, and placing the text display audio of the text to be displayed (for example, as a playback control) at a third position (for example, the upper right corner) of the background picture; a sketch of this layout follows.
Based on the description of steps 34 to 35, after the text to be displayed is obtained, the device display data of the text to be displayed can be determined with reference to the text to be displayed, its audio playing data, and the corresponding multimedia object to be used, so that the device display data expresses the text to be displayed in more forms, which helps improve the display effect of the text to be displayed.
Based on the display method provided by the above method embodiment, an embodiment of the present application further provides a display apparatus, which is explained below with reference to the drawings.
Device embodiment
This apparatus embodiment introduces a display apparatus; for related content, please refer to the method embodiment above.
Referring to fig. 5, the figure is a schematic structural diagram of a display device according to an embodiment of the present application.
The display device 500 provided in the embodiment of the present application includes:
a receiving unit 501, configured to receive a text display request triggered by a user; the text display request is used for requesting display of the characters to be displayed;
an obtaining unit 502, configured to obtain device display data of the characters to be displayed and a mimicry character corresponding to the characters to be displayed;
the display unit 503 is configured to display the mimicry character corresponding to the characters to be displayed and the device display data of the characters to be displayed; wherein the mimicry character changes along with the display process of the device display data.
In one possible implementation, the device display data includes audio display data, and the mimicry character changes as the audio display data is played.
In a possible implementation, the obtaining unit 502 includes:
the acquisition subunit is configured to determine an audio playing time sequence and a character parameter change sequence according to the audio display data, and to generate a mimicry character display sequence corresponding to the characters to be displayed according to a preset initial character and the character parameter change sequence;
the display unit 503 is configured to display, when the audio display data is played, the mimicry character display sequence corresponding to the characters to be displayed according to the audio playing time sequence.
In one possible implementation, the character parameter change sequence includes at least one of: a parameter change sequence of at least one body part, a parameter change sequence of at least one character action, a parameter change sequence of at least one worn object, a parameter change sequence of at least one handheld object, and a parameter change sequence of the character background. A sketch of applying such a sequence follows.
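The following sketch shows how the acquisition subunit's generation step might apply a character parameter change sequence to a preset initial character to produce the mimicry character display sequence. The parameter keys are hypothetical; this application does not specify the character representation.

```python
def generate_character_sequence(initial_character: dict,
                                parameter_changes: list) -> list:
    """Apply each parameter change to the preset initial character and
    snapshot the result as one frame of the mimicry character display
    sequence."""
    frames, state = [], dict(initial_character)
    for change in parameter_changes:   # e.g. {"mouth": "open", "background": "stage"}
        state.update(change)           # body part / action / worn object /
                                       # handheld object / background updates
        frames.append(dict(state))     # snapshot one display frame
    return frames
```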
In one possible implementation, the device display data further comprises a text display sequence;
the process for acquiring the text display sequence comprises: determining an audio playing time sequence according to the audio display data; and determining the text display sequence according to the audio display data, the audio playing time sequence, and the characters to be displayed;
the display unit 503 includes:
and the display subunit is used for displaying the character display sequence according to the audio playing time sequence when the audio display data is played.
In one possible implementation, the number of characters to be displayed reaches a preset threshold;
the display subunit is then specifically configured to update the displayed text display sequence according to the audio playing time sequence and a preset device display text updating rule; one possible updating rule is sketched below.
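One possible device display text updating rule (an assumption; this application leaves the rule open) is a sliding window of lines that advances with the audio playing time sequence, so that long text never overflows the screen; the window size and timing are illustrative.

```python
def windowed_lines(lines: list, elapsed_s: float,
                   seconds_per_line: float = 2.0, window: int = 4) -> list:
    """Return the lines that should currently be on screen."""
    last = min(int(elapsed_s / seconds_per_line) + 1, len(lines))
    return lines[max(0, last - window):last]
```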
In addition, an embodiment of the present application further provides an apparatus, including: a processor, a memory, and a system bus;
the processor and the memory are connected through the system bus;
the memory is used for storing one or more programs, and the one or more programs comprise instructions which, when executed by the processor, cause the processor to execute any one of the implementation methods of the display method.
In addition, an embodiment of the present application further provides a computer-readable storage medium storing instructions which, when run on a terminal device, cause the terminal device to execute any implementation of the above display method.
In addition, an embodiment of the present application further provides a computer program product which, when run on a terminal device, causes the terminal device to execute any implementation of the above display method.
As can be seen from the above description of the embodiments, those skilled in the art will clearly understand that all or part of the steps of the methods in the above embodiments may be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present application may, in essence or in part, be embodied in the form of a software product. The software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network communication device such as a media gateway) to execute the methods described in the embodiments, or in some parts of the embodiments, of the present application.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A display method is characterized by being applied to display equipment and comprises the following steps:
receiving a text display request triggered by a user; the text display request is used for requesting display of characters to be displayed;
acquiring device display data of the characters to be displayed and a mimicry character corresponding to the characters to be displayed;
displaying the mimicry character corresponding to the characters to be displayed and the device display data of the characters to be displayed; wherein the mimicry character changes along with the display process of the device display data.
2. The method of claim 1, wherein the device display data comprises audio display data, and wherein the mimicry character changes as the audio display data is played.
3. The method of claim 2, wherein obtaining the mimicry character corresponding to the text to be displayed comprises:
determining an audio playing time sequence and a character parameter change sequence according to the audio display data;
generating a mimicry character display sequence corresponding to the characters to be displayed according to a preset initial character and the character parameter change sequence;
the displaying of the mimicry character corresponding to the characters to be displayed and the device display data of the characters to be displayed comprises:
displaying, when the audio display data is played, the mimicry character display sequence corresponding to the characters to be displayed according to the audio playing time sequence.
4. The method of claim 3, wherein the character parameter change sequence comprises at least one of: a parameter change sequence of at least one body part, a parameter change sequence of at least one character action, a parameter change sequence of at least one worn object, a parameter change sequence of at least one handheld object, and a parameter change sequence of the character background.
5. The method of any of claims 2 to 4, wherein the device display data further comprises a text display sequence;
the process for acquiring the text display sequence comprises:
determining an audio playing time sequence according to the audio display data;
determining the text display sequence according to the audio display data, the audio playing time sequence, and the characters to be displayed;
the process of displaying the data by the equipment for displaying the characters comprises the following steps:
and when the audio display data is played, displaying the character display sequence according to the audio playing time sequence.
6. The method according to claim 5, wherein the number of the characters to be displayed reaches a preset threshold;
the displaying the text display sequence according to the audio playing time sequence comprises:
updating and displaying the text display sequence according to the audio playing time sequence and a preset device display text updating rule.
7. A display device, comprising:
a receiving unit, configured to receive a text display request triggered by a user; the text display request is used for requesting display of characters to be displayed;
an obtaining unit, configured to obtain device display data of the characters to be displayed and a mimicry character corresponding to the characters to be displayed;
a display unit, configured to display the mimicry character corresponding to the characters to be displayed and the device display data of the characters to be displayed; wherein the mimicry character changes along with the display process of the device display data.
8. An apparatus, characterized in that the apparatus comprises: a processor, a memory, a system bus;
the processor and the memory are connected through the system bus;
the memory is to store one or more programs, the one or more programs comprising instructions, which when executed by the processor, cause the processor to perform the method of any of claims 1 to 6.
9. A computer-readable storage medium having stored therein instructions which, when run on a terminal device, cause the terminal device to perform the method of any one of claims 1 to 6.
10. A computer program product, characterized in that it, when run on a terminal device, causes the terminal device to perform the method of any one of claims 1 to 6.
CN202110961076.0A 2021-08-20 2021-08-20 Display method and related equipment thereof Pending CN113641836A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110961076.0A CN113641836A (en) 2021-08-20 2021-08-20 Display method and related equipment thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110961076.0A CN113641836A (en) 2021-08-20 2021-08-20 Display method and related equipment thereof

Publications (1)

Publication Number Publication Date
CN113641836A true CN113641836A (en) 2021-11-12

Family

ID=78423092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110961076.0A Pending CN113641836A (en) 2021-08-20 2021-08-20 Display method and related equipment thereof

Country Status (1)

Country Link
CN (1) CN113641836A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114489439A (en) * 2022-01-20 2022-05-13 安徽淘云科技股份有限公司 Article correcting method and related equipment thereof
CN114489438A (en) * 2022-01-20 2022-05-13 安徽淘云科技股份有限公司 Display method and related equipment thereof
CN114489440A (en) * 2022-01-20 2022-05-13 安徽淘云科技股份有限公司 Display method and related equipment thereof


Similar Documents

Publication Publication Date Title
CN113641836A (en) Display method and related equipment thereof
CN110488975B (en) Data processing method based on artificial intelligence and related device
CN110868635B (en) Video processing method and device, electronic equipment and storage medium
CN110968736B (en) Video generation method and device, electronic equipment and storage medium
CN110110104B (en) Method and device for automatically generating house explanation in virtual three-dimensional space
CN112560605B (en) Interaction method, device, terminal, server and storage medium
JP6366626B2 (en) Generating device, generating method, and generating program
CN112261477A (en) Video processing method and device, training method and storage medium
CN112533051A (en) Bullet screen information display method and device, computer equipment and storage medium
CN113835522A (en) Sign language video generation, translation and customer service method, device and readable medium
CN112270768A (en) Ancient book reading method and system based on virtual reality technology and construction method thereof
CN110781346A (en) News production method, system, device and storage medium based on virtual image
CN112102157A (en) Video face changing method, electronic device and computer readable storage medium
CN113132780A (en) Video synthesis method and device, electronic equipment and readable storage medium
CN107122393B (en) electronic album generating method and device
CN113641837A (en) Display method and related equipment thereof
CN108833964B (en) Real-time continuous frame information implantation identification system
US20220308262A1 (en) Method and apparatus of generating weather forecast video, electronic device, and storage medium
JP2020095615A (en) Generator, method for generation, and generating program
CN117333645A (en) Annular holographic interaction system and equipment thereof
US20220222432A1 (en) Recommending theme patterns of a document
CN113407766A (en) Visual animation display method and related equipment
KR20100102515A (en) Method and system for automatically expressing emotions of digital actor
CN111582281A (en) Picture display optimization method and device, electronic equipment and storage medium
CN111160051A (en) Data processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination