CN108279964B - Method and device for realizing covering layer rendering, intelligent equipment and storage medium - Google Patents

Method and device for realizing covering layer rendering, intelligent equipment and storage medium

Info

Publication number
CN108279964B
CN108279964B (granted publication of application CN201810053788.0A)
Authority
CN
China
Prior art keywords
rendering
text
layer
target character
component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810053788.0A
Other languages
Chinese (zh)
Other versions
CN108279964A (en)
Inventor
Zhang Qiang (张强)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shirui Electronics Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shirui Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd, Guangzhou Shirui Electronics Co Ltd filed Critical Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN201810053788.0A priority Critical patent/CN108279964B/en
Publication of CN108279964A publication Critical patent/CN108279964A/en
Application granted granted Critical
Publication of CN108279964B publication Critical patent/CN108279964B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/10 Text processing
    • G06F40/166 Editing, e.g. inserting or deleting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Document Processing Apparatus (AREA)

Abstract

The invention discloses a text control for character editing and a method and device for realizing mask layer rendering. The implementation method comprises the following steps: after triggering entry into a mask layer drawing interface, monitoring the movement of a cursor in a current edit box to select target characters; determining the region information of each target character relative to the current edit box; and performing mask layer rendering on each target character based on its region information. With this method, mask layer rendering of any character in the current edit box can be realized simply and effectively on the basis of the custom-built current edit box presented by the text control, improving the user experience of the document editing tool in character editing and presentation.

Description

Method and device for realizing covering layer rendering, intelligent equipment and storage medium
Technical Field
The invention relates to the technical field of computer applications, and in particular to a method and device for realizing mask layer rendering, an intelligent device, and a storage medium.
Background
Document editing tools are office software frequently used in people's work and study, such as Microsoft's presentation software (PowerPoint, PPT). A user can edit a document based on a document editing tool and present the edited content to others. Similar document editing tools are also installed in the currently popular intelligent teaching whiteboards, so that teachers can edit and present teaching content.
Generally, when a document is edited, the text part is edited by means of a text control in the document editing tool. The text control is equivalent to a functional plug-in for performing text editing operations in the document editing tool; in practice, the user edits text in a text edit box supported by the text control, and often needs to apply a mask to part of the edited text. However, the text control used to form the text edit box is usually designed as a whole, so existing text mask implementations cover all the characters in the entire text edit box and cannot apply a mask to only some of the characters. If a user wants to mask only part of the characters, the user has to add another element on top of the characters to be masked in the text edit box to simulate a mask over those characters.
However, when the characters to be masked are numerous and scattered, setting masks in this way is quite cumbersome, which degrades the user experience of the document editing tool in character editing and presentation.
Disclosure of Invention
The embodiments of the invention provide a text control for character editing and a method and a device for realizing mask layer rendering, which can simply and effectively realize mask layer rendering of any edited character.
In a first aspect, an embodiment of the present invention provides a text control for text editing, including: the system comprises a text input component, a text processing component and a text rendering component;
the text input component is used as an interactive interface for character editing and used for receiving input information and operation instructions generated by external triggering;
the text processing component is used for editing and forming characters to be presented according to the input information received by the text input component and analyzing and determining the form to be presented of the received operation instruction;
the text rendering component is used for rendering the characters to be presented based on given rendering attributes and presenting them in real time, and is also used for responding to each operation instruction by rendering and presenting the corresponding form to be presented in real time.
In a second aspect, an embodiment of the present invention provides a method for implementing mask layer rendering, including:
after triggering entry into a mask layer drawing interface, monitoring the movement of the cursor in a current edit box to select target characters;
determining the region information of each target character relative to the current edit box, wherein the current edit box is the on-screen presentation form of the text control provided by the embodiment of the first aspect of the invention;
and performing mask layer rendering on each target character based on the region information of each target character.
In a third aspect, an embodiment of the present invention provides an apparatus for implementing mask layer rendering, including:
an information monitoring module, used for monitoring the target characters selected by cursor movement in the current edit box after entry into the mask layer drawing interface is triggered;
an information determining module, configured to determine the region information of each target character relative to the current edit box, where the current edit box is the on-screen presentation form of the text control provided by the embodiment of the first aspect of the present invention;
and the mask layer rendering module is used for performing mask layer rendering on each target character based on the region information of each target character.
In a fourth aspect, an embodiment of the present invention provides an intelligent device, including:
one or more processors;
the storage device is used for storing the text control provided by the embodiment of the first aspect of the invention and is also used for storing one or more programs;
the text control is executed by the one or more processors such that the one or more processors implement text editing;
the one or more programs are executed by the one or more processors, so that the one or more processors implement the method for implementing the overlay rendering according to the embodiment of the second aspect of the present invention.
In a fifth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements text editing and/or the method for implementing mask layer rendering provided in the second aspect of the present invention.
In the text control for character editing and the method and device for implementing mask layer rendering described above, the mask layer rendering is implemented as follows: after entry into the mask layer drawing interface is triggered, the movement of the cursor in the current edit box is monitored to select target characters; the region information of each target character relative to the current edit box is then determined; and finally mask layer rendering is performed on each target character according to its region information. With this technical solution, mask layer rendering of any character in the current edit box can be realized simply and effectively on the basis of the custom-built current edit box presented by the text control, which improves the user experience of the document editing tool in character editing and presentation.
Drawings
Fig. 1 is a schematic structural diagram of a text control for text editing according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a method for implementing mask layer rendering according to a second embodiment of the present invention;
Fig. 3 is a block diagram of an apparatus for implementing mask layer rendering according to a third embodiment of the present invention;
Fig. 4 is a schematic diagram of the hardware structure of an intelligent device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a schematic structural diagram of a text control for text editing according to an embodiment of the present invention. The text control can be used as a functional plug-in in a document editing tool to implement character editing. As shown in Fig. 1, the text control includes: a text input component 11, a text processing component 12 and a text rendering component 13;
the text input component 11 is used as an interactive interface for character editing and receives input information and an operation instruction generated by external triggering;
the text processing component 12 is used for editing and forming characters to be presented according to the input information received by the text input component, and is also used for analyzing and determining the form to be presented of the received operation instruction;
a text rendering component 13, configured to render the characters to be presented based on given rendering attributes and present them in real time, and also configured to respond to each operation instruction by rendering and presenting the corresponding form to be presented in real time.
It should be noted that the application context of the text control provided in this embodiment may be understood as follows: an existing text control for text editing is generally an integral control; once its design is completed, the kinds of function rendering items available during text editing are fixed accordingly, and new function rendering items cannot be flexibly added in subsequent use.
Specifically, the text control provided by this embodiment mainly includes three hierarchical regions, namely the text input component 11, the text processing component 12 and the text rendering component 13. As can be seen from Fig. 1, the three components are in a layer-by-layer containment relationship. The text input component 11 is the outermost layer of the text control and is equivalent to a container; the text processing component 12 and the text rendering component 13 are encapsulated inside it. The text input component 11 therefore serves as the unified entry point for the outside, acting as the interactive interface for character editing: it receives input information generated by external triggers (for example, a trigger response to a keyboard can be received to acquire characters input from the keyboard, where the keyboard may be a soft keyboard or a hardware keyboard) and operation instructions (for example, operation instructions triggered by the user, generally generated by touch, or by clicking or dragging with a mouse).
Meanwhile, the text processing component 12 is equivalent to the intermediate layer of the text control: it is encapsulated inside the text input component 11, and the text rendering component 13 is encapsulated inside it. This component obtains, in real time, the input information or operation instructions received by the text input component 11 and is responsible for processing them. For example, it forms the corresponding characters to be presented from the input information (the characters are formed but cannot yet be presented directly on the screen), and it determines the form to be presented that corresponds to an operation instruction. The form to be presented differs according to the function the operation instruction is meant to realize: when an operation instruction generated by a mouse click is processed, the display position of the cursor to be presented at the click can be determined; when an operation instruction generated by dragging the mouse is processed, the region in which a selected effect is to be presented can be determined; and so on.
In addition, the text rendering component 13 is equivalent to the innermost layer of the text control. It should be noted that the function rendering items required in text editing are actually encapsulated in the text rendering component, and each function rendering item is responsible for performing the corresponding rendering processing on the edited characters and presenting the corresponding rendering effect in real time. For example, the function rendering item responsible for character presentation renders the formed characters to be presented based on given rendering attributes (such as font color, font size and font format) and presents them in real time; the function rendering item responsible for cursor presentation responds to the operation instruction corresponding to a mouse click by rendering and presenting a cursor at the determined display position; and the function rendering item responsible for the selected effect responds to the operation instruction corresponding to a mouse drag by rendering and presenting the selected effect in the determined region.
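The layered containment described above can be illustrated with a minimal TypeScript sketch. This is not the patent's implementation; all class, method and field names are assumptions made for illustration, and the rendering calls are stubbed with console output.

```typescript
// Illustrative sketch of the three nested components: input (outermost container),
// processing (intermediate layer) and rendering (innermost layer).

interface OperationInstruction {
  kind: "click" | "drag";
  x: number;
  y: number;
  endX?: number; // present for drag
  endY?: number;
}

class TextRenderingComponent {
  // Renders the characters to be presented with given rendering attributes.
  renderText(text: string, attrs: { font: string; color: string }): void {
    console.log(`render "${text}" with`, attrs);
  }
  // Renders the form to be presented resolved from an operation instruction,
  // e.g. a cursor position or a selected region.
  renderPresentationForm(form: object): void {
    console.log("render presentation form", form);
  }
}

class TextProcessingComponent {
  constructor(private renderer: TextRenderingComponent) {}

  // Turns raw input into characters to be presented and hands them to the renderer.
  handleInput(input: string): void {
    const textToPresent = input; // editing / IME composition would happen here
    this.renderer.renderText(textToPresent, { font: "16px sans-serif", color: "#000" });
  }

  // Resolves an operation instruction into its form to be presented.
  handleInstruction(op: OperationInstruction): void {
    const form =
      op.kind === "click"
        ? { cursorAt: { x: op.x, y: op.y } }
        : { selection: { from: { x: op.x, y: op.y }, to: { x: op.endX, y: op.endY } } };
    this.renderer.renderPresentationForm(form);
  }
}

class TextInputComponent {
  // Outermost container: the single entry point for keyboard input and
  // mouse/touch operation instructions.
  private processor = new TextProcessingComponent(new TextRenderingComponent());

  onKeyboardInput(chars: string): void {
    this.processor.handleInput(chars);
  }
  onOperation(op: OperationInstruction): void {
    this.processor.handleInstruction(op);
  }
}
```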
Further, the text rendering component 13 includes a set of rendering layers arranged in parallel for implementing different function renderings; the rendering layer set comprises a character mask rendering layer and at least one of the following: a character rendering layer, a cursor rendering layer and a selected-effect rendering layer.
In this embodiment, the concrete implementation of the various function rendering items that may be added to the text rendering component 13 is equivalent to encapsulating, in the text rendering component 13, a function rendering layer corresponding to each function rendering item, and the function rendering layers can be regarded as parallel presentation layers. One purpose of the text control provided in this embodiment is to present the character mask effect; therefore, realizing the character mask in the text rendering component 13 is equivalent to adding a character mask function rendering item, that is, the text rendering component 13 actually includes a character mask rendering layer for realizing the character mask effect. In addition, the text rendering component 13 of this embodiment may further include rendering layers for other functions, such as a character rendering layer, a cursor rendering layer and a selected-effect rendering layer.
In order to ensure that the character mask effect is always presented on the uppermost layer of the screen, this embodiment preferably sets the display priority of the character mask rendering layer to be the highest.
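As an illustration of the parallel rendering layers and of giving the character mask layer the highest display priority, the following TypeScript sketch composites layers in priority order so that the mask is always drawn last, on top. The interfaces and class names are illustrative assumptions, not the patent's actual code.

```typescript
// Layered renderer: each function rendering item is one layer; higher priority
// means drawn later, i.e. closer to the top of the screen.

interface RenderingLayer {
  name: string;
  priority: number;
  draw(ctx: CanvasRenderingContext2D): void;
}

class LayeredTextRenderer {
  private layers: RenderingLayer[] = [];

  addLayer(layer: RenderingLayer): void {
    this.layers.push(layer);
  }

  // Draw layers in ascending priority so the mask layer ends up uppermost.
  render(ctx: CanvasRenderingContext2D): void {
    [...this.layers]
      .sort((a, b) => a.priority - b.priority)
      .forEach((layer) => layer.draw(ctx));
  }
}

// Typical layer set: text, selected effect, cursor, and the character mask layer
// with the highest priority.
const renderer = new LayeredTextRenderer();
renderer.addLayer({ name: "text",      priority: 0, draw: () => { /* draw glyphs */ } });
renderer.addLayer({ name: "selection", priority: 1, draw: () => { /* draw selected effect */ } });
renderer.addLayer({ name: "cursor",    priority: 2, draw: () => { /* draw caret */ } });
renderer.addLayer({ name: "mask",      priority: Number.MAX_SAFE_INTEGER, draw: () => { /* draw masks */ } });
```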
Further, the text control further comprises:
a text management component, used for performing attribute management on the edited characters to be presented so as to form rendering attributes including the display form and the display position of the characters to be presented;
and a rendering data component, used for packaging the rendered characters to be presented together with the current rendering data to form new rendering data, which serves as the processing basis of the text processing component.
It should be noted that, in addition to the interaction components (the text input component 11, the text processing component 12 and the text rendering component 13), the text control provided in this embodiment also includes a text management component and a rendering data component. Specifically, when text editing is performed based on this text control, attribute management must be performed on the edited characters before they are rendered and presented by the text rendering component 13: the display form and display position of the characters to be presented, and the editing paragraph to which they belong, need to be managed and determined. These are equivalent to the rendering attributes of the characters to be presented and are formed by the text management component. Based on this management, the text rendering component 13 can correctly present the characters to be presented on the screen, at the correct editing position and in the selected display form.
It can be understood that, when the text management component is designed, an appropriate data structure can be chosen for it so that character attributes are managed according to that data structure. For example, paragraph insertion, deletion and search are performed frequently during text editing, so this embodiment preferably uses a red-black tree as the data structure for paragraph management: its worst-case time complexity is O(log n), making it an extremely efficient data structure, which ensures fluency when a large amount of text is edited and rendered and improves the user experience of text editing.
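The paragraph bookkeeping of the text management component can be sketched as follows. The description above calls for a red-black tree so that paragraph insertion, deletion and search remain O(log n) in the worst case; for brevity this hypothetical sketch keeps paragraphs in an array ordered by start offset and uses binary search, and a production implementation would substitute a balanced tree keyed the same way. All names are assumptions.

```typescript
// Simplified paragraph attribute management keyed by the paragraph's start offset.

interface ParagraphAttrs {
  startOffset: number; // character offset where the paragraph begins
  font: string;        // display form: font family/size
  x: number;           // display position inside the edit box
  y: number;
}

class TextManagementComponent {
  private paragraphs: ParagraphAttrs[] = []; // kept sorted by startOffset

  insert(p: ParagraphAttrs): void {
    const i = this.lowerBound(p.startOffset);
    this.paragraphs.splice(i, 0, p);
  }

  // Find the paragraph that contains a given character offset.
  find(offset: number): ParagraphAttrs | undefined {
    const i = this.lowerBound(offset + 1) - 1;
    return this.paragraphs[i];
  }

  remove(startOffset: number): void {
    const i = this.lowerBound(startOffset);
    if (this.paragraphs[i]?.startOffset === startOffset) this.paragraphs.splice(i, 1);
  }

  // First index whose startOffset >= target (O(log n) search).
  private lowerBound(target: number): number {
    let lo = 0, hi = this.paragraphs.length;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (this.paragraphs[mid].startOffset < target) lo = mid + 1;
      else hi = mid;
    }
    return lo;
  }
}
```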
In addition, after the text rendering component 13 renders the characters to be presented or renders an operation instruction in its form to be presented, this embodiment further provides that the rendering data component encapsulates the newly rendered character data together with the rendering data formed so far into new rendering data. The new rendering data is fed back to the text processing component 12 as the data basis for text processing, on which the text processing component 12 can process the input information or operation instructions newly received by the text input component, for example to obtain new characters to be presented, a new cursor position or a new selected effect.
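The feedback loop just described can be sketched as follows, with assumed names and a deliberately simplified notion of rendering data: after each render, the newly rendered characters are packaged with the current rendering data, and the result is handed back to the text processing component as the basis for processing the next input or operation instruction.

```typescript
// Minimal sketch of the rendering data component's packaging step.

interface RenderingData {
  renderedText: string;                       // characters already rendered
  cursorOffset: number;                       // current cursor position
  selection?: { start: number; end: number }; // current selected range, if any
}

class RenderingDataComponent {
  private current: RenderingData = { renderedText: "", cursorOffset: 0 };

  // Package the freshly rendered characters together with the existing rendering data.
  package(newText: string, cursorOffset: number): RenderingData {
    this.current = {
      renderedText: this.current.renderedText + newText,
      cursorOffset,
      selection: this.current.selection,
    };
    return this.current; // fed back to the text processing component
  }
}
```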
Compared with existing text controls, the text control provided by this embodiment of the invention realizes a layered design of the components required for text editing. The layered design keeps each component of the text control controllable, that is, the function rendering items required in text editing can be added dynamically without affecting text editing, which enhances the flexibility of text editing and improves the user experience.
Example two
Fig. 2 is a flowchart of a method for implementing mask layer rendering according to a second embodiment of the present invention. The method is suitable for performing a mask layer rendering operation on characters in a text edit box and may be executed by an apparatus for implementing mask layer rendering, where the apparatus may be implemented in software and/or hardware and is generally integrated in an intelligent device with a document editing tool.
In this embodiment, the intelligent device may specifically be an intelligent mobile device with a document editing function, such as a mobile phone, a tablet computer, and a notebook, or may also be a fixed electronic device with a document editing function, such as a desktop computer.
It should be noted that in this embodiment the implementation of mask layer rendering still depends on the text control, but the method is described from the interaction layer of text editing, where the text control is presented in the form of a text edit box. Specifically, the application context of this embodiment can be understood as follows: in a new document page of the document editing tool, a text edit box is presented after text editing is triggered, so that character editing, the cursor position, the selected effect and so on can be presented in the text edit box. This embodiment corresponds to mask layer rendering implemented in a text edit box (text control) having the character mask rendering function.
As shown in Fig. 2, the method for implementing mask layer rendering according to the second embodiment of the present invention specifically includes the following operations:
s201, after the user is triggered to enter a Mongolian drawing interface, monitoring the target character which is selected by the cursor in the current editing frame.
Specifically, the current edit box is equivalent to the on-screen presentation form of the text control in which text editing is currently being performed, where the text control is the one provided in the first embodiment. In this embodiment, the character mask rendering function of the text control is exposed at the interaction layer in the form of a function button or a shortcut key, so that the user can trigger the operation instruction that initiates mask layer rendering.
In this step, the user enters the mask layer drawing interface by triggering the mask rendering function button, after which the user's cursor operations in the current edit box are monitored. Described from the underlying layer, this corresponds to the text input component of the text control receiving the cursor movement operation, and the text processing component determining its form to be presented (equivalent to the positions actually selected by the cursor movement). In this implementation, the characters selected by cursor movement are recorded as target characters, which can also be understood as the characters on which mask layer rendering is to be performed.
S202, determining the region information of each target character relative to the current edit box.
It can be understood that, since text editing in the current edit box is implemented by means of the text control provided in the above embodiment, once the target characters are determined by monitoring the movement of the cursor, the coordinate information and area of the positions they occupy relative to the text control are determined correspondingly.
Specifically, determining the region information of each target character relative to the current edit box includes:
acquiring the relative coordinate information of each target character with respect to the current edit box; determining the region area of each target character in the current edit box according to the length and width it occupies in the current edit box; and taking the relative coordinate information and the region area together as the region information of the corresponding target character relative to the current edit box.
In this embodiment, the region information of each target character selected by cursor movement in the current edit box may be represented by a predefined data structure. For example, the parameters contained in this data structure may be: x, y, width and height, where x is the relative abscissa of a single selected character with respect to the upper-left corner of the current edit box; y is the relative ordinate of a single selected character with respect to the upper-left corner of the current edit box; width is the width of the selected region corresponding to the single selected character; and height is the length (equivalently, the height) of the selected region corresponding to the single selected character. The region information of each target character obtained in this step is therefore represented in this data structure format.
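The region-information data structure and its per-character derivation can be sketched in TypeScript as follows. The field names mirror the x, y, width and height parameters described above; the measurement callback is a hypothetical placeholder for however the edit box measures a character's position.

```typescript
// Per-character region information relative to the current edit box.

interface CharRegionInfo {
  x: number;      // abscissa relative to the upper-left corner of the edit box
  y: number;      // ordinate relative to the upper-left corner of the edit box
  width: number;  // width occupied by the single character
  height: number; // height (length) occupied by the single character
}

// Given the characters selected by the cursor and a measurer that knows where
// each character sits inside the current edit box, build one region record per
// target character.
function collectRegionInfo(
  targetChars: string[],
  measureChar: (ch: string, index: number) => CharRegionInfo
): CharRegionInfo[] {
  return targetChars.map((ch, index) => {
    const r = measureChar(ch, index);
    return { x: r.x, y: r.y, width: r.width, height: r.height };
  });
}
```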
S203, performing mask layer rendering on each target character based on the region information of each target character.
Described from the underlying layer, the mask layer rendering performed in this step is actually completed by the text rendering component of the text control, and in particular by the character mask rendering layer in the text rendering component. The process by which the character mask rendering layer realizes the mask layer rendering can be described as follows: the character mask rendering layer acquires the region information of each target character, parses the coordinate information and region area it contains, determines the rendering region of each target character, and draws the selected color in that rendering region.
Further, performing mask layer rendering on each target character based on its region information includes: determining the actual rendering region corresponding to each target character according to its relative coordinate information and region area; and covering each actual rendering region with the selected mask rendering color and displaying the result on the current screen in real time.
In this embodiment, the actual rendering region can be understood as the rendering region of the target character that corresponds to the character mask rendering layer. The mask rendering color may be a default white or a manually selected color. The covering color is displayed above all other rendering effects of the character, thereby completely hiding all information of the character.
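A minimal sketch of the character mask rendering layer's drawing step, assuming an HTML canvas context as the drawing surface (the patent does not specify one): each target character's actual rendering region is filled with the chosen mask color, and because the mask layer has the highest priority it is drawn above all other layers.

```typescript
// Fill each target character's region with the mask color.

type MaskRegion = { x: number; y: number; width: number; height: number };

function renderMaskLayer(
  ctx: CanvasRenderingContext2D,
  regions: MaskRegion[],
  maskColor: string = "#ffffff" // default is white, per the description above
): void {
  ctx.save();
  ctx.fillStyle = maskColor;
  for (const r of regions) {
    // The actual rendering region is the rectangle the target character occupies
    // relative to the edit box; filling it hides the character completely.
    ctx.fillRect(r.x, r.y, r.width, r.height);
  }
  ctx.restore();
}
```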
On the basis of the above method steps, this embodiment further adds the operation S204 as an optimization. As shown in Fig. 2, after S203 the method further includes:
and S204, after the mode of entering the manuscript demonstration mode is triggered, when the cursor is monitored to be positioned in a position area with the masking layer rendering on the screen, removing the masking layer rendering on the position area.
In this embodiment, the operations S201 to S203 are performed in the text editing mode. After the document presentation mode is entered, the target characters at the masked positions are still presented covered by the mask color. In the presentation mode, however, if the user clicks with the cursor on a position covered with color, this step monitors the trigger on any character with a mask rendering effect, determines from the trigger position on the screen which target character's position area was clicked, removes the color covering on that position area, and presents the previously covered target character in real time. Specifically, the document presentation mode is entered when the user triggers a presentation function button or a presentation shortcut key.
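The presentation-mode behaviour can be sketched as a simple hit test, with all names assumed: when the cursor's click position falls inside a position area that still has a mask, that mask is removed and the layers are redrawn so the underlying character appears.

```typescript
// Remove the mask covering the position area that the cursor clicked.

type MaskedRegion = { x: number; y: number; width: number; height: number };

class PresentationMaskController {
  constructor(
    private maskedRegions: MaskedRegion[],
    private redraw: () => void // re-renders all layers without the removed mask
  ) {}

  // Called with the cursor's click position, relative to the edit box.
  onClick(px: number, py: number): void {
    const hit = this.maskedRegions.findIndex(
      (r) => px >= r.x && px <= r.x + r.width && py >= r.y && py <= r.y + r.height
    );
    if (hit >= 0) {
      this.maskedRegions.splice(hit, 1); // remove the mask on that position area
      this.redraw();                     // the target character is presented in real time
    }
  }
}
```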
In the method for implementing mask layer rendering provided by the second embodiment of the present invention, after entry into the mask layer drawing interface, the movement of the cursor in the current edit box is monitored to select target characters; the region information of each target character relative to the current edit box is then determined; and finally mask layer rendering is performed on each target character according to its region information. With this technical solution, mask layer rendering of any character in the current edit box can be realized simply and effectively on the basis of the custom-built current edit box presented by the text control, which improves the user experience of the document editing tool in character editing and presentation.
Example three
Fig. 3 is a block diagram of an apparatus for implementing mask layer rendering according to a third embodiment of the present invention. The apparatus is suitable for performing a mask layer rendering operation on characters in a text edit box; it may be implemented in software and/or hardware and is generally integrated in an intelligent device with a document editing tool. As shown in Fig. 3, the apparatus includes: an information monitoring module 31, an information determining module 32 and a mask layer rendering module 33.
The information monitoring module 31 is configured to monitor the target characters selected by cursor movement in the current edit box after entry into the mask layer drawing interface is triggered;
an information determining module 32, configured to determine the region information of each target character relative to the current edit box, where the current edit box is the on-screen presentation form of the text control according to the first embodiment of the present invention;
and a mask layer rendering module 33, configured to perform mask layer rendering on each target character based on the region information of each target character.
In this embodiment, after entry into the mask layer drawing interface is triggered, the apparatus first monitors, through the information monitoring module 31, the target characters selected by cursor movement in the current edit box; it then determines, through the information determining module 32, the region information of each target character relative to the current edit box; and finally the mask layer rendering module 33 performs mask layer rendering on each target character based on its region information.
The apparatus for implementing mask layer rendering provided by the third embodiment of the invention can simply and effectively realize mask layer rendering of any character in the current edit box on the basis of the custom-built current edit box presented by the text control, and improves the user experience of the document editing tool in character editing and presentation.
Further, the information determining module 32 is specifically configured to:
acquire the relative coordinate information of each target character with respect to the current edit box; determine the region area of each target character in the current edit box according to the length and width it occupies in the current edit box; and take the relative coordinate information and the region area together as the region information of the corresponding target character relative to the current edit box.
Further, the mask layer rendering module 33 is specifically configured to:
determine the actual rendering region corresponding to each target character according to its relative coordinate information and region area; and cover each actual rendering region with the selected mask rendering color and display the result on the current screen in real time.
Further, the apparatus further comprises:
a mask layer removing module 34, configured to remove, after entry into the document presentation mode is triggered, the mask layer rendering from a position area on the screen when the cursor is detected to be located in that position area with mask layer rendering.
Example four
Fig. 4 is a schematic diagram of the hardware structure of an intelligent device according to a fourth embodiment of the present invention. As shown in Fig. 4, the intelligent device includes a processor 41 and a storage device 42. There may be one or more processors in the intelligent device; Fig. 4 illustrates one processor 41. The processor 41 and the storage device 42 are connected by a bus or in other manners; Fig. 4 illustrates a bus connection.
The storage device 42 in the intelligent device serves as a computer-readable storage medium and can be configured to store one or more programs, which may be software programs (such as the text control provided in the above embodiments), computer-executable programs and modules, for example the program instructions/modules corresponding to the method for implementing mask layer rendering in the embodiments of the present invention (for example, the modules of the apparatus shown in Fig. 3: the information monitoring module 31, the information determining module 32 and the mask layer rendering module 33). The processor 41 implements character editing by executing the text control stored in the storage device, as provided in the above embodiments; meanwhile, by running the software programs, instructions and modules stored in the storage device 42, the processor 41 executes the various functional applications and data processing of the intelligent device, that is, implements the method for implementing mask layer rendering in the above method embodiments.
The storage device 42 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created according to the use of the device (such as the mask color and region information in the above embodiments). Further, the storage device 42 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the storage device 42 may further include memory located remotely from the processor 41, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
When the one or more programs included in the above intelligent device are executed by the one or more processors 41, the programs perform the following operations:
after entry into the mask layer drawing interface is triggered, monitoring the target characters selected by cursor movement in the current edit box, wherein the current edit box is the on-screen presentation form of the text control provided in the first embodiment of the invention; determining the region information of each target character relative to the current edit box; and performing mask layer rendering on each target character based on the region information of each target character.
In addition, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. The computer program may be a text control, which, when executed by a processor, implements editing and presentation of characters; the program may also be an application program of the method for implementing mask layer rendering provided in the second embodiment, which, when executed by a processor, implements that method, the method including: after entry into the mask layer drawing interface is triggered, monitoring the target characters selected by cursor movement in the current edit box, wherein the current edit box is the on-screen presentation form of the text control provided in the first embodiment of the invention; determining the region information of each target character relative to the current edit box; and performing mask layer rendering on each target character based on the region information of each target character.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by software plus the necessary general-purpose hardware, or by hardware alone, although the former is the preferred embodiment in many cases. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk or an optical disk of a computer, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A method for implementing mask layer rendering, characterized by comprising the following steps:
after entry into a mask layer drawing interface is triggered, monitoring target characters selected by cursor movement in a current edit box, wherein the current edit box is the on-screen presentation form of a preset text control;
determining the region information of each target character relative to the current edit box;
performing mask layer rendering on each target character based on the region information of each target character;
the preset text control is presented in the form of a text edit box, and comprises: a text input component, a text processing component and a text rendering component;
the text input component, the text processing component and the text rendering component are in a layer-by-layer containment relationship;
the text input component serves as a container encapsulating the text processing component, and the text rendering component is encapsulated in the text processing component;
and a function rendering layer corresponding to each required function rendering item is encapsulated in the text rendering component.
2. The method of claim 1,
the text input component is used as an interactive interface for character editing and used for receiving input information and operation instructions generated by external triggering;
the text processing component is used for editing and forming characters to be presented according to the input information received by the text input component and analyzing and determining the form to be presented of the received operation instruction;
the text rendering component is used for rendering the characters to be presented based on given rendering attributes and presenting them in real time, and is also used for responding to each operation instruction by rendering and presenting the corresponding form to be presented in real time.
3. The method of claim 1, wherein the text rendering component comprises a set of rendering layers arranged in parallel for implementing different functional renderings;
the rendering layer set comprises a character mask rendering layer and at least one of the following: a character rendering layer, a cursor rendering layer and a selected-effect rendering layer.
4. The method of claim 1, wherein the preset text control further comprises:
a text management component, used for performing attribute management on the edited characters to be presented so as to form rendering attributes including the display form and the display position of the characters to be presented;
and a rendering data component, used for packaging the rendered characters to be presented together with the current rendering data to form new rendering data, which serves as the processing basis of the text processing component.
5. The method according to any one of claims 1 to 4, wherein determining the region information of each target character relative to the current edit box comprises:
acquiring the relative coordinate information of each target character with respect to the current edit box;
determining the region area of each target character in the current edit box according to the length and width it occupies in the current edit box;
and taking the relative coordinate information and the region area together as the region information of the corresponding target character relative to the current edit box.
6. The method of claim 5, wherein performing mask layer rendering on each target character based on the region information of each target character comprises:
determining an actual rendering region corresponding to each target character according to the relative coordinate information and the region area of each target character;
and covering each actual rendering region with the selected mask rendering color, and displaying the result on the current screen in real time.
7. The method of claim 1, further comprising:
after entry into the document presentation mode is triggered, when the cursor is detected to be located in a position area on the screen that has mask layer rendering, removing the mask layer rendering from that position area.
8. An apparatus for implementing mask layer rendering, comprising:
an information monitoring module, used for monitoring the target characters selected by cursor movement in the current edit box after entry into the mask layer drawing interface is triggered;
an information determining module, used for determining the region information of each target character relative to the current edit box, wherein the current edit box is the on-screen presentation form of a preset text control;
a mask layer rendering module, used for performing mask layer rendering on each target character based on the region information of each target character;
the preset text control is presented in the form of a text edit box, and comprises: a text input component, a text processing component and a text rendering component;
the text input component, the text processing component and the text rendering component are in a layer-by-layer containment relationship;
the text input component serves as a container encapsulating the text processing component, and the text rendering component is encapsulated in the text processing component;
and a function rendering layer corresponding to each required function rendering item is encapsulated in the text rendering component.
9. A smart device, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs are executable by the one or more processors to cause the one or more processors to implement an implementation of a masking rendering as recited in any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method for implementing mask layer rendering according to any one of claims 1-7.
CN201810053788.0A 2018-01-19 2018-01-19 Method and device for realizing covering layer rendering, intelligent equipment and storage medium Active CN108279964B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810053788.0A CN108279964B (en) 2018-01-19 2018-01-19 Method and device for realizing covering layer rendering, intelligent equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810053788.0A CN108279964B (en) 2018-01-19 2018-01-19 Method and device for realizing covering layer rendering, intelligent equipment and storage medium

Publications (2)

Publication Number Publication Date
CN108279964A CN108279964A (en) 2018-07-13
CN108279964B true CN108279964B (en) 2021-09-10

Family

ID=62804184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810053788.0A Active CN108279964B (en) 2018-01-19 2018-01-19 Method and device for realizing covering layer rendering, intelligent equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108279964B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108958876B (en) * 2018-07-23 2022-02-01 郑州云海信息技术有限公司 Display method and device of browser page
CN109145272B (en) * 2018-07-27 2022-09-16 广州视源电子科技股份有限公司 Text rendering and layout method, device, equipment and storage medium
CN109375972B (en) * 2018-09-17 2022-03-08 广州视源电子科技股份有限公司 Method, apparatus, computer device and storage medium for multi-element layout
CN109657220A (en) * 2018-12-11 2019-04-19 万兴科技股份有限公司 The online editing method, apparatus and electronic equipment of PDF document
CN109976632A (en) * 2019-03-15 2019-07-05 广州视源电子科技股份有限公司 text animation control method and device, storage medium and processor
CN111723316B (en) * 2019-03-22 2024-06-04 阿里巴巴集团控股有限公司 Character string rendering method and device, terminal equipment and computer storage medium
US11733669B2 (en) * 2019-09-27 2023-08-22 Rockwell Automation Technologies, Inc. Task based configuration presentation context
CN112862945B (en) * 2019-11-27 2024-07-16 北京沃东天骏信息技术有限公司 Record generation method and device
CN111241643B (en) * 2020-02-11 2023-05-09 广东三维家信息科技有限公司 Processing method and device of polygonal cabinet body and electronic equipment
CN111782311B (en) * 2020-05-11 2023-01-10 完美世界(北京)软件科技发展有限公司 Rendering method, device, equipment and readable medium
CN111857491B (en) * 2020-08-06 2022-01-11 泰山信息科技有限公司 Method, equipment and storage medium for implementing filter for selecting content of word processing software
CN113535046B (en) * 2021-07-23 2024-06-18 腾讯云计算(北京)有限责任公司 Text component editing method, device, equipment and readable medium
CN113705156A (en) * 2021-08-30 2021-11-26 上海哔哩哔哩科技有限公司 Character processing method and device
CN113805753A (en) * 2021-09-24 2021-12-17 维沃移动通信有限公司 Character editing method and device and electronic equipment
CN116302257B (en) * 2023-02-28 2024-04-30 南京索图科技有限公司 Method for realizing text editing box supporting drop-down selection of multiple groups of words

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1285337A2 (en) * 1999-11-02 2003-02-26 Canal+ Technologies Displaying graphical objects
CN104111787A (en) * 2013-04-18 2014-10-22 三星电子(中国)研发中心 Method and device for realizing text editing on touch screen interface

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7895531B2 (en) * 2004-08-16 2011-02-22 Microsoft Corporation Floating command object
CN102819325A (en) * 2012-07-21 2012-12-12 上海量明科技发展有限公司 Input method and system for obtaining a plurality of character presenting effects
CN104142911B (en) * 2013-05-08 2017-11-03 腾讯科技(深圳)有限公司 A kind of text information input method and device
CN104731787A (en) * 2013-12-18 2015-06-24 中兴通讯股份有限公司 Method, device and terminal capable of realizing page layout
CN104050155A (en) * 2014-07-01 2014-09-17 西安诺瓦电子科技有限公司 Text editing device and method
CN104636321B (en) * 2015-02-28 2018-01-16 广东欧珀移动通信有限公司 Text display method and device
CN106911833A (en) * 2015-12-21 2017-06-30 北京奇虎科技有限公司 A kind of data processing method and device
CN105760153A (en) * 2016-01-27 2016-07-13 努比亚技术有限公司 Text extracting device and method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1285337A2 (en) * 1999-11-02 2003-02-26 Canal+ Technologies Displaying graphical objects
CN104111787A (en) * 2013-04-18 2014-10-22 三星电子(中国)研发中心 Method and device for realizing text editing on touch screen interface

Also Published As

Publication number Publication date
CN108279964A (en) 2018-07-13

Similar Documents

Publication Publication Date Title
CN108279964B (en) Method and device for realizing covering layer rendering, intelligent equipment and storage medium
CN107844297B (en) Data visualization implementation system and method
CN109933322B (en) Page editing method and device and computer readable storage medium
CN104267947B (en) A kind of editor's method of pop-up picture and pop-up picture editor's device
US11029836B2 (en) Cross-platform interactivity architecture
US7924284B2 (en) Rendering highlighting strokes
US20170109136A1 (en) Generation of application behaviors
US20140331179A1 (en) Automated Presentation of Visualized Data
CN113655999B (en) Page control rendering method, device, equipment and storage medium
CN104915186B (en) A kind of method and apparatus making the page
CN114115681B (en) Page generation method and device, electronic equipment and medium
WO2022242379A1 (en) Stroke-based rendering method and device, storage medium and terminal
CN111401323A (en) Character translation method, device, storage medium and electronic equipment
CN110506267A (en) The rendering of digital assembly background
CN108389244B (en) Implementation method for rendering flash rich text according to specified character rules
CN110968991A (en) Method and related device for editing characters
CN117057318A (en) Domain model generation method, device, equipment and storage medium
US9928220B2 (en) Temporary highlighting of selected fields
CN113536755A (en) Method, device, electronic equipment, storage medium and product for generating poster
US9037958B2 (en) Dynamic creation of user interface hot spots
US20180189243A1 (en) Server-side chart layout for interactive web application charts
US11048405B2 (en) Information processing device and non-transitory computer readable medium
CN113835835B (en) Method, device and computer readable storage medium for creating consistency group
CN115617441A (en) Method and device for binding model and primitive, storage medium and computer equipment
CN113407183A (en) Interface generation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant