CN116521166A - IOS-based rich text area click content acquisition method, device and related medium - Google Patents

IOS-based rich text area click content acquisition method, device and related medium

Info

Publication number
CN116521166A
CN116521166A (application number CN202310496575.6A)
Authority
CN
China
Prior art keywords
rich text
character
coordinate
real
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310496575.6A
Other languages
Chinese (zh)
Inventor
李伟伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Afirstsoft Co Ltd
Original Assignee
Afirstsoft Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Afirstsoft Co Ltd filed Critical Afirstsoft Co Ltd
Priority to CN202310496575.6A priority Critical patent/CN116521166A/en
Publication of CN116521166A publication Critical patent/CN116521166A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/30Creation or generation of source code
    • G06F8/38Creation or generation of source code for implementing user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0483Interaction with page-structured environments, e.g. book metaphor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an IOS-based method, device and related medium for acquiring the click content of a rich text area. The method comprises: creating a gesture recognizer, and binding the gesture recognizer to a rich text view for recognition to obtain a recognition result; calculating the coordinates of the touch point according to the recognition result to obtain the real coordinates of the touch point in the rich text view; performing index calculation with the layout manager of the rich text view according to the real coordinates to obtain a character index value; performing space calculation according to the character index value to obtain a space rectangle; judging whether the real coordinates are within the range of the space rectangle; if not, ending the current judging flow; if yes, outputting the click content of the touch point in the rich text area. By calculating the real coordinates of the touch point through the gesture recognizer and outputting the click content whose real coordinates lie within the space rectangle, the method and device can accurately acquire the rich text content touched by the user.

Description

IOS-based rich text area click content acquisition method, device and related medium
Technical Field
The invention relates to the technical field of electronic device display, and in particular to an IOS-based rich text area click content acquisition method and device, and a related medium.
Background
The image-text mixed layout (rich text) technology of IOS (i.e., arranging characters and pictures in a mixed manner, where the characters may wrap around a picture, be embedded below it, or float above it) was mainly born in the earlier programming language Objective-C (i.e., OC), while in the later official programming language Swift the related technology merely re-wraps OC, and its usage is still rather inconvenient. In the prior art, the image-text mixed layout scheme realizes the rich text function through the NSAttributedString class and the NSTextAttachment class, but acquiring the clicked content in the rich text from a touch point requires taking into account the language differences between Swift and OC; the prior art therefore cannot accurately acquire the rich text content touched by the user, and no effective solution currently exists.
Disclosure of Invention
The embodiments of the invention provide an IOS-based rich text area click content acquisition method, device and related medium, aiming to solve the problem that, in the image-text mixed rich text of the prior art, the rich text content touched by a user cannot be accurately acquired.
In a first aspect, an embodiment of the present invention provides a method for obtaining a rich text area click content based on IOS, including:
creating a gesture recognizer, and binding the gesture recognizer with the rich text view for recognition to obtain a recognition result;
carrying out coordinate calculation of the touch point according to the recognition result to obtain the real coordinate of the touch point in the rich text view;
index calculation is carried out by utilizing a layout manager of the rich text view according to the real coordinates, so that a character index value is obtained;
performing space calculation according to the character index value to obtain a space rectangle in the rich text view;
judging whether the real coordinates are in the range of the space rectangle or not; if not, ending the current judging flow; if yes, outputting click contents of the touch points in the rich text area; wherein the click content includes text characters and pictures.
In a second aspect, an embodiment of the present invention provides an IOS-based rich text area click content obtaining apparatus, including:
the recognition result unit is used for creating a gesture recognizer, and binding the gesture recognizer with the rich text view for recognition to obtain a recognition result;
the coordinate calculation unit is used for calculating the coordinates of the touch points according to the recognition result to obtain the real coordinates of the touch points in the rich text view;
the index calculation unit is used for carrying out index calculation by utilizing the layout manager of the rich text view according to the real coordinates to obtain character index values;
the space calculation unit is used for carrying out space calculation according to the character index value to obtain a space rectangle in the rich text view;
the judging and outputting unit is used for judging whether the real coordinates are in the range of the space rectangle or not; if not, ending the current judging flow; if yes, outputting click contents of the touch points in the rich text area; wherein the click content includes text characters and pictures.
In a third aspect, an embodiment of the present invention provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the IOS-based rich text area click content obtaining method of the first aspect when the processor executes the computer program.
The embodiment of the invention provides an IOS-based rich text area click content acquisition method, which comprises: creating a gesture recognizer, and binding the gesture recognizer to a rich text view for recognition to obtain a recognition result; calculating the coordinates of the touch point according to the recognition result to obtain the real coordinates of the touch point in the rich text view; performing index calculation with the layout manager of the rich text view according to the real coordinates to obtain a character index value; performing space calculation according to the character index value to obtain a space rectangle; judging whether the real coordinates are within the range of the space rectangle; if not, ending the current judging flow; if yes, outputting the click content of the touch point in the rich text area. By calculating the real coordinates of the touch point through the gesture recognizer and outputting the click content whose real coordinates lie within the space rectangle, the method can accurately acquire the rich text content touched by the user.
The embodiment of the invention also provides an IOS-based rich text area click content acquisition device and a computer device, which have the same beneficial effects.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an IOS-based rich text area click content acquisition method according to an embodiment of the present invention;
FIG. 2 is another flowchart of the IOS-based rich text area click content acquisition method according to the embodiment of the invention;
fig. 3 is a schematic block diagram of an IOS-based rich text area click content acquiring apparatus according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to fig. 1, fig. 1 is a flowchart of an IOS-based rich text area click content acquisition method according to an embodiment of the present invention, which specifically includes steps S101 to S105.
S101, creating a gesture recognizer, and binding and recognizing the gesture recognizer and the rich text view to obtain a recognition result;
S102, carrying out coordinate calculation of the touch point according to the recognition result to obtain the real coordinate of the touch point in the rich text view;
S103, index calculation is carried out by utilizing a layout manager of the rich text view according to the real coordinates, and a character index value is obtained;
S104, performing space calculation according to the character index value to obtain a space rectangle in the rich text view;
S105, judging whether the real coordinates are located in the range of the space rectangle; if not, ending the current judging flow; if yes, outputting click contents of the touch points in the rich text area; wherein the click content includes text characters and pictures.
Referring to fig. 2, in step S101 the programming language is Swift 5.5 and the code construction tool is Xcode; all embodiments of the present invention use this programming language and construction tool, although other programming languages and construction tools may be used as well. The data type used to display the rich text view is UILabel (an IOS view for displaying one or more lines of informational text, which can configure the overall appearance of the text, use attributed strings to customize the appearance of individual characters, and customize the layout scheme of the text), named richLabel in the code.
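For context, the following is a minimal Swift sketch of how a mixed text-and-image rich text might be assembled and assigned to richLabel; the asset name "icon" and the attachment bounds are placeholders, not values from the original description.

    // Build an image-text mixed (rich text) attributed string for richLabel.
    let richText = NSMutableAttributedString(string: "Tap the icon ")
    let attachment = NSTextAttachment()
    attachment.image = UIImage(named: "icon")                      // placeholder asset name
    attachment.bounds = CGRect(x: 0, y: -3, width: 16, height: 16) // inline sizing, illustrative
    richText.append(NSAttributedString(attachment: attachment))
    richText.append(NSAttributedString(string: " or any character."))
    richLabel.attributedText = richText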
First, a gesture recognizer needs to be created, with data type UITapGestureRecognizer, which is used to recognize, manage, and respond to all click events occurring on it; the invention takes a tap (click) gesture recognizer as an example, and pan (translation) gesture recognizers, long-press gesture recognizers, and the like also exist. The gesture recognizer is then bound to the rich text view, so that it can recognize click events on the bound rich text view and thereby obtain a recognition result.
In one embodiment, the step S101 includes:
declaring a response function, assigning the response function to the gesture recognizer; binding the gesture recognizer with the rich text view; and recognizing a click event on the rich text view by using the gesture recognizer to obtain a recognition result.
Further, the step of recognizing the click event on the rich text view by using the gesture recognizer to obtain a recognition result includes: judging whether the screen press duration is less than a threshold; if it is greater than the threshold, the event is judged to be a non-click event; if it is less than the threshold, the event is judged to be a click event, and the recognition result is obtained.
In this embodiment, the response function is declared as a piece of preset code (e.g., @objc func contentTapAction(sender: UITapGestureRecognizer) { }), which may be named contentTapAction and is assigned to the gesture recognizer; each time the gesture recognizer recognizes a click event by the user on the rich text view, the response action (i.e., feedback of the recognition result, described below) is performed.
Further, the process by which the gesture recognizer recognizes the user's click event is specifically as follows:
If the user lifts the finger within the set threshold (500 ms by default) and the touch position is unchanged, the gesture recognizer treats the interaction as a click event; otherwise, it treats the interaction as another gesture event (i.e., a non-click event). After the user touches the screen, the gesture recognizer acquires the user's touch point and waits for subsequent actions. In general, the application first delivers the click event to the top-level view visible to the user, and the top-level view then dispatches the event, according to the touch point, to the subview containing the touch point; in the current operation flow, that subview is the rich text view. If the rich text view is not bound to a gesture recognizer, the gesture event is ignored; if the rich text view is bound to the gesture recognizer, the response action is executed once the gesture recognizer successfully recognizes the event and acquires the click-event data.
In step S102, the coordinates of the touch point contained in the recognition result are not yet the coordinates required by the present invention, so the coordinates of the touch point need to be further calculated from the recognition result to finally obtain the real coordinates of the touch point in the rich text view; the real coordinates are used for the calculation in the next step.
In one embodiment, the step S102 includes:
analyzing the identification result to obtain the two-dimensional coordinates of the touch point; subtracting the left margin between the rich text view and a text manager on the rich text view from the abscissa of the two-dimensional coordinate to obtain the abscissa of the real coordinate; and subtracting the upper margin between the rich text view and the text manager on the rich text view from the ordinate of the two-dimensional coordinate to obtain the ordinate of the real coordinate.
In this embodiment, by parsing the recognition result, the two-dimensional coordinates (x, y) of the touch point of the current click event can be obtained directly, which may be named tapPoint; the origin of the two-dimensional coordinates is at the top-left corner of the rich text view. It should be noted that the rich text view contains a text manager (textContainer) that controls the layout of the text; there is a margin between each of the four sides of the text manager and the rich text view, and the area within the text manager's layout range is the real display area of the rich text view. Therefore, the two-dimensional coordinates need to be converted into real coordinates whose origin is the top-left corner of the text manager: the left margin between the rich text view and the text manager is subtracted from the abscissa (x-axis) of the two-dimensional coordinates, and the top margin between the rich text view and the text manager is subtracted from the ordinate (y-axis) of the two-dimensional coordinates. The converted coordinates are the real coordinates of the touch point within the display area of the rich text view.
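A minimal Swift sketch of this coordinate conversion follows, assuming the rich text view is a UITextView whose textContainerInset supplies the left and top margins described above (an assumption; the original only states that such margins exist).

    // Step S102: convert the tap location into real coordinates whose origin
    // is the top-left corner of the text manager (text container).
    func realCoordinates(of sender: UITapGestureRecognizer,
                         in richLabel: UITextView) -> CGPoint {
        // Two-dimensional coordinates (tapPoint before conversion), origin at
        // the top-left corner of the rich text view.
        let location = sender.location(in: richLabel)
        // Subtract the left and top margins between the view and its text manager.
        return CGPoint(x: location.x - richLabel.textContainerInset.left,
                       y: location.y - richLabel.textContainerInset.top)
    }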
In step S103, after the real coordinates are obtained, the layout manager of the rich text view (layoutManager, hereinafter the same) is used to perform index calculation, so as to obtain a character index value (i.e., an offset indicating how many characters the referenced character is from the first character of the string; hereinafter the same).
In one embodiment, the step S103 includes:
judging whether a character exists at the real coordinates; if so, performing index calculation on the character at the real coordinates by using the layout manager to obtain the character index value; if not, arranging insertion points on the left and right sides of each character of the rich text view, obtaining the character closest to the real coordinates according to the insertion points, and performing index calculation on that closest character by using the layout manager to obtain the character index value.
In this embodiment, after the real coordinates are obtained, a piece of code (for example, let index = richLabel.layoutManager.characterIndex(for: tapPoint, in: richLabel.textContainer, fractionOfDistanceBetweenInsertionPoints: nil)) may be used to obtain, through the layout manager, the character index value located under the real coordinates, which may be named index. If no character exists at the real coordinates, the layout manager returns the character index value of the character closest to the real coordinates; the return flow is specifically as follows:
The layout manager places insertion points on the left and right sides of each character in the rich text view during the layout stage (if the character length of the rich text view is a, the number of insertion points is a+1). By comparing the distance between each insertion point's coordinates and the real coordinates, the layout manager obtains the insertion point nearest to the real coordinates: if the real coordinates are on the left side of the nearest insertion point, the character immediately to its right is the character nearest to the real coordinates; if the real coordinates are on the right side of the nearest insertion point, the character immediately to its left is the character nearest to the real coordinates. Finally, the layout manager returns the position in the rich text view of the character nearest to the real coordinates, namely the character index value. For example, if the character length of the rich text view is a, the character index value of the first character is 0 and that of the last character is a-1.
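A sketch of the index calculation in step S103; the fraction pointer is an optional refinement not required by the description and simply reports how far between two insertion points the touch landed.

    // Step S103: character index value of the touch point via the layout manager.
    func characterIndex(at tapPoint: CGPoint, in richLabel: UITextView) -> Int {
        var fraction: CGFloat = 0
        // Returns the index of the character under tapPoint, or of the character
        // nearest to it (via the insertion points described above) if no
        // character lies exactly under the point.
        return richLabel.layoutManager.characterIndex(
            for: tapPoint,
            in: richLabel.textContainer,
            fractionOfDistanceBetweenInsertionPoints: &fraction)
    }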
In step S104, spatial computation is performed according to the character index value, so as to obtain a spatial rectangle in the rich text view. The space rectangle is used for expressing the rectangular position and size of one view element in the rich text view, wherein the space rectangle comprises the coordinates of four corners of the space rectangle and the width and height of the space rectangle, and meanwhile, the space rectangle also comprises the position and size information of characters in the display area of the rich text view.
In one embodiment, the step S104 includes:
acquiring a first text position of a first character of the rich text view; according to the first text position and the character index value, calculating to obtain a second text position of a character corresponding to the character index value; according to the second text position, calculating to obtain a third text position of the next character of the character index value; and carrying out space calculation according to the second text position and the third text position to obtain the space rectangle.
In this embodiment, a text position (UITextPosition type) in the rich text view represents the position of a character in a string of text, and each character has a corresponding text position in the text. First, the first text position of the first character of the rich text view (i.e., richLabel.beginningOfDocument) is acquired and may be named startTextPosition. The second text position (which may be named tapTextPosition) is then calculated from the first text position and the obtained character index value (which may be named index) using code such as let tapTextPosition = richLabel.position(from: startTextPosition, offset: index); the second text position corresponds to the character at the character index value. Next, the third text position of the character following the character index value is calculated from the second text position using code such as let nextTextPosition = richLabel.position(from: tapTextPosition, offset: 1). Finally, space calculation is performed from the second text position and the third text position using code such as let tapTextRange = richLabel.textRange(from: tapTextPosition, to: nextTextPosition) and let tapRect = richLabel.firstRect(for: tapTextRange), yielding the space rectangle, which may be named tapRect.
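Putting the fragments of step S104 together, the following sketch again assumes a UITextView, whose UITextInput conformance provides beginningOfDocument, position(from:offset:), textRange(from:to:) and firstRect(for:).

    // Step S104: space rectangle (position and size) of the character at `index`.
    func spaceRectangle(forCharacterAt index: Int,
                        in richLabel: UITextView) -> CGRect? {
        let startTextPosition = richLabel.beginningOfDocument            // first text position
        guard
            let tapTextPosition = richLabel.position(from: startTextPosition,
                                                     offset: index),    // second text position
            let nextTextPosition = richLabel.position(from: tapTextPosition,
                                                      offset: 1),       // third text position
            let tapTextRange = richLabel.textRange(from: tapTextPosition,
                                                   to: nextTextPosition)
        else { return nil }
        // Rectangle covering the tapped character, named tapRect in the description.
        return richLabel.firstRect(for: tapTextRange)
    }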
In step S105, if it is determined that the real coordinates are not located within the space rectangle, the determination process is ended, and a new process may be performed again from step S101; and if the real coordinates are judged to be located in the range of the space rectangle, outputting the clicked content of the touch point in the rich text area, and acquiring the clicked content of the user in the rich text area.
In one embodiment, the step S105 includes:
judging whether the abscissa of the real coordinates is greater than or equal to the abscissa of the upper-left corner coordinate of the space rectangle and less than or equal to the abscissa of the upper-right corner coordinate of the space rectangle; if not, ending the current flow; if yes, obtaining a first judgment result; judging whether the ordinate of the real coordinates is greater than or equal to the ordinate of the upper-left corner coordinate of the space rectangle and less than or equal to the ordinate of the lower-left corner coordinate of the space rectangle; if not, ending the current flow; if yes, obtaining a second judgment result; and judging whether the real coordinates are within the range of the space rectangle according to the first judgment result and the second judgment result.
In this embodiment, it is necessary to determine whether the real coordinates are within the range of the space rectangle. If the upper-left corner coordinate of the space rectangle is leftTop, the upper-right corner coordinate is rightTop, the lower-left corner coordinate is leftBottom, and the lower-right corner coordinate is rightBottom, then two conditions must both be satisfied to confirm that the real coordinates are within the range of the space rectangle. The first condition is: the abscissa of the real coordinates is greater than or equal to the abscissa of leftTop and less than or equal to the abscissa of rightTop. The second condition is: the ordinate of the real coordinates is greater than or equal to the ordinate of leftTop and less than or equal to the ordinate of leftBottom. If the two conditions cannot both be satisfied, the user did not actually touch any character in the rich text view, and it is confirmed that the real coordinates are not within the range of the space rectangle.
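A sketch of the range check, using the corner names from the description; tapPoint and tapRect are the values produced in the earlier sketches.

    // Step S105 (range check): confirm the real coordinates lie inside the
    // space rectangle by checking the two conditions described above.
    func isTap(_ tapPoint: CGPoint, inside tapRect: CGRect) -> Bool {
        let leftTop    = CGPoint(x: tapRect.minX, y: tapRect.minY)
        let rightTop   = CGPoint(x: tapRect.maxX, y: tapRect.minY)
        let leftBottom = CGPoint(x: tapRect.minX, y: tapRect.maxY)

        // Condition 1: abscissa between leftTop.x and rightTop.x.
        let inHorizontalRange = tapPoint.x >= leftTop.x && tapPoint.x <= rightTop.x
        // Condition 2: ordinate between leftTop.y and leftBottom.y.
        let inVerticalRange = tapPoint.y >= leftTop.y && tapPoint.y <= leftBottom.y
        return inHorizontalRange && inVerticalRange
    }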
In an embodiment, the step S105 further includes:
judging whether the character attribute corresponding to the character index value is true or not; if not, outputting the text character clicked by the touch point in the rich text area; and if yes, outputting the picture clicked by the touch point in the rich text area.
In this embodiment, after it is confirmed that the real coordinates are within the range of the space rectangle and it is determined that the character attribute (i.e., isSymbol, hereinafter the same) corresponding to the character index value is not true, code may be used to obtain the text character (e.g., let tapCharacter = richLabel. ...). If the character attribute corresponding to the character index value is true, this indicates that the user clicked a picture rather than a text character, and the picture may be obtained with picture-acquisition code (for example, let resultImage = richLabel.attributedText.attribute(NSAttributedString.Key.attachment, at: index, effectiveRange: nil) as? NSTextAttachment).
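The output step can be sketched as follows; instead of the custom character attribute (isSymbol) mentioned above, which is not modeled here, this sketch keys off the attributed string's standard attachment attribute to distinguish pictures from text characters.

    // Step S105 (output): emit the picture or the text character at `index`.
    func outputClickContent(at index: Int, in richLabel: UITextView) {
        guard let attributedText = richLabel.attributedText,
              index < attributedText.length else { return }

        if let attachment = attributedText.attribute(.attachment, at: index,
                                                     effectiveRange: nil) as? NSTextAttachment {
            // Picture case: the embedded image is carried by an NSTextAttachment.
            print("tapped picture:", attachment.image as Any)
        } else {
            // Text case: extract the (composed) character at the index.
            let nsText = attributedText.string as NSString
            let characterRange = nsText.rangeOfComposedCharacterSequence(at: index)
            print("tapped character:", nsText.substring(with: characterRange))
        }
    }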
Referring to fig. 3, fig. 3 is a schematic block diagram of an IOS-based rich text area click content obtaining apparatus 300 according to an embodiment of the present invention, where the IOS-based rich text area click content obtaining apparatus 300 includes:
the recognition result unit 301 is configured to create a gesture recognizer, and perform binding recognition on the gesture recognizer and the rich text view to obtain a recognition result;
the coordinate calculation unit 302 is configured to perform coordinate calculation of the touch point according to the recognition result, so as to obtain the real coordinates of the touch point in the rich text view;
an index calculation unit 303, configured to perform index calculation by using the layout manager of the rich text view according to the real coordinates, so as to obtain a character index value;
a space calculating unit 304, configured to perform space calculation according to the character index value, so as to obtain a space rectangle in the rich text view;
a judgment output unit 305 for judging whether the real coordinates are within the range of the spatial rectangle; if not, ending the current judging flow; if yes, outputting click contents of the touch points in the rich text area; wherein the click content includes text characters and pictures.
In this embodiment, the recognition result unit 301 creates a gesture recognizer, and performs binding and recognition on the gesture recognizer and the rich text view to obtain a recognition result; the coordinate calculation unit 302 performs coordinate calculation of the touch point according to the recognition result to obtain the real coordinates of the touch point in the rich text view; the index calculation unit 303 performs index calculation by using the layout manager of the rich text view according to the real coordinates to obtain a character index value; the space calculation unit 304 performs space calculation according to the character index value to obtain a space rectangle in the rich text view; the judgment output unit 305 judges whether the real coordinates are within the range of the space rectangle; if not, the current judging flow ends; if yes, the click content of the touch point in the rich text area is output, wherein the click content includes text characters and pictures.
In an embodiment, the recognition result unit 301 includes:
a declaration unit for declaring a response function, and giving the response function to the gesture recognizer;
a binding unit, configured to bind the gesture recognizer with the rich text view;
and the recognition unit is used for recognizing the click event on the rich text view by using the gesture recognizer to obtain a recognition result.
In an embodiment, the identification unit comprises:
the threshold unit is used for judging whether the screen press duration is less than a threshold; if it is greater than the threshold, the event is judged to be a non-click event; if it is less than the threshold, the event is judged to be a click event, and the recognition result is obtained.
In one embodiment, the coordinate calculating unit 302 includes:
the analysis unit is used for analyzing the recognition result to obtain the two-dimensional coordinates of the touch point;
a left margin unit, configured to subtract the left margin between the rich text view and the text manager on the rich text view from the abscissa of the two-dimensional coordinates to obtain the abscissa of the real coordinates;
and a top margin unit, configured to subtract the top margin between the rich text view and the text manager on the rich text view from the ordinate of the two-dimensional coordinates to obtain the ordinate of the real coordinates.
In an embodiment, the index calculation unit 303 includes:
the index unit is used for judging whether a character exists at the real coordinates; if so, performing index calculation on the character at the real coordinates by using the layout manager to obtain the character index value; if not, arranging insertion points on the left and right sides of each character of the rich text view, obtaining the character closest to the real coordinates according to the insertion points, and performing index calculation on that closest character by using the layout manager to obtain the character index value.
In an embodiment, the spatial calculation unit 304 includes:
a first unit configured to obtain a first text position of a first character of the rich text view;
a second unit, configured to calculate, according to the first text position and the character index value, a second text position of a character corresponding to the character index value;
a third unit, configured to calculate, according to the second text position, a third text position of a next character of the character index value;
and the space unit is used for carrying out space calculation according to the second text position and the third text position to obtain the space rectangle.
In one embodiment, the judging output unit 305 includes:
the horizontal axis unit is used for judging whether the abscissa of the real coordinates is greater than or equal to the abscissa of the upper-left corner coordinate of the space rectangle and less than or equal to the abscissa of the upper-right corner coordinate of the space rectangle; if not, ending the current flow; if yes, obtaining a first judgment result;
the vertical axis unit is used for judging whether the ordinate of the real coordinates is greater than or equal to the ordinate of the upper-left corner coordinate of the space rectangle and less than or equal to the ordinate of the lower-left corner coordinate of the space rectangle; if not, ending the current flow; if yes, obtaining a second judgment result;
and the judging unit is used for judging whether the real coordinates are positioned in the range of the space rectangle according to the first judging result and the second judging result.
In an embodiment, the judging output unit 305 further includes:
the output unit is used for judging whether the character attribute corresponding to the character index value is true or not; if not, outputting the text character clicked by the touch point in the rich text area; and if yes, outputting the picture clicked by the touch point in the rich text area.
Since the embodiments of the apparatus portion and the embodiments of the method portion correspond to each other, the embodiments of the apparatus portion are referred to the description of the embodiments of the method portion, and are not repeated herein.
The embodiment of the invention also provides a computer device, which can comprise a memory and a processor, wherein the memory stores a computer program, and the processor can realize the steps provided by the embodiment when calling the computer program in the memory. Of course, the computer device may also include various network interfaces, power supplies, and the like.
In the description, each embodiment is described in a progressive manner, and each embodiment is mainly described by the differences from other embodiments, so that the same similar parts among the embodiments are mutually referred. For the system disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section. It should be noted that it would be obvious to those skilled in the art that various improvements and modifications can be made to the present application without departing from the principles of the present application, and such improvements and modifications fall within the scope of the claims of the present application.
It should also be noted that in this specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. An IOS-based rich text area click content acquisition method, comprising the steps of:
creating a gesture recognizer, and binding the gesture recognizer with the rich text view for recognition to obtain a recognition result;
carrying out coordinate calculation of the touch point according to the recognition result to obtain the real coordinate of the touch point in the rich text view;
index calculation is carried out by utilizing a layout manager of the rich text view according to the real coordinates, so that a character index value is obtained;
performing space calculation according to the character index value to obtain a space rectangle in the rich text view;
judging whether the real coordinates are in the range of the space rectangle or not; if not, ending the current judging flow; if yes, outputting click contents of the touch points in the rich text area; wherein the click content includes text characters and pictures.
2. The IOS-based rich text area click content acquisition method of claim 1, wherein creating a gesture recognizer and binding the gesture recognizer with a rich text view to obtain a recognition result comprises:
declaring a response function, assigning the response function to the gesture recognizer;
binding the gesture recognizer with the rich text view;
and recognizing a click event on the rich text view by using the gesture recognizer to obtain a recognition result.
3. The IOS-based rich text area click content acquisition method according to claim 2, wherein the recognizing a click event on the rich text view by using the gesture recognizer to obtain a recognition result comprises:
judging whether the screen press duration is less than a threshold; if it is greater than the threshold, the event is judged to be a non-click event; if it is less than the threshold, the event is judged to be a click event, and the recognition result is obtained.
4. The IOS-based rich text area click content obtaining method according to claim 1, wherein the calculating coordinates of the touch point according to the recognition result to obtain real coordinates of the touch point in the rich text view comprises:
analyzing the recognition result to obtain the two-dimensional coordinates of the touch point;
subtracting the left margin between the rich text view and a text manager on the rich text view from the abscissa of the two-dimensional coordinate to obtain the abscissa of the real coordinate;
and subtracting the upper margin between the rich text view and the text manager on the rich text view from the ordinate of the two-dimensional coordinate to obtain the ordinate of the real coordinate.
5. The IOS-based rich text area click content acquiring method according to claim 1, wherein the performing index calculation by using the layout manager of the rich text view according to the real coordinates to obtain a character index value comprises:
judging whether a character exists at the real coordinates; if so, performing index calculation on the character at the real coordinates by using the layout manager to obtain the character index value; if not, arranging insertion points on the left and right sides of each character of the rich text view, obtaining the character closest to the real coordinates according to the insertion points, and performing index calculation on that closest character by using the layout manager to obtain the character index value.
6. The IOS-based rich text area click content acquisition method according to claim 1, wherein the performing spatial calculation according to the character index value to obtain a spatial rectangle in the rich text view comprises:
acquiring a first text position of a first character of the rich text view;
according to the first text position and the character index value, calculating to obtain a second text position of a character corresponding to the character index value;
according to the second text position, calculating to obtain a third text position of the next character of the character index value;
and carrying out space calculation according to the second text position and the third text position to obtain the space rectangle.
7. The IOS-based rich text area click content acquiring method according to claim 1, wherein said determining whether the real coordinates are within the range of the spatial rectangle comprises:
judging whether the abscissa of the real coordinates is greater than or equal to the abscissa of the upper-left corner coordinate of the space rectangle and less than or equal to the abscissa of the upper-right corner coordinate of the space rectangle; if not, ending the current flow; if yes, obtaining a first judgment result;
judging whether the ordinate of the real coordinate is larger than or equal to the ordinate of the upper left corner coordinate of the space rectangle and smaller than or equal to the ordinate of the lower left corner coordinate of the space rectangle; if not, ending the current flow; if yes, a second judging result is obtained;
and judging whether the real coordinates are positioned in the range of the space rectangle according to the first judging result and the second judging result.
8. The IOS-based rich text area click content acquisition method according to claim 1, wherein outputting the click content of the touch point in the rich text area comprises:
judging whether the character attribute corresponding to the character index value is true or not; if not, outputting the text character clicked by the touch point in the rich text area; and if yes, outputting the picture clicked by the touch point in the rich text area.
9. An IOS-based rich text area click content acquisition apparatus, comprising:
the recognition result unit is used for creating a gesture recognizer, and binding the gesture recognizer with the rich text view for recognition to obtain a recognition result;
the coordinate calculation unit is used for calculating the coordinates of the touch points according to the recognition result to obtain the real coordinates of the touch points in the rich text view;
the index calculation unit is used for carrying out index calculation by utilizing the layout manager of the rich text view according to the real coordinates to obtain character index values;
the space calculation unit is used for carrying out space calculation according to the character index value to obtain a space rectangle in the rich text view;
the judging and outputting unit is used for judging whether the real coordinates are in the range of the space rectangle or not; if not, ending the current judging flow; if yes, outputting click contents of the touch points in the rich text area; wherein the click content includes text characters and pictures.
10. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the IOS-based rich text area click content acquisition method of any one of claims 1 to 8 when executing the computer program.
CN202310496575.6A 2023-05-05 2023-05-05 IOS-based rich text area click content acquisition method, device and related medium Pending CN116521166A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310496575.6A CN116521166A (en) 2023-05-05 2023-05-05 IOS-based rich text area click content acquisition method, device and related medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310496575.6A CN116521166A (en) 2023-05-05 2023-05-05 IOS-based rich text area click content acquisition method, device and related medium

Publications (1)

Publication Number Publication Date
CN116521166A true CN116521166A (en) 2023-08-01

Family

ID=87407840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310496575.6A Pending CN116521166A (en) 2023-05-05 2023-05-05 IOS-based rich text area click content acquisition method, device and related medium

Country Status (1)

Country Link
CN (1) CN116521166A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination