CN110275667B - Content display method, mobile terminal, and computer-readable storage medium - Google Patents

Content display method, mobile terminal, and computer-readable storage medium Download PDF

Info

Publication number
CN110275667B
CN110275667B CN201910555988.0A
Authority
CN
China
Prior art keywords
touch position
text content
value
touch
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910555988.0A
Other languages
Chinese (zh)
Other versions
CN110275667A (en)
Inventor
李佩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201910555988.0A priority Critical patent/CN110275667B/en
Publication of CN110275667A publication Critical patent/CN110275667A/en
Application granted granted Critical
Publication of CN110275667B publication Critical patent/CN110275667B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Telephone Function (AREA)

Abstract

The application provides a content display method, which comprises the following steps: when the touch operation is recognized as the preset operation, screen capturing is carried out on the current display interface to obtain a screen capturing picture; performing image recognition processing on the screenshot picture to obtain text content contained in the screenshot picture and a longitudinal coordinate value of each line in the text content; when the ordinate value of a certain row in the screen capture picture is matched with the ordinate value y of the touch position, acquiring a first abscissa value X1 and a second abscissa value X2 of the matched row; and calculating the position of the touch position in the text content according to the first abscissa value X1 and the second abscissa value X2, and preferentially displaying the text content corresponding to the touch position. The application also provides a mobile terminal and a computer readable storage medium. In this way, the position of the touch position in the text content can be calculated, the text content corresponding to the touch position can be preferentially displayed, and the user experience is improved.

Description

Content display method, mobile terminal, and computer-readable storage medium
Technical Field
The present application relates to the field of mobile terminal technologies, and in particular, to a content display method, a mobile terminal, and a computer-readable storage medium.
Background
An existing mobile terminal (for example, a mobile phone) is generally provided with an intelligent screen recognition function: when a touch operation of the user is received, the mobile terminal captures a screen of the current display interface and automatically recognizes the captured picture to obtain the text content contained in it, so that the text content required by the user can be displayed directly. However, when the recognized text content is a long passage, it may contain many pieces of text that the user needs. If this text is not further processed, the mobile terminal simply displays it in its original order, and if the user wants to view the text content corresponding to the touch position first, the existing intelligent screen recognition function cannot provide such preferential viewing, which degrades the user experience.
Disclosure of Invention
The main purpose of the present application is to provide a content display method, a mobile terminal, and a computer-readable storage medium, which aim to implement preferential display of text content corresponding to a touch position by calculating the position of the touch position in the text content, so as to improve user experience.
In order to achieve the above object, the present application provides a content display method, which is applied to a mobile terminal, and includes: when the touch operation is recognized as the preset operation, screen capturing is carried out on the current display interface to obtain a screen capturing picture; performing image recognition processing on the screenshot picture to obtain text content contained in the screenshot picture and a longitudinal coordinate value of each line in the text content; when the ordinate value of a certain row in the screen capture picture is matched with the ordinate value y of the touch position, acquiring a first abscissa value X1 and a second abscissa value X2 of the matched row; and calculating the position of the touch position in the text content according to the first abscissa value X1 and the second abscissa value X2, and preferentially displaying the text content corresponding to the touch position.
Optionally, the preset operation is a touch operation of long-pressing the touch position for a preset time.
Optionally, after the step of obtaining the text content included in the screenshot picture and the ordinate value of each line in the text content, the method further includes: acquiring the longitudinal coordinate value y of the touch position; and comparing the longitudinal coordinate value y of the touch position with the longitudinal coordinate value of each row to find a matched row whose longitudinal coordinate value matches the longitudinal coordinate value y of the touch position.
Optionally, the first abscissa value X1 is the abscissa value of the first character in the matching row, and the second abscissa value X2 is the abscissa value of the last character in the matching row.
Optionally, the step of calculating the position of the touch position in the text content includes: calculating a first index value index1 of the touch position in the matching row and calculating a second index value index2 of the first character of the matching row in the text content; and calculating the position index3 of the touch position in the text content according to the first index value index1 and the second index value index2, wherein index3 = index1 + index2.
Optionally, the step of calculating the first index value index1 of the touch position in the matching row includes: acquiring the number of characters text1.length() of the matching row and the abscissa value x of the touch position; and calculating the first index value index1 of the touch position in the matching row according to the first abscissa value X1, the second abscissa value X2, the abscissa value x of the touch position, and the number of characters text1.length() of the matching row, wherein index1 = [(x - X1)/(X2 - X1)] * text1.length().
Optionally, the step of calculating the second index value index2 of the first character of the matching row in the text content includes: calculating the second index value index2 of the first character of the matching row in the text content according to the text content text1 of the matching row, wherein index2 = text.indexOf(text1), text being the recognized text content.
Optionally, after the step of preferentially displaying the text content corresponding to the touch position, the method further includes: and displaying other text contents contained in the screen capture picture after the text contents corresponding to the touch position.
The present application further provides a mobile terminal, the mobile terminal including: a touch screen; a processor; and a memory connected to the processor, the memory containing control instructions which, when read by the processor, control the mobile terminal to implement the above content display method.
The present application also provides a computer-readable storage medium having one or more programs, which are executed by one or more processors, to implement the above-described content display method.
According to the content display method, the mobile terminal and the computer-readable storage medium, when the touch operation is identified as the preset operation, firstly, a screen capture of the current display interface is performed to obtain a screen capture picture; secondly, image recognition processing is performed on the screen capture picture to obtain the text content contained in it and the ordinate value of each line in the text content; when the ordinate value of a certain line in the screenshot picture matches the ordinate value y of the touch position, a first abscissa value X1 and a second abscissa value X2 of the matching line are acquired; finally, the position of the touch position in the text content is calculated according to the first abscissa value X1 and the second abscissa value X2, and the text content corresponding to the touch position is preferentially displayed. In this way, by calculating the position of the touch position in the text content, the text content corresponding to the touch position is displayed first, improving the user experience. Further, after the text content corresponding to the touch position is preferentially displayed, the other text content contained in the screen capture picture can be displayed after it, so that all the text content of the screen capture picture is displayed.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of an optional mobile terminal for implementing various embodiments of the present application;
FIG. 2 is a schematic diagram of a communication network system of the mobile terminal shown in FIG. 1;
fig. 3 is a flowchart of a content display method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are adopted only to facilitate the description of the present invention and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like.
The following description takes a mobile terminal as an example, and it will be understood by those skilled in the art that the configuration according to the embodiments of the present invention can also be applied to fixed terminals, except for elements specifically used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The various components of the mobile terminal 100 are described in detail below with reference to fig. 1:
The radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call; specifically, it receives downlink information from a base station and forwards it to the processor 110 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplex Long Term Evolution), and TDD-LTE (Time Division Duplex Long Term Evolution).
WiFi belongs to short-distance wireless transmission technology, and the mobile terminal can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 102, and provides wireless broadband internet access for the user. Although fig. 1 shows the WiFi module 102, it is understood that it does not belong to the essential constitution of the mobile terminal, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the case of a phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near it (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication Network system according to an embodiment of the present invention, where the communication Network system is an LTE system of a universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an IP service 204 of an operator, which are in communication connection in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving gateway) 2034, a PGW (PDN gateway) 2035, and a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME2031 is a control node that handles signaling between the UE201 and the EPC203, and provides bearer and connection management. HSS2032 is used to provide registers to manage functions such as home location register (not shown) and holds subscriber specific information about service characteristics, data rates, etc. All user data may be sent through SGW2034, PGW2035 may provide IP address assignment for UE201 and other functions, and PCRF2036 is a policy and charging control policy decision point for traffic data flow and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, the present invention provides various embodiments of the method.
Fig. 3 is a flowchart of an embodiment of the content display method provided in the present application. Once the method of this embodiment is triggered by the user, the process is executed automatically by the mobile terminal 100. The steps may be executed sequentially in the order shown in the flowchart, or several steps may be executed simultaneously, depending on the actual situation, which is not limited herein. The content display method provided by the application comprises the following steps:
step S310, when the touch operation is recognized as the preset operation, screen capturing is carried out on the current display interface to obtain a screen capturing picture;
step S330, carrying out image recognition processing on the screenshot picture so as to obtain text content contained in the screenshot picture and a longitudinal coordinate value of each line in the text content;
step S350, when the vertical coordinate value of a certain row in the screen capture picture is matched with the vertical coordinate value y of the touch position, acquiring a first horizontal coordinate value X1 and a second horizontal coordinate value X2 of the matched row;
step S370, calculating the position of the touch position in the text content according to the first abscissa value X1 and the second abscissa value X2, and preferentially displaying the text content corresponding to the touch position.
Through the embodiment, the position of the touch position in the text content is calculated, so that the text content corresponding to the touch position is preferentially displayed, and the user experience is improved.
The above steps will be specifically described with reference to specific examples.
In step S310, when the touch operation is identified as the preset operation, a screen capture is performed on the current display interface to obtain a screen capture picture.
Specifically, when the user wants to identify some content of the display interface to obtain text content of the display interface, a touch operation may be performed on the display interface. The touch operations may include, but are not limited to, long press touch, click, press, and slide. The long-press touch may include a single-finger long press, a double-finger long press, a multi-finger long press, and a finger joint long press. The clicks may include single-finger clicks, double-finger clicks, and multi-finger clicks. The pressing may include single-finger large-area pressing, double-finger large-area pressing, and multi-finger large-area pressing. The sliding can be performed by a single finger, double fingers or multiple fingers according to a preset track. If the touch operation is sliding according to a preset track, the sliding track can be a closed graph so as to identify the text content in the closed graph.
In this embodiment, when a touch operation of the user on the mobile terminal 100 is detected and the touch operation is the preset operation, the screen capture operation is triggered to capture the current display interface and obtain a screen capture picture. For example, the preset operation is a touch operation of long-pressing the touch position for a preset time. The preset time may be 2s, 5s, etc., and can be set in advance according to the user's requirements. Taking a preset time of 2s as an example, when it is detected that the user long-presses the touch position on the mobile terminal 100 for 2s, a screen capture of the current display interface is performed to obtain a screen capture picture. The preset operation may be used to wake up the mobile terminal 100 to recognize the text content of the screenshot picture.
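For illustration only, the following minimal sketch (assuming an Android environment; the class ScreenCaptureTrigger and its fields are hypothetical and not part of this application) shows how a long press might record the touch coordinates and capture the current display interface into a bitmap. Note that GestureDetector fires onLongPress after the system long-press timeout, so a preset time such as 2s would need its own timing logic.

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.View;

// Hypothetical sketch: on a long press, record the touch position (x, y) and
// capture the current display interface by drawing the root view into a bitmap.
class ScreenCaptureTrigger extends GestureDetector.SimpleOnGestureListener {
    private final View rootView;   // current display interface
    private float touchX, touchY;  // abscissa x and ordinate y of the touch position
    private Bitmap screenshot;     // screen capture picture

    ScreenCaptureTrigger(View rootView) {
        this.rootView = rootView;
    }

    @Override
    public void onLongPress(MotionEvent e) {
        touchX = e.getX();
        touchY = e.getY();
        screenshot = Bitmap.createBitmap(rootView.getWidth(), rootView.getHeight(),
                Bitmap.Config.ARGB_8888);
        rootView.draw(new Canvas(screenshot));
    }
}
```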
In step S330, image recognition processing is performed on the screenshot picture to obtain text content included in the screenshot picture and a longitudinal coordinate value of each line in the text content.
Specifically, the screenshot includes an image and/or a text, and after the screenshot is obtained, the mobile terminal 100 may store an image portion and/or a text portion of the screenshot. In this embodiment, after obtaining the screenshot picture, the mobile terminal 100 stores the text portion of the screenshot picture and performs image recognition processing on the text portion of the screenshot picture. For example, the mobile terminal 100 may use an OCR (Optical Character Recognition) technology to recognize text content included in the screenshot picture. The OCR technology can scan text data, then analyze and process image files, and acquire character and layout information. In other embodiments, the mobile terminal 100 may further perform image recognition processing on the screenshot by using other image recognition technologies to recognize text content contained in the screenshot, which is not listed here.
In this embodiment, image recognition processing may be performed on a bitmap of the screenshot picture to obtain the text content contained in the screenshot picture and the ordinate value of each line in the text content.
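The application does not prescribe a particular recognition engine or output format; purely as an assumption for the sketches that follow, the hypothetical RecognizedLine structure below holds the per-line result of step S330: the line's text content together with its vertical extent and the abscissas of its first and last characters (storing a vertical range rather than a single ordinate value is a simplification of this sketch).

```java
// Hypothetical per-line result of the image recognition in step S330.
class RecognizedLine {
    final String text;      // text1: text content of this line
    final int top, bottom;  // vertical (ordinate) extent of this line in the screenshot
    final int x1, x2;       // abscissas of the first (X1) and last (X2) character

    RecognizedLine(String text, int top, int bottom, int x1, int x2) {
        this.text = text;
        this.top = top;
        this.bottom = bottom;
        this.x1 = x1;
        this.x2 = x2;
    }
}
```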
In this embodiment, after the step of acquiring the text content included in the screenshot picture and the ordinate value of each line in the text content, the method may further include:
step S3301, obtaining a longitudinal coordinate value y of the touch position; and
step S3302, comparing the ordinate value y of the touch position with the ordinate value of each line to find a matching line whose ordinate value matches the ordinate value y of the touch position.
Specifically, the mobile terminal 100 generally has a rectangular display screen. And establishing a two-dimensional coordinate system by using the rectangular display screen, defining the lower left corner of the rectangular display screen as a coordinate origin, and taking the short side of the rectangular display screen as a transverse axis and the long side of the rectangular display screen as a longitudinal axis.
In this embodiment, after the text content contained in the screenshot picture and the ordinate value of each line in the text content are acquired, the ordinate value y of the touch position is acquired and then compared with the ordinate value of each line to find a matching line. It can be understood that when the ordinate value of a certain line in the text content is not equal to the ordinate value y of the touch position, that line does not match the touch position and is not the matching line; when the ordinate value of a certain line in the text content is equal to the ordinate value y of the touch position, that line matches the ordinate value y of the touch position and is the matching line.
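Reusing the hypothetical RecognizedLine structure above, the lookup of steps S3301 and S3302 could be sketched as follows; here "matches" is read as the touch ordinate y falling within the line's vertical extent, which is an assumption of the sketch rather than the strict equality described above.

```java
import java.util.List;

// Hypothetical helper collecting the position calculations of this embodiment.
final class TouchTextLocator {

    // Steps S3301-S3302: find the line whose vertical extent contains the ordinate
    // value y of the touch position; returns null when no line matches.
    static RecognizedLine findMatchingLine(List<RecognizedLine> lines, float touchY) {
        for (RecognizedLine line : lines) {
            if (touchY >= line.top && touchY <= line.bottom) {
                return line;
            }
        }
        return null;
    }
}
```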
In step S350, when the ordinate value of a certain line in the screenshot matches the ordinate value y of the touch position, a first abscissa value X1 and a second abscissa value X2 of the matching line are acquired.
Specifically, when the ordinate value of a certain row in the screenshot picture is equal to the ordinate value y of the touch position, that row is the matching row, and a first abscissa value X1 and a second abscissa value X2 of the matching row are acquired. The first abscissa value X1 is the abscissa value of the first character of the matching row, and the second abscissa value X2 is the abscissa value of the last character of the matching row. It is understood that, since the ordinate value of the matching row matches the ordinate value y of the touch position, the abscissa value of the touch position may lie between the first abscissa value X1 and the second abscissa value X2, or may equal the first abscissa value X1 or the second abscissa value X2.
In step S370, the position of the touch position in the text content is calculated according to the first abscissa value X1 and the second abscissa value X2, and the text content corresponding to the touch position is preferentially displayed.
Specifically, when the touch position is not in the text portion of the screenshot picture, the text content corresponding to the touch position may be the text content closest to the touch position.
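The application only states that the closest text content may be used in this case; one plausible reading, added here as a further method of the hypothetical TouchTextLocator class above, is to fall back to the line whose vertical centre is nearest to the touch ordinate.

```java
// Fallback sketch: when no line contains the touch ordinate, pick the line whose
// vertical centre lies closest to it ("the text content closest to the touch position").
static RecognizedLine nearestLine(List<RecognizedLine> lines, float touchY) {
    RecognizedLine nearest = null;
    float bestDistance = Float.MAX_VALUE;
    for (RecognizedLine line : lines) {
        float centre = (line.top + line.bottom) / 2f;
        float distance = Math.abs(centre - touchY);
        if (distance < bestDistance) {
            bestDistance = distance;
            nearest = line;
        }
    }
    return nearest;
}
```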
In this embodiment, the step of calculating the position of the touch position in the text content includes:
step S3701, calculating a first index value index1 of the touch position in the matching line and calculating a second index value index2 of the first character of the matching line in the text content; and
step S3702, calculating the position index3 of the touch position in the text content according to the first index value index1 and the second index value index2, where index3 = index1 + index2.
Specifically, the first index value index1 of the touch position in the matching line is the position of the touch position within the matching line, and the second index value index2 is the position of the first character of the matching line within the text content.
In this embodiment, the step of calculating the first index value index1 of the touch position at the matching row includes:
step S37011, obtaining the number of characters text1.length() of the matching line and the abscissa value x of the touch position; and
step S37012, calculating the first index value index1 of the touch position in the matching line according to the first abscissa value X1, the second abscissa value X2, the abscissa value x of the touch position, and the number of characters text1.length() of the matching line, where index1 = [(x - X1)/(X2 - X1)] * text1.length().
Specifically, the text content of the matching line may be denoted text1; accordingly, text1.length() is the number of characters in text1, that is, in the text content of the matching line.
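A direct transcription of this formula, as another method of the hypothetical TouchTextLocator class above (the cast to int truncates; the application does not specify rounding behaviour):

```java
// Step S37012: index1 = [(x - X1) / (X2 - X1)] * text1.length()
// x is the abscissa of the touch position, x1/x2 the abscissas of the first and
// last characters of the matching line, text1 the text content of the matching line.
static int computeIndex1(float x, float x1, float x2, String text1) {
    float relativeOffset = (x - x1) / (x2 - x1);  // 0.0 at the first character, 1.0 at the last
    return (int) (relativeOffset * text1.length());
}
```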
In this embodiment, the step of calculating the second index value index2 of the first character of the matching line in the text content includes:
step S37013, calculating the second index value index2 of the first character of the matching line in the text content according to the text content text1 of the matching line, where index2 = text.indexOf(text1) and text is the full recognized text content.
Specifically, first, the first index value index1 of the touch position in the matching line is calculated according to the first abscissa value X1, the second abscissa value X2, the abscissa value x of the touch position, and the number of characters text1.length() of the matching line; secondly, the second index value index2 of the first character of the matching line in the text content is calculated according to the text content text1 of the matching line; finally, the position index3 of the touch position in the text content is calculated according to the first index value index1 and the second index value index2, so that the text content corresponding to the touch position can be preferentially displayed in the subsequent steps. In this way, the text content the user most wants to view is displayed first, improving the user experience. Here, indexOf() returns the position at which a specified substring first appears in a string.
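Putting the two index values together (steps S37013 and S3702), again as a method of the hypothetical TouchTextLocator class, where text is the full recognized text content and text1 the text content of the matching line:

```java
// index2 = text.indexOf(text1): position of the matching line's first character in
// the full text content; index3 = index1 + index2: position of the touch position
// in the text content.
static int computeIndex3(String text, String text1, float x, float x1, float x2) {
    int index1 = computeIndex1(x, x1, x2, text1);
    int index2 = text.indexOf(text1);  // first occurrence of the matching line in the text
    return index1 + index2;
}
```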
In this embodiment, after the step of preferentially displaying the text content corresponding to the touch position, the method may further include:
step S3703, displaying the other text content contained in the screenshot picture after the text content corresponding to the touch position.
Specifically, after preferentially displaying the text content corresponding to the touch position, the mobile terminal 100 may display other text contents included in the screenshot picture after the text content corresponding to the touch position, so that the text contents included in the screenshot picture may be completely displayed under the condition that the text content that the user most wants to pay attention to is preferentially displayed.
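One plausible way to realize this priority display, shown only as a sketch since the application does not fix how the reordered content is rendered, is to place the text content starting at index3 first and append the remaining text content of the screenshot picture afterwards:

```java
// Sketch of the priority display: the text content corresponding to the touch
// position (from index3 onwards) is shown first, followed by the other text
// content contained in the screen capture picture.
static String buildDisplayText(String text, int index3) {
    String priority = text.substring(index3);
    String remainder = text.substring(0, index3);
    return priority + "\n" + remainder;
}
```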
Through this embodiment, when the touch operation is identified as the preset operation, firstly, a screen capture of the current display interface is performed to obtain a screen capture picture; secondly, image recognition processing is performed on the screen capture picture to obtain the text content contained in it and the ordinate value of each line in the text content; when the ordinate value of a certain line in the screenshot picture matches the ordinate value y of the touch position, a first abscissa value X1 and a second abscissa value X2 of the matching line are acquired; finally, the position of the touch position in the text content is calculated according to the first abscissa value X1 and the second abscissa value X2, and the text content corresponding to the touch position is preferentially displayed. In this way, by calculating the position of the touch position in the text content, the text content corresponding to the touch position is displayed first, improving the user experience. Further, after the text content corresponding to the touch position is preferentially displayed, the other text content contained in the screen capture picture can be displayed after it, so that all the text content of the screen capture picture is displayed.
Fig. 4 is a schematic structural diagram of the mobile terminal 100 according to an embodiment of the present application. The mobile terminal 100 includes: a touch panel 1071; a processor 110; and a memory 109 connected to the processor 110. The memory 109 contains control instructions, and when the processor 110 reads the control instructions, the mobile terminal 100 is controlled to implement the following steps:
when the touch operation is recognized as the preset operation, screen capturing is carried out on the current display interface to obtain a screen capturing picture; performing image recognition processing on the screenshot picture to obtain text content contained in the screenshot picture and a longitudinal coordinate value of each line in the text content; when the ordinate value of a certain row in the screen capture picture is matched with the ordinate value y of the touch position, acquiring a first abscissa value X1 and a second abscissa value X2 of the matched row; and calculating the position of the touch position in the text content according to the first abscissa value X1 and the second abscissa value X2, and preferentially displaying the text content corresponding to the touch position.
Optionally, the preset operation is a touch operation of long-pressing the touch position for a preset time.
Optionally, after the step of obtaining the text content included in the screenshot picture and the ordinate value of each line in the text content, the method further includes: acquiring the longitudinal coordinate value y of the touch position; and comparing the longitudinal coordinate value y of the touch position with the longitudinal coordinate value of each row to find a matched row whose longitudinal coordinate value matches the longitudinal coordinate value y of the touch position.
Optionally, the first abscissa value X1 is the abscissa value of the first character in the matching row, and the second abscissa value X2 is the abscissa value of the last character in the matching row.
Optionally, the step of calculating the position of the touch position in the text content includes: calculating a first index value index1 of the touch position in the matching row and calculating a second index value index2 of the first character of the matching row in the text content; and calculating the position index3 of the touch position in the text content according to the first index value index1 and the second index value index2, wherein index3 = index1 + index2.
Optionally, the step of calculating the first index value index1 of the touch position in the matching row includes: acquiring the number of characters text1.length() of the matching row and the abscissa value x of the touch position; and calculating the first index value index1 of the touch position in the matching row according to the first abscissa value X1, the second abscissa value X2, the abscissa value x of the touch position, and the number of characters text1.length() of the matching row, wherein index1 = [(x - X1)/(X2 - X1)] * text1.length().
Optionally, the step of calculating the second index value index2 of the first character of the matching row in the text content includes: calculating the second index value index2 of the first character of the matching row in the text content according to the text content text1 of the matching row, wherein index2 = text.indexOf(text1), text being the recognized text content.
Optionally, after the step of preferentially displaying the text content corresponding to the touch position, the method further includes: and displaying other text contents contained in the screen capture picture after the text contents corresponding to the touch position.
Through the mobile terminal 100, when the touch operation is identified as the preset operation, firstly, a screen capture of the current display interface is performed to obtain a screen capture picture; secondly, image recognition processing is performed on the screen capture picture to obtain the text content contained in it and the ordinate value of each line in the text content; when the ordinate value of a certain line in the screenshot picture matches the ordinate value y of the touch position, a first abscissa value X1 and a second abscissa value X2 of the matching line are acquired; finally, the position of the touch position in the text content is calculated according to the first abscissa value X1 and the second abscissa value X2, and the text content corresponding to the touch position is preferentially displayed. In this way, by calculating the position of the touch position in the text content, the text content corresponding to the touch position is displayed first, improving the user experience. Further, after the text content corresponding to the touch position is preferentially displayed, the other text content contained in the screen capture picture can be displayed after it, so that all the text content of the screen capture picture is displayed.
Embodiments of the present application also provide a computer-readable storage medium having one or more programs, where the one or more programs are executed by one or more processors to implement the following steps:
when the touch operation is recognized as the preset operation, screen capturing is carried out on the current display interface to obtain a screen capturing picture; performing image recognition processing on the screenshot picture to obtain text content contained in the screenshot picture and a longitudinal coordinate value of each line in the text content; when the ordinate value of a certain row in the screen capture picture is matched with the ordinate value y of the touch position, acquiring a first abscissa value X1 and a second abscissa value X2 of the matched row; and calculating the position of the touch position in the text content according to the first abscissa value X1 and the second abscissa value X2, and preferentially displaying the text content corresponding to the touch position.
Optionally, the preset operation is a touch operation of long-pressing the touch position for a preset time.
Optionally, after the step of obtaining the text content included in the screenshot picture and the ordinate value of each line in the text content, the method further includes: acquiring the longitudinal coordinate value y of the touch position; and comparing the longitudinal coordinate value y of the touch position with the longitudinal coordinate value of each row to find a matched row whose longitudinal coordinate value matches the longitudinal coordinate value y of the touch position.
Optionally, the first abscissa value X1 is the abscissa value of the first character in the matching row, and the second abscissa value X2 is the abscissa value of the last character in the matching row.
Optionally, the step of calculating the position of the touch position in the text content includes: calculating a first index value index1 of the touch position in the matching row and calculating a second index value index2 of the first character of the matching row in the text content; and calculating the position index3 of the touch position in the text content according to the first index value index1 and the second index value index2, wherein index3 = index1 + index2.
Optionally, the step of calculating the first index value index1 of the touch position in the matching row includes: acquiring the number of characters text1.length() of the matching row and the abscissa value x of the touch position; and calculating the first index value index1 of the touch position in the matching row according to the first abscissa value X1, the second abscissa value X2, the abscissa value x of the touch position, and the number of characters text1.length() of the matching row, wherein index1 = [(x - X1)/(X2 - X1)] * text1.length().
Optionally, the step of calculating the second index value index2 of the first character of the matching row in the text content includes: calculating the second index value index2 of the first character of the matching row in the text content according to the text content text1 of the matching row, wherein index2 = text.indexOf(text1), text being the recognized text content.
Optionally, after the step of preferentially displaying the text content corresponding to the touch position, the method further includes: and displaying other text contents contained in the screen capture picture after the text contents corresponding to the touch position.
Through the computer-readable storage medium, when the touch operation is identified as the preset operation, firstly, a screen capture of the current display interface is performed to obtain a screen capture picture; secondly, image recognition processing is performed on the screen capture picture to obtain the text content contained in it and the ordinate value of each line in the text content; when the ordinate value of a certain line in the screenshot picture matches the ordinate value y of the touch position, a first abscissa value X1 and a second abscissa value X2 of the matching line are acquired; finally, the position of the touch position in the text content is calculated according to the first abscissa value X1 and the second abscissa value X2, and the text content corresponding to the touch position is preferentially displayed. In this way, by calculating the position of the touch position in the text content, the text content corresponding to the touch position is displayed first, improving the user experience. Further, after the text content corresponding to the touch position is preferentially displayed, the other text content contained in the screen capture picture can be displayed after it, so that all the text content of the screen capture picture is displayed.
The embodiments of the present application also provide a computer-readable storage medium. The computer-readable storage medium stores one or more programs. The computer-readable storage medium may include volatile memory, such as random access memory; it may also include non-volatile memory, such as read-only memory, flash memory, a hard disk, or a solid-state disk; it may also comprise a combination of the above kinds of memory.
The corresponding technical features in the above embodiments may be combined with one another as long as the resulting schemes are not contradictory and remain implementable.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A content display method is applied to a mobile terminal, and is characterized by comprising the following steps: when the touch operation is recognized as the preset operation, screen capturing is carried out on the current display interface to obtain a screen capturing picture;
performing image recognition processing on the screenshot picture to obtain text content contained in the screenshot picture and a longitudinal coordinate value of each line in the text content;
when the ordinate value of a certain line in the screen capture picture is matched with the ordinate value y of the touch position, acquiring a first abscissa value X1 and a second abscissa value X2 of the matched line, wherein the touch position is a position corresponding to the touch operation; and
calculating the position of the touch position in the text content according to the first abscissa value X1 and the second abscissa value X2, and preferentially displaying the text content corresponding to the touch position;
after the step of obtaining the text content contained in the screenshot picture and the ordinate value of each line in the text content, the method further comprises:
acquiring a longitudinal coordinate value y of the touch position; and
comparing the longitudinal coordinate value y of the touch position with the longitudinal coordinate value of each row to find a matched row whose longitudinal coordinate value matches the longitudinal coordinate value y of the touch position;
after the step of preferentially displaying the text content corresponding to the touch position, the method further includes:
and displaying other text contents contained in the screen capture picture after the text contents corresponding to the touch position.
2. The content display method according to claim 1, wherein the preset operation is a touch operation of long-pressing the touch position for a preset time.
3. The method as claimed in claim 1, wherein said first abscissa value X1 is the abscissa value of the first character in the matching row, and said second abscissa value X2 is the abscissa value of the last character in the matching row.
4. The content display method according to claim 1, wherein the step of calculating the position of the touch position at the text content comprises:
calculating a first index value index1 of the touch position in the matching line and calculating a second index value index2 of a first character of the matching line in the text content, wherein the first index value index1 is the position of the touch position in the matching line, and the second index value index2 is the position of the first character of the matching line in the text content; and
and calculating the position index3 of the touch position in the text content according to the first angle index1 and the second angle index2, wherein index3= index1+ index 2.
5. The content display method according to claim 4, wherein the step of calculating the first index value index1 of the touch position in the matching line comprises:
acquiring the character count text1.length() of the character string text1 of the matching line and the abscissa value x of the touch position; and
calculating the first index value index1 of the touch position in the matching line according to the first abscissa value X1, the second abscissa value X2, the abscissa value x of the touch position, and the character count text1.length() of the matching line, wherein index1 = [(x - X1)/(X2 - X1)] × text1.length().
6. The content display method according to claim 4, wherein the step of calculating the second index value index2 of the first character of the matching line in the text content comprises:
calculating the second index value index2 of the first character of the matching line in the text content text according to the text content text1 of the matching line, wherein index2 = text.indexOf(text1).
7. A mobile terminal, characterized in that the mobile terminal comprises:
a touch screen;
a processor; and
a memory connected to the processor, the memory storing control instructions which, when read by the processor, control the mobile terminal to implement the content display method according to any one of claims 1 to 6.
8. A computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the content display method according to any one of claims 1 to 6.
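
The index arithmetic recited in claims 4 to 6 can be illustrated with a short sketch. The Java fragment below is a minimal, hypothetical illustration only and is not part of the claimed method or of any particular OCR library: the class and method names (OcrLine, findMatchingLine, computeTouchIndex) and the tolerance-based line matching are assumptions introduced for readability, and the screenshot and image recognition steps that produce the per-line text and coordinates are assumed to have run already.

import java.util.List;

public class TouchPositionLocator {

    // One recognized line from the OCR result of the screenshot picture.
    public static class OcrLine {
        final String text; // recognized characters of the line (text1)
        final float y;     // ordinate value of the line
        final float x1;    // abscissa value X1 of the first character
        final float x2;    // abscissa value X2 of the last character

        OcrLine(String text, float y, float x1, float x2) {
            this.text = text;
            this.y = y;
            this.x1 = x1;
            this.x2 = x2;
        }
    }

    // Finds the line whose ordinate value matches the ordinate value y of the
    // touch position (claim 1). The tolerance is an assumption, since the claims
    // do not specify how the match is decided. Returns null if no line matches.
    public static OcrLine findMatchingLine(List<OcrLine> lines, float touchY, float tolerance) {
        for (OcrLine line : lines) {
            if (Math.abs(line.y - touchY) <= tolerance) {
                return line;
            }
        }
        return null;
    }

    // Computes index3, the position of the touch position in the full recognized
    // text (claims 4 to 6): index3 = index1 + index2.
    public static int computeTouchIndex(String fullText, OcrLine match, float touchX) {
        // Claim 5: index1 = [(x - X1) / (X2 - X1)] * text1.length()
        int index1 = (int) (((touchX - match.x1) / (match.x2 - match.x1)) * match.text.length());

        // Claim 6: index2 = text.indexOf(text1), i.e. where the matching line
        // begins inside the full recognized text (-1 would indicate an OCR mismatch).
        int index2 = fullText.indexOf(match.text);

        // Claim 4: index3 = index1 + index2
        return index1 + index2;
    }
}

With index3 available, the terminal can reflow or scroll the recognized text so that the character at index3 is displayed first and the remaining text follows, which corresponds to the preferential display step of claim 1.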
CN201910555988.0A 2019-06-25 2019-06-25 Content display method, mobile terminal, and computer-readable storage medium Active CN110275667B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910555988.0A CN110275667B (en) 2019-06-25 2019-06-25 Content display method, mobile terminal, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN110275667A CN110275667A (en) 2019-09-24
CN110275667B (en) 2021-12-17

Family

ID=67963129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910555988.0A Active CN110275667B (en) 2019-06-25 2019-06-25 Content display method, mobile terminal, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN110275667B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111626035B (en) * 2020-04-08 2022-09-02 华为技术有限公司 Layout analysis method and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103365591A (en) * 2012-04-06 2013-10-23 LG Electronics Inc. Electronic device and method of controlling same
CN104778195A (en) * 2014-12-26 2015-07-15 Beijing Qihoo Technology Co Ltd Terminal and touch operation-based searching method
CN107632773A (en) * 2017-10-17 2018-01-26 Beijing Baidu Netcom Science and Technology Co Ltd Method and device for obtaining information
CN107729897A (en) * 2017-11-03 2018-02-23 Chengdu Yewang Digital Technology Co Ltd Text manipulation method, apparatus and terminal
CN108628814A (en) * 2017-03-20 2018-10-09 Zhuhai Kingsoft Office Software Co Ltd Method and device for quickly inserting recognized text
CN109753202A (en) * 2018-12-29 2019-05-14 Vivo Mobile Communication Co Ltd Screenshot method and mobile terminal

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100720335B1 (en) * 2006-12-20 2007-05-23 최경순 Apparatus for inputting a text corresponding to relative coordinates values generated by movement of a touch position and method thereof
US8918739B2 (en) * 2009-08-24 2014-12-23 Kryon Systems Ltd. Display-independent recognition of graphical user interface control
US9170714B2 (en) * 2012-10-31 2015-10-27 Google Technology Holdings LLC Mixed type text extraction and distribution
CN106293034A (en) * 2015-06-11 2017-01-04 ZTE Corp Information output method and terminal
US20190114065A1 (en) * 2017-10-17 2019-04-18 Getac Technology Corporation Method for creating partial screenshot


Similar Documents

Publication Publication Date Title
CN108572764B (en) Character input control method and device and computer readable storage medium
CN107329682B (en) Edge interaction method and mobile terminal
CN109195143B (en) Network access method, mobile terminal and readable storage medium
CN108449513B (en) Interactive regulation and control method, equipment and computer readable storage medium
CN110180181B (en) Method and device for capturing wonderful moment video and computer readable storage medium
CN108563388B (en) Screen operation method, mobile terminal and computer-readable storage medium
CN107422956B (en) Mobile terminal operation response method, mobile terminal and readable storage medium
CN112188058A (en) Video shooting method, mobile terminal and computer storage medium
CN110278481B (en) Picture-in-picture implementation method, terminal and computer readable storage medium
CN109683778B (en) Flexible screen control method and device and computer readable storage medium
CN109683797B (en) Display area control method and device and computer readable storage medium
CN109491577B (en) Holding interaction method and device and computer readable storage medium
CN112437472B (en) Network switching method, equipment and computer readable storage medium
CN112102780B (en) Display frame rate regulation and control method, device and computer readable storage medium
CN110083294B (en) Screen capturing method, terminal and computer readable storage medium
CN109710168B (en) Screen touch method and device and computer readable storage medium
CN109683796B (en) Interaction control method, equipment and computer readable storage medium
CN109669616B (en) Side screen interaction control method and device and computer readable storage medium
CN109462829B (en) Call transfer method, device and computer readable storage medium
CN108900696B (en) Data processing method, terminal and computer readable storage medium
CN110955397A (en) Method for setting frame rate of game terminal, game terminal and storage medium
CN110275667B (en) Content display method, mobile terminal, and computer-readable storage medium
CN107831998B (en) Interface moving method, terminal and computer readable storage medium
CN107562304B (en) Control method, mobile terminal and computer readable storage medium
CN112532838B (en) Image processing method, mobile terminal and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant