CN109002759A - text recognition method, device, mobile terminal and storage medium - Google Patents

Text recognition method, device, mobile terminal and storage medium

Info

Publication number
CN109002759A
CN109002759A CN201810586716.2A CN201810586716A CN109002759A CN 109002759 A CN109002759 A CN 109002759A CN 201810586716 A CN201810586716 A CN 201810586716A CN 109002759 A CN109002759 A CN 109002759A
Authority
CN
China
Prior art keywords
control
user interface
touch
image
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810586716.2A
Other languages
Chinese (zh)
Inventor
揭骏仁
林建华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810586716.2A priority Critical patent/CN109002759A/en
Publication of CN109002759A publication Critical patent/CN109002759A/en
Priority to PCT/CN2019/084377 priority patent/WO2019233212A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40Document-oriented image-based pattern recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0483Interaction with page-structured environments, e.g. book metaphor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition

Abstract

Embodiments of the present application disclose a text recognition method, a device, a mobile terminal and a storage medium, relating to the technical field of mobile terminals. The method includes: detecting a touch operation acting on a user interface; when the touch operation meets a preset condition, identifying an interface element in the user interface corresponding to the position of the touch operation; when the identification fails, intercepting a control image corresponding to the position of the touch operation and recognizing the control image; and overlaying at least one card on a partial region of the user interface, the at least one card being used to display the information recognized from the control image. The text recognition method, device, mobile terminal and storage medium provided by the embodiments of the present application use image recognition technology to improve the speed and accuracy of word capture and recognition, thereby improving user experience.

Description

Text recognition method, device, mobile terminal and storage medium
Technical field
This application relates to the technical field of mobile terminals, and more particularly to a text recognition method, a device, a mobile terminal and a storage medium.
Background technique
With the development of science and technology, mobile terminals have become one of the most commonly used electronic products in people's daily lives. Users often do a great deal of reading on a mobile terminal; however, when a user wants a more detailed understanding of the information being read, the user has to search for the detailed information manually, which makes the operation cumbersome and inconvenient.
Summary of the invention
In view of the above problems, the present application proposes a text recognition method, a device, a mobile terminal and a storage medium that use image recognition technology to improve the speed and accuracy of word capture and recognition, thereby improving user experience.
In a first aspect, an embodiment of the present application provides a text recognition method. The method includes: detecting a touch operation acting on a user interface; when the touch operation meets a preset condition, identifying an interface element in the user interface corresponding to the position of the touch operation; when the identification fails, intercepting a control image corresponding to the position of the touch operation and recognizing the control image; and overlaying at least one card on a partial region of the user interface, the at least one card being used to display the information recognized from the control image.
In a second aspect, an embodiment of the present application provides a text recognition device. The device includes: an interface element identification module, configured to detect a touch operation acting on a user interface and, when the touch operation meets a preset condition, identify an interface element in the user interface corresponding to the position of the touch operation; an image interception module, configured to, when the identification fails, intercept a control image corresponding to the position of the touch operation and recognize the control image; and a card display module, configured to overlay at least one card on a partial region of the user interface, the at least one card being used to display the information recognized from the control image.
In a third aspect, an embodiment of the present application provides a mobile terminal including a touch screen, a memory and a processor, the touch screen and the memory being coupled to the processor, and the memory storing instructions that, when executed by the processor, cause the processor to perform the above method.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code executable by a processor, the program code causing the processor to perform the above method.
The text recognition method, device, mobile terminal and storage medium provided by the embodiments of the present application detect a touch operation acting on a user interface; when the touch operation meets a preset condition, identify an interface element in the user interface corresponding to the position of the touch operation; when the identification fails, intercept a control image corresponding to the position of the touch operation and recognize the control image; and overlay at least one card on a partial region of the user interface, the at least one card being used to display the information recognized from the control image. Image recognition technology thereby improves the speed and accuracy of word capture and recognition, improving user experience.
Detailed description of the invention
To explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art may derive other drawings from them without creative effort.
Fig. 1 shows a flow diagram of a first text recognition method provided by an embodiment of the present application;
Fig. 2 shows a schematic diagram of a user interface of a mobile terminal provided by an embodiment of the present application;
Fig. 3 shows a flow diagram of a second text recognition method provided by an embodiment of the present application;
Fig. 4 shows a flow diagram of step S240 of the second text recognition method provided by an embodiment of the present application;
Fig. 5 shows another schematic diagram of a user interface of a mobile terminal provided by an embodiment of the present application;
Fig. 6 shows a flow diagram of step S270 of the second text recognition method provided by an embodiment of the present application;
Fig. 7 shows a flow diagram of a third text recognition method provided by an embodiment of the present application;
Fig. 8 shows a block diagram of a text recognition device provided by an embodiment of the present application;
Fig. 9 shows another block diagram of a text recognition device provided by an embodiment of the present application;
Fig. 10 shows a structural schematic diagram of a mobile terminal provided by an embodiment of the present application;
Fig. 11 shows a block diagram of a mobile terminal for performing a text recognition method according to an embodiment of the present application.
Specific embodiment
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
At present, when a user browses the Internet, chats, reads text, views pictures or watches videos on a mobile terminal, the user often becomes interested in some of the content and wants to search for more detailed information. To do so, the user first has to copy or memorize the content of interest, then open a browser, paste the copied content into the browser's search box (or type the memorized content into the search box) and search for the details. This operating process is very cumbersome, takes a long time and is error-prone. Through long-term research, the inventors found that an assistant mode built into the system can be used to acquire and recognize text content according to the user's touch operation; however, this approach often fails to acquire the text content, or acquires it incorrectly, because of the way the user touches the screen. In view of these technical problems, the inventors propose the text recognition method, device, mobile terminal and storage medium provided by the embodiments of the present application, which use image recognition technology to improve the speed and accuracy of word capture and recognition, thereby improving user experience. The specific text recognition method is described in detail in the following embodiments.
Embodiment
Referring to Fig. 1, Fig. 1 shows a flow diagram of a first text recognition method provided by an embodiment of the present application. The text recognition method uses image recognition technology to improve the speed and accuracy of word capture and recognition, thereby improving user experience. In a specific embodiment, the text recognition method is applied to the text recognition device 200 shown in Fig. 8 and to a mobile terminal (Fig. 10) equipped with the text recognition device 200. The specific flow of this embodiment is described below by taking a mobile terminal as an example. It should be understood that the mobile terminal to which this embodiment is applied may be a smartphone, a tablet computer, a wearable electronic device or the like, which is not specifically limited here. The flow shown in Fig. 1 is explained in detail below. The text recognition method may specifically include the following steps:
Step S110: detecting a touch operation acting on a user interface and, when the touch operation meets a preset condition, identifying an interface element in the user interface corresponding to the position of the touch operation.
In this embodiment, a touch operation acting on the user interface is detected. As one manner, the touch operation may include a single-finger tap, a multi-finger tap, a single-finger long press, a multi-finger long press, a heavy press, repeated taps, a slide, a copy operation, a press area and so on. A single-finger tap is a tap made on the user interface with one finger; a multi-finger tap is a simultaneous tap made on the user interface with multiple fingers; a single-finger long press means pressing the user interface with one finger for longer than a preset duration; a multi-finger long press means pressing the user interface with multiple fingers simultaneously for longer than a preset duration; a heavy press means pressing the user interface with a force exceeding a preset force; repeated taps means that the number of taps within a preset time exceeds a preset number; a slide operation is a single-finger slide on the user interface; a copy operation means copying text information in the user interface to the clipboard; and a press area means that the area pressed on the user interface by one finger exceeds a preset area.
Further, the mobile terminal presets and stores a preset condition, which serves as the basis for judging the touch operation. That is, after the touch operation is detected, it is compared with the preset condition to judge whether the touch operation meets the preset condition. As one manner, when the touch operation meets the preset condition, the position of the touch operation is obtained, for example the coordinate information corresponding to the position of the touch operation, and the interface element in the user interface corresponding to that position is then identified. Specifically, the interface element includes, but is not limited to, text, pictures, audio and video. Meanwhile, the mobile terminal may determine and identify at least one interface element based on the position of the touch operation; for example, it may identify the interface element at the position of the touch operation, or it may also identify other interface elements located in the same paragraph as that interface element, which is not specifically limited here.
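By way of illustration only, the following Kotlin sketch shows one way such a preset-condition check could be expressed against an Android MotionEvent; the PresetCondition type and the concrete thresholds are assumptions introduced here and are not part of the disclosure.
    import android.view.MotionEvent
    // Assumed thresholds; the disclosure leaves the preset condition configurable.
    data class PresetCondition(
        val minPressMillis: Long = 500L,   // "long press exceeds a preset duration"
        val minPressure: Float = 0.8f,     // "heavy press exceeds a preset force"
        val minPointerSize: Float = 0.5f   // "press area exceeds a preset area"
    )
    // Returns true when the touch operation satisfies at least one preset condition,
    // which triggers identification of the interface element under the touch position.
    fun meetsPresetCondition(event: MotionEvent, condition: PresetCondition): Boolean {
        val pressDuration = event.eventTime - event.downTime
        return pressDuration >= condition.minPressMillis ||
            event.pressure >= condition.minPressure ||
            event.size >= condition.minPointerSize
    }
Other gestures listed above (multi-finger taps, repeated taps, copy operations) would require additional state tracking; the sketch covers only the single-event cases.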
Step S120: when the identification fails, intercepting a control image corresponding to the position of the touch operation and recognizing the control image.
When the interface element corresponding to the position of the touch operation is not successfully identified, that is, when the result of identifying the interface element corresponding to the position of the touch operation is empty, the text corresponding to the position of the touch operation is automatically captured as an image to obtain a control image, and the control image is recognized. Specifically, the system automatically obtains the control at the position of the text corresponding to the touch operation, intercepts the control, and then schedules an image-to-text OCR (Optical Character Recognition) module in the background to recognize it. Image-to-text recognition uses image-to-character recognition technology. It can be performed offline, that is, by porting an image-to-text recognition library to the mobile terminal, in which case the image information is recognized according to the image-to-text recognition library on the mobile terminal. It can also be performed online, that is, by sending the image to a remote image-to-text server for recognition: the image information is uploaded to the image-to-text server, the server performs the image-to-text recognition operation on the image information according to its internal image-to-text recognition library, and the recognition result is sent back to the mobile terminal. Further, in addition to returning the text information in the image, the image-to-text recognition may also attach the x coordinate, y coordinate, width and height of each character, which will not be described in detail here.
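A minimal sketch of the interception step follows, assuming an Android View as the control and a hypothetical OcrEngine interface standing in for either the on-device library or the remote server; neither the interface nor its method name comes from the disclosure.
    import android.graphics.Bitmap
    import android.graphics.Canvas
    import android.view.View
    // Hypothetical abstraction over the offline (on-device) and online (server-side) OCR paths.
    interface OcrEngine {
        fun recognize(image: Bitmap): String
    }
    // Draws the control lying under the touch position into an off-screen bitmap
    // ("intercepting the control image") and forwards it to the OCR module.
    fun recognizeControlText(control: View, ocr: OcrEngine): String {
        val bitmap = Bitmap.createBitmap(
            control.width.coerceAtLeast(1),
            control.height.coerceAtLeast(1),
            Bitmap.Config.ARGB_8888
        )
        control.draw(Canvas(bitmap))   // render the control into the bitmap
        return ocr.recognize(bitmap)   // offline library or remote server, per the embodiment
    }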
As one manner, while the system is performing image-to-text recognition, the user interface of the mobile terminal may display prompt information, the prompt information being used to inform the user that an image-to-text recognition operation is currently in progress.
Step S130: overlaying at least one card on a partial region of the user interface, the at least one card being used to display the information recognized from the control image.
Referring to Fig. 2, the partial region of the user interface may be located in the lower half of the user interface, in the upper half of the user interface, on the left side of the user interface, or on the right side of the user interface. Optionally, in this embodiment, the partial region is located in the lower half of the user interface, close to the bottom; its size is not specifically limited. Specifically, image-to-text recognition is performed on the intercepted control image to obtain at least one keyword in the control image, and a search is performed on the at least one keyword to obtain search result information corresponding to the content of the control image. The search result information is displayed in the form of cards, each card serving as a carrier of the search result information and displaying at least one search result. The amount of search result information displayed by each of the at least one card may be the same or different, and the search result information displayed by each card may come from the same application or from different applications. Further, the at least one card may also display word-segmentation information, that is, the at least one keyword obtained after the control image is recognized, so that the user can perform word-selection editing based on the word-segmentation information, for example searching, translating or sharing a keyword in the word-segmentation information.
Further, the at least one card is displayed on the partial region of the user interface in an overlaid form. It should be understood that the card is stacked on top of the partial region of the user interface; the card may cover the partial region of the user interface and be displayed on a different layer from the user interface. In addition, in this embodiment, when the at least one card is overlaid on the partial region of the user interface, the original content located in the partial region remains partially visible rather than being completely blocked, so that the user can still click on it.
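As a small illustration of the card overlay, the sketch below computes the overlay rectangle as a band near the bottom of the screen and models a card as a list of search results; the Card shape and the 0.4 fraction are assumptions for illustration only.
    import android.graphics.Rect
    // Illustrative card model: one card carries at least one search result derived
    // from the recognized text, as described for step S130.
    data class Card(val title: String, val results: List<String>)
    // Computes the overlay region as the lower portion of the user interface so that
    // the original content above it stays visible and clickable.
    fun cardRegion(screenWidth: Int, screenHeight: Int, fraction: Float = 0.4f): Rect {
        val top = (screenHeight * (1f - fraction)).toInt()
        return Rect(0, top, screenWidth, screenHeight)
    }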
In the text recognition method provided by this embodiment, a touch operation acting on a user interface is detected; when the touch operation meets a preset condition, an interface element in the user interface corresponding to the position of the touch operation is identified; when the identification fails, a control image corresponding to the position of the touch operation is intercepted and recognized; and at least one card is overlaid on a partial region of the user interface, the at least one card being used to display the information recognized from the control image. Image recognition technology thus improves the speed and accuracy of word capture and recognition, improving user experience.
Referring to Fig. 3, Fig. 3 shows a flow diagram of a second text recognition method provided by an embodiment of the present application. The flow shown in Fig. 3 is explained in detail below. The method may specifically include the following steps:
Step S210: detecting a touch operation acting on a user interface and, when the touch operation meets a preset condition, identifying an interface element in the user interface corresponding to the position of the touch operation.
Step S220: when the identification fails, obtaining the application corresponding to the user interface.
An application includes multiple user interfaces, so after the user interface is obtained, the application corresponding to it can be obtained based on the user interface. As one manner, the type of the application, the name of the application or the purpose of the application can be obtained through the user interface.
Step S230: judging whether the application is a key application; if not, performing step S270; if so, performing step S240.
Further, the mobile terminal presets and stores key applications, which serve as the basis for judging the application. A key application may be a native system application or a third-party application downloaded and installed by the user, and the key applications may be configured in advance by the mobile terminal system or configured manually by the user. Specifically, when the key applications are configured by the mobile terminal system, they may be configured according to the usage frequency of applications, for example treating applications whose usage frequency is higher than a certain frequency threshold as key applications and applications whose usage frequency is not higher than that threshold as non-key applications. Alternatively, when the key applications are configured by the mobile terminal system, they may be configured according to the type of application, for example treating text-display or instant-messaging applications such as WeChat, QQ, Weibo, news applications and browsers as non-key applications, and treating video-display applications as key applications. When the key applications are configured manually by the user, one or more applications may be selected as key applications according to the user's preferences or needs.
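A sketch of one way this key-application check could be configured is given below; the AppInfo descriptor, the frequency threshold and the example type set are assumptions introduced here for illustration, not fixed by the disclosure.
    // Assumed application descriptor; the disclosure only requires that the type and
    // usage frequency of the application be obtainable.
    data class AppInfo(val packageName: String, val type: String, val usesPerWeek: Int)
    // An application counts as a key application when its usage frequency exceeds a
    // threshold, when its type is in a system-configured set, or when the user has
    // manually selected it. All three configuration routes appear in the embodiment;
    // the concrete threshold and type set here are illustrative.
    fun isKeyApplication(
        app: AppInfo,
        frequencyThreshold: Int = 20,
        keyTypes: Set<String> = setOf("video"),
        userSelected: Set<String> = emptySet()
    ): Boolean =
        app.usesPerWeek > frequencyThreshold ||
            app.type in keyTypes ||
            app.packageName in userSelected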
Step S240: when the application is a key application, intercepting the control image corresponding to the position of the touch operation and recognizing the control image.
When the application is judged to be a key application, the text corresponding to the position of the touch operation is captured as an image to obtain the control image, and the control image is recognized.
Referring to Fig. 4, Fig. 4 shows a flow diagram of step S240 of the second text recognition method provided by an embodiment of the present application. The flow shown in Fig. 4 is explained in detail below. The method may specifically include the following steps:
Step S241: when the application is a key application, obtaining the control type corresponding to the position of the touch operation.
As one manner, when the application is judged to be a key application, the control type of the control corresponding to the position of the current touch operation is detected and obtained. It should be understood that the control type may include at least a text type, a picture type, a video type and so on.
Step S242: judging whether the control type meets a preset type.
Further, the mobile terminal presets and stores a preset type, which serves as the basis for judging the control type. As one manner, the preset type may be a text view; therefore, after the control type is detected and obtained, the control type is compared with the text view type to judge whether the control type meets the text view type.
Step S243: when the control type meets the preset type, intercepting the control image corresponding to the position of the touch operation and recognizing the control image.
When the control type is judged to meet the preset type, the control image corresponding to the position of the touch operation is intercepted and automatic OCR recognition is performed on the control image.
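Continuing the earlier sketch, the control-type check of steps S241 to S243 could look as follows, taking the preset type to be an Android TextView as the embodiment suggests; treating every other type as a mismatch, and reusing the OcrEngine and recognizeControlText names introduced above, are assumptions of this sketch.
    import android.view.View
    import android.widget.TextView
    // Step S242/S243 sketch: the control image is intercepted and recognized only when
    // the control under the touch position is of the preset type (here, a text view).
    fun recognizeIfTextControl(control: View, ocr: OcrEngine): String? {
        if (control !is TextView) return null   // control type does not meet the preset type
        return recognizeControlText(control, ocr)
    }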
Step S250: judging whether effective information can be recognized from the control image, wherein the confidence probability of the effective information is higher than a preset value; if so, performing step S260; if not, performing step S270.
Further, judging whether effective information — information whose confidence probability is higher than the preset value — can be recognized from the control image specifically means examining the information obtained after recognizing the control image. As one manner, it is first detected whether the information contains text information; when the information does not contain text information, the information is regarded as empty and the recognition has failed. When the information contains text information, the confidence probability of the text information is obtained and judged. As one manner, the mobile terminal pre-stores a confidence probability algorithm and a preset value; the confidence probability of the information can be calculated by the algorithm and then compared with the preset value to judge whether the confidence probability is higher than the preset value. When the confidence probability is higher than the preset value, the control image is considered to have yielded effective information.
As one manner, if garbled text appears when the control image is parsed and recognized, the parsing result needs to be preliminarily screened and the garbled characters filtered out. If no effective information remains after filtering, a selection control is displayed in the user interface; if there is effective information, at least one card is overlaid on the partial region of the user interface.
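A sketch of this screening step is shown below: garbled characters are stripped from the parsed text and the result is accepted only when its confidence exceeds the preset value. The RecognizedText shape, the character filter and the 0.7 threshold are assumptions for illustration.
    // Assumed OCR output: recognized text plus a confidence probability.
    data class RecognizedText(val text: String, val confidence: Float)
    // Keeps letters (including CJK characters), digits and whitespace; everything else
    // is treated as garbled output and stripped before the confidence check.
    private fun stripGarbled(text: String): String =
        text.filter { it.isLetterOrDigit() || it.isWhitespace() }
    // Returns the cleaned text when it qualifies as "effective information"
    // (non-empty after filtering and confidence above the preset value), else null.
    fun effectiveInformationOrNull(result: RecognizedText, minConfidence: Float = 0.7f): String? {
        val cleaned = stripGarbled(result.text).trim()
        if (cleaned.isEmpty()) return null          // empty result: recognition failed
        if (result.confidence < minConfidence) return null
        return cleaned
    }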
Step S260: overlaying at least one card on the partial region of the user interface.
Further, if effective information can be recognized from the control image, the result is displayed, that is, at least one card is overlaid on the partial region of the user interface. As one manner, a selection control is displayed below the card, on the same interface as the card, to give the user an entry point for manual frame selection when the user is not satisfied with the effective information.
Step S270: displaying a selection control in the user interface, wherein the selection control is used to trigger manual frame selection or cancel the recognition.
Referring to Fig. 5, further, if the application is not a key application, or effective information cannot be recognized from the control image, the selection control is displayed in the user interface, the selection control being used to trigger manual frame selection or cancel the recognition.
Referring to Fig. 6, Fig. 6 shows a flow diagram of step S270 of the second text recognition method provided by an embodiment of the present application. The flow shown in Fig. 6 is explained in detail below. The method may specifically include the following steps:
Step S271: obtaining the duration of recognizing the control image.
Step S272: judging whether the duration exceeds a preset duration.
As one manner, when the system recognizes the control image, the duration of the recognition is obtained and compared with a preset duration, the preset duration being preset and stored in the mobile terminal as the basis for judging the duration; for example, the preset duration may be 8 s, 10 s and so on. In this embodiment, when the duration exceeds the preset duration, the recognition of the control image is considered to have taken too long and failed, and a selection control is displayed in the user interface so that the user can choose whether to continue with manual frame selection.
Step S273: when the duration exceeds the preset duration, displaying the selection control in the user interface.
As one manner, if the user chooses manual frame selection, then after the user frame-selects the corresponding region, a QR-code recognition control, a commodity recognition control and a text recognition control are displayed below the frame-selection control. According to the QR-code recognition control triggered by the user, QR-code recognition can be performed on the intercepted image; according to the commodity recognition control triggered by the user, commodity recognition can be performed on the intercepted image; and according to the text recognition control triggered by the user, text recognition is performed on the intercepted image. Further, during recognition, a circular progress prompt is displayed in the user interface, and the corresponding card pops up after the recognition ends.
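The timeout fallback of steps S271 to S273 could be expressed as in the following sketch, assuming an 8-second preset duration and a showSelectionControl callback supplied by the UI layer; kotlinx.coroutines is used here only as one convenient way to express the timeout.
    import kotlinx.coroutines.TimeoutCancellationException
    import kotlinx.coroutines.withTimeout
    // Runs the recognition step under a preset duration; when it runs too long,
    // recognition is treated as failed and the selection control (manual frame
    // selection / cancel) is shown instead.
    suspend fun recognizeWithTimeout(
        recognize: suspend () -> String,
        showSelectionControl: () -> Unit,
        presetMillis: Long = 8_000L   // illustrative preset duration (e.g. 8 s or 10 s)
    ): String? = try {
        withTimeout(presetMillis) { recognize() }
    } catch (e: TimeoutCancellationException) {
        showSelectionControl()        // recognition took too long: fall back to manual frame selection or cancel
        null
    }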
In the second text recognition method provided by this embodiment of the present application, a touch operation acting on a user interface is detected; when the touch operation meets a preset condition, an interface element in the user interface corresponding to the position of the touch operation is identified; when the identification fails, the application corresponding to the user interface is obtained and it is judged whether the application is a key application. When the application is not a key application, a selection control is displayed in the user interface, the selection control being used to trigger manual frame selection or cancel the recognition. When the application is a key application, the control image corresponding to the position of the touch operation is intercepted and recognized, and it is judged whether effective information — information whose confidence probability is higher than a preset value — can be recognized from the control image. When effective information can be recognized from the control image, at least one card is overlaid on the partial region of the user interface; when effective information cannot be recognized, the selection control is displayed in the user interface. By recognizing through both automatic frame selection and manual frame selection, the speed and accuracy of word capture and recognition are improved, thereby improving user experience.
Referring to Fig. 7, Fig. 7 shows a flow diagram of a third text recognition method provided by an embodiment of the present application. The flow shown in Fig. 7 is explained in detail below. The method may specifically include the following steps:
Step S310: detecting a touch operation acting on a user interface and, when the touch operation meets a preset condition, identifying an interface element in the user interface corresponding to the position of the touch operation.
Step S320: when the identification fails, obtaining the touch center position corresponding to the touch operation.
In this embodiment, when the identification fails, the touch center position corresponding to the touch operation is obtained. As one manner, the touch area of the touch operation is obtained and a calculation is performed on the touch area to obtain the center position of the touch area, the center position being the touch center position corresponding to the touch operation.
Step S330: judging whether the touch center position is on an effective control of the user interface, wherein the effective control includes at least one interface element.
Further, the user interface includes multiple controls. As one manner, the controls can be divided according to whether they contain interface elements: when a control contains at least one interface element, the control can be regarded as an effective control; when a control contains no interface element, the control can be regarded as a blank control or an invalid control. In this embodiment, the coordinate position of each effective control is detected, and whether the touch center position is on an effective control is judged from the coordinate position of the touch center position and the coordinate position of the effective control.
Step S340: when the touch center position is on an effective control, intercepting the effective control image and recognizing the effective control image.
As one manner, when the touch center position is on an effective control, the effective control image is intercepted and image-to-text recognition is performed on it. In this way, the text on the effective control can be intercepted and recognized.
Step S350: when the touch center position is not on an effective control, intercepting the user interface image and recognizing the user interface image.
Alternatively, when the touch center position is not on an effective control, the user interface image is intercepted and image-to-text recognition is performed on it. In this way, the full screen of the user interface can be intercepted and recognized.
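A sketch tying steps S320 to S350 together is given below: the center of the touch area is computed, hit-tested against the bounds of the effective controls, and either the hit control or the whole interface is captured. The captureView helper and the assumption that the control bounds are supplied in the same coordinate space as the touch event are illustrative choices, not part of the disclosure.
    import android.graphics.Bitmap
    import android.graphics.Canvas
    import android.graphics.Rect
    import android.view.MotionEvent
    import android.view.View
    // Center of the touch area: the average of all pointer positions of the event.
    fun touchCenter(event: MotionEvent): Pair<Float, Float> {
        var x = 0f; var y = 0f
        for (i in 0 until event.pointerCount) { x += event.getX(i); y += event.getY(i) }
        return x / event.pointerCount to y / event.pointerCount
    }
    // Renders any view (an effective control, or the whole user interface) into a bitmap.
    fun captureView(view: View): Bitmap {
        val bmp = Bitmap.createBitmap(view.width.coerceAtLeast(1), view.height.coerceAtLeast(1), Bitmap.Config.ARGB_8888)
        view.draw(Canvas(bmp))
        return bmp
    }
    // Steps S330-S350: if the touch center falls on an effective control, intercept that
    // control's image; otherwise intercept the whole user interface image.
    // Assumed: the bounds are given in the same coordinate space as the touch event.
    fun imageToRecognize(event: MotionEvent, root: View, effectiveControls: Map<View, Rect>): Bitmap {
        val (cx, cy) = touchCenter(event)
        val hit = effectiveControls.entries
            .firstOrNull { (_, bounds) -> bounds.contains(cx.toInt(), cy.toInt()) }
            ?.key
        return captureView(hit ?: root)   // control image, or the full user interface image
    }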
Step S360: overlaying at least one card on the partial region of the user interface, the at least one card being used to display the information recognized from the effective control image or the user interface image.
In the third text recognition method provided by this embodiment of the present application, a touch operation acting on a user interface is detected; when the touch operation meets a preset condition, an interface element in the user interface corresponding to the position of the touch operation is identified; when the identification fails, the touch center position corresponding to the touch operation is obtained and it is judged whether the touch center position is on an effective control of the user interface, the effective control including at least one interface element. When the touch center position is on an effective control, the effective control image is intercepted and recognized; when the touch center position is not on an effective control, the user interface image is intercepted and recognized. By performing image interception and recognition according to the touch center position, the speed of word capture and recognition is improved, thereby improving user experience.
Referring to Fig. 8, Fig. 8 shows a block diagram of a text recognition device 200 provided by an embodiment of the present application. The block diagram shown in Fig. 8 is explained below. The text recognition device 200 includes an interface element identification module 210, an image interception module 220 and a card display module 230, wherein:
The interface element identification module 210 is configured to detect a touch operation acting on a user interface and, when the touch operation meets a preset condition, identify an interface element in the user interface corresponding to the position of the touch operation.
The image interception module 220 is configured to, when the identification fails, intercept a control image corresponding to the position of the touch operation and recognize the control image. Referring to Fig. 9, Fig. 9 shows another block diagram of the text recognition device 200 provided by an embodiment of the present application. Further, the image interception module 220 includes an application acquisition sub-module 221, an application judging sub-module 222, a control image recognition sub-module 223, a selection control display sub-module 224, a touch center position acquisition sub-module 225, a touch center position judging sub-module 226, an effective control image recognition sub-module 227 and a user interface image recognition sub-module 228, wherein:
The application acquisition sub-module 221 is configured to, when the identification fails, obtain the application corresponding to the user interface.
The application judging sub-module 222 is configured to judge whether the application is a key application.
The control image recognition sub-module 223 is configured to, when the application is a key application, intercept the control image corresponding to the position of the touch operation and recognize the control image. Further, the control image recognition sub-module 223 includes a control type acquisition unit, a control type judging unit and a control image recognition unit, wherein:
The control type acquisition unit is configured to, when the application is a key application, obtain the control type corresponding to the position of the touch operation.
The control type judging unit is configured to judge whether the control type meets a preset type.
The control image recognition unit is configured to, when the control type meets the preset type, intercept the control image corresponding to the position of the touch operation and recognize the control image.
The selection control display sub-module 224 is configured to, when the application is not a key application, display a selection control in the user interface, the selection control being used to trigger manual frame selection or cancel the recognition.
The touch center position acquisition sub-module 225 is configured to, when the identification fails, obtain the touch center position corresponding to the touch operation.
The touch center position judging sub-module 226 is configured to judge whether the touch center position is on an effective control of the user interface, the effective control including at least one interface element.
The effective control image recognition sub-module 227 is configured to, when the touch center position is on an effective control, intercept the effective control image and recognize the effective control image.
The user interface image recognition sub-module 228 is configured to, when the touch center position is not on an effective control, intercept the user interface image and recognize the user interface image.
The card display module 230 is configured to overlay at least one card on the partial region of the user interface, the at least one card being used to display the information recognized from the control image. Further, the card display module 230 includes an effective information judging sub-module 231, a card display sub-module 232 and a selection control display sub-module 233, wherein:
The effective information judging sub-module 231 is configured to judge whether effective information can be recognized from the control image, the confidence probability of the effective information being higher than a preset value.
The card display sub-module 232 is configured to, when effective information can be recognized from the control image, overlay at least one card on the partial region of the user interface.
The selection control display sub-module 233 is configured to, when effective information cannot be recognized from the control image, display the selection control in the user interface. Further, the selection control display sub-module 233 includes a duration acquisition unit, a duration judging unit and a selection control display unit, wherein:
The duration acquisition unit is configured to obtain the duration of recognizing the control image.
The duration judging unit is configured to judge whether the duration exceeds a preset duration.
The selection control display unit is configured to, when the duration exceeds the preset duration, display the selection control in the user interface.
In conclusion a kind of text recognition method provided by the embodiments of the present application, device, mobile terminal and storage are situated between Matter, detection acts on the touch control operation of user interface, when the touch control operation meets preset condition, for the position of the touch control operation The interface element set in corresponding user interface is identified, when unidentified success, the position pair of interception and the touch control operation The control image answered simultaneously identifies the control image, at least one card of Overlapping display on the partial region of user interface Piece, at least one card is for showing the information identified by the control image, and by image recognition technology, promotion takes word to know Other accuracy, to promote user experience.
It should be noted that all the embodiments in this specification are described in a progressive manner, each embodiment weight Point explanation is all differences from other embodiments, and the same or similar parts between the embodiments can be referred to each other. For device class embodiment, since it is basically similar to the method embodiment, so being described relatively simple, related place ginseng See the part explanation of embodiment of the method.For arbitrary processing mode described in embodiment of the method, in device reality Apply in example can no longer be repeated in Installation practice by corresponding processing modules implement one by one.
Referring to Fig. 10, based on the above text recognition method and device, an embodiment of the present application further provides a mobile terminal 100, which includes an electronic body 10; the electronic body 10 includes a housing 12 and a main display 120 arranged on the housing 12. The housing 12 may be made of metal, such as steel or aluminum alloy. In this embodiment, the main display 120 generally includes a display panel 111 and may also include circuitry for responding to touch operations performed on the display panel 111. The display panel 111 may be a liquid crystal display (LCD) panel; in some embodiments, the display panel 111 is also a touch screen 109.
Referring to Fig. 11, in an actual application scenario, the mobile terminal 100 may be used as a smartphone. In this case the electronic body 10 generally also includes one or more processors 102 (only one is shown in the figure), a memory 104, an RF (Radio Frequency) module 106, an audio circuit 110, a sensor 114, an input module 118 and a power module 122. Those skilled in the art will understand that the structure shown in Fig. 11 is only illustrative and does not limit the structure of the electronic body 10. For example, the electronic body 10 may also include more or fewer components than shown in Fig. 11, or have a configuration different from that shown in Fig. 11.
Those skilled in the art will understand that, with respect to the processor 102, all the other components are peripherals, and the processor 102 is coupled to these peripherals through multiple peripheral interfaces 124. The peripheral interfaces 124 may be implemented based on the following standards: Universal Asynchronous Receiver/Transmitter (UART), General Purpose Input/Output (GPIO), Serial Peripheral Interface (SPI) and Inter-Integrated Circuit (I2C), but are not limited to these standards. In some examples, a peripheral interface 124 may include only a bus; in other examples, the peripheral interface 124 may also include other elements, such as one or more controllers, for example a display controller for connecting the display panel 111 or a storage controller for connecting the memory. These controllers may also be separated from the peripheral interface 124 and integrated in the processor 102 or in the corresponding peripheral.
The memory 104 may be used to store software programs and modules, and the processor 102 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 104. The memory 104 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, and such remote memory may be connected to the electronic body 10 or the main display 120 through a network. Examples of such a network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network and combinations thereof.
The RF module 106 is used to receive and transmit electromagnetic waves and to convert between electromagnetic waves and electrical signals, thereby communicating with a communication network or other devices. The RF module 106 may include various existing circuit elements for performing these functions, such as an antenna, an RF transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card and memory. The RF module 106 can communicate with various networks such as the Internet, an intranet or a wireless network, or communicate with other devices through a wireless network. The wireless network may include a cellular telephone network, a wireless local area network or a metropolitan area network. The wireless network may use various communication standards, protocols and technologies, including but not limited to the Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (WiFi) (such as the IEEE standards IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (Wi-Max), other protocols for mail, instant messaging and short messages, any other suitable communication protocol, and even protocols that have not yet been developed.
The audio circuit 110, the earpiece 101, the audio jack 103 and the microphone 105 together provide an audio interface between the user and the electronic body 10 or the main display 120. Specifically, the audio circuit 110 receives audio data from the processor 102, converts the audio data into an electrical signal and transmits the electrical signal to the earpiece 101. The earpiece 101 converts the electrical signal into sound waves audible to the human ear. The audio circuit 110 also receives electrical signals from the microphone 105, converts the electrical signals into audio data and transmits the audio data to the processor 102 for further processing. The audio data may be obtained from the memory 104 or through the RF module 106; in addition, the audio data may also be stored in the memory 104 or sent through the RF module 106.
The sensor 114 is arranged in the electronic body 10 or in the main display 120. Examples of the sensor 114 include, but are not limited to, a light sensor, a motion sensor, a pressure sensor, a gravity acceleration sensor and other sensors.
Specifically, the sensor 114 may include a light sensor 114F and a pressure sensor 114G. The pressure sensor 114G can detect the pressure generated by pressing on the mobile terminal 100; that is, the pressure sensor 114G detects the pressure generated by contact or pressing between the user and the mobile terminal, for example the pressure generated by contact or pressing between the user's ear and the mobile terminal. Therefore, the pressure sensor 114G can be used to determine whether contact or pressing occurs between the user and the mobile terminal 100, as well as the magnitude of the pressure.
Referring to Fig. 11, specifically in the embodiment shown in Fig. 11, the light sensor 114F and the pressure sensor 114G are arranged adjacent to the display panel 111. When the light sensor 114F detects an object approaching the main display 120, for example when the electronic body 10 is moved close to the ear, the processor 102 turns off the display output.
As a motion sensor, the gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when the terminal is at rest, and can be used for applications that identify the posture of the mobile terminal 100 (such as landscape/portrait switching, related games and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer and tap detection). In addition, the electronic body 10 may also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer and a thermometer, which are not described in detail here.
In this embodiment, the input module 118 may include the touch screen 109 arranged on the main display 120. The touch screen 109 collects the user's touch operations on or near it (for example operations performed by the user on or near the touch screen 109 with a finger, a stylus or any other suitable object or accessory) and drives the corresponding connected device according to a preset program. Optionally, the touch screen 109 may include a touch detection device and a touch controller: the touch detection device detects the user's touch orientation, detects the signal produced by the touch operation and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts the touch information into contact coordinates, sends them to the processor 102, and can receive and execute commands sent by the processor 102. In addition, the touch detection function of the touch screen 109 may be realized using various technologies such as resistive, capacitive, infrared and surface acoustic wave types. Besides the touch screen 109, in other variant embodiments the input module 118 may also include other input devices, such as keys 107. The keys 107 may include, for example, character keys for inputting characters and control keys for triggering control functions. Examples of the control keys include a "return to home screen" key, a power on/off key and so on.
The main display 120 is used to display information input by the user, information provided to the user and the various graphical user interfaces of the electronic body 10; these graphical user interfaces may be composed of graphics, text, icons, numbers, video and any combination thereof. In one example, the touch screen 109 may be arranged on the display panel 111 so as to form an integral whole with the display panel 111.
The power module 122 is used to provide power supply to the processor 102 and other each components.Specifically, The power module 122 may include power-supply management system, one or more power supply (such as battery or alternating current), charging circuit, Power-fail detection circuit, inverter, indicator of the power supply status and it is other arbitrarily with the electronic body portion 10 or the master The generation, management of electric power and the relevant component of distribution in display screen 120.
The mobile terminal 100 further includes locator 119, and the locator 119 is for determining 100 institute of mobile terminal The physical location at place.In the present embodiment, the locator 119 realizes the positioning of the mobile terminal 100 using positioning service, The positioning service, it should be understood that the location information of the mobile terminal 100 is obtained by specific location technology (as passed through Latitude coordinate), it is marked on the electronic map by the technology or service of the position of positioning object.
It should be understood that the mobile terminal 100 described above is not limited to a smartphone; it refers to a computer device that can be used while moving. Specifically, the mobile terminal 100 refers to a mobile computer device equipped with an intelligent operating system, including but not limited to a smartphone, a smartwatch, a tablet computer, and the like.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", "some examples", or the like means that a particular feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic uses of the above terms do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict one another, those skilled in the art may combine the features of the different embodiments or examples described in this specification.
In addition, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that comprises one or more executable instructions for implementing a specific logical function or step of the process. The scope of the preferred embodiments of the present application also includes other implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in the flowcharts, or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any apparatus that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection portion with one or more wirings (mobile terminal), a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). Furthermore, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be appreciated that each part of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware that is stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented with any one or a combination of the following technologies well known in the art: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
Those skilled in the art will understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing relevant hardware through a program. The program can be stored in a computer-readable storage medium and, when executed, includes one or a combination of the steps of the method embodiments. In addition, the functional units in the embodiments of the present application may be integrated in one processing module, or each unit may exist physically alone, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present application, and those skilled in the art may change, modify, replace, and vary the above embodiments within the scope of the present application.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A text recognition method, characterized in that the method comprises:
detecting a touch operation acting on a user interface, and when the touch operation meets a preset condition, identifying an interface element in the user interface corresponding to the position of the touch operation;
when the identification is unsuccessful, capturing a control image corresponding to the position of the touch operation and recognizing the control image; and
displaying at least one card superimposed on a partial region of the user interface, the at least one card being used to display information recognized from the control image.
2. The method according to claim 1, characterized in that the step of, when the identification is unsuccessful, capturing the control image corresponding to the position of the touch operation and recognizing the control image further comprises:
when the identification is unsuccessful, obtaining the application program corresponding to the user interface;
judging whether the application program is a focus application program;
when the application program is a focus application program, capturing the control image corresponding to the position of the touch operation and recognizing the control image; and
when the application program is not a focus application program, displaying a selection control in the user interface, wherein the selection control is used to trigger manual box selection or to cancel the recognition.
3. The method according to claim 2, characterized in that the step of, when the application program is a focus application program, capturing the control image corresponding to the position of the touch operation and recognizing the control image comprises:
when the application program is a focus application program, obtaining a control type corresponding to the position of the touch operation;
judging whether the control type meets a preset type; and
when the control type meets the preset type, capturing the control image corresponding to the position of the touch operation and recognizing the control image.
4. The method according to claim 3, characterized in that the step of displaying at least one card superimposed on the partial region of the user interface comprises:
judging whether effective information can be recognized from the control image, wherein the confidence probability of the effective information is higher than a preset value;
when the effective information can be recognized from the control image, displaying the at least one card superimposed on the partial region of the user interface; and
when the effective information cannot be recognized from the control image, displaying the selection control in the user interface.
5. The method according to claim 4, characterized in that the step of, when the effective information cannot be recognized from the control image, displaying the selection control in the user interface comprises:
obtaining the duration for which the control image has been recognized;
judging whether the duration exceeds a preset duration; and
when the duration exceeds the preset duration, displaying the selection control in the user interface.
6. The method according to claim 4, characterized in that the step of, when the effective information cannot be recognized from the control image, displaying the selection control in the user interface further comprises:
when the effective information cannot be recognized from the control image, displaying, in the user interface, the selection control and prompt information indicating that the recognition is unsuccessful.
7. The method according to any one of claims 1 to 6, characterized in that the step of, when the identification is unsuccessful, capturing the control image corresponding to the position of the touch operation and recognizing the control image further comprises:
when the identification is unsuccessful, obtaining a touch center corresponding to the touch operation;
judging whether the touch center is on an effective control of the user interface, wherein the effective control includes at least one interface element;
when the touch center is on the effective control, capturing an image of the effective control and recognizing the effective control image; and
when the touch center is not on the effective control, capturing an image of the user interface and recognizing the user interface image.
8. A text recognition device, characterized in that the device comprises:
an interface element identification module, configured to detect a touch operation acting on a user interface and, when the touch operation meets a preset condition, identify an interface element in the user interface corresponding to the position of the touch operation;
an image capture module, configured to, when the identification is unsuccessful, capture a control image corresponding to the position of the touch operation and recognize the control image; and
a card display module, configured to display at least one card superimposed on a partial region of the user interface, the at least one card being used to display information recognized from the control image.
9. A mobile terminal, characterized by comprising a touch screen, a memory, and a processor, the touch screen and the memory being coupled to the processor, the memory storing instructions which, when executed by the processor, cause the processor to execute the method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing program code executable by a processor, characterized in that the program code causes the processor to execute the method according to any one of claims 1 to 7.
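To make the recited flow easier to follow, here is a minimal, non-authoritative Kotlin sketch of the method of claims 1, 4, and 7: try to read the interface element under the touch point directly; if that fails, capture either the control under the touch center or the whole interface as an image, run it through a recognizer, and show a card only when the result clears a confidence threshold. Every type, function, and threshold below is a hypothetical stand-in, not the implementation claimed above.

```kotlin
// --- Hypothetical stand-ins for the entities named in the claims ---
data class TouchPoint(val x: Int, val y: Int)
data class Control(val id: String)                               // an "effective control"
data class Recognition(val text: String, val confidence: Double) // result of image recognition

interface UserInterface {
    fun elementTextAt(p: TouchPoint): String?   // direct interface-element lookup (claim 1)
    fun controlAt(p: TouchPoint): Control?      // effective control under the touch center (claim 7)
    fun captureControl(c: Control): ByteArray   // control image
    fun captureScreen(): ByteArray              // whole user-interface image
    fun showCard(text: String)                  // card superimposed on a partial region
    fun showSelectionControl()                  // manual box selection / cancel
}

interface Recognizer { fun recognize(image: ByteArray): Recognition }

const val CONFIDENCE_THRESHOLD = 0.8            // assumed preset value for claim 4

fun recognizeText(ui: UserInterface, ocr: Recognizer, touch: TouchPoint) {
    // Claim 1: first try to identify the interface element at the touch position directly.
    ui.elementTextAt(touch)?.let { ui.showCard(it); return }

    // Claim 7: otherwise capture the control under the touch center,
    // or the whole user interface if no effective control is there.
    val image = ui.controlAt(touch)?.let(ui::captureControl) ?: ui.captureScreen()

    // Claim 4: show a card only when the recognized text clears the confidence threshold;
    // otherwise fall back to the selection control.
    val result = ocr.recognize(image)
    if (result.confidence >= CONFIDENCE_THRESHOLD) ui.showCard(result.text)
    else ui.showSelectionControl()
}
```

The actual method may order its checks differently (for example, applying the focus-application and control-type checks of claims 2 and 3 before any capture, or the timeout of claim 5); the sketch only fixes the overall shape of the flow.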
CN201810586716.2A 2018-06-07 2018-06-07 text recognition method, device, mobile terminal and storage medium Pending CN109002759A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810586716.2A CN109002759A (en) 2018-06-07 2018-06-07 text recognition method, device, mobile terminal and storage medium
PCT/CN2019/084377 WO2019233212A1 (en) 2018-06-07 2019-04-25 Text identification method and device, mobile terminal, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810586716.2A CN109002759A (en) 2018-06-07 2018-06-07 text recognition method, device, mobile terminal and storage medium

Publications (1)

Publication Number Publication Date
CN109002759A true CN109002759A (en) 2018-12-14

Family

ID=64600402

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810586716.2A Pending CN109002759A (en) 2018-06-07 2018-06-07 text recognition method, device, mobile terminal and storage medium

Country Status (2)

Country Link
CN (1) CN109002759A (en)
WO (1) WO2019233212A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112488110A (en) * 2020-12-18 2021-03-12 深圳简捷电子科技有限公司 Method and system for accurately capturing local information in picture
CN114967994A (en) * 2021-02-26 2022-08-30 Oppo广东移动通信有限公司 Text processing method and device and electronic equipment
CN113900621A (en) * 2021-11-09 2022-01-07 杭州逗酷软件科技有限公司 Operation instruction processing method, control method, device and electronic equipment
CN115035520B (en) * 2021-11-22 2023-04-18 荣耀终端有限公司 Character recognition method for image, electronic device and storage medium
CN117148981A (en) * 2023-08-28 2023-12-01 广州文石信息科技有限公司 Text input method, device and equipment of ink screen and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109002759A (en) * 2018-06-07 2018-12-14 Oppo广东移动通信有限公司 text recognition method, device, mobile terminal and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9690474B2 (en) * 2007-12-21 2017-06-27 Nokia Technologies Oy User interface, device and method for providing an improved text input
CN103294665A (en) * 2012-02-22 2013-09-11 汉王科技股份有限公司 Text translation method for electronic reader and electronic reader
CN104778194A (en) * 2014-12-26 2015-07-15 北京奇虎科技有限公司 Search method and device based on touch operation
CN105045504A (en) * 2015-07-23 2015-11-11 小米科技有限责任公司 Image content extraction method and apparatus
CN106484266A (en) * 2016-10-18 2017-03-08 北京锤子数码科技有限公司 A kind of text handling method and device
CN106951893A (en) * 2017-05-08 2017-07-14 奇酷互联网络科技(深圳)有限公司 Text information acquisition methods, device and mobile terminal
CN107256109A (en) * 2017-05-27 2017-10-17 北京小米移动软件有限公司 Method for information display, device and terminal
CN107391017A (en) * 2017-07-20 2017-11-24 广东欧珀移动通信有限公司 Literal processing method, device, mobile terminal and storage medium

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019233212A1 (en) * 2018-06-07 2019-12-12 Oppo广东移动通信有限公司 Text identification method and device, mobile terminal, and storage medium
CN109803050A (en) * 2019-01-14 2019-05-24 南京点明软件科技有限公司 A kind of full frame guidance click method suitable for operation by blind mobile phone
CN109803050B (en) * 2019-01-14 2020-09-25 南京点明软件科技有限公司 Full screen guiding clicking method suitable for blind person to operate mobile phone
CN109857673A (en) * 2019-02-25 2019-06-07 广州云测信息技术有限公司 Control recognition methods and device
CN109857673B (en) * 2019-02-25 2022-02-15 北京云测信息技术有限公司 Control identification method and device
CN110069161A (en) * 2019-04-01 2019-07-30 努比亚技术有限公司 On-Screen Identification method, mobile terminal and computer readable storage medium
CN110069161B (en) * 2019-04-01 2023-03-10 努比亚技术有限公司 Screen recognition method, mobile terminal and computer-readable storage medium
CN110704153B (en) * 2019-10-10 2021-11-19 深圳前海微众银行股份有限公司 Interface logic analysis method, device and equipment and readable storage medium
CN110704153A (en) * 2019-10-10 2020-01-17 深圳前海微众银行股份有限公司 Interface logic analysis method, device and equipment and readable storage medium
CN111126301A (en) * 2019-12-26 2020-05-08 腾讯科技(深圳)有限公司 Image processing method and device, computer equipment and storage medium
CN111126301B (en) * 2019-12-26 2022-01-11 腾讯科技(深圳)有限公司 Image processing method and device, computer equipment and storage medium
WO2021159992A1 (en) * 2020-02-11 2021-08-19 Oppo广东移动通信有限公司 Picture text processing method and apparatus, electronic device, and storage medium
CN111598128A (en) * 2020-04-09 2020-08-28 腾讯科技(上海)有限公司 Control state identification and control method, device, equipment and medium of user interface
CN111598128B (en) * 2020-04-09 2023-05-12 腾讯科技(上海)有限公司 Control state identification and control method, device, equipment and medium of user interface
CN111242109B (en) * 2020-04-26 2021-02-02 北京金山数字娱乐科技有限公司 Method and device for manually fetching words
CN111242109A (en) * 2020-04-26 2020-06-05 北京金山数字娱乐科技有限公司 Method and device for manually fetching words
CN112068763A (en) * 2020-09-22 2020-12-11 深圳市欢太科技有限公司 Target information management method and device, electronic equipment and storage medium
CN114564141A (en) * 2020-11-27 2022-05-31 华为技术有限公司 Text extraction method and device
CN112596656A (en) * 2020-12-28 2021-04-02 北京小米移动软件有限公司 Content identification method, device and storage medium
CN112822539B (en) * 2020-12-30 2023-07-14 咪咕文化科技有限公司 Information display method, device, server and storage medium

Also Published As

Publication number Publication date
WO2019233212A1 (en) 2019-12-12

Similar Documents

Publication Publication Date Title
CN109002759A (en) text recognition method, device, mobile terminal and storage medium
CN108664190B (en) Page display method, device, mobile terminal and storage medium
CN108024019B (en) Message display method and device
CN106375179B (en) Method and device for displaying instant communication message
CN108388671B (en) Information sharing method and device, mobile terminal and computer readable medium
CN108932102B (en) Data processing method and device and mobile terminal
CN108228025A (en) message display method, device, mobile terminal and storage medium
CN108958634A (en) Express delivery information acquisition method, device, mobile terminal and storage medium
CN108762859A (en) Wallpaper displaying method, device, mobile terminal and storage medium
CN108833769A (en) Shoot display methods, device, mobile terminal and storage medium
CN109218982A (en) Sight spot information acquisition methods, device, mobile terminal and storage medium
CN108958576A (en) content identification method, device and mobile terminal
CN108898040A (en) A kind of recognition methods and mobile terminal
CN109032491A (en) Data processing method, device and mobile terminal
CN108881979A (en) Information processing method, device, mobile terminal and storage medium
CN109085982A (en) content identification method, device and mobile terminal
CN108664205A (en) Method for information display, device, mobile terminal and storage medium
CN108510266A (en) A kind of Digital Object Unique Identifier recognition methods and mobile terminal
CN108803972B (en) Information display method, device, mobile terminal and storage medium
CN108646967A (en) Display changeover method, device, mobile terminal and storage medium
CN109062648B (en) Information processing method and device, mobile terminal and storage medium
CN108494851B (en) Application program recommended method, device and server
CN108803961A (en) Data processing method, device and mobile terminal
CN110221882A (en) Display methods, device, mobile terminal and storage medium
CN108763243A (en) Application program recommends method, apparatus, mobile terminal and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination