CN107678650A - Image recognition method, mobile terminal and computer-readable storage medium - Google Patents
Image recognition method, mobile terminal and computer-readable storage medium
- Publication number
- CN107678650A CN107678650A CN201710903876.0A CN201710903876A CN107678650A CN 107678650 A CN107678650 A CN 107678650A CN 201710903876 A CN201710903876 A CN 201710903876A CN 107678650 A CN107678650 A CN 107678650A
- Authority
- CN
- China
- Prior art keywords
- target
- target photo
- content
- identification
- photo
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/166—Editing, e.g. inserting or deleting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/24—Character recognition characterised by the processing or recognition method
- G06V30/248—Character recognition characterised by the processing or recognition method involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Telephone Function (AREA)
Abstract
The invention discloses an image recognition method, a mobile terminal and a computer-readable storage medium. The method includes: receiving a trigger operation, input by a user, for recognizing a target picture; determining, based on the trigger operation, a target recognition mode for the target picture; recognizing the content in the target picture in the target recognition mode; and extracting the content in the target picture and providing an editable document of the content contained in the target picture. The image recognition method provided by the invention can recognize the content in a target picture according to the determined recognition mode and extract that content into an editable document. This makes it convenient for the user to extract text information from a picture and to edit that text information, which is a great convenience for the user.
Description
Technical field
The present invention relates to the field of picture processing technology, and in particular to an image recognition method, a mobile terminal and a computer-readable storage medium.
Background technology
With the continuous development of electronic information technology, mobile terminals (such as smart phones and tablet computers) are used ever more widely and have become indispensable in users' daily life and work. Mobile terminals today generally have a camera function. When users need to record some important information or share information with friends, they usually record the information by taking a photo or share it by sending a picture. However, because the information is in picture format, the user cannot edit it, which makes it inconvenient to process the information in the picture.
Summary of the invention
In view of this, the present invention proposes an image recognition method, a mobile terminal and a computer-readable storage medium to solve the above technical problem.
First, to achieve the above object, the present invention proposes an image recognition method applied to a mobile terminal, the method including:
receiving a trigger operation, input by a user, for recognizing a target picture;
determining, based on the trigger operation, a target recognition mode for the target picture;
recognizing the content in the target picture in the target recognition mode;
extracting the content in the target picture, and providing an editable document of the content contained in the target picture.
Optionally, before the content in the target picture is recognized in the target recognition mode, the method further includes:
judging whether the target picture meets a preset condition;
if the picture does not meet the preset condition, applying preset processing to the target picture.
Optionally, the target recognition mode includes text recognition or table recognition;
recognizing the content in the target picture in the target recognition mode includes:
if the target recognition mode is text recognition, recognizing the text content in the target picture;
if the target recognition mode is table recognition, recognizing the table and table content in the target picture.
Optionally, recognizing the content in the target picture in the target recognition mode includes:
if the text in the target picture is handwriting, recognizing the handwriting in the target picture according to a correspondence between handwritten stroke shapes and default strokes.
Optionally, the method further includes:
receiving a screenshot operation performed on a target picture;
obtaining the colour information of the target picture, and dividing the target picture into multiple regions according to colour contrast, wherein the edges within a single region are continuous;
determining a target region from the multiple regions, and taking a screenshot of the target region of the target picture.
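The patent does not give a concrete algorithm for dividing a picture into regions by colour contrast. The following Python sketch illustrates one plausible reading under stated assumptions: pixels whose per-channel contrast with a neighbour stays below a threshold are flood-filled into the same region, so each region's edge is continuous. The pixel representation and the threshold value are invented for illustration.

```python
# Hypothetical sketch of the claimed region division: split a picture into
# contiguous regions by colour contrast so a screenshot can target one region.
# `pixels` is a 2-D list of (R, G, B) tuples; the threshold is an assumption.

def color_distance(a, b):
    """Simple per-channel colour contrast between two RGB pixels."""
    return max(abs(x - y) for x, y in zip(a, b))

def divide_into_regions(pixels, threshold=30):
    """Flood-fill neighbouring pixels whose contrast stays below the
    threshold into one region; returns a grid of integer region labels."""
    h, w = len(pixels), len(pixels[0])
    labels = [[None] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] is not None:
                continue
            # start a new region and grow it over low-contrast neighbours
            stack = [(sy, sx)]
            labels[sy][sx] = next_label
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] is None
                            and color_distance(pixels[y][x], pixels[ny][nx]) < threshold):
                        labels[ny][nx] = next_label
                        stack.append((ny, nx))
            next_label += 1
    return labels
```

A target region could then be chosen as the labelled region under the user's screenshot gesture; a production implementation would work on decoded bitmap data rather than nested lists.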
In addition, to achieve the above object, the present invention also provides a mobile terminal. The mobile terminal includes a memory, at least one processor, and at least one program stored on the memory and executable by the at least one processor. When the at least one program is executed by the at least one processor, the following steps are implemented:
receiving a trigger operation, input by a user, for recognizing a target picture;
determining, based on the trigger operation, a target recognition mode for the target picture;
recognizing the content in the target picture in the target recognition mode;
extracting the content in the target picture, and providing an editable document of the content contained in the target picture.
Optionally, before the content in the target picture is recognized in the target recognition mode, the at least one processor is further configured to:
judge whether the target picture meets a preset condition;
if the picture does not meet the preset condition, apply preset processing to the target picture.
Optionally, the target recognition mode includes text recognition or table recognition;
recognizing the content in the target picture in the target recognition mode includes:
if the target recognition mode is text recognition, recognizing the text content in the target picture;
if the target recognition mode is table recognition, recognizing the table and table content in the target picture.
Optionally, recognizing the content in the target picture in the target recognition mode includes:
if the text in the target picture is handwriting, recognizing the handwriting in the target picture according to a correspondence between handwritten stroke shapes and default strokes.
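The handwriting claim describes only a stored correspondence between handwritten stroke shapes and default strokes; no data structures are given. A minimal Python sketch of that idea, with stroke names and both lookup tables invented purely for illustration, could normalise each handwritten stroke through the correspondence table and then match the normalised stroke sequence against known characters:

```python
# Illustrative sketch only: stroke-shape names and the two tiny tables below
# are assumptions, not part of the patent. A real system would hold a far
# larger correspondence table and character dictionary.

HANDWRITTEN_TO_DEFAULT = {
    "slanted-bar": "horizontal",   # a sloppy horizontal stroke
    "curved-bar": "horizontal",
    "tall-hook": "vertical",
    "straight-drop": "vertical",
}

STROKES_TO_CHARACTER = {
    ("horizontal",): "一",
    ("horizontal", "vertical"): "十",
}

def recognize_handwritten(strokes):
    """Map each handwritten stroke shape to its default stroke via the
    correspondence table, then look up the normalised sequence; returns
    the matched character, or None when no character matches."""
    normalized = tuple(HANDWRITTEN_TO_DEFAULT.get(s, s) for s in strokes)
    return STROKES_TO_CHARACTER.get(normalized)
```

The design point the claim hinges on is the normalisation step: variable handwritten shapes collapse onto a small canonical stroke set before any character lookup happens.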
Further, to achieve the above object, the present invention also provides a computer-readable storage medium. The computer-readable storage medium stores at least one program executable by a computer; when the at least one program is executed by the computer, the computer performs the steps of any of the methods described above.
Compared with the prior art, the image recognition method proposed by the invention receives a trigger operation, input by a user, for recognizing a target picture; determines, based on the trigger operation, a target recognition mode for the target picture; recognizes the content in the target picture in the target recognition mode; and extracts the content in the target picture and provides an editable document of the content contained in the target picture. The image recognition method provided by the invention can recognize the content in a target picture according to the determined recognition mode and extract that content into an editable document. This makes it convenient for the user to extract text information from a picture and to edit it, which is a great convenience for the user.
Brief description of the drawings
Fig. 1 is a hardware structure diagram of a mobile terminal implementing embodiments of the present invention;
Fig. 2 is an architecture diagram of a communications network system provided by an embodiment of the present invention;
Fig. 3 is a flow chart of an image recognition method provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of a picture provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of a user interface provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of another picture provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of another user interface provided by an embodiment of the present invention;
Fig. 8 is a flow chart of another image recognition method provided by an embodiment of the present invention;
Fig. 9 is a flow chart of yet another image recognition method provided by an embodiment of the present invention;
Fig. 10 is a functional block diagram of a mobile terminal provided by an embodiment of the present invention;
Fig. 11 is a functional block diagram of another mobile terminal provided by an embodiment of the present invention;
Fig. 12 is a functional block diagram of yet another mobile terminal provided by an embodiment of the present invention;
Fig. 13 is a functional block diagram of yet another mobile terminal provided by an embodiment of the present invention;
The implementation, functional features and advantages of the object of the present invention will be further described below with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described here merely illustrate the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "part" or "unit" used to denote elements are only intended to aid the explanation of the present invention and have no specific meaning in themselves. Therefore, "module", "part" and "unit" may be used interchangeably.
A terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDA), portable media players (PMP), navigation devices, wearable devices, smart bracelets and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
The following description takes a mobile terminal as an example; those skilled in the art will appreciate that, except for elements specifically intended for mobile purposes, the construction according to the embodiments of the present invention can also be applied to fixed-type terminals.
Referring to Fig. 1, which is a hardware structure diagram of a mobile terminal implementing embodiments of the present invention, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111 and other parts. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not limit the mobile terminal: a mobile terminal may include more or fewer parts than illustrated, combine certain parts, or arrange the parts differently.
The parts of the mobile terminal are introduced in detail below with reference to Fig. 1:
The radio frequency unit 101 may be used for receiving and sending signals during messaging or a call. Specifically, downlink information from a base station is received and passed to the processor 110 for processing, and uplink data is sent to the base station. Generally, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier and a duplexer. In addition, the radio frequency unit 101 can also communicate with the network and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communication), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution) and TDD-LTE (Time Division Duplexing-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages and access streaming media, providing the user with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it is understood that it is not an essential component of the mobile terminal and may be omitted as needed without changing the essence of the invention.
When the mobile terminal 100 is in a mode such as a call-signal reception mode, a call mode, a recording mode, a speech recognition mode or a broadcast reception mode, the audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call-signal reception sound or a message reception sound). The audio output unit 103 may include a loudspeaker, a buzzer and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in an operating mode such as a phone call mode, a recording mode or a speech recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101. The microphone 1042 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and sending audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as an optical sensor, a motion sensor and other sensors. Specifically, the optical sensors include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes) and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the posture of the mobile phone (such as portrait/landscape switching, related games and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The mobile phone can also be configured with other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, which will not be described here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects the user's touch operations on or near it (for example, operations by the user with a finger, a stylus or any other suitable object or accessory on or near the touch panel 1071) and drives the corresponding connection device according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation and the signal produced by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in multiple types such as resistive, capacitive, infrared and surface-acoustic-wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys or a switch key), a trackball, a mouse and a joystick, which are not limited here.
Further, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it passes the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are two independent parts implementing the input and output functions of the mobile terminal, in certain embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited here.
The interface unit 108 serves as an interface through which at least one external device can be connected with the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and so on. The interface unit 108 may be used to receive input (for example, data information or electric power) from an external device and transfer the received input to at least one element within the mobile terminal 100, or to transmit data between the mobile terminal 100 and an external device.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area can store the operating system, application programs needed for at least one function (such as a sound playback function or an image playback function) and so on, and the data storage area can store data created according to the use of the mobile phone (such as audio data or a phone book) and so on. In addition, the memory 109 may include high-speed random-access memory and may also include non-volatile memory, for example at least one magnetic disk memory, flash memory device or other solid-state storage part.
The processor 110 is the control centre of the mobile terminal. It connects the parts of the whole mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include at least one processing unit; preferably, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs and so on, and the modem processor mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the processor 110.
The mobile terminal 100 may also include a power supply 111 (such as a battery) that supplies power to each part. Preferably, the power supply 111 may be logically connected with the processor 110 through a power management system, so that functions such as managing charging, discharging and power consumption are realized through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 may also include a Bluetooth module and the like, which will not be described here.
To facilitate understanding of the embodiments of the present invention, the communications network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communications network system provided by an embodiment of the present invention. The communications network system is an LTE system of the universal mobile communications technology. The LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203 and the operator's IP services 204, which are communicatively connected in sequence.
Specifically, the UE 201 may be the above-described terminal 100, which will not be repeated here.
The E-UTRAN 202 includes an eNodeB 2021, other eNodeBs 2022 and so on. The eNodeB 2021 can be connected with the other eNodeBs 2022 by backhaul (for example an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036 and so on. The MME 2031 is a control node that handles signalling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 provides registers to manage functions such as the home location register (not shown) and stores user-specific information about service features, data rates and so on. All user data can be transmitted through the SGW 2034; the PGW 2035 can provide IP address allocation and other functions for the UE 201; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP services 204 may include the Internet, intranets, the IMS (IP Multimedia Subsystem) or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should understand that the present invention is not only applicable to the LTE system but is also applicable to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA and future new network systems, which are not limited here.
Based on the above hardware structure of the mobile terminal 100 and the communications network system, the embodiments of the method of the present invention are proposed.
Referring to Fig. 3, Fig. 3 is a step flow chart of an image recognition method provided by an embodiment of the present invention. The method is applied to a mobile terminal, and as shown in Fig. 3, the method includes:
Step 301, receiving a trigger operation, input by a user, for recognizing a target picture.
In this step, the method receives a trigger operation, input by the user, for recognizing a target picture. The target picture may be a picture shot by the user with the mobile terminal, or a picture sent by other users and received through a communication program on the mobile terminal. The trigger operation may be a click operation on the target picture or a long-press operation on the target picture, which is not specifically limited by the embodiments of the present invention.
Step 302, determining, based on the trigger operation, a target recognition mode for the target picture.
In this step, the method determines, based on the trigger operation, a target recognition mode for the target picture. In the embodiments of the present invention, multiple different trigger operations are pre-stored in the mobile terminal, where different trigger operations correspond to different recognition modes. For example, a first trigger operation corresponds to a first recognition mode, a second trigger operation corresponds to a second recognition mode, and a third trigger operation corresponds to a third recognition mode. Specifically, if the method receives the first trigger operation input by the user, the method determines the first recognition mode to be the target recognition mode; if the method receives the second trigger operation input by the user, the method determines the second recognition mode to be the target recognition mode; and if the method receives the third trigger operation input by the user, the method determines the third recognition mode to be the target recognition mode.
In some embodiments of the present invention, determining the target recognition mode for the target picture based on the trigger operation may also be done by providing, after the trigger operation input by the user is received, a user interface on the mobile terminal for the user to select the target recognition mode, and then determining the target recognition mode based on the user's selection operation in the user interface.
The recognition mode for the target picture may include a text recognition mode and a table recognition mode, and may also include other recognition modes. The user can select the target recognition mode according to the content of the target picture and his or her own needs. For example, if the target picture contains only text, the user can select the text recognition mode as the target recognition mode; if the target picture contains both text and a table, the user can select the text recognition mode to recognize only the text in the target picture, or select the table recognition mode to recognize the table in the target picture and the content in the table.
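Step 302 can be sketched as a dispatch table from pre-stored trigger operations to recognition modes, with a fallback to the on-screen chooser described above. The trigger names and mode names below are illustrative assumptions; the patent only states that different pre-stored trigger operations correspond to different recognition modes.

```python
# Hypothetical mapping for step 302; the specific gestures and the
# "ask_user" fallback are invented for illustration.

TRIGGER_TO_MODE = {
    "single_tap": "text_recognition",
    "long_press": "table_recognition",
    "double_tap": "ask_user",          # fall back to an on-screen chooser
}

def determine_target_mode(trigger, user_choice=None):
    """Return the recognition mode for a trigger operation; when the
    trigger maps to the chooser, the user's UI selection decides the
    mode. Unknown triggers yield None."""
    mode = TRIGGER_TO_MODE.get(trigger)
    if mode == "ask_user":
        return user_choice
    return mode
```

Keeping the mapping in a data table rather than branching code mirrors the patent's framing, since new trigger/mode pairs can then be "pre-stored" without changing the dispatch logic.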
Step 303, recognizing the content in the target picture in the target recognition mode.
In this step, the method recognizes the content in the target picture in the target recognition mode. Specifically, when the target recognition mode is the text recognition mode, the method recognizes the text content in the target picture; when the target recognition mode is the table recognition mode, the method recognizes the table in the target picture and the content in the table.
In some embodiments of the present invention, before the content in the target picture is recognized in the target recognition mode, the method may first judge whether the target picture meets a preset condition. If the preset condition is met, the content in the target picture is recognized directly in the target recognition mode; if the preset condition is not met, the method applies preset processing to the target picture and then recognizes the processed target picture in the target recognition mode. Judging whether the target picture meets the preset condition may be judging whether the target picture was shot head-on, and the preset processing may be one or more of rotating, stretching and scaling the target picture.
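The pre-check and preset processing in step 303 can be sketched as a small gate in front of the recogniser: measure how far the picture is from a head-on shot, and rotate/scale it back when it exceeds a tolerance. The `Picture` record, the skew threshold and the target width are invented assumptions, since the patent names the operations (rotate, stretch, scale) but gives no parameters.

```python
# Hedged sketch of the step-303 pre-check; thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Picture:
    skew_degrees: float   # estimated rotation away from a head-on shot
    width: int
    height: int

MAX_SKEW = 2.0            # degrees of skew tolerated as a "front shot"
TARGET_WIDTH = 1024       # width the recogniser is assumed to expect

def meets_precondition(pic):
    """The preset condition: the picture was shot (nearly) head-on."""
    return abs(pic.skew_degrees) <= MAX_SKEW

def default_processing(pic):
    """The preset processing: rotate back to head-on and scale to the
    target width, preserving the aspect ratio."""
    scale = TARGET_WIDTH / pic.width
    return Picture(0.0, TARGET_WIDTH, round(pic.height * scale))

def prepare_for_recognition(pic):
    """Return the picture unchanged when it meets the condition,
    otherwise the corrected picture."""
    return pic if meets_precondition(pic) else default_processing(pic)
```

In practice the skew estimate itself would come from the image content (for example, detected text baselines); here it is taken as given so the control flow of the pre-check stands alone.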
Step 304: extracting the content in the Target Photo, and providing an editable document including the content in the Target Photo.
In this step, the method extracts the content in the Target Photo and provides an editable document including the content in the Target Photo. Specifically, if the method performs text recognition on the Target Photo, the method extracts the text in the Target Photo and provides an editable document including that text, which makes it convenient for the user to organize or edit the text in the Target Photo. If the method performs table recognition on the Target Photo, the method extracts the table and the table content in the Target Photo and provides an editable document including the table and the table content, which makes it convenient for the user to edit or organize the table in the Target Photo.
For example, when a user is at school, in training, or attending a lecture, the user may need to take notes. However, because class time is limited and handwriting is slow, the user may record the course content by photographing the slides. After class, when the user needs to review or organize the notes, the user can perform a trigger operation on the captured picture to extract the content in the Target Photo and further edit that content. For example, as shown in Fig. 4, the user may photograph the slide content in class; when the method receives the user's trigger operation, the picture shown in Fig. 4 can be identified, the text content in the picture extracted, and an editable document including that text content provided, as shown in Fig. 5.
Some posters usually contain contact information, and for convenience and speed a user may record the contact information in a poster by taking a photo. However, when the user needs to dial the phone number in the poster or save it as a contact, the user has to view the photo of the poster, memorize the phone number in the photo, and then input the number to dial or store it. This is relatively laborious, and if the user forgets or misremembers the number, it must be input repeatedly, which is inconvenient. With the embodiment of the present invention, when the user needs to dial the phone number in the poster, the user can perform a trigger operation on the photo of the poster; the method can identify the phone number in the photo based on the trigger operation and provide an editable document including the number, so the user can directly copy the number from the poster, making it convenient to dial or store.
In addition, when a user newly joins a group, the user may need to record the phone numbers of multiple group members. A photo of the numbers alone cannot be further edited, which is inconvenient. For example, for the picture including a table shown in Fig. 6, the user can choose to perform table recognition on the picture; the method can perform table recognition on the picture based on the user's selection and provide an editable document including the table content, as shown in Fig. 7, making it convenient for the user to organize the table and edit the content in the table.
In some embodiments of the invention, the method may extract the position information of the text and/or the table in the Target Photo, and then provide, according to that position information, an editable document including the text and/or the table in the Target Photo. In this way, the relative positions of the content in the editable document can be kept consistent with those in the Target Photo, avoiding the inconvenience of disordered content. The method may also extract the size information of the text and/or the table in the Target Photo, and then provide, according to that size information, an editable document including the text and/or the table in the Target Photo. In this way, the fidelity of the text and/or the table in the Target Photo is improved, making it more convenient for the user to read or edit.
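One way to preserve the extracted layout is sketched below: each extracted item carries the position and size read from the Target Photo, and the editable document orders items by position. The field names and the list-of-dicts document format are assumptions for illustration, not the patent's data format.

```python
# Sketch: build an editable document that preserves the relative positions and
# sizes of items extracted from the Target Photo.

def build_editable_document(items):
    """items: list of dicts with 'text', 'x', 'y' (pixel position), 'size'.

    Ordering top-to-bottom, then left-to-right keeps the document consistent
    with the layout in the picture; keeping 'size' lets the document render
    each item at the same scale as in the photo.
    """
    ordered = sorted(items, key=lambda it: (it["y"], it["x"]))
    return [{"text": it["text"], "size": it["size"]} for it in ordered]
```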
In this embodiment, the image identification method receives a trigger operation, input by the user, for identifying a Target Photo; determines, based on the trigger operation, a target identification mode for the Target Photo; identifies the content in the Target Photo in the target identification mode; extracts the content in the Target Photo; and provides an editable document including the content in the Target Photo. The image identification method provided by the invention can identify the content in the Target Photo according to the determined identification mode and extract that content as an editable document, making it convenient for the user to extract the text information in a picture and to edit that text information, which provides great convenience to the user.
Referring to Fig. 8, Fig. 8 is a schematic flowchart of another image identification method provided by an embodiment of the present invention. As shown in Fig. 8, the method includes:
Step 801: receiving a trigger operation, input by a user, for identifying a Target Photo.
In this step, the method receives the trigger operation, input by the user, for identifying the Target Photo. The Target Photo may be a picture shot by the user through the mobile terminal, or a picture sent by another user and received through a communication program on the mobile terminal. The trigger operation may be a click operation on the Target Photo or a long-press operation on the Target Photo, which is not specifically limited in the embodiments of the present invention.
Step 802: determining, based on the trigger operation, a target identification mode for the Target Photo.
In this step, the method determines, based on the trigger operation, the target identification mode for the Target Photo. In the embodiments of the present invention, multiple different trigger operations are stored in advance in the mobile terminal, where different trigger operations correspond to different identification modes. For example, a first trigger operation corresponds to a first identification mode, a second trigger operation corresponds to a second identification mode, and a third trigger operation corresponds to a third identification mode. Specifically, if the method receives the first trigger operation input by the user, the method determines the first identification mode to be the target identification mode; if the method receives the second trigger operation input by the user, the method determines the second identification mode to be the target identification mode; if the method receives the third trigger operation input by the user, the method determines the third identification mode to be the target identification mode.
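The pre-stored correspondence between trigger operations and identification modes amounts to a lookup table. In the sketch below the concrete gesture names ("single_tap", etc.) are assumptions; the patent only states that different trigger operations correspond to different modes.

```python
# Sketch of step 802: a pre-stored mapping from trigger operations to
# identification modes, and a lookup that determines the target mode.

TRIGGER_TO_MODE = {
    "single_tap": "first_mode",
    "double_tap": "second_mode",
    "long_press": "third_mode",
}

def target_mode_for(trigger):
    """Look up the identification mode pre-stored for a trigger operation."""
    try:
        return TRIGGER_TO_MODE[trigger]
    except KeyError:
        raise ValueError(f"no identification mode stored for {trigger!r}")
```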
In some embodiments of the invention, determining the target identification mode for the Target Photo based on the trigger operation may also be: after receiving the trigger operation input by the user, providing a user interface on the mobile terminal for the user to select a target identification mode, and then determining the target identification mode based on the user's selection operation in the user interface.
The identification modes for the Target Photo may include a text recognition mode and a table recognition mode, and may also include other identification modes. The user may select the target identification mode according to the content of the Target Photo and his or her own needs. For example, if the Target Photo contains only text, the user may select the text recognition mode as the target identification mode; if the Target Photo contains both text and a table, the user may select the text recognition mode to identify only the text in the Target Photo, or select the table recognition mode to identify the table and the table content in the Target Photo.
Step 803: judging whether the Target Photo meets a preset condition.
In this step, the method first judges whether the Target Photo meets the preset condition. If the preset condition is met, the flow proceeds to step 805; otherwise, if the preset condition is not met, step 804 is performed. Judging whether the Target Photo meets the preset condition may be judging whether the Target Photo was shot from the front; specifically, if the Target Photo was not shot from the front, the method determines that the Target Photo does not meet the preset condition.
Step 804: performing preset processing on the Target Photo.
In this step, when the Target Photo does not meet the preset condition, the method performs the preset processing on the Target Photo. The preset processing is one or more of rotating, stretching, and scaling the Target Photo. In this way, deformation of the picture content caused by the shooting angle can be effectively eliminated, improving the accuracy of picture recognition.
Step 805: identifying the content in the Target Photo in the target identification mode.
In this step, the method identifies the content in the Target Photo in the target identification mode. Specifically, when the target identification mode is the text recognition mode, the method identifies the text content in the Target Photo; when the target identification mode is the table recognition mode, the method identifies the table and the table content in the Target Photo.
Step 806: extracting the content in the Target Photo, and providing an editable document including the content in the Target Photo.
In this step, the method extracts the content in the Target Photo and provides an editable document including the content in the Target Photo. Specifically, if the method performs text recognition on the Target Photo, the method extracts the text in the Target Photo and provides an editable document including that text, which makes it convenient for the user to organize or edit the text in the Target Photo. If the method performs table recognition on the Target Photo, the method extracts the table and the table content in the Target Photo and provides an editable document including the table and the table content, which makes it convenient for the user to edit or organize the table in the Target Photo.
For example, when a user is at school, in training, or attending a lecture, the user may need to take notes. However, because class time is limited and handwriting is slow, the user may record the course content by photographing the slides. After class, when the user needs to review or organize the notes, the user can perform a trigger operation on the captured picture to extract the content in the Target Photo and further edit that content. For example, as shown in Fig. 4, the user may photograph the slide content in class; when the method receives the user's trigger operation, the picture shown in Fig. 4 can be identified, the text content in the picture extracted, and an editable document including that text content provided, as shown in Fig. 5.
Some posters usually contain contact information, and for convenience and speed a user may record the contact information in a poster by taking a photo. However, when the user needs to dial the phone number in the poster or save it as a contact, the user has to view the photo of the poster, memorize the phone number in the photo, and then input the number to dial or store it. This is relatively laborious, and if the user forgets or misremembers the number, it must be input repeatedly, which is inconvenient. With the embodiment of the present invention, when the user needs to dial the phone number in the poster, the user can perform a trigger operation on the photo of the poster; the method can identify the phone number in the photo based on the trigger operation and provide an editable document including the number, so the user can directly copy the number from the poster, making it convenient to dial or store.
In addition, when a user newly joins a group, the user may need to record the phone numbers of multiple group members. A photo of the numbers alone cannot be further edited, which is inconvenient. For example, for the picture including a table shown in Fig. 6, the user can choose to perform table recognition on the picture; the method can perform table recognition on the picture based on the user's selection and provide an editable document including the table content, as shown in Fig. 7, making it convenient for the user to organize the table and edit the content in the table.
In some embodiments of the invention, the method may extract the position information of the text and/or the table in the Target Photo, and then provide, according to that position information, an editable document including the text and/or the table in the Target Photo. In this way, the relative positions of the content in the editable document can be kept consistent with those in the Target Photo, avoiding the inconvenience of disordered content. The method may also extract the size information of the text and/or the table in the Target Photo, and then provide, according to that size information, an editable document including the text and/or the table in the Target Photo. In this way, the fidelity of the text and/or the table in the Target Photo is improved, making it more convenient for the user to read or edit.
Optionally, the target identification mode includes text recognition or table recognition;
identifying the content in the Target Photo in the target identification mode includes:
if the target identification mode is text recognition, identifying the text content in the Target Photo;
if the target identification mode is table recognition, identifying the table and the table content in the Target Photo.
Optionally, identifying the content in the Target Photo in the target identification mode includes:
if the text in the Target Photo is handwriting, identifying the handwriting in the Target Photo according to a correspondence between handwritten stroke shapes and preset strokes.
In this embodiment, when the text in the Target Photo is handwriting, the method can identify the handwriting in the Target Photo according to the correspondence between handwritten stroke shapes and preset stroke shapes.
Specifically, correspondences between the handwritten stroke shapes of multiple users and preset strokes may be stored in advance in the mobile terminal, so the method can determine the text content in the Target Photo according to the correspondence between handwritten stroke shapes and preset strokes. In this way, the text in the Target Photo can be identified according to the writing styles of different writers, improving the accuracy of text recognition.
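The stored correspondence can be sketched as a per-user lookup table that maps a writer's observed stroke shapes to the preset strokes they stand for. The shape codes and the per-user dictionaries below are purely illustrative assumptions; the patent does not specify how stroke shapes are encoded.

```python
# Sketch: normalize a writer's handwritten stroke shapes using a pre-stored
# per-user correspondence table, as a first step of handwriting recognition.

# Per-user tables: observed handwritten stroke shape -> preset stroke.
USER_STROKE_TABLES = {
    "alice": {"~": "-", ")": "|"},
    "bob":   {"-": "-", "l": "|"},
}

def normalize_strokes(user, strokes):
    """Map a sequence of handwritten stroke shapes to preset strokes.

    Shapes with no stored correspondence are passed through unchanged.
    """
    table = USER_STROKE_TABLES[user]
    return [table.get(s, s) for s in strokes]
```

Because each user has a separate table, the same observed shape can resolve to different preset strokes for different writers, which is how the per-writer accuracy improvement described above would be realized.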
Referring to Fig. 9, Fig. 9 is a schematic flowchart of another image identification method provided by an embodiment of the present invention. As shown in Fig. 9, the method includes:
Step 901: receiving a trigger operation, input by a user, for identifying a Target Photo.
In this step, the method receives the trigger operation, input by the user, for identifying the Target Photo. The Target Photo may be a picture shot by the user through the mobile terminal, or a picture sent by another user and received through a communication program on the mobile terminal. The trigger operation may be a click operation on the Target Photo or a long-press operation on the Target Photo, which is not specifically limited in the embodiments of the present invention.
Step 902: determining, based on the trigger operation, a target identification mode for the Target Photo.
In this step, the method determines, based on the trigger operation, the target identification mode for the Target Photo. In the embodiments of the present invention, multiple different trigger operations are stored in advance in the mobile terminal, where different trigger operations correspond to different identification modes. For example, a first trigger operation corresponds to a first identification mode, a second trigger operation corresponds to a second identification mode, and a third trigger operation corresponds to a third identification mode. Specifically, if the method receives the first trigger operation input by the user, the method determines the first identification mode to be the target identification mode; if the method receives the second trigger operation input by the user, the method determines the second identification mode to be the target identification mode; if the method receives the third trigger operation input by the user, the method determines the third identification mode to be the target identification mode.
In some embodiments of the invention, determining the target identification mode for the Target Photo based on the trigger operation may also be: after receiving the trigger operation input by the user, providing a user interface on the mobile terminal for the user to select a target identification mode, and then determining the target identification mode based on the user's selection operation in the user interface.
The identification modes for the Target Photo may include a text recognition mode and a table recognition mode, and may also include other identification modes. The user may select the target identification mode according to the content of the Target Photo and his or her own needs. For example, if the Target Photo contains only text, the user may select the text recognition mode as the target identification mode; if the Target Photo contains both text and a table, the user may select the text recognition mode to identify only the text in the Target Photo, or select the table recognition mode to identify the table and the table content in the Target Photo.
Step 903: identifying the content in the Target Photo in the target identification mode.
In this step, the method identifies the content in the Target Photo in the target identification mode. Specifically, when the target identification mode is the text recognition mode, the method identifies the text content in the Target Photo; when the target identification mode is the table recognition mode, the method identifies the table and the table content in the Target Photo.
In some embodiments of the invention, before identifying the content in the Target Photo in the target identification mode, the method may first judge whether the Target Photo meets a preset condition. If the preset condition is met, the method directly identifies the content in the Target Photo in the target identification mode; if the preset condition is not met, the method performs preset processing on the Target Photo and then identifies the processed Target Photo in the target identification mode. Judging whether the Target Photo meets the preset condition may be judging whether the Target Photo was shot from the front, and the preset processing may be one or more of rotating, stretching, and scaling the Target Photo.
Step 904: extracting the content in the Target Photo, and providing an editable document including the content in the Target Photo.
In this step, the method extracts the content in the Target Photo and provides an editable document including the content in the Target Photo. Specifically, if the method performs text recognition on the Target Photo, the method extracts the text in the Target Photo and provides an editable document including that text, which makes it convenient for the user to organize or edit the text in the Target Photo. If the method performs table recognition on the Target Photo, the method extracts the table and the table content in the Target Photo and provides an editable document including the table and the table content, which makes it convenient for the user to edit or organize the table in the Target Photo.
For example, when a user is at school, in training, or attending a lecture, the user may need to take notes. However, because class time is limited and handwriting is slow, the user may record the course content by photographing the slides. After class, when the user needs to review or organize the notes, the user can perform a trigger operation on the captured picture to extract the content in the Target Photo and further edit that content. For example, as shown in Fig. 4, the user may photograph the slide content in class; when the method receives the user's trigger operation, the picture shown in Fig. 4 can be identified, the text content in the picture extracted, and an editable document including that text content provided, as shown in Fig. 5.
Some posters usually contain contact information, and for convenience and speed a user may record the contact information in a poster by taking a photo. However, when the user needs to dial the phone number in the poster or save it as a contact, the user has to view the photo of the poster, memorize the phone number in the photo, and then input the number to dial or store it. This is relatively laborious, and if the user forgets or misremembers the number, it must be input repeatedly, which is inconvenient. With the embodiment of the present invention, when the user needs to dial the phone number in the poster, the user can perform a trigger operation on the photo of the poster; the method can identify the phone number in the photo based on the trigger operation and provide an editable document including the number, so the user can directly copy the number from the poster, making it convenient to dial or store.
In addition, when a user newly joins a group, the user may need to record the phone numbers of multiple group members. A photo of the numbers alone cannot be further edited, which is inconvenient. For example, for the picture including a table shown in Fig. 6, the user can choose to perform table recognition on the picture; the method can perform table recognition on the picture based on the user's selection and provide an editable document including the table content, as shown in Fig. 7, making it convenient for the user to organize the table and edit the content in the table.
In some embodiments of the invention, the method may extract the position information of the text and/or the table in the Target Photo, and then provide, according to that position information, an editable document including the text and/or the table in the Target Photo. In this way, the relative positions of the content in the editable document can be kept consistent with those in the Target Photo, avoiding the inconvenience of disordered content. The method may also extract the size information of the text and/or the table in the Target Photo, and then provide, according to that size information, an editable document including the text and/or the table in the Target Photo. In this way, the fidelity of the text and/or the table in the Target Photo is improved, making it more convenient for the user to read or edit.
Step 905: receiving a screenshot operation performed on a target picture.
In this step, the method receives a screenshot operation performed by the user on a target picture. For example, when the user needs to share or record part of the content in the target picture, the user can perform a screenshot operation on the target picture, and the method receives that screenshot operation.
Step 906: acquiring the color information of the target picture, and dividing the target picture into multiple regions according to color contrast, where the edges within a same region are continuous.
In this step, the method acquires the color information of the target picture and divides the target picture into multiple regions according to color contrast, where the edges within a same region are continuous. The edges within a same region being continuous may mean that the differences between the pixel values at the edges of the same region are less than a preset threshold.
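The division described in step 906 can be sketched as a flood fill over the pixel grid, where adjacent pixels whose values differ by less than the preset threshold belong to the same region. Using a grayscale grid (rather than full RGB color information) and the specific threshold value are simplifying assumptions for the sketch.

```python
# Sketch of step 906: divide a picture into regions by color contrast.
# Adjacent pixels whose values differ by less than `threshold` are treated as
# one continuous region.

def divide_into_regions(pixels, threshold=16):
    """pixels: 2-D list of grayscale values. Returns a same-shaped label grid."""
    h, w = len(pixels), len(pixels[0])
    labels = [[None] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] is not None:
                continue
            # Flood-fill a new region starting at (sy, sx).
            stack = [(sy, sx)]
            labels[sy][sx] = next_label
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] is None
                            and abs(pixels[ny][nx] - pixels[y][x]) < threshold):
                        labels[ny][nx] = next_label
                        stack.append((ny, nx))
            next_label += 1
    return labels
```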
Step 907: determining a target region from the multiple regions, and taking a screenshot of the target region of the target picture.
In this step, the method determines a target region from the multiple regions and takes a screenshot of the target region of the target picture. In this way, the method can take a screenshot of a partial region, so the user does not need to crop the picture again after taking the screenshot, which is convenient for the user. In addition, the method can identify irregular edges, making it convenient for the user to take a region screenshot of a region with irregular edges and improving the aesthetics of the screenshot picture.
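Given a label grid such as one produced by the region division of step 906, the region screenshot of step 907 could be sketched as cropping the bounding box of the chosen region while blanking pixels outside it, so irregular edges are preserved. Using `None` as the "outside the region" filler is an illustrative assumption.

```python
# Sketch of step 907: screenshot of one target region from a labeled picture.

def screenshot_region(pixels, labels, target):
    """Crop the bounding box of region `target`; pixels outside it become None."""
    h, w = len(labels), len(labels[0])
    coords = [(y, x) for y in range(h) for x in range(w) if labels[y][x] == target]
    y0 = min(y for y, _ in coords)
    y1 = max(y for y, _ in coords)
    x0 = min(x for _, x in coords)
    x1 = max(x for _, x in coords)
    return [
        [pixels[y][x] if labels[y][x] == target else None
         for x in range(x0, x1 + 1)]
        for y in range(y0, y1 + 1)
    ]
```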
In some embodiments of the invention, the method may also identify picture regions by color or edge continuity, and may edit a region, for example by removing the background color or copying the region by dragging.
Referring to Figure 10, Figure 10 is a schematic functional block diagram of a mobile terminal provided by an embodiment of the present invention. As shown in Figure 10, the mobile terminal 1000 includes:
a first receiving module 1001, configured to receive a trigger operation, input by a user, for identifying a Target Photo;
a determining module 1002, configured to determine, based on the trigger operation, a target identification mode for the Target Photo;
an identification module 1003, configured to identify the content in the Target Photo in the target identification mode;
a control module 1004, configured to extract the content in the Target Photo and provide an editable document including the content in the Target Photo.
Optionally, referring to Figure 11, Figure 11 is a schematic functional block diagram of another mobile terminal provided by an embodiment of the present invention. As shown in Figure 11, the mobile terminal 1000 further includes:
a judging module 1005, configured to judge whether the Target Photo meets a preset condition;
a processing module 1006, configured to perform preset processing on the Target Photo if the picture does not meet the preset condition.
Optionally, the target identification mode includes text recognition or table recognition. Referring to Figure 12, Figure 12 is a schematic functional block diagram of another mobile terminal provided by an embodiment of the present invention. As shown in Figure 12, the identification module 1003 includes:
a first recognition unit 10031, configured to identify the text content in the Target Photo if the target identification mode is text recognition;
a second recognition unit 10032, configured to identify the table and the table content in the Target Photo if the target identification mode is table recognition.
Optionally, the identification module 1003 is specifically configured to:
if the text in the Target Photo is handwriting, identify the handwriting in the Target Photo according to a correspondence between handwritten stroke shapes and preset strokes.
Optionally, referring to Figure 13, Figure 13 is a schematic functional block diagram of another mobile terminal provided by an embodiment of the present invention. As shown in Figure 13, the mobile terminal 1000 further includes:
a second receiving module 1007, configured to receive a screenshot operation performed on a target picture;
a division module 1008, configured to acquire the color information of the target picture and divide the target picture into multiple regions according to color contrast, where the edges within a same region are continuous;
a screenshot module 1009, configured to determine a target region from the multiple regions and take a screenshot of the target region of the target picture.
The mobile terminal 1000 can implement each process implemented by the mobile terminal in the above embodiments; to avoid repetition, details are not described here again.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the methods of the above embodiments can be completed by hardware related to at least one program instruction. The at least one program can be stored in the memory 109 of the mobile terminal 100 shown in Fig. 1 and executed by the processor 110, and when executed by the processor 110, the at least one program implements the following steps:
receiving a trigger operation, input by a user, for identifying a Target Photo;
determining, based on the trigger operation, a target identification mode for the Target Photo;
identifying the content in the Target Photo in the target identification mode;
extracting the content in the Target Photo, and providing an editable document including the content in the Target Photo.
Optionally, before identifying the content in the Target Photo in the target identification mode, the processor 110 is further configured to:
judge whether the Target Photo meets a preset condition;
if the picture does not meet the preset condition, perform preset processing on the Target Photo.
Optionally, the target identification mode includes text recognition or table recognition;
identifying the content in the Target Photo in the target identification mode includes:
if the target identification mode is text recognition, identifying the text content in the Target Photo;
if the target identification mode is table recognition, identifying the table and the table content in the Target Photo.
Optionally, identifying the content in the Target Photo in the target identification mode includes:
if the text in the Target Photo is handwriting, identifying the handwriting in the Target Photo according to a correspondence between handwritten stroke shapes and preset strokes.
Optionally, the processor 110 is further configured to:
receive a screenshot operation performed on a target picture;
acquire the color information of the target picture, and divide the target picture into multiple regions according to color contrast, where the edges within a same region are continuous;
determine a target region from the multiple regions, and take a screenshot of the target region of the target picture.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the methods of the above embodiments can be completed by hardware related to at least one program instruction. The at least one program can be stored in a computer-readable storage medium, and when executed, the at least one program includes the following steps:
Receive the trigger action of the identification Target Photo of user's input;
Based on the trigger action, it is determined that the target identification mode to the Target Photo;
The content in the Target Photo is identified in a manner of the target identification;
The content in the Target Photo is extracted, and the editable document of the content included in the Target Photo is provided.
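The four steps above can be sketched end to end; the trigger format (a string naming the desired mode) and the placeholder recognizer are assumptions, since the text fixes neither:

```python
def recognize_picture(picture: str, trigger: str) -> dict:
    """End-to-end sketch of the four claimed steps.  Step 1 (receiving the
    trigger operation) corresponds to this call's arguments."""
    # Step 2: determine the target recognition mode from the trigger operation
    # (hypothetical convention: the trigger string names the mode).
    mode = "table" if "table" in trigger else "text"
    # Step 3: identify the content in the target picture using that mode
    # (placeholder standing in for a real OCR / table detector).
    content = f"<{mode} content recognized in {picture}>"
    # Step 4: extract the content and provide it as an editable document.
    return {"mode": mode, "document": content}
```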
Optionally, before identifying the content in the target picture using the target recognition mode, the at least one program, when executed, further implements the following steps:
judging whether the target picture meets a preset condition; and
if the target picture does not meet the preset condition, performing preset processing on the target picture.
Optionally, the target recognition mode includes text recognition or table recognition;
identifying the content in the target picture using the target recognition mode includes:
if the target recognition mode is text recognition, identifying the text content in the target picture; and
if the target recognition mode is table recognition, identifying the table and the table content in the target picture.
Optionally, identifying the content in the target picture using the target recognition mode includes:
if the text in the target picture is handwriting, identifying the handwriting in the target picture according to a correspondence between handwritten stroke shapes and preset strokes.
Optionally, when the at least one program is executed, the following steps are also implemented:
receiving a screenshot operation performed on the target picture;
obtaining colour information of the target picture, and dividing the target picture into multiple regions according to colour contrast, where the edge of each region is continuous; and
determining a target region from the multiple regions, and taking a screenshot of the target region of the target picture.
It should be noted that, as used herein, the terms "comprise" and "include", or any other variant thereof, are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.
The numbering of the above embodiments of the present invention is for description only and does not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is the preferable implementation. Based on such an understanding, the technical solution of the present invention, or the part thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are merely preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structural or flow transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (10)
1. An image recognition method applied to a mobile terminal, characterized in that the method comprises:
receiving a trigger operation, input by a user, for identifying a target picture;
determining, based on the trigger operation, a target recognition mode for the target picture;
identifying the content in the target picture using the target recognition mode; and
extracting the content in the target picture, and providing an editable document of the content contained in the target picture.
2. The image recognition method according to claim 1, characterized in that, before identifying the content in the target picture using the target recognition mode, the method further comprises:
judging whether the target picture meets a preset condition; and
if the target picture does not meet the preset condition, performing preset processing on the target picture.
3. The image recognition method according to claim 1, characterized in that the target recognition mode comprises text recognition or table recognition; and
identifying the content in the target picture using the target recognition mode comprises:
if the target recognition mode is text recognition, identifying the text content in the target picture; and
if the target recognition mode is table recognition, identifying the table and the table content in the target picture.
4. The image recognition method according to claim 1, characterized in that identifying the content in the target picture using the target recognition mode comprises:
if the text in the target picture is handwriting, identifying the handwriting in the target picture according to a correspondence between handwritten stroke shapes and preset strokes.
5. The image recognition method according to any one of claims 1 to 4, characterized in that the method further comprises:
receiving a screenshot operation performed on the target picture;
obtaining colour information of the target picture, and dividing the target picture into multiple regions according to colour contrast, wherein the edge of each region is continuous; and
determining a target region from the multiple regions, and taking a screenshot of the target region of the target picture.
6. A mobile terminal, characterized in that the mobile terminal comprises a memory, at least one processor, and at least one program stored on the memory and executable by the at least one processor, wherein the at least one program, when executed by the at least one processor, implements the following steps:
receiving a trigger operation, input by a user, for identifying a target picture;
determining, based on the trigger operation, a target recognition mode for the target picture;
identifying the content in the target picture using the target recognition mode; and
extracting the content in the target picture, and providing an editable document of the content contained in the target picture.
7. The mobile terminal according to claim 6, characterized in that, before identifying the content in the target picture using the target recognition mode, the at least one processor is further configured to:
judge whether the target picture meets a preset condition; and
if the target picture does not meet the preset condition, perform preset processing on the target picture.
8. The mobile terminal according to claim 6, characterized in that the target recognition mode comprises text recognition or table recognition; and
identifying the content in the target picture using the target recognition mode comprises:
if the target recognition mode is text recognition, identifying the text content in the target picture; and
if the target recognition mode is table recognition, identifying the table and the table content in the target picture.
9. The mobile terminal according to claim 6, characterized in that identifying the content in the target picture using the target recognition mode comprises:
if the text in the target picture is handwriting, identifying the handwriting in the target picture according to a correspondence between handwritten stroke shapes and preset strokes.
10. A computer-readable storage medium storing at least one computer-executable program, characterized in that, when executed by a computer, the at least one program causes the computer to perform the steps of the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710903876.0A CN107678650A (en) | 2017-09-29 | 2017-09-29 | A kind of image identification method, mobile terminal and computer-readable recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107678650A true CN107678650A (en) | 2018-02-09 |
Family
ID=61138373
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710903876.0A Pending CN107678650A (en) | 2017-09-29 | 2017-09-29 | A kind of image identification method, mobile terminal and computer-readable recording medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107678650A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103279262A (en) * | 2013-04-25 | 2013-09-04 | 深圳市中兴移动通信有限公司 | Method and device for extracting content from image |
CN104317520A (en) * | 2014-10-23 | 2015-01-28 | 小米科技有限责任公司 | Method and device for processing contents of display area |
CN105761201A (en) * | 2016-02-02 | 2016-07-13 | 山东大学 | Method for translation of characters in picture |
CN106791022A (en) * | 2016-11-30 | 2017-05-31 | 努比亚技术有限公司 | A kind of mobile terminal and screenshot method |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108509957A (en) * | 2018-03-30 | 2018-09-07 | 努比亚技术有限公司 | Character recognition method, terminal and computer-readable medium |
CN108509957B (en) * | 2018-03-30 | 2022-08-05 | 深圳市阳日电子有限公司 | Character recognition method, terminal and computer readable medium |
CN108874283A (en) * | 2018-05-29 | 2018-11-23 | 努比亚技术有限公司 | Image identification method, mobile terminal and computer readable storage medium |
CN108874283B (en) * | 2018-05-29 | 2021-06-18 | 努比亚技术有限公司 | Picture identification method, mobile terminal and computer readable storage medium |
CN108920612A (en) * | 2018-06-28 | 2018-11-30 | 山东中孚安全技术有限公司 | Parsing doc binary format and the method and system for extracting picture in document |
CN109461111A (en) * | 2018-10-26 | 2019-03-12 | 连尚(新昌)网络科技有限公司 | Image editing method, device, terminal device and medium |
CN111353422A (en) * | 2020-02-27 | 2020-06-30 | 维沃移动通信有限公司 | Information extraction method and device and electronic equipment |
CN111353422B (en) * | 2020-02-27 | 2023-08-22 | 维沃移动通信有限公司 | Information extraction method and device and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107358227A (en) | A kind of mark recognition method, mobile terminal and computer-readable recording medium | |
CN107678650A (en) | A kind of image identification method, mobile terminal and computer-readable recording medium | |
CN107748645A (en) | Reading method, mobile terminal and computer-readable recording medium | |
CN107748856A (en) | Two-dimensional code identification method, terminal and computer-readable recording medium | |
CN108234295A (en) | Display control method, terminal and the computer readable storage medium of group's functionality controls | |
CN107682627A (en) | A kind of acquisition parameters method to set up, mobile terminal and computer-readable recording medium | |
CN107832397A (en) | A kind of image processing method, device and computer-readable recording medium | |
CN108289174A (en) | A kind of image pickup method, mobile terminal and computer readable storage medium | |
CN107241494A (en) | A kind of quick inspection method of data content, mobile terminal and storage medium | |
CN109032466A (en) | Long screenshot method, mobile terminal and storage medium based on double screen | |
CN107333056A (en) | Image processing method, device and the computer-readable recording medium of moving object | |
CN107844231A (en) | A kind of interface display method, mobile terminal and computer-readable recording medium | |
CN107944022A (en) | Picture classification method, mobile terminal and computer-readable recording medium | |
CN107239205A (en) | A kind of photographic method, mobile terminal and storage medium | |
CN107844230A (en) | A kind of advertisement page method of adjustment, mobile terminal and computer-readable recording medium | |
CN109300099A (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
CN107943494A (en) | Distribution method and mobile terminal are applied by all kinds of means | |
CN108307111A (en) | A kind of zoom photographic method, mobile terminal and storage medium | |
CN108182664A (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
CN108012029A (en) | A kind of information processing method, equipment and computer-readable recording medium | |
CN107566608A (en) | A kind of system air navigation aid, equipment and computer-readable recording medium | |
CN107613206A (en) | A kind of image processing method, mobile terminal and computer-readable recording medium | |
CN107450796A (en) | A kind of image processing method, mobile terminal and computer-readable recording medium | |
CN109325133A (en) | A kind of method of Information locating, terminal and computer readable storage medium | |
CN107943397A (en) | A kind of note generation method, mobile terminal and computer-readable recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20180209 |