CN104424472A - Image recognition method and user terminal - Google Patents


Info

Publication number
CN104424472A
Authority
CN
China
Prior art keywords
user terminal
mark
region
image
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310400604.0A
Other languages
Chinese (zh)
Other versions
CN104424472B (en)
Inventor
徐丹华
汪运斌
龙志明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Device Co Ltd
Original Assignee
Huawei Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Device Co Ltd filed Critical Huawei Device Co Ltd
Priority to CN201310400604.0A priority Critical patent/CN104424472B/en
Priority to CN201910061460.8A priority patent/CN109902687B/en
Priority to PCT/CN2014/085761 priority patent/WO2015032308A1/en
Publication of CN104424472A publication Critical patent/CN104424472A/en
Application granted granted Critical
Publication of CN104424472B publication Critical patent/CN104424472B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/225 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on a marking or identifier characterising the area
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/90 Identifying an image sensor based on its output data

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the invention disclose an image recognition method and a user terminal. The method comprises: detecting, by the user terminal, a marking operation performed on an image; determining the region marked by the user on the image; recognizing, by the user terminal, the marked content in the marked region; and displaying the marked content in an enlarged manner. With the image recognition method and the user terminal, content of interest to the user can be displayed enlarged.

Description

Image recognition method and user terminal
Technical field
The present invention relates to the communications field, and in particular to an image recognition method and a user terminal.
Background technology
When reading books or newspapers, or seeing advertising slogans in the street, people often want to store content that interests them, but especially outdoors there is frequently no pen and paper at hand to record it.
Existing optical character recognition (OCR, Optical Character Recognition) technology can transfer text content to a terminal such as a computer or mobile phone through an optical instrument, such as an image scanner, fax machine, or any photographic device, recognize the text, and then display it on the terminal. However, because the screen of a terminal device is small and a page to be displayed contains a large amount of content, the user cannot clearly view the content of most interest.
Summary of the invention
The present invention provides an image recognition method and a user terminal, which solve the problem that a user cannot clearly view the content of most interest.
According to a first aspect, an image recognition method is provided, including:
detecting, by a user terminal, an operation of marking performed by a user on an image;
determining, by the user terminal, a region marked by the user on the image;
recognizing, by the user terminal, the marked content in the marked region; and
displaying, by the user terminal, the marked content in an enlarged manner.
With reference to the first aspect, in a first implementation of the first aspect, the step of determining, by the user terminal, the region marked by the user on the image includes:
determining, by the user terminal, the marked region of the image according to the trajectory coordinates of the mark.
With reference to the first implementation of the first aspect, in a second implementation of the first aspect, the step of determining, by the user terminal, the marked region of the image according to the trajectory coordinates of the mark includes:
if the trajectory of the mark is a closed trajectory, determining, by the user terminal, that the region inside the closed trajectory is the marked region.
With reference to the first implementation of the first aspect, in a third implementation of the first aspect, the step of determining, by the user terminal, the marked region of the image according to the trajectory coordinates of the mark includes:
if the trajectory of the mark is a non-closed trajectory, determining, by the user terminal, that the region above the non-closed trajectory is the marked region.
According to a second aspect, a user terminal is provided, where the user terminal includes:
a detecting unit, configured to detect an operation of marking performed by a user on an image;
a determining unit, configured to determine a region marked by the user on the image;
a recognition unit, configured to recognize the marked content in the marked region; and
a display control unit, configured to control a display to display the marked content in an enlarged manner.
With reference to the second aspect, in a first implementation of the second aspect:
the determining unit is configured to determine the marked region of the image according to the trajectory coordinates of the mark.
With reference to the first implementation of the second aspect, in a second implementation of the second aspect:
the determining unit is configured to, when the trajectory of the mark is a closed trajectory, determine that the region inside the closed trajectory is the marked region.
With reference to the first implementation of the second aspect, in a third implementation of the second aspect:
the determining unit is configured to, when the trajectory of the mark is a non-closed trajectory, determine that the region above the non-closed trajectory is the marked region.
With reference to the second aspect, or the first, second, or third implementation of the second aspect, in a fourth implementation of the second aspect, the display control unit includes:
an extraction module, configured to extract the recognized marked content;
a processing module, configured to process the extracted marked content and save the processed marked content; and
a display control module, configured to control the display to display the processed marked content in an enlarged manner.
As can be seen from the above technical solutions, the embodiments of the present invention have the following advantages:
In the embodiments of the present invention, the user terminal detects an operation of marking performed by the user on an image, determines the region marked by the user on the image, recognizes the marked content in the marked region, and then displays the recognized marked content in an enlarged manner, so that content of interest to the user can be displayed enlarged.
Brief description of drawings
Fig. 1 is a schematic diagram of an embodiment of the image recognition method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of another embodiment of the image recognition method according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of an embodiment of the user terminal according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of another embodiment of the user terminal according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of another embodiment of the user terminal according to an embodiment of the present invention.
Description of embodiments
The embodiments of the present invention provide an image recognition method and a user terminal, with which the user terminal displays, in an enlarged manner, only the marked content in a marked region.
Referring to Fig. 1, an embodiment of the image recognition method in the embodiments of the present invention includes:
101. The user terminal detects an operation of marking performed by a user on an image.
In this embodiment, the user terminal can detect the operation of marking performed by the user on the image. The user may make the mark on the image with a finger or with a stylus; this is not limited here. The way in which the image is obtained is also not limited: it may be an image generated after the user terminal takes a photograph with its camera, or an image downloaded by the user terminal from another electronic device.
102. The user terminal determines the region marked by the user on the image.
In this embodiment, after the marking operation is detected in step 101, the user terminal can determine the region marked by the user on the image.
103. The user terminal recognizes the marked content in the marked region.
In this embodiment, after determining the marked region, the user terminal can recognize the marked content in that region, so that the user can view the recognized content on the screen of the user terminal.
104. The user terminal displays the marked content in an enlarged manner.
In this embodiment, the user terminal can display the marked content recognized in step 103 in an enlarged manner on its screen.
In this embodiment, the user terminal detects the marking operation performed by the user on the image, determines the region marked by the user, recognizes the marked content in the marked region, and then displays the recognized content in an enlarged manner. The user terminal therefore recognizes only the marked content in the marked region, and content of interest to the user can be displayed enlarged.
For ease of understanding, the image recognition method in the embodiments of the present invention is described below with a specific example. Referring to Fig. 2, another embodiment of the image recognition method includes:
201. The user terminal detects an operation of marking performed by a user on an image.
In this embodiment, the user terminal detects the marking operation performed by the user on the image. The image may be generated after the user terminal takes a photograph with its camera, or may be downloaded by the user terminal from another electronic device. For example, when the user is reading a book or newspaper, or sees an advertising slogan outdoors, and notices text or a pattern of interest, the user can send an instruction to the user terminal; the user terminal takes a photograph according to the instruction and generates an image containing the text or pattern of interest. The user terminal may first display the image on its screen, and the user can then mark the content of interest on the image. The manner of marking is not limited: the user may mark with a finger or with a stylus, and a person skilled in the art can derive other obvious marking manners from these two. In this embodiment, the marking of text content is used as an example.
In practical applications, the user terminal may open a preset marking interface for the user to mark on the image. After marking, the user can inform the user terminal that the marking is complete by tapping a "Done" virtual key or by a voice command; the manner of informing the user terminal is not limited here. The user terminal may also preset a threshold A: when the time the user has spent marking the image reaches or exceeds the threshold A, the user terminal may prompt the user to ask whether the marking is complete.
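The timing prompt described above could be sketched, purely for illustration, as follows; the class name MarkSession, the method on_touch_move, and the value used for threshold A are hypothetical and not part of the disclosure.

```python
import time

MARK_TIMEOUT_A = 5.0  # hypothetical value for threshold A, in seconds


class MarkSession:
    """Tracks one marking session and prompts once threshold A is exceeded."""

    def __init__(self) -> None:
        self.start = time.monotonic()
        self.prompted = False

    def on_touch_move(self) -> None:
        # Called repeatedly while the user is still drawing the mark.
        if not self.prompted and time.monotonic() - self.start >= MARK_TIMEOUT_A:
            self.prompted = True
            print("Finished marking? Tap 'Done' or say 'done' to confirm.")
```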
202. The user terminal determines the marked region of the image according to the trajectory coordinates of the mark.
In this embodiment, the mark made by the user on the image is not limited; it may be a straight line, a curve, an ellipse, a rectangle, or a circle. In practical applications, recognizing the trajectory of a mark is prior art: because the user marks the image on the screen of the user terminal, the user terminal can detect the user's touch points on the screen, identify their trajectory coordinates, and determine the marked region of the image from those coordinates. If the trajectory of the mark is closed (for example, an ellipse, rectangle, or circle), the user terminal may treat the region inside the closed trajectory as the marked region; if the trajectory is non-closed (for example, a straight line or curve), the user terminal may treat the region above the non-closed trajectory as the marked region, for example, the nearest N lines of text above the non-closed trajectory. If the image contains a pattern (for example, a person or an object), the user terminal may prompt the user to mark with a closed trajectory. The marked region may also be set according to the user's habit, for example, as the region below the non-closed trajectory.
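As an illustration of the two region rules described above, a minimal sketch follows; the tolerance, line_height, and n_lines parameters are assumptions, and the area enclosed by a closed trajectory is approximated by its bounding box rather than by its exact interior.

```python
from typing import List, Tuple

Point = Tuple[int, int]


def is_closed(track: List[Point], tolerance: int = 20) -> bool:
    """A trajectory is treated as closed if its end point returns near its start point."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    return abs(x1 - x0) <= tolerance and abs(y1 - y0) <= tolerance


def marked_region(track: List[Point], line_height: int = 40, n_lines: int = 1) -> Tuple[int, int, int, int]:
    """Return the marked region as a bounding box (left, top, right, bottom).

    Closed trajectory:     the area it encloses (approximated by its bounding box).
    Non-closed trajectory: the N text lines immediately above the trajectory.
    """
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    if is_closed(track):
        return min(xs), min(ys), max(xs), max(ys)
    top = max(0, min(ys) - n_lines * line_height)
    return min(xs), top, max(xs), min(ys)
```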
203. The user terminal recognizes the marked content in the marked region.
In this embodiment, taking text as the marked content as an example, the user terminal may use OCR to recognize only the marked content in the marked region. OCR is a technique that examines characters printed on paper, determines the shapes of the characters by detecting dark and bright patterns, and then translates the shapes into computer text using character recognition methods. The specific implementation of OCR is a known technique and is not described in detail here.
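A minimal sketch of restricting recognition to the marked region is given below; the open-source pytesseract wrapper is used only as a stand-in for whatever OCR engine the terminal actually employs, and it assumes the Tesseract engine is installed.

```python
from PIL import Image
import pytesseract  # stand-in OCR engine; requires Tesseract to be installed


def recognize_marked_text(image_path: str, region: tuple) -> str:
    """Run OCR on the marked region only, rather than on the whole image."""
    img = Image.open(image_path)
    cropped = img.crop(region)  # region is (left, top, right, bottom)
    # For Chinese text, a language pack such as 'chi_sim' would be passed via lang=.
    return pytesseract.image_to_string(cropped)
```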
204. The user terminal extracts the recognized marked content.
205. The user terminal processes the extracted marked content and saves the processed marked content.
206. The user terminal displays the marked content in an enlarged manner.
In this embodiment, the user terminal extracts the marked content recognized in step 203. If the marked content is text, the extracted content is re-typeset, and the re-typeset content is then saved and displayed to the user; if the marked content is a pattern, parameters of the pattern such as its size and tone are processed, and the user terminal then displays the processed content in an enlarged manner. In practical applications, the user can share the marked content saved in the user terminal with other users.
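A rough sketch of this processing step, assuming extracted text is re-typeset by simple line wrapping and an extracted pattern is resized and tone-adjusted with Pillow; the wrap width, scale factor, and tone factor are illustrative values, not disclosed parameters.

```python
import textwrap

from PIL import Image, ImageEnhance


def process_text(extracted: str, width: int = 20) -> str:
    """Re-typeset extracted text so it reads well when shown enlarged on a small screen."""
    return "\n".join(textwrap.wrap(extracted, width))


def process_pattern(cropped: Image.Image, scale: float = 2.0, tone: float = 1.2) -> Image.Image:
    """Enlarge an extracted pattern and adjust its colour tone before display."""
    w, h = cropped.size
    enlarged = cropped.resize((int(w * scale), int(h * scale)))
    return ImageEnhance.Color(enlarged).enhance(tone)
```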
In this embodiment, the user terminal detects the marking operation performed by the user on the image, determines the marked region of the image according to the trajectory coordinates of the mark, recognizes the marked content in the marked region, extracts the recognized content, processes and saves the extracted content, and displays the processed content. The user terminal thus enlarges and displays only the marked content in the marked region, which makes it convenient for the user to view content of interest; saving only the extracted marked content also effectively saves storage space, so that the user can look up content of interest at any time and share it with other users.
The user terminal of the embodiments of the present invention, which performs the above image recognition method, is described below. For its basic logical structure, refer to Fig. 3. An embodiment of the user terminal includes:
a detecting unit 301, a determining unit 302, a recognition unit 303, and a display control unit 304;
the detecting unit 301 is configured to detect an operation of marking performed by a user on an image;
the determining unit 302 is configured to determine the region marked by the user on the image;
the recognition unit 303 is configured to recognize the marked content in the marked region;
the display control unit 304 is configured to control a display to display the recognized marked content in an enlarged manner.
In this embodiment, the detecting unit 301 detects the marking operation performed by the user on the image, the determining unit 302 determines the region marked by the user, the recognition unit 303 recognizes the marked content in the marked region, and the display control unit 304 controls the display to show the content recognized by the recognition unit 303 in an enlarged manner. The user terminal therefore recognizes only the marked content in the marked region, and content of interest to the user can be displayed enlarged.
To better understand the above embodiments, the data interaction among the modules and units of the user terminal is described below with a specific embodiment. Referring to Fig. 4, another embodiment of the user terminal includes:
a detecting unit 401, a determining unit 402, a recognition unit 403, and a display control unit 404;
where the display control unit 404 includes an extraction module 4041, a processing module 4042, and a display control module 4043.
The detecting unit 401 detects the marking operation performed by the user on the image. In practical applications, the image may be generated after a photograph is taken with a camera, or may be downloaded by the user terminal from another electronic device. For example, when the user is reading a book or newspaper, or sees an advertising slogan outdoors, and notices text or a pattern of interest, the user can send an instruction to the user terminal; the user terminal takes a photograph according to the instruction and generates an image containing the text or pattern of interest. The user terminal may first display the image on its screen, and the user can then mark the text of interest on the image. The manner of marking is not limited: the user may mark with a finger or with a stylus, and a person skilled in the art can derive other obvious marking manners from these two. When detection is complete, the detecting unit 401 informs the determining unit 402 and sends the image to it.
The determining unit 402 determines the marked region of the image according to the trajectory coordinates of the mark. The mark made by the user on the image is not limited; it may be a straight line, a curve, an ellipse, a rectangle, or a circle. In practical applications, recognizing the trajectory of a mark is prior art: because the user marks the image on the screen of the user terminal, the determining unit 402 can detect the user's touch points, identify their trajectory coordinates, and determine the marked region of the image from those coordinates. If the trajectory of the mark is closed (for example, an ellipse, rectangle, or circle), the determining unit 402 determines that the region inside the closed trajectory is the marked region; if the trajectory is non-closed (for example, a straight line or curve), the determining unit 402 determines that the region above the non-closed trajectory is the marked region, for example, the nearest N lines of text above the non-closed trajectory. If the image contains a pattern (for example, a person or an object), the user may be prompted to mark with a closed trajectory; the marked region may also be set according to the user's habit, for example, as the region below the non-closed trajectory. The determining unit 402 then sends the image to the recognition unit 403 and informs it of the marked region of the image.
The recognition unit 403 recognizes the marked content in the marked region. If the marked content is text, OCR may be used to recognize only the marked content in the marked region. OCR is a technique that examines characters printed on paper, determines the shapes of the characters by detecting dark and bright patterns, and then translates the shapes into computer text using character recognition methods; its specific implementation is a known technique and is not described in detail here. The recognition unit 403 sends the image to the extraction module 4041 and informs it of the recognized marked content.
The extraction module 4041 extracts the marked content in the marked region of the image and sends the extracted content to the processing module 4042.
If the marked content is text, the processing module 4042 re-typesets the extracted content, and the re-typeset content is then saved and displayed to the user; if the marked content is a pattern, the processing module 4042 processes parameters of the pattern such as its size and tone. The processing module 4042 saves the processed marked content; in practical applications, the user can share the marked content saved in the user terminal with other users. The processing module 4042 then sends the processed content to the display control module 4043.
The display control module 4043 controls the display to show the processed marked content in an enlarged manner.
In this embodiment, the detecting unit 401 detects the marking operation performed by the user on the image, the determining unit 402 determines the marked region of the image according to the trajectory coordinates of the mark, the recognition unit 403 recognizes the marked content in the marked region, the extraction module 4041 extracts the recognized content, the processing module 4042 processes and saves the extracted content, and the display control module 4043 controls the display to show the processed content in an enlarged manner. The user terminal thus enlarges and displays only the marked content in the marked region, which makes it convenient for the user to view content of interest; saving only the extracted marked content also effectively saves storage space, so that the user can look up content of interest at any time and share it with other users.
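The division into units and modules in this embodiment could be mirrored in code roughly as follows; the class and method names are illustrative, and each injected unit is assumed to expose the method called on it.

```python
class UserTerminal:
    """Structural sketch of the units of Fig. 4; not the disclosed implementation."""

    def __init__(self, detector, determiner, recognizer, display_controller):
        self.detector = detector                        # detecting unit 401
        self.determiner = determiner                    # determining unit 402
        self.recognizer = recognizer                    # recognition unit 403
        self.display_controller = display_controller    # display control unit 404

    def handle_mark(self, image):
        track = self.detector.detect_mark(image)                  # mark trajectory
        region = self.determiner.marked_region(track)             # marked region
        content = self.recognizer.recognize(image, region)        # marked content
        extracted = self.display_controller.extract(content)      # extraction module 4041
        processed = self.display_controller.process(extracted)    # processing module 4042
        self.display_controller.display_enlarged(processed)       # display control module 4043
```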
The user terminal in the embodiments of the present invention is further described below. Referring to Fig. 5, another embodiment of the user terminal includes: a processor 501, a camera 502 for generating images, a display 503, and a memory 504 for storing images.
The processor 501 detects the marking operation performed by the user on the image, determines the region marked by the user on the image, and then recognizes the marked content in the marked region;
the display 503 displays the marked content in an enlarged manner.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
The above embodiments are merely intended to describe the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some of their technical features, without such modifications or replacements departing from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. An image recognition method, comprising:
detecting, by a user terminal, an operation of marking performed by a user on an image;
determining, by the user terminal, a region marked by the user on the image;
recognizing, by the user terminal, the marked content in the marked region; and
displaying, by the user terminal, the marked content in an enlarged manner.
2. The method according to claim 1, wherein the determining, by the user terminal, of the region marked by the user on the image comprises:
determining, by the user terminal, the marked region of the image according to the trajectory coordinates of the mark.
3. The method according to claim 2, wherein the determining, by the user terminal, of the marked region of the image according to the trajectory coordinates of the mark comprises:
if the trajectory of the mark is a closed trajectory, determining, by the user terminal, that the region inside the closed trajectory is the marked region.
4. The method according to claim 2, wherein the determining, by the user terminal, of the marked region of the image according to the trajectory coordinates of the mark comprises:
if the trajectory of the mark is a non-closed trajectory, determining, by the user terminal, that the region above the non-closed trajectory is the marked region.
5. A user terminal, comprising:
a detecting unit, configured to detect an operation of marking performed by a user on an image;
a determining unit, configured to determine a region marked by the user on the image;
a recognition unit, configured to recognize the marked content in the marked region; and
a display control unit, configured to control a display to display the marked content in an enlarged manner.
6. The user terminal according to claim 5, wherein
the determining unit is configured to determine the marked region of the image according to the trajectory coordinates of the mark.
7. The user terminal according to claim 6, wherein
the determining unit is configured to, when the trajectory of the mark is a closed trajectory, determine that the region inside the closed trajectory is the marked region.
8. The user terminal according to claim 6, wherein
the determining unit is configured to, when the trajectory of the mark is a non-closed trajectory, determine that the region above the non-closed trajectory is the marked region.
9. The user terminal according to any one of claims 5 to 8, wherein the display control unit comprises:
an extraction module, configured to extract the recognized marked content;
a processing module, configured to process the extracted marked content and save the processed marked content; and
a display control module, configured to control the display to display the processed marked content in an enlarged manner.
CN201310400604.0A 2013-09-05 2013-09-05 Image recognition method and user terminal Active CN104424472B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201310400604.0A CN104424472B (en) 2013-09-05 2013-09-05 Image recognition method and user terminal
CN201910061460.8A CN109902687B (en) 2013-09-05 2013-09-05 Image identification method and user terminal
PCT/CN2014/085761 WO2015032308A1 (en) 2013-09-05 2014-09-02 Image recognition method and user terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310400604.0A CN104424472B (en) 2013-09-05 2013-09-05 Image recognition method and user terminal

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201910061460.8A Division CN109902687B (en) 2013-09-05 2013-09-05 Image identification method and user terminal

Publications (2)

Publication Number Publication Date
CN104424472A true CN104424472A (en) 2015-03-18
CN104424472B CN104424472B (en) 2019-02-19

Family

ID=52627798

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910061460.8A Active CN109902687B (en) 2013-09-05 2013-09-05 Image identification method and user terminal
CN201310400604.0A Active CN104424472B (en) 2013-09-05 2013-09-05 Image recognition method and user terminal

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201910061460.8A Active CN109902687B (en) 2013-09-05 2013-09-05 Image identification method and user terminal

Country Status (2)

Country Link
CN (2) CN109902687B (en)
WO (1) WO2015032308A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110032994A (en) * 2019-06-10 2019-07-19 上海肇观电子科技有限公司 Character detecting method, reading aids, circuit and medium
CN110059678A (en) * 2019-04-17 2019-07-26 上海肇观电子科技有限公司 Detection method, device and computer readable storage medium
US10796187B1 (en) 2019-06-10 2020-10-06 NextVPU (Shanghai) Co., Ltd. Detection of texts

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9830055B2 (en) 2016-02-16 2017-11-28 Gal EHRLICH Minimally invasive user metadata
CN108461129B (en) * 2018-03-05 2022-05-20 余夏夏 Medical image labeling method and device based on image authentication and user terminal
CN116030388B (en) * 2022-12-30 2023-08-11 以萨技术股份有限公司 Processing method for identifying task, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6956779B2 (en) * 1999-01-14 2005-10-18 Silicon Storage Technology, Inc. Multistage autozero sensing for a multilevel non-volatile memory integrated circuit system
CN102184396A (en) * 2011-06-13 2011-09-14 北方工业大学 Document image tilt correction method based on OCR recognition feedback
CN102209969A (en) * 2008-11-12 2011-10-05 富士通株式会社 Character area extracting device, image picking-up device provided with character area extracting function and character area extracting program
CN102999752A (en) * 2012-11-15 2013-03-27 广东欧珀移动通信有限公司 Method and device for quickly identifying local characters in picture and terminal

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100472604C (en) * 2003-03-04 2009-03-25 富士通株式会社 Image display method, image display program, and information device
KR101558211B1 (en) * 2009-02-19 2015-10-07 엘지전자 주식회사 User interface method for inputting a character and mobile terminal using the same
KR101857564B1 (en) * 2009-05-15 2018-05-15 삼성전자 주식회사 Method for processing image of mobile terminal
KR101527037B1 (en) * 2009-06-23 2015-06-16 엘지전자 주식회사 Mobile terminal and method for controlling the same
CN102169477B (en) * 2010-02-25 2013-02-20 汉王科技股份有限公司 Electronic document displaying method and device
KR101851239B1 (en) * 2011-11-08 2018-04-23 삼성전자 주식회사 Device and method for processing an image expression of wireless terminal
TWI544350B (en) * 2011-11-22 2016-08-01 Inst Information Industry Input method and system for searching by way of circle
CN102662566B (en) * 2012-03-21 2016-08-24 中兴通讯股份有限公司 Screen content amplification display method and terminal
CN103176712B (en) * 2013-03-08 2016-03-09 小米科技有限责任公司 Image magnification display method and device
CN103279286A (en) * 2013-05-06 2013-09-04 鸿富锦精密工业(深圳)有限公司 Electronic device and method for adjusting display scale of pictures
CN103235836A (en) * 2013-05-07 2013-08-07 西安电子科技大学 Method for inputting information through mobile phone

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6956779B2 (en) * 1999-01-14 2005-10-18 Silicon Storage Technology, Inc. Multistage autozero sensing for a multilevel non-volatile memory integrated circuit system
CN102209969A (en) * 2008-11-12 2011-10-05 富士通株式会社 Character area extracting device, image picking-up device provided with character area extracting function and character area extracting program
CN102184396A (en) * 2011-06-13 2011-09-14 北方工业大学 Document image tilt correction method based on OCR recognition feedback
CN102999752A (en) * 2012-11-15 2013-03-27 广东欧珀移动通信有限公司 Method and device for quickly identifying local characters in picture and terminal

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059678A (en) * 2019-04-17 2019-07-26 上海肇观电子科技有限公司 Detection method, device and computer readable storage medium
CN110032994A (en) * 2019-06-10 2019-07-19 上海肇观电子科技有限公司 Character detecting method, reading aids, circuit and medium
CN110032994B (en) * 2019-06-10 2019-09-20 上海肇观电子科技有限公司 Character detecting method, reading aids, circuit and medium
US10796187B1 (en) 2019-06-10 2020-10-06 NextVPU (Shanghai) Co., Ltd. Detection of texts

Also Published As

Publication number Publication date
WO2015032308A1 (en) 2015-03-12
CN109902687B (en) 2023-12-08
CN109902687A (en) 2019-06-18
CN104424472B (en) 2019-02-19

Similar Documents

Publication Publication Date Title
US10013624B2 (en) Text entity recognition
US9836263B2 (en) Display control device, display control method, and program
CN104424472A (en) Image recognition method and user terminal
US9165406B1 (en) Providing overlays based on text in a live camera view
US9058516B2 (en) Automatic identification of fields and labels in forms
US9070036B2 (en) Systems and methods for note recognition
US20120287070A1 (en) Method and apparatus for notification of input environment
US20140297646A1 (en) Systems and methods for managing notes
US20150358549A1 (en) Image capturing parameter adjustment in preview mode
US20130091474A1 (en) Method and electronic device capable of searching and displaying selected text
KR20190021146A (en) Method and device for translating text displayed on display
JP4753842B2 (en) Idea extraction support system and method
CN107491428A (en) Bank's list and its information input method and device based on optical lattice technology
CN111754414B (en) Image processing method and device for image processing
CN109670507B (en) Picture processing method and device and mobile terminal
TW201044286A (en) Method and system for actively detecting and recognizing placards
CN104809099A (en) Document file generating device and document file generation method
WO2016057161A1 (en) Text-based thumbnail generation
JP2012027908A (en) Visual processing device, visual processing method and visual processing system
CN110717060A (en) Image mask filtering method and device and storage medium
CN104956378A (en) Electronic apparatus and handwritten-document processing method
JP2016085547A (en) Electronic apparatus and method
JP6279732B2 (en) TERMINAL DEVICE, INPUT CONTENT CONTROL METHOD, AND PROGRAM
US10417515B2 (en) Capturing annotations on an electronic display
CN115563255A (en) Method and device for processing dialog text, electronic equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20171116

Address after: 523808 Southern Factory Building (Phase I) Project B2 Production Plant-5, New Town Avenue, Songshan Lake High-tech Industrial Development Zone, Dongguan City, Guangdong Province

Applicant after: HUAWEI terminal (Dongguan) Co., Ltd.

Address before: 518129 Building 2, Zone B, Huawei Base, Bantian, Longgang District, Guangdong Province

Applicant before: Huawei Device Co., Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 523808 Southern Factory Building (Phase I) Project B2 Production Plant-5, New Town Avenue, Songshan Lake High-tech Industrial Development Zone, Dongguan City, Guangdong Province

Patentee after: Huawei Device Co., Ltd.

Address before: 523808 Southern Factory Building (Phase I) Project B2 Production Plant-5, New Town Avenue, Songshan Lake High-tech Industrial Development Zone, Dongguan City, Guangdong Province

Patentee before: HUAWEI terminal (Dongguan) Co., Ltd.

CP01 Change in the name or title of a patent holder