CN106155542A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN106155542A
CN106155542A (application CN201510160891.1A)
Authority
CN
China
Prior art keywords
picture
mark
gesture
layer
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510160891.1A
Other languages
Chinese (zh)
Other versions
CN106155542B (en)
Inventor
龚佳毅
殷文婧
陈喆
侯方
王景宇
王志斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201510160891.1A
Publication of CN106155542A
Application granted
Publication of CN106155542B
Legal status: Active
Anticipated expiration

Abstract

The present invention relates to an image processing method and device. The method includes: obtaining a loaded picture, and rendering and displaying the picture in a picture layer; receiving an operating gesture triggered by a user on the picture; recognizing the operating gesture, and generating a corresponding annotation object at a corresponding position on the picture according to the recognition result; and rendering the annotation object in an annotation layer and compositing it with the picture for display at the corresponding position. The present invention makes full use of the characteristics of a touchscreen, responds to various interaction gestures and combines them with shape recognition, so that the user can complete picture annotation quickly and easily on a touchscreen terminal. This simplifies the workflow of office users when using an annotation tool, greatly improves work efficiency, and optimizes the user experience.

Description

Image processing method and device
Technical field
The present invention relates to the field of picture processing technology, and in particular to an image processing method and device for annotating pictures on a touchscreen.
Background technology
At present, with the popularity of the iPhone, iPod touch, iPad, Android phones and Android tablets, touchscreen operation has increasingly become a popular mode of operation that users are accustomed to, and using picture annotation software to annotate pictures directly on these touch devices is increasingly favored by mobile office users. Unlike operating with a mouse on a PC, operation on a touchscreen has its own distinctive characteristics: finger taps replace mouse clicks, finger slides replace mouse movement, and there are also multi-point touch, long press and other operating gestures. Combined with shape recognition, many convenient interactive actions can be developed, greatly improving the efficiency of mobile office work.
However, in the prior art the procedure for annotating a picture on a touchscreen with an annotation tool is relatively complex, and the gesture recognition capabilities of the touchscreen are underused, which reduces the efficiency of picture annotation.
Summary of the invention
The embodiments of the present invention provide an image processing method and device that can quickly and conveniently annotate pictures on a touchscreen and simplify user operation.
An image processing method proposed by an embodiment of the present invention includes:
obtaining a loaded picture, and rendering and displaying the picture in a picture layer;
receiving an operating gesture triggered by a user on the picture;
recognizing the operating gesture, and generating a corresponding annotation object at a corresponding position on the picture according to the recognition result; and
rendering the annotation object in an annotation layer, and compositing it with the picture for display at the corresponding position.
An embodiment of the present invention also proposes a picture processing device, including:
a picture loading module, configured to obtain a loaded picture, and to render and display the picture in a picture layer;
a gesture receiving module, configured to receive an operating gesture triggered by a user on the picture;
an annotation module, configured to recognize the operating gesture and generate a corresponding annotation object at a corresponding position on the picture according to the recognition result; and
a display module, configured to render the annotation object in an annotation layer and composite it with the picture for display at the corresponding position.
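To make this division of responsibilities concrete, here is a minimal Kotlin sketch of the four modules and of an annotation object that carries its position; the interface names, the Annotation fields and the callback signatures are illustrative assumptions rather than anything specified by the patent.

// Illustrative sketch; names and types are assumptions, not the patented interfaces.
data class Annotation(
    val type: String,                           // e.g. "text", "arrow", "circle", "rectangle", "doodle"
    val x: Float, val y: Float,                 // coordinate of the annotation on the picture
    val payload: Any? = null,                   // text content, arrow end point, stroke points, ...
    val createdAt: Long = System.currentTimeMillis()
)

interface PictureLoadingModule {                // obtains the picture and renders it in the picture layer
    fun loadAndShow(source: String)
}

interface GestureReceivingModule {              // receives the gestures the user triggers on the picture
    fun onGesture(handler: (gesture: String, x: Float, y: Float) -> Unit)
}

interface AnnotationModule {                    // recognizes the gesture and produces an annotation object
    fun recognize(gesture: String, x: Float, y: Float): Annotation?
}

interface DisplayModule {                       // renders annotations in the annotation layer and composites them with the picture
    fun render(picture: android.graphics.Bitmap, annotations: List<Annotation>)
}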
With the image processing method and device proposed by the embodiments of the present invention, the loaded picture is obtained and displayed; the operating gesture triggered by the user on the picture is received; the operating gesture is recognized and a corresponding annotation object is generated according to the recognition result; and the annotation object is displayed at the corresponding position on the picture. The method thus makes full use of the characteristics of the touchscreen, responding to various interaction gestures and combining them with shape recognition, so that the user can complete picture annotation quickly and easily on a touchscreen terminal. This simplifies the workflow of office users when using an annotation tool, greatly improves work efficiency, and optimizes the user experience.
Brief description of the drawings
Fig. 1 is a schematic diagram of the system architecture involved in an embodiment of the present invention;
Fig. 2 is a schematic diagram of the architecture of the gesture recognition system shown in Fig. 1;
Fig. 3a is a schematic diagram of an operating gesture involved in an embodiment of the present invention;
Fig. 3b is a schematic diagram of another operating gesture involved in an embodiment of the present invention;
Fig. 3c is a schematic diagram of another operating gesture involved in an embodiment of the present invention;
Fig. 3d is a schematic diagram of another operating gesture involved in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the annotation queue of the annotation management system shown in Fig. 1;
Fig. 5 is a schematic diagram of the hardware architecture of the picture processing device involved in an embodiment of the present invention;
Fig. 6 is a flowchart of a preferred embodiment of the image processing method of the present invention;
Fig. 7 is a functional block diagram of a preferred embodiment of the picture processing device of the present invention.
To make the technical solution of the present invention clearer, it is described in detail below with reference to the accompanying drawings.
Detailed description of the invention
It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
The core idea of the embodiments of the present invention is: obtain the loaded picture and display it; receive the operating gesture triggered by the user on the picture; recognize the operating gesture and generate a corresponding annotation object according to the recognition result; and display the annotation object at the corresponding position on the picture. The solution thereby makes full use of the characteristics of the touchscreen, responding to various interaction gestures and combining them with shape recognition, so that the user can complete picture annotation quickly and conveniently on a touchscreen terminal, simplifying the workflow of office users when using an annotation tool, improving work efficiency and optimizing the user experience.
As stated above, unlike operating with a mouse on a PC, operation on a touchscreen has its own distinctive characteristics: finger taps replace mouse clicks, finger slides replace mouse movement, and there are also multi-point touch, long press and other operating gestures. Combined with shape recognition, many convenient and quick interactive actions can be developed, greatly improving the efficiency of mobile office work.
To make full use of these characteristics of the touchscreen and simplify the workflow of office users when using an annotation tool, the embodiments of the present invention propose an interaction method for annotating pictures on a touchscreen that makes full use of the touchscreen's features, responds to various interaction gestures and combines them with shape recognition, so that the user can finish picture annotation work more conveniently and quickly.
Specifically, as shown in Fig. 1, the software system architecture involved in the embodiments of the present invention includes a gesture recognition system, an annotation management system and a display system, wherein:
The gesture recognition system is mainly responsible for collecting operation metadata, analyzing the concrete operating gesture, and finally converting it into a specific annotation. The architecture of the gesture recognition system can be as shown in Fig. 2.
The operating gestures may, for example, be: double-tap to add text (as shown in Fig. 3a), long press to add an arrow (as shown in Fig. 3b), doodling a circle to add a circle annotation (as shown in Fig. 3c), doodling a rectangle to add a rectangle annotation (as shown in Fig. 3d), or other doodle operations to add freehand annotations; in addition, the operating gesture may also be a two-finger gesture for scaling the canvas, and so on.
The annotation management system is mainly responsible for maintaining annotation objects: it accepts newly added annotation objects from the gesture recognition system, adds them to an annotation queue, and passes the maintained annotation queue to the display system for presentation. The annotation queue of the annotation management system can be as shown in Fig. 4, and each annotation object also carries its coordinate information.
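As one way to picture the annotation queue of Fig. 4, the sketch below keeps annotation objects, together with their coordinates, in the order in which they were created; the class and method names are assumptions made for illustration, reusing the Annotation type from the module sketch above.

// A time-ordered annotation queue as in Fig. 4; each entry carries its coordinates.
class AnnotationQueue {
    private val queue = mutableListOf<Annotation>()

    // Called by the gesture recognition system when a new annotation object is produced.
    fun add(annotation: Annotation) {
        queue += annotation                     // appended, so the queue stays in creation order
    }

    // Handed to the display system; iterating in this order means a later annotation
    // is rendered after, and therefore on top of, an earlier one at the same position.
    fun all(): List<Annotation> = queue.toList()
}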
The display system is mainly responsible for displaying the picture and the annotation results, and can be divided into multiple layers, for example a picture layer and an annotation layer. The picture layer is responsible for rendering the original picture, and the annotation layer is responsible for rendering all annotation objects. When the user finishes annotating and saves the picture, the display system flattens the layers to generate a result image and shows it to the user, finally completing the annotation of the picture.
The above software system can be integrated into a picture processing device in the form of client software. The picture processing device may run on a PC, or on various touch-enabled mobile terminals such as mobile phones, tablets and portable handheld devices. The client software provides the user with an application operating interface and annotates the picture according to the user's corresponding gesture operations.
As a specific implementation, the loaded picture is obtained and displayed; the operating gesture triggered by the user on the picture is received; the operating gesture is recognized and a corresponding annotation object is generated according to the recognition result; and the annotation object is displayed at the corresponding position on the picture. The solution thereby makes full use of the characteristics of the touchscreen, responding to various interaction gestures and combining them with shape recognition, so that the user can complete picture annotation quickly and easily on a touchscreen terminal, simplifying the workflow of office users when using an annotation tool, improving work efficiency and optimizing the user experience.
The hardware configuration of the above picture processing device can be as shown in Fig. 5.
Referring to Fig. 5, the picture processing device may include: a processor 1001 (for example a CPU), a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002. The communication bus 1002 is used to realize connection and communication among these components. The user interface 1003 may include a display with touch capability (Display) and components such as a keyboard (Keyboard) and a mouse; it is used to receive information entered by the user and to pass the received information to the processor 1001 for processing, the information entered by the user including the various gesture operations the user performs. The display may be an LCD or LED display, and is used to show data that needs to be displayed, such as the picture and the user's annotations. Optionally, the user interface 1003 may also include standard wired and wireless interfaces. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory, or a non-volatile memory such as a disk memory; optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001. As shown in Fig. 5, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module and a picture processing program.
In the picture processing device shown in Fig. 5, the network interface 1004 is mainly used to connect to a back-end management platform and exchange data with it; the user interface 1003 is mainly used to connect to the client, exchange data with the client, and receive information and instructions such as the gesture operations and annotations entered by the client; and the processor 1001 may be used to call the picture processing program stored in the memory 1005 and perform the following operations:
obtaining a loaded picture, and rendering and displaying the picture in a picture layer;
receiving an operating gesture triggered by a user on the picture;
recognizing the operating gesture, and generating a corresponding annotation object at a corresponding position on the picture according to the recognition result; and
rendering the annotation object in an annotation layer, and compositing it with the picture for display at the corresponding position.
Further, in one embodiment, the processor 1001 calls the picture processing program stored in the memory 1005 and may also perform the following operations:
adding all annotation objects to an annotation queue in the chronological order in which the annotation objects were generated;
obtaining all annotation objects in the annotation queue;
rendering the original picture in the picture layer and rendering all annotation objects in the annotation layer; and
after the user finishes annotating, merging the picture layer and the annotation layer to generate and display the final picture carrying the annotation objects.
Further, in one embodiment, the processor 1001 calls the picture processing program stored in the memory 1005 and may also perform the following operations:
when the operating gesture is recognized as a double-tap gesture, displaying an input box at the corresponding position on the picture according to the double-tap gesture;
receiving the text entered by the user in the input box; and
generating, based on the text, the text annotation object corresponding to the double-tap gesture.
Further, in one embodiment, the processor 1001 calls the picture processing program stored in the memory 1005 and may also perform the following operations:
when the operating gesture is recognized as a long-press gesture, displaying an arrow at the corresponding position on the picture according to the long-press gesture;
receiving a drag instruction applied to the arrow by the user; and
extending the length the arrow points to according to the drag instruction, and generating the arrow annotation object corresponding to the long-press gesture.
Further, in one embodiment, the processor 1001 calls the picture processing program stored in the memory 1005 and may also perform the following operation:
when the operating gesture is recognized as a doodle gesture, drawing the corresponding doodle shape at the corresponding position on the picture according to the doodle gesture, and generating the corresponding doodle annotation object.
Further, in one embodiment, the processor 1001 calls the picture processing program stored in the memory 1005 and may also perform the following operation:
when the operating gesture is recognized as a two-finger slide gesture, scaling the displayed picture according to the two-finger slide instruction.
Through the above solution, this embodiment obtains the loaded picture, and renders and displays the picture in a picture layer; receives the operating gesture triggered by the user on the picture; recognizes the operating gesture and generates a corresponding annotation object at the corresponding position on the picture according to the recognition result; and renders the annotation object in an annotation layer and composites it with the picture for display at the corresponding position. It thereby makes full use of the characteristics of the touchscreen, responding to various interaction gestures and combining them with shape recognition, so that the user can complete picture annotation quickly and easily on a touchscreen terminal, simplifying the workflow of office users when using an annotation tool, greatly improving work efficiency and optimizing the user experience.
Based on the above software system architecture and hardware structure, embodiments of the image processing method of the present invention are proposed.
As shown in Fig. 6, a first embodiment of the present invention proposes an image processing method, including:
Step S101: obtain a loaded picture, and render and display the picture in a picture layer.
Taking a mobile phone as an example, when the user needs to annotate a picture, the user can first select the picture to be annotated. The picture may be one stored locally on the phone, one downloaded from a web server, or one just taken by the user with the phone's camera; no limitation is imposed here.
Step S102: receive an operating gesture triggered by the user on the picture.
To achieve quick annotation of the picture, this embodiment completes the annotation through the user's gesture operations.
To this end, this embodiment presets the annotation object corresponding to each operating gesture, for example: double-tap to add text (as shown in Fig. 3a), long press to add an arrow (as shown in Fig. 3b), doodling a circle to add a circle annotation (as shown in Fig. 3c), doodling a rectangle to add a rectangle annotation (as shown in Fig. 3d), or other doodle operations to add freehand annotations; in addition, the operating gesture may also be a two-finger gesture for scaling the canvas, and so on.
The user can enter the corresponding operating gesture on the picture as needed to annotate it.
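On an Android client, for example, the mapping from touch gestures to annotation actions could be wired up with a GestureDetector roughly as sketched below; the callback names are placeholders and the choice of Android APIs is an assumption, since the patent does not prescribe a platform.

import android.content.Context
import android.view.GestureDetector
import android.view.MotionEvent
import android.view.View

// Sketch: dispatch double-tap and long-press gestures on the picture view to annotation actions.
class AnnotationGestureListener(
    private val onDoubleTapAt: (x: Float, y: Float) -> Unit,   // e.g. open a text input box (Fig. 3a)
    private val onLongPressAt: (x: Float, y: Float) -> Unit    // e.g. start an arrow annotation (Fig. 3b)
) : GestureDetector.SimpleOnGestureListener() {

    override fun onDoubleTap(e: MotionEvent): Boolean {
        onDoubleTapAt(e.x, e.y)
        return true
    }

    override fun onLongPress(e: MotionEvent) {
        onLongPressAt(e.x, e.y)
    }
}

// Usage: attach the detector to the view that shows the picture layer.
fun attachGestures(context: Context, view: View, listener: AnnotationGestureListener) {
    val detector = GestureDetector(context, listener)
    view.setOnTouchListener { _, event -> detector.onTouchEvent(event) }
}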
Step S103: recognize the operating gesture, and generate a corresponding annotation object at the corresponding position on the picture according to the recognition result.
After detecting the operating gesture triggered by the user on the picture, the gesture recognition system analyzes the operating gesture and generates the corresponding annotation object.
For different operating gestures, the concrete processing is as follows:
As shown in Fig. 3a, when the operating gesture is recognized as a double-tap gesture, an input box is displayed at the corresponding position on the picture according to the double-tap gesture, and the user can enter the desired text in the input box.
After the text entered by the user in the input box is received, the text annotation object corresponding to the double-tap gesture is generated based on the text.
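A minimal way to realize such an input box on Android is an AlertDialog wrapping an EditText, as in the sketch below; the dialog wording and the way the Annotation object (the illustrative type from the module sketch above) is built are assumptions.

import android.app.AlertDialog
import android.content.Context
import android.widget.EditText

// Sketch: on double-tap, pop an input box and turn the entered text into a text annotation.
fun promptTextAnnotation(context: Context, x: Float, y: Float,
                         onCreated: (Annotation) -> Unit) {
    val input = EditText(context)
    AlertDialog.Builder(context)
        .setTitle("Add annotation")
        .setView(input)
        .setPositiveButton("OK") { _, _ ->
            val text = input.text.toString()
            if (text.isNotBlank()) {
                // The annotation carries the tap position so the display system
                // can composite it at the corresponding place on the picture.
                onCreated(Annotation(type = "text", x = x, y = y, payload = text))
            }
        }
        .setNegativeButton("Cancel", null)
        .show()
}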
As shown in Fig. 3b, when the operating gesture is recognized as a long-press gesture, an arrow is displayed at the corresponding position on the picture according to the long-press gesture. The user can then drag on the basis of the long press to extend the length the arrow points to.
After receiving the user's drag instruction applied to the arrow, the phone extends the length the arrow points to according to the drag instruction, and generates the arrow annotation object corresponding to the long-press gesture.
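The drag-to-extend behaviour could be handled on the touch stream roughly as below; representing the arrow by a fixed start point (the long-press position) and an end point that follows the finger is an assumption made for illustration.

import android.view.MotionEvent

// Illustrative arrow annotation: anchored where the long press happened,
// its tip follows the finger while the user drags.
data class Arrow(val startX: Float, val startY: Float,
                 var endX: Float, var endY: Float)

class ArrowDragHandler(private val arrow: Arrow,
                       private val invalidate: () -> Unit) {  // asks the annotation layer to redraw

    fun onTouchEvent(event: MotionEvent): Boolean = when (event.actionMasked) {
        MotionEvent.ACTION_MOVE -> {
            // Extend the arrow so that it points to the current finger position.
            arrow.endX = event.x
            arrow.endY = event.y
            invalidate()
            true
        }
        MotionEvent.ACTION_UP -> true   // drag finished; the arrow annotation object is kept as-is
        else -> false
    }
}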
In addition, when the operating gesture is recognized as a doodle gesture, the corresponding doodle shape is drawn at the corresponding position on the picture according to the doodle gesture, and the corresponding doodle annotation object is generated.
As shown in Fig. 3c, doodling a circle adds a circle annotation; or, as shown in Fig. 3d, doodling a rectangle adds a rectangle annotation; or other doodle operations add freehand annotations of the corresponding shapes.
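The patent only states that the doodle is combined with shape recognition. One simple heuristic, sketched below under that assumption, compares how closely the stroke follows an ellipse inscribed in its bounding box versus the box's edges: a good ellipse fit is treated as a circle, a good edge fit as a rectangle, and anything else is kept as a freehand doodle. The thresholds are arbitrary and purely illustrative.

import kotlin.math.abs
import kotlin.math.min

// Very small shape classifier for a closed doodle stroke (a sketch, not the patented method).
enum class DoodleShape { CIRCLE, RECTANGLE, FREEHAND }

fun classifyStroke(points: List<Pair<Float, Float>>): DoodleShape {
    if (points.size < 8) return DoodleShape.FREEHAND
    val left = points.minOf { it.first }; val right = points.maxOf { it.first }
    val top = points.minOf { it.second }; val bottom = points.maxOf { it.second }
    val cx = (left + right) / 2f; val cy = (top + bottom) / 2f
    val rx = (right - left) / 2f; val ry = (bottom - top) / 2f
    if (rx < 1f || ry < 1f) return DoodleShape.FREEHAND

    // Mean deviation from the inscribed ellipse (0 would be a perfect circle/ellipse).
    val ellipseError = points.map {
        val dx = (it.first - cx) / rx; val dy = (it.second - cy) / ry
        abs(dx * dx + dy * dy - 1f)
    }.average()

    // Mean distance to the nearest bounding-box edge, relative to the box size.
    val edgeError = points.map {
        val d = min(min(it.first - left, right - it.first), min(it.second - top, bottom - it.second))
        d / min(rx, ry)
    }.average()

    return when {
        ellipseError < 0.25 -> DoodleShape.CIRCLE     // thresholds chosen arbitrarily for illustration
        edgeError < 0.2 -> DoodleShape.RECTANGLE
        else -> DoodleShape.FREEHAND
    }
}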
Step S104: render the annotation object in an annotation layer, and composite it with the picture for display at the corresponding position.
Finally, the annotation object is displayed together with the picture at its corresponding position on the picture. The detailed process is as follows:
First, the annotation management system adds all annotation objects to the annotation queue in the chronological order in which they were generated, and then passes all annotation objects in the annotation queue to the display system in turn.
The display system renders the original picture in the picture layer and renders all annotation objects in the annotation queue in the annotation layer.
After the user finishes annotating, the display system merges the picture layer and the annotation layer to generate and display the final picture carrying the annotation objects.
When the display system renders annotation objects in the annotation layer, for annotation objects created at different times but located at the same annotation position, the annotation object created later is shown on the top layer; that is, a later annotation is drawn over an earlier annotation at that position.
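When the layers are flattened into the final picture, it is enough to draw the annotation layer over the picture layer in creation order, so that a later annotation at the same position naturally ends up on top. The Android-flavoured sketch below illustrates this; the drawing styles, placeholder sizes and the handful of annotation types it covers are assumptions, and Annotation is the illustrative type from the earlier sketches.

import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint

// Sketch: flatten the picture layer and the annotation layer into the final bitmap.
fun flatten(picture: Bitmap, annotations: List<Annotation>): Bitmap {
    val result = picture.copy(Bitmap.Config.ARGB_8888, /* isMutable = */ true)
    val canvas = Canvas(result)                       // the picture layer is already in `result`
    val paint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
        color = Color.RED
        style = Paint.Style.STROKE
        strokeWidth = 4f
        textSize = 36f
    }
    // Drawing in creation order paints a later annotation over an earlier one
    // at the same position, matching the z-order described above.
    for (a in annotations.sortedBy { it.createdAt }) {
        when (a.type) {
            "text" -> {
                paint.style = Paint.Style.FILL
                canvas.drawText(a.payload as String, a.x, a.y, paint)
                paint.style = Paint.Style.STROKE
            }
            "circle" -> canvas.drawCircle(a.x, a.y, 60f, paint)                      // radius is a placeholder
            "rectangle" -> canvas.drawRect(a.x - 60f, a.y - 40f, a.x + 60f, a.y + 40f, paint)
            // arrows and freehand strokes would be drawn here in the same way
        }
    }
    return result
}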
In addition, after the annotation is completed, the user can also modify the annotation objects as needed.
Moreover, during the above annotation operation, the user can also perform a two-finger slide gesture on the picture to scale the displayed picture. In a specific implementation, when the annotation management system recognizes the user's operating gesture as a two-finger slide gesture, it notifies the display system to scale the displayed picture according to the two-finger slide instruction.
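On Android, the two-finger scaling could rely on ScaleGestureDetector, as in the hedged sketch below; how the display system actually applies the accumulated scale factor when redrawing, and the clamping range, are assumptions.

import android.content.Context
import android.view.MotionEvent
import android.view.ScaleGestureDetector

// Sketch: react to a two-finger (pinch) gesture by scaling the displayed picture.
class PictureZoomController(context: Context, private val onScaleChanged: (Float) -> Unit) {

    private var scale = 1f

    private val detector = ScaleGestureDetector(context,
        object : ScaleGestureDetector.SimpleOnScaleGestureListener() {
            override fun onScale(d: ScaleGestureDetector): Boolean {
                // Accumulate the scale and clamp it to a sensible range (values are placeholders).
                scale = (scale * d.scaleFactor).coerceIn(0.5f, 4f)
                onScaleChanged(scale)    // the display system redraws the picture at the new scale
                return true
            }
        })

    fun onTouchEvent(event: MotionEvent): Boolean = detector.onTouchEvent(event)
}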
Through the above solution, this embodiment obtains the loaded picture, and renders and displays the picture in a picture layer; receives the operating gesture triggered by the user on the picture; recognizes the operating gesture and generates a corresponding annotation object at the corresponding position on the picture according to the recognition result; and renders the annotation object in an annotation layer and composites it with the picture for display at the corresponding position. It thereby makes full use of the characteristics of the touchscreen, responding to various interaction gestures and combining them with shape recognition, so that the user can complete picture annotation quickly and easily on a touchscreen terminal, simplifying the workflow of office users when using an annotation tool, greatly improving work efficiency and optimizing the user experience.
Correspondingly, embodiments of the picture processing device of the present invention are proposed.
As shown in Fig. 7, a preferred embodiment of the present invention proposes a picture processing device, including: a picture loading module 201, a gesture receiving module 202, an annotation module 203 and a display module 204, wherein:
the picture loading module 201 is configured to obtain a loaded picture, and to render and display the picture in a picture layer;
the gesture receiving module 202 is configured to receive an operating gesture triggered by a user on the picture;
the annotation module 203 is configured to recognize the operating gesture and generate a corresponding annotation object at the corresponding position on the picture according to the recognition result;
the display module 204 is configured to render the annotation object in an annotation layer and composite it with the picture for display at the corresponding position.
Taking a mobile phone as an example, when the user needs to annotate a picture, the user can first select the picture to be annotated. The picture may be one stored locally on the phone, one downloaded from a web server, or one just taken by the user with the phone's camera; no limitation is imposed here.
To achieve quick annotation of the picture, this embodiment completes the annotation through the user's gesture operations.
To this end, this embodiment presets the annotation object corresponding to each operating gesture, for example: double-tap to add text (as shown in Fig. 3a), long press to add an arrow (as shown in Fig. 3b), doodling a circle to add a circle annotation (as shown in Fig. 3c), doodling a rectangle to add a rectangle annotation (as shown in Fig. 3d), or other doodle operations to add freehand annotations; in addition, the operating gesture may also be a two-finger gesture for scaling the canvas, and so on.
The user can enter the corresponding operating gesture on the picture as needed to annotate it.
After detecting the operating gesture triggered by the user on the picture, the gesture recognition system analyzes the operating gesture and generates the corresponding annotation object.
For different operating gestures, the concrete processing is as follows:
As shown in Fig. 3a, when the operating gesture is recognized as a double-tap gesture, an input box is displayed at the corresponding position on the picture according to the double-tap gesture, and the user can enter the desired text in the input box.
After the text entered by the user in the input box is received, the text annotation object corresponding to the double-tap gesture is generated based on the text.
As shown in Fig. 3b, when the operating gesture is recognized as a long-press gesture, an arrow is displayed at the corresponding position on the picture according to the long-press gesture. The user can then drag on the basis of the long press to extend the length the arrow points to.
After receiving the user's drag instruction applied to the arrow, the phone extends the length the arrow points to according to the drag instruction, and generates the arrow annotation object corresponding to the long-press gesture.
In addition, when the operating gesture is recognized as a doodle gesture, the corresponding doodle shape is drawn at the corresponding position on the picture according to the doodle gesture, and the corresponding doodle annotation object is generated.
As shown in Fig. 3c, doodling a circle adds a circle annotation; or, as shown in Fig. 3d, doodling a rectangle adds a rectangle annotation; or other doodle operations add freehand annotations of the corresponding shapes.
Finally, the annotation object is displayed together with the picture at its corresponding position on the picture. The detailed process is as follows:
First, the annotation management system adds all annotation objects to the annotation queue in the chronological order in which they were generated, and then passes all annotation objects in the annotation queue to the display system in turn.
The display system renders the original picture in the picture layer and renders all annotation objects in the annotation queue in the annotation layer.
After the user finishes annotating, the display system merges the picture layer and the annotation layer to generate and display the final picture carrying the annotation objects.
When the display system renders annotation objects in the annotation layer, for annotation objects created at different times but located at the same annotation position, the annotation object created later is shown on the top layer; that is, a later annotation is drawn over an earlier annotation at that position.
In addition, after the annotation is completed, the user can also modify the annotation objects as needed.
Moreover, during the above annotation operation, the user can also perform a two-finger slide gesture on the picture to scale the displayed picture. In a specific implementation, when the annotation management system recognizes the user's operating gesture as a two-finger slide gesture, it notifies the display system to scale the displayed picture according to the two-finger slide instruction.
Through the above solution, this embodiment obtains the loaded picture, and renders and displays the picture in a picture layer; receives the operating gesture triggered by the user on the picture; recognizes the operating gesture and generates a corresponding annotation object at the corresponding position on the picture according to the recognition result; and renders the annotation object in an annotation layer and composites it with the picture for display at the corresponding position. It thereby makes full use of the characteristics of the touchscreen, responding to various interaction gestures and combining them with shape recognition, so that the user can complete picture annotation quickly and easily on a touchscreen terminal, simplifying the workflow of office users when using an annotation tool, greatly improving work efficiency and optimizing the user experience.
It should also be noted that, in this document, the terms "comprise", "include" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. In the absence of further limitations, an element defined by the statement "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the part of the technical solution of the present invention that contributes over the prior art can be embodied in the form of a software product. The computer software product is stored on a storage medium (such as a ROM/RAM, magnetic disk or optical disc) and includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not limit the scope of the claims of the present invention. Any equivalent structural or process transformation made using the content of the description and drawings of the present invention, or any direct or indirect use in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (12)

1. An image processing method, characterized by comprising:
obtaining a loaded picture, and rendering and displaying the picture in a picture layer;
receiving an operating gesture triggered by a user on the picture;
recognizing the operating gesture, and generating a corresponding annotation object at a corresponding position on the picture according to the recognition result; and
rendering the annotation object in an annotation layer, and compositing it with the picture for display at the corresponding position.
2. The method according to claim 1, characterized in that the step of rendering the annotation object in an annotation layer and compositing it with the picture for display at the corresponding position comprises:
adding all annotation objects to an annotation queue in the chronological order in which the annotation objects were generated;
obtaining all annotation objects in the annotation queue;
rendering the original picture in the picture layer, and rendering all annotation objects in the annotation layer; and
after the user finishes annotating, merging the picture layer and the annotation layer to generate and display the final picture carrying the annotation objects.
3. The method according to claim 1, characterized in that the step of generating a corresponding annotation object at a corresponding position on the picture according to the recognition result comprises:
when the operating gesture is recognized as a double-tap gesture, displaying an input box at the corresponding position on the picture according to the double-tap gesture;
receiving the text entered by the user in the input box; and
generating, based on the text, the text annotation object corresponding to the double-tap gesture.
4. The method according to claim 1, characterized in that the step of generating a corresponding annotation object at a corresponding position on the picture according to the recognition result comprises:
when the operating gesture is recognized as a long-press gesture, displaying an arrow at the corresponding position on the picture according to the long-press gesture;
receiving a drag instruction applied to the arrow by the user; and
extending the length the arrow points to according to the drag instruction, and generating the arrow annotation object corresponding to the long-press gesture.
5. The method according to claim 1, characterized in that the step of generating a corresponding annotation object at a corresponding position on the picture according to the recognition result comprises:
when the operating gesture is recognized as a doodle gesture, drawing the corresponding doodle shape at the corresponding position on the picture according to the doodle gesture, and generating the corresponding doodle annotation object.
6. The method according to any one of claims 1-5, characterized in that the method further comprises:
when the operating gesture is recognized as a two-finger slide gesture, scaling the displayed picture according to the two-finger slide instruction.
7. A picture processing device, characterized by comprising:
a picture loading module, configured to obtain a loaded picture, and to render and display the picture in a picture layer;
a gesture receiving module, configured to receive an operating gesture triggered by a user on the picture;
an annotation module, configured to recognize the operating gesture and generate a corresponding annotation object at a corresponding position on the picture according to the recognition result; and
a display module, configured to render the annotation object in an annotation layer and composite it with the picture for display at the corresponding position.
8. The device according to claim 7, characterized in that
the annotation module is further configured to add all annotation objects to an annotation queue in the chronological order in which the annotation objects were generated; obtain all annotation objects in the annotation queue; render the original picture in the picture layer and render all annotation objects in the annotation layer; and, after the user finishes annotating, merge the picture layer and the annotation layer to generate and display the final picture carrying the annotation objects.
9. The device according to claim 7, characterized in that
the annotation module is further configured to, when the operating gesture is recognized as a double-tap gesture, display an input box at the corresponding position on the picture according to the double-tap gesture; receive the text entered by the user in the input box; and generate, based on the text, the text annotation object corresponding to the double-tap gesture.
10. The device according to claim 7, characterized in that
the annotation module is further configured to, when the operating gesture is recognized as a long-press gesture, display an arrow at the corresponding position on the picture according to the long-press gesture; receive a drag instruction applied to the arrow by the user; and extend the length the arrow points to according to the drag instruction, and generate the arrow annotation object corresponding to the long-press gesture.
11. The device according to claim 7, characterized in that
the annotation module is further configured to, when the operating gesture is recognized as a doodle gesture, draw the corresponding doodle shape at the corresponding position on the picture according to the doodle gesture, and generate the corresponding doodle annotation object.
12. The device according to any one of claims 7-11, characterized in that
the display module is further configured to, when the operating gesture is recognized as a two-finger slide gesture, scale the displayed picture according to the two-finger slide instruction.
CN201510160891.1A 2015-04-07 2015-04-07 Picture processing method and device Active CN106155542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510160891.1A CN106155542B (en) 2015-04-07 2015-04-07 Picture processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510160891.1A CN106155542B (en) 2015-04-07 2015-04-07 Picture processing method and device

Publications (2)

Publication Number Publication Date
CN106155542A (en) 2016-11-23
CN106155542B (en) 2020-12-01

Family

ID=57337954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510160891.1A Active CN106155542B (en) 2015-04-07 2015-04-07 Picture processing method and device

Country Status (1)

Country Link
CN (1) CN106155542B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106500716A (en) * 2016-12-13 2017-03-15 英业达科技有限公司 Automobile navigation optical projection system and its method
CN106643781A (en) * 2016-12-14 2017-05-10 英业达科技有限公司 Vehicle navigation display system and method thereof
CN106951090A (en) * 2017-03-29 2017-07-14 北京小米移动软件有限公司 Image processing method and device
CN109523609A (en) * 2018-10-16 2019-03-26 华为技术有限公司 A kind of method and terminal of Edition Contains
CN109726302A (en) * 2018-12-10 2019-05-07 广州景骐科技有限公司 Mask method, device, computer equipment and the storage medium of image
CN110443772A (en) * 2019-08-20 2019-11-12 百度在线网络技术(北京)有限公司 Image processing method, device, computer equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070274704A1 (en) * 2006-05-25 2007-11-29 Fujitsu Limited Information processing apparatus, information processing method and program
CN102207826A (en) * 2011-05-30 2011-10-05 中兴通讯股份有限公司 Method and system for scrawling
WO2013164351A1 (en) * 2012-04-30 2013-11-07 Research In Motion Limited Device and method for processing user input
CN103793146A (en) * 2014-02-27 2014-05-14 朱印 Method and device for processing images
CN103793174A (en) * 2014-02-11 2014-05-14 厦门美图网科技有限公司 Scrawling method for image
CN103885623A (en) * 2012-12-24 2014-06-25 腾讯科技(深圳)有限公司 Mobile terminal, system and method for processing sliding event into editing gesture
CN104484856A (en) * 2014-11-21 2015-04-01 广东威创视讯科技股份有限公司 Picture labeling display control method and processor

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070274704A1 (en) * 2006-05-25 2007-11-29 Fujitsu Limited Information processing apparatus, information processing method and program
CN102207826A (en) * 2011-05-30 2011-10-05 中兴通讯股份有限公司 Method and system for scrawling
WO2013164351A1 (en) * 2012-04-30 2013-11-07 Research In Motion Limited Device and method for processing user input
CN103885623A (en) * 2012-12-24 2014-06-25 腾讯科技(深圳)有限公司 Mobile terminal, system and method for processing sliding event into editing gesture
CN103793174A (en) * 2014-02-11 2014-05-14 厦门美图网科技有限公司 Scrawling method for image
CN103793146A (en) * 2014-02-27 2014-05-14 朱印 Method and device for processing images
CN104484856A (en) * 2014-11-21 2015-04-01 广东威创视讯科技股份有限公司 Picture labeling display control method and processor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
灵性之眼: "手机如何在照片上添加文字" ("How to add text to a photo on a mobile phone"), https://jingyan.baidu.com/article/2d5afd69f7d9b785a3e28e64.html *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106500716A (en) * 2016-12-13 2017-03-15 英业达科技有限公司 Automobile navigation optical projection system and its method
CN106643781A (en) * 2016-12-14 2017-05-10 英业达科技有限公司 Vehicle navigation display system and method thereof
CN106951090A (en) * 2017-03-29 2017-07-14 北京小米移动软件有限公司 Image processing method and device
CN106951090B (en) * 2017-03-29 2021-03-30 北京小米移动软件有限公司 Picture processing method and device
CN109523609A (en) * 2018-10-16 2019-03-26 华为技术有限公司 A kind of method and terminal of Edition Contains
WO2020078298A1 (en) * 2018-10-16 2020-04-23 华为技术有限公司 Content editing method and terminal
CN109726302A (en) * 2018-12-10 2019-05-07 广州景骐科技有限公司 Mask method, device, computer equipment and the storage medium of image
CN110443772A (en) * 2019-08-20 2019-11-12 百度在线网络技术(北京)有限公司 Image processing method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN106155542B (en) 2020-12-01

Similar Documents

Publication Publication Date Title
CN106155542A (en) Image processing method and device
CN107368550B (en) Information acquisition method, device, medium, electronic device, server and system
CN111310866B (en) Data labeling method, device, system and terminal equipment
CN104679388B (en) The method and its mobile terminal of application program are opened by icon duplicate
CN102541400A (en) System and method performing data interaction with touch screen
CN113194024B (en) Information display method and device and electronic equipment
CN104063071A (en) Content input method and device
CN112949437A (en) Gesture recognition method, gesture recognition device and intelligent equipment
CN105955683B (en) System and control method
CN112306347B (en) Image editing method, image editing device and electronic equipment
CN105022480A (en) Input method and terminal
CN104391898A (en) Data showing method and device
CN104598289A (en) Recognition method and electronic device
CN104978414A (en) Content search method and terminal
CN113010738B (en) Video processing method, device, electronic equipment and readable storage medium
CN104407763A (en) Content input method and system
CN105493145A (en) Method and device for determining user input on basis of visual information on user's fingernails or toenails
KR20150097250A (en) Sketch retrieval system using tag information, user equipment, service equipment, service method and computer readable medium having computer program recorded therefor
CN108052525B (en) Method and device for acquiring audio information, storage medium and electronic equipment
CN114245193A (en) Display control method and device and electronic equipment
CN111796736A (en) Application sharing method and device and electronic equipment
CN111752428A (en) Icon arrangement method and device, electronic equipment and medium
CN106487862A (en) Transfer interactive system and method based on fingerprint identification information
CN112887481B (en) Image processing method and device
CN104636446A (en) Heritage Web application mobile transformation method based on cloud computing mode

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant