CN101382836A - Electronic painting creative method based on multi-medium user interaction - Google Patents


Info

Publication number
CN101382836A
CN101382836A CNA2008101207935A CN200810120793A
Authority
CN
China
Prior art keywords
user
picture
painting
voice
pictures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008101207935A
Other languages
Chinese (zh)
Other versions
CN101382836B (en)
Inventor
徐颂华
杨文霞
刘智满
潘云鹤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN2008101207935A priority Critical patent/CN101382836B/en
Publication of CN101382836A publication Critical patent/CN101382836A/en
Application granted granted Critical
Publication of CN101382836B publication Critical patent/CN101382836B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an electronic painting method based on multimedia user interaction, comprising the steps of: (1) composing the painting content from strokes drawn with an electronic paintbrush and from painting elements extracted from pictures; (2) the user searching candidate picture sets from a picture material library by voice, static posture, or dynamic body performance; (3) the user choosing a picture from the candidate set by voice, static gesture, or eye tracking; (4) automatically segmenting the chosen picture to extract its meaningful elements, and choosing the painting elements the user needs from the segmentation results by recognizing the user's voice or static gestures, or by eye tracking; and (5) the user adjusting the size, angle, and position of each painting element by static gesture or voice so that the finished painting is attractive. Because this electronic painting method is driven by the user's voice, static gestures, or dynamic body performance, the painting process is more natural and user-friendly.

Description

Electronic painting creation method based on multimedia user interaction
Technical field
The present invention relates to human-computer interaction techniques for electronic painting creation, and in particular to an electronic painting creation method based on multimedia user interaction.
Background technology
Drawing is an art form that many people love, but many imaginative and intelligent people lack skilled painting ability. To help users express the content they imagine in the form of a painting as fully as possible, we invented this system for painting by voice, gesture, and performance. To date, the mainstream way of drawing on a computer has been to paint on a canvas with an electronic brush. This demands considerable drawing skill, and is therefore difficult for the majority of users who enjoy drawing but are not proficient at it. We therefore invented this way of painting by voice, gesture, or performance, so that users can paint through multiple channels and better express the content they want to draw. The contribution of this invention is a novel multimedia human-computer interaction technique based on recognizing the user's voice, gestures, and performances, together with eye tracking.
In multimedia human-computer interaction, the earliest proposal for a diversified interaction mode came more than twenty years ago, when Richard Bolt (Richard A.B. 1980. "Put-That-There": Voice and Gesture at the Graphics Interface) developed the "Put-That-There" system, in which the user could place two-dimensional graphical objects on a very large screen by gesture and voice command. An early gesture-based user interface (Given: Gesture driven interactions in virtual environments; a toolkit approach to 3D interactions. In Interfaces to Real and Virtual Worlds, 1992) was developed for virtual reality programs; in that system, gestures were mapped to predefined symbolic information, and commands in the system were triggered by gesture. Hand tracking is an important technique in gesture interfaces; the earliest system to track hands with computer vision rather than a digitizing glove was Krueger's VIDEOPLACE (Krueger, M. VIDEOPLACE and the Interface of the Future. In The Art of Human Computer Interface Design. Addison Wesley, Menlo Park, CA. pp. 417-422. 1991), in which the outline of the hand was used to produce two-dimensional pictures. Our system likewise uses the outline of the hand as input for picture search. Besides gesture-based user interfaces, other unconventional interaction methods include voice-based interfaces and eye-tracking-based interfaces. The system proposed in (Richard A.B. Eyes at the interface. Conference on Human Factors in Computing Systems. pp. 360-362. 1982) used eye tracking for the first time to let the user select pictures by gaze. In the present invention, we likewise use eye tracking for selecting pictures and for selecting image segmentation elements.
Research on speech recognition began in the 1950s. The speech recognition functionality included in Windows XP, particularly when used together with the Office XP software, can significantly enhance computing tasks in fields such as games, data entry, and text editing; IBM has also released a speech recognition input system, and a series of commercial and open-source speech recognition packages exist, all of which have changed to some extent how users interact with computers. Speech recognition is thus a relatively mature technology. Computer recognition of human gestures and performances has also been a research focus in recent years: in the computer vision field, many scientists work on capturing the user's gestures and other body language with a camera and then analyzing their meaning. Eye tracking is an interesting research area with many practical applications; indeed, many open-source eye-tracking projects on the web can observe the user's eye movements through a camera and locate which region of the screen the user is watching. The present invention therefore adopts existing speech recognition, gesture and performance recognition, and eye tracking technologies; its main contribution is a novel multimedia painting system that better helps the user carry out painting creation. The invention also takes the needs of children and the disabled into account.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art by providing an electronic painting creation method based on multimedia user interaction.
The electronic painting creation method based on multimedia user interaction comprises the following steps:
1) the painting content is composed of elements drawn with painting software and painting elements extracted from pictures;
2) the user searches candidate picture sets from the picture material library by voice, static posture, dynamic body performance, or sketch drawing;
3) the user chooses one picture from the candidate set by voice, static gesture, or eye tracking;
4) the chosen picture is automatically segmented to extract its elements, and the elements the user needs are chosen from the segmentation results by recognizing the user's voice or static gestures, or by eye tracking;
5) the user adjusts the size and position of each painting element by static gesture and voice, making the painting attractive and more artistic.
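The five steps above can be sketched as a toy pipeline. All function names, the in-memory picture library, and its tag/element fields are illustrative assumptions made for this sketch, not the patent's actual system, which drives each step with voice, gesture, or gaze input.

```python
# Hypothetical sketch of the five-step composition loop described above.

def search_pictures(library, query):
    """Step 2: return candidate pictures whose tags match the query term."""
    return [p for p in library if query in p["tags"]]

def choose_picture(candidates, index):
    """Step 3: pick one picture (here by index; voice/gaze in the real system)."""
    return candidates[index]

def segment_picture(picture):
    """Step 4: stand-in for automatic segmentation; returns numbered elements."""
    return {i: name for i, name in enumerate(picture["elements"], start=1)}

def adjust(element, scale=1.0, angle=0.0, position=(0, 0)):
    """Step 5: record the size/angle/position adjustments for one element."""
    return {"name": element, "scale": scale, "angle": angle, "position": position}

library = [
    {"tags": {"cat", "animal"}, "elements": ["cat", "grass"]},
    {"tags": {"car", "vehicle"}, "elements": ["car", "road"]},
]

candidates = search_pictures(library, "cat")
picture = choose_picture(candidates, 0)
elements = segment_picture(picture)
placed = adjust(elements[1], scale=0.5, position=(120, 80))
print(placed["name"])  # the extracted "cat" element, ready to place on the canvas
```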
The step in which the user searches candidate picture sets from the picture material library by voice, static posture, dynamic body performance, or sketch drawing:
(1) the user speaks into a microphone the name of the search object, a short sentence containing the name, or an imitation of the sound the object makes; the system determines the object name from what the user says and then searches candidate pictures from the picture material library;
(2) the user imitates the appearance of the object to be painted with a static posture; a camera captures the imitation, the shape features of the static posture are analyzed and extracted, and candidate pictures are searched from the library according to the extracted shape features;
(3) the user imitates the appearance of the object with a dynamic body performance; a camera captures the performance, the dynamic features of the performance are analyzed and extracted, and candidate pictures are searched from the library according to the extracted features;
(4) the user draws a sketch with painting software, and candidate pictures are searched from the library according to the sketch;
(5) the user draws a sketch on paper, a camera captures the sketch, and candidate pictures are searched from the library using it.
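As a rough illustration of item (2), a shape feature can be extracted from a binary silhouette of the user's posture and matched against library pictures. The occupancy-profile descriptor, the toy 4x4 silhouettes, and all names below are invented for this sketch; the patent instead adopts the content-based retrieval method it cites later in the description.

```python
# Toy shape-feature matching for static-posture search (illustrative only).

def shape_feature(silhouette):
    """Occupancy profile: fraction of filled cells per row and per column."""
    rows = [sum(r) / len(r) for r in silhouette]
    cols = [sum(c) / len(c) for c in zip(*silhouette)]
    return rows + cols

def distance(f1, f2):
    return sum(abs(a - b) for a, b in zip(f1, f2))

def search(library, posture, k=2):
    """Rank library silhouettes by similarity to the user's posture."""
    scored = sorted(
        library,
        key=lambda item: distance(shape_feature(item["mask"]), shape_feature(posture)),
    )
    return [item["name"] for item in scored[:k]]

# Toy 4x4 silhouettes: a vertical bar vs. a full block.
bar = [[0, 1, 1, 0]] * 4
block = [[1, 1, 1, 1]] * 4
library = [{"name": "tree", "mask": bar}, {"name": "wall", "mask": block}]

print(search(library, posture=[[0, 1, 1, 0]] * 4, k=1))  # ['tree']
```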
The step in which the user chooses one picture from the candidate set by voice, static gesture, or eye tracking:
(1) the pictures in the candidate set are displayed in pages, each picture on each page has a number, and the user speaks the number of the chosen picture into the microphone;
(2) the user points a finger at one picture in the candidate set; a camera captures the static gesture, the system analyzes which region of the screen the gesture points to, and thereby determines which picture the user wants to choose;
(3) the user's eye movements are tracked through the camera, and the picture whose screen region the user's gaze dwells on longest is taken as the user's choice.
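The dwell-time rule in item (3) can be sketched as binning gaze samples into the grid cell of the candidate picture they fall on and choosing the cell that collects the most samples. The grid geometry and the gaze coordinates below are illustrative assumptions.

```python
# Dwell-time picture selection: the most-gazed grid cell wins (toy sketch).
from collections import Counter

def cell_of(point, cell_w=200, cell_h=150, cols=3):
    """Map a screen coordinate to a candidate-picture index in a 3-wide grid."""
    x, y = point
    return (y // cell_h) * cols + (x // cell_w)

def pick_by_gaze(samples):
    counts = Counter(cell_of(p) for p in samples)
    return counts.most_common(1)[0][0]

# Simulated gaze track: the user lingers on the picture in cell 4.
gaze = [(50, 40), (250, 160), (260, 170), (255, 165), (610, 20)]
print(pick_by_gaze(gaze))  # 4
```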
The step of automatically segmenting the picture chosen by the user, extracting the elements in the picture, and choosing the elements the user needs from the segmentation results by recognizing the user's voice or static gestures, or by eye tracking:
(1) an image segmentation algorithm segments the chosen picture, the elements in the picture are extracted, and each extracted element is numbered;
(2) the user speaks the number of an element into the microphone;
(3) the user points a finger at an extracted element in the image; the camera captures the gesture, the system analyzes which region of the screen the gesture points to, and thereby determines which segmented element the user wants to choose;
(4) the user's eye movements are tracked through the camera, and the segmented element whose screen region the user's gaze dwells on longest is taken as the user's choice.
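Item (2) can be illustrated as extracting an element number from recognized text. The patent delegates speech-to-text to an existing engine, so this sketch assumes the recognizer's text output is already available; the English number-word list is an illustrative stand-in.

```python
# Map a recognized utterance to a segmented-element number (toy sketch).

WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
         "six": 6, "seven": 7, "eight": 8, "nine": 9}

def element_number(utterance):
    """Return the first element number mentioned in the recognized text."""
    for token in utterance.lower().split():
        if token.isdigit():
            return int(token)
        if token in WORDS:
            return WORDS[token]
    return None

print(element_number("give me element three"))  # 3
print(element_number("number 7 please"))        # 7
```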
The step in which the user adjusts the size and position of each painting element by static gesture and voice, making the painting attractive and more artistic:
(1) among all the painting elements in the painting, including those the user drew with painting software and those extracted from pictures, the user selects one or more elements with static gestures or voice;
(2) the user then uses gestures or voice to adjust the size, angle, and position of the selected element or elements.
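A minimal sketch of step (2), assuming a simple scale/rotate/move command vocabulary that is not the patent's actual command set: each selected element carries a size, angle, and position, and each voice or gesture command mutates one of them.

```python
# Toy element-adjustment model for the selected painting element.

class Element:
    def __init__(self, name):
        self.name, self.scale, self.angle, self.pos = name, 1.0, 0.0, (0, 0)

    def apply(self, command, value):
        if command == "scale":          # multiply current size
            self.scale *= value
        elif command == "rotate":       # add degrees, wrap at 360
            self.angle = (self.angle + value) % 360
        elif command == "move":         # translate by (dx, dy)
            dx, dy = value
            self.pos = (self.pos[0] + dx, self.pos[1] + dy)

butterfly = Element("butterfly")
for cmd, val in [("scale", 0.5), ("rotate", 30), ("move", (40, -10))]:
    butterfly.apply(cmd, val)

print(butterfly.scale, butterfly.angle, butterfly.pos)  # 0.5 30.0 (40, -10)
```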
Current mainstream electronic painting software is all operated with an electronic brush or with mouse and keyboard, and drawing an attractive picture requires skilled painting technique, which most users do not have; for children, the disabled, and the elderly, mastering and performing traditional drawing operations is no easy matter. To help all users easily draw the content they want, the present invention proposes a human-computer interface for electronic painting through voice, static posture, or dynamic performance, so that the user can express the content to be painted through multiple channels and can concentrate on the drawing itself rather than on learning to operate painting software. Compared with using an electronic brush, speaking and performing are more natural human ways of communicating, which also makes the painting process more natural and relaxed. Thus, by painting with the system proposed by this invention, even people with no painting skill at all can quickly and easily draw attractive, artistic pictures.
Description of drawings
Fig. 1(a) is a picture from the picture material library;
Fig. 1(b) is a painting element extracted from picture (a);
Fig. 2 is the system architecture diagram of the invention;
Fig. 3 is the workflow diagram of the invention;
Fig. 4 shows the candidate picture set searched from the picture material library when the user says "toot" (imitating a car horn);
Fig. 5 shows the candidate picture set searched from the picture material library when the user says "give me a cat", with statistics of how long the user's gaze dwells in each region;
Fig. 6 shows an image segmentation result, with the sequence number of each segmented element marked in the figure.
Embodiment
The electronic painting creation method based on multimedia user interaction comprises the following steps:
1) the painting content is composed of elements drawn with painting software and painting elements extracted from pictures. While painting with the present invention, the user can draw painting elements with any painting software, and can also extract a meaningful object from a picture to serve as a painting element. For example, as shown in Fig. 1, (a) is a picture found in the picture material library, and (b) is a painting element extracted from picture (a), a butterfly;
2) the user searches candidate picture sets from the picture material library by voice, static posture, dynamic body performance, or sketch drawing;
3) the user chooses one picture from the candidate set by voice, static gesture, or eye tracking;
4) the chosen picture is automatically segmented to extract its elements, and the elements the user needs are chosen from the segmentation results by recognizing the user's voice or static gestures, or by eye tracking;
5) the user adjusts the size and position of each painting element by static gesture and voice, making the painting attractive and more artistic.
The step in which the user searches candidate picture sets from the picture material library by voice, static posture, dynamic body performance, or sketch drawing:
(1) the user speaks into a microphone the name of the search object, a short sentence containing the name, or an imitation of the sound the object makes; the system determines the object name from what the user says and then searches candidate pictures from the picture material library. Searching candidate pictures by voice differs from the current mainstream keyword-based picture search: the present invention recognizes the user's voice, whether a word, for example "automobile", a short sentence, for example "give me an automobile", or an imitation of the object's sound, for example "toot", and in each case the system searches all the automobile pictures from the material library to form the candidate picture set; Fig. 4 shows the first page of the candidate set. Converting speech into a search keyword is achieved with speech recognition and machine learning techniques. The present invention uses the speech recognition software integrated in Windows XP; using speech recognition provided by other software is also regarded as a variation of the invention. For single-word recognition, the user's speech is converted directly into the search engine's input keyword. For extracting keywords from a short sentence spoken by the user, we first convert the speech to text with the speech recognition software and then apply the keyword-extraction method proposed in "Put-That-There": Voice and Gesture at the Graphics Interface (Richard A.B. 1980); using keyword-extraction methods given in other literature is also regarded as a variation of the invention. The function of searching by imitating the object's sound is designed mainly for children, whose vocabulary is very limited and largely onomatopoeic, so we use a machine learning method to map onomatopoeia to search text.
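The onomatopoeia-to-keyword mapping described above can be sketched with a lookup table plus a nearest-match fallback. The patent learns this mapping with machine learning; the hand-written sound/keyword pairs and the Levenshtein fallback here are illustrative stand-ins for that learned model.

```python
# Toy onomatopoeia -> search-keyword mapping (illustrative, not the patent's model).

ONOMATOPOEIA = {"toot": "car", "meow": "cat", "woof": "dog", "moo": "cow"}

def edit_distance(a, b):
    """Classic single-row dynamic-programming Levenshtein distance."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def sound_to_keyword(sound):
    if sound in ONOMATOPOEIA:
        return ONOMATOPOEIA[sound]
    best = min(ONOMATOPOEIA, key=lambda s: edit_distance(s, sound))
    return ONOMATOPOEIA[best]

print(sound_to_keyword("toot"))   # car
print(sound_to_keyword("meoww"))  # cat  (nearest known sound is "meow")
```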
(2) the user imitates the appearance of the object to be painted with a static posture; a camera captures the imitation, the shape features of the static posture are analyzed and extracted, and candidate pictures are searched from the picture material library according to those shape features. For extracting and matching the shape features of the camera image of the user's static posture, the present invention adopts the object extraction and matching method proposed in "An intelligent content-based image retrieval system based on color, shape and spatial relations" (T.K. Shin, J.Y. Huang, C.S. Wang, J.C. Hung, and C.H. Kao. The Proceedings of the National Science Council, 25(4): 232-243, September 2001); using methods proposed in other literature is also regarded as a variation of the invention.
(3) the user imitates the appearance of the object to be painted with a dynamic body performance; a camera captures the performance, the dynamic features of the performance are analyzed and extracted, and candidate pictures are searched from the picture material library according to the extracted features. For extracting and matching the motion trajectory of the user's performance, the present invention adopts the method for extracting and comparing the motion trajectory of the main moving object in a video segment proposed in "Motion flow-based video retrieval" (C.W. Su, H.Y.M. Liao, H.R. Tyan, C.W. Lin, D.Y. Chen, and K.C. Fan. IEEE Transactions on Multimedia, 9(9): 1193-1201, Oct. 2007), since the user's dynamic gestures and performance captured by the camera can be regarded as a video segment; using methods proposed in other literature is also regarded as a variation of the invention.
(4) the user draws a sketch with painting software, and candidate pictures are searched from the picture material library according to the sketch. The search compares the contour information of the pictures in the material library with the features of the sketch the user drew; a library picture is considered a qualifying result if the two match, and all pictures found in this way form the candidate picture set. For searching pictures by sketch, the present invention adopts the search method used in retrievr (http://labs.systemone.at/retrievr); using methods proposed on other websites or in other literature is also regarded as a variation of the invention.
(5) the user draws a sketch on paper, a camera captures the sketch, and candidate pictures are searched from the picture material library using it. The search proceeds as in (4), comparing the contour information of the library pictures with the features of the user's sketch, and all pictures found form the candidate picture set. For capturing a sketch drawn on paper through a camera, the present invention uses the method proposed in "Visual Panel: Virtual Mouse, Keyboard and 3D Controller with an Ordinary Piece of Paper" (Zhang Z., Wu Y., Shan Y., Shafer S. In Proceedings of ACM Workshop on Perceptive User Interfaces (PUI), 2001); using methods proposed in other literature is also regarded as a variation of the invention.
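Items (4) and (5) both rank library pictures by contour similarity to the user's sketch. Below is a toy version under stated assumptions: library pictures have precomputed contour point sets, and similarity is a symmetric mean nearest-point distance. The retrievr-style method actually cited in the description is more sophisticated; this only illustrates the ranking idea.

```python
# Toy contour-comparison sketch search (illustrative assumptions throughout).

def nearest(p, points):
    """Manhattan distance from point p to its nearest neighbor in `points`."""
    return min(abs(p[0] - q[0]) + abs(p[1] - q[1]) for q in points)

def contour_distance(a, b):
    """Symmetric mean nearest-point distance between two contour point sets."""
    d_ab = sum(nearest(p, b) for p in a) / len(a)
    d_ba = sum(nearest(q, a) for q in b) / len(b)
    return (d_ab + d_ba) / 2

def search_by_sketch(library, sketch):
    return sorted(library, key=lambda item: contour_distance(item["contour"], sketch))

square = [(0, 0), (0, 2), (2, 0), (2, 2)]
line = [(0, 0), (1, 0), (2, 0), (3, 0)]
library = [{"name": "window", "contour": square}, {"name": "horizon", "contour": line}]

ranked = search_by_sketch(library, sketch=[(0, 0), (0, 2), (2, 2), (2, 0)])
print([item["name"] for item in ranked])  # ['window', 'horizon']
```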
The step in which the user chooses one picture from the candidate set by voice, static gesture, or eye tracking:
(1) the pictures in the candidate set are displayed in pages, each picture on each page has a number, and the user speaks the number of the chosen picture into the microphone. The speech recognition module used in the invention converts the voice signal to text and thereby determines which picture was selected. Speech recognition uses the engine integrated in Windows XP; using other speech recognition software is also regarded as a variation of the invention.
(2) the user points a finger at one picture in the candidate set; a camera captures the static gesture, the system analyzes which region of the screen the gesture points to, and thereby determines which picture the user wants to choose. For recognizing the user's gesture, the invention adopts the gesture recognition method proposed in "Put-That-There": Voice and Gesture at the Graphics Interface (Richard A.B. 1980); using methods proposed in other literature is also regarded as a variation of the invention.
(3) the user's eye movements are tracked through the camera, and the picture whose screen region the user's gaze dwells on longest is taken as the user's choice. The invention uses eye tracking technology: the camera follows the user's eye movements, so the system can determine the screen region where the user's gaze dwells longest, and the picture in that region is the chosen picture. Fig. 5 shows the first page of the search result set when the user says "give me a cat"; the red points in Fig. 5 mark where the user's gaze fell on the screen. As can be seen from Fig. 5, the user's gaze lands most often on the 8th picture, so the 8th picture is the one chosen by eye tracking. Eye tracking adopts the approach provided by Opengazer (http://www.inference.phy.cam.ac.uk/opengazer/); using methods proposed in other software or literature is also regarded as a variation of the invention.
The step of automatically segmenting the picture chosen by the user, extracting the elements in the picture, and choosing the elements the user needs from the segmentation results by recognizing the user's voice or static gestures, or by eye tracking:
(1) an image segmentation algorithm segments the chosen picture, the elements in the picture are extracted, and each extracted element is numbered. As shown in Fig. 6, the picture is divided into 9 entities, each marked with a red box and labeled automatically. The present invention adopts the segmentation approach of "Efficient Graph-Based Image Segmentation" (Pedro F. Felzenszwalb and Daniel P. Huttenlocher); using image segmentation algorithms proposed in other literature is also regarded as a variation of the invention;
(2) the user speaks the number of an element into the microphone; the speech recognition module converts the voice signal to text and thereby determines which segmented element was selected. Speech recognition uses the engine integrated in Windows XP; using other speech recognition software is also regarded as a variation of the invention.
(3) the user points a finger at an extracted element in the image; the camera captures the gesture, the system analyzes which region of the screen the gesture points to, and thereby determines which segmented element the user wants to choose. For recognizing the user's gesture, the invention adopts the gesture recognition method proposed in "Put-That-There": Voice and Gesture at the Graphics Interface (Richard A.B. 1980); using methods proposed in other literature is also regarded as a variation of the invention.
(4) the user's eye movements are tracked through the camera, and the segmented element whose screen region the user's gaze dwells on longest is taken as the user's choice. The camera follows the user's eye movements, so the system can determine the screen region where the gaze dwells longest; the segmented element in that region is the one the user wants. Eye tracking adopts the approach provided at http://www.inference.phy.cam.ac.uk/opengazer/; using methods proposed in other software or literature is also regarded as a variation of the invention.
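The graph-based segmentation in item (1) can be caricatured with union-find over 4-connected pixels, merging neighbors whose intensity difference falls under a fixed threshold. This is a drastic simplification of the cited Felzenszwalb-Huttenlocher algorithm (which uses adaptive per-region thresholds), meant only to show how a picture decomposes into numbered elements.

```python
# Simplified threshold-merge segmentation with union-find (not the cited algorithm).

def segment(image, threshold=10):
    h, w = len(image), len(image[0])
    parent = list(range(h * w))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for y in range(h):
        for x in range(w):
            for ny, nx in ((y, x + 1), (y + 1, x)):  # right and down neighbors
                if ny < h and nx < w and abs(image[y][x] - image[ny][nx]) < threshold:
                    parent[find(y * w + x)] = find(ny * w + nx)

    # Number each connected region 1, 2, 3, ... as the patent numbers elements.
    labels, out = {}, [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            root = find(y * w + x)
            out[y][x] = labels.setdefault(root, len(labels) + 1)
    return out

image = [[0, 0, 200],
         [0, 0, 200],
         [90, 90, 200]]
for row in segment(image):
    print(row)
# [1, 1, 2]
# [1, 1, 2]
# [3, 3, 2]
```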
The step in which the user adjusts the size and position of each painting element by static gesture and voice, making the painting attractive and more artistic:
(1) every painting element in the user's painting, whether drawn with painting software or extracted from a picture, is numbered by the system in the order it was added. The user can select an element by static gesture, that is, by pointing at it with a finger: the camera captures the gesture, the system analyzes which region of the screen the finger points to and thereby determines which element the user wants to select, and repeating this action selects multiple elements. For gesture recognition, the invention adopts the method proposed in "Put-That-There": Voice and Gesture at the Graphics Interface (Richard A.B. 1980); using methods proposed in other literature is regarded as a variation of the invention. Alternatively, the user selects an element by voice, speaking its number; the integrated speech recognition module converts the voice signal to text and thereby determines which element was selected, and repeating this step selects multiple elements. Speech recognition uses the engine integrated in Windows XP; using other speech recognition software is also regarded as a variation of the invention.
(2) the user then uses gestures or voice to adjust the size, angle, and position of the selected element or elements. For operations on painting elements, the present invention adopts the operations and operating methods for graphical objects designed in "Put-That-There": Voice and Gesture at the Graphics Interface (Richard A.B. 1980); using operation commands and methods designed in other literature is also regarded as a variation of the invention.
Example
The detailed implementation of the electronic painting creation method based on multimedia user interaction using voice, static gesture, or dynamic body performance proposed by the present invention is described below in conjunction with the accompanying drawings. The standard configuration of the implementation environment of the invention is shown in the following table.
Accessory         Quantity      Explanation
Computer          1             Required; the computer must be able to run normally
Microphone        1             Required
Camera            1 (minimum)   Required; multiple cameras can improve the accuracy of gesture and performance analysis and of eye tracking
Handwriting pad   1 set         Optional; a mouse and keyboard can be used instead
Besides a normally running computer that supports a camera and microphone, the system needs a microphone and a camera to be configured; a handwriting pad is not required, but makes hand-drawing more convenient for the user. The microphone captures the user's voice; the camera captures the user's gestures and performances and performs the eye tracking.
The system architecture of the invention is shown in Fig. 2. The system comprises 8 modules: the speech recognition module recognizes the user's voice and converts it into text; the static posture and dynamic performance recognition module recognizes the meaning of the user's body language; the eye tracking module captures the user's eye movements and locates where the user's gaze rests on the screen; the search engine module searches pictures from the network or a database according to text, body shape, sketch, or motion trajectory; the picture selection module lists the candidate pictures found by the search engine for the user to choose from; the user preference module learns, from the pictures the user selects, the style of picture the user likes and feeds this preference back to the search engine so that candidate pictures can be filtered in subsequent searches; the image segmentation module separates the semantically distinct parts of a picture and numbers them; and the segmentation result selection module provides an interface for the user to select a segmented object.
The workflow of the present invention is shown in Figure 3. The user first creates a canvas, that is, opens a workspace, in which a new picture can be created or an existing picture opened for further creation. The user may then draw with drawing software or compose the painting from elements extracted from pictures. To draw directly, any drawing software provided by the computer system can be used. To extract painting elements from pictures, the user searches candidate pictures from the picture material library by voice, static posture, or dynamic body performance; then selects a picture that meets the requirements from the candidates by voice, static gesture, or eye tracking; the system then segments the chosen picture automatically; and the user selects the needed painting elements from the segmented elements by voice, gesture, or eye tracking. The system automatically places the chosen painting elements on the canvas. The user can select one or more painting elements on the current canvas by voice and static gesture, and adjust the size, orientation, and position of each selected element by voice and static gesture. Further drawing objects can then be added by repeating the above steps until the painting is finished.
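The workflow just described reduces to a simple loop: obtain an element, place it, adjust it, repeat until done. The following sketch is purely illustrative, with hypothetical function names:

```python
# Hypothetical sketch of the drawing workflow of Figure 3 (names illustrative).

def create_painting(get_next_element, finished):
    """Repeat: obtain a painting element (drawn with software or extracted
    from a picture), place it on the canvas, and adjust it, until the user
    declares the painting finished."""
    canvas = []
    while not finished(canvas):
        element = get_next_element()                  # draw or extract
        canvas.append(element)                        # place on canvas
        element["size"], element["angle"] = 1.0, 0.0  # adjusted by gesture/voice
    return canvas

painting = create_painting(
    get_next_element=lambda: {"name": "sun"},
    finished=lambda c: len(c) >= 2,  # stop after two elements for the demo
)
print([e["name"] for e in painting])  # ['sun', 'sun']
```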

Claims (5)

1. A method of electronic painting creation based on multi-media user interaction, characterized by comprising the steps of:
1) composing the content of the painting from elements drawn with drawing software and painting elements extracted from pictures;
2) the user searching candidate pictures from a picture material library by voice, static posture, dynamic body performance, or sketch drawing;
3) the user choosing a picture from the candidate pictures by voice, static gesture, or eye tracking;
4) automatically segmenting the picture chosen by the user, extracting the elements in the picture, and choosing the elements the user needs from the segmentation results by recognizing the user's voice or static gesture, or by eye tracking;
5) the user adjusting the size and position of each painting element by static gesture and voice, making the painting attractive and more artistic.
2. The method of electronic painting creation based on multi-media user interaction according to claim 1, characterized in that the step of the user searching candidate pictures from the picture material library by voice, static posture, dynamic body performance, or sketch drawing comprises:
(1) the user speaking into the microphone the name of the object to be searched, a short sentence containing that name, or a sound imitating the object to be drawn; the system determines the object name from what the user says and then searches candidate pictures from the picture material library;
(2) the user imitating with a static posture the appearance of the object to be painted; the camera captures the user's imitation, the shape features of the user's static posture are analyzed and extracted, and candidate pictures are searched from the picture material library according to the extracted shape features;
(3) the user imitating with a dynamic body performance the appearance of the object to be painted; the camera captures the user's body performance, the dynamic features of the user's performance are analyzed and extracted, and candidate pictures are searched from the picture material library according to the extracted features;
(4) the user drawing a sketch with drawing software, after which candidate pictures are searched in the picture material library according to the sketch;
(5) the user drawing a sketch on paper; the camera captures this sketch, which is then used to search candidate pictures in the picture material library.
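Methods (2)-(5) all reduce to extracting a shape feature from the captured input and ranking the library by feature similarity. The sketch below is illustrative only: the bounding-box aspect ratio used as the feature is a deliberately crude stand-in for a real shape descriptor, and all names are hypothetical:

```python
# Illustrative shape-feature search over a toy picture library.

def aspect_ratio(points):
    """Crude shape feature: width/height of the points' bounding box."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    w = max(xs) - min(xs) or 1e-9
    h = max(ys) - min(ys) or 1e-9
    return w / h

def search_by_shape(captured_points, library, k=3):
    """Rank library pictures by closeness of their precomputed shape feature
    to the feature extracted from the captured posture/sketch."""
    q = aspect_ratio(captured_points)
    ranked = sorted(library, key=lambda pic: abs(pic["feature"] - q))
    return ranked[:k]

library = [
    {"name": "tower", "feature": 0.25},  # tall and thin
    {"name": "ball",  "feature": 1.0},   # round
    {"name": "train", "feature": 4.0},   # long and flat
]
# A wide, flat posture (e.g. arms stretched out horizontally):
posture = [(0, 0), (8, 0), (8, 2), (0, 2)]
print([p["name"] for p in search_by_shape(posture, library, k=2)])  # ['train', 'ball']
```

A real system would replace the aspect ratio with a robust descriptor computed from the camera image, but the ranking step stays the same.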
3. The method of electronic painting creation based on multi-media user interaction according to claim 1, characterized in that the step of the user choosing a picture from the candidate pictures by voice, static gesture, or eye tracking comprises:
(1) the candidate pictures being displayed in pages, every picture on each page bearing a number, and the user saying the number of the selected picture into the microphone;
(2) the user pointing a finger at one of the candidate pictures; the camera captures the user's static gesture, and which region of the screen the gesture points to is analyzed to determine which picture the user wants to choose;
(3) the camera tracking the user's eye movements, the picture the user wants to choose being determined by measuring which region of the screen the user's gaze rests on the longest.
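Selection method (3), dwell-time-based gaze selection, can be sketched as follows. This is illustrative only; the gaze samples and screen regions are hypothetical inputs from an eye-tracking module:

```python
# Illustrative dwell-time gaze selection: pick the picture whose screen
# region accumulates the most gaze time.

def pick_by_gaze(gaze_samples, regions, dt=1.0):
    """gaze_samples: list of (x, y) gaze points sampled every dt seconds.
    regions: picture number -> (x0, y0, x1, y1) screen rectangle.
    Returns the number of the region where the gaze dwelt longest."""
    dwell = {rid: 0.0 for rid in regions}
    for x, y in gaze_samples:
        for rid, (x0, y0, x1, y1) in regions.items():
            if x0 <= x < x1 and y0 <= y < y1:
                dwell[rid] += dt  # each sample adds dt seconds of dwell time
                break
    return max(dwell, key=dwell.get)

regions = {1: (0, 0, 100, 100), 2: (100, 0, 200, 100)}
samples = [(10, 10), (150, 50), (160, 40), (170, 30)]  # 1 s on #1, 3 s on #2
print(pick_by_gaze(samples, regions))  # 2
```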
4. The method of electronic painting creation based on multi-media user interaction according to claim 1, characterized in that the step of automatically segmenting the picture chosen by the user, extracting the elements in the picture, and choosing the elements the user needs from the segmentation results by recognizing the user's voice or static gesture, or by eye tracking, comprises:
(1) segmenting the chosen picture with an image segmentation algorithm, extracting the elements in the picture, and numbering each extracted element;
(2) the user saying the number of an element into the microphone;
(3) the user pointing a finger at an element extracted from the image; the camera captures the user's gesture, and which region of the screen the gesture points to is analyzed to determine which segmented element the user wants to choose;
(4) the camera tracking the user's eye movements, the segmented element the user wants to choose being determined by measuring which region of the screen the user's gaze rests on the longest.
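Step (1) can be illustrated with a toy segmentation: a 4-connected flood fill over a small binary image stands in for a real image-segmentation algorithm, and the numbering mirrors the element numbers the user speaks in step (2). All names are hypothetical:

```python
# Illustrative segmentation + numbering via 4-connected flood fill.

def label_components(img):
    """img: 2-D list of 0/1 pixels. Returns element number -> pixel list."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    elements, n = {}, 0
    for i in range(h):
        for j in range(w):
            if img[i][j] and not seen[i][j]:
                n += 1  # a new element gets the next number
                stack, pixels = [(i, j)], []
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                elements[n] = pixels  # selectable by saying the number n
    return elements

img = [[1, 1, 0, 0],
       [0, 0, 0, 1],
       [0, 0, 1, 1]]
print(sorted(label_components(img)))  # [1, 2] -> two numbered elements
```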
5. The method of electronic painting creation based on multi-media user interaction according to claim 1, characterized in that the step of the user adjusting the size and position of each painting element by static gesture and voice, making the painting attractive and more artistic, comprises:
(1) for every painting element in the painting, whether drawn by the user with drawing software or extracted from a picture, the user selecting one or more painting elements with a static gesture or voice;
(2) the user using gesture or voice to adjust the size, angle, and position of the one or more selected painting elements.
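The adjustments of step (2) amount to an affine transform of the element's geometry. The sketch below is illustrative only: scale about the origin, then rotate, then translate; the function names are hypothetical:

```python
# Illustrative size/angle/position adjustment of a painting element.

import math

def adjust_element(points, scale=1.0, angle_deg=0.0, dx=0.0, dy=0.0):
    """Return the element's points after scaling, rotating, and moving."""
    a = math.radians(angle_deg)
    out = []
    for x, y in points:
        x, y = x * scale, y * scale                 # size
        x, y = (x * math.cos(a) - y * math.sin(a),  # angle
                x * math.sin(a) + y * math.cos(a))
        out.append((x + dx, y + dy))                # position
    return out

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
moved = adjust_element(square, scale=2.0, angle_deg=90.0, dx=5.0, dy=5.0)
print([(round(x, 6), round(y, 6)) for x, y in moved])
# [(5.0, 5.0), (5.0, 7.0), (3.0, 7.0), (3.0, 5.0)]
```

A voice command such as "bigger" or a pinch gesture would simply map to the `scale`, `angle_deg`, `dx`, and `dy` parameters.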
CN2008101207935A 2008-09-05 2008-09-05 Electronic painting creative method based on multi-medium user interaction Expired - Fee Related CN101382836B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101207935A CN101382836B (en) 2008-09-05 2008-09-05 Electronic painting creative method based on multi-medium user interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008101207935A CN101382836B (en) 2008-09-05 2008-09-05 Electronic painting creative method based on multi-medium user interaction

Publications (2)

Publication Number Publication Date
CN101382836A true CN101382836A (en) 2009-03-11
CN101382836B CN101382836B (en) 2010-12-15

Family

ID=40462705

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101207935A Expired - Fee Related CN101382836B (en) 2008-09-05 2008-09-05 Electronic painting creative method based on multi-medium user interaction

Country Status (1)

Country Link
CN (1) CN101382836B (en)


Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101867744B (en) * 2009-04-20 2013-10-16 Tcl集团股份有限公司 TV set having electronic drawing function and realizing method thereof
US9449107B2 (en) 2009-12-18 2016-09-20 Captimo, Inc. Method and system for gesture based searching
CN102445984B (en) * 2010-10-08 2015-06-24 英业达股份有限公司 Voice drawing system and method
CN102445984A (en) * 2010-10-08 2012-05-09 英业达股份有限公司 Voice drawing system and method
CN102637107A (en) * 2011-02-15 2012-08-15 鸿富锦精密工业(深圳)有限公司 Drawing operation method
US9772690B2 (en) 2011-11-04 2017-09-26 Tobii Ab Portable device
CN108170218A (en) * 2011-11-04 2018-06-15 托比公司 Mancarried device
CN103988142B (en) * 2011-11-04 2018-06-26 托比公司 Mancarried device
CN103988142A (en) * 2011-11-04 2014-08-13 托比伊科技公司 Portable device
US10037086B2 (en) 2011-11-04 2018-07-31 Tobii Ab Portable device
US10409388B2 (en) 2011-11-04 2019-09-10 Tobii Ab Portable device
US10061393B2 (en) 2011-11-04 2018-08-28 Tobii Ab Portable device
CN103294178A (en) * 2012-02-29 2013-09-11 联想(北京)有限公司 Man-machine interaction control method and electronic terminal
CN103389870A (en) * 2012-05-11 2013-11-13 中兴通讯股份有限公司 Unlocking method and device for touch control screen
CN103268188A (en) * 2013-05-27 2013-08-28 华为终端有限公司 Setting method, unlocking method and device based on picture characteristic elements
CN104615440A (en) * 2015-02-13 2015-05-13 联想(北京)有限公司 Information processing method and electronic device
CN105447896A (en) * 2015-11-14 2016-03-30 华中师范大学 Animation creation system for young children
CN105912256A (en) * 2016-04-08 2016-08-31 微鲸科技有限公司 Control method of touch screen and touch device
CN106200942B (en) * 2016-06-30 2022-04-22 联想(北京)有限公司 Information processing method and electronic equipment
CN106200942A (en) * 2016-06-30 2016-12-07 联想(北京)有限公司 Information processing method and electronic equipment
WO2018023316A1 (en) * 2016-07-31 2018-02-08 李仁涛 Early education machine capable of painting
CN108008810A (en) * 2016-11-01 2018-05-08 深圳纬目信息技术有限公司 A kind of confirmation method and system based on Mental imagery
CN106780673A (en) * 2017-02-13 2017-05-31 杨金强 A kind of animation method and system
CN106951090B (en) * 2017-03-29 2021-03-30 北京小米移动软件有限公司 Picture processing method and device
CN106951090A (en) * 2017-03-29 2017-07-14 北京小米移动软件有限公司 Image processing method and device
CN106959760A (en) * 2017-03-31 2017-07-18 联想(北京)有限公司 A kind of information processing method and device
CN107610549A (en) * 2017-11-07 2018-01-19 宋彦震 Track and pick up color, children cognition of tinting education card and method
CN107610549B (en) * 2017-11-07 2019-10-25 杭州勒格网络科技有限公司 It tracks and picks up color, children cognition of tinting education card and method
WO2019095801A1 (en) * 2017-11-14 2019-05-23 上海电机学院 Interactive drawing method and apparatus based on sound mfcc characteristics
CN110704711A (en) * 2019-09-11 2020-01-17 中国海洋大学 Object automatic identification system for lifetime learning
CN112987930A (en) * 2021-03-17 2021-06-18 读书郎教育科技有限公司 Method for realizing convenient interaction with large-size electronic product
CN114115528A (en) * 2021-11-02 2022-03-01 深圳市雷鸟网络传媒有限公司 Virtual object control method and device, computer equipment and storage medium
CN114115528B (en) * 2021-11-02 2024-01-19 深圳市雷鸟网络传媒有限公司 Virtual object control method, device, computer equipment and storage medium
WO2023131016A1 (en) * 2022-01-04 2023-07-13 北京字节跳动网络技术有限公司 Tutorial data display method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CN101382836B (en) 2010-12-15


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20101215

Termination date: 20150905

EXPY Termination of patent right or utility model