CN110750160A - Drawing method and device for drawing screen based on gesture, drawing screen and storage medium

Drawing method and device for drawing screen based on gesture, drawing screen and storage medium

Info

Publication number
CN110750160A
CN110750160A (application CN201911016667.XA; granted as CN110750160B)
Authority
CN
China
Prior art keywords
gesture
current moment
determining
current
depth value
Prior art date
Legal status
Granted
Application number
CN201911016667.XA
Other languages
Chinese (zh)
Other versions
CN110750160B (en)
Inventor
于越
Current Assignee
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN201911016667.XA priority Critical patent/CN110750160B/en
Publication of CN110750160A publication Critical patent/CN110750160A/en
Application granted granted Critical
Publication of CN110750160B publication Critical patent/CN110750160B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/80: Creating or modifying a manually drawn or painted image using a manual input device, e.g. mouse, light pen, direction keys on keyboard
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/107: Static hand or arm

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a gesture-based drawing method and device for a drawing screen, a drawing screen, and a storage medium, wherein the method comprises the following steps: acquiring a scene image of the current moment captured by a camera assembly; recognizing the scene image of the current moment and determining the gesture at the current moment; determining the current graph to be drawn according to the gesture at the current moment and the gesture in the preset time period immediately preceding the current moment; and drawing the graph to be drawn on the drawing screen. In this way, the user can draw remotely without touching the drawing screen, which makes drawing more engaging. Moreover, the user only needs to make the corresponding gesture to draw, without resorting to any auxiliary tool, which improves the convenience of drawing and the user experience.

Description

Drawing method and device for drawing screen based on gesture, drawing screen and storage medium
Technical Field
The application relates to the technical fields of image processing and human-computer interaction, and in particular to a gesture-based drawing method and device for a drawing screen, a drawing screen, and a storage medium.
Background
At present, in order to make painting feel realistic, a user can paint on paper. However, content drawn on paper is easily damaged or lost due to environmental or other factors, so to avoid this problem the user can instead paint on a device with a drawing screen, such as a drawing board.
In this case, however, the user must draw on the device with an auxiliary tool, such as a drawing pen, which is inconvenient to operate.
Disclosure of Invention
The application provides a gesture-based drawing method and device for a drawing screen, a drawing screen, and a storage medium, in order to solve the technical problem in the prior art that drawing on a device with a drawing screen requires an auxiliary tool and is very inconvenient to operate, thereby improving the convenience of drawing and the user experience.
An embodiment of a first aspect of the present application provides a gesture-based drawing method for a drawing screen, including:
acquiring a scene image of the current moment captured by a camera assembly;
recognizing the scene image of the current moment, and determining the gesture at the current moment;
determining the current graph to be drawn according to the gesture at the current moment and the gesture in the preset time period immediately preceding the current moment;
and drawing the graph to be drawn on a drawing screen.
With the gesture-based drawing method of the present application, the scene image of the current moment captured by the camera assembly is acquired; the scene image is recognized and the gesture at the current moment is determined; the current graph to be drawn is determined according to the gesture at the current moment and the gesture in the preset time period immediately preceding it; and the graph to be drawn is drawn on the drawing screen. In this way, the user can draw remotely without touching the drawing screen, which makes drawing more engaging. Moreover, the user only needs to make the corresponding gesture to draw, without resorting to any auxiliary tool, which improves the convenience of drawing and the user experience.
An embodiment of the second aspect of the present application provides a gesture-based drawing device for a drawing screen, including:
an acquisition module, configured to acquire a scene image of the current moment captured by a camera assembly;
a recognition module, configured to recognize the scene image of the current moment and determine the gesture at the current moment;
a determining module, configured to determine the current graph to be drawn according to the gesture at the current moment and the gesture in the preset time period immediately preceding the current moment;
and a drawing module, configured to draw the graph to be drawn on the drawing screen.
With the gesture-based drawing device of the present application, the scene image of the current moment captured by the camera assembly is acquired; the scene image is recognized and the gesture at the current moment is determined; the current graph to be drawn is determined according to the gesture at the current moment and the gesture in the preset time period immediately preceding it; and the graph to be drawn is drawn on the drawing screen. In this way, the user can draw remotely without touching the drawing screen, which makes drawing more engaging. Moreover, the user only needs to make the corresponding gesture to draw, without resorting to any auxiliary tool, which improves the convenience of drawing and the user experience.
An embodiment of a third aspect of the present application provides a drawing screen, including: a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the program, implements the gesture-based drawing method according to the embodiment of the first aspect of the application.
A fourth aspect of the present application provides a non-transitory computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements a gesture-based drawing screen drawing method as set forth in the first aspect of the present application.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a gesture-based drawing screen drawing method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a drawing board according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a gesture-based drawing screen drawing method according to a second embodiment of the present application;
FIG. 4 is a flowchart illustrating a gesture-based drawing screen drawing method according to a third embodiment of the present application;
FIG. 5 is a flowchart illustrating a gesture-based drawing screen drawing method according to a fourth embodiment of the present application;
FIG. 6 is a schematic diagram of an application scenario according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a gesture-based drawing screen drawing device according to a fifth embodiment of the present application;
fig. 8 is a schematic structural diagram of a drawing device of a gesture-based drawing screen according to a sixth embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
To address the prior-art technical problem that drawing on a device with a drawing screen requires an auxiliary tool and is very inconvenient to operate, the present application provides a gesture-based drawing method for a drawing screen.
With the gesture-based drawing method of the present application, the scene image of the current moment captured by the camera assembly is acquired; the scene image is recognized and the gesture at the current moment is determined; the current graph to be drawn is determined according to the gesture at the current moment and the gesture in the preset time period immediately preceding it; and the graph to be drawn is drawn on the drawing screen. In this way, the user can draw remotely without touching the drawing screen, which makes drawing more engaging. Moreover, the user only needs to make the corresponding gesture to draw, without resorting to any auxiliary tool, which improves the convenience of drawing and the user experience.
A gesture-based drawing screen drawing method, apparatus, drawing screen, and storage medium according to embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a drawing method of a gesture-based drawing screen according to an embodiment of the present disclosure.
The gesture-based drawing method of the embodiments can be applied to drawing screens, for example, to devices with drawing screens such as drawing boards, drawing-teaching aids, and other drawing equipment.
As shown in fig. 1, the gesture-based drawing screen drawing method includes the steps of:
Step 101, acquiring the scene image of the current moment captured by the camera assembly.
In the embodiment of the present application, the scene image at the current time may be a two-dimensional scene image. The camera assembly can comprise a color camera and/or a black and white camera, and a two-dimensional scene image at the current moment can be acquired based on the color camera and/or the black and white camera.
In this embodiment, the user can turn on the remote drawing function of the drawing screen. For example, the device with the drawing screen may have a corresponding control, or its remote controller may have a corresponding switch, for turning the remote drawing function on or off, and the user can enable the function by triggering that control or switch; alternatively, the user may enable the remote drawing function by voice control of the device, which the application does not limit. After the remote drawing function is turned on, the camera assembly can capture the scene image of the current moment and send it to the gesture-based drawing device, which correspondingly receives it. Alternatively, the gesture-based drawing device may communicate with the camera assembly in real time and acquire the scene image of the current moment as soon as the camera assembly captures it.
Step 102, recognizing the scene image of the current moment, and determining the gesture at the current moment.
In the embodiment of the application, after the scene image at the current moment is acquired, the scene image at the current moment can be subjected to image recognition, and the gesture at the current moment is determined. For example, in order to improve the accuracy of the image recognition result, the scene image at the current time may be recognized based on a machine learning method, and the gesture at the current time may be determined.
As a possible implementation, the scene image of the current moment may be recognized to determine and extract a hand region or gesture region. The image in that region may then be preprocessed to improve the accuracy of the recognition result, for example by geometric normalization and histogram equalization. Next, to avoid the influence of illumination changes and gesture rotation on the recognition result, features may be extracted from the image in the hand or gesture region using the Histogram of Oriented Gradients (HOG) to obtain gesture features, i.e., HOG features, and a dimensionality reduction algorithm such as Principal Component Analysis (PCA), Factor Analysis, or Independent Component Analysis (ICA) may be applied to the extracted HOG features. Finally, the dimension-reduced HOG features are classified with a Support Vector Machine (SVM), the meaning of the current gesture is judged, and a gesture serial number is output, thereby determining the gesture at the current moment.
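The following sketch illustrates this feature stage with scikit-image's HOG descriptor and scikit-learn's PCA. The 64x64 grayscale patch size, the HOG cell and block parameters, and the number of retained components are illustrative assumptions, not values fixed by the embodiment:

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA

def extract_hog_features(gesture_patches):
    """Compute HOG descriptors for a list of 64x64 grayscale
    gesture-region patches (one descriptor per patch)."""
    return np.array([
        hog(patch, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2), block_norm='L2-Hys')
        for patch in gesture_patches
    ])

# Reduce the HOG descriptors to a lower-dimensional space before
# classification; 64 retained components is an illustrative choice.
pca = PCA(n_components=64)
# reduced = pca.fit_transform(extract_hog_features(training_patches))
```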
For example, suppose 4 gestures, namely a brush gesture, a palm gesture, a fist gesture, and an undo gesture, are represented by the Arabic numerals 1 to 4 respectively. When the gestures are recognized with a voting-type SVM, a classifier can be constructed between each pair of gesture classes: for a classification problem with k classes of samples, k(k-1)/2 classifiers are constructed, the votes of all classifiers are counted, and the class with the most votes is the class to which the sample belongs. For the 4 gestures above, k takes the value 4.
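scikit-learn's SVC implements exactly this one-vs-one scheme for multiclass problems, training k(k-1)/2 pairwise classifiers and predicting the class with the most votes; the sketch below, using the four gesture labels of the example, is an assumed illustration:

```python
from sklearn.svm import SVC

# Gesture labels as in the example: 1 = brush, 2 = palm,
# 3 = fist, 4 = undo. With k = 4 classes, the one-vs-one scheme
# trains k*(k-1)/2 = 6 pairwise classifiers and predicts by vote.
svm = SVC(kernel='rbf', decision_function_shape='ovo')
# svm.fit(reduced_train_features, train_labels)
# gesture_id = svm.predict(reduced_query_features)   # e.g. array([1])
```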
It should be noted that the above only takes 4 gestures as an example; in practical applications, the gestures may be subdivided more finely to improve the accuracy of the recognition result.
Step 103, determining the current graph to be drawn according to the gesture at the current moment and the gesture in the preset time period immediately preceding the current moment.
In the embodiment of the present application, the preceding preset time period may include one moment or multiple moments, which is not limited in the present application.
In this embodiment, different gestures at the current moment can correspond to different graphs to be drawn, and the graph to be drawn can also differ depending on the gesture in the preset time period immediately preceding the current moment. For example, when the gesture at the current moment is the brush gesture, the graph to be drawn may be determined from the movement trajectory of the fingertip; when the gesture at the current moment is the undo gesture, the last drawing operation may be cancelled; and when the gesture at the current moment is the palm gesture while the gesture in the immediately preceding preset time period is the fist gesture, the graph to be drawn may be a paint-packet burst graphic, and so on.
In this embodiment, the correspondence between each gesture and a graph to be drawn, as well as the correspondence between successively made gestures and a graph to be drawn, can be preset, so that after the gesture at the current moment and the gesture in the immediately preceding preset time period are determined, the correspondences can be queried to determine the graph to be drawn at the current moment. Continuing the example above, if the gesture made first is the fist gesture and the gesture made later is the palm gesture, the graph to be drawn may be the paint-packet burst graphic.
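One plausible realization of these preset correspondences is a lookup table keyed by the pair (gesture in the previous period, gesture at the current moment); the identifiers and action names in this sketch are hypothetical:

```python
# Hypothetical gesture identifiers matching the example above.
BRUSH, PALM, FIST, UNDO = 1, 2, 3, 4

# (gesture in previous preset period, gesture at current moment) -> graph.
GESTURE_ACTIONS = {
    (None, BRUSH): 'draw_fingertip_trajectory',
    (None, UNDO):  'undo_last_drawing',
    (FIST, PALM):  'draw_paint_burst',   # fist first, then palm
}

def graphic_to_draw(prev_gesture, cur_gesture):
    """Query the preset correspondences; fall back to the
    single-gesture entry when the pair has no dedicated mapping."""
    return (GESTURE_ACTIONS.get((prev_gesture, cur_gesture))
            or GESTURE_ACTIONS.get((None, cur_gesture)))
```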
Step 104, drawing the graph to be drawn on the drawing screen.
In this embodiment, after the current graph to be drawn is determined, it can be drawn on the drawing screen. In this way, the user can draw remotely without touching the drawing screen, which makes drawing more engaging. Moreover, the user only needs to make the corresponding gesture to draw, without resorting to any auxiliary tool, which improves the convenience of drawing and the user experience.
As an application scenario, refer to fig. 2, which is a schematic structural diagram of a drawing board according to an embodiment of the present application. The drawing board 20 has a camera assembly 21 and a drawing screen 22. The camera assembly 21 captures scene images, and the gestures in the scene images are recognized: when the recognized gesture is the brush gesture, drawing can be performed on the drawing screen following the fingertip trajectory; when the fist gesture is recognized and the palm gesture is recognized afterwards, a graphic resembling a paint packet thrown at the screen and bursting can be drawn; and when the recognized gesture is the undo gesture, the effect of the last drawing operation can be withdrawn.
It should be noted that fig. 2 only takes 4 gestures as an example. In practical applications, the gestures may be subdivided more finely, and a correspondence between each gesture, or each sequence of gestures, and a graph to be drawn may be defined; for example, when the gesture is a thumbs-up gesture, the graph to be drawn may be a flower, and so on. In addition, fig. 2 only takes the undo gesture as a raised index and middle finger and the brush gesture as a raised index finger as examples; in practical applications the specific gestures can be customized, for example the undo gesture may also be three or four raised fingers, which is not limited in this application.
With the gesture-based drawing method of the present application, the scene image of the current moment captured by the camera assembly is acquired; the scene image is recognized and the gesture at the current moment is determined; the current graph to be drawn is determined according to the gesture at the current moment and the gesture in the preset time period immediately preceding it; and the graph to be drawn is drawn on the drawing screen. In this way, the user can draw remotely without touching the drawing screen, which makes drawing more engaging. Moreover, the user only needs to make the corresponding gesture to draw, without resorting to any auxiliary tool, which improves the convenience of drawing and the user experience.
In the prior art, when a user draws on a device having a drawing screen, the line width of the drawing is set by default, or can be manually set on the drawing screen by the user. In the first mode, the requirements of different users cannot be met, and in the second mode, the operation is very inconvenient.
In the present application, the line width used for drawing can be adjusted automatically according to the distance between the user's hand and the camera assembly, which meets the individual needs of different users without requiring the user to set the line width manually, simplifying operation and improving the user experience. This process is described in detail in the second embodiment.
Fig. 3 is a schematic flowchart of a drawing method of a gesture-based drawing screen according to a second embodiment of the present application.
As shown in fig. 3, the gesture-based drawing screen drawing method may include the steps of:
Step 201, acquiring the scene image of the current moment captured by the camera assembly.
The execution process of step 201 may refer to the execution process of step 101 in the above embodiments, which is not described herein again.
Further, in order to improve the accuracy of the subsequent recognition, the scene image of the current moment may be preprocessed. For example, it may be filtered with a median filter to remove noise, and the filtered image may then be enhanced so that the gesture information in the image is more pronounced and easier to extract, as sketched below.
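As an illustration, here is a minimal preprocessing sketch with OpenCV; the median kernel size and the use of CLAHE (local histogram equalization) on the luma channel are assumptions of this sketch, not choices fixed by the embodiment:

```python
import cv2
import numpy as np

def preprocess_scene(image_bgr: np.ndarray) -> np.ndarray:
    """Denoise and enhance a scene frame before gesture extraction:
    median filtering removes impulse noise, and CLAHE on the luma
    channel makes the gesture information easier to extract."""
    denoised = cv2.medianBlur(image_bgr, 5)            # median filtering
    ycrcb = cv2.cvtColor(denoised, cv2.COLOR_BGR2YCrCb)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    ycrcb[:, :, 0] = clahe.apply(ycrcb[:, :, 0])       # enhance luma only
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```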
Step 202, obtaining a depth value corresponding to each pixel point in the scene image at the current moment.
In this embodiment, the camera assembly may further include a depth camera, through which the depth value corresponding to each pixel in the scene image of the current moment is obtained.
Step 203, recognizing the scene image at the current moment, and determining the gesture at the current moment.
In the embodiment of the application, whether the scene image at the current moment contains a hand region or a gesture region can be determined based on skin color information and depth information of the scene image at the current moment, and then the image in the hand region or the gesture region can be identified to determine the gesture at the current moment. For example, in order to avoid the influence of illumination change and gesture rotation on the recognition result, feature extraction may be performed on the image in the hand region based on the HOG algorithm to obtain a gesture feature, that is, a HOG feature, and then, the HOG feature is recognized by using an SVM, the meaning of the current gesture is determined, and a gesture number is output, so as to determine the gesture at the current time.
Step 204, determining the current graph to be drawn according to the gesture at the current moment and the gesture in the preset time period immediately preceding the current moment.
The execution process of step 204 may refer to the execution process of step 103 in the above embodiments, which is not described herein again.
Step 205, determining the current line width according to the depth value corresponding to the gesture at the current moment.
In this embodiment, correspondences between different depth values and line widths can be preset, so that after the depth value corresponding to the gesture at the current moment is determined, the correspondence can be queried to determine the current line width. For example, the smaller the depth value corresponding to the gesture, the closer the hand is to the camera assembly and the wider the line; the larger the depth value, the farther the hand is from the camera assembly and the narrower the line.
The depth value corresponding to the gesture at the current moment may be the minimum depth value of the pixels in the gesture-region image, the average depth value of those pixels, or the depth value of the central pixel of the gesture-region image, which is not limited in the present application.
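As a sketch of such a preset correspondence, the following linearly interpolates the line width between a near and a far depth; the depth range and the width bounds are illustrative assumptions, and the minimum-depth aggregation in the last line is just one of the options above:

```python
import numpy as np

NEAR_MM, FAR_MM = 300.0, 1500.0         # assumed depth-camera working range
MAX_WIDTH_PX, MIN_WIDTH_PX = 24.0, 2.0  # assumed line-width bounds

def line_width_from_depth(gesture_depth_mm: float) -> float:
    """Smaller depth (hand closer to the camera assembly) gives a
    wider line; larger depth gives a narrower line."""
    d = np.clip(gesture_depth_mm, NEAR_MM, FAR_MM)
    return float(np.interp(d, [NEAR_MM, FAR_MM],
                           [MAX_WIDTH_PX, MIN_WIDTH_PX]))

# gesture_depth = depth_map[gesture_mask].min()   # minimum-depth option
```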
Step 206, drawing the graph to be drawn on the drawing screen with the current line width.
In this embodiment, after the current line width is determined, the graph to be drawn can be drawn on the drawing screen. Thus the user can adjust the distance between the hand and the camera assembly as needed to automatically adjust the current line width, which meets the individual needs of different users, requires no manual setting of the line width, simplifies operation, and improves the user experience.
As a possible implementation of step 102 or 203, skin color detection may be performed on the chrominance of each pixel in the scene image of the current moment, the connected domains may be determined from the skin color detection result, the hand region or gesture region may be determined from the depth information of each connected domain, and that region may then be recognized to determine the gesture at the current moment. This process is described in detail in the third embodiment.
Fig. 4 is a schematic flowchart of a drawing method of a gesture-based drawing screen according to a third embodiment of the present application.
As shown in fig. 4, on the basis of the above embodiment, step 102 or 203 may specifically include the following sub-steps:
step 301, performing skin color detection on the chromaticity of each pixel point in the scene image at the current moment.
In the embodiment of the application, the skin color detection can be performed on the chromaticity of each pixel point in the scene image at the current moment based on the elliptical skin color model, so that the skin color area is determined.
As a possible implementation, if the scene image of the current moment is in the RGB color space, it may first be converted from the RGB color space to the YCbCr color space. Skin color detection is then performed on the chrominance of each pixel with the elliptical skin color model, which judges whether each pixel belongs to a skin color region. The model first transforms the chrominance components, as in equation (1):

$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} Cb - Cb_0 \\ Cr - Cr_0 \end{bmatrix} \qquad (1)$$

Empirically, the ellipse is centered at $Cb_0 = 123.73$, $Cr_0 = 136.34$, its inclination angle is $\theta = 2.25$ rad, and its semi-major and semi-minor axes are $a = 28.73$ and $b = 16.24$; $C_x$ and $C_y$ are the center offsets of the transformed coordinates, with $C_x = 1.69$. Cb and Cr denote the chrominance components of the YCbCr color space, and x, y denote the transformed chrominance components. The skin color criterion is given by equation (2):

$$D(Cb, Cr) = \frac{(x - C_x)^2}{a^2} + \frac{(y - C_y)^2}{b^2} \qquad (2)$$

where D(Cb, Cr) is the skin color decision value. The transformed chrominance components x and y of each pixel are substituted into equation (2); if the value is less than or equal to 1, the pixel lies inside the ellipse, i.e., it belongs to a skin color region. Substituting the chrominance components of every pixel in the scene image of the current moment into equation (2) in turn therefore yields all the skin color regions.
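The sketch below vectorizes equations (1) and (2) over a whole frame. It assumes OpenCV's YCrCb conversion (note the Y, Cr, Cb channel order), and since only Cx is given numerically above, the value used for Cy is a placeholder assumption:

```python
import cv2
import numpy as np

# Parameters of the elliptical skin color model from the text; the
# value of CY is an assumed placeholder (only Cx is given above).
CB0, CR0 = 123.73, 136.34
THETA = 2.25                       # ellipse inclination angle, rad
A, B = 28.73, 16.24                # semi-major / semi-minor axes
CX, CY = 1.69, 0.0                 # CY value assumed

def skin_mask(image_bgr: np.ndarray) -> np.ndarray:
    """Boolean mask of the pixels inside the skin color ellipse."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    cr, cb = ycrcb[:, :, 1], ycrcb[:, :, 2]    # OpenCV order: Y, Cr, Cb
    # Equation (1): rotate the centered chrominance coordinates.
    x = np.cos(THETA) * (cb - CB0) + np.sin(THETA) * (cr - CR0)
    y = -np.sin(THETA) * (cb - CB0) + np.cos(THETA) * (cr - CR0)
    # Equation (2): a pixel is skin-colored iff D(Cb, Cr) <= 1.
    d = ((x - CX) / A) ** 2 + ((y - CY) / B) ** 2
    return d <= 1.0
```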
Step 302, if the scene image of the current moment includes a target connected region, performing gesture recognition on the image in the target connected region and determining the gesture at the current moment, wherein the chrominance components of the pixels in the target connected region meet the preset condition.
In this embodiment, the chrominance components of the pixels in the target connected region satisfy equation (2); that is, the target connected region is a skin color region.
In this embodiment, if the scene image of the current moment includes only one target connected region, gesture recognition can be performed directly on the image in that region to determine the gesture at the current moment. For example, to avoid the influence of illumination changes and gesture rotation on the recognition result, HOG features may be extracted from the image in the target connected region, the HOG features may then be classified with an SVM, the meaning of the current gesture judged, and a gesture serial number output, thereby determining the gesture at the current moment.
Step 303, if the scene image at the current time includes a plurality of target connected regions, determining a depth value corresponding to each target connected region.
Step 304, determining the target connected region with the smallest corresponding depth value as the target connected region to be recognized.
Generally, a user draws facing the drawing screen, and when drawing with gestures the hand is relatively close to the camera assembly. Therefore, if the scene image of the current moment includes a plurality of target connected regions, the depth value corresponding to each target connected region can be determined, and the region with the smallest depth value taken as the target connected region to be recognized. The depth value of a target connected region can be obtained by averaging the depth values of all pixels in that region.
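A sketch of this selection step: label the skin-colored connected regions, then keep the one with the smallest average depth. Using scipy.ndimage.label here is an implementation assumption:

```python
import numpy as np
from scipy import ndimage

def nearest_skin_region(skin: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Among the skin-colored connected regions, return the mask of
    the one closest to the camera (smallest average depth)."""
    labels, count = ndimage.label(skin)        # 4-connectivity by default
    if count == 0:
        return np.zeros_like(skin, dtype=bool)
    mean_depths = [depth[labels == i].mean() for i in range(1, count + 1)]
    target = int(np.argmin(mean_depths)) + 1   # region labels start at 1
    return labels == target
```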
Step 305, performing gesture recognition on the target connected region to be recognized, and determining the gesture at the current moment.
In this embodiment, after the target connected region to be recognized is determined, gesture recognition can be performed on it to determine the gesture at the current moment. As before, HOG features can be extracted from the image in the target connected region to be recognized, the HOG features classified with an SVM, the meaning of the current gesture judged, and a gesture serial number output, thereby determining the gesture at the current moment.
As a possible implementation manner, referring to fig. 5, on the basis of the foregoing embodiment, the gesture-based drawing screen drawing method may further include the following steps:
in step 401, if the gesture at the current moment is the first preset gesture, it is determined whether the gesture in the previous preset time period is the second preset gesture.
In this embodiment, the first preset gesture and the second preset gesture are preset; for example, the first preset gesture may be the palm gesture and the second preset gesture may be the fist gesture.
In the embodiment of the application, when the gesture at the current moment is the first preset gesture, the gesture in the previous preset time period may be matched with the second preset gesture, so as to determine whether the gesture in the previous preset time period is the second preset gesture.
Step 402, if the gesture in the previous preset time period is the second preset gesture, determining whether a first depth value corresponding to the first preset gesture is smaller than a second depth value corresponding to the second preset gesture.
In this embodiment of the application, the first depth value corresponding to the first preset gesture may be a minimum depth value corresponding to a pixel point in an image in the gesture area, or may also be an average depth value of each pixel point in the image in the gesture area, or may also be a depth value corresponding to a center pixel point in the image in the gesture area. Similarly, the second depth value corresponding to the second preset gesture may also be a minimum depth value corresponding to a pixel point in the image in the gesture area, or may also be an average depth value of each pixel point in the image in the gesture area, or may also be a depth value corresponding to a central pixel point in the image in the gesture area.
In this embodiment, when the gesture within the previous preset time period is the second preset gesture, the first depth value corresponding to the first preset gesture and the second depth value corresponding to the second preset gesture may be computed and subtracted: if the result is less than zero, the first depth value is smaller than the second depth value; otherwise, it is not.
In step 403, if the first depth value is smaller than the second depth value, the target style is determined according to the first depth value.
In this embodiment, when the gesture at the current moment is the first preset gesture and the gesture within the previous preset time period is the second preset gesture, the graph to be drawn, for example a paint-packet burst graphic, can be determined. When the first depth value is determined to be smaller than the second depth value, the target style can be determined according to the first depth value; for example, the target style may specify the size, shape, and other attributes of the region where the paint packet bursts.
For example, a smaller first depth value means the hand is closer to the camera assembly or the drawing screen; in that case the burst region is smaller, the area occupied by each paint dot within the corresponding boundary is larger, and the spacing between the paint dots is smaller.
And step 404, drawing the graph to be drawn in the drawing screen according to the target style.
In this embodiment, after the graph to be drawn and the target style are determined, the graph can be drawn on the drawing screen in the target style. Thus the user can select the graph to be drawn by gesture, and the display style of the graph is adjusted automatically according to the distance between the hand and the camera assembly, meeting the individual needs of different users.
As a possible implementation, if the gesture at the current moment is the second preset gesture, the second preset gesture and its corresponding second depth value are recorded; that is, no drawing is performed, and only the gesture and the corresponding depth information are recorded, so that at a subsequent moment the graph to be drawn can be determined from this gesture and depth information.
As a possible implementation manner, the target style may be determined according to a speed at which the user switches between the first preset gesture and the second preset gesture. That is, for step 403, a time difference between the current time and the time when the second preset gesture is captured may be determined, and the target style may be determined according to the time difference and the first depth value.
For example, a smaller time difference indicates a faster gesture change and a larger time difference a slower one. Thus, when the first depth value and the time difference are both small, the burst region is small, the area occupied by each paint dot within the boundary is large, and the spacing between paint dots is small; when the first depth value and the time difference are both large, the burst region is large, each paint dot is small, the spacing between dots is large, and so on.
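Read as a sketch, one way to combine the two factors is to let both the first depth value and the time difference scale the burst radius up and the dot size down, matching the qualitative rule above; every constant here is an illustrative assumption:

```python
import numpy as np

def burst_style(first_depth_mm: float, time_diff_s: float) -> dict:
    """Map (depth of the palm gesture, fist-to-palm time difference)
    to a paint-burst target style: smaller depth and time difference
    give a smaller burst with larger, more tightly packed dots."""
    depth_f = np.clip(first_depth_mm / 1500.0, 0.0, 1.0)  # assumed range
    time_f = np.clip(time_diff_s / 2.0, 0.0, 1.0)         # assumed 2 s cap
    scale = 0.5 * (depth_f + time_f)
    return {
        'burst_radius_px': 40 + 160 * scale,  # larger -> wider burst region
        'dot_radius_px':   10 - 7 * scale,    # larger -> smaller dots
        'dot_spacing_px':  4 + 12 * scale,    # larger -> sparser dots
    }
```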
As a possible implementation manner, before the graph to be drawn is drawn in the drawing screen, a first position of a gesture at the current time in a scene image at the current time may be determined, and a corresponding second position in the drawing screen is determined according to the first position and the resolution of the drawing screen, so that the graph to be drawn may be subsequently drawn at the second position.
For example, when the gesture at the current moment is the brush gesture, the pixel with the smallest depth value in the gesture region may be taken as the fingertip position. Suppose that pixel has coordinates (x, y), i.e., the first position is (x, y), that the resolution of the scene image of the current moment is w×h, and that the resolution of the drawing screen is W×H. The relative position of the fingertip in the scene image can then be represented by the coordinates (x/w, y/h), the corresponding second position of the fingertip on the drawing screen is (W·x/w, H·y/h), and the graph to be drawn can be drawn at that second position. For example, adjacent position points may be connected, or every three points may be connected, to obtain the drawn graph.
Alternatively, suppose the gesture at the current moment t2 is the palm gesture, the gesture at the adjacent previous drawing moment t1 is the fist gesture (t1 < t2), and the depth value d2min of the palm gesture is smaller than the depth value d1min of the fist gesture; that is, the fist gesture precedes the palm gesture and the hand has moved toward the screen. In this case the relative position of the central pixel of the palm gesture in the scene image of the current moment can be calculated, and the paint-packet burst graphic drawn at the corresponding position on the drawing screen. For example, if the central pixel of the palm gesture has coordinates (x1, y1), the burst graphic can be drawn at (W·x1/w, H·y1/h).
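A sketch of this position mapping, writing w×h for the scene-image resolution and W×H for the drawing-screen resolution as above; taking the fingertip as the minimum-depth pixel of the gesture region follows the text, while the concrete resolutions in the usage lines are assumptions:

```python
import numpy as np

def to_screen(x, y, img_wh, screen_wh):
    """Map pixel (x, y) of a w x h scene image to a W x H screen."""
    (w, h), (W, H) = img_wh, screen_wh
    return int(W * x / w), int(H * y / h)

def fingertip_position(depth, gesture_mask):
    """Fingertip = gesture pixel with the smallest depth value."""
    masked = np.where(gesture_mask, depth, np.inf)
    row, col = np.unravel_index(np.argmin(masked), masked.shape)
    return int(col), int(row)                  # (x, y) order

# Example with assumed resolutions:
# x, y = fingertip_position(depth_map, mask)
# screen_pt = to_screen(x, y, (640, 480), (1920, 1080))
```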
As a possible implementation, the camera assembly may capture images periodically; for step 101, the scene image of the current moment captured by the camera assembly at a first time interval can then be acquired. It should be noted that, in order to recognize discontinuous lines, the image acquisition frequency should be greater than the drawing frequency. Therefore, before step 103, it must further be determined that the time interval between the current moment and the adjacent previous drawing moment is greater than a second time interval, where the second time interval is greater than the first time interval.
It can be understood that when a user draws a discontinuous line, there must be a pause or jump of the finger between the two strokes. Setting the image acquisition frequency higher than the drawing frequency ensures that the corresponding image frames are captured, so that discontinuous lines can be drawn.
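A minimal sketch of this two-interval gate; both interval values are assumptions, chosen only so that the second exceeds the first:

```python
FIRST_INTERVAL_S = 0.02    # assumed image-acquisition period (50 fps)
SECOND_INTERVAL_S = 0.05   # assumed minimum gap between drawing moments

last_draw_time = 0.0

def should_draw(now: float) -> bool:
    """Allow drawing only when more than SECOND_INTERVAL_S has passed
    since the adjacent previous drawing moment, so that the faster
    acquisition stream can reveal pauses between discontinuous lines."""
    global last_draw_time
    if now - last_draw_time > SECOND_INTERVAL_S:
        last_draw_time = now
        return True
    return False
```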
As an application scenario, refer to fig. 6, which is a schematic view of an application scenario of an embodiment of the present application. The user turns on the remote drawing function of the drawing screen, and the camera assembly starts to capture an image sequence periodically, including scene images and the corresponding depth information. The scene images are preprocessed (filtered and enhanced), and the gesture region is extracted from the preprocessed scene image according to the skin color information and the depth information. The gesture at the current moment is then recognized with the SVM. If it is the brush gesture, the pixel with the smallest depth value in the scene image is taken as the fingertip position, the relative position coordinates of the fingertip in the scene image are calculated and mapped onto the drawing screen, and the fingertip trajectory is drawn. If it is the undo gesture, the last drawing operation is cancelled. If it is the palm gesture, the appearance time t2 of the palm gesture is later than the appearance time t1 of the fist gesture (t1 < t2), and the depth value d2min of the palm gesture is smaller than the depth value d1min of the fist gesture, then the relative position of the central pixel of the palm gesture in the scene image is calculated and mapped to the corresponding position on the drawing screen, where the paint-packet burst graphic is drawn. If the scene image includes none of the four gestures, or includes no gesture at all, the next frame is processed.
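Pulling the scenario of fig. 6 together, the following dispatch-loop sketch reuses the hypothetical helpers from the earlier sketches; the screen and state objects and their methods are likewise hypothetical:

```python
def process_frame(frame, depth, screen, state):
    """One illustrative iteration of the fig. 6 drawing loop; all
    helper names (preprocess_scene, skin_mask, nearest_skin_region,
    classify_gesture, ...) are hypothetical."""
    frame = preprocess_scene(frame)                      # filter + enhance
    mask = nearest_skin_region(skin_mask(frame), depth)  # gesture region
    gesture = classify_gesture(frame, mask)              # e.g. SVM stage
    if gesture == BRUSH:
        x, y = fingertip_position(depth, mask)
        screen.draw_point(to_screen(x, y, state.img_wh, state.screen_wh))
    elif gesture == UNDO:
        screen.undo_last()
    elif (gesture == PALM and state.prev_gesture == FIST
          and depth[mask].min() < state.prev_depth):
        screen.draw_burst(burst_style(depth[mask].min(),
                                      state.seconds_since_fist))
    state.remember(gesture, depth, mask)   # bookkeeping for the next frame
    # No recognized gesture: simply proceed to the next frame.
```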
To implement the above embodiments, the present application further provides a gesture-based drawing device for a drawing screen.
Fig. 7 is a schematic structural diagram of a drawing device of a gesture-based drawing screen according to a fifth embodiment of the present application.
As shown in fig. 7, the gesture-based drawing screen drawing device includes: an acquisition module 110, a recognition module 120, a determination module 130, and a rendering module 140.
The obtaining module 110 is configured to obtain a scene image of the current time acquired by the camera module.
The recognition module 120 is configured to recognize the scene image at the current time, and determine a gesture at the current time.
The determining module 130 is configured to determine a current graph to be drawn according to the gesture at the current time and the gesture in the previous preset time period adjacent to the current time.
And the drawing module 140 is used for drawing the graph to be drawn in the drawing screen.
As a possible implementation manner, the obtaining module 110 is further configured to obtain a depth value corresponding to each pixel point in the scene image at the current time.
The drawing module 140 is specifically configured to determine a current line width according to a depth value corresponding to the gesture at the current time; and drawing the graph to be drawn in the drawing screen according to the current line width.
Further, in a possible implementation manner of the embodiment of the present application, referring to fig. 8, on the basis of the embodiment shown in fig. 7, the apparatus may further include: a judging module 150 and a recording module 160.
As a possible implementation manner, the obtaining module 110 is further configured to obtain a depth value corresponding to each pixel point in the scene image at the current time.
The judging module 150 is configured to, if the gesture at the current moment is a first preset gesture, judge whether the gesture in a previous preset time period is a second preset gesture; if the gesture in the previous preset time period is the second preset gesture, whether a first depth value corresponding to the first preset gesture is smaller than a second depth value corresponding to the second preset gesture is judged.
The determining module 130 is further configured to determine the target style according to the first depth value if the first depth value is smaller than the second depth value.
The drawing module 140 is further configured to draw the graph to be drawn in the drawing screen according to the target style.
The recording module 160 is configured to record a second preset gesture and a second depth value corresponding to the second preset gesture if the gesture at the current moment is the second preset gesture.
As a possible implementation manner, the determining module 130 is specifically configured to determine a time difference between a current time and a time when the second preset gesture is acquired; and determining the target style according to the time difference and the first depth value.
As a possible implementation manner, the determining module 130 is further configured to determine a first position of the gesture at the current time in the scene image at the current time; and determining the second position according to the first position and the resolution of the drawing screen.
And the drawing module 140 is specifically used for drawing the graph to be drawn at a second position in the drawing screen.
As a possible implementation manner, the obtaining module 110 is specifically configured to obtain a scene image of a current time acquired by the camera component at a first time interval.
The determining module 130 is further configured to determine that a time interval between the current time and an adjacent previous drawing time is greater than a second time interval, where the second time interval is greater than the first time interval.
As a possible implementation, the recognition module 120 is specifically configured to perform skin color detection on the chrominance of each pixel in the scene image of the current moment; and, if the scene image of the current moment includes a target connected region, to perform gesture recognition on the image in the target connected region, wherein the chrominance components of the pixels in the target connected region meet the preset condition.
As a possible implementation manner, the identifying module 120 is further configured to determine, if the scene image at the current time includes a plurality of target connected regions, a depth value corresponding to each target connected region; and determining the target connected region with the minimum corresponding depth value as the target connected region to be identified.
It should be noted that the foregoing explanation of the embodiment of the gesture-based drawing method for a drawing screen is also applicable to the gesture-based drawing device of the embodiment, and is not repeated here.
With the gesture-based drawing device of the present application, the scene image of the current moment captured by the camera assembly is acquired; the scene image is recognized and the gesture at the current moment is determined; the current graph to be drawn is determined according to the gesture at the current moment and the gesture in the preset time period immediately preceding it; and the graph to be drawn is drawn on the drawing screen. In this way, the user can draw remotely without touching the drawing screen, which makes drawing more engaging. Moreover, the user only needs to make the corresponding gesture to draw, without resorting to any auxiliary tool, which improves the convenience of drawing and the user experience.
To implement the above embodiments, the present application further provides a drawing screen, including: a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the program, implements the gesture-based drawing method according to the foregoing embodiments of the present application.
In order to achieve the above embodiments, the present application also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a gesture-based drawing screen drawing method as proposed by the aforementioned embodiments of the present application.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (12)

1. A gesture-based drawing screen drawing method is characterized by comprising the following steps:
acquiring a scene image of the current moment captured by a camera assembly;
identifying the scene image at the current moment, and determining the gesture at the current moment;
determining a current graph to be drawn according to the gesture at the current moment and the gesture in the previous preset time period adjacent to the current moment;
and drawing the graph to be drawn in a drawing screen.
2. The method of claim 1, further comprising:
acquiring a depth value corresponding to each pixel point in the scene image at the current moment;
wherein the drawing the graph to be drawn in the drawing screen comprises:
determining a current line width according to the depth value corresponding to the gesture at the current moment; and
drawing the graph to be drawn in the drawing screen according to the current line width.
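
(Illustrative sketch.) The patent does not fix a formula for deriving line width from depth; the clamped linear mapping below, including its near/far range and width bounds, is an assumption:

    def line_width_from_depth(depth_mm, near=300.0, far=1500.0,
                              max_width=12.0, min_width=2.0):
        # Closer hands (smaller depth values) give wider strokes; the
        # near/far range and the linear mapping are illustrative only.
        depth_mm = min(max(depth_mm, near), far)  # clamp into [near, far]
        t = (depth_mm - near) / (far - near)      # 0.0 at near, 1.0 at far
        return max_width + t * (min_width - max_width)

For example, line_width_from_depth(300.0) returns the maximum width of 12.0, while a hand at 1500 mm yields the minimum width of 2.0.
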
3. The method of claim 1, further comprising:
acquiring a depth value corresponding to each pixel point in the scene image at the current moment;
wherein after the determining the gesture at the current moment, the method further comprises:
if the gesture at the current moment is a first preset gesture, determining whether the gesture in the preceding preset time period is a second preset gesture;
if the gesture in the preceding preset time period is the second preset gesture, determining whether a first depth value corresponding to the first preset gesture is smaller than a second depth value corresponding to the second preset gesture;
if the first depth value is smaller than the second depth value, determining a target style according to the first depth value; and
drawing the graph to be drawn in the drawing screen according to the target style.
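
(Illustrative sketch.) A plausible reading of claim 3 in code: the style changes only when a first preset gesture is observed at a smaller depth than the recorded second preset gesture, i.e. the hand moved toward the camera. The gesture names, the 200 mm depth band, and the styles list are all assumptions:

    def select_target_style(current, previous, styles):
        # `current` and `previous` are (gesture_name, depth_value) pairs;
        # `styles` is a hypothetical list of styles ordered by depth band.
        gesture_now, depth_now = current
        gesture_prev, depth_prev = previous
        if gesture_now != "first_preset" or gesture_prev != "second_preset":
            return None
        if depth_now >= depth_prev:  # hand did not move closer: no change
            return None
        # Quantize the first depth value into one of the style bands
        # (200 mm per band is an assumed granularity).
        band = int(depth_now // 200) % len(styles)
        return styles[band]

Called, for instance, as select_target_style(("first_preset", 450.0), ("second_preset", 900.0), ["pencil", "brush", "marker"]), this returns "marker".
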
4. The method of claim 3, wherein after the determining the gesture at the current moment, the method further comprises:
if the gesture at the current moment is the second preset gesture, recording the second preset gesture and the second depth value corresponding to the second preset gesture.
5. The method of claim 3, wherein the determining the target style comprises:
determining a time difference between the current moment and the moment at which the second preset gesture was acquired; and
determining the target style according to the time difference and the first depth value.
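
(Illustrative sketch.) Claim 5 leaves the concrete mapping from time difference and depth to a style open; the thresholds below are assumed values chosen only to show the shape of such a rule:

    def determine_target_style(time_diff_s, first_depth_mm):
        # A fast, deep push toward the camera selects a bolder style;
        # the 0.4 s and 500 mm thresholds are illustrative assumptions.
        fast = time_diff_s < 0.4        # seconds between the two gestures
        close = first_depth_mm < 500.0  # first preset gesture close to camera
        if fast and close:
            return "bold"
        if fast or close:
            return "regular"
        return "light"
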
6. The method of any one of claims 1-5, further comprising, before the drawing the graph to be drawn in the drawing screen:
determining a first position of the gesture at the current moment in the scene image at the current moment;
determining a second position according to the first position and a resolution of the drawing screen;
wherein the drawing the graph to be drawn in the drawing screen comprises:
drawing the graph to be drawn at the second position in the drawing screen.
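
(Illustrative sketch.) A common way to realize the position mapping of claim 6 is to scale image coordinates by the ratio of the drawing-screen resolution to the camera resolution; the patent's exact mapping may differ:

    def map_to_screen(first_pos, image_size, screen_size):
        # Scale a gesture position in the scene image to drawing-screen
        # coordinates using the ratio of the two resolutions.
        x_img, y_img = first_pos
        img_w, img_h = image_size   # e.g. (640, 480) camera frame
        scr_w, scr_h = screen_size  # drawing-screen resolution
        return (x_img * scr_w / img_w, y_img * scr_h / img_h)

For example, a gesture at (320, 240) in a 640x480 frame maps to (960.0, 540.0) on a 1920x1080 screen.
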
7. The method of any one of claims 1-5, wherein the acquiring the scene image at the current moment captured by the camera assembly comprises:
acquiring, at a first time interval, scene images captured by the camera assembly;
and wherein before the determining the current graph to be drawn, the method further comprises:
determining that a time interval between the current moment and an adjacent previous drawing moment is greater than a second time interval, wherein the second time interval is greater than the first time interval.
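
(Illustrative sketch.) The interval check of claim 7 amounts to rate-limiting drawing relative to frame capture; the 0.2 s value is an assumption, constrained only by the claim's requirement that the second interval exceed the first:

    def should_draw(now, last_draw_time, second_interval=0.2):
        # Draw only when enough time has passed since the previous drawing
        # moment; `second_interval` must exceed the frame-capture interval
        # so at least one fresh frame exists between two drawing operations.
        return (now - last_draw_time) > second_interval
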
8. The method of any one of claims 1-5, wherein the recognizing the scene image at the current moment comprises:
performing skin color detection on the chrominance of each pixel point in the scene image at the current moment; and
if the scene image at the current moment comprises a target connected region, performing gesture recognition on the image in the target connected region, wherein the chrominance component of each pixel point in the target connected region meets a preset condition.
9. The method of claim 8, further comprising:
if the scene image at the current moment comprises a plurality of target connected regions, determining a depth value corresponding to each target connected region; and
determining the target connected region with the smallest corresponding depth value as the target connected region to be recognized.
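
(Illustrative sketch.) Claims 8 and 9 can be read as skin-chrominance segmentation followed by picking the connected region nearest the camera. The sketch below assumes OpenCV, a depth map aligned to the color frame, and widely used Cr/Cb thresholds; the patent requires only that the chrominance meet a preset condition:

    import cv2
    import numpy as np

    def find_hand_region(frame_bgr, depth_map, min_area=1000):
        # Segment skin-colored pixels by chrominance in YCrCb space; the
        # Cr/Cb bounds are common heuristics, not values from the patent.
        ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
        skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
        count, labels = cv2.connectedComponents(skin)
        best_label, best_depth = None, np.inf
        for label in range(1, count):  # label 0 is the background
            region = labels == label
            if region.sum() < min_area:  # ignore small blobs
                continue
            region_depth = np.median(depth_map[region])  # depth of region
            if region_depth < best_depth:  # keep the nearest region
                best_depth, best_label = region_depth, label
        if best_label is None:
            return None
        # Return a binary mask of the connected region to be recognized.
        return (labels == best_label).astype(np.uint8) * 255
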
10. A gesture-based drawing screen drawing device, characterized by comprising:
an acquisition module, configured to acquire a scene image at a current moment captured by a camera assembly;
a recognition module, configured to recognize the scene image at the current moment and determine a gesture at the current moment;
a determining module, configured to determine a current graph to be drawn according to the gesture at the current moment and a gesture in a preset time period immediately preceding the current moment; and
a drawing module, configured to draw the graph to be drawn in the drawing screen.
11. A drawing screen, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the gesture-based drawing screen drawing method according to any one of claims 1-9 is implemented when the processor executes the program.
12. A non-transitory computer-readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the gesture-based drawing screen drawing method according to any one of claims 1-9.
CN201911016667.XA 2019-10-24 2019-10-24 Gesture-based drawing screen drawing method and device, drawing screen and storage medium Active CN110750160B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911016667.XA CN110750160B (en) 2019-10-24 2019-10-24 Gesture-based drawing screen drawing method and device, drawing screen and storage medium

Publications (2)

Publication Number Publication Date
CN110750160A (en) 2020-02-04
CN110750160B (en) 2023-08-18

Family

ID=69279686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911016667.XA Active CN110750160B (en) 2019-10-24 2019-10-24 Gesture-based drawing screen drawing method and device, drawing screen and storage medium

Country Status (1)

Country Link
CN (1) CN110750160B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090138830A1 (en) * 2005-06-20 2009-05-28 Shekhar Ramachandra Borgaonkar Method, article, apparatus and computer system for inputting a graphical object
CN105893959A (en) * 2016-03-30 2016-08-24 北京奇艺世纪科技有限公司 Gesture identifying method and device
CN106407892A (en) * 2016-08-27 2017-02-15 上海盟云移软网络科技股份有限公司 Hand portion motion capturing method based on Kinect
CN108229277A (en) * 2017-03-31 2018-06-29 北京市商汤科技开发有限公司 Gesture identification, control and neural network training method, device and electronic equipment
CN108932053A (en) * 2018-05-21 2018-12-04 腾讯科技(深圳)有限公司 Drawing practice, device, storage medium and computer equipment based on gesture

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111949134A (en) * 2020-08-28 2020-11-17 深圳Tcl数字技术有限公司 Human-computer interaction method, device and computer-readable storage medium
CN112925414A (en) * 2021-02-07 2021-06-08 深圳创维-Rgb电子有限公司 Display screen gesture drawing method and device and computer readable storage medium
CN113610944A (en) * 2021-07-30 2021-11-05 新线科技有限公司 Line drawing method, device, equipment and storage medium
CN113610944B (en) * 2021-07-30 2024-06-14 新线科技有限公司 Line drawing method, device, equipment and storage medium
CN113703577A (en) * 2021-08-27 2021-11-26 北京市商汤科技开发有限公司 Drawing method and device, computer equipment and storage medium
CN114639157A (en) * 2022-05-18 2022-06-17 合肥的卢深视科技有限公司 Bad learning behavior detection method, system, electronic device and storage medium
CN114639157B (en) * 2022-05-18 2022-11-22 合肥的卢深视科技有限公司 Bad learning behavior detection method, system, electronic device and storage medium

Also Published As

Publication number Publication date
CN110750160B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN110750160B (en) Gesture-based drawing screen drawing method and device, drawing screen and storage medium
US10372226B2 (en) Visual language for human computer interfaces
US9933856B2 (en) Calibrating vision systems
US9349039B2 (en) Gesture recognition device and control method for the same
US8768006B2 (en) Hand gesture recognition
US8675916B2 (en) User interface apparatus and method using movement recognition
JP5709228B2 (en) Information processing apparatus, information processing method, and program
US20120105613A1 (en) Robust video-based handwriting and gesture recognition for in-car applications
EP2980755B1 (en) Method for partitioning area, and inspection device
WO2012147960A1 (en) Information processing device, information processing method, and recording medium
CN105096377A (en) Image processing method and apparatus
US9557821B2 (en) Gesture recognition apparatus and control method of gesture recognition apparatus
JP2016520946A (en) Human versus computer natural 3D hand gesture based navigation method
CN103092334B (en) Virtual mouse driving device and virtual mouse simulation method
US7840035B2 (en) Information processing apparatus, method of computer control, computer readable medium, and computer data signal
US11402918B2 (en) Method for controlling terminal apparatus, apparatus for controlling terminal apparatus, and computer-program product
US10423824B2 (en) Body information analysis apparatus and method of analyzing hand skin using same
JP6326847B2 (en) Image processing apparatus, image processing method, and image processing program
CN113330395A (en) Multi-screen interaction method and device, terminal equipment and vehicle
CN112733823B (en) Method and device for extracting key frame for gesture recognition and readable storage medium
KR101281461B1 (en) Multi-touch input method and system using image analysis
KR100553850B1 (en) System and method for face recognition / facial expression recognition
Buddhikot et al. Hand gesture interface based on skin detection technique for automotive infotainment system
CN110308821B (en) Touch response method and electronic equipment
Rupanagudi et al. A novel and secure methodology for keyless ignition and controlling an automobile using air gestures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant