CN113703577A - Drawing method and device, computer equipment and storage medium


Info

Publication number
CN113703577A
Authority
CN
China
Prior art keywords: target, hand detection, detected, information, virtual
Prior art date
Legal status: Pending
Application number: CN202110996280.6A
Other languages: Chinese (zh)
Inventor: 孔祥晖 (Kong Xianghui)
Current Assignee: Beijing Sensetime Technology Development Co Ltd
Original Assignee: Beijing Sensetime Technology Development Co Ltd
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202110996280.6A
Publication of CN113703577A
Priority to PCT/CN2022/087946 (WO2023024536A1)

Classifications

    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06T 11/80 Creating or modifying a manually drawn or painted image using a manual input device, e.g. mouse, light pen, direction keys on keyboard

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a drawing method and apparatus, a computer device, and a storage medium. The method includes: acquiring an image to be detected of a target area; detecting the image to be detected and determining hand detection information in the image to be detected, the hand detection information including position information of a hand detection frame; under the condition that the hand detection information satisfies a trigger condition, determining a starting position of a first virtual tool in a display device based on the position information of the hand detection frame; and controlling the first virtual tool to draw, with the starting position as a drawing starting point, according to changes in the position information of the hand detection frame detected within a target time period.

Description

Drawing method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a drawing method, an apparatus, a computer device, and a storage medium.
Background
In the related art, drawing is generally performed directly on a touch screen, for example, by touching the screen with a finger or a stylus. However, this approach is difficult to apply to large touch screens and often degrades the drawing effect.
Disclosure of Invention
Embodiments of the present disclosure provide at least a drawing method and apparatus, a computer device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a drawing method, including:
acquiring an image to be detected of a target area;
detecting the image to be detected, and determining hand detection information in the image to be detected; the hand detection information comprises position information of a hand detection frame;
under the condition that the hand detection information meets a trigger condition, determining a starting position of a first virtual tool in a display device based on the position information of the hand detection frame;
and controlling the first virtual tool to draw by taking the starting position as a drawing starting point according to the change of the position information of the hand detection frame detected in the target time period.
According to the method, after the image to be detected of the target area is acquired, the hand detection information in the image to be detected can be determined, and the first virtual tool can be controlled to draw when a change in the position information of the hand detection frame is detected. The user therefore does not need to touch the display device directly through a medium such as a finger or a stylus during drawing, which enriches the available drawing modes. In particular, when the display device and the display screen used for presenting the picture are large, the user can observe the overall drawing effect while drawing remotely, which reduces interruptions and discontinuous strokes caused by having to reposition to complete long lines, optimizes the interaction process, and thus improves the drawing effect.
In one possible embodiment, the controlling the first virtual tool to draw with the start position as a drawing start point includes:
determining a target tool type corresponding to first gesture information according to the first gesture information indicated in the hand detection information;
controlling a first virtual tool under the target tool type to draw by taking the starting position as a drawing starting point; and the drawing result after drawing accords with the attribute corresponding to the target tool type.
Different gesture information corresponds to different tool types, so the user can switch tool types by changing gestures. This enriches the interaction between the user and the device during drawing and improves the user experience.
In a possible embodiment, the method further comprises:
and starting a drawing function of the display device under the condition that it is detected, based on the image to be detected, that the user is in a first target posture and that the duration of the first target posture exceeds a first preset duration.
In this way, the user can start the drawing function directly without contact, which simplifies the drawing process, avoids the loss of drawing efficiency caused by the user failing to find the drawing tool's identifier, and makes drawing more engaging.
In a possible embodiment, the hand detection information satisfying the trigger condition includes at least one of:
second gesture information indicated in the hand detection information conforms to a preset trigger gesture type;
the duration for which the position of the hand detection frame indicated in the hand detection information stays within the target area exceeds the set duration.
In one possible implementation, in a case where the first virtual tool is a virtual brush, the target tool type is a target brush type;
after the target tool type corresponding to the first gesture information is determined, the method further includes:
and determining a target virtual brush for drawing from a plurality of preset virtual brushes matched with the type of the target brush, and displaying the target virtual brush at the starting position.
By displaying the target virtual brush, the user can clearly and intuitively observe the current drawing process and make further adjustments to the drawing.
In one possible implementation, the determining a target virtual brush for drawing from a plurality of preset virtual brushes matching the target brush type includes:
and determining the target virtual brush matched with the user attribute information from a plurality of preset virtual brushes matched with the target brush type according to the user attribute information of the user corresponding to the hand detection frame.
Since the displayed virtual brush matches the user attribute information, the virtual brush can be displayed in a personalized manner, improving the user's experience during drawing.
In one possible embodiment, a menu area of a display device includes a plurality of virtual tool identifiers;
the method further comprises the following steps:
in response to detecting that the user makes a second target gesture, presenting a movement identifier at a starting position of the first virtual tool;
displaying a second virtual tool at the starting position of the first virtual tool under the condition that the movement identifier is detected to stay at the display position corresponding to a second virtual tool identifier, among the plurality of tool identifiers, for longer than a second preset duration;
in response to a target processing operation, processing the drawn portion based on the second virtual tool.
In this way, the user can switch virtual tools without contact, which increases the interaction between the user and the device during drawing and improves the user experience.
In one possible embodiment, the controlling the first virtual tool to draw with the start position as a drawing start point according to a change in the position information of the hand detection frame detected within a target period includes:
determining corrected position information according to the detected change of the position information of the hand detection frame; and controlling the first virtual tool to draw by taking the starting position as a drawing starting point according to the corrected position information.
In this way, the problem of a poor drawing effect caused by the user's hand trembling can be avoided, improving the drawing effect.
In one possible embodiment, the determining a starting position of a first virtual tool in a display device based on the position information of the hand detection frame includes:
and determining the starting position of a first virtual tool in the display device based on the position information of the hand detection frame and the proportional relation between the image to be detected and the display interface of the display device.
In a second aspect, an embodiment of the present disclosure further provides a drawing device, including:
the acquisition module is used for acquiring an image to be detected of a target area;
the first determining module is used for detecting the image to be detected and determining hand detection information in the image to be detected; the hand detection information comprises position information of a hand detection frame;
the second determining module is used for determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame under the condition that the hand detection information meets the trigger condition;
and the drawing module is used for controlling the first virtual tool to draw by taking the starting position as a drawing starting point according to the change of the position information of the hand detection frame detected in the target time period.
In one possible embodiment, the drawing module, when controlling the first virtual tool to draw with the start position as a drawing start point, is configured to:
determining a target tool type corresponding to first gesture information according to the first gesture information indicated in the hand detection information;
controlling a first virtual tool under the target tool type to draw by taking the starting position as a drawing starting point; and the drawing result after drawing accords with the attribute corresponding to the target tool type.
In a possible implementation, the apparatus further includes a control module configured to:
and starting a drawing function of the display device under the condition that it is detected, based on the image to be detected, that the user is in a first target posture and that the duration of the first target posture exceeds a first preset duration.
In one possible embodiment, the hand detection information satisfying the trigger condition includes at least one of:
second gesture information indicated in the hand detection information conforms to a preset trigger gesture type;
the duration for which the position of the hand detection frame indicated in the hand detection information stays within the target area exceeds the set duration.
In one possible implementation, in a case where the first virtual tool is a virtual brush, the target tool type is a target brush type;
after the target tool type corresponding to the first gesture information is determined, the second determining module is further configured to:
and determining a target virtual brush for drawing from a plurality of preset virtual brushes matched with the type of the target brush, and displaying the target virtual brush at the starting position.
In one possible implementation, the second determining module, when determining a target virtual brush for drawing from a plurality of preset virtual brushes matching the target brush type, is configured to:
and determining the target virtual brush matched with the user attribute information from a plurality of preset virtual brushes matched with the target brush type according to the user attribute information of the user corresponding to the hand detection frame.
In one possible embodiment, a menu area of a display device includes a plurality of virtual tool identifiers;
the drawing module is further configured to:
in response to detecting that the user makes a second target gesture, presenting a movement identifier at a starting position of the first virtual tool;
displaying a second virtual tool at the starting position of the first virtual tool under the condition that the movement identifier is detected to stay at the display position corresponding to a second virtual tool identifier, among the plurality of tool identifiers, for longer than a second preset duration;
in response to a target processing operation, processing the drawn portion based on the second virtual tool.
In one possible embodiment, the drawing module, when controlling the first virtual tool to draw with the start position as a drawing start point according to a change in the position information of the hand detection frame detected in the target period, is configured to:
determining corrected position information according to the detected change of the position information of the hand detection frame; and controlling the first virtual tool to draw by taking the starting position as a drawing starting point according to the corrected position information.
In one possible embodiment, the first determining module, when determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame, is configured to:
and determining the starting position of a first virtual tool in the display device based on the position information of the hand detection frame and the proportional relation between the image to be detected and the display interface of the display device.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the first aspect or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, performs the steps of the first aspect or any possible implementation of the first aspect.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly described below. The drawings here are incorporated into and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those of ordinary skill in the art can derive other related drawings from them without creative effort.
Fig. 1 is a flowchart of a drawing method provided by an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of the position information of a user's body limb key points and hand detection frames in a drawing method provided by an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of the architecture of a drawing apparatus provided by an embodiment of the present disclosure;
Fig. 4 is a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions are described below clearly and completely, with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments, as generally described and illustrated in the figures here, can be arranged and designed in a wide variety of configurations. Therefore, the following detailed description of the embodiments is not intended to limit the scope of the claimed disclosure but merely represents selected embodiments of it. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort shall fall within the protection scope of the present disclosure.
The defects of the scheme in which a user draws by directly contacting the display device through a medium such as a finger or a stylus were identified by the inventor through practice and careful study. Therefore, the discovery of the above problems, and the solutions to them proposed below, should be regarded as the inventor's contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, the drawing method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the drawing method provided in the embodiments of the present disclosure is generally a computer device with display capability, such as a smart television, a smartphone, or a tablet computer. It should be noted that drawing in the present disclosure includes, but is not limited to, drawing pictures, writing, and the like; that is, editing operations on the display interface are realized through interaction between the user and the computer device.
The display device described herein may refer to the computer device itself, or to a display device, such as a monitor, connected to the computer device; in the latter case the computation is performed by the computer device and the display is performed by the display device.
Referring to fig. 1, a flow chart of a drawing method provided in the embodiment of the present disclosure is shown, where the method includes steps 101 to 104, where:
Step 101, acquiring an image to be detected of a target area.
Step 102, detecting the image to be detected, and determining hand detection information in the image to be detected; the hand detection information includes position information of a hand detection frame.
Step 103, determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame under the condition that the hand detection information meets the trigger condition.
Step 104, controlling the first virtual tool to draw by taking the starting position as a drawing starting point according to the change of the position information of the hand detection frame detected in the target time period.
According to the method, after the image to be detected of the target area is acquired, the hand detection information in the image to be detected can be determined, and the first virtual tool can be controlled to draw when a change in the position information of the hand detection frame is detected. The user therefore does not need to touch the display device directly through a medium such as a finger or a stylus during drawing, which enriches the available drawing modes. In particular, when the display device and the display screen used for presenting the picture are large, the user can observe the overall drawing effect while drawing remotely, which reduces interruptions and discontinuous strokes caused by having to reposition to complete long lines, optimizes the interaction process, and thus improves the drawing effect.
The following is a detailed description of the above steps.
For steps 101 and 102,
Here, the target area may be any area from which the display interface of the display device can be viewed. For example, to ensure both the user's ability to control the display device and the display effect when content is presented on the display screen, the area directly facing the display device may be set as the target area. A camera device may be disposed on the display device or near it. The camera device can capture a scene image of the target area in real time; the scene image includes the image to be detected, which can be acquired from the camera device through data transmission. It should be noted that the deployment position of the camera device may be determined according to the position of the target area, so that its imaging area contains at least the target area.
The image to be detected may be any frame image corresponding to the target area; for example, it may be the image corresponding to the target area at the current time, or at a historical time. After the image to be detected is acquired, it can be detected to determine the hand detection information of the user in it.
The hand detection information may include position information of a hand detection frame, and the hand detection frame may refer to a minimum detection frame including a hand of a user in the image to be detected.
In a specific implementation, a target neural network for detecting the key points may be trained until it meets a preset condition, for example, until its loss value is smaller than a set loss threshold. The trained target neural network can then detect the image to be detected and determine the position information of the user's hand detection frame in it.
For example, the target neural network may identify the image to be detected and determine the position information of the user's body limb key points contained in it, and may further determine the position information of the user's hand detection frame based on those key points and the image to be detected. The number and positions of the half-body limb key points can be set as required; for example, there may be 14 or 17 of them. The position information of the hand detection frame includes the coordinate information of the four vertices of the detection frame and the coordinate information of its center point.
Referring to fig. 2, a schematic diagram of the position information of the user's body limb key points and hand detection frames is shown. The user's body key points in fig. 2 may include head vertex 5, head center point 4, neck joint point 3, left shoulder joint point 9, right shoulder joint point 6, left elbow joint point 10, right elbow joint point 7, left wrist joint point 11, right wrist joint point 8, body center point 12, crotch joint point 1, crotch joint point 2, and crotch center point 0. The hand detection frames may include the four vertices 13, 15, 16, 17 of the left-hand detection frame and the center point 14 of the left-hand frame, as well as the four vertices 18, 20, 21, 22 of the right-hand detection frame and the center point 19 of the right-hand frame.
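To make this concrete, the following minimal Python sketch (purely illustrative, not part of the disclosure) shows one way the fig. 2 key point indices and the position information of a hand detection frame could be represented; the class and field names are assumptions:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Body limb key point indices as labeled in fig. 2 (half-body scheme).
BODY_KEYPOINT_NAMES = {
    0: "crotch center", 1: "crotch joint", 2: "crotch joint",
    3: "neck joint", 4: "head center", 5: "head vertex",
    6: "right shoulder", 7: "right elbow", 8: "right wrist",
    9: "left shoulder", 10: "left elbow", 11: "left wrist",
    12: "body center",
}

@dataclass
class HandDetectionInfo:
    """Position information of one hand detection frame."""
    vertices: List[Tuple[float, float]]  # four vertices, e.g. points 13/15/16/17
    gesture: str = "unknown"             # gesture information, filled in by recognition

    @property
    def center(self) -> Tuple[float, float]:
        """Center point of the frame, e.g. point 14 or 19 in fig. 2."""
        xs = [v[0] for v in self.vertices]
        ys = [v[1] for v in self.vertices]
        return (min(xs) + max(xs)) / 2.0, (min(ys) + max(ys)) / 2.0
```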
For step 103,
In a possible implementation manner, before detecting whether the hand detection information satisfies the trigger condition, it may be detected whether the display device is in the drawing interface, that is, whether the interface displayed by the display device through the display screen belongs to the drawing interface.
When detecting whether the display device is in the drawing interface, the working state of each process currently started on the display device may be detected, where different processes correspond to different application programs. When the working state of the process corresponding to the drawing application program is detected to be the display state, it is determined that the display device is currently in the drawing interface; when that working state is detected to be the non-display state, it may be determined that the drawing application program has been started but switched to the background.
When the display device is detected not to be in the drawing interface, the drawing function can be started firstly, namely, the drawing application program is started, or the drawing application program running in the background is switched to the foreground for displaying.
In one possible implementation manner, the drawing function of the display device is started when it is detected, based on the image to be detected, that the user is in the first target posture and that the duration of the first target posture exceeds a first preset duration.
Here, the first target gesture may refer to a target action, such as waving a hand or making a scissor-hand gesture. Alternatively, the first target gesture may mean that the position in the display device corresponding to the center point of the hand detection frame is located in a first target area, where the first target area is an area in which the drawing function can be started or invoked, for example the area where the identifier (for example, an icon) of the drawing application program is located; the user then starts or invokes the drawing function by performing a corresponding gesture or operation in the first target area.
When detecting whether the user performs the target action, the image to be detected may be input into an action recognition network, and whether the user performs the target action is obtained from the network's output. The action recognition network may be trained on sample images carrying action labels, and the input to the network may consist of multiple consecutive images. Here, an action label indicates the action category contained in a sample image, for example a scissor hand or a fist.
In this way, the user can start the drawing function directly without contact, which simplifies the drawing process, avoids the loss of drawing efficiency caused by the user failing to find the drawing tool's identifier, and makes drawing more engaging.
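As a rough illustration of feeding multiple consecutive images to the action recognition network, the sketch below buffers recent frames and queries a classifier stub; `recognize_action`, the window size, and the action labels are assumptions rather than anything specified by the disclosure:

```python
from collections import deque

WINDOW_SIZE = 16        # consecutive frames per classification (assumed)
TARGET_ACTION = "wave"  # e.g. waving a hand as the target action (assumed)

def recognize_action(frames) -> str:
    """Placeholder for the trained action recognition network; would return
    an action label such as 'wave', 'scissor_hand', or 'none'."""
    return "none"

frame_buffer = deque(maxlen=WINDOW_SIZE)

def target_action_detected(frame) -> bool:
    """Buffers consecutive images and asks the network whether the buffered
    window shows the user performing the target action."""
    frame_buffer.append(frame)
    if len(frame_buffer) < WINDOW_SIZE:
        return False
    return recognize_action(list(frame_buffer)) == TARGET_ACTION
```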
When the display device is not in the drawing interface, the position information of a corresponding movement identifier can be determined from the position information of the hand detection frame; that is, the hand detection frame is represented by the movement identifier, which may, for example, be a mouse cursor. When the display device is in the drawing interface, the hand detection frame may be represented by the first virtual tool. The specific method for determining the position information of the first virtual tool based on the position information of the hand detection frame is described below.
The hand detection information satisfying the trigger condition may mean that it satisfies the trigger condition for starting drawing. In a possible implementation, the hand detection information satisfies the trigger condition if at least one of the following holds:
second gesture information indicated in the hand detection information conforms to a preset trigger gesture type;
the duration for which the position of the hand detection frame indicated in the hand detection information stays within the target area exceeds the set duration.
Here, the trigger gesture type may be a gesture for instructing drawing to start, for example a two-handed fist or an "OK" gesture. When determining the second gesture information indicated in the image to be detected, the second gesture information can be identified through a gesture recognition network; the training process of the gesture recognition network is similar to that of the action recognition network and is not repeated here.
The target area may be an area within a preset range from the position of the hand detection frame detected for the first time; the position of the hand detection frame may be a position after the position information of the hand detection frame is mapped on a display screen/display interface of a display device, or may also be a position indicated by the position information of the hand detection frame.
In a specific application scenario, the hand detection information satisfying the trigger condition may mean that the user makes a preset trigger gesture, or that the duration for which the user's hand stays still (or moves only within a small range) exceeds the set duration.
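The two trigger conditions might be checked as in the following sketch; the gesture names, the dwell threshold, and the `TriggerChecker` interface are illustrative assumptions:

```python
import time

TRIGGER_GESTURES = {"two_handed_fist", "ok"}  # preset trigger gesture types (assumed)
SET_DURATION = 1.5                            # the set duration, in seconds (assumed)

class TriggerChecker:
    """Either condition suffices: a preset trigger gesture, or the hand
    detection frame dwelling inside the target area beyond the set duration."""

    def __init__(self, target_area):
        self.target_area = target_area  # (x0, y0, x1, y1)
        self.dwell_start = None

    def satisfied(self, second_gesture: str, frame_center) -> bool:
        # Condition 1: second gesture information matches a trigger gesture type.
        if second_gesture in TRIGGER_GESTURES:
            return True
        # Condition 2: dwell time of the hand detection frame in the target area.
        x0, y0, x1, y1 = self.target_area
        x, y = frame_center
        if x0 <= x <= x1 and y0 <= y <= y1:
            if self.dwell_start is None:
                self.dwell_start = time.monotonic()
            return time.monotonic() - self.dwell_start > SET_DURATION
        self.dwell_start = None
        return False
```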
With respect to step 104,
In one possible implementation, a virtual tool is a tool used for drawing; examples include a brush, a coloring pen, and an eraser. The first virtual tool may be a default virtual tool or a virtual tool determined based on historical drawing operations.
For example, if it is detected for the first time that the hand detection information satisfies the trigger condition, the default virtual brush may be determined as the first virtual tool; if it is detected for the N-th time, the virtual tool in use at the end of the (N-1)-th drawing operation may be determined as the first virtual tool, where N is a positive integer greater than 1.
For example, if the virtual tool in use when the N-th drawing operation ended was an eraser, then when drawing is performed after the hand detection information is detected to satisfy the trigger condition for the (N+1)-th time, the corresponding first virtual tool is also an eraser.
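This selection rule (default tool on the first trigger, last-used tool on later triggers) can be sketched as follows; the names and the default tool are assumed:

```python
DEFAULT_FIRST_TOOL = "virtual_brush"  # assumed default

class FirstToolSelector:
    """The first trigger uses the default virtual brush; the N-th trigger
    reuses the tool in use when the (N-1)-th drawing operation ended."""

    def __init__(self):
        self._last_used = None

    def first_virtual_tool(self) -> str:
        return self._last_used or DEFAULT_FIRST_TOOL

    def drawing_ended(self, tool_in_use: str) -> None:
        self._last_used = tool_in_use
```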
In a possible embodiment, when the first virtual tool is controlled to draw with the starting position as the drawing start point, a target tool type corresponding to first gesture information may first be determined based on the first gesture information indicated in the hand detection information; a first virtual tool under the target tool type is then controlled to draw with the starting position as the drawing start point, and the drawing result conforms to the attribute corresponding to the target tool type.
Here, the attribute corresponding to the target tool type may include, for example, color, thickness, size, and processing type. Different gesture information corresponds to different tool types, so the user can switch tool types by changing gestures; this enriches the interaction between the user and the device during drawing and improves the user experience.
It should be noted that the image to be detected of the target region may be acquired in real time, and the hand detection information in the image to be detected may also be detected in real time. In contrast to the second gesture information, the first gesture information may refer to gesture information detected from the image to be detected after determining that the hand detection information satisfies the trigger condition.
The target tool types corresponding to different first gesture information may be entirely different tools; illustratively, the target tool type corresponding to gesture information A is an eraser, and that corresponding to gesture information B is a brush. Alternatively, they may be different types of the same first virtual tool; illustratively, if the first virtual tool is a brush, the target tool type corresponding to gesture information A is a rough brush, and that corresponding to gesture information B is a fine brush.
In a specific implementation, when the first virtual tool of the target tool type is controlled to draw with the starting position as the drawing start point, the movement trajectory of the hand detection frame may be determined based on its current position information and the historical position information of the hand detection frame in the temporally adjacent historical images to be detected, and drawing is then performed based on this movement trajectory and the starting position.
After the drawing in one step is completed, the position information of the first virtual tool may be re-determined based on the changed position information of the hand detection frame, and the re-determined position information may be re-used as the starting position to perform drawing in a subsequent step.
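A minimal sketch of this stroke-extension loop follows; `draw_segment` stands in for the display device's actual rendering call, which the disclosure does not specify:

```python
from typing import List, Tuple

Point = Tuple[float, float]

def draw_segment(start: Point, end: Point) -> None:
    """Placeholder for the display device's rendering call."""
    print(f"segment {start} -> {end}")

class DrawingSession:
    """Extends a stroke from successive mapped hand-frame positions; after
    each step the newly determined position becomes the next drawing start
    point, as described above."""

    def __init__(self, starting_position: Point):
        self.trajectory: List[Point] = [starting_position]

    def on_position_changed(self, new_position: Point) -> None:
        draw_segment(self.trajectory[-1], new_position)
        self.trajectory.append(new_position)
```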
In a possible implementation manner, in a case that the first virtual tool is a virtual brush, the target tool type may be a target brush type, and after the target tool type corresponding to the first gesture information is determined, a target virtual brush for drawing may be further determined from a plurality of preset virtual brushes matching the target brush type, and the target virtual brush is displayed at the start position.
Here, when a target virtual brush for drawing is determined from a plurality of preset virtual brushes matching a target brush type, an exemplary method may determine, in combination with user attribute information of a user, a target virtual brush matching the user attribute information of the user from a plurality of preset virtual brushes matching the target brush type.
The user attribute information may include, for example, age, gender, occupation, and the like. If the user is male and 30 years old, the target virtual brush matching the user attribute information may be a virtual pen; if the user is female and 5 years old, it may be a virtual cartoon pencil.
In this way, the displayed virtual brush matches the user attribute information, so the virtual tool can be displayed in a personalized manner, improving the user's experience during drawing.
When the target virtual brush is displayed at the starting position, the user's drawing habit, which may be preset, can also be taken into account. For example, the target virtual brush may be displayed tilted at a preset angle, to vividly depict the process of a user holding a drawing tool and editing on a surface such as paper.
In order to prevent trembling of the user's hand from affecting the drawing effect, an anti-shake process may be performed before the first virtual tool is controlled to draw.
Specifically, the corrected position information may be determined according to the detected change in the position information of the hand detection frame; and controlling the first virtual tool to draw by taking the starting position as a drawing starting point according to the corrected position information.
Here, the corrected position information is the information obtained after the position information of the hand detection frame is mapped onto the display device and then subjected to a correction process; the correction process may be, for example, a smoothing process.
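One simple choice of such a correction process is an exponential moving average, sketched below; the disclosure does not mandate a particular filter, so the class and its `alpha` parameter are assumptions:

```python
class PositionSmoother:
    """Exponential moving average over mapped hand positions, as one possible
    smoothing process to suppress hand tremor."""

    def __init__(self, alpha: float = 0.4):
        self.alpha = alpha  # smaller alpha gives stronger smoothing
        self._state = None

    def correct(self, position):
        if self._state is None:
            self._state = tuple(position)
        else:
            self._state = tuple(
                self.alpha * p + (1.0 - self.alpha) * s
                for p, s in zip(position, self._state)
            )
        return self._state
```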
In one possible embodiment, the menu area of the display device may include a plurality of virtual tool identifiers, which may include, for example, names, symbols, and the like of a plurality of virtual tools.
In one possible implementation, the user may switch the virtual tool and perform the drawing operation based on the switched virtual tool.
For example, a movement identifier may be presented at the starting position of the first virtual tool in response to detecting that the user makes a second target gesture. Then, when it is detected that the movement identifier has stayed at the display position corresponding to a second virtual tool identifier, among the plurality of tool identifiers, for longer than a second preset duration, the second virtual tool is displayed at the starting position of the first virtual tool; and in response to a target processing operation, the drawn portion is processed based on the second virtual tool.
Here, the second target gesture may be a gesture for instructing drawing to stop. For example, if the user's palm faces the display device, drawing may follow the palm as it moves (that is, as the hand detection frame moves); if the back of the user's hand faces the display device, a movement identifier, for example a mouse cursor, may be displayed at the starting position of the first virtual tool.
It should be noted that the starting position of the first virtual tool may change with the position information of the hand detection frame while the user is making the second target gesture. When the second target gesture is detected, the starting position may be updated in real time according to the position information of the hand detection frame; equivalently, the movement identifier moves with the hand detection frame, and the display position of the movement identifier after the change also serves as the starting position.
Processing the drawn portion based on the second virtual tool in response to the target processing operation may mean executing the processing function corresponding to the second virtual tool on the drawn portion; for example, if the second virtual tool is an eraser, part of the drawn content may be removed. Responding to the target processing operation may also include determining the processing position of the second virtual tool in response to movement of the user's hand detection frame.
In this way, the user can switch virtual tools without contact, which increases the interaction between the user and the device during drawing and improves the user experience.
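The dwell-based tool switching described above might look like the following sketch; the menu regions, the hover threshold, and the class interface are illustrative assumptions:

```python
import time

SECOND_PRESET_DURATION = 2.0  # hover time before switching, in seconds (assumed)

class ToolSwitcher:
    """Switches tools once the movement identifier has stayed over a tool
    identifier in the menu area longer than the second preset duration."""

    def __init__(self, menu_regions):
        self.menu_regions = menu_regions  # {tool_name: (x0, y0, x1, y1)}
        self._hover_tool = None
        self._hover_since = 0.0

    def update(self, identifier_position):
        """Returns the second virtual tool to display, or None."""
        x, y = identifier_position
        for tool, (x0, y0, x1, y1) in self.menu_regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                if tool != self._hover_tool:
                    self._hover_tool, self._hover_since = tool, time.monotonic()
                elif time.monotonic() - self._hover_since > SECOND_PRESET_DURATION:
                    return tool
                return None
        self._hover_tool = None
        return None
```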
In one possible implementation, when position information of a plurality of hand detection frames is detected, one target hand detection frame may be selected at random and drawing performed based on its position information. Alternatively, the starting positions of two first virtual tools may be determined based on two hand detection frames, and the two first virtual tools controlled to draw based on the changes in the position information of the two hand detection frames.
In summary, the above embodiments can be generally described as follows: under the condition that the hand detection information meets the trigger condition, controlling a first virtual tool to draw based on the change condition of the position information of the hand detection frame; when the user makes a second target gesture, stopping drawing, and displaying a mobile identifier, wherein the position of the mobile identifier can be changed according to the change of the position information of the hand detection frame; when it is detected again that the hand detection information meets the trigger condition, or the user stops making the second target gesture, the display position of the first virtual tool can be determined again based on the position information of the hand detection frame, and the first virtual tool can be displayed.
The hand detection information satisfying the trigger condition may be interpreted as a start of the drawing step, and the user making the second target gesture may be interpreted as an end of the drawing step.
A specific method for determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame is described below, that is, how the conversion between image coordinates in the image to be detected and coordinates in the display device is realized.
In a possible implementation manner, the starting position of the first virtual tool in the display device can be determined based on the position information of the hand detection frame and the proportional relation between the image to be detected and the display interface of the display device.
In a specific implementation, the target position information of the center point of the user's hand detection frame on the display interface can be determined from the proportional relation between the image to be detected and the display interface of the display device, together with the position information of the hand detection frame; this target position information is then used as the starting position of the first virtual tool.
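A sketch of this proportional mapping, under the assumption that the image and the display interface share the same origin and axis orientation:

```python
def map_to_display(frame_center, image_size, display_size):
    """Maps the center point of the hand detection frame from image
    coordinates to display-interface coordinates via their proportional
    relation."""
    (ix, iy), (iw, ih), (dw, dh) = frame_center, image_size, display_size
    return ix / iw * dw, iy / ih * dh

# Example: a 640x480 image mapped onto a 3840x2160 display interface.
# map_to_display((320, 240), (640, 480), (3840, 2160)) -> (1920.0, 1080.0)
```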
In an optional embodiment, before determining the starting position of the first virtual tool based on the position information of the hand detection frame, the method further includes: detecting the image to be detected, and determining the position information of a target joint point of the user in the image to be detected;
determining the starting position of the first virtual tool based on the position information of the hand detection frame then includes: determining the starting position of the first virtual tool based on the position information of the hand detection frame, the position information of the target joint point, and a reference ratio corresponding to the user, where the reference ratio is used to amplify a first distance between the position of the hand detection frame and the position of the target joint point.
Wherein the reference ratio may be determined according to the following steps:
step one, obtaining the distance between a hand detection frame and a target joint point to obtain the arm length of a user in an image to be detected.
And step two, obtaining the distances between the target joint point and each vertex of the image to be detected to obtain a second distance, where the second distance is the maximum among those distances.
And step three, determining the ratio of the arm length to the second distance as a reference proportion.
In step one, since the distance between the target joint point and the hand detection frame can represent the longest extension of the person's arm during movement, the distance between the center point of the hand detection frame and the target joint point may be determined first to obtain the arm length of the user in the image to be detected.
Illustratively, referring to fig. 2, a first straight-line distance between the right shoulder joint point 6 (the target joint point) and the right elbow joint point 7, a second straight-line distance between the right elbow joint point 7 and the right wrist joint point 8, and a third straight-line distance between the right wrist joint point 8 and the center point 19 of the right-hand detection frame may be calculated, and the sum of the three straight-line distances determined as the user's arm length. Alternatively, the same may be done on the left side using the left shoulder joint point 9, the left elbow joint point 10, the left wrist joint point 11, and the center point 14 of the left-hand detection frame.
In step two, after the straight-line distances between the target joint point and the four vertices of the image to be detected are calculated, the maximum of the four distances is selected as the second distance.
Alternatively, the center pixel of the image to be detected can be taken as the origin in advance, and the image evenly divided into four regions: a first region at the upper left, a second region at the upper right, a third region at the lower left, and a fourth region at the lower right. The region containing the target joint point can then be determined from its position information, the target vertex farthest from the target joint point determined from that region, and the straight-line distance between the target joint point and the target vertex calculated to obtain the second distance. For example, if the target joint point is located in the third region, the vertex at the upper right corner is determined as the target vertex; if it is located in the fourth region, the vertex at the upper left corner is determined as the target vertex.
In step three, the ratio of the arm length c to the second distance d may be determined as the reference ratio, that is, the reference ratio is c/d, where c is the arm length calculated in step one.
If the first distance is a, then when it is amplified based on the reference ratio, the amplified target distance is a/(c/d) = (a/c) × d. Since c is the longest reach of the arm, a/c is at most 1, so (a/c) × d is at most d; the amplified target distance therefore cannot exceed the second distance.
In this manner, the arm length of the user in the image to be detected and the second distance are determined, and their ratio is used as the reference ratio. When the first distance is amplified based on this reference ratio, the resulting target distance cannot exceed the second distance, so the determined intermediate position information cannot fall outside the range of the image to be detected.
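The three steps can be expressed directly in code, as in the sketch below; the function names are illustrative, and the target joint point is taken to be the shoulder as in the fig. 2 example:

```python
import math

def dist(p, q) -> float:
    return math.hypot(p[0] - q[0], p[1] - q[1])

def arm_length(shoulder, elbow, wrist, hand_center) -> float:
    """Step one: arm length c as the chained key point distances
    (e.g. points 6-7, 7-8, and 8 to center point 19 in fig. 2)."""
    return dist(shoulder, elbow) + dist(elbow, wrist) + dist(wrist, hand_center)

def second_distance(target_joint, image_size) -> float:
    """Step two: the largest straight-line distance d from the target joint
    point to the four vertices of the image to be detected."""
    w, h = image_size
    return max(dist(target_joint, v) for v in ((0, 0), (w, 0), (0, h), (w, h)))

def reference_ratio(shoulder, elbow, wrist, hand_center, image_size) -> float:
    """Step three: the reference ratio c / d."""
    c = arm_length(shoulder, elbow, wrist, hand_center)
    d = second_distance(shoulder, image_size)
    return c / d
```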
In an alternative embodiment, determining the starting position of the first virtual tool based on the position information of the hand detection frame, the position information of the target joint point, and the reference ratio corresponding to the user includes:
step one, determining intermediate position information of the first virtual tool in the image coordinate system corresponding to the image to be detected, based on the position information of the hand detection frame, the position information of the target joint point, and the reference ratio corresponding to the user;
and step two, determining the target display position of the movement identifier in the display device based on the intermediate position information.
The first step may be implemented as follows:
firstly, obtaining a first distance between the hand detection frame and the target joint point based on the position information of the hand detection frame and the position information of the target joint point;
secondly, amplifying the first distance based on the reference ratio to obtain a target distance;
and thirdly, determining the intermediate position information of the movement identifier in the image coordinate system corresponding to the image to be detected, based on the target distance and the position information of the hand detection frame.
Here, the first distance between the hand detection frame and the target joint point may be calculated from the position information of the hand detection frame and the position information of the target joint point. For example, if the position information of the center point of the hand detection frame is (x1, y1), the position information of the target joint point is (x2, y2), and the first distance is C1, then:
C1 = √((x1 − x2)² + (y1 − y2)²)
The first distance C1 may then be amplified based on the reference ratio c/d to determine the target distance D1, with C1/D1 = c/d, that is, D1 = C1 × d/c. Finally, the position information of the center point of the hand detection frame after this distance amplification can be determined based on the target distance and the position coordinates of the hand center point indicated by the position information of the hand detection frame, and this amplified center-point position is determined as the intermediate position information of the movement identifier in the image coordinate system corresponding to the image to be detected.
In the second step, the intermediate position information of the movement identifier in the image coordinate system corresponding to the image to be detected is converted, based on the proportional relation between the display interface of the display device and the image to be detected, into the coordinate system corresponding to the display interface, thereby determining the target display position of the movement identifier in the display device.
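Putting the amplification and the final coordinate conversion together, a sketch under the same assumptions as above (`ratio` is the reference ratio c/d):

```python
def amplified_position(hand_center, target_joint, ratio):
    """Enlarges the first distance C1 by the reference ratio c/d, so that
    D1 = C1 / (c/d) = C1 * d / c; since C1 <= c, D1 never exceeds d."""
    dx = hand_center[0] - target_joint[0]
    dy = hand_center[1] - target_joint[1]
    return target_joint[0] + dx / ratio, target_joint[1] + dy / ratio

def target_display_position(hand_center, target_joint, ratio,
                            image_size, display_size):
    """Intermediate position in image coordinates, then converted to the
    display interface by the proportional relation (cf. map_to_display)."""
    mx, my = amplified_position(hand_center, target_joint, ratio)
    (iw, ih), (dw, dh) = image_size, display_size
    return mx / iw * dw, my / ih * dh
```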
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, an embodiment of the present disclosure further provides a drawing apparatus corresponding to the drawing method. Since the principle by which the apparatus solves the problem is similar to that of the drawing method described above, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 3, a schematic diagram of the architecture of a drawing apparatus provided by an embodiment of the present disclosure is shown. The apparatus includes: an acquisition module 301, a first determining module 302, a second determining module 303, a drawing module 304, and a control module 305; wherein:
an obtaining module 301, configured to obtain an image to be detected of a target region;
a first determining module 302, configured to detect the image to be detected, and determine hand detection information in the image to be detected; the hand detection information comprises position information of a hand detection frame;
a second determining module 303, configured to determine, based on the position information of the hand detection frame, a starting position of the first virtual tool in the display device when the hand detection information satisfies a trigger condition;
a drawing module 304, configured to control the first virtual tool to draw with the start position as a drawing start point according to a change in the position information of the hand detection frame detected in the target time period.
In a possible implementation manner, the drawing module 304, when controlling the first virtual tool to draw with the starting position as a drawing starting point, is configured to:
determining a target tool type corresponding to first gesture information according to the first gesture information indicated in the hand detection information;
controlling a first virtual tool under the target tool type to draw by taking the starting position as a drawing starting point; and the drawing result after drawing accords with the attribute corresponding to the target tool type.
In a possible implementation, the apparatus further includes a control module 305 configured to:
and starting a drawing function of the display device under the condition that it is detected, based on the image to be detected, that the user is in a first target posture and that the duration of the first target posture exceeds a first preset duration.
In one possible embodiment, the hand detection information satisfying the trigger condition includes at least one of:
second gesture information indicated in the hand detection information conforms to a preset trigger gesture type;
the duration for which the position of the hand detection frame indicated in the hand detection information stays within the target area exceeds the set duration.
In one possible implementation, in a case where the first virtual tool is a virtual brush, the target tool type is a target brush type;
after the target tool type corresponding to the first gesture information is determined, the second determining module 303 is further configured to:
and determining a target virtual brush for drawing from a plurality of preset virtual brushes matched with the type of the target brush, and displaying the target virtual brush at the starting position.
In one possible implementation, when determining the target virtual brush for drawing from the plurality of preset virtual brushes matching the target brush type, the second determining module 303 is configured to:
determine, according to user attribute information of the user corresponding to the hand detection frame, the target virtual brush matching the user attribute information from the plurality of preset virtual brushes matching the target brush type.
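One plausible realization of the attribute match is a filtered lookup; the attribute key (an age group) and the preset brushes below are purely hypothetical:

    # Hypothetical preset virtual brushes per target brush type, tagged with a
    # user attribute; the disclosure does not fix which attributes are used.
    PRESET_BRUSHES = {
        "writing_brush": [
            {"name": "crayon_style", "age_group": "child"},
            {"name": "calligraphy_style", "age_group": "adult"},
        ],
    }

    def pick_target_brush(target_brush_type: str, age_group: str):
        # Return the preset virtual brush matching the user attribute
        # information, or None if no preset brush matches.
        for brush in PRESET_BRUSHES.get(target_brush_type, []):
            if brush["age_group"] == age_group:
                return brush
        return None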
In one possible embodiment, a menu area of the display device includes a plurality of virtual tool identifiers;
the drawing module 304 is further configured to:
present a movement identifier at the starting position of the first virtual tool in response to detecting that the user makes a second target gesture;
display a second virtual tool at the starting position of the first virtual tool when the detected duration for which the movement identifier stays at the display position corresponding to a second virtual tool identifier among the plurality of virtual tool identifiers exceeds a second preset duration;
process the drawn portion based on the second virtual tool in response to a target processing operation.
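The dwell-based selection from the menu area can be sketched as follows; the one-second threshold and the identifier strings are illustrative assumptions:

    import time

    class HoverSelect:
        # Selects a tool once the movement identifier has stayed over the display
        # position of that tool identifier past the second preset duration.
        def __init__(self, dwell_seconds: float = 1.0):
            self.dwell = dwell_seconds
            self.hovered = None
            self.since = 0.0

        def update(self, tool_id_under_identifier, now=None):
            now = time.monotonic() if now is None else now
            if tool_id_under_identifier != self.hovered:
                self.hovered = tool_id_under_identifier   # hover target changed
                self.since = now
                return None
            if self.hovered is not None and (now - self.since) >= self.dwell:
                return self.hovered   # e.g. "eraser" as the second virtual tool
            return None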
In one possible implementation, when controlling the first virtual tool to draw with the starting position as a drawing start point according to the change in the position information of the hand detection frame detected in the target time period, the drawing module 304 is configured to:
determine corrected position information according to the detected change in the position information of the hand detection frame, and control the first virtual tool to draw, with the starting position as the drawing start point, according to the corrected position information.
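The disclosure leaves the correction unspecified; exponential smoothing of the detected position is one common choice and is sketched here purely as an example:

    class PositionSmoother:
        # Exponentially smooths hand-box positions to suppress detection jitter,
        # yielding the corrected position information used for drawing.
        def __init__(self, alpha: float = 0.5):
            self.alpha = alpha   # 0 < alpha <= 1; smaller values smooth more
            self.state = None

        def correct(self, raw):
            if self.state is None:
                self.state = raw
            else:
                a = self.alpha
                self.state = (a * raw[0] + (1 - a) * self.state[0],
                              a * raw[1] + (1 - a) * self.state[1])
            return self.state

With alpha = 0.5, each corrected point is the average of the newest detection and the previous corrected point, so small frame-to-frame jitter in the hand detection frame is damped while larger deliberate movements still pass through.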
In one possible implementation, when determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame, the second determining module 303 is configured to:
determine the starting position of the first virtual tool in the display device based on the position information of the hand detection frame and the proportional relation between the image to be detected and the display interface of the display device.
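The proportional relation reduces to scaling the detected position by the ratio between the display size and the image size, as in this sketch:

    def image_to_display(box_x: float, box_y: float,
                         img_w: int, img_h: int,
                         disp_w: int, disp_h: int):
        # Map a hand-box position in the image to be detected to the starting
        # position on the display interface via the proportional relation.
        return (box_x * disp_w / img_w, box_y * disp_h / img_h)

For example, with a 640x360 camera image and a 1920x1080 display, a hand-box center at (320, 180) maps to (960, 540), the center of the display interface.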
For the processing flow of each module in the apparatus and the interaction flows between the modules, reference may be made to the related descriptions in the above method embodiments; details are not repeated here.
Based on the same technical concept, the embodiments of the present disclosure further provide a computer device. Referring to fig. 4, a schematic structural diagram of a computer device 400 provided in an embodiment of the present disclosure includes a processor 401, a memory 402, and a bus 403. The memory 402 is used for storing execution instructions and includes an internal memory 4021 and an external memory 4022. The internal memory 4021 temporarily stores operation data of the processor 401 and data exchanged with the external memory 4022, such as a hard disk; the processor 401 exchanges data with the external memory 4022 through the internal memory 4021. When the computer device 400 runs, the processor 401 communicates with the memory 402 through the bus 403, so that the processor 401 executes the following instructions:
acquiring an image to be detected of a target area;
detecting the image to be detected, and determining hand detection information in the image to be detected; the hand detection information comprises position information of a hand detection frame;
under the condition that the hand detection information meets a trigger condition, determining a starting position of a first virtual tool in a display device based on the position information of the hand detection frame;
and controlling the first virtual tool to draw by taking the starting position as a drawing starting point according to the change of the position information of the hand detection frame detected in the target time period.
The embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the drawing method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product carrying program code; the instructions included in the program code may be used to execute the steps of the drawing method in the foregoing method embodiments. For details, reference may be made to the foregoing method embodiments, which are not repeated here.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the system and apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only a logical division, and there may be other divisions in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with this technical field may still, within the technical scope of the present disclosure, modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent replacements of some of their technical features; such modifications, changes, or replacements do not depart from the spirit and scope of the embodiments of the present disclosure and shall all be covered by its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A drawing method, comprising:
acquiring an image to be detected of a target area;
detecting the image to be detected, and determining hand detection information in the image to be detected; the hand detection information comprises position information of a hand detection frame;
under the condition that the hand detection information meets a trigger condition, determining a starting position of a first virtual tool in a display device based on the position information of the hand detection frame;
and controlling the first virtual tool to draw by taking the starting position as a drawing starting point according to the change of the position information of the hand detection frame detected in the target time period.
2. The method of claim 1, wherein the controlling the first virtual tool to draw by taking the starting position as a drawing starting point comprises:
determining a target tool type corresponding to first gesture information according to the first gesture information indicated in the hand detection information;
controlling the first virtual tool under the target tool type to draw by taking the starting position as a drawing starting point, wherein a result of the drawing conforms to the attribute corresponding to the target tool type.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
starting a drawing function of the display device under the condition that it is detected, based on the image to be detected, that the user is in a first target posture and the duration for which the user is in the first target posture exceeds a first preset duration.
4. The method according to any one of claims 1 to 3, wherein the hand detection information satisfies a trigger condition, including at least one of:
second gesture information indicated in the hand detection information conforms to a preset trigger gesture type;
the duration for which the position of the hand detection frame indicated in the hand detection information stays within the target area exceeds a set duration.
5. The method of claim 2, wherein in the case where the first virtual tool is a virtual brush, the target tool type is a target brush type;
after the determining of the target tool type corresponding to the first gesture information, the method further comprises:
determining a target virtual brush for drawing from a plurality of preset virtual brushes matching the target brush type, and displaying the target virtual brush at the starting position.
6. The method of claim 5, wherein the determining a target virtual brush for drawing from a plurality of preset virtual brushes matching the target brush type comprises:
determining, according to user attribute information of the user corresponding to the hand detection frame, the target virtual brush matching the user attribute information from the plurality of preset virtual brushes matching the target brush type.
7. The method according to any one of claims 1 to 6, wherein the menu area of the display device comprises a plurality of virtual tool identifiers;
the method further comprises the following steps:
presenting a movement identifier at the starting position of the first virtual tool in response to detecting that the user makes a second target gesture;
displaying a second virtual tool at the starting position of the first virtual tool when the detected duration for which the movement identifier stays at the display position corresponding to a second virtual tool identifier among the plurality of virtual tool identifiers exceeds a second preset duration;
processing the drawn portion based on the second virtual tool in response to a target processing operation.
8. The method according to any one of claims 1 to 7, wherein the controlling the first virtual tool to draw by taking the starting position as a drawing starting point according to the change of the position information of the hand detection frame detected in the target time period comprises:
determining corrected position information according to the detected change in the position information of the hand detection frame; and controlling the first virtual tool to draw, by taking the starting position as the drawing starting point, according to the corrected position information.
9. The method according to any one of claims 1 to 8, wherein the determining a starting position of a first virtual tool in a display device based on the position information of the hand detection frame comprises:
determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame and the proportional relation between the image to be detected and the display interface of the display device.
10. A drawing device, comprising:
the acquisition module is configured to acquire an image to be detected of a target area;
the first determining module is configured to detect the image to be detected and determine hand detection information in the image to be detected, the hand detection information comprising position information of a hand detection frame;
the second determining module is configured to determine, under the condition that the hand detection information meets a trigger condition, a starting position of a first virtual tool in a display device based on the position information of the hand detection frame; and
the drawing module is configured to control the first virtual tool to draw by taking the starting position as a drawing starting point according to the change of the position information of the hand detection frame detected in the target time period.
11. A computer device, comprising: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the computer device runs, and the machine-readable instructions, when executed by the processor, performing the steps of the drawing method according to any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the drawing method according to any one of claims 1 to 9.
CN202110996280.6A 2021-08-27 2021-08-27 Drawing method and device, computer equipment and storage medium Pending CN113703577A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110996280.6A CN113703577A (en) 2021-08-27 2021-08-27 Drawing method and device, computer equipment and storage medium
PCT/CN2022/087946 WO2023024536A1 (en) 2021-08-27 2022-04-20 Drawing method and apparatus, and computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110996280.6A CN113703577A (en) 2021-08-27 2021-08-27 Drawing method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113703577A true CN113703577A (en) 2021-11-26

Family

ID=78656074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110996280.6A Pending CN113703577A (en) 2021-08-27 2021-08-27 Drawing method and device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113703577A (en)
WO (1) WO2023024536A1 (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108268181A (en) * 2017-01-04 2018-07-10 奥克斯空调股份有限公司 A kind of control method and device of non-contact gesture identification
CN108921101A (en) * 2018-07-04 2018-11-30 百度在线网络技术(北京)有限公司 Processing method, equipment and readable storage medium storing program for executing based on gesture identification control instruction
US11188145B2 (en) * 2019-09-13 2021-11-30 DTEN, Inc. Gesture control systems
CN112262393A (en) * 2019-12-23 2021-01-22 商汤国际私人有限公司 Gesture recognition method and device, electronic equipment and storage medium
CN113703577A (en) * 2021-08-27 2021-11-26 北京市商汤科技开发有限公司 Drawing method and device, computer equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150242107A1 (en) * 2014-02-26 2015-08-27 Microsoft Technology Licensing, Llc Device control
CN106462341A (en) * 2014-06-12 2017-02-22 微软技术许可有限责任公司 Sensor correlation for pen and touch-sensitive computing device interaction
CN108932053A (en) * 2018-05-21 2018-12-04 腾讯科技(深圳)有限公司 Drawing practice, device, storage medium and computer equipment based on gesture
CN112204509A (en) * 2018-06-01 2021-01-08 苹果公司 Device, method and graphical user interface for an electronic device interacting with a stylus
CN110750160A (en) * 2019-10-24 2020-02-04 京东方科技集团股份有限公司 Drawing method and device for drawing screen based on gesture, drawing screen and storage medium
CN112506340A (en) * 2020-11-30 2021-03-16 北京市商汤科技开发有限公司 Device control method, device, electronic device and storage medium
CN112925414A (en) * 2021-02-07 2021-06-08 深圳创维-Rgb电子有限公司 Display screen gesture drawing method and device and computer readable storage medium
CN112987933A (en) * 2021-03-25 2021-06-18 北京市商汤科技开发有限公司 Device control method, device, electronic device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MUKUL ET AL.: "An Extensible and Nested Gesture for Drawing Tools", ACM SIGSOFT SOFTWARE ENGINEERING NOTES, vol. 39, no. 5, 1 September 2014 (2014-09-01), XP058056966, DOI: 10.1145/2659118.2659133 *
LI Xiaonan; ZHANG Yuhong: "Research on Virtual Copying Technology for Brush Calligraphy: A Case Study of App Inventor Software" (in Chinese), Beauty & Times (美与时代(中)), no. 02, 15 February 2017 (2017-02-15) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023024536A1 (en) * 2021-08-27 2023-03-02 上海商汤智能科技有限公司 Drawing method and apparatus, and computer device and storage medium

Also Published As

Publication number Publication date
WO2023024536A1 (en) 2023-03-02

Similar Documents

Publication Publication Date Title
CN108431729B (en) Three-dimensional object tracking to increase display area
Shen et al. Vision-based hand interaction in augmented reality environment
CN110456907A (en) Control method, device, terminal device and the storage medium of virtual screen
CN114303120A (en) Virtual keyboard
CN110363867B (en) Virtual decorating system, method, device and medium
JP6165485B2 (en) AR gesture user interface system for mobile terminals
KR20130088104A (en) Mobile apparatus and method for providing touch-free interface
CN108027656B (en) Input device, input method, and program
CN112506340A (en) Device control method, device, electronic device and storage medium
CN112926423A (en) Kneading gesture detection and recognition method, device and system
JP7378354B2 (en) Detecting finger presses from live video streams
US20170363936A1 (en) Image processing apparatus, image processing method, and program
CN113703577A (en) Drawing method and device, computer equipment and storage medium
JP6501806B2 (en) INFORMATION PROCESSING APPARATUS, OPERATION DETECTING METHOD, AND COMPUTER PROGRAM
JP2017219942A (en) Contact detection device, projector device, electronic blackboard system, digital signage device, projector device, contact detection method, program and recording medium
US11100317B2 (en) Drawing device and drawing method
JP7199441B2 (en) input device
CN110136233B (en) Method, terminal and storage medium for generating nail effect map
CN111324274A (en) Virtual makeup trial method, device, equipment and storage medium
KR101775080B1 (en) Drawing image processing apparatus and method based on natural user interface and natural user experience
CN114816088A (en) Online teaching method, electronic equipment and communication system
CN114578956A (en) Equipment control method and device, virtual wearable equipment and storage medium
CN113301243A (en) Image processing method, interaction method, system, device, equipment and storage medium
US10185407B2 (en) Display control apparatus, display control method and recording medium
JP5456817B2 (en) Display control apparatus, display control method, information display system, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40061847

Country of ref document: HK