WO2023024536A1 - Drawing method and apparatus, computer device, and storage medium - Google Patents

Drawing method and apparatus, computer device, and storage medium

Info

Publication number
WO2023024536A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
hand detection
information
virtual
detected
Prior art date
Application number
PCT/CN2022/087946
Other languages
English (en)
French (fr)
Inventor
孔祥晖
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2023024536A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/80Creating or modifying a manually drawn or painted image using a manual input device, e.g. mouse, light pen, direction keys on keyboard

Definitions

  • the present disclosure relates to the field of computer technology, and in particular, to a drawing method, device, computer equipment and storage medium.
  • in the related art, drawing is generally performed directly through a touch screen, for example, by touching the touch screen with a finger or a stylus.
  • however, such implementations are difficult to support the drawing process on a larger touch screen, which usually affects the drawing effect.
  • Embodiments of the present disclosure at least provide a drawing method, device, computer equipment, and storage medium.
  • in a first aspect, an embodiment of the present disclosure provides a drawing method, including: acquiring an image to be detected of a target area; detecting the image to be detected and determining hand detection information in the image to be detected, the hand detection information including position information of a hand detection frame; when the hand detection information satisfies a trigger condition, determining a starting position of a first virtual tool in a display device based on the position information of the hand detection frame; and controlling, according to changes in the position information of the hand detection frame detected within a target period, the first virtual tool to draw with the starting position as the drawing starting point.
  • in the above method, after the image to be detected of the target area is acquired, the hand detection information in the image to be detected can be determined, and the first virtual tool can be controlled to draw when a change in the position information of the hand detection frame is detected, so that the user does not need to contact the display device directly through a medium such as a finger or a stylus during drawing, which enriches the available drawing methods.
  • in particular, when the display device is large, that is, when the display screen used to present the picture is large, the overall drawing effect can be viewed during drawing by means of long-distance drawing, reducing interruptions and discontinuities caused when the user draws long lines, which optimizes the interaction process and thereby improves the drawing effect.
  • controlling the first virtual tool to draw with the starting position as the starting point of drawing includes: determining, according to first gesture information indicated in the hand detection information, a target tool type corresponding to the first gesture information; and controlling the first virtual tool under the target tool type to draw with the starting position as the drawing starting point, wherein the drawing result conforms to the attributes corresponding to the target tool type.
  • different gesture information corresponds to different tool types, so the user can switch tool types by changing the gesture information, which enriches the interaction between the user and the device during drawing and improves the user experience.
  • the method also includes: starting the drawing function of the display device when it is detected, based on the image to be detected, that the user is in a first target pose and the duration of the first target pose exceeds a first preset duration.
  • the hand detection information meets a trigger condition, including at least one of the following:
  • the second gesture information indicated in the hand detection information conforms to a preset trigger gesture type
  • the duration of the position of the hand detection frame indicated in the hand detection information within the target area exceeds a set duration.
  • when the first virtual tool is a virtual paintbrush, the target tool type is a target brush type; after the target tool type corresponding to the first gesture information is determined, the method further includes:
  • determining a target virtual brush for drawing from a plurality of preset virtual brushes matching the target brush type, and displaying the target virtual brush at the starting position.
  • by displaying the target virtual brush, the user can clearly and intuitively view the current drawing process and can then adjust the drawing.
  • determining a target virtual brush for drawing from among a plurality of preset virtual brushes matching the target brush type includes: determining, according to user attribute information of the user corresponding to the hand detection frame, the target virtual brush matching the user attribute information from the plurality of preset virtual brushes matching the target brush type.
  • the displayed virtual paintbrush matches the user's attribute information, so the virtual paintbrush can be displayed in a personalized manner, improving the user's experience during drawing.
  • the menu area of the display device includes multiple virtual tool identifiers;
  • the method also includes: in response to detecting that the user makes a second target pose, displaying a moving identifier at the starting position of the first virtual tool; when the duration for which the moving identifier stays at the display position corresponding to a second virtual tool identifier among the multiple virtual tool identifiers exceeds a second preset duration, displaying a second virtual tool at the starting position of the first virtual tool; and, in response to a target processing operation, processing the drawn portion based on the second virtual tool.
  • in this way, the user can switch virtual tools without contact, which increases the interaction between the user and the device during drawing and improves the user experience.
  • controlling the first virtual tool to draw with the starting position as the starting point of drawing according to the change of the position information of the hand detection frame detected within the target period includes: determining corrected position information according to the detected change in the position information of the hand detection frame; and controlling, according to the corrected position information, the first virtual tool to draw with the starting position as the drawing starting point. In this way, the probability of a poor drawing effect caused by hand shake is reduced, improving the drawing effect.
  • determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame includes: determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame and the proportional relationship between the image to be detected and the display interface of the display device.
  • an embodiment of the present disclosure further provides a drawing device, including:
  • An acquisition module configured to acquire the image to be detected of the target area
  • a first determining module configured to detect the image to be detected, and determine hand detection information in the image to be detected; the hand detection information includes position information of a hand detection frame;
  • the second determination module is used to determine the initial position of the first virtual tool in the display device based on the position information of the hand detection frame when the hand detection information meets the trigger condition;
  • the drawing module is configured to control the first virtual tool to use the starting position as a drawing starting point to draw according to the change of the position information of the hand detection frame detected within the target time period.
  • the drawing module, when controlling the first virtual tool to draw with the starting position as the starting point of drawing, is configured to: determine, according to first gesture information indicated in the hand detection information, a target tool type corresponding to the first gesture information; and control the first virtual tool under the target tool type to draw with the starting position as the drawing starting point, wherein the drawing result conforms to the attributes corresponding to the target tool type.
  • the device further includes a control module, configured to: start the drawing function of the display device when it is detected, based on the image to be detected, that the user is in a first target pose and the duration of the first target pose exceeds a first preset duration.
  • the hand detection information meeting a trigger condition includes at least one of the following:
  • the second gesture information indicated in the hand detection information conforms to a preset trigger gesture type
  • the duration of the position of the hand detection frame indicated in the hand detection information within the target area exceeds a set duration.
  • the target tool type is a target brush type
  • the second determination module is further configured to:
  • a target virtual brush for drawing is determined from a plurality of preset virtual brushes that match the type of the target brush, and the target virtual brush is displayed at the starting position.
  • the second determination module is configured to: when determining a target virtual brush for drawing from a plurality of preset virtual brushes matching the type of the target brush:
  • the target virtual brush matching the user attribute information is determined from a plurality of preset virtual brush types matching the target brush type.
  • the menu area of the display device includes multiple virtual tool identifiers
  • the drawing module is also configured to: in response to detecting that the user makes a second target pose, display a moving identifier at the starting position of the first virtual tool; when the duration for which the moving identifier stays at the display position corresponding to a second virtual tool identifier among the multiple virtual tool identifiers exceeds a second preset duration, display a second virtual tool at the starting position of the first virtual tool; and, in response to a target processing operation, process the drawn portion based on the second virtual tool.
  • the drawing module, when controlling the first virtual tool to draw with the starting position as the drawing starting point according to the change of the position information of the hand detection frame detected within the target time period, is configured to: determine corrected position information according to the detected change in the position information of the hand detection frame; and control, according to the corrected position information, the first virtual tool to draw with the starting position as the drawing starting point.
  • the first determination module when determining the initial position of the first virtual tool in the display device based on the position information of the hand detection frame, is configured to:
  • the initial position of the first virtual tool in the display device is determined.
  • in a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory, and a bus; the memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor communicates with the memory through the bus; and when the machine-readable instructions are executed by the processor, the steps of the above first aspect, or of any possible implementation of the first aspect, are executed.
  • in a fourth aspect, embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above first aspect, or of any possible implementation of the first aspect, are executed.
  • in a fifth aspect, an optional implementation of the present disclosure further provides a computer program product, including computer-readable code, or a computer-readable storage medium carrying computer-readable code; when the computer-readable code runs on a processor of an electronic device, the processor in the electronic device executes the steps of the above first aspect, or of any possible implementation of the first aspect.
  • FIG. 1 shows a flowchart of a drawing method provided by an embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of the key point position information of the user's half-body limbs and the position information of the hand detection frame in the drawing method provided by an embodiment of the present disclosure;
  • FIG. 3 shows a schematic diagram of a drawing device provided by an embodiment of the present disclosure
  • FIG. 4 shows a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
  • the drawing method provided in the embodiments of the present disclosure can be executed by hardware, generally a computer device with display capability, such as a smart TV, a smartphone, or a tablet computer, or by a processor running computer-executable code.
  • it should be noted that the drawing mentioned in the present disclosure includes, but is not limited to, painting, writing, and other editing operations on the display interface realized through interaction between the user and the computer device.
  • the display device mentioned herein may refer to the above-mentioned computer device, or may be a display device connected to the above-mentioned computer device, such as a display, and the specific calculation process is performed by the computer device, and the display process is performed by the display device.
  • FIG. 1 is a flow chart of a drawing method provided by an embodiment of the present disclosure, the method includes steps 101 to 104, wherein:
  • Step 101: Acquire an image to be detected of a target area.
  • Step 102: Detect the image to be detected and determine hand detection information in the image to be detected; the hand detection information includes position information of a hand detection frame.
  • Step 103: When the hand detection information satisfies a trigger condition, determine the starting position of the first virtual tool in the display device based on the position information of the hand detection frame.
  • Step 104: Control the first virtual tool to draw with the starting position as the drawing starting point according to the change of the position information of the hand detection frame detected within the target time period.
  • in the above method, after the image to be detected of the target area is acquired, the hand detection information in the image to be detected can be determined, and the first virtual tool can be controlled to draw when a change in the position information of the hand detection frame is detected, so that the user does not need to contact the display device directly through a medium such as a finger or a stylus during drawing, which enriches the available drawing methods.
  • in particular, when the display device is large, that is, when the display screen used to present the picture is large, the overall drawing effect can be viewed during drawing by means of long-distance drawing, reducing interruptions and discontinuities caused when the user draws long lines, which optimizes the interaction process and thereby improves the drawing effect.
  • For steps 101 and 102:
  • the target area can be any area where the display interface of the display device can be viewed.
  • the camera device may be deployed on or near the display device.
  • the camera device can collect the scene image of the target area in real time, the scene image includes the image to be detected, and then the image to be detected of the target area can be obtained from the camera device through data transmission.
  • the deployment position of the camera device may be determined according to the position of the target area, so that the shooting area of the deployed camera device at least contains the target area.
  • the image to be detected may be any frame image corresponding to the target area.
  • the image to be detected may be an image corresponding to the target area at the current moment, or an image corresponding to the target area at a historical moment.
  • the image to be detected may be detected to determine the hand detection information of the user in the image to be detected.
  • the hand detection information may include position information of a hand detection frame, and the hand detection frame may refer to a minimum detection frame including the user's hand in the image to be detected.
  • the target neural network used for detecting key points may be trained so that the trained target neural network satisfies preset conditions, for example, the loss value of the trained target neural network is smaller than a set loss threshold.
  • the image to be detected can be detected through the trained target neural network, and the position information of the user's hand detection frame in the image to be detected can be determined.
  • the target neural network can identify the image to be detected and determine the key point position information of the user's half-body limbs included in the image to be detected, and the target neural network can also determine the position information of the user's hand detection frame based on the half-body limb key point position information and the image to be detected.
  • the number and positions of the half-body limb key points can be set as required; for example, the number of limb key points may be 14 or 17.
  • the position information of the hand detection frame includes coordinate information of four vertices of the detection frame and coordinate information of the center point of the hand detection frame.
  • the key points of the user's half-body limbs shown in FIG. 2 may include the head vertex 5, the head center point 4, the neck joint point 3, the left shoulder joint point 9, the right shoulder joint point 6, the left elbow joint point 10, the right elbow joint point 7, the left wrist joint point 11, the right wrist joint point 8, the half-body limb center point 12, the crotch joint point 1, the crotch joint point 2, and the crotch center point 0.
  • the hand detection frames may include the four vertices 13, 15, 16, 17 of the left-hand detection frame and the center point 14 of the left-hand frame, as well as the four vertices 18, 20, 21, 22 of the right-hand detection frame and the center point 19 of the right-hand frame.
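  • As a minimal, non-authoritative sketch, the detection output described above could be represented as follows in Python; the class and field names (HandBox, HandDetectionInfo, and so on) are illustrative assumptions, not names from the disclosure.
```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

Point = Tuple[float, float]  # (x, y) in image pixel coordinates

@dataclass
class HandBox:
    vertices: Tuple[Point, Point, Point, Point]  # four corners of the detection frame
    center: Point                                # center point of the detection frame

@dataclass
class HandDetectionInfo:
    keypoints: Dict[int, Point]    # e.g. 14 or 17 half-body limb key points, indexed as in FIG. 2
    left_hand: Optional[HandBox]   # None when that hand is not detected
    right_hand: Optional[HandBox]
```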
  • For step 103:
  • before detecting whether the hand detection information satisfies the trigger condition, it may first be detected whether the display device is in the drawing interface, that is, whether the interface displayed by the display device through the display screen belongs to the drawing interface.
  • when it is detected that the display device is not in the drawing interface, the drawing function can be started first, that is, the drawing application program can be started, or a drawing application program running in the background can be switched to the foreground for display.
  • in a possible implementation, the drawing function of the display device is started when it is detected, based on the image to be detected, that the user is in a first target pose and the duration of the first target pose exceeds a first preset duration.
  • here, the first target pose may refer to a target action, such as waving or making a scissors-hand gesture; or the first target pose may mean that the position corresponding to the center point of the hand detection frame in the display device is located within a first target area, where the first target area refers to the area in which the drawing function can be started or invoked, for example the area where the identifier (e.g., an icon) of the drawing application program is located, so that the user can start or invoke the drawing function by performing the corresponding gesture or operation in the first target area.
  • when detecting whether the user makes the target action, the image to be detected can, for example, be input into an action recognition network, and whether the user makes the target action can be obtained based on the action recognition network.
  • here, the action recognition network may be trained based on sample images carrying action labels, and the images to be detected input into the action recognition network may be multiple consecutive images.
  • the action label refers to a label used to represent the category of the action contained in the sample image, for example a scissors-hand gesture or a fist clasp.
  • in this way, the user can start the drawing function directly without contact, which simplifies the user's drawing process, reduces the probability that drawing efficiency is affected because the user cannot find the drawing tool's identifier, and makes drawing more engaging.
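  • A hedged sketch of such a pose-hold trigger follows; the threshold value and all names are illustrative assumptions rather than parameters from the disclosure.
```python
import time
from typing import Optional

FIRST_PRESET_DURATION = 1.5  # seconds; illustrative first preset duration

class DrawFunctionTrigger:
    """Starts the drawing function once the first target pose is held long enough."""
    def __init__(self):
        self._pose_started_at = None  # time when the target pose was first seen

    def update(self, in_first_target_pose: bool, now: Optional[float] = None) -> bool:
        """Feed one per-frame recognition result; returns True when the
        drawing function should be started."""
        now = time.monotonic() if now is None else now
        if not in_first_target_pose:
            self._pose_started_at = None
            return False
        if self._pose_started_at is None:
            self._pose_started_at = now
        return now - self._pose_started_at >= FIRST_PRESET_DURATION
```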
  • when the display device is not in the drawing interface, the position information of a corresponding moving identifier can be determined according to the position information of the hand detection frame, and the moving identifier is used to represent the hand detection frame; the moving identifier may be, for example, a mouse cursor identifier.
  • when the display device is in the drawing interface, the first virtual tool may be used to represent the hand detection frame. The method for determining the position information of the first virtual tool based on the position information of the hand detection frame is introduced below.
  • the hand detection information meeting the trigger condition may mean that the hand detection information meets the trigger condition for starting drawing.
  • the hand detection information meets the trigger condition including at least one of the following:
  • the second gesture information indicated in the hand detection information conforms to a preset trigger gesture type
  • the duration of the position of the hand detection frame indicated in the hand detection information within the target area exceeds a set duration.
  • here, the trigger gesture type may be a gesture for instructing to start drawing, for example clasping fists with both hands or making an "OK" sign.
  • when determining the second gesture information indicated in the image to be detected, it can, for example, be identified through a gesture recognition network; the training process of the gesture recognition network is similar to that of the above-mentioned action recognition network and will not be repeated here.
  • the target area may refer to an area within a preset range of the first detected position of the hand detection frame; the position of the hand detection frame may refer to the position obtained after mapping the position information of the hand detection frame onto the display screen/display interface of the display device, or may refer to the position indicated by the position information of the hand detection frame.
  • in a specific application scenario, the hand detection information satisfying the trigger condition may mean that the user makes a preset trigger gesture, or that the user's hand remains unchanged (or moves only within a small range) for longer than the set duration.
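  • A short sketch of this two-branch trigger check; the gesture labels and the one-second threshold are assumptions for illustration only.
```python
def trigger_satisfied(second_gesture: str, dwell_seconds: float,
                      trigger_gestures=("fists_clasped", "ok_sign"),
                      set_duration: float = 1.0) -> bool:
    # (a) the second gesture conforms to a preset trigger gesture type, or
    # (b) the hand detection frame has stayed inside the target area longer
    #     than the set duration.
    return second_gesture in trigger_gestures or dwell_seconds > set_duration
```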
  • For step 104:
  • a virtual tool may refer to a virtual tool used for drawing, examples of which include a paintbrush, a coloring pen, and an eraser.
  • the first virtual tool may refer to a default virtual tool, or a virtual tool determined based on historical drawing operations.
  • for example, if the hand detection information is detected to satisfy the trigger condition for the first time, the default virtual paintbrush can be determined as the first virtual tool; if the hand detection information is detected to satisfy the trigger condition for the Nth time, the virtual tool used at the end of the (N-1)-th drawing operation can be taken as the first virtual tool, where N is a positive integer greater than 1.
  • for example, if the virtual tool used at the end of the Nth drawing operation is an eraser, then when the hand detection information is detected to satisfy the trigger condition for the (N+1)-th time and a drawing operation is performed, the corresponding first virtual tool is also the eraser.
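  • A small sketch of this selection rule; the function and tool names are illustrative assumptions.
```python
from typing import Optional

def select_first_virtual_tool(trigger_count: int,
                              last_used_tool: Optional[str],
                              default_tool: str = "brush") -> str:
    if trigger_count == 1 or last_used_tool is None:
        return default_tool   # first trigger: default virtual paintbrush
    return last_used_tool     # Nth trigger: tool from the (N-1)-th operation
```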
  • in a possible implementation, when controlling the first virtual tool to draw with the starting position as the drawing starting point, the target tool type corresponding to the first gesture information indicated in the hand detection information may first be determined; the first virtual tool under the target tool type is then controlled to draw with the starting position as the drawing starting point, where the drawing result conforms to the attributes corresponding to the target tool type.
  • the attributes corresponding to the target tool type may exemplarily include color, thickness, size, processing type, and the like.
  • Different gesture information corresponds to different tool types.
  • the user can switch the tool type by controlling the change of the gesture information, which can enrich the interaction process between the user and the device during the drawing process and improve the user experience.
  • the image to be detected of the target area may be acquired in real time, and the hand detection information in the image to be detected may also be detected in real time.
  • the first gesture information herein may refer to gesture information detected from the image to be detected after it is determined that the hand detection information satisfies a trigger condition.
  • the target tool type corresponding to the first gesture information may refer to different tools.
  • for example, the target tool type corresponding to gesture information A is an eraser, and the target tool type corresponding to gesture information B is a paintbrush.
  • alternatively, the target tool type corresponding to the first gesture information may refer to different types of the first virtual tool; for example, if the first virtual tool is a paintbrush, the target tool type corresponding to gesture information A is a thick brush, and the target tool type corresponding to gesture information B is a fine brush.
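  • A hypothetical mapping of this kind is sketched below; the gesture labels and the dictionary itself are assumptions mirroring the examples above.
```python
GESTURE_TO_TOOL_TYPE = {
    "gesture_a": "eraser",
    "gesture_b": "paintbrush",
    "gesture_c": "thick_brush",  # types of the same tool are also possible
    "gesture_d": "fine_brush",
}

def target_tool_type(first_gesture: str, current_type: str) -> str:
    # Keep the current tool type when the gesture is not recognized.
    return GESTURE_TO_TOOL_TYPE.get(first_gesture, current_type)
```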
  • in specific implementation, controlling the first virtual tool under the target tool type to draw with the starting position as the drawing starting point may be performed as follows: based on the position information of the hand detection frame and the historical position information of the hand detection frame corresponding to the adjacent historical image that temporally precedes the image to be detected, the movement trajectory of the hand detection frame is determined, and drawing is then performed based on the movement trajectory of the hand detection frame and the starting position.
  • after one drawing step is completed, the position information of the first virtual tool can be re-determined based on the changed position information of the hand detection frame, and the re-determined position information is taken as the new starting position for the subsequent drawing steps.
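  • A rough sketch of such a trajectory-based stroke loop; `canvas.line` stands in for whatever drawing backend is used, and `map_to_display` converts image coordinates to display coordinates (see the proportional mapping sketched later), so both are assumptions.
```python
def draw_stroke(canvas, start, frame_centers, map_to_display):
    prev = start
    for center in frame_centers:   # successive hand-box centers over the target period
        cur = map_to_display(center)
        canvas.line(prev, cur)     # draw one segment of the trajectory
        prev = cur                 # re-determined starting position
    return prev                    # starting point for the next drawing step
```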
  • in a possible implementation, when the first virtual tool is a virtual paintbrush, the target tool type may be a target brush type; after the target tool type corresponding to the first gesture information is determined, a target virtual brush for drawing may also be determined from a plurality of preset virtual brushes matching the target brush type, and the target virtual brush is displayed at the starting position.
  • when determining a target virtual brush for drawing from among the plurality of preset virtual brushes matching the target brush type, the selection may, for example, be combined with the user's attribute information: from the preset virtual brushes matching the target brush type, the target virtual brush matching the user's attribute information is determined.
  • the user attribute information may illustratively include age, gender, occupation, and so on; if the user attribute information indicates a 30-year-old male, the target virtual paintbrush matching the attribute information may be a virtual fountain pen; if the user attribute information indicates a 5-year-old female, the target virtual paintbrush matching the attribute information may be a virtual cartoon pencil.
  • in this way, the displayed virtual paintbrush matches the user attribute information, so the virtual tool can be displayed in a personalized manner, improving the user's experience during drawing.
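  • A hedged sketch of such attribute-based selection, following the example above; the age threshold and brush names are illustrative assumptions.
```python
def pick_target_virtual_brush(age: int, gender: str) -> str:
    if age < 10:
        return "virtual_cartoon_pencil"   # young child -> cartoon pencil
    if gender == "male":
        return "virtual_fountain_pen"     # adult male -> fountain pen
    return "virtual_default_brush"        # fallback among the preset brushes
```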
  • when displaying the target virtual paintbrush at the starting position, the user's drawing habits, which may be preset, can be taken into account; for example, the paintbrush can be displayed obliquely at a preset inclination angle, so as to vividly present the process of the user holding a drawing tool and editing on a surface such as paper.
  • during drawing, the user's hand may shake; to prevent this from affecting the drawing effect, anti-shake processing may be performed before controlling the first virtual tool to draw.
  • specifically, corrected position information can be determined according to the detected change in the position information of the hand detection frame; according to the corrected position information, the first virtual tool is controlled to draw with the starting position as the drawing starting point.
  • here, the corrected position information refers to the information obtained by performing correction processing after the position information of the hand detection frame is mapped to the display device; for example, the correction processing may be smoothing.
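  • One possible smoothing correction is sketched below as an exponential moving average over the mapped positions; the disclosure only says the correction may be a smoothing process, so this particular filter and its coefficient are assumptions.
```python
class PositionSmoother:
    def __init__(self, alpha: float = 0.4):
        self.alpha = alpha   # smaller alpha -> stronger smoothing
        self._state = None

    def correct(self, x: float, y: float):
        """Return the corrected (smoothed) position for one mapped sample."""
        if self._state is None:
            self._state = (x, y)
        else:
            sx, sy = self._state
            self._state = (sx + self.alpha * (x - sx),
                           sy + self.alpha * (y - sy))
        return self._state
```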
  • the menu area of the display device may include multiple virtual tool identifiers, for example, may include names, symbols, etc. of multiple virtual tools.
  • in a possible implementation, the user may switch the virtual tool and perform drawing operations based on the switched virtual tool.
  • for example, in response to detecting that the user makes a second target pose, a moving identifier is displayed at the starting position of the first virtual tool; then, when it is detected that the duration for which the moving identifier stays at the display position corresponding to a second virtual tool identifier among the multiple virtual tool identifiers exceeds a second preset duration, a second virtual tool is displayed at the starting position of the first virtual tool.
  • here, the second target pose may refer to a pose for instructing to stop drawing. For example, if the user's palm faces the display device, drawing can be performed based on the movement of the palm (that is, the movement of the hand detection frame); if the back of the user's hand faces the display device, a moving identifier, for example a mouse cursor identifier, can be displayed at the starting position of the first virtual tool.
  • it should be noted that the starting position of the first virtual tool can change according to the change of the position information of the hand detection frame, and the timing of this change is when the user makes the second target pose.
  • when it is detected that the user makes the second target pose, the starting position can be updated in real time according to the position information of the hand detection frame; in other words, the moving identifier changes according to the change of the position information of the hand detection frame, and the display position corresponding to the moving identifier after the position change is also the starting position.
  • processing the drawn portion based on the second virtual tool in response to the target processing operation may mean executing, on the drawn portion, the processing function corresponding to the second virtual tool; for example, if the second virtual tool is an eraser, part of the picture that has been drawn can be removed. Responding to the target processing operation may mean determining the processing position corresponding to the second virtual tool in response to the movement of the user's hand detection frame.
  • the user can switch the virtual tool without contact, which increases the interaction between the user and the device during the drawing process and improves the user experience.
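  • As a rough, non-authoritative sketch, the dwell check behind this switching flow might look as follows; the function name and the one-second threshold are assumptions for illustration.
```python
from typing import Optional

def maybe_switch_tool(hovered_tool: Optional[str], hover_seconds: float,
                      second_preset_duration: float = 1.0) -> Optional[str]:
    """The hovered menu tool becomes the second virtual tool once the moving
    identifier has stayed over its display position longer than the second
    preset duration."""
    if hovered_tool is not None and hover_seconds > second_preset_duration:
        return hovered_tool   # the second virtual tool
    return None               # keep the current tool
```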
  • in a possible implementation, when position information of multiple hand detection frames is detected, a target hand detection frame can be randomly selected and drawing performed based on the position information of that target hand detection frame; alternatively, when position information of multiple hand detection frames is detected, the starting positions of two first virtual tools are determined based on the two hand detection frames respectively, and the two first virtual tools are controlled to draw based on the changes in the position information of the two hand detection frames respectively.
  • to sum up, the above embodiments can be summarized as follows: when the hand detection information satisfies the trigger condition, the first virtual tool is controlled to draw based on the change of the position information of the hand detection frame; when it is detected that the user makes the second target pose, drawing stops and the moving identifier is displayed, the position of the moving identifier changing according to the change of the position information of the hand detection frame; when it is detected again that the hand detection information satisfies the trigger condition, or when the user stops making the second target pose, the display position of the first virtual tool can be re-determined based on the position information of the hand detection frame and displayed.
  • the hand detection information satisfying the trigger condition can be understood as the start of a drawing step, and the user making the second target pose can be understood as the end of the drawing step.
  • the following introduces the specific method of determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame, that is, how to convert between the image coordinates in the image to be detected and the coordinates in the display device.
  • in a possible implementation, the starting position of the first virtual tool in the display device may be determined based on the position information of the hand detection frame and the proportional relationship between the image to be detected and the display interface of the display device.
  • in specific implementation, the target position information of the center point of the user's hand detection frame on the display interface can be determined through the proportional relationship between the image to be detected and the display interface of the display device together with the position information of the user's hand detection frame; the target position information of the center point of the hand detection frame on the display interface is then determined as the starting position of the first virtual tool.
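  • A minimal sketch of this proportional mapping, assuming both the image and the display interface are addressed by pixel width and height; the function and parameter names are illustrative.
```python
def image_to_display(center, image_size, display_size):
    (cx, cy) = center
    (iw, ih) = image_size    # width/height of the image to be detected
    (dw, dh) = display_size  # width/height of the display interface
    return (cx * dw / iw, cy * dh / ih)
```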
  • in an optional implementation, before the starting position of the first virtual tool is determined based on the position information of the hand detection frame, the method further includes: detecting the image to be detected and determining the target joint point position information of the user included in the image to be detected.
  • determining the starting position of the first virtual tool based on the position information of the hand detection frame then includes: determining the starting position of the first virtual tool based on the position information of the hand detection frame, the target joint point position information, and the reference scale corresponding to the user, where the reference scale is used to amplify the first distance between the position of the hand detection frame and the position of the target joint point.
  • the reference ratio can be determined according to the following steps:
  • Step 1: Obtain the distance between the hand detection frame and the target joint point to obtain the arm length of the user in the image to be detected.
  • Step 2: Obtain the distances between the target joint point and each vertex of the image to be detected to obtain a second distance, the second distance being the maximum of the distances between the target joint point and the vertices.
  • Step 3: Determine the ratio of the arm length to the second distance as the reference ratio.
  • in step 1, since the distance between the target joint point and the hand detection frame can represent the longest distance a person's arm extends during movement, the distance between the center point of the hand detection frame and the target joint point can be determined first to obtain the arm length of the user in the image to be detected.
  • for example, referring to FIG. 2, the first straight-line distance between the right shoulder joint point 6 (the target joint point) and the right elbow joint point 7, the second straight-line distance between the right elbow joint point 7 and the right wrist joint point 8, and the third straight-line distance between the right wrist joint point 8 and the center point 19 of the right-hand frame (the hand detection frame) can be calculated, and the sum of the first, second, and third straight-line distances is determined as the user's arm length.
  • alternatively, the first straight-line distance between the left shoulder joint point 9 (the target joint point) and the left elbow joint point 10, the second straight-line distance between the left elbow joint point 10 and the left wrist joint point 11, and the third straight-line distance between the left wrist joint point 11 and the center point 14 of the left-hand frame (the hand detection frame) can be calculated, and the sum of the three straight-line distances is determined as the user's arm length.
  • in step 2, after the straight-line distances between the target joint point and each of the four vertices of the image to be detected are calculated, the second distance can be determined from the four resulting straight-line distances, that is, the largest of the four calculated distances is chosen as the second distance.
  • alternatively, the image to be detected can be divided in advance into four equal areas with the central pixel of the image as the origin: the first area at the upper left, the second area at the upper right, the third area at the lower left, and the fourth area at the lower right. Based on the target joint point position information, the area where the target joint point is located can be determined; then, based on that area, the target vertex farthest from the target joint point can be determined, and the straight-line distance between the target joint point and the target vertex is calculated to obtain the second distance. For example, if the target joint point is in the third area, the vertex in the upper right corner is determined as the target vertex; if the target joint point is in the fourth area, the vertex in the upper left corner is determined as the target vertex.
  • in step 3, the ratio of the farthest straight-line distance c to the second distance d can be determined as the reference ratio, that is, the reference ratio is c/d, where the farthest straight-line distance c is the maximum arm length calculated in step 1.
  • in this way, the ratio of the arm length to the second distance is determined as the reference ratio, so that when the first distance is enlarged based on the determined reference ratio, the resulting target distance is not greater than the second distance, reducing the probability that the determined intermediate position information exceeds the range of the image to be detected. Concretely, if the first distance is a, the enlarged target distance is a/(c/d) = (a/c)·d; since c is the farthest straight-line distance, a/c is not greater than 1, so (a/c)·d is not greater than d.
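  • A sketch of steps 1 to 3, following the right-arm example above with the shoulder as the target joint point; the function and parameter names are illustrative assumptions.
```python
import math

def _dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def reference_ratio(shoulder, elbow, wrist, hand_center, image_w, image_h):
    # Step 1: arm length = shoulder->elbow + elbow->wrist + wrist->hand-box-center.
    arm_length = _dist(shoulder, elbow) + _dist(elbow, wrist) + _dist(wrist, hand_center)
    # Step 2: second distance = max distance from the target joint point
    # (the shoulder here) to the four vertices of the image to be detected.
    vertices = [(0, 0), (image_w, 0), (0, image_h), (image_w, image_h)]
    second_distance = max(_dist(shoulder, v) for v in vertices)
    # Step 3: reference ratio = arm length / second distance (c/d above).
    return arm_length / second_distance
```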
  • the initial position of the first virtual tool is determined based on the position information of the hand detection frame, the position information of the target joint point, and the corresponding reference scale of the user, including:
  • Step 1: Based on the position information of the hand detection frame, the target joint point position information, and the reference scale corresponding to the user, determine the intermediate position information of the first virtual tool in the image coordinate system corresponding to the image to be detected.
  • Step 2: Determine the target display position of the moving identifier on the display device based on the intermediate position information.
  • the specific execution of step 1 is as follows:
  • first, the first distance between the hand detection frame and the target joint point is obtained. For example, if the position information of the center point of the hand detection frame is (x1, y1) and the position information of the target joint point is (x2, y2), the first distance C1 is C1 = √((x1 − x2)² + (y1 − y2)²).
  • then, based on the first distance and the reference scale, the position information of the center point of the hand detection frame after distance enlargement can be determined, and the position information of the center point of the enlarged hand detection frame is determined as the intermediate position information of the moving identifier in the image coordinate system corresponding to the image to be detected.
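  • A sketch of this enlargement, assuming the reference ratio c/d computed earlier; dividing the offset by the ratio enlarges the first distance C1 to C1/(c/d), consistent with the a/(c/d) reasoning above. Names are illustrative.
```python
def intermediate_position(hand_center, joint, ratio):
    """Enlarge the offset from the target joint point to the hand-box center
    by the reference ratio and return the enlarged center position."""
    dx = hand_center[0] - joint[0]
    dy = hand_center[1] - joint[1]
    return (joint[0] + dx / ratio, joint[1] + dy / ratio)
```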
  • it can be understood that the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • based on the same inventive concept, the embodiments of the present disclosure also provide a drawing device corresponding to the drawing method; since the problem-solving principle of the device is similar to that of the above drawing method, the implementation of the device can refer to the implementation of the method, and repeated descriptions are omitted here.
  • referring to FIG. 3, which is a schematic structural diagram of a drawing device provided by an embodiment of the present disclosure, the device includes: an acquisition module 301, a first determination module 302, a second determination module 303, a drawing module 304, and a control module 305; wherein the acquisition module 301 is configured to acquire the image to be detected of the target area;
  • the first determining module 302 is configured to detect the image to be detected, and determine hand detection information in the image to be detected; the hand detection information includes position information of a hand detection frame;
  • the second determination module 303 is used to determine the initial position of the first virtual tool in the display device based on the position information of the hand detection frame when the hand detection information meets the trigger condition;
  • the drawing module 304 is configured to control the first virtual tool to draw with the starting position as the starting point of drawing according to the change of the position information of the hand detection frame detected within the target time period.
  • the drawing module 304, when controlling the first virtual tool to draw with the starting position as the starting point of drawing, is configured to: determine, according to the first gesture information indicated in the hand detection information, the target tool type corresponding to the first gesture information; and control the first virtual tool under the target tool type to draw with the starting position as the drawing starting point.
  • the device further includes a control module 305, configured to:
  • the drawing function of the display device is started.
  • the hand detection information meeting a trigger condition includes at least one of the following:
  • the second gesture information indicated in the hand detection information conforms to a preset trigger gesture type
  • the duration of the position of the hand detection frame indicated in the hand detection information within the target area exceeds a set duration.
  • the target tool type is a target brush type
  • the second determination module 303 is further configured to:
  • a target virtual brush for drawing is determined from a plurality of preset virtual brushes that match the type of the target brush, and the target virtual brush is displayed at the starting position.
  • the second determining module 303 when determining a target virtual brush for drawing from among the preset virtual brush types matching the target brush type, is configured to:
  • the target virtual brush matching the user attribute information is determined from a plurality of preset virtual brush types matching the target brush type.
  • the menu area of the display device includes multiple virtual tool identifiers
  • the drawing module 304 is also used for:
  • the drawn portion is processed based on the second virtual tool in response to the target processing operation.
  • the drawing module 304, when controlling the first virtual tool to draw with the starting position as the drawing starting point according to the change of the position information of the hand detection frame detected within the target period, is configured to:
  • the corrected position information is determined; according to the corrected position information, the first virtual tool is controlled to draw with the starting position as the starting point of drawing.
  • the first determination module 302 when determining the initial position of the first virtual tool in the display device based on the position information of the hand detection frame, is configured to:
  • the initial position of the first virtual tool in the display device is determined.
  • as shown in FIG. 4, which is a schematic structural diagram of a computer device 400 provided by an embodiment of the present disclosure, the device includes a processor 401, a memory 402, and a bus 403.
  • the memory 402 is used to store execution instructions, including a memory 4021 and an external memory 4022; the memory 4021 here is also called an internal memory, and is used to temporarily store calculation data in the processor 401 and exchange data with an external memory 4022 such as a hard disk.
  • the processor 401 exchanges data with the external memory 4022 through the memory 4021.
  • when the computer device runs, the processor 401 communicates with the memory 402 through the bus 403, so that the processor 401 executes the following instructions: acquire an image to be detected of a target area; detect the image to be detected and determine hand detection information in the image to be detected, the hand detection information including position information of a hand detection frame; when the hand detection information satisfies a trigger condition, determine the starting position of the first virtual tool in the display device based on the position information of the hand detection frame; and control the first virtual tool to draw with the starting position as the drawing starting point according to the change of the position information of the hand detection frame detected within the target period.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the drawing method described in the above-mentioned method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • the embodiments of the present disclosure also provide a computer program product carrying program code, where the instructions included in the program code can be used to execute the steps of the drawing method described in the above method embodiments; for details, refer to the above method embodiments, which will not be repeated here.
  • the above-mentioned computer program product may be specifically implemented by means of hardware, software or a combination thereof.
  • in one optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to realize the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor.
  • based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a drawing method and apparatus, a computer device, and a storage medium, including: acquiring an image to be detected of a target area (101); detecting the image to be detected and determining hand detection information in the image to be detected, the hand detection information including position information of a hand detection frame (102); when the hand detection information satisfies a trigger condition, determining a starting position of a first virtual tool in a display device based on the position information of the hand detection frame (103); and controlling, according to changes in the position information of the hand detection frame detected within a target period, the first virtual tool to draw with the starting position as the drawing starting point (104).

Description

Drawing method and apparatus, computer device, and storage medium
This application claims priority to the Chinese patent application filed with the China Patent Office on August 27, 2021 with application No. 202110996280.6 and entitled "Drawing method and apparatus, computer device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a drawing method and apparatus, a computer device, and a storage medium.
Background
In the related art, drawing is generally performed directly through a touch screen, for example by touching the touch screen with a finger or a stylus. However, such implementations are difficult to apply to the drawing process on a larger touch screen, which usually affects the drawing effect.
Summary
Embodiments of the present disclosure provide at least a drawing method and apparatus, a computer device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a drawing method, including:
acquiring an image to be detected of a target area;
detecting the image to be detected and determining hand detection information in the image to be detected, the hand detection information including position information of a hand detection frame;
when the hand detection information satisfies a trigger condition, determining a starting position of a first virtual tool in a display device based on the position information of the hand detection frame;
controlling, according to changes in the position information of the hand detection frame detected within a target period, the first virtual tool to draw with the starting position as the drawing starting point.
In the above method, after the image to be detected of the target area is acquired, the hand detection information in the image to be detected can be determined, and the first virtual tool can be controlled to draw when a change in the position information of the hand detection frame is detected, so that the user does not need to contact the display device directly through a medium such as a finger or a stylus during drawing, which enriches the available drawing methods. In particular, when the display device is large, that is, when the display screen used to present the picture is large, the overall drawing effect can be viewed during drawing by means of long-distance drawing, reducing interruptions and discontinuities caused when the user draws long lines, which optimizes the interaction process and thereby improves the drawing effect.
In a possible implementation, controlling the first virtual tool to draw with the starting position as the drawing starting point includes:
determining, according to first gesture information indicated in the hand detection information, a target tool type corresponding to the first gesture information;
controlling the first virtual tool under the target tool type to draw with the starting position as the drawing starting point, wherein the drawing result conforms to the attributes corresponding to the target tool type.
Different gesture information corresponds to different tool types, and the user can switch tool types by changing the gesture information, which enriches the interaction between the user and the device during drawing and improves the user experience.
In a possible implementation, the method further includes:
starting the drawing function of the display device when it is detected, based on the image to be detected, that the user is in a first target pose and the duration of the first target pose exceeds a first preset duration.
In this way, the user can start the drawing function directly without contact, which simplifies the drawing process, reduces the probability that drawing efficiency is affected because the user cannot find the drawing tool's identifier, and makes drawing more engaging.
In a possible implementation, the hand detection information satisfying the trigger condition includes at least one of the following:
second gesture information indicated in the hand detection information conforms to a preset trigger gesture type;
the duration for which the position of the hand detection frame indicated in the hand detection information stays within a target area exceeds a set duration.
In a possible implementation, when the first virtual tool is a virtual paintbrush, the target tool type is a target brush type;
after the target tool type corresponding to the first gesture information is determined, the method further includes:
determining a target virtual brush for drawing from a plurality of preset virtual brushes matching the target brush type, and displaying the target virtual brush at the starting position.
By displaying the target virtual brush, the user can clearly and intuitively view the current drawing process and can then adjust the drawing.
In a possible implementation, determining a target virtual brush for drawing from the plurality of preset virtual brushes matching the target brush type includes:
determining, according to user attribute information of the user corresponding to the hand detection frame, the target virtual brush matching the user attribute information from the plurality of preset virtual brushes matching the target brush type.
The displayed virtual brush matches the user attribute information, so the virtual brush can be displayed in a personalized manner, improving the user experience during drawing.
In a possible implementation, a menu area of the display device includes a plurality of virtual tool identifiers;
the method further includes:
in response to detecting that the user makes a second target pose, displaying a moving identifier at the starting position of the first virtual tool;
when it is detected that the duration for which the moving identifier stays at the display position corresponding to a second virtual tool identifier among the plurality of virtual tool identifiers exceeds a second preset duration, displaying a second virtual tool at the starting position of the first virtual tool;
in response to a target processing operation, processing the drawn portion based on the second virtual tool.
In this way, the user can switch virtual tools without contact, which increases the interaction between the user and the device during drawing and improves the user experience.
In a possible implementation, controlling the first virtual tool to draw with the starting position as the drawing starting point according to changes in the position information of the hand detection frame detected within the target period includes:
determining corrected position information according to the detected changes in the position information of the hand detection frame; and controlling, according to the corrected position information, the first virtual tool to draw with the starting position as the drawing starting point.
In this way, the probability of a poor drawing effect caused by the user's hand shaking can be reduced, improving the drawing effect.
In a possible implementation, determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame includes:
determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame and the proportional relationship between the image to be detected and the display interface of the display device.
In a second aspect, an embodiment of the present disclosure further provides a drawing apparatus, including:
an acquisition module configured to acquire an image to be detected of a target area;
a first determination module configured to detect the image to be detected and determine hand detection information in the image to be detected, the hand detection information including position information of a hand detection frame;
a second determination module configured to determine, when the hand detection information satisfies a trigger condition, a starting position of a first virtual tool in a display device based on the position information of the hand detection frame;
a drawing module configured to control, according to changes in the position information of the hand detection frame detected within a target period, the first virtual tool to draw with the starting position as the drawing starting point.
In a possible implementation, the drawing module, when controlling the first virtual tool to draw with the starting position as the drawing starting point, is configured to:
determine, according to first gesture information indicated in the hand detection information, a target tool type corresponding to the first gesture information;
control the first virtual tool under the target tool type to draw with the starting position as the drawing starting point, wherein the drawing result conforms to the attributes corresponding to the target tool type.
In a possible implementation, the apparatus further includes a control module configured to:
start the drawing function of the display device when it is detected, based on the image to be detected, that the user is in a first target pose and the duration of the first target pose exceeds a first preset duration.
In a possible implementation, the hand detection information satisfying the trigger condition includes at least one of the following:
second gesture information indicated in the hand detection information conforms to a preset trigger gesture type;
the duration for which the position of the hand detection frame indicated in the hand detection information stays within a target area exceeds a set duration.
In a possible implementation, when the first virtual tool is a virtual paintbrush, the target tool type is a target brush type;
after the target tool type corresponding to the first gesture information is determined, the second determination module is further configured to:
determine a target virtual brush for drawing from a plurality of preset virtual brushes matching the target brush type, and display the target virtual brush at the starting position.
In a possible implementation, the second determination module, when determining a target virtual brush for drawing from the plurality of preset virtual brushes matching the target brush type, is configured to:
determine, according to user attribute information of the user corresponding to the hand detection frame, the target virtual brush matching the user attribute information from the plurality of preset virtual brushes matching the target brush type.
In a possible implementation, a menu area of the display device includes a plurality of virtual tool identifiers;
the drawing module is further configured to:
in response to detecting that the user makes a second target pose, display a moving identifier at the starting position of the first virtual tool;
when it is detected that the duration for which the moving identifier stays at the display position corresponding to a second virtual tool identifier among the plurality of virtual tool identifiers exceeds a second preset duration, display a second virtual tool at the starting position of the first virtual tool;
in response to a target processing operation, process the drawn portion based on the second virtual tool.
In a possible implementation, the drawing module, when controlling the first virtual tool to draw with the starting position as the drawing starting point according to changes in the position information of the hand detection frame detected within the target period, is configured to:
determine corrected position information according to the detected changes in the position information of the hand detection frame; and control, according to the corrected position information, the first virtual tool to draw with the starting position as the drawing starting point.
In a possible implementation, the first determination module, when determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame, is configured to:
determine the starting position of the first virtual tool in the display device based on the position information of the hand detection frame and the proportional relationship between the image to be detected and the display interface of the display device.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor; when the computer device runs, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the above first aspect, or of any possible implementation of the first aspect, are executed.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the above first aspect, or of any possible implementation of the first aspect, are executed.
In a fifth aspect, an optional implementation of the present disclosure further provides a computer program product, including computer-readable code, or a computer-readable storage medium carrying computer-readable code; when the computer-readable code runs on a processor of an electronic device, the processor in the electronic device executes the steps of the above first aspect, or of any possible implementation of the first aspect.
To make the above objects, features, and advantages of the present disclosure more apparent and understandable, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings required by the embodiments are briefly introduced below. The drawings here are incorporated into and form part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings show only certain embodiments of the present disclosure and should therefore not be regarded as limiting its scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a flowchart of a drawing method provided by an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of the user's half-body limb key point position information and hand detection frame position information in the drawing method provided by an embodiment of the present disclosure;
FIG. 3 shows a schematic structural diagram of a drawing device provided by an embodiment of the present disclosure;
FIG. 4 shows a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure generally described and illustrated in the drawings here can be arranged and designed in various different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the drawings is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings.
To facilitate understanding of this embodiment, a drawing method disclosed in the embodiments of the present disclosure is first introduced in detail. The executing subject of the drawing method provided by the embodiments of the present disclosure may be hardware, generally a computer device with display capability, such as a smart TV, a smartphone, or a tablet computer, or the method may be executed by a processor running computer-executable code. It should be noted that the drawing mentioned in the present disclosure includes, but is not limited to, painting, writing, and other editing operations on the display interface realized through interaction between the user and the computer device.
The display device mentioned herein may refer to the above computer device, or may be a display apparatus connected to the above computer device, such as a monitor; the specific computation process is performed by the computer device, and the display process is performed by the display device.
Referring to FIG. 1, which is a flowchart of a drawing method provided by an embodiment of the present disclosure, the method includes steps 101 to 104:
Step 101: Acquire an image to be detected of a target area.
Step 102: Detect the image to be detected and determine hand detection information in the image to be detected; the hand detection information includes position information of a hand detection frame.
Step 103: When the hand detection information satisfies a trigger condition, determine a starting position of a first virtual tool in a display device based on the position information of the hand detection frame.
Step 104: Control the first virtual tool to draw with the starting position as the drawing starting point according to changes in the position information of the hand detection frame detected within a target period.
In the above method, after the image to be detected of the target area is acquired, the hand detection information in the image to be detected can be determined, and the first virtual tool can be controlled to draw when a change in the position information of the hand detection frame is detected, so that the user does not need to contact the display device directly through a medium such as a finger or a stylus during drawing, which enriches the available drawing methods. In particular, when the display device is large, that is, when the display screen used to present the picture is large, the overall drawing effect can be viewed during drawing by means of long-distance drawing, reducing interruptions and discontinuities caused when the user draws long lines, which optimizes the interaction process and thereby improves the drawing effect.
The above steps are described in detail below.
For steps 101 and 102:
Here, the target area may be any area from which the display interface of the display device can be viewed. For example, to ensure the user's control over the display device and the display effect when the display device presents content through the display screen, the area directly facing the display device may be set as the target area. In specific implementation, a camera device may be deployed on or near the display device. The camera device can collect scene images of the target area in real time, the scene images including the image to be detected, and the image to be detected of the target area can then be obtained from the camera device through data transmission. It should be noted that the deployment position of the camera device may be determined according to the position of the target area, so that the shooting area of the deployed camera device at least contains the target area.
The image to be detected may be any frame image corresponding to the target area; for example, it may be the image corresponding to the target area at the current moment, or an image corresponding to the target area at a historical moment. After the image to be detected is acquired, it can be detected to determine the hand detection information of the user in the image.
The hand detection information may include position information of a hand detection frame, and the hand detection frame may refer to the smallest detection frame containing the user's hand in the image to be detected.
In specific implementation, a target neural network for detecting key points can be trained so that the trained target neural network satisfies preset conditions, for example, so that the loss value of the trained target neural network is smaller than a set loss threshold. The image to be detected can then be detected through the trained target neural network to determine the position information of the user's hand detection frame in the image to be detected.
For example, the target neural network can recognize the image to be detected and determine the half-body limb key point position information of the user included in the image, and the target neural network can also determine the position information of the user's hand detection frame based on the half-body limb key point position information and the image to be detected. The number and positions of the half-body limb key points can be set as required; for example, the number of limb key points may be 14 or 17. The position information of the hand detection frame includes the coordinate information of the four vertices of the detection frame and the coordinate information of the center point of the hand detection frame.
Referring to FIG. 2, a schematic diagram of the user's half-body limb key point position information and hand detection frame position information in the drawing method. The user's half-body limb key points in FIG. 2 may include the head vertex 5, head center point 4, neck joint point 3, left shoulder joint point 9, right shoulder joint point 6, left elbow joint point 10, right elbow joint point 7, left wrist joint point 11, right wrist joint point 8, half-body limb center point 12, crotch joint point 1, crotch joint point 2, and crotch center point 0; the hand detection frames may include the four vertices 13, 15, 16, 17 of the left-hand detection frame and the center point 14 of the left-hand frame, as well as the four vertices 18, 20, 21, 22 of the right-hand detection frame and the center point 19 of the right-hand frame.
For step 103:
In a possible implementation, before detecting whether the hand detection information satisfies the trigger condition, it may first be detected whether the display device is in the drawing interface, that is, whether the interface displayed by the display device through the display screen belongs to the drawing interface.
When detecting whether the display device is in the drawing interface, the working states of the processes currently started on the display device can be checked, different processes corresponding to different applications. When the working state of the process corresponding to the drawing application is detected to be the display state, it can be determined that the display device is currently in the drawing interface; when the working state of the process corresponding to the drawing application is detected to be the non-display state, it can be determined that the drawing application has been started but switched to the background.
When it is detected that the display device is not in the drawing interface, the drawing function can be started first, that is, the drawing application can be started, or a drawing application running in the background can be switched to the foreground for display.
In a possible implementation, the drawing function of the display device is started when it is detected, based on the image to be detected, that the user is in a first target pose and the duration of the first target pose exceeds a first preset duration.
Here, the first target pose may refer to a target action, such as waving or making a scissors-hand gesture; or the first target pose may mean that the position corresponding to the center point of the hand detection frame in the display device is located within a first target area, where the first target area refers to the area in which the drawing function can be started or invoked, for example the area where the identifier (e.g., an icon) of the drawing application is located, so that the user can start or invoke the drawing function by performing the corresponding gesture or operation in the first target area.
When detecting whether the user makes the target action, the image to be detected can, for example, be input into an action recognition network, and whether the user makes the target action can be obtained based on the action recognition network. Here, the action recognition network may be trained based on sample images carrying action labels, and the images to be detected input into the action recognition network may be multiple consecutive images. The action label refers to a label used to represent the category of the action contained in the sample image, for example a scissors-hand gesture or a fist clasp.
In this way, the user can start the drawing function directly without contact, which simplifies the drawing process, reduces the probability that drawing efficiency is affected because the user cannot find the drawing tool's identifier, and makes drawing more engaging.
When the display device is not in the drawing interface, the position information of a corresponding moving identifier can be determined according to the position information of the hand detection frame, the moving identifier being used to represent the hand detection frame; the moving identifier may be, for example, a mouse cursor identifier. When the display device is in the drawing interface, the first virtual tool may be used to represent the hand detection frame. The method for determining the position information of the first virtual tool based on the position information of the hand detection frame is introduced below.
The hand detection information satisfying the trigger condition may mean that the hand detection information satisfies the trigger condition for starting drawing. In a possible implementation, the hand detection information satisfying the trigger condition includes at least one of the following:
second gesture information indicated in the hand detection information conforms to a preset trigger gesture type;
the duration for which the position of the hand detection frame indicated in the hand detection information stays within a target area exceeds a set duration.
Here, the trigger gesture type may be a gesture for instructing to start drawing, for example clasping fists with both hands or making an "OK" sign. When determining the second gesture information indicated in the image to be detected, it can, for example, be identified through a gesture recognition network; the training process of the gesture recognition network is similar to that of the above action recognition network and will not be repeated here.
The target area may refer to an area within a preset range of the first detected position of the hand detection frame; the position of the hand detection frame may refer to the position obtained after mapping the position information of the hand detection frame onto the display screen/display interface of the display device, or may refer to the position indicated by the position information of the hand detection frame.
In a specific application scenario, the hand detection information satisfying the trigger condition may mean that the user makes a preset trigger gesture, or that the user's hand remains unchanged (or moves only within a small range) for longer than the set duration.
针对步骤104、
在一种可能的实施方式中,虚拟工具可以是指用于进行绘图的虚拟工具,示例性的可以包括画笔、染色笔、橡皮擦等工具。所述第一虚拟工具可以是指默认的虚拟工具,或者基于历史绘图操作确定的虚拟工具。
示例性的,若在第一次检测到手部检测信息满足触发条件,则可以确定默认的虚拟画笔为第一虚拟工具,若在第N次检测到手部检测信息满足触发条件,则可以将第N-1次执行绘图操作结束时使用的虚拟工具作为所述第一虚拟工具,其中,N为大于1的正整数。
示例性的,若第N次执行绘图操作结束时,使用的虚拟工具为橡皮擦,当第N+1次检测到手部检测信息满足触发条件后,执行绘图操作时,其对应的第一虚拟工具也为橡皮擦。
在一种可能的实施方式中,在控制所述第一虚拟工具以所述起始位置作为绘图起点进行绘图时,可以先根据所述手部检测信息中指示的第一手势信息,确定与所述第一手势信息对应的目标工具类型;然后控制所述目标工具类型下的第一虚拟工具以所述起始位置作为绘图起点进行绘图;其中,绘图后的绘图结果符合所述目标工具类型对应的属性。
这里,所述目标工具类型对应的属性示例性的可以包括颜色、粗细、大小、处理类型等。不同的手势信息对应不同的工具类型,用户可以通过控制手势信息的改变,来实现对于工具类型的切换,这样可以丰富绘图过程中用户与设备的交互过程,提升用户体验。
It should be noted that the image to be detected of the target area may be acquired in real time, and the hand detection information in the image to be detected may likewise be detected in real time. Unlike the second gesture information above, the first gesture information here may refer to gesture information detected from the image to be detected after it is determined that the hand detection information satisfies the trigger condition.
The target tool type corresponding to the first gesture information may refer to different tools; exemplarily, the target tool type corresponding to gesture information A is an eraser and that corresponding to gesture information B is a paintbrush. Alternatively, the target tool type corresponding to the first gesture information may refer to different types of the first virtual tool; exemplarily, if the first virtual tool is a paintbrush, the target tool type corresponding to gesture information A is a thick brush and that corresponding to gesture information B is a thin brush.
In a specific implementation, controlling the first virtual tool under the target tool type to draw with the starting position as the starting point of drawing may proceed as follows: based on the position information of the hand detection frame and the historical position information of the hand detection frame corresponding to the adjacent historical image to be detected that precedes the current image in time sequence, the movement trajectory of the hand detection frame is determined; drawing is then performed based on the movement trajectory of the hand detection frame and the starting position, as sketched below.
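As a non-limiting sketch, drawing along the movement trajectory could be implemented as below; canvas.draw_segment and to_display are hypothetical placeholders for the display device's drawing primitive and the image-to-display mapping described later:

    def update_drawing(canvas, prev_center, curr_center, to_display):
        # Extend the stroke by the hand detection frame's movement between the
        # adjacent historical image and the current image to be detected.
        if prev_center is None:
            return curr_center          # first frame after triggering: record the start
        p0 = to_display(prev_center)    # map image coordinates to display coordinates
        p1 = to_display(curr_center)
        canvas.draw_segment(p0, p1)     # assumed drawing primitive of the display device
        return curr_center              # becomes the starting point of the next step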
After one step of drawing is completed, the position information of the first virtual tool can be re-determined based on the changed position information of the hand detection frame, and this newly determined position information is taken as the new starting position for the subsequent drawing steps.
In one possible implementation, in a case where the first virtual tool is a virtual brush, the target tool type may be a target brush type. After the target tool type corresponding to the first gesture information is determined, a target virtual brush for drawing may further be determined from multiple preset virtual brushes matching the target brush type, and the target virtual brush is displayed at the starting position.
Here, when determining the target virtual brush for drawing from the multiple preset virtual brushes matching the target brush type, the user's attribute information may exemplarily be taken into account, and the target virtual brush matching the user's attribute information is determined from the multiple preset virtual brushes matching the target brush type.
The user's attribute information may exemplarily include age, gender, occupation, and the like. If the user's attribute information is male, 30 years old, the target virtual brush matching the user's attribute information may be a virtual fountain pen; if the user's attribute information is female, 5 years old, the target virtual brush matching the user's attribute information may be a virtual cartoon pencil.
In this way, the displayed virtual brush matches the user's attribute information, the virtual tool can be presented in a personalized manner, and the user experience during drawing is improved.
When displaying the target virtual brush at the starting position, the user's drawing habits may be taken into account; these habits may be preset. Exemplarily, the brush may be displayed tilted at a preset inclination angle to vividly depict the user holding a drawing tool and editing on a flat surface such as paper.
During drawing, the user's hand may shake. To prevent such situations from affecting the drawing result, anti-shake processing may be performed before the first virtual tool is controlled to draw.
Specifically, corrected position information may be determined according to the detected change in the position information of the hand detection frame, and the first virtual tool is controlled, according to the corrected position information, to draw with the starting position as the starting point of drawing.
Here, the corrected position information refers to the information obtained by performing correction processing after the position information of the hand detection frame is mapped onto the display device; exemplarily, the correction processing may be smoothing, one form of which is sketched below.
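One possible correction, given purely as an assumed example (the disclosure does not mandate a particular smoothing algorithm), is exponential smoothing of the mapped position; a smaller alpha suppresses jitter more strongly, while alpha close to 1 follows the hand more closely:

    class PositionSmoother:
        def __init__(self, alpha=0.4):
            self.alpha = alpha
            self.state = None   # last corrected position

        def correct(self, point):
            if self.state is None:
                self.state = point
            else:
                sx, sy = self.state
                x, y = point
                a = self.alpha
                # Blend the new mapped position with the previous corrected one.
                self.state = (a * x + (1 - a) * sx, a * y + (1 - a) * sy)
            return self.state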
In one possible implementation, the menu area of the display device may include multiple virtual tool identifiers, for example the names, symbols, and the like of multiple virtual tools.
In one possible implementation, the user can switch virtual tools and perform drawing operations based on the switched virtual tool.
Exemplarily, in response to detecting that the user makes a second target posture, a movement identifier may be displayed at the starting position of the first virtual tool; then, in a case where the duration for which the movement identifier is detected to stay at the display position corresponding to a second virtual tool identifier among the multiple virtual tool identifiers exceeds a second preset duration, a second virtual tool is displayed at the starting position of the first virtual tool; and in response to a target processing operation, the already-drawn part is processed based on the second virtual tool.
Here, the second target posture may be a posture used to indicate stopping drawing. Exemplarily, if the user's palm faces the display device, drawing can be performed based on the movement of the palm while the palm moves (i.e., while the hand detection frame moves); if the back of the user's hand faces the display device, the movement identifier, for example a mouse cursor icon, may be displayed at the starting position of the first virtual tool.
It should be noted that the starting position of the first virtual tool can change with the change in the position information of the hand detection frame, the moment of change being when the user makes the second target posture. When the user is detected to make the second target posture, the starting position can be updated in real time according to the position information of the hand detection frame; in other words, the movement identifier changes with the change in the position information of the hand detection frame, and the display position corresponding to the movement identifier after the position change is also the starting position.
Processing the already-drawn part based on the second virtual tool in response to the target processing operation may mean performing, on the already-drawn part, the processing function corresponding to the second virtual tool; for example, if the second virtual tool is an eraser, part of the already-drawn picture can be erased. Responding to the target processing operation may mean determining the processing position corresponding to the second virtual tool in response to the movement of the user's hand detection frame.
In this way, the user can switch virtual tools without any contact, which increases the interaction between the user and the device during drawing and improves the user experience.
In one possible implementation, when the position information of multiple hand detection frames is detected, one target hand detection frame may be selected at random and drawing performed based on its position information; alternatively, when the position information of multiple hand detection frames is detected, the starting positions of two first virtual tools may be determined based on the two hand detection frames respectively, and the two first virtual tools are controlled to draw based on the respective changes in the position information of the two hand detection frames.
In summary, the above embodiments can be generalized as follows: in a case where the hand detection information satisfies the trigger condition, the first virtual tool is controlled to draw based on the change in the position information of the hand detection frame; when it is detected that the user makes the second target posture, drawing stops and the movement identifier is displayed, whose position can change with the change in the position information of the hand detection frame; and when the hand detection information is again detected to satisfy the trigger condition, or the user stops making the second target posture, the display position of the first virtual tool can be re-determined based on the position information of the hand detection frame and displayed.
The hand detection information satisfying the trigger condition can be understood as the start of a drawing step, and the user making the second target posture can be understood as the end of a drawing step.
The specific method of determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame is elaborated below, that is, how to convert between image coordinates in the image to be detected and coordinates in the display device.
In one possible implementation, the starting position of the first virtual tool in the display device may be determined based on the position information of the hand detection frame and the proportional relationship between the image to be detected and the display interface of the display device.
In a specific implementation, the target position information of the center point of the user's hand detection frame on the display interface can be determined from the proportional relationship between the image to be detected and the display interface of the display device together with the position information of the user's hand detection frame, and this target position information on the display interface is then determined as the starting position of the first virtual tool, as sketched below.
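A minimal sketch of this proportional mapping, with all names assumed for illustration:

    def image_to_display(center, image_size, display_size):
        # Map the center point of the hand detection frame from the image to be
        # detected onto the display interface using the ratio between the two.
        (cx, cy), (iw, ih), (dw, dh) = center, image_size, display_size
        return (cx * dw / iw, cy * dh / ih)

For example, with a 1280×720 image to be detected and a 1920×1080 display interface, a hand detection frame centered at (640, 360) yields (960, 540) as the starting position of the first virtual tool.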
In an optional implementation, before determining the starting position of the first virtual tool based on the position information of the hand detection frame, the method further includes: detecting the image to be detected to determine the position information of a target joint point of the user contained in the image to be detected;
determining the starting position of the first virtual tool based on the position information of the hand detection frame then includes: determining the starting position of the first virtual tool based on the position information of the hand detection frame, the target joint point position information, and a reference ratio corresponding to the user, where the reference ratio is used to enlarge the first distance between the position of the hand detection frame and the position of the target joint point.
The reference ratio can be determined according to the following steps:
Step 1: obtain the distance between the hand detection frame and the target joint point to obtain the arm length of the user in the image to be detected.
Step 2: obtain the distances between the target joint point and the vertices of the image to be detected to obtain a second distance, where the second distance is the largest of the distances between the target joint point and the vertices.
Step 3: determine the ratio of the arm length to the second distance as the reference ratio.
In Step 1, since the distance between the target joint point and the hand detection frame can represent the longest distance the arm extends while the person moves, the distance between the center point of the hand detection frame and the target joint point can first be determined to obtain the arm length of the user in the image to be detected.
Exemplarily, referring to Fig. 2, the first straight-line distance between the right shoulder joint point 6 (the target joint point) and the right elbow joint point 7, the second straight-line distance between the right elbow joint point 7 and the right wrist joint point 8, and the third straight-line distance between the right wrist joint point 8 and the center point 19 of the right-hand frame (the hand detection frame) can be computed, and the sum of the first, second, and third straight-line distances is determined as the user's arm length. Alternatively, the first straight-line distance between the left shoulder joint point 9 (the target joint point) and the left elbow joint point 10, the second straight-line distance between the left elbow joint point 10 and the left wrist joint point 11, and the third straight-line distance between the left wrist joint point 11 and the center point 14 of the left-hand frame (the hand detection frame) can be computed, and the sum of the three is determined as the user's arm length.
In Step 2, after the straight-line distances between the target joint point and the four vertices of the image to be detected are computed, the second distance can be determined from the four resulting straight-line distances, that is, the largest of the four computed straight-line distances is selected as the second distance.
Alternatively, the image to be detected can be divided in advance into four equal regions with its center pixel as the origin: a first region at the upper left, a second region at the upper right, a third region at the lower left, and a fourth region at the lower right. The region in which the target joint point lies can then be determined based on the target joint point position information; based on that region, the target vertex farthest from the target joint point is determined, and the straight-line distance between the target joint point and the target vertex is computed, yielding the second distance. For example, if the target joint point lies in the third region, the upper-right vertex is determined as the target vertex; if it lies in the fourth region, the upper-left vertex is determined as the target vertex.
In Step 3, the ratio of the longest straight-line distance c to the second distance d can be determined as the reference ratio, i.e., the reference ratio is c/d, where the longest straight-line distance c is the maximum arm length computed in Step 1.
If the first distance is a, then when the first distance is enlarged based on the reference ratio, the enlarged target distance is a/(c/d) = (a/c)×d. Since c is the longest straight-line distance, a/c is necessarily not greater than 1, so (a/c)×d is necessarily not greater than d; this ensures that the enlarged target distance does not exceed the second distance.
In the above method, by determining the arm length of the user in the image to be detected and the second distance, and taking the ratio of the arm length to the second distance as the reference ratio, the target distance obtained when the first distance is enlarged based on the determined reference ratio does not exceed the second distance, which reduces the probability that the determined intermediate position information falls outside the range of the image to be detected. A sketch of the ratio computation follows.
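The three steps above can be sketched as follows; the keypoint arguments correspond to the joint points of Fig. 2, and the function names are assumptions of this sketch:

    import math

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def reference_ratio(shoulder, elbow, wrist, hand_center, image_w, image_h):
        # Step 1: arm length c as the sum of the three straight-line segments
        # shoulder -> elbow, elbow -> wrist, wrist -> hand-frame center.
        c = dist(shoulder, elbow) + dist(elbow, wrist) + dist(wrist, hand_center)
        # Step 2: second distance d, the largest distance from the target joint
        # point to the four vertices of the image to be detected.
        vertices = [(0, 0), (image_w, 0), (0, image_h), (image_w, image_h)]
        d = max(dist(shoulder, v) for v in vertices)
        # Step 3: the reference ratio is the arm length over the second distance.
        return c / d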
In an optional implementation, determining the starting position of the first virtual tool based on the position information of the hand detection frame, the target joint point position information, and the reference ratio corresponding to the user includes:
Step 1: based on the position information of the hand detection frame, the target joint point position information, and the reference ratio corresponding to the user, determine the intermediate position information of the first virtual tool in the image coordinate system corresponding to the image to be detected.
Step 2: based on the intermediate position information, determine the target display position of the movement identifier in the display device.
Step 1 is specifically carried out as follows:
(1) Based on the position information of the hand detection frame and the target joint point position information, obtain the first distance between the hand detection frame and the target joint point.
(2) Enlarge the first distance based on the reference ratio to obtain the target distance.
(3) Based on the target distance and the position information of the hand detection frame, determine the intermediate position information of the movement identifier in the image coordinate system corresponding to the image to be detected.
Here, the first distance between the hand detection frame and the target joint point can be computed based on the position information of the hand detection frame and the target joint point position information. For example, if the position information of the center point of the hand detection frame is (x1, y1), the target joint point position information is (x2, y2), and the first distance is C1, then:
C1 = √((x1 − x2)² + (y1 − y2)²)
The first distance C1 can then be enlarged based on the reference ratio c/d to determine the target distance D1, with C1/D1 = c/d, i.e., the target distance D1 = C1/(c/d) = C1×d/c. Finally, based on the target distance and the hand center point coordinates indicated by the position information of the hand detection frame, the position information of the center point of the hand detection frame after the distance enlargement can be determined, and this enlarged-distance center point position information is determined as the intermediate position information of the movement identifier in the image coordinate system corresponding to the image to be detected. A sketch of this computation follows.
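A sketch of this enlargement, under the same assumptions as the previous snippet (ratio = c/d is the reference ratio returned by reference_ratio()):

    import math

    def intermediate_position(hand_center, joint, ratio):
        (x1, y1), (x2, y2) = hand_center, joint
        c1 = math.hypot(x1 - x2, y1 - y2)   # first distance C1
        if c1 == 0:
            return hand_center              # hand frame coincides with the joint
        d1 = c1 / ratio                     # target distance D1 = C1 / (c/d)
        scale = d1 / c1                     # = d / c
        # Place the point at distance D1 from the joint, along the same direction.
        return (x2 + (x1 - x2) * scale, y2 + (y1 - y2) * scale)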
Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, embodiments of the present disclosure further provide a drawing apparatus corresponding to the drawing method. Since the principle by which the apparatus in the embodiments of the present disclosure solves the problem is similar to that of the above drawing method of the embodiments of the present disclosure, the implementation of the apparatus can refer to the implementation of the method, and repeated description is omitted.
Referring to Fig. 3, a schematic architecture diagram of a drawing apparatus provided by an embodiment of the present disclosure, the apparatus includes: an acquisition module 301, a first determination module 302, a second determination module 303, a drawing module 304, and a control module 305; where the acquisition module 301 is configured to acquire the image to be detected of the target area;
the first determination module 302 is configured to detect the image to be detected and determine the hand detection information in the image to be detected, the hand detection information including the position information of the hand detection frame;
the second determination module 303 is configured to determine, in a case where the hand detection information satisfies the trigger condition, the starting position of the first virtual tool in the display device based on the position information of the hand detection frame;
the drawing module 304 is configured to control, according to the change in the position information of the hand detection frame detected within the target time period, the first virtual tool to draw with the starting position as the starting point of drawing.
In one possible implementation, when controlling the first virtual tool to draw with the starting position as the starting point of drawing, the drawing module 304 is configured to:
determine, according to the first gesture information indicated in the hand detection information, the target tool type corresponding to the first gesture information;
control the first virtual tool under the target tool type to draw with the starting position as the starting point of drawing, where the drawing result obtained after drawing conforms to the attribute corresponding to the target tool type.
In one possible implementation, the apparatus further includes the control module 305, configured to:
start the drawing function of the display device in a case where it is detected, based on the image to be detected, that the user is in the first target posture and the duration of being in the first target posture exceeds the first preset duration.
In one possible implementation, the hand detection information satisfying the trigger condition includes at least one of the following:
the second gesture information indicated in the hand detection information conforms to the preset trigger gesture type;
the duration for which the position of the hand detection frame indicated in the hand detection information stays within the target area exceeds the set duration.
In one possible implementation, in a case where the first virtual tool is a virtual brush, the target tool type is a target brush type;
after the target tool type corresponding to the first gesture information is determined, the second determination module 303 is further configured to:
determine, from the multiple preset virtual brushes matching the target brush type, the target virtual brush for drawing, and display the target virtual brush at the starting position.
In one possible implementation, when determining the target virtual brush for drawing from the multiple preset virtual brushes matching the target brush type, the second determination module 303 is configured to:
determine, according to the user attribute information of the user corresponding to the hand detection frame, the target virtual brush matching the user attribute information from the multiple preset virtual brushes matching the target brush type.
In one possible implementation, the menu area of the display device includes multiple virtual tool identifiers;
the drawing module 304 is further configured to:
display, in response to detecting that the user makes the second target posture, the movement identifier at the starting position of the first virtual tool;
display, in a case where the duration for which the movement identifier is detected to stay at the display position corresponding to the second virtual tool identifier among the multiple virtual tool identifiers exceeds the second preset duration, the second virtual tool at the starting position of the first virtual tool;
process, in response to the target processing operation, the already-drawn part based on the second virtual tool.
In one possible implementation, when controlling, according to the change in the position information of the hand detection frame detected within the target time period, the first virtual tool to draw with the starting position as the starting point of drawing, the drawing module 304 is configured to:
determine corrected position information according to the detected change in the position information of the hand detection frame; and control, according to the corrected position information, the first virtual tool to draw with the starting position as the starting point of drawing.
In one possible implementation, when determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame, the second determination module 303 is configured to:
determine the starting position of the first virtual tool in the display device based on the position information of the hand detection frame and the proportional relationship between the image to be detected and the display interface of the display device.
For descriptions of the processing flow of each module in the apparatus and the interaction flows between the modules, reference can be made to the relevant descriptions in the above method embodiments, which are not detailed here.
Based on the same technical concept, an embodiment of the present disclosure further provides a computer device. Referring to Fig. 4, a schematic structural diagram of a computer device 400 provided by an embodiment of the present disclosure, the device includes a processor 401, a memory 402, and a bus 403. The memory 402 is configured to store execution instructions and includes an internal memory 4021 and an external memory 4022. The internal memory 4021, also called main memory, temporarily stores operational data in the processor 401 and data exchanged with an external memory 4022 such as a hard disk; the processor 401 exchanges data with the external memory 4022 through the internal memory 4021. When the computer device 400 runs, the processor 401 communicates with the memory 402 through the bus 403, causing the processor 401 to execute the following instructions:
acquiring the image to be detected of the target area;
detecting the image to be detected to determine the hand detection information in the image to be detected, the hand detection information including the position information of the hand detection frame;
in a case where the hand detection information satisfies the trigger condition, determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame;
controlling, according to the change in the position information of the hand detection frame detected within the target time period, the first virtual tool to draw with the starting position as the starting point of drawing.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the drawing method described in the above method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program product carrying program code, where the instructions included in the program code can be used to execute the steps of the drawing method described in the above method embodiments; for details, refer to the above method embodiments, which are not repeated here.
The above computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a Software Development Kit (SDK).
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the system and apparatus described above can refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a division by logical function, and there may be other divisions in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person familiar with the art can still, within the technical scope disclosed by the present disclosure, modify the technical solutions recorded in the foregoing embodiments or readily conceive of changes, or make equivalent substitutions of some of the technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure and shall all be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

  1. A drawing method, characterized by comprising:
    acquiring an image to be detected of a target area;
    detecting the image to be detected to determine hand detection information in the image to be detected, the hand detection information comprising position information of a hand detection frame;
    in a case where the hand detection information satisfies a trigger condition, determining, based on the position information of the hand detection frame, a starting position of a first virtual tool in a display device; and
    controlling, according to a change in the position information of the hand detection frame detected within a target time period, the first virtual tool to draw with the starting position as a starting point of drawing.
  2. The method according to claim 1, characterized in that the controlling the first virtual tool to draw with the starting position as the starting point of drawing comprises:
    determining, according to first gesture information indicated in the hand detection information, a target tool type corresponding to the first gesture information; and
    controlling the first virtual tool under the target tool type to draw with the starting position as the starting point of drawing, wherein a drawing result obtained after drawing conforms to an attribute corresponding to the target tool type.
  3. The method according to claim 1 or 2, characterized in that the method further comprises:
    starting a drawing function of the display device in a case where it is detected, based on the image to be detected, that a user is in a first target posture and a duration of being in the first target posture exceeds a first preset duration.
  4. The method according to any one of claims 1 to 3, characterized in that the hand detection information satisfying the trigger condition comprises at least one of the following:
    second gesture information indicated in the hand detection information conforms to a preset trigger gesture type;
    a duration for which a position of the hand detection frame indicated in the hand detection information stays within a target area exceeds a set duration.
  5. The method according to claim 2, characterized in that, in a case where the first virtual tool is a virtual brush, the target tool type is a target brush type;
    after the determining the target tool type corresponding to the first gesture information, the method further comprises:
    determining, from multiple preset virtual brushes matching the target brush type, a target virtual brush for drawing, and displaying the target virtual brush at the starting position.
  6. The method according to claim 5, characterized in that the determining, from the multiple preset virtual brushes matching the target brush type, the target virtual brush for drawing comprises:
    determining, according to user attribute information of a user corresponding to the hand detection frame, the target virtual brush matching the user attribute information from the multiple preset virtual brushes matching the target brush type.
  7. The method according to any one of claims 1 to 6, characterized in that a menu area of the display device comprises multiple virtual tool identifiers;
    the method further comprises:
    displaying, in response to detecting that a user makes a second target posture, a movement identifier at the starting position of the first virtual tool;
    displaying, in a case where a duration for which the movement identifier is detected to stay at a display position corresponding to a second virtual tool identifier among the multiple virtual tool identifiers exceeds a second preset duration, a second virtual tool at the starting position of the first virtual tool; and
    processing, in response to a target processing operation, an already-drawn part based on the second virtual tool.
  8. The method according to any one of claims 1 to 7, characterized in that the controlling, according to the change in the position information of the hand detection frame detected within the target time period, the first virtual tool to draw with the starting position as the starting point of drawing comprises:
    determining corrected position information according to the detected change in the position information of the hand detection frame; and controlling, according to the corrected position information, the first virtual tool to draw with the starting position as the starting point of drawing.
  9. The method according to any one of claims 1 to 8, characterized in that the determining, based on the position information of the hand detection frame, the starting position of the first virtual tool in the display device comprises:
    determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame and a proportional relationship between the image to be detected and a display interface of the display device.
  10. A drawing apparatus, characterized by comprising:
    an acquisition module, configured to acquire an image to be detected of a target area;
    a first determination module, configured to detect the image to be detected and determine hand detection information in the image to be detected, the hand detection information comprising position information of a hand detection frame;
    a second determination module, configured to determine, in a case where the hand detection information satisfies a trigger condition, a starting position of a first virtual tool in a display device based on the position information of the hand detection frame; and
    a drawing module, configured to control, according to a change in the position information of the hand detection frame detected within a target time period, the first virtual tool to draw with the starting position as a starting point of drawing.
  11. A computer device, characterized by comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the drawing method according to any one of claims 1 to 9 are executed.
  12. A computer-readable storage medium, characterized in that a computer program is stored thereon, and when the computer program is run by a processor, the steps of the drawing method according to any one of claims 1 to 9 are executed.
  13. A computer program product, comprising computer-readable code, or a computer-readable storage medium carrying computer-readable code, wherein when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device executes steps for implementing the drawing method according to any one of claims 1 to 9.
PCT/CN2022/087946 2021-08-27 2022-04-20 A drawing method and apparatus, computer device, and storage medium WO2023024536A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110996280.6A CN113703577A (zh) 2021-08-27 2021-08-27 A drawing method and apparatus, computer device, and storage medium
CN202110996280.6 2021-08-27

Publications (1)

Publication Number Publication Date
WO2023024536A1 (zh)

Family

ID=78656074

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/087946 WO2023024536A1 (zh) 2021-08-27 2022-04-20 A drawing method and apparatus, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN113703577A (zh)
WO (1) WO2023024536A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113703577A (zh) * 2021-08-27 2021-11-26 北京市商汤科技开发有限公司 一种绘图方法、装置、计算机设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108268181A (zh) * 2017-01-04 2018-07-10 Control method and apparatus for contactless gesture recognition
CN108921101A (zh) * 2018-07-04 2018-11-30 Processing method, device, and readable storage medium for control instructions based on gesture recognition
CN112262393A (zh) * 2019-12-23 2021-01-22 Gesture recognition method and apparatus, electronic device, and storage medium
CN112506340A (zh) * 2020-11-30 2021-03-16 Device control method and apparatus, electronic device, and storage medium
US20210081029A1 (en) * 2019-09-13 2021-03-18 DTEN, Inc. Gesture control systems
CN113703577A (zh) * 2021-08-27 2021-11-26 A drawing method and apparatus, computer device, and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9971490B2 (en) * 2014-02-26 2018-05-15 Microsoft Technology Licensing, Llc Device control
US9727161B2 (en) * 2014-06-12 2017-08-08 Microsoft Technology Licensing, Llc Sensor correlation for pen and touch-sensitive computing device interaction
CN108932053B (zh) * 2018-05-21 2021-06-11 Tencent Technology (Shenzhen) Co., Ltd. Gesture-based drawing method and apparatus, storage medium, and computer device
US11023055B2 (en) * 2018-06-01 2021-06-01 Apple Inc. Devices, methods, and graphical user interfaces for an electronic device interacting with a stylus
CN110750160B (zh) * 2019-10-24 2023-08-18 BOE Technology Group Co., Ltd. Gesture-based drawing method and apparatus for a painting screen, painting screen, and storage medium
CN112925414A (zh) * 2021-02-07 2021-06-08 Shenzhen Skyworth-RGB Electronic Co., Ltd. Display screen gesture drawing method and apparatus, and computer-readable storage medium
CN112987933A (zh) * 2021-03-25 2021-06-18 Beijing SenseTime Technology Development Co., Ltd. Device control method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN113703577A (zh) 2021-11-26

Legal Events

Date Code Title Description
121: Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22859890; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: DE)