CN118295566A - Drawing assisting method, apparatus, device and readable storage medium - Google Patents
Drawing assisting method, apparatus, device and readable storage medium
- Publication number: CN118295566A
- Application number: CN202410399271.2A
- Authority: CN (China)
- Prior art keywords: image, camera, position information, captured, display interface
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The application provides a drawing assisting method, apparatus, device and readable storage medium. The method is executed by an electronic device provided with a camera, and comprises the following steps: acquiring a first image from the image captured by the camera, wherein the first image includes a drawn target object; determining target position information of the first image in a drawing paper area captured by the camera; and adjusting the position of the first image in the image display interface according to the target position information and changes in the drawing paper area captured by the camera. The scheme addresses the problem that users cannot currently draw with the assistance of an electronic device.
Description
Technical Field
The embodiments of the application relate to the technical field of image processing, and in particular to a drawing assisting method, apparatus, device and readable storage medium.
Background
In the current painting process, the user must create the work entirely by hand, which places high demands on the user's actual painting skill. For example, when a user wants to copy a work, such as a real-life object, achieving a good result requires strong copying and painting ability; for people with limited painting skills and for beginners without a practical painting foundation, it is very inconvenient to achieve an excellent painting effect.
Disclosure of Invention
The embodiments of the application provide a drawing assisting method, apparatus, device and readable storage medium, which are used to assist a user in achieving high-quality drawing.
In order to solve the above problems, the present application is achieved as follows:
in a first aspect, an embodiment of the present application provides a drawing assistance method, which is executed by an electronic device, where a camera is provided on the electronic device, the method including:
Acquiring a first image in the captured image of the camera; wherein the first image includes a drawn target object;
Determining target position information of the first image in a drawing paper area captured by the camera;
and adjusting the position of the first image in the image display interface according to the target position information and the change of the drawing paper area captured by the camera.
In a second aspect, an embodiment of the present application provides a drawing assisting apparatus including:
The acquisition module is used for acquiring a first image in the camera captured image; wherein the first image includes a drawn target object;
the first processing module is used for determining target position information of the first image in a drawing paper area captured by the camera;
and the second processing module is used for adjusting the position of the first image in the image display interface according to the target position information and the change of the drawing paper area captured by the camera.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory, a processor, and a program stored on the memory and executable on the processor; the processor is configured to read the program in the memory to implement the steps in the drawing assistance method as described above.
In a fourth aspect, embodiments of the present application provide a readable storage medium storing a program which, when executed by a processor, implements steps in a drawing assistance method as described above.
In a fifth aspect, there is provided a computer program product comprising computer instructions which, when executed by a processor, implement the steps of the drawing assistance method as described above.
According to the embodiments of the application, for a target object to be drawn by the user, a first image including the target object can be obtained from the image captured by the camera. Then, as the user moves the electronic device, target position information of the first image is determined within the drawing paper area captured by the camera, so that the position of the first image in the image display interface is adjusted in time according to the target position information and changes in the drawing paper area captured by the camera. This makes it convenient for the user to copy the drawing and achieves a higher-quality copying effect, without requiring professional drawing skills, thereby providing convenience for drawing beginners and enthusiasts.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and a person of ordinary skill in the art may obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a painting assistance method according to an embodiment of the application;
FIG. 2 shows one of the display schematics of the electronic device;
FIG. 3 shows a schematic structural diagram of a deep learning matting model;
FIG. 4 shows a second schematic display of the electronic device;
FIG. 5 shows a schematic application of a method according to an embodiment of the application;
FIG. 6 is a schematic view showing a structure of a drawing assisting apparatus according to an embodiment of the application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical problems to be solved, the technical solutions and the advantages of the present application more apparent, a detailed description is given below with reference to the accompanying drawings and specific embodiments. In the following description, specific details such as specific configurations and components are provided merely to facilitate a thorough understanding of the embodiments of the application. It will be apparent to those skilled in the art that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the application. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present application, it should be understood that the sequence numbers of the following processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application. In addition, the terms "system" and "network" are often used interchangeably herein.
As shown in fig. 1, an embodiment of the present application provides a drawing assistance method, which is executed by an electronic device, on which a camera is disposed, the method including:
Step 11, acquiring a first image in the captured image of the camera; wherein the first image includes a drawn target object;
Step 12, determining target position information of the first image in a drawing paper area captured by the camera;
And step 13, adjusting the position of the first image in the image display interface according to the target position information and the change of the drawing paper area captured by the camera.
According to the above steps, for a target object to be drawn by the user, the electronic device can obtain a first image including the target object from the image captured by the camera. Then, as the user moves the electronic device, target position information of the first image is determined within the drawing paper area captured by the camera, so that the position of the first image in the image display interface is adjusted in time according to the target position information and changes in the drawing paper area captured by the camera. This makes it convenient for the user to copy the drawing, achieves a higher-quality copying effect without requiring professional drawing skills, and provides convenience for drawing beginners and enthusiasts.
In this embodiment, the live action captured by the camera is displayed as an image on the image display interface of the electronic device.
Optionally, the image display interface is configured to display the image captured by the camera in real time.
For example, a user triggers a drawing assisting function of the electronic device through a specific operation to start a camera of the electronic device, and displays the image display interface on a screen to trigger the electronic device to execute the drawing assisting method of the embodiment of the application.
Optionally, in this embodiment, the acquiring a first image of the captured images of the camera includes:
displaying the image captured by the camera on the image display interface in the process of capturing the image by the camera;
and responding to a first instruction of a user, performing matting processing on the target object to obtain the first image.
Here, the first instruction is generated by the user's selection of a partial image in the image display interface, and is used for obtaining the first image. For example, as shown in fig. 2, the image display interface 20 displays a drawn target object, and the user interacts with the electronic device screen through a selection operation (e.g., tapping the screen with a finger) to select the region where the target object 22 is located with a rectangular frame 21.
In this way, when the user sees the drawn target object in the image displayed on the image display interface, the user can select it, and the electronic device responds to the first instruction corresponding to the selection operation by performing matting processing on the target object to obtain the first image. The target object in the live scene is captured by the camera of the electronic device and matted out, and the matted image of the target object is then attached to and displayed over the user's actual drawing paper area as captured by the camera. The user can thus observe, through the device screen, an augmented reality (AR) overlay effect of the first image on the drawing paper, and copy the target object onto the actual drawing paper while watching the screen.
It should be appreciated that in this embodiment, the electronic device uses a deep learning matting model to extract the foreground portion of the target object as the first image. Of course, the extracted first image is continuously displayed on the image display interface (such as at the central position of the image display interface), so that the user can move the electronic device to attach the first image to the drawing paper area.
The deep learning matting model can automatically separate the foreground from the background in the image, and an accurate matting result is generated. The deep learning matting model adopts an end-to-end structure, as shown in fig. 3, an input image is subjected to a series of convolution, downsampling, upsampling, and finally, a single-channel binary Mask with the same size as the input image is output.
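The patent does not disclose a concrete network, but the end-to-end structure described above can be illustrated with a minimal PyTorch sketch. All layer widths, the pooling and upsampling choices, and the sigmoid output are assumptions made here purely for illustration:

```python
import torch
import torch.nn as nn

class TinyMattingNet(nn.Module):
    """Minimal encoder-decoder sketch of the described structure: 3x3
    convolutions, one downsampling stage, one upsampling stage, a Concat
    skip connection, and a single-channel mask at the input resolution.
    Layer widths are illustrative assumptions, not the patent's model."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)   # downsampling (maximum pooling)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)  # upsampling (bilinear)
        # After Concat the channel count is 16 + 32 = 48.
        self.head = nn.Conv2d(48, 1, 3, padding=1)  # single-channel mask head

    def forward(self, x):
        e = self.enc(x)                       # [N, 16, H, W]
        m = self.mid(self.down(e))            # [N, 32, H/2, W/2]
        u = self.up(m)                        # [N, 32, H, W]
        cat = torch.cat([e, u], dim=1)        # Concat in the channel dimension
        return torch.sigmoid(self.head(cat))  # mask values in [0, 1]

mask = TinyMattingNet()(torch.rand(1, 3, 64, 64))
print(mask.shape)  # torch.Size([1, 1, 64, 64]): same spatial size as the input
```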
In the deep learning matting model, a 3×3 convolution refers to a convolution operation performed on the input image using a 3×3 convolution kernel consisting of 9 weight parameters. The convolution operation slides the kernel over the input image, multiplying the kernel element-wise with the pixels under it and summing the results to obtain each output value; sliding the kernel across the entire input image yields the full output, thereby extracting features of the image. 3×3 convolutions are widely used in deep learning for tasks such as image classification, object detection and image segmentation, and are considered efficient because the small kernel size keeps the parameter count and computation relatively low while still effectively capturing local features of an image.
In the deep learning matting model, downsampling is used to reduce the size or resolution of an image by reducing its number of pixels. During downsampling, a specific algorithm selects which information to preserve; the most common methods are average pooling and maximum pooling. Average pooling takes the mean of the pixel values in each small region (e.g., a 2×2 region) of the input image and uses it as the pixel value of the corresponding region in the output image, which reduces the image size while retaining image information to some extent. Maximum pooling instead takes the maximum pixel value of each small region, which helps preserve the main features of the image while reducing its size. Downsampling is widely used in deep learning, particularly in convolutional neural networks (Convolutional Neural Network, CNN): it gradually reduces the size of the feature maps, lowering the amount of computation and allowing more abstract, higher-level features to be extracted, which helps improve the efficiency and generalization ability of the model.
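As a brief illustration of the two pooling schemes on assumed 2×2 regions (the values and region size are illustrative only):

```python
import numpy as np

def pool2x2(img, mode="avg"):
    """Downsample a single-channel image by a factor of 2 using 2x2
    average or maximum pooling."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # crop to even size
    blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3)) if mode == "avg" else blocks.max(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
print(pool2x2(img, "avg"))  # each output pixel is the mean of a 2x2 region
print(pool2x2(img, "max"))  # each output pixel keeps the 2x2 maximum
```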
In the deep learning matting model, upsampling is used to increase the size or resolution of an image by increasing its number of pixels. During upsampling, a specific algorithm interpolates the new pixel values; the most common methods are nearest-neighbor interpolation, bilinear interpolation and bicubic interpolation. Bilinear interpolation calculates the value of each new pixel in the output image as a weighted average of neighboring pixels in the input image, which produces smoother results than nearest-neighbor interpolation.
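A short sketch of these interpolation modes using OpenCV's resize (illustrative data; the patent does not state which library is used):

```python
import cv2
import numpy as np

small = np.arange(16, dtype=np.float32).reshape(4, 4)
# Upsample 4x4 -> 8x8 with the three interpolation methods named above.
nearest  = cv2.resize(small, (8, 8), interpolation=cv2.INTER_NEAREST)
bilinear = cv2.resize(small, (8, 8), interpolation=cv2.INTER_LINEAR)  # smoother
bicubic  = cv2.resize(small, (8, 8), interpolation=cv2.INTER_CUBIC)
```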
In a neural network, connecting (Concat) feature maps means stitching them together along a certain dimension. This operation can be performed between different layers of a deep learning model to merge their feature information. Typically, the Concat operation is performed along the channel dimension of the feature maps. For example, given two feature maps A and B with dimensions [H, W, C1] and [H, W, C2], where H and W are the height and width and C1 and C2 are the channel counts, the Concat operation stitches them into a new feature map with dimensions [H, W, C1+C2]. Concatenating a feature map from an earlier layer with one from a later layer retains more information; the same mechanism supports multi-scale feature fusion, where feature maps from different levels are stitched together to improve the model's perception of information at different scales. Through Concat operations, the neural network can better exploit feature information from different levels, improving the expressive capacity and performance of the model.
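A minimal NumPy illustration of the Concat operation, with assumed channel counts C1 = 16 and C2 = 32:

```python
import numpy as np

H, W = 32, 32
A = np.zeros((H, W, 16), dtype=np.float32)  # feature map with C1 = 16 channels
B = np.zeros((H, W, 32), dtype=np.float32)  # feature map with C2 = 32 channels
merged = np.concatenate([A, B], axis=-1)    # stitched in the channel dimension
print(merged.shape)                         # (32, 32, 48), i.e. [H, W, C1+C2]
```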
In addition, considering the user's need to copy, in the embodiment of the present application, after the first image is obtained, step 12 is executed to determine the target position information of the first image in the drawing paper area captured by the camera, so as to avoid the situation where movement of the electronic device during the user's subsequent drawing affects the drawing effect.
Optionally, in this embodiment, the determining the target position information of the first image in the drawing area captured by the camera includes:
Under the condition that the camera captures a drawing paper area, responding to a second instruction of a user, and adjusting the position of the first image in the image display interface;
responding to a third instruction of a user, and acquiring related position information of the first image and the drawing paper area in the image display interface;
And determining the target position information according to the related position information.
The second instruction is generated when the user, having aimed the camera at the drawing paper so that the drawing paper area is captured, moves the electronic device and performs a position adjustment operation on the first image; it is used to adjust the position of the first image in the image display interface. For example, the user can drag the first image with a single finger or zoom it in and out with a two-finger touch gesture. The user can also make free adjustments according to personal needs, such as adjusting attributes of the first image, e.g. its transparency through a slider.
The third instruction is generated by a confirmation operation after the user finishes adjusting the position of the first image (for example, pressing the OK button), and is used for acquiring the target position information. Thus, once the user has finished adjusting the position of the first image, the electronic device obtains the target position information of the first image, so that the position of the first image can subsequently be adjusted with the target position information as a reference.
The related position information and the target position information may include the relative position between a pixel point of the first image and a reference point of the drawing paper area. The reference points of the drawing paper area may be the four vertexes of the drawing paper, or other position points.
In this embodiment, the target position information is determined according to the related position information: the related position information may either be used directly as the target position information, or the target position information may be calculated based on the related position information.
Optionally, the determining the target position information of the first image in the drawing paper area captured by the camera, or the adjusting the position of the first image in the image display interface according to the target position information and the change of the drawing paper area captured by the camera includes:
removing a background area in the frame image captured by the camera to obtain a second image;
Performing edge detection and contour extraction on the second image, and determining the position of the maximum contour;
and obtaining the position information of the reference point according to the position of the maximum outline.
In one implementation, for a user's confirmation operation, the electronic device may remove a background area of a current frame image captured by the camera to obtain a second image, then determine a position of a maximum contour through edge detection and contour extraction, and obtain reference point position information from the position of the maximum contour, so that relevant position information of the first image and the drawing paper area in the current frame image can be determined based on the reference point position information.
In one implementation, for the movement of the camera or the drawing paper in the drawing process, the electronic device removes the background area of the real-time frame image captured by the camera to obtain a second image, then determines the position of the maximum contour through edge detection and contour extraction, and then obtains the position information of the reference point from the position of the maximum contour, and then can adjust the position of the first image in the image display interface in real time based on the position information of the reference point.
Before the background area is removed, noise in the original frame image acquired by the camera is removed by a morphological closing operation, which consists of dilation followed by erosion.
After edge detection and contour extraction of the second image, the maximum contour may be determined as the contour enclosing the largest area.
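This pipeline can be sketched with OpenCV as follows; the grayscale pre-processing, kernel size and Canny thresholds are assumptions, since the embodiment does not fix them:

```python
import cv2

def find_paper_contour(frame_bgr):
    """Illustrative sketch: morphological closing to remove noise, then
    edge detection and contour extraction, keeping the maximum contour
    (the one enclosing the largest area), assumed to be the drawing paper."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Closing = dilation followed by erosion; suppresses small noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)
    edges = cv2.Canny(closed, 50, 150)  # edge detection (assumed thresholds)
    # OpenCV 4.x: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)  # maximum contour by area
```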
In addition, in this embodiment, the obtaining the reference point position information according to the position of the maximum contour includes:
Performing quadrilateral approximation processing on the maximum contour to determine the position information of four vertexes;
And sequencing the vertexes serving as reference points to obtain the position information of the reference points.
Here, the quadrilateral approximation process includes the following steps. The maximum contour is regarded as a curve, and the first and last points A and B of the curve are determined; then:
1) Connect A and B to obtain a straight line segment AB; this line segment is a chord of the curve;
2) Select the point C on the curve with the largest distance from the line segment, and calculate the distance d between C and AB;
3) Compare d with a preset threshold: if d is smaller than the threshold, take the line segment as the approximation of this curve segment and finish processing it;
if d is greater than the threshold, split the curve at C into two segments AC and CB, and process each segment separately according to steps 1) to 3) above.
When all curve segments have been processed, the division points connected in sequence form a polyline that serves as the approximating quadrilateral of the curve, yielding the position information of the four vertexes.
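This recursive splitting is the classic Ramer-Douglas-Peucker scheme; a plain NumPy sketch mirroring steps 1) to 3) is shown below (in practice, OpenCV's cv2.approxPolyDP performs the same approximation):

```python
import numpy as np

def approx_curve(points, threshold):
    """Approximate an ordered (N, 2) curve by recursive splitting:
    keep chord AB if the farthest point C is within `threshold`,
    otherwise split at C and process AC and CB separately."""
    a, b = points[0], points[-1]        # first and last points A, B
    if len(points) < 3:
        return [a, b]
    ab = b - a
    norm = np.hypot(ab[0], ab[1]) or 1e-12
    # Perpendicular distance of every point from the chord AB.
    d = np.abs(ab[0] * (points[:, 1] - a[1])
               - ab[1] * (points[:, 0] - a[0])) / norm
    i = int(np.argmax(d))               # point C farthest from AB
    if d[i] < threshold:
        return [a, b]                   # the chord approximates this segment
    left = approx_curve(points[: i + 1], threshold)
    right = approx_curve(points[i:], threshold)
    return left[:-1] + right            # drop the duplicated split point C
```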
It should be understood that the position information in this embodiment may be coordinate values. The reference points may then be ordered as follows: the reference point with the smallest sum of abscissa and ordinate is taken as the top-left reference point, the one with the largest sum as the bottom-right reference point, the one with the smallest difference between ordinate and abscissa as the top-right reference point, and the one with the largest difference as the bottom-left reference point.
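A small sketch of this ordering rule, assuming the reference points are given as (x, y) pixel coordinates with the y axis pointing down, as is usual for images:

```python
import numpy as np

def order_reference_points(pts):
    """Order four (x, y) vertices as [top-left, top-right, bottom-right,
    bottom-left] using the coordinate sum and difference rule above."""
    pts = np.asarray(pts, dtype=np.float32)
    s = pts.sum(axis=1)          # x + y: min -> top-left, max -> bottom-right
    d = pts[:, 1] - pts[:, 0]    # y - x: min -> top-right, max -> bottom-left
    return np.array([pts[np.argmin(s)], pts[np.argmin(d)],
                     pts[np.argmax(s)], pts[np.argmax(d)]])
```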
Optionally, in this embodiment, the adjusting the position of the first image in the image display interface according to the target position information and the change of the drawing area captured by the camera further includes:
under the condition that the drawing paper area captured by the camera moves, acquiring the position information of the reference point before and after the movement;
Determining a homography transformation matrix according to the position information of the reference points before and after the movement;
And adjusting the position of the first image in an image display interface according to the homography transformation matrix and the target position information.
Therefore, when movement occurs, the drawing paper area is stably tracked, the homography transformation matrix H of each frame image relative to the initial frame is calculated in real time from the change in the reference points, and the position of the first image in the image display interface is adjusted by combining H with the previously determined target position information. As shown in fig. 4, this achieves the effect of stably attaching the target object 41 to the drawing paper 42 in the image display interface 40. Here, the initial frame is the frame image at which the target position information was determined.
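A hedged OpenCV sketch of this adjustment; the function name and coordinates are illustrative assumptions, not the patent's implementation:

```python
import cv2
import numpy as np

def track_overlay_position(initial_pts, current_pts, target_xy):
    """Estimate the homography H mapping the four paper reference points
    of the initial frame onto the current frame, then map the stored
    target position of the first image into the current frame."""
    H, _ = cv2.findHomography(np.float32(initial_pts), np.float32(current_pts))
    src = np.float32([[target_xy]])                # shape (1, 1, 2) for OpenCV
    return cv2.perspectiveTransform(src, H)[0, 0]  # adjusted on-screen position

# Illustrative coordinates: the paper corners shift between frames and the
# overlay anchor follows them, keeping the first image attached to the paper.
init = [(100, 100), (500, 100), (500, 400), (100, 400)]
curr = [(120, 90), (510, 120), (520, 410), (110, 390)]
print(track_overlay_position(init, curr, (300, 250)))
```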
In the following, the application of the method according to the embodiment of the present application is illustrated with the example of a user who wants to draw a potted plant in their home, as shown in fig. 5:
The user starts the drawing assistance function of the electronic device, and the camera acquires images in real time and displays them on the image display interface. The user taps the screen with a finger and selects the potted plant as the target object with a rectangular frame. The electronic device obtains a first image through matting processing. The user then moves the electronic device and aims the camera at the drawing paper. The user adjusts the position, transparency and other attributes of the first image (a two-finger touch gesture zooms the first image in or out, a single finger drags it, and its transparency can be adjusted through a slider) and confirms the adjustment (by pressing the OK button); at this point, the image display interface displays the effect of the first image superposed on the drawing paper. Afterwards, when the drawing paper area captured by the camera moves, the drawing paper area is stably tracked, the homography transformation matrix of each frame image relative to the initial frame is calculated in real time, and the position of the first image in the image display interface is adjusted, so that the relative position of the first image and the drawing paper remains unchanged even as the electronic device moves. Finally, the user can copy the potted plant onto the drawing paper by following the outline of the object shown in the image display interface.
With the camera aimed at the plant, the plant can be separated from the background of the current frame; with the phone camera then aimed at the drawing paper, the phone screen presents the AR effect of the plant attached to the drawing paper in the actual real scene. The user can thus copy the potted plant stroke by stroke onto the actual drawing paper, following the content displayed over the paper in the image display interface.
In summary, according to the method of the embodiment of the application, the target object in the live scene captured by the camera of the electronic device is matted out, and the matted image of the target object is then attached to and presented on the user's actual drawing paper area as captured by the camera. The user can therefore observe the AR overlay effect of the matted image on the drawing paper through the device screen and copy the target object onto the actual drawing paper, which assists the user in achieving a high-quality drawing effect.
As shown in fig. 6, a drawing assisting apparatus according to an embodiment of the present application includes:
An acquisition module 61, configured to acquire a first image in the camera captured image; wherein the first image includes a drawn target object;
A first processing module 62, configured to determine target position information of the first image in a drawing area captured by the camera;
and the second processing module 63 is configured to adjust the position of the first image in the image display interface according to the target position information and the change of the drawing area captured by the camera.
For a target object drawn by the user, the apparatus can obtain a first image including the target object from the image captured by the camera. Then, as the user moves the electronic device, target position information of the first image is determined within the drawing paper area captured by the camera, so that the position of the first image in the image display interface is adjusted in time according to the target position information and changes in the drawing paper area captured by the camera. This makes it convenient for the user to copy the drawing, achieves a higher-quality copying effect without requiring professional drawing skills, and provides convenience for drawing beginners and enthusiasts.
Optionally, the acquiring module is further configured to:
displaying the image captured by the camera on the image display interface in the process of capturing the image by the camera;
and responding to a first instruction of a user, performing matting processing on the target object to obtain the first image.
Optionally, the first processing module is further configured to:
Under the condition that the camera captures a drawing paper area, responding to a second instruction of a user, and adjusting the position of the first image in the image display interface;
responding to a third instruction of a user, and acquiring related position information of the first image and the drawing paper area in the image display interface;
And determining the target position information according to the related position information.
Optionally, the first processing module or the second processing module is further configured to:
removing a background area in the frame image captured by the camera to obtain a second image;
Performing edge detection and contour extraction on the second image, and determining the position of the maximum contour;
and obtaining the position information of the reference point according to the position of the maximum outline.
Optionally, the second processing module is further configured to:
under the condition that the drawing paper area captured by the camera moves, acquiring the position information of the reference point before and after the movement;
Determining a homography transformation matrix according to the position information of the reference points before and after the movement;
And adjusting the position of the first image in an image display interface according to the homography transformation matrix and the target position information.
Optionally, the obtaining the reference point position information according to the position of the maximum contour includes:
Performing quadrilateral approximation processing on the maximum contour to determine the position information of four vertexes;
And sequencing the vertexes serving as reference points to obtain the position information of the reference points.
Optionally, the image display interface is configured to display the image captured by the camera in real time.
The implementation principle and technical effects of the apparatus of the embodiment of the application are similar to those of the method embodiment described above, and are not repeated here.
As shown in fig. 7, an embodiment of the present application further provides an electronic device, including: a memory 702, a processor 701, and a program stored on the memory 702 and executable on the processor 701;
the processor 701 is configured to read a program in the memory to implement the steps of the drawing assistance method described above.
The electronic device provided by the embodiment of the present application may execute the above method embodiment, and its implementation principle and technical effects are similar, and this embodiment will not be described herein.
Those skilled in the art will appreciate that all or part of the steps of implementing the above-described embodiments may be implemented by hardware, or may be implemented by instructing the relevant hardware by a computer program comprising instructions for performing some or all of the steps of the above-described methods; and the computer program may be stored in a readable storage medium, which may be any form of storage medium.
In addition, the specific embodiment of the present application also provides a readable storage medium, on which a program is stored, which when executed by a processor, implements the steps in the drawing assistance method described above. And the same technical effects can be achieved, and in order to avoid repetition, the description is omitted here.
The embodiment of the present application further provides a computer program product, which includes computer instructions, where the computer instructions, when executed by a processor, implement each process of the embodiment of the method shown in fig. 1 and achieve the same technical effects, and in order to avoid repetition, are not described herein.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may be physically included separately, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the foregoing is directed to the preferred embodiments of the present application, it will be appreciated by those skilled in the art that various modifications and changes can be made without departing from the principles of the present application, and such modifications and changes are intended to be within the scope of the present application.
Claims (11)
1. A drawing assistance method, characterized by being executed by an electronic device on which a camera is provided, the method comprising:
Acquiring a first image in the captured image of the camera; wherein the first image includes a drawn target object;
Determining target position information of the first image in a drawing paper area captured by the camera;
and adjusting the position of the first image in the image display interface according to the target position information and the change of the drawing paper area captured by the camera.
2. The method of claim 1, wherein the acquiring a first image of the camera captured images comprises:
displaying the image captured by the camera on the image display interface in the process of capturing the image by the camera;
and responding to a first instruction of a user, performing matting processing on the target object to obtain the first image.
3. The method of claim 1, wherein determining target location information of the first image in a drawing area captured by the camera comprises:
Under the condition that the camera captures a drawing paper area, responding to a second instruction of a user, and adjusting the position of the first image in the image display interface;
responding to a third instruction of a user, and acquiring related position information of the first image and the drawing paper area in the image display interface;
And determining the target position information according to the related position information.
4. The method of claim 1, wherein the determining the target location information of the first image in the drawing area captured by the camera or the adjusting the location of the first image in the image display interface according to the target location information and the change in the drawing area captured by the camera comprises:
removing a background area in the frame image captured by the camera to obtain a second image;
Performing edge detection and contour extraction on the second image, and determining the position of the maximum contour;
and obtaining the position information of the reference point according to the position of the maximum outline.
5. The method of claim 4, wherein adjusting the position of the first image in the image display interface based on the target position information and the change in the area of the drawing captured by the camera further comprises:
under the condition that the drawing paper area captured by the camera moves, acquiring the position information of the reference point before and after the movement;
Determining a homography transformation matrix according to the position information of the reference points before and after the movement;
And adjusting the position of the first image in an image display interface according to the homography transformation matrix and the target position information.
6. The method of claim 4, wherein the obtaining the reference point location information based on the location of the maximum profile comprises:
Performing quadrilateral approximation processing on the maximum contour to determine the position information of four vertexes;
And sequencing the vertexes serving as reference points to obtain the position information of the reference points.
7. The method of claim 1, wherein the image display interface is configured to display the image captured by the camera in real time.
8. A drawing assisting apparatus, characterized by comprising:
The acquisition module is used for acquiring a first image in the camera captured image; wherein the first image includes a drawn target object;
the first processing module is used for determining target position information of the first image in a drawing paper area captured by the camera;
and the second processing module is used for adjusting the position of the first image in the image display interface according to the target position information and the change of the drawing paper area captured by the camera.
9. An electronic device, comprising: a memory, a processor, and a program stored on the memory and executable on the processor; characterized in that
the processor is configured to read the program in the memory to implement the steps in the drawing assistance method according to any one of claims 1 to 7.
10. A readable storage medium storing a program, wherein the program when executed by a processor implements the steps in the drawing assistance method according to any one of claims 1 to 7.
11. A computer program product comprising computer instructions which, when executed by a processor, implement the steps in the drawing assistance method as claimed in any one of claims 1 to 7.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410399271.2A | 2024-04-03 | 2024-04-03 | Drawing assisting method, apparatus, device and readable storage medium |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN118295566A | 2024-07-05 |
Legal Events

| Date | Code | Title |
|---|---|---|
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |