CN109840928B - Knitting image generation method and device, electronic equipment and storage medium


Info

Publication number
CN109840928B
Authority
CN
China
Prior art keywords
image
coordinate point
mean square
module
threading
Prior art date
Legal status
Active
Application number
CN201910100730.1A
Other languages
Chinese (zh)
Other versions
CN109840928A (en)
Inventor
王再冉
郭小燕
郑文
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910100730.1A
Publication of CN109840928A
Application granted
Publication of CN109840928B


Abstract

The present disclosure provides a knitting image generation method and apparatus, an electronic device, and a storage medium. The method includes: acquiring an image input by a user; preprocessing the image to obtain a plurality of coordinate points on the periphery of the outer edge of a whiteboard tangent to the edge of the preprocessed image; determining a threading path of the preprocessed image according to the connecting lines among the coordinate points; and drawing the image according to the threading path to generate a knitting image. Because the threading path is determined automatically from the connecting lines between the coordinate points once the coordinate points on the periphery of the outer edge of the whiteboard have been determined, the amount of calculation is reduced and the operation efficiency is improved.

Description

Knitting image generation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and apparatus for generating a knitting image, an electronic device, and a non-transitory computer readable storage medium.
Background
With the improvement of people's living standards, the requirements on quality of life are becoming higher and higher, and decorative paintings are receiving more and more attention. Traditional decorative paintings are mainly made by hand; this is slow and inefficient, takes a long time, and is costly, and therefore cannot meet the needs of social and economic development.
In recent years, with the development of computer image processing technology, related techniques have emerged that can automatically or semi-automatically generate decorative drawings by computer image processing. For example, a pencil drawing may be generated by a method combining sketch and tone, or decorative drawings in the styles of different artists and at different levels of abstraction may be generated by a portrait-style-and-abstraction method. However, the sketch-and-tone method directly generates a pencil drawing, which cannot be adaptively modified according to the user's intention during its creation, reducing user satisfaction. The portrait-style-and-abstraction method converts the edge contour of the image into strokes of a certain artist's style and copies the strokes onto the image, which easily causes distortion, especially in key parts such as the chin, mouth and teeth. Moreover, under strong or dim lighting the whole image becomes too bright or too dark, a clear image often cannot be obtained, and the presented image contour is blurred, so that a great deal of calculation is required to extract the image contour, which reduces computational efficiency.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a method, an apparatus, an electronic device, and a non-transitory computer-readable storage medium for generating a knitted image.
According to a first aspect of embodiments of the present disclosure, there is provided a method of generating a knitting image, the method including:
acquiring an image input by a user;
preprocessing the image to obtain a plurality of coordinate points on the periphery of the outer edge of the whiteboard tangent to the preprocessed image edge;
determining a threading path of the preprocessed image according to the connecting lines among the coordinate points;
and drawing the image according to the threading path to generate a knitting image.
Optionally, before preprocessing the image, the method further includes:
contrast enhancement is carried out on the input image;
detecting the key part of the object in the enhanced image;
adjusting the key parts;
the preprocessing of the image to obtain a plurality of coordinate points on the periphery of the outer edge of the whiteboard tangent to the preprocessed image edge then specifically includes the following step:
and preprocessing the adjusted image to obtain a plurality of coordinate points on the periphery of the outer edge of the whiteboard tangent to the preprocessed image edge.
Optionally, the preprocessing the image to obtain a plurality of coordinate points on a periphery of an outer edge of the whiteboard tangent to the preprocessed image edge includes:
clipping the image into an image of a predetermined shape;
selecting a whiteboard with the same size as the image with the preset shape;
determining a plurality of nails uniformly distributed on a periphery of the outer edge of the whiteboard tangential to the image edge;
and calculating coordinate points of each nail in the image to obtain coordinate points of all nails.
Optionally, the determining the threading path of the preprocessed image according to the connection line between the coordinate points includes:
selecting one coordinate point from the plurality of coordinate points as a starting coordinate point;
calculating a difference value set of connecting lines between the initial coordinate point and each residual coordinate point respectively;
selecting a connecting line with the largest difference value from the difference value set as a threading path;
and continuously calculating a difference value set of connecting lines between the new initial coordinate point and each residual coordinate point by taking the coordinate point corresponding to the tail end of the threading path as a new initial coordinate point until one difference value in the difference value sets of all connecting lines is smaller than a set profit threshold or the total number of the connecting lines is larger than or equal to a preset connecting line threshold.
Optionally, the calculating the difference set of the connection lines between the starting coordinate point and each of the remaining coordinate points includes:
calculating a plurality of mean square errors between the initial coordinate point and each residual coordinate point before threading, and taking the plurality of mean square errors as a first mean square error set;
calculating a plurality of mean square errors between the initial coordinate point and each residual coordinate point after threading, and taking the plurality of mean square errors as a second mean square error set;
and calculating a difference value set of each mean square error in the first mean square error set before threading and the mean square error corresponding to the second mean square error set after threading.
Optionally, the selecting the connecting line with the largest difference from the difference set as a threading path includes:
comparing the differences in the difference set, and determining the maximum difference;
and selecting the connecting line with the maximum difference as a threading path.
Optionally, the drawing the image according to the threading path, generating a knitting image, includes:
drawing the image according to the threading path by adopting a supersampling antialiasing algorithm to generate the knitting image.
According to a second aspect of the embodiments of the present disclosure, there is provided a knitting image generating apparatus including:
An acquisition module configured to acquire an image input by a user;
the preprocessing module is configured to preprocess the image to obtain a plurality of coordinate points on the periphery of the outer edge of the whiteboard tangent to the preprocessed image edge;
a first determining module configured to determine a threading path of the preprocessed image according to a connection line between the plurality of coordinate points;
and the drawing module is configured to draw the image according to the threading path and generate a knitting image.
Optionally, the apparatus further includes:
an enhancement module configured to contrast enhance the input image before the preprocessing module preprocesses the image;
the detection module is configured to detect key parts of the object in the enhanced image;
the adjusting module is configured to adjust the key part;
the preprocessing module is specifically configured to preprocess the image adjusted by the adjusting module, to obtain a plurality of coordinate points on the periphery of the outer edge of the whiteboard tangent to the edge of the preprocessed image.
Optionally, the preprocessing module includes:
a cropping module configured to crop the image into an image of a predetermined shape;
A selection module configured to select a whiteboard of the same size as the image of the predetermined shape;
a second determination module configured to determine a plurality of nails uniformly distributed on a periphery of an outer edge of the whiteboard tangential to the image edge;
and the first calculation module is configured to calculate coordinate point information of each nail in the image to obtain coordinate points of all nails.
Optionally, the first determining module includes:
a first selection module configured to select one coordinate point from the plurality of coordinate points as a start coordinate point;
the second calculation module is configured to calculate a difference value set of connecting lines between the initial coordinate point and each residual coordinate point respectively;
the second selecting module is configured to select a connecting line with the largest difference value from the difference value set as a threading path;
and the iterative calculation module is configured to continuously calculate difference value sets of connecting lines between the new initial coordinate point and each residual coordinate point by taking the coordinate point corresponding to the tail end of the threading path as a new initial coordinate point until one difference value in the difference value sets of all connecting lines is smaller than a set profit threshold or the total number of connecting lines is larger than or equal to a preset connecting line threshold.
Optionally, the second computing module includes:
a first error calculation module configured to calculate a plurality of mean square errors between the start coordinate point and each of the remaining coordinate points before threading, respectively, and take the plurality of mean square errors as a first mean square error set;
a second error calculation module configured to calculate a plurality of mean square errors between the start coordinate point and each of the remaining coordinate points after threading, and take the plurality of mean square errors as a second mean square error set;
and the difference value calculating module is configured to calculate a difference value set of each mean square error in the first mean square error set before threading and a corresponding mean square error in the second mean square error set after threading.
Optionally, the second selecting module includes:
a maximum difference determination module configured to compare differences in the set of differences, determining a maximum difference;
and the path selection module is configured to select the connecting line with the maximum difference value as a threading path.
Optionally, the drawing module is specifically configured to draw the image according to the threading path by adopting a supersampling antialiasing algorithm, so as to generate the knitting image.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
A processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to perform the above-described knitting image generation method.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having instructions stored thereon which, when executed by a processor of a mobile terminal, cause the mobile terminal to perform the above-described knitting image generation method.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product, which, when executed by a processor of an electronic device, causes the electronic device to perform the above-described method of generating a knitted image.
The technical scheme provided by the embodiment of the disclosure can comprise the following beneficial effects:
in the method for generating a knitting image shown in this exemplary embodiment, an image input by a user is obtained and taken as the reference; the image is preprocessed to obtain a plurality of coordinate points on the periphery of the outer edge of a whiteboard tangent to the edge of the preprocessed image; a threading path of the preprocessed image is then determined according to the connecting lines among the coordinate points; and the image is drawn according to the threading path to generate a knitting image. Because the threading path is determined automatically from the connecting lines between the coordinate points once the coordinate points on the periphery of the outer edge of the whiteboard have been determined, the amount of calculation is reduced and the operation efficiency is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart illustrating a method of generating a knitting image according to an exemplary embodiment.
Fig. 2 is a schematic diagram of an image shown according to an exemplary embodiment.
Fig. 3 is a schematic diagram illustrating the generation of a knitted image based on fig. 2, according to an example embodiment.
Fig. 4 is another flowchart illustrating a method of generating a knitting image according to an exemplary embodiment.
Fig. 5 is a block diagram illustrating a knitting image generating apparatus according to an exemplary embodiment.
Fig. 6 is another block diagram of a knitting image generation apparatus according to an exemplary embodiment.
FIG. 7 is a block diagram of a preprocessing module shown according to an exemplary embodiment.
Fig. 8 is a block diagram of an electronic device, according to an example embodiment.
Fig. 9 is another block diagram illustrating a structure of an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
Fig. 1 is a flowchart illustrating a method for generating a knitting image according to an exemplary embodiment, and as shown in fig. 1, the method for generating a knitting image is used in a terminal, and includes the steps of:
in step 101, an image input by a user is acquired.
In this step, the user inputs an image to the terminal; that is, the terminal detects the image input by the user. The image may be a picture of a hand-made knitting-type decorative picture or a pencil drawing, or, of course, another picture or photograph of a decorative picture with an artistic style.
In step 102, the image is preprocessed to obtain a plurality of coordinate points on a periphery of the outer edge of the whiteboard tangent to the preprocessed image edge.
The preprocessing in this step can be chosen by a person skilled in the art according to actual requirements. It specifically includes the following steps:
firstly, cutting the input image or the adjusted image into an image with a preset shape;
that is, when receiving an operation instruction input by a user, the terminal clips the image into an image of a predetermined shape according to the operation instruction input by the user, wherein the predetermined shape may be a circle, a square, a rectangle, or a regular polygon, etc. For example, the image is cut into square or rectangular images, or the like. The cutting process is well known in the art and will not be described in detail herein.
For ease of understanding, refer also to fig. 2, which shows a schematic diagram of an image according to an exemplary embodiment. Fig. 2 is an example of an original image, and may also be an image obtained after contrast enhancement and adjustment. After a whiteboard of the same shape and size as the image is selected, N nails (for example, 256) are determined to be uniformly distributed on the periphery of the outer edge of the whiteboard tangential to the image edge. The more nails are placed on the periphery of the outer edge of the whiteboard, the smaller the difference between the generated knitting image and the original image, i.e. the closer it is to the original image; conversely, the fewer the nails, the larger the difference and the less it resembles the original image.
The nail in this embodiment serves as a positioning point for subsequently determining the threading path. If the image is cut into a circle, the periphery of the outer edge of the whiteboard is the circumference of the circle; if the image is cut into a square, the periphery of the outer edge of the whiteboard is the four sides of the square; other shapes are similar and are not described in detail herein.
Secondly, selecting a whiteboard with the same size as the image with the preset shape;
the whiteboard in this step, which is a blank template that can be processed by computer software, is assumed to have the same shape and size as the original image.
That is, the terminal stores a whiteboard of various shapes, such as square, rectangle, regular polygon, etc., in advance. A whiteboard of a corresponding shape may then be selected based on the image of the predetermined shape.
Again, a number of nails uniformly distributed on the periphery of the outer edge of the whiteboard tangential to the edge of the image (i.e. the original image) is determined; that is, the number of nails on the whiteboard is determined. The number may be 100, 256, 300 or 500, or indeed any number between 50 and 500, and may be set empirically in advance; in general, the number of nails may be set to 256, and it may of course be increased or decreased according to actual needs, which is not limited in this embodiment. As noted above, the more nails on the periphery of the outer edge of the whiteboard, the smaller the difference between the generated knitting image and the original image, and the closer it is to the original; conversely, the fewer the nails, the larger the difference and the less it resembles the original.
And finally, calculating coordinate points of each nail in the image to obtain coordinate points of all nails.
The first method is as follows: assume that N nails are uniformly distributed on the periphery of the outer edge of the whiteboard tangent to the image edge, that the angle corresponding to the i-th nail is α_i (0 ≤ α_i ≤ 2π), that the radius is r, and that the center coordinates of the whiteboard are (x_o, y_o). The coordinates (c_i, r_i) of the coordinate points on the periphery of the outer edge of the whiteboard tangent to the image edge can then be obtained by the following formulas:
c_i = x_o + cos(α_i) * r
r_i = y_o + sin(α_i) * r
That is, the coordinate information (c_i, r_i) of each nail in the image can be calculated by the above formulas, where i ≤ N, N is a natural number, c_i is the abscissa (column) of the i-th nail in the image, r_i is the ordinate (row) of the i-th nail in the image, α_i is the angle of the line between the i-th nail and the center, and α_i = 2π × i / N (i.e. 360 × i / N in degrees).
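The circular coordinate-point calculation above can be sketched as follows; the function and variable names are illustrative rather than taken from the patent, and the angle is computed in radians.

```python
import math

def nail_coordinates(n_nails, radius, center):
    """Compute (column, row) coordinates of n_nails evenly spaced on a circle,
    following c_i = x_o + cos(a_i)*r and r_i = y_o + sin(a_i)*r.
    Illustrative sketch; names are assumptions, not from the patent."""
    xo, yo = center
    points = []
    for i in range(n_nails):
        angle = 2 * math.pi * i / n_nails  # a_i = 2*pi*i/N
        c = xo + math.cos(angle) * radius  # column (horizontal) coordinate
        r = yo + math.sin(angle) * radius  # row (vertical) coordinate
        points.append((c, r))
    return points
```

With 256 nails this yields the evenly spaced anchor points the text describes; more nails give a denser perimeter and a result closer to the original image.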
The second method is as follows: if the whiteboard tangent to the edge of the image is square, the N nails are uniformly distributed around the perimeter of the square, that is, on its upper, lower, left and right sides, with equal Euclidean distance between adjacent nails. The calculation of the coordinate points is similar to the first method and is not described in detail.
It should be noted that, in the embodiment of the present invention, the original image may be cut into different shapes, for example a circle or a square. If the original image is cut into a circle, the first coordinate-point calculation method applies; if it is cut into a square, the second method applies; and so on.
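For the square case, one possible sketch places the nails at equal intervals of perimeter distance; the patent gives no explicit formulas here, so the scheme and names below are assumptions.

```python
def square_nail_coordinates(n_nails, side):
    """Place n_nails at equal perimeter intervals around a square whose
    top-left corner is (0, 0), returning (x, y) pairs. Illustrative sketch
    of the second method; the even-spacing scheme is an assumption."""
    perimeter = 4 * side
    step = perimeter / n_nails
    points = []
    for i in range(n_nails):
        d = i * step  # distance travelled along the perimeter
        if d < side:                      # top edge, left -> right
            points.append((d, 0))
        elif d < 2 * side:                # right edge, top -> bottom
            points.append((side, d - side))
        elif d < 3 * side:                # bottom edge, right -> left
            points.append((3 * side - d, side))
        else:                             # left edge, bottom -> top
            points.append((0, 4 * side - d))
    return points
```

Equal perimeter spacing gives equal Euclidean distance between adjacent nails along each side, as the text requires.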
In step 103, a threading path of the preprocessed image is determined according to the connection line between the coordinate points.
In the above steps, a whiteboard of the same size as the image has been selected, and a plurality of nails have been placed on the periphery of its outer edge, which is tangent to the image edge. The corresponding threading path is determined from the difference set of the connecting lines between each nail and each of the remaining nails. Each time a threading path is determined, a greedy algorithm is adopted: assuming N nails and starting from the i-th nail (as the starting coordinate point), the gains obtained by the connecting paths from the i-th nail to the other N−1 nails are compared, and the connecting line with the largest difference in the difference set is selected as the threading path. The coordinate point at the end of that threading path is then taken as the new starting coordinate point, and the difference set of the connecting lines between the new starting coordinate point and each remaining coordinate point is calculated in turn, until the threading paths of all coordinate points have been determined, or until the maximum difference in the difference sets of all connecting lines is smaller than a set gain threshold, or the total number of connecting lines is greater than or equal to a preset connecting-line threshold. The gain threshold and the connecting-line threshold are preset empirically; for example, the gain threshold may be greater than or equal to 0, and if the number of nails is 256, the connecting-line threshold may be set to about 4000.
It should be noted that the gain in this embodiment is the difference between the mean square error between the original image and the image before threading and the mean square error between the original image and the image after threading, as shown in the following formula:
gain = mse_b − mse_a
where mse_b is the mean square error before threading and mse_a is the mean square error after threading.
In this embodiment, the similarity between the knitting image and the original image is measured by the mean square error (MSE) between them: the smaller the mean square error, the greater the similarity, and vice versa. Therefore, the connecting line with the largest gain is selected as the threading path in this embodiment.
Since only the pixel values on the connecting line change before and after threading, the mean square error between the generated knitting image I_D (i.e. the target image) and the original image I_G can be simplified to the mean square error between the pixel values on the connecting line, which reduces the amount of calculation and improves calculation efficiency. The process of calculating the mean square error between the pixel values on a connecting line is well known in the art and is not described here.
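The per-line gain described above — mean square error restricted to the pixels on the candidate connecting line, before threading minus after — might be sketched as follows. The thread is assumed to be drawn in black (pixel value 0), and all names are illustrative:

```python
def line_gain(original, canvas, line_pixels, thread_value=0.0):
    """Gain of drawing one thread: MSE over the line's pixels before the
    thread is drawn, minus MSE after. `original` and `canvas` are 2-D lists
    of grayscale values indexed [row][col]; `line_pixels` lists the
    (row, col) pairs on the candidate line. Illustrative sketch only."""
    n = len(line_pixels)
    # MSE between original and current canvas, restricted to the line
    mse_b = sum((original[r][c] - canvas[r][c]) ** 2 for r, c in line_pixels) / n
    # MSE after the thread overwrites those pixels with thread_value
    mse_a = sum((original[r][c] - thread_value) ** 2 for r, c in line_pixels) / n
    return mse_b - mse_a  # gain = mse_b - mse_a
```

A line crossing a dark region of the original (e.g. eyebrows or pupils of a portrait) yields a large positive gain, while a line over a bright region yields a small or negative one, matching the discussion above.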
The method for determining the threading path of the preprocessed image according to the connecting lines among the coordinate points specifically comprises the following steps:
1) Selecting one coordinate point from the plurality of coordinate points as a starting coordinate point;
2) Calculating a difference value set of connecting lines between the initial coordinate point and each residual coordinate point respectively;
In this step, a plurality of mean square errors between the starting coordinate point and each remaining coordinate point before threading are first calculated and taken as a first mean square error set; then, a plurality of mean square errors between the starting coordinate point and each remaining coordinate point after threading are calculated and taken as a second mean square error set; finally, a difference set is calculated between each mean square error in the first mean square error set (before threading) and the corresponding mean square error in the second mean square error set (after threading), and this difference set is taken as the gain of the connecting lines between the starting coordinate point and each remaining coordinate point.
3) Selecting a connecting line with the largest difference value from the difference value set as a threading path;
In this step, the differences in the difference set are first compared and the maximum difference is determined; the connecting-line gain corresponding to the maximum difference is the maximum gain. Then, the connecting line with the largest difference is selected as the threading path. This is because the larger the difference, the greater the gain before and after threading, and the more similar the image after threading is to the original image; the smaller the difference, the smaller the gain.
That is, for the determination of the threading path, assume that there is a whiteboard I_D of the same size as the original image I_G, with N nails arranged on the periphery of its outer edge, tangent to the image edge. The original picture is drawn out by the pairwise connecting lines among the nails, and the calculation of the threading path can be regarded as an optimization problem: making the knitting picture as similar as possible to the original image. When the original image is a portrait, for example, the connecting lines among nails should pass more through dark regions such as the black eyebrows and pupils, while fewer connecting lines should pass through the smooth face and the whites of the eyes. Each time a thread is placed, the present disclosure employs a greedy algorithm: assuming the current nail is the i-th nail, the gains of the N−1 possible threadings starting from the i-th nail and ending at each of the other N−1 nails are compared, and the line with the maximum gain is selected as the threading path.
4) And continuously calculating a difference value set of the connection lines between the new initial coordinate point and each residual coordinate point by taking the coordinate point corresponding to the tail end of the threading path as a new initial coordinate point until one difference value in the difference value sets of all the connection lines is smaller than a set profit threshold or the total number of the connection lines is larger than or equal to a preset connection line threshold.
For ease of understanding, please refer to the following examples.
Assume that N nails are arranged on the periphery of the outer edge of the whiteboard, and randomly select a starting point S from the N nails. The difference set of the connecting lines between point S and the other N−1 points is then calculated; the differences are compared, and the connecting line with the largest difference, denoted (i_1, j_1), is selected as a threading path.
Thereafter, the difference set of the connecting lines between point j_1 and the other N−1 points is calculated; the differences are compared again, and the connecting line with the largest difference, denoted (i_2, j_2), is selected as a threading path.
The above steps are repeated until the maximum difference in the difference sets of all connecting lines is smaller than the set gain threshold th_1, or the total number of connecting lines is greater than the preset connecting-line threshold th_2. The gain threshold th_1 and the connecting-line threshold th_2 are both set empirically. That is, as the number of connecting lines increases, the gain obtained by adding a new line becomes small and may even become negative; at that point, adding a new thread contributes little to, and may even damage, the final knitting effect. Therefore, the present disclosure sets the thresholds th_1 and th_2, and the algorithm terminates when either of the above conditions is met.
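The greedy loop with the two stopping conditions (gain threshold th_1 and line-count threshold th_2) might look like this; `gain_fn` is an assumed callback returning the gain of line (i, j), and the default values mirror the examples in the text:

```python
def greedy_threading(nails, gain_fn, gain_threshold=0.0, max_lines=4000, start=0):
    """Greedy path selection sketch: from the current nail, pick the line to
    a remaining nail with the largest gain; stop when the best gain drops
    below gain_threshold (th_1) or the line count reaches max_lines (th_2).
    `nails` is any sequence of nail positions; names are illustrative."""
    path = [start]
    current = start
    for _ in range(max_lines):
        # gain of every candidate line from the current nail
        candidates = [(gain_fn(current, j), j)
                      for j in range(len(nails)) if j != current]
        best_gain, best_j = max(candidates)
        if best_gain < gain_threshold:
            break  # adding more lines would no longer improve similarity
        path.append(best_j)
        current = best_j  # end of this thread becomes the new start point
    return path
```

In a full implementation `gain_fn` would compute the MSE gain over the line's pixels and the canvas would be updated after each accepted thread; both are omitted here for brevity.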
In step 104, the image is drawn according to the threading path, and a knitting image is generated.
In this step, since the image is represented by rasterization on the computer, drawing a straight line directly on the image produces aliasing; therefore, a supersampling antialiasing (SSAA, Super Sampling Anti-Aliasing) algorithm can be used in the present disclosure to draw the image along the threading path and generate the knitting image. The knitting image is a knitting-style decorative picture.
The generated knitting image is a black-and-white image, as shown in fig. 3, which is a schematic diagram of a knitting image generated based on fig. 2 according to an exemplary embodiment. The plurality of horizontally and vertically crossing straight lines displayed in the image are the selected threading paths.
The present disclosure employs a supersampling antialiasing (SSAA) algorithm, whose greatest feature lies in the sampling process. When a frame of m×n pixels is to be rendered, SSAA first renders a buffer of (m×S)×(n×S) pixels, i.e. with the length and width each multiplied by S, and then downsamples it back to the original size. Obviously, this method is very costly. Based on this, the difference between the present disclosure and the existing algorithm is that, when a frame of m×n pixels is to be displayed, the values changed after supersampling are estimated directly on the frame of the original size, which greatly reduces the amount of calculation and improves the calculation efficiency. Because the position of each nail is fixed after initialization, the nails on the upscaled image correspond one-to-one to the nails on the original image; the present disclosure therefore records, for every pair of nails, the pixel values to which the connecting line on the upscaled image maps on the original-size image, and directly reads the corresponding pixel values when drawing lines on the original image, thereby improving the calculation efficiency and saving calculation time.
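A hedged sketch of the caching idea described above: since the nails are fixed after initialization, the pixels each nail-pair line covers can be precomputed once and looked up during drawing. The names are assumptions, and a simple DDA rasteriser stands in for the patent's supersampled mapping:

```python
def line_pixels(p0, p1):
    """Pixels covered by the line from p0 to p1 on the original-size grid
    (simple DDA rasterisation standing in for the supersampled mapping)."""
    (x0, y0), (x1, y1) = p0, p1
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    return [(round(x0 + (x1 - x0) * t / steps),
             round(y0 + (y1 - y0) * t / steps)) for t in range(steps + 1)]

def build_line_cache(nails):
    """Precompute, for every pair of fixed nails, the pixel coordinates
    their connecting line maps to, so drawing only needs a cache lookup."""
    cache = {}
    for i, a in enumerate(nails):
        for j, b in enumerate(nails):
            if i < j:             # lines are undirected: store each pair once
                cache[(i, j)] = line_pixels(a, b)
    return cache
```

The cache is built once per image size; every subsequent gain evaluation or draw then avoids re-rasterising the line.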
In the method for generating a knitting image shown in the present exemplary embodiment, an image input by a user is acquired, and the image is preprocessed to obtain a plurality of coordinate points on the periphery of the outer edge of the whiteboard tangent to the edge of the preprocessed image; then, a threading path of the preprocessed image is determined according to the connecting lines among the plurality of coordinate points; and the image is drawn according to the threading path to generate a knitting image. Therefore, according to the method for generating the knitting image, with the received input image as the reference, after the plurality of coordinate points on the periphery of the outer edge of the whiteboard tangent to the image edge are determined, the threading path is automatically determined according to the connecting lines among the plurality of coordinate points, so that the amount of calculation is reduced and the operation efficiency is improved.
Further, the method performs contrast enhancement on the image and adjusts the key parts in the image; that is, the method combines the face detection technology to locate the key parts, and performs overall enhancement on the image and adjustment of the key parts, so that the generated knitting image is more stereoscopic and more vivid.
Further, the threading path can be automatically determined according to the connecting lines between nails on the periphery of the outer edge of the whiteboard, and the connecting line with the largest difference value is selected as the threading path; that is, the problem of selecting the threading path is treated as a greedy mean-square-error benefit problem, and the problem of calculating the overall image benefit is simplified into solving the difference sets of the related connecting lines, which improves the operation efficiency.
Still further, the method adopts the super sampling anti-aliasing algorithm to draw the image according to the threading path, so that the aliasing effect is reduced, the calculated amount is reduced, and the operation efficiency is improved.
Fig. 4 is a flowchart illustrating a method for generating a knitting image according to an exemplary embodiment, and as shown in fig. 4, the method for generating a knitting image is used in a terminal, and includes the steps of:
in step 401, an image input by a user is acquired.
The step is the same as step 101, and is specifically described above, and will not be described here again.
In step 402, the input image is contrast enhanced.
Images in natural scenes often appear too bright or too dark due to different illumination conditions, resulting in an unclear image. For this reason, in the present disclosure, the image may be enhanced as a whole. The overall enhancement is: after the image input by the user is received, contrast enhancement is performed on all pixels of the image, for example, the pixel values of the whole image are stretched to a certain interval, so that the image is clearer.
In step 403, key parts of the object in the image after enhancement are detected.
In this step, if the object in the image is a human face, the key parts can be detected by a face detection technique. The key parts can be features such as the eyes, nose, and mouth. Specific face detection techniques are well known and will not be described here.
In step 404, the key location is adjusted;
the adjustment in this step is to adjust one or more of the size, shape and color of the key part, and of course, other aspects of the key part may be adjusted as required, which is not limited in this disclosure.
For example, when processing a head portrait in an image, the eye part is often not clear enough, especially for a head portrait with small or squinted eyes. Based on this, the size of the eyes or the colors of the upper and lower eyelids can be adaptively adjusted; for example, the colors of the upper and lower eyelids can be deepened, so that a more stereoscopic knitted picture is obtained. If the eyes in the image are not processed, the eye-white and pupil areas are likely to be indistinguishable in the final knitted picture, producing a blank-eyed effect. Therefore, the eye white should have good contrast with the pupil area. Based on this, in the present disclosure, the pixel values of the eye-white and pupil areas are adjusted so that the average pixel value of each part falls within a reasonable range; for example, the average pixel value of the eye-white area is set to 170 and that of the pupil area to 9. In order to make the final knitted picture more lifelike, the pupil area of the eye is detected, and a highlight point is added at the intersection of the line connecting the two corners of the eye and the line connecting the center points of the upper and lower eyelids, so that a better knitting result can be obtained. After the image is enhanced in this way, the knitting lines are more concentrated at the pupil, the eye white and the highlight point are avoided as much as possible, and the final knitted picture has a better visual effect.
In step 405, preprocessing the adjusted image to obtain a plurality of coordinate points on a periphery of the outer edge of the whiteboard tangent to the edge of the preprocessed image;
in step 406, determining a threading path of the preprocessed image according to the connection lines among the coordinate points;
in step 407, the image is drawn according to the threading path, and a knitting image is generated.
The steps 405 to 407 are the same as the steps 102 to 104, and detailed descriptions are omitted here.
The method for generating a knitting image shown in the present exemplary embodiment takes the acquired image input by a user as the reference, performs contrast enhancement on the input image, detects the key parts of the object in the enhanced image, adjusts the key parts, and then preprocesses the adjusted image to obtain a plurality of coordinate points on the periphery of the outer edge of the whiteboard tangent to the edge of the preprocessed image; a threading path of the preprocessed image is determined according to the connecting lines among the coordinate points; and the image is drawn according to the threading path to generate a knitting image. Therefore, according to the method for generating the knitting image, the contrast of the input image is enhanced to reduce the influence of illumination, and then the key parts in the enhanced image are detected and adjusted, so that the generated knitting image is more stereoscopic and more vivid; after the plurality of coordinate points on the periphery of the outer edge of the whiteboard tangent to the image edge are determined, the threading path is automatically determined according to the connecting lines between the coordinate points, so that the amount of calculation is reduced and the operation efficiency is improved.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Fig. 5 is a block diagram of a knitting image generating apparatus according to an exemplary embodiment. Referring to fig. 5, the apparatus includes: an acquisition module 501, a preprocessing module 502, a first determination module 503 and a drawing module 504.
The acquiring module 501 is configured to acquire an image input by a user;
the preprocessing module 502 is configured to preprocess the image to obtain a plurality of coordinate points on a periphery of the outer edge of the whiteboard tangent to the preprocessed image edge;
the first determining module 503 is configured to determine a threading path of the preprocessed image according to the connection lines between the coordinate points, wherein the threading path passes through a key part of the object in the image;
The rendering module 504 is configured to render the image according to the threading path, generating a knitting image.
Optionally, in another embodiment, based on the foregoing embodiment, the apparatus may further include: an enhancement module 601, a detection module 602 and an adjusting module 603, the structural block diagram of which is shown in fig. 6, where fig. 6 is based on the block diagram shown in fig. 5, wherein,
the enhancement module 601 is configured to enhance contrast of the input image before the preprocessing module preprocesses the image;
the detection module 602 is configured to detect a key part of the object in the enhanced image;
the adjusting module 603 is configured to adjust the key part;
the preprocessing module 502 is specifically configured to preprocess the image adjusted by the adjusting module 603, so as to obtain a plurality of coordinate points on a periphery of the outer edge of the whiteboard tangent to the edge of the preprocessed image.
Optionally, in another embodiment, based on the foregoing embodiment, the preprocessing module 502 includes: a clipping module 701, a selection module 702, a second determination module 703 and a first calculation module 704, the structural block diagram of which is shown in fig. 7, wherein,
The cropping module 701 is configured to crop the image into an image of a predetermined shape;
the selecting module 702 is configured to select a whiteboard having the same size as the image of the predetermined shape;
the second determining module 703 is configured to uniformly distribute a plurality of nails along a periphery of the outer edge of the whiteboard tangential to the image edge, wherein the plurality of nails are a predetermined number of nails;
the first calculating module 704 is configured to calculate coordinate point information of each nail in the image, so as to obtain coordinate points of all nails.
Optionally, in another embodiment, based on the foregoing embodiment, the first determining module includes: a first selection module, a second calculation module, a second selection module and an iterative calculation module (not shown), wherein,
the first selection module is configured to select one coordinate point from the plurality of coordinate points as a starting coordinate point;
the second calculation module is configured to calculate a difference value set of connecting lines between the initial coordinate point and each residual coordinate point respectively;
the second selecting module is configured to select a connecting line with the largest difference value from the difference value set as a threading path;
The iterative calculation module is configured to continuously calculate difference value sets of connecting lines between the new initial coordinate point and each residual coordinate point by taking the coordinate point corresponding to the tail end of the threading path as a new initial coordinate point until one difference value in the difference value sets of all connecting lines is smaller than a set profit threshold or the total number of connecting lines is larger than or equal to a preset connecting line threshold.
Optionally, in another embodiment, based on the foregoing embodiment, the second calculating module includes: a first error calculation module, a second error calculation module and a difference calculation module (not shown), wherein,
the first error calculation module is configured to calculate a plurality of mean square errors between the starting coordinate point and each residual coordinate point before threading, and the plurality of mean square errors are used as a first mean square error set;
the second error calculation module is configured to calculate a plurality of mean square errors between the starting coordinate point and each residual coordinate point after threading, and the plurality of mean square errors are used as a second mean square error set;
the difference calculating module is configured to calculate a difference set of each mean square error in the first mean square error set before threading and a corresponding mean square error in the second mean square error set after threading, and take the difference set as the benefit of the connection between the starting coordinate point and each residual coordinate point.
Optionally, in another embodiment, based on the foregoing embodiment, the second selecting module includes: a maximum difference determination module and a path selection module (not shown), wherein,
the maximum difference determination module is configured to compare the difference values in the difference set and determine the maximum difference value, wherein the connecting-line benefit corresponding to the maximum difference value is the maximum benefit;
And the path selection module is configured to select the connecting line with the maximum difference value as a threading path.
Optionally, in another embodiment, based on the foregoing embodiment, the drawing module is specifically configured to draw the image according to the threading path by using a supersampling antialiasing algorithm, and generate the knitting image.
The specific manner in which the various modules perform operations in the apparatus of the above embodiment has been described in detail in the embodiments of the method, and will not be described in detail here.
The knitted image generating device shown in the present exemplary embodiment takes the input image acquired by the acquiring module as the reference, performs contrast enhancement on the image, and adjusts the key parts in the image; that is, the present disclosure combines the face detection technology to locate the key parts, and performs overall enhancement on the image and adjustment of the key parts, so that the generated knitting image is more stereoscopic and more realistic. Further, the threading path can be automatically determined according to the connecting lines between nails on the periphery of the outer edge of the whiteboard, and the connecting line with the largest difference value is selected as the threading path; that is, the problem of selecting the threading path is treated as a greedy mean-square-error benefit problem, and the problem of calculating the overall image benefit is simplified into solving the difference sets of the related connecting lines, which improves the operation efficiency. Furthermore, the supersampling antialiasing algorithm is adopted to draw the image according to the threading path, which reduces the aliasing effect, reduces the amount of calculation, and improves the operation efficiency.
Fig. 8 is a block diagram of an electronic device 800, according to an example embodiment. The electronic device may be a mobile terminal or a server; in the embodiments of the present disclosure, a mobile terminal is taken as an example for description. For example, the electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 8, an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an on/off state of the device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of a user's contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the electronic device 800 and other devices, either wired or wireless. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 can be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the method of generating a knitted image shown in figs. 1 and 2 described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including instructions executable by processor 820 of the electronic device 800 to perform the method of generating a knitted image shown in figs. 1 and 2 described above. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
In an exemplary embodiment, a computer program product is also provided, which when instructions in the computer program product are executed by the processor 820 of the electronic device 800, cause the electronic device 800 to perform the above described method of generating a knitted image shown in fig. 1, 2.
Fig. 9 is a block diagram of an electronic device 900, according to an example embodiment. For example, the electronic device 900 may be provided as a server. Referring to fig. 9, the electronic device 900 includes a processing component 922 that further includes one or more processors, and memory resources, represented by memory 932, for storing instructions, such as application programs, that are executable by the processing component 922. The application programs stored in memory 932 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 922 is configured to execute instructions to perform the method of generating a knitted image shown in figs. 1 and 2 described above.
The electronic device 900 may also include a power supply component 926 configured to perform power management for the electronic device 900, a wired or wireless network interface 950 configured to connect the electronic device 900 to a network, and an input/output (I/O) interface 958. The electronic device 900 may operate based on an operating system stored in memory 932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (14)

1. A method of generating a knitted image, comprising:
acquiring an image input by a user;
preprocessing the image to obtain a plurality of coordinate points on the periphery of the outer edge of the whiteboard tangent to the edge of the preprocessed image;
determining a threading path of the preprocessed image according to the connecting lines among the coordinate points;
drawing the image according to the threading path to generate a knitting image;
The determining a threading path of the preprocessed image according to the connecting lines among the coordinate points comprises the following steps:
selecting one coordinate point from the plurality of coordinate points as a starting coordinate point;
calculating a difference value set of connecting lines between the initial coordinate point and each residual coordinate point respectively;
selecting a connecting line with the largest difference value from the difference value set as a threading path;
and continuously calculating a difference value set of connecting lines between the new initial coordinate point and each residual coordinate point by taking the coordinate point corresponding to the tail end of the threading path as a new initial coordinate point until one difference value in the difference value sets of all connecting lines is smaller than a set profit threshold or the total number of the connecting lines is larger than or equal to a preset connecting line threshold.
2. The method of generating a knitted image according to claim 1, wherein prior to preprocessing the image, the method further comprises:
contrast enhancement is carried out on the input image;
detecting the key part of the object in the enhanced image;
adjusting the key parts;
preprocessing the image to obtain a plurality of coordinate points on the periphery of the outer edge of the whiteboard tangent to the preprocessed image edge, wherein the method specifically comprises the following steps of:
And preprocessing the adjusted image to obtain a plurality of coordinate points on the periphery of the outer edge of the whiteboard tangent to the preprocessed image edge.
3. The method for generating a knitted image according to claim 1 or 2, wherein the preprocessing the image to obtain a plurality of coordinate points on a periphery of an outer edge of the whiteboard tangential to an edge of the preprocessed image includes:
clipping the image into an image of a predetermined shape;
selecting a whiteboard with the same size as the image with the preset shape;
determining a plurality of nails uniformly distributed on a periphery of the outer edge of the whiteboard tangential to the image edge;
and calculating coordinate points of each nail in the image to obtain coordinate points of all nails.
4. The method according to claim 1, wherein calculating the difference set of the connection lines between the start coordinate point and each of the remaining coordinate points, respectively, includes:
calculating a plurality of mean square errors between the initial coordinate point and each residual coordinate point before threading, and taking the plurality of mean square errors as a first mean square error set;
calculating a plurality of mean square errors between the initial coordinate point and each residual coordinate point after threading, and taking the plurality of mean square errors as a second mean square error set;
And calculating a difference value set of each mean square error in the first mean square error set before threading and the mean square error corresponding to the second mean square error set after threading.
5. The method for generating a knitting image according to claim 4, characterized in that the selecting a line of the largest difference from the set of differences as a threading path includes:
comparing the differences in the difference set, and determining the maximum difference;
and selecting the connecting line with the maximum difference as a threading path.
6. The method of generating a knitting image according to claim 1 or 2, characterized in that the drawing the image according to the threading path, generating a knitting image, includes:
and drawing the image according to the threading path by adopting a supersampling antialiasing algorithm, and generating the knitting image.
7. A knitting image generating apparatus comprising:
an acquisition module configured to acquire an image input by a user;
the preprocessing module is configured to preprocess the image to obtain a plurality of coordinate points on the periphery of the outer edge of the whiteboard tangent to the preprocessed image edge;
a first determining module configured to determine a threading path of the preprocessed image according to a connection line between the plurality of coordinate points;
a drawing module configured to draw the image according to the threading path to generate a knitting image;
wherein the first determining module includes:
a first selection module configured to select one coordinate point from the plurality of coordinate points as a start coordinate point;
a second calculation module configured to calculate a difference value set of connecting lines between the start coordinate point and each remaining coordinate point;
a second selecting module configured to select the connecting line with the largest difference value from the difference value set as a threading path;
and an iterative calculation module configured to take the coordinate point at the end of the threading path as a new start coordinate point and continue calculating difference value sets of connecting lines between the new start coordinate point and each remaining coordinate point, until the largest difference value in the difference value sets of all connecting lines is smaller than a set gain threshold or the total number of connecting lines is greater than or equal to a preset connecting line threshold.
8. The apparatus for generating a knitting image according to claim 7, characterized in that the apparatus further comprises:
an enhancement module configured to perform contrast enhancement on the input image before the preprocessing module preprocesses the image;
a detection module configured to detect a key part of an object in the enhanced image;
an adjusting module configured to adjust the key part;
wherein the preprocessing module is specifically configured to preprocess the image adjusted by the adjusting module to obtain a plurality of coordinate points on the periphery of the outer edge of the whiteboard tangent to the preprocessed image edge.
9. The apparatus for generating a knitting image according to claim 7 or 8, characterized in that the preprocessing module includes:
a cropping module configured to crop the image into an image of a predetermined shape;
a selection module configured to select a whiteboard of the same size as the image of the predetermined shape;
a second determination module configured to uniformly distribute a plurality of nails on a periphery of an outer edge of the whiteboard tangential to the image edge;
and the first calculation module is configured to calculate coordinate point information of each nail in the image to obtain coordinate points of all nails.
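As a non-authoritative illustration of the nail placement recited in claim 9 — uniformly distributing nails on the periphery of the whiteboard's outer edge and computing each nail's coordinates — assuming a circular board (the predetermined shape is not fixed by the claim), uniform distribution reduces to equal angular steps. The function name and the circular-shape assumption are mine, not the patent's:

```python
import math

def nail_coordinates(num_nails, radius, center=(0.0, 0.0)):
    """Uniformly distribute `num_nails` nails on a circle of `radius`
    (the whiteboard's outer edge, tangent to a circular image) and
    return each nail's (x, y) coordinate in image space."""
    cx, cy = center
    coords = []
    for k in range(num_nails):
        theta = 2.0 * math.pi * k / num_nails  # equal angular spacing
        coords.append((cx + radius * math.cos(theta),
                       cy + radius * math.sin(theta)))
    return coords
```

For a rectangular board, the same idea applies with equal arc-length steps along the perimeter instead of equal angles.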
10. The apparatus for generating a knitting image according to claim 9, characterized in that the second calculation module includes:
a first error calculation module configured to calculate a plurality of mean square errors between the start coordinate point and each of the remaining coordinate points before threading, and take the plurality of mean square errors as a first mean square error set;
a second error calculation module configured to calculate a plurality of mean square errors between the start coordinate point and each of the remaining coordinate points after threading, and take the plurality of mean square errors as a second mean square error set;
and a difference calculation module configured to calculate a set of differences between each mean square error in the first mean square error set before threading and the corresponding mean square error in the second mean square error set after threading.
11. The apparatus for generating a knitting image according to claim 10, characterized in that the second selecting module includes:
a maximum difference determination module configured to compare the differences in the difference set and determine the maximum difference;
and a path selection module configured to select the connecting line with the maximum difference as a threading path.
12. The apparatus for generating a knitting image according to claim 7 or 8, characterized in that,
the drawing module is specifically configured to draw the image according to the threading path by using a supersampling antialiasing algorithm to generate the knitting image.
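A minimal sketch of supersampling antialiasing as recited in claims 6 and 12: rasterize each thread line at k× resolution, then box-average back down so that edge pixels take fractional coverage values instead of hard jagged steps. The nearest-pixel rasterizer and box filter here are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def draw_line_ssaa(shape, p0, p1, factor=4):
    """Rasterize a line into an image of `shape` (h, w) using supersampling
    antialiasing: draw at `factor`x resolution, then box-average down."""
    h, w = shape
    hi = np.zeros((h * factor, w * factor))
    q0 = (p0[0] * factor, p0[1] * factor)      # endpoints in high-res space
    q1 = (p1[0] * factor, p1[1] * factor)
    n = int(max(abs(q1[0] - q0[0]), abs(q1[1] - q0[1]))) + 1
    rows = np.linspace(q0[0], q1[0], n).round().astype(int)
    cols = np.linspace(q0[1], q1[1], n).round().astype(int)
    rows = np.clip(rows, 0, h * factor - 1)
    cols = np.clip(cols, 0, w * factor - 1)
    hi[rows, cols] = 1.0
    # box filter: average each factor x factor block back to the target size,
    # so partially covered pixels get values strictly between 0 and 1
    return hi.reshape(h, factor, w, factor).mean(axis=(1, 3))
```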
13. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the method of generating a knitting image according to any one of claims 1 to 6.
14. A non-transitory computer-readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the method of generating a knitting image according to any one of claims 1 to 6.
CN201910100730.1A 2019-01-31 2019-01-31 Knitting image generation method and device, electronic equipment and storage medium Active CN109840928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910100730.1A CN109840928B (en) 2019-01-31 2019-01-31 Knitting image generation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109840928A CN109840928A (en) 2019-06-04
CN109840928B true CN109840928B (en) 2023-10-17

Family

ID=66884413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910100730.1A Active CN109840928B (en) 2019-01-31 2019-01-31 Knitting image generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109840928B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0999187A (en) * 1995-10-06 1997-04-15 Tsudakoma Corp Method for detecting knitted loop and device therefor
KR20000049960A (en) * 1999-06-02 2000-08-05 홍건표 System and method for purchasing clothes by using picture compound and recognition on network
CN101419706A (en) * 2008-12-11 2009-04-29 天津工业大学 Jersey wear flokkit and balling up grading method based on image analysis
CN106709964A (en) * 2016-12-06 2017-05-24 河南工业大学 Gradient correction and multi-direction texture extraction-based sketch generation method and device
CN108734706A (en) * 2018-05-21 2018-11-02 东南大学 A kind of rotor winding image detecting method of integration region distribution character and edge scale angle information

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ITFI20060118A1 (en) * 2006-05-19 2007-11-20 Golden Lady Co Spa METHOD AND DEVICE TO DISCRIMINATE THE ONE COMPARED TO THE OTHER TWO ENDS OF A MANUFACTURE

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yarn painting (纱线画); Huanqiu.com comprehensive technology report; 《https://baijiahao.baidu.com/s?id=1611541070753283635&wfr=spider&for=pc》; 2018-09-14; full text *

Similar Documents

Publication Publication Date Title
CN110675310B (en) Video processing method and device, electronic equipment and storage medium
CN110929651B (en) Image processing method, image processing device, electronic equipment and storage medium
EP3125158B1 (en) Method and device for displaying images
CN108898546B (en) Face image processing method, device and equipment and readable storage medium
CN107958439B (en) Image processing method and device
WO2016011747A1 (en) Skin color adjustment method and device
CN110599410B (en) Image processing method, device, terminal and storage medium
US11030733B2 (en) Method, electronic device and storage medium for processing image
CN111553864B (en) Image restoration method and device, electronic equipment and storage medium
CN107798654B (en) Image buffing method and device and storage medium
CN109472738B (en) Image illumination correction method and device, electronic equipment and storage medium
US20200312022A1 (en) Method and device for processing image, and storage medium
CN107341777B (en) Picture processing method and device
CN110909654A (en) Training image generation method and device, electronic equipment and storage medium
CN108734754B (en) Image processing method and device
WO2022077970A1 (en) Method and apparatus for adding special effects
US11403789B2 (en) Method and electronic device for processing images
CN110211211B (en) Image processing method, device, electronic equipment and storage medium
CN111066026A (en) Techniques for providing virtual light adjustments to image data
CN114007099A (en) Video processing method and device for video processing
CN109840928B (en) Knitting image generation method and device, electronic equipment and storage medium
CN109544503B (en) Image processing method, image processing device, electronic equipment and storage medium
CN111373409B (en) Method and terminal for obtaining color value change
CN113610723B (en) Image processing method and related device
CN113160099B (en) Face fusion method, device, electronic equipment, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant