WO2019192149A1 - A machine-vision-based drawing method and system - Google Patents
A machine-vision-based drawing method and system
- Publication number: WO2019192149A1 (PCT/CN2018/106790)
- Authority: WIPO (PCT)
- Prior art keywords: image, stroke, robot, motion, state variable
- Prior art date
Classifications
- B25J11/00—Manipulators not otherwise provided for
- B25J11/003—Manipulators for entertainment
- B25J9/1612—Programme controls characterised by the hand, wrist, grip control
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
- G06V10/267—Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
- G06V10/30—Noise filtering
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
- G06V20/10—Terrestrial scenes
- G06V30/347—Sampling; Contour coding; Stroke extraction
- G05B2219/40083—Pick up pen and robot hand writing
Definitions
- the invention relates to the field of robots, in particular to a drawing method and system based on machine vision.
- robots are becoming more intelligent and can adapt to more kinds of work. They are therefore not only widely used in industry but are also spreading into the consumer field, for example in areas such as family companionship, practical teaching, and science and technology exhibitions.
- the prior art achieves writing or drawing of preset content based on pre-processing techniques.
- a TTF font is used to extract the contour points of the Chinese characters to be written, the contours are converted into spline curves, and the curves are then processed in the background and converted into end trajectories for the robot arm.
- the robot arm is controlled to move along the preset trajectory.
- the writing function of such a robot arm has the following limitations: 1. only Chinese characters can be written, and sketches cannot be drawn; 2. standard fonts must be preset in advance, so only standard fonts can be processed, and non-standard, personalized fonts cannot be reproduced.
- the object of the present invention is to provide a drawing method and system based on machine vision, which collect the content drawn by the user, extract the stroke data and then draw it, so that the robot can analyze and imitate in real time, thereby realizing personalized handwriting imitation with stronger interactivity.
- a drawing method based on machine vision includes: step S100, collecting an original still image of the content to be drawn; step S200, processing the original still image to extract stroke data of the original still image; step S300, obtaining a corresponding robot state variable list according to the stroke data of the original still image; step S400, performing motion path planning according to the robot state variable list to generate a motion message sequence; and step S500, performing a drawing action according to the motion message sequence.
- image processing technology is used to extract the stroke data, from which a state variable list is obtained; motion planning is then performed to generate a motion message sequence, and the message sequence is executed to complete the drawing action. Since the strokes are extracted from the drawn content itself, no standard font library is required, and stick figures as well as characters can be drawn, thereby realizing personalized handwriting imitation with stronger interactivity.
- step S200 specifically includes: step S210, processing the original still image to obtain a corresponding grayscale image; step S220, performing threshold processing on the grayscale image to obtain a binarized image; step S230, extracting a skeleton image of the binarized image; step S240, obtaining corresponding contour information based on the skeleton image; and step S250, extracting stroke data of the original still image based on the contour information.
- step S210 includes: step S211, calibrating the original still image according to preset positioning blocks to obtain a calibrated still image; and step S212, performing grayscale processing on the calibrated still image to obtain a corresponding grayscale image.
- step S230 includes: step S231, traversing each pixel in the binarized image to obtain line segments, curved segments or contours of closed figures formed by single pixel points or series of connected single pixel points as the skeleton image, where a single pixel point is the intermediate pixel point of a connected region that is more than one pixel wide.
- in this way, image blurring can be avoided, the number of strokes can be reduced, and the drawing speed can be improved.
- step S250 includes: step S251, traversing the contour information and culling repeated contour lines between the contours; and step S252, saving the remaining contours in the form of strokes to obtain the stroke data of the original still image.
- step S252 further includes: step S2521, saving the remaining contours in the form of strokes and cropping each stroke according to a preset precision to obtain the stroke data of the original still image.
- step S300 includes: step S310, generating motion trajectory data of the corresponding robot according to the stroke data of the original still image; step S320, performing a kinematic solution for each track point corresponding to each track number in the motion trajectory data of the robot to obtain a robot state variable corresponding to each track point; and step S330, obtaining the robot state variable list corresponding to the stroke data from the robot state variables corresponding to all the track points.
- the robot state variable list is obtained according to the stroke data of the image, so that the robot can perform the drawing action.
- step S400 includes: step S410, calling a pre-configured robot motion planning library and performing motion path planning on each robot state variable in the robot state variable list to obtain a motion message corresponding to each robot state variable; and step S420, generating a robot motion message sequence according to the motion message corresponding to each robot state variable.
- the robot motion message sequence is obtained according to the robot state variable list and the motion planning library, so that the robot performs painting according to the motion message sequence to complete the imitation of the user's painting.
- the present invention also provides a machine-vision-based drawing system, comprising: an image acquisition module for collecting an original still image of the content to be drawn; a stroke extraction module, electrically connected to the image acquisition module, for processing the original still image to extract the stroke data of the original still image; a state variable generating module, electrically connected to the stroke extraction module, for obtaining a corresponding robot state variable list according to the stroke data of the original still image; a path planning module, electrically connected to the state variable generating module, for performing motion path planning according to the robot state variable list and generating a motion message sequence; and a drawing module, electrically connected to the path planning module, for performing a drawing action according to the motion message sequence.
- image processing technology is used to extract the stroke data, from which a state variable list is obtained; motion planning is then performed to generate a motion message sequence, and the message sequence is executed to complete the drawing action. Since the strokes are extracted from the drawn content itself, no standard font library is required, and stick figures as well as characters can be drawn, thereby realizing personalized handwriting imitation with stronger interactivity.
- the stroke extraction module includes: a grayscale unit, configured to process the original still image to obtain a corresponding grayscale image; a binarization unit, configured to perform threshold processing on the grayscale image to obtain a binarized image; a skeleton extracting unit, configured to extract a skeleton image of the binarized image; an edge detecting unit, configured to obtain corresponding contour information according to the skeleton image; and a stroke extracting unit, configured to extract the stroke data of the original still image according to the contour information.
- the stroke extraction module further includes a calibration unit, configured to calibrate the original still image according to preset positioning blocks to obtain a calibrated still image; the grayscale unit is further configured to perform grayscale processing on the calibrated still image to obtain a corresponding grayscale image.
- the skeleton extracting unit is configured to extract the skeleton image of the binarized image, specifically: the skeleton extracting unit traverses each pixel point in the binarized image to acquire line segments, curved segments or contours of closed figures formed by single pixel points or series of connected single pixel points as the skeleton image, where a single pixel point is the intermediate pixel point of a connected region that is more than one pixel wide.
- eliminating redundant pixel points can avoid image blurring, reduce the number of strokes, and improve the drawing speed.
- the stroke extracting unit is configured to extract the stroke data of the original still image according to the contour information, specifically: traversing the contour information and eliminating repeated contour lines between the contours; then saving the remaining contours in the form of strokes to obtain the stroke data of the original still image.
- the stroke extracting unit is further configured to save the remaining contours in the form of strokes and to crop each stroke according to a preset precision to obtain the stroke data of the original still image.
- the state variable generating module includes: a motion trajectory data generating unit, configured to generate motion trajectory data of the corresponding robot according to the stroke data of the original still image; a kinematics solving unit, configured to perform a kinematic solution for each track point corresponding to each track number in the motion trajectory data to obtain a robot state variable corresponding to each track point; and a state variable list generating unit, configured to obtain the robot state variable list corresponding to the stroke data from the robot state variables corresponding to all track points.
- the robot state variable list is obtained according to the stroke data of the image, so that the robot can perform the drawing action.
- the path planning module includes: a path planning unit, configured to invoke a pre-configured robot motion planning library and perform motion path planning on each robot state variable in the robot state variable list to obtain a motion message corresponding to each robot state variable; and a motion message generating unit, configured to generate a robot motion message sequence according to the motion message corresponding to each robot state variable.
- the robot motion message sequence is obtained according to the robot state variable list and the motion planning library, so that the robot performs painting according to the motion message sequence to complete the imitation of the user's painting.
- the machine-vision-based drawing method and system provided by the invention bring the following beneficial effects: by collecting the content drawn by the user and extracting the stroke data before drawing, the robot can analyze and imitate in real time, thereby realizing personalized handwriting imitation with stronger interactivity.
- FIG. 1 is a flow chart of one embodiment of a machine vision based drawing method of the present invention.
- FIG. 2 is a flow chart of another embodiment of a machine vision based drawing method of the present invention.
- FIG. 3 is a flow chart of another embodiment of a machine vision based drawing method of the present invention.
- FIG. 4 is a flow chart of another embodiment of a machine vision based drawing method of the present invention.
- Figure 5 is a schematic structural view of an embodiment of a machine vision based drawing system of the present invention.
- FIG. 6 is a schematic structural view of another embodiment of a machine vision based drawing system of the present invention.
- Figure 7 is a schematic structural view of another embodiment of a machine vision based drawing system of the present invention.
- Figure 8 is a block diagram showing another embodiment of a machine vision based drawing system of the present invention.
- FIG. 9 is a schematic structural view of a content to be drawn on a drawing board in the corresponding embodiment of FIG. 3 and FIG. 7;
- FIG. 10 is a schematic structural diagram of one pixel point and eight reference pixel points in the corresponding embodiment of FIG. 3 and FIG. 7;
- FIG. 11 is a schematic structural view of a pixel point and its reference pixel group in the corresponding embodiment of FIGS. 3 and 7.
- 100. Image acquisition module, 200. Stroke extraction module, 300. State variable generation module, 400. Path planning module, 500. Drawing module, 600. Image calibration module, 210. Grayscale unit, 220. Binarization unit, 230. Skeleton extraction unit, 240. Edge detection unit, 250. Stroke extraction unit, 260. Calibration unit, 310. Motion trajectory data generation unit, 320. Kinematics solution unit, 330. State variable list generation unit, 410. Path planning unit, 420. Message sequence generating unit, 1. Positioning angle, 2. Stick figure, 3. Chinese character, 4. Drawing board.
- a machine vision based drawing method includes:
- Step S100 collects an original still image of the content to be drawn.
- the content to be drawn refers to the content drawn by the user on the drawing board, including Chinese characters and/or stick figures.
- the content to be drawn is captured by a camera mounted on the robot to obtain an original still image of the content to be drawn.
- Step S200 processes the original still image to extract stroke data of the original still image.
- the original still image is processed, for example by graying and binarizing the image, then extracting the skeleton from the binarized image, detecting all contours from the skeleton, and saving them in the form of strokes, thereby obtaining the stroke data of the original still image.
- the stroke data may contain only one stroke, for example when the user draws a circle, which is a single stroke; or it may contain multiple strokes, such as the Chinese character "word", written with five strokes.
- Step S300 obtains a corresponding list of robot state variables according to the stroke data of the original still image.
- the motion trajectory data of the robot is generated, then the kinematic solution is performed, and finally the robot state variable list is generated.
- Step S400 performs motion path planning according to the robot state variable list to generate a motion message sequence.
- motion path planning is performed to generate a motion message sequence that can be executed by the robot.
- Step S500 performs a drawing action according to the motion message sequence.
- the drawing action is performed according to the motion message sequence, thereby completely mimicking the user's handwriting and style to complete the drawn content.
- the machine-vision-based drawing method of the present invention, after acquiring the original still image of the content to be drawn, extracts stroke data using image processing technology, obtains a state variable list from it, performs motion planning to generate a motion message sequence, and executes the message sequence to complete the drawing action. Since the strokes are extracted from the drawn content itself, no preset standard font is required, and stick figures with strokes can also be drawn. This realizes personalized imitation of handwriting, improves the drawing speed, and provides stronger interactivity and real-time performance.
- step S200 is replaced by the following steps:
- Step S210 processes the original still image to obtain a corresponding grayscale image.
- the original still image is processed to convert it from a color image into a grayscale image.
- the grayscale conversion is implemented with a function from the image processing library OpenCV.
- Step S220 performs threshold processing on the grayscale image to obtain a binarized image.
- the grayscale image is binarized. For example, with the threshold set to 100, each pixel in the grayscale image is traversed: when the pixel value is lower than the threshold, it is set to the background value, such as 0; when the pixel value is higher than the threshold, it is set to the foreground value, such as 255.
- the noise in the image can be removed, and the pixel values of the content drawn by the user are unified, which is convenient for improving the accuracy of skeleton extraction and edge detection.
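The graying and thresholding described above can be sketched in a few lines. In the embodiment OpenCV functions (such as cv2.cvtColor and cv2.threshold) would do this work; the numpy version below (function names are illustrative) only mirrors the per-pixel logic with the example threshold of 100.

```python
import numpy as np

def to_grayscale(rgb):
    # Standard luminance weights, as used by common RGB-to-gray conversions
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]).astype(np.uint8)

def binarize(gray, threshold=100, background=0, foreground=255):
    # Pixels below the threshold become the background value, the rest the foreground value
    return np.where(gray < threshold, background, foreground).astype(np.uint8)

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = 200                    # a bright 2x2 patch on a dark background
binary = binarize(to_grayscale(img))
print(int(binary.sum()))               # 4 foreground pixels * 255 = 1020
```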
- Step S230 extracts a skeleton image of the binarized image.
- skeleton extraction deletes, by means of a certain algorithm, the useless edge pixels of the content drawn by the user in the image, retaining only the skeleton portion of the drawn content.
- the skeleton image is a collection of single pixel connected images.
- Step S240 obtains corresponding contour information according to the skeleton image.
- edge detection is implemented using the image processing library OpenCV, and all contours are detected from the extracted skeleton image.
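OpenCV's contour detection is used in the embodiment; as a rough illustration of what it yields, the sketch below merely collects the foreground pixels that touch the background (unlike cv2.findContours, it does not order the points into closed curves, so it is an assumption-laden stand-in rather than the real routine).

```python
import numpy as np

def contour_points(binary, foreground=255):
    """Collect foreground pixels that touch the background or the image border
    (a simplified stand-in for a full contour-following algorithm)."""
    h, w = binary.shape
    pts = []
    for y in range(h):
        for x in range(w):
            if binary[y, x] != foreground:
                continue
            neigh = binary[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            if (neigh != foreground).any() or y in (0, h - 1) or x in (0, w - 1):
                pts.append((x, y))
    return pts

binary = np.zeros((5, 5), dtype=np.uint8)
binary[1:4, 1:4] = 255                  # a filled 3x3 square
pts = contour_points(binary)
print(len(pts))                         # only the 8 border pixels are contour points
```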
- Step S250 extracts stroke data of the original still image according to the contour information.
- a stroke here refers to a single-pixel-wide, continuous line, which can be a straight line segment, a curved segment, the boundary line of a closed region, or an isolated point.
- the combination of all strokes, placed according to their positions, constitutes the overall outline of the content drawn by the user.
- in this way, image blurring can be avoided, the number of strokes reduced, and the drawing speed improved.
- step S210 is replaced by steps S211-S212, step S230 is replaced by step S231, and step S250 is replaced by steps S251-S252.
- the step S210 includes:
- Step S211 calibrates the original still image according to a preset positioning block to obtain a calibrated still image.
- the present embodiment allows the camera to be tilted at an angle to the drawing board, without having to be completely perpendicular to the plane of the board.
- the accuracy of the subsequent extracted content is not affected.
- positioning blocks are preset on the drawing board; for example, two rectangular blocks form one positioning angle.
- four positioning angles 1 are set on the drawing board 4, with their detailed sizes and spacing specified.
- from the captured image, the deformation of the four positioning angles 1 can be obtained, and the captured image is calibrated according to this deformation, thereby obtaining a more accurate image.
- the original still image does not need to be calibrated.
- Step S212 performs grayscale processing on the calibrated still image to obtain a corresponding grayscale image.
- the calibrated still image is converted from a color image into a grayscale image.
- the step S230 includes:
- Step S231 traverses each pixel in the binarized image to obtain line segments, curved segments or contours of closed figures formed by single pixel points or series of connected single pixel points as the skeleton image, where a single pixel point is the intermediate pixel point of a connected region that is more than one pixel wide.
- the skeleton image is extracted from the binarized image.
- the skeleton extraction deletes, by means of a certain algorithm, the useless edge pixels of the content drawn by the user in the image, retaining only the skeleton portion of the drawn content.
- one implementation is as follows: initialize the iteration count and take the binarized image as the image to be processed; traverse all foreground points in the image to be processed, mark those that meet a preset first culling condition, and after the traversal remove the marked foreground points to obtain a first processed image; traverse all foreground points in the first processed image, mark those that meet a preset second culling condition, and after the traversal remove the marked foreground points to obtain a second processed image, which completes one iteration; update the iteration count; while the iteration count is less than the preset maximum number of iterations, take the second processed image as the image to be processed and start a new iteration with the same culling actions. Through multiple rounds of iteration, the redundant pixels are removed.
- as shown in FIG. 10, there are eight reference pixel points around each pixel, located above (P2), below (P6), to the left (P8), to the right (P4), upper left (P9), lower left (P7), upper right (P3) and lower right (P5) of the pixel; if the pixel is at the image boundary, some of the eight reference pixels around it fall outside the figure, and the values of such absent reference pixels are treated as 0.
- if none of the eight reference pixels around a pixel is a foreground point, the pixel is an isolated point; if only one of the eight reference pixels around it is a foreground point, the pixel is an end point; if only one or two of the eight reference pixels around it are background points, the pixel is an interior point.
- the first condition: the pixel is not an isolated point, not an end point, and not an interior point;
- the second condition: among the eight reference pixel points of the pixel, exactly one reference pixel group exists; a reference pixel group is a pair of adjacent reference pixel points whose values, taken in clockwise order, are the background value followed by the foreground value. As shown in FIG. 11, with the central pixel as the core, there are two reference pixel groups (each with values 0, 1) in the clockwise direction among its eight reference pixels, so the central pixel does not satisfy the second condition.
- the third condition: as shown in FIG. 10, if at least one of the P2, P4 and P6 pixels is a background point, and at least one of the P4, P6 and P8 pixels is a background point, the P1 pixel satisfies the third condition;
- the fourth condition: as shown in FIG. 10, if at least one of the P2, P4 and P8 pixels is a background point, and at least one of the P2, P6 and P8 pixels is a background point, the P1 pixel satisfies the fourth condition;
- a foreground point that simultaneously satisfies the first, second and third conditions meets the preset first culling condition; a foreground point that simultaneously satisfies the first, second and fourth conditions meets the preset second culling condition.
- the second processed image is taken as the image to be processed, all foreground points in it are traversed again, all pixels meeting the preset first culling condition are removed, and then all pixels meeting the preset second culling condition are removed. This loop continues until the updated iteration count equals the preset maximum number of iterations. The foreground points remaining in the processed image then constitute the skeleton image of the binarized image.
- the unnecessary pixels are deleted over multiple iterations and the skeleton pixels retained, realizing skeleton extraction of the image.
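The two-pass culling described above is essentially the Zhang-Suen thinning scheme. The sketch below follows it, using the classic bound of two to six foreground neighbours for the "not an isolated point, not an end point, not an interior point" test; that exact bound, and the fixed iteration cap, are assumptions where the text is ambiguous.

```python
import numpy as np

def thin(img, max_iter=20):
    """Two-pass iterative thinning of a 0/1 image, following the culling
    conditions described in the embodiment (Zhang-Suen style)."""
    pad = np.pad((np.asarray(img) > 0).astype(np.uint8), 1)
    for _ in range(max_iter):
        changed = False
        for phase in (0, 1):
            to_clear = []
            ys, xs = np.nonzero(pad)
            for y, x in zip(ys, xs):
                # Reference pixels P2..P9, clockwise from the pixel above
                p = [pad[y-1, x], pad[y-1, x+1], pad[y, x+1], pad[y+1, x+1],
                     pad[y+1, x], pad[y+1, x-1], pad[y, x-1], pad[y-1, x-1]]
                b = sum(p)  # first condition: not isolated/end/interior point
                a = sum(p[i] == 0 and p[(i+1) % 8] == 1 for i in range(8))  # second condition
                if not (2 <= b <= 6 and a == 1):
                    continue
                p2, p4, p6, p8 = p[0], p[2], p[4], p[6]
                if phase == 0:   # third condition (first culling pass)
                    ok = p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0
                else:            # fourth condition (second culling pass)
                    ok = p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0
                if ok:
                    to_clear.append((y, x))
            for y, x in to_clear:
                pad[y, x] = 0
            changed = changed or bool(to_clear)
        if not changed:
            break
    return pad[1:-1, 1:-1]

bar = np.zeros((7, 5), dtype=np.uint8)
bar[1:6, 1:4] = 1                 # a 3-pixel-wide vertical bar
skeleton = thin(bar)
print(int(skeleton.sum()))        # fewer pixels than the 15 in the bar
```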
- the step S250 includes:
- Step S251 traverses the contour information, and eliminates the repeated contour lines between the contours in the contour information.
- Step S252 saves the remaining contours in the form of strokes to obtain stroke data of the original still image.
- all contours are detected from the extracted skeleton image. All contours are traversed, and repeated contours are eliminated according to the degree of coincidence of the contour point positions. Taking the stick figure 2 in FIG. 9, the duckling, as an example: the duckling has two contours at the foot, and the two contours overlap, so there are 2 repeated outlines and 1 can be deleted.
- the remaining contours are saved in stroke form (comprising a stroke number and the corresponding stroke points); the stroke number corresponds to the contour and the stroke points correspond to the contour points, thereby completing the extraction of the stroke data.
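The coincidence-based elimination of repeated contours might be sketched as follows; the overlap metric and the 0.9 threshold are assumptions, since the embodiment does not fix them.

```python
def overlap_ratio(c1, c2):
    """Fraction of c1's points that also appear in c2 (a simple coincidence
    measure; the embodiment's exact metric is not specified)."""
    s2 = set(c2)
    return sum(p in s2 for p in c1) / len(c1)

def dedupe_contours(contours, threshold=0.9):
    # Keep a contour only if it does not largely coincide with one already kept
    kept = []
    for c in contours:
        if all(overlap_ratio(c, k) < threshold for k in kept):
            kept.append(c)
    return kept

foot = [(312, 534), (313, 533), (195, 554)]          # sample points from Table 2
contours = [foot, list(reversed(foot)), [(345, 250), (346, 249)]]
print(len(dedupe_contours(contours)))                 # duplicate foot contour removed
```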
- the Chinese character 3 "word" to be drawn on the drawing board 4 is taken as an example.
- after the stroke data is extracted, five strokes are obtained, numbered 1 to 5; each stroke contains several stroke points, and each stroke point corresponds to a plane coordinate value.
- the specific data are shown in Table 1 below:
- Table 1:
  Stroke 1, dot " ⁇ ": 173 stroke points, coordinates (x, y): (264,122)(265,122)...(319,150)
  Stroke 2, horizontal "one": 754 stroke points, coordinates (x, y): (420,209)(419,210)...(119,232)
  Stroke 3, horizontal "one": 308 stroke points, coordinates (x, y): (372,272)(371,273)...(226,282)
- stroke 1 represents the dot at the top of the character "word" and has 173 stroke points.
- each stroke point corresponds to a pair of plane coordinates (that is, the stroke contains 173 coordinate values); each stroke point is in fact a pixel, so the number of stroke points extracted per stroke is determined by the resolution of the image: for the same stroke, the more pixels the image has, the more stroke points are extracted.
- the stick figure 2 to be drawn on the drawing board 4 is taken as an example.
- after extraction, nine strokes are obtained, numbered 1 to 9; each stroke contains a number of stroke points, and each stroke point corresponds to a plane coordinate value. The specific data are shown in Table 2 below:
- Table 2:
  Stroke 1, foot (front): 393 stroke points, coordinates (x, y): (312,534)(313,533)...(195,554)
  Stroke 2, foot (rear): 455 stroke points: (422,520)(423,519)...(297,599)
  Stroke 3, torso: 1157 stroke points: (345,250)(346,249)...(641,343)
  Stroke 4, wings: 614 stroke points: (349,276)(348,277)...(506,353)
  Stroke 5, neck: 298 stroke points: (212,251)(213,250)...(131,319)
  Stroke 6, head: 625 stroke points: (182,30)(183,29)...(285,75)
  Stroke 7, mouth: 362 stroke points: (358,70)(359,69)...(267,166)
  Stroke 8, eye: 38 stroke points: (248,86)(249,86)...(251,101)
  Stroke 9, nostril (point on the mouth): 12 stroke points: (305,91)(305,92)...(304,96)
- stroke 1 represents the front foot of the stick figure "Little Duck"; it contains 393 stroke points, i.e. 393 pairs of plane coordinates, and these coordinate points together form the front-foot portion of the "Little Duck".
- a machine vision based drawing method includes:
- Step S100 collects an original still image of the content to be drawn.
- Step S211 calibrates the original still image according to a preset positioning block to obtain a calibrated still image.
- Step S212 performs grayscale processing on the calibrated still image to obtain a corresponding grayscale image.
- Step S220 performs threshold processing on the grayscale image to obtain a binarized image.
- Step S231 traverses each pixel in the binarized image to obtain line segments, curved segments or contours of closed figures formed by single pixel points or series of connected single pixel points as the skeleton image, where a single pixel point is the intermediate pixel point of a connected region that is more than one pixel wide.
- Step S240 obtains corresponding contour information according to the skeleton image.
- Step S251 traverses the contour information, and eliminates the repeated contour lines between the contours in the contour information.
- Step S2521 saves the remaining contours in the form of strokes and crops each stroke according to a preset precision to obtain the stroke data of the original still image.
- each stroke is cropped according to a preset precision. For example, with the precision set to 0.5 mm, within any 0.5 mm span only the data of the two endpoints are retained; the stroke points in each stroke are cropped at this precision, and the cropped data are taken as the final stroke data.
- the higher the precision setting (i.e. the smaller the precision value), the more stroke points are retained for each stroke, and the more faithfully the content drawn by the user is reproduced.
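A minimal sketch of the precision-based cropping: the exact rule is not specified in the embodiment, so the distance filter below (keep a point only when it is at least `precision` away from the last kept point, always retaining both endpoints) is an assumption.

```python
import math

def crop_stroke(points, precision):
    """Thin out stroke points so that retained points are at least `precision`
    apart; the first and last points of the stroke are always kept."""
    if len(points) <= 2:
        return list(points)
    kept = [points[0]]
    for p in points[1:-1]:
        if math.dist(kept[-1], p) >= precision:
            kept.append(p)
    kept.append(points[-1])
    return kept

stroke = [(float(x), 0.0) for x in range(11)]   # 11 points, 1 unit apart
print(len(crop_stroke(stroke, 2.5)))            # far fewer points survive
```

A smaller precision value keeps more points, matching the observation that a finer precision reproduces the drawing more faithfully.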
- Step S310 generates motion trajectory data of the corresponding robot according to the stroke data of the original still image.
- The stroke data of the original still image is reorganized to obtain the motion track data.
- Each stroke is saved according to a new data structure, and the data is stored as a linked list, yielding the motion trajectory data.
- the new data structure is:
- the motion_number corresponds to the track number (from the stroke number)
- the point_number corresponds to the number of track points contained under the track number (derived from the stroke points)
- the point is a two-dimensional array, and the coordinate values of each track point are saved.
- the value of n represents the number of track points.
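The data structure above might be sketched as follows; the field names `motion_number`, `point_number`, and `point` are taken from the text, while the dataclass representation itself is an illustrative choice:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MotionTrack:
    """One node of the motion-trajectory list described above."""
    motion_number: int   # track number, derived from the stroke number
    point_number: int    # number of track points, derived from the stroke points
    point: List[Tuple[float, float]] = field(default_factory=list)  # n x 2 coordinates

def strokes_to_tracks(strokes):
    """Reorganize stroke data into a list of MotionTrack nodes."""
    tracks = []
    for i, pts in enumerate(strokes, start=1):
        tracks.append(MotionTrack(motion_number=i,
                                  point_number=len(pts),
                                  point=list(pts)))
    return tracks

tracks = strokes_to_tracks([[(264, 122), (265, 122)], [(420, 209)]])
print(tracks[0].motion_number, tracks[0].point_number)
```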
- Step S320 respectively performs kinematic calculation on each track point corresponding to each track number in the motion track data of the robot, and obtains a robot state variable corresponding to each track point.
- each track point of each track number is traversed, and each track point is kinematically solved to obtain a robot state variable corresponding to the track point.
- The kinematic solution may use, without being limited to, geometric methods, algebraic methods, analytical methods, intelligent composite algorithms, and the like; different robots can choose a suitable algorithm. The robot state variable corresponding to each track point is saved.
- Each track point is kinematically solved to obtain a set of robot state variables, which here is a set of five angle values.
- Each angle value corresponds to one joint of the robot; after its five joints move according to the five angle values, the robot reaches the corresponding track point.
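As an illustration of the geometric method, an inverse-kinematics solve for a planar two-link arm is sketched below. The patent's robot has five joints; this two-joint case only shows how one track point is turned into joint angles, and the link lengths are made-up values:

```python
import math

def ik_two_link(x, y, l1=1.0, l2=1.0):
    """Geometric inverse kinematics for a planar 2-link arm (elbow-down
    branch): given a target track point (x, y), return the two joint
    angles. Illustrative only -- not the patent's 5-joint solver."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))  # clamp against rounding error
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

t1, t2 = ik_two_link(2.0, 0.0)  # arm fully stretched along x
print(round(t1, 6), round(t2, 6))
```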
- Step S330 obtains the list of the robot state variables corresponding to the stroke data according to the robot state variables corresponding to all the track points.
- Each of the data in the robot status list is a data structure, as shown below:
- state_number represents the state variable number of the robot, corresponding to motion_number; the number of state variable entries generally equals the number of strokes
- point_number represents the number of state points contained under the state variable number, and is generally equal to the point_number of the motion track list
- State represents the state variable value of the robot, which is an array of j rows and k columns.
- The k columns indicate that the robot has k joints; j represents the number of robot state variables and equals n, the number of track points in the corresponding motion track list.
- Step S410 calls a pre-configured robot motion planning library, and performs motion path planning on each robot state variable in the robot state variable list to obtain a motion message corresponding to each robot state variable.
- Step S420 generates a robot motion message sequence according to the motion message corresponding to each robot state variable.
- Step S500 performs a drawing action according to the motion message sequence.
- the robot motion planning refers to generating a motion path from one state variable to another state variable according to different state variables of the robot.
- This motion path needs to meet certain external constraints, such as obstacle avoidance, shortest path, and minimum energy consumption.
- the robot motion planning method proposed in this embodiment is completed based on ROS (Robot Operating System), and the specific steps are as follows:
- Step 1 Modeling the robotic ROS system.
- ROS system modeling can be carried out purely by programming or by importing an existing 3D model of the robot.
- The description languages used in ROS modeling are scripting languages:
- URDF (Unified Robot Description Format), a robot description format
- XACRO (XML Macros), an XML macro language
- Step 2 ROS system MoveIt module configuration.
- MoveIt is a module of the ROS system that integrates several open-source motion planning libraries; its framework enables motion planning and simulation for the robot. Further, the configuration process uses the Setup Assistant tool to load the robot model description file generated in the previous step, then configures the motion planning group, collision detection, initial state, motion planning library, and the like, finally generating a ROS configuration file package.
- Step 3 Reading the state variable list. Using the ROS system libraries, a MoveIt interface program is written, and each state variable value in the robot state variable list is read one by one.
- Step 4 Calling the motion planning library.
- a pre-configured robot motion planning library is called for each state variable value to perform motion planning of the state variables.
- the motion planning library is called to generate a set of motion messages.
- Step 5 Robot motion message sequence generation.
- the motion messages generated in the previous step are saved one by one to generate a robot motion message sequence.
- The robot motion message sequence specifies the amount of motion for each joint of the robot and can be sent directly to the robot drive module for execution.
- Step 6 The motion message sequence is sent.
- The robot motion message sequence generated in the previous step is packaged and sent to the robot driver module, which executes the motion messages upon receipt.
- Communication between the two can be carried out in a wired or a wireless manner.
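The flow of steps 3-5 (read each state variable, call the planner, accumulate messages) can be sketched as follows. `plan_motion` is a hypothetical stand-in for the MoveIt planner call, using linear interpolation so the sequence-building logic is runnable outside ROS:

```python
def plan_motion(current_state, target_state):
    """Stub for the pre-configured motion planning library call.
    In the patent this is MoveIt's planner; linear interpolation over a
    fixed number of steps stands in here for illustration."""
    steps = 5
    return [
        tuple(c + (t - c) * k / steps for c, t in zip(current_state, target_state))
        for k in range(1, steps + 1)
    ]

def generate_message_sequence(state_list, home=(0.0,) * 5):
    """Steps 3-5: read each state variable one by one, call the planning
    library, and save the resulting motion messages into one sequence."""
    sequence, current = [], home
    for target in state_list:
        sequence.extend(plan_motion(current, target))
        current = target
    return sequence

seq = generate_message_sequence([(0.5, 0.5, 0.5, 0.5, 0.5)])
print(len(seq), seq[-1])
```

The resulting sequence plays the role of the robot motion message sequence sent to the driver module in step 6.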
- a machine vision based drawing system includes:
- the image acquisition module 100 is configured to collect an original still image of the content to be drawn.
- the content to be drawn refers to the content drawn by the user on the drawing board, including Chinese characters and/or stick figures.
- the image acquisition module collects the content to be drawn through a camera mounted on the robot, and obtains an original still image of the content to be drawn.
- the stroke extraction module 200 is electrically connected to the image acquisition module 100 for processing the original still image and extracting stroke data of the original still image.
- The original still image is processed, for example, by graying and binarizing the image, then extracting the skeleton from the binarized image, detecting all contours from the skeleton, and saving them in the form of strokes, thereby obtaining the stroke data of the original still image.
- The stroke data may contain only one stroke, for example when the user draws a circle, which has a single stroke; there may also be multiple strokes, for example the Chinese character "言", which is written here with five strokes.
- the state variable generation module 300 is electrically connected to the stroke extraction module 200 for obtaining a corresponding robot state variable list according to the stroke data of the original still image.
- the motion trajectory data of the robot is generated, and then the kinematics calculation is performed and finally the robot state variable list is generated.
- the path planning module 400 is electrically connected to the state variable generating module 300, and is configured to perform motion path planning according to the robot state variable list to generate a motion message sequence.
- motion path planning is performed to generate a motion message sequence that can be executed by the robot.
- the drawing module 500 is electrically connected to the path planning module 400, and is configured to perform a drawing action according to the motion message sequence.
- the drawing action is performed according to the motion message sequence, thereby completely mimicking the user's handwriting and style to complete the drawn content.
- After acquiring the original static image of the content to be drawn, the machine-vision-based drawing method of the present invention extracts stroke data using image processing technology, obtains a state variable list, performs motion planning to generate a motion message sequence, and executes that sequence to complete the drawing action. Since the strokes are extracted directly from the drawn content, drawing requires neither a preset standard font nor predefined strokes for stick figures; personalized
- imitation of handwriting is thus realized, the drawing speed is improved, and interactivity and real-time performance are stronger.
- the stroke extraction module 200 includes:
- a graying unit 210 configured to process the original still image to obtain a corresponding grayscale image
- the original still image is processed, and the original still image is converted from a color map to a grayscale image.
- The conversion to grayscale is realized by functions of the image processing library OpenCV.
- a binarization unit 220 configured to perform threshold processing on the grayscale image to obtain a binarized image
- The grayscale image is binarized. For example, with the threshold set to 100, each pixel in the grayscale image is traversed: when the pixel value is below the threshold, it is set to the background value (e.g., 0); when above the threshold, to the foreground value (e.g., 255).
- This removes noise from the image and unifies the pixel values of the content drawn by the user, which improves the accuracy of skeleton extraction and edge detection.
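A minimal sketch of this thresholding step, written as a numpy equivalent of OpenCV's `cv2.threshold` with `THRESH_BINARY` (the threshold of 100 follows the example above):

```python
import numpy as np

def binarize(gray, threshold=100, background=0, foreground=255):
    """Threshold a grayscale image: pixels above the threshold become the
    foreground value, the rest the background value."""
    return np.where(gray > threshold, foreground, background).astype(np.uint8)

gray = np.array([[30, 120], [99, 200]], dtype=np.uint8)
print(binarize(gray).tolist())  # -> [[0, 255], [0, 255]]
```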
- the skeleton extracting unit 230 is configured to extract a skeleton image of the binarized image.
- Skeleton extraction applies an algorithm that deletes the redundant edge pixels of the user-drawn content in the image, retaining only the skeleton of the drawn content.
- the skeleton image is a collection of single pixel connected images.
- the edge detecting unit 240 is configured to obtain corresponding contour information according to the skeleton image.
- edge detection is implemented by using the image processing library OpenCV, and all contours are detected from the extracted skeleton image.
- the stroke extracting unit 250 is configured to extract stroke data of the original still image according to the contour information.
- A stroke here refers to a single-pixel-wide continuous line, which can be a straight segment, a curved segment, the boundary line of a closed region, or an isolated point.
- All strokes, combined according to their positions, constitute the overall outline of the content drawn by the user.
- the image blurring can be avoided, the number of strokes can be reduced, and the drawing speed can be improved.
- the stroke extraction module 200 of the corresponding embodiment of FIG. 6 is further refined:
- the stroke extraction module 200 further includes:
- the calibration unit 260 is configured to calibrate the original still image according to a preset positioning block to obtain a calibrated still image.
- The present embodiment allows the camera to be tilted at an angle to the drawing board rather than exactly perpendicular to its plane.
- The accuracy of the subsequently extracted content is not affected.
- Positioning blocks are preset on the drawing board; for example, two rectangular blocks form one positioning corner.
- Four positioning corners 1 are set on the drawing board 4, with their detailed sizes and spacing set in advance.
- From the deformation of the four positioning corners 1 in the captured image, the image can be calibrated, thereby obtaining a more accurate image.
- In other embodiments, the original still image need not be calibrated.
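Calibration from the deformed positioning corners can be sketched as a perspective (homography) transform. The corner coordinates below are made-up examples, and the 4-point solver mirrors what OpenCV's `cv2.getPerspectiveTransform` computes:

```python
import numpy as np

def homography(src, dst):
    """Solve the 3x3 perspective transform mapping four detected
    positioning-corner points `src` to their known board positions `dst`.
    Fixes h33 = 1 and solves the resulting 8x8 linear system."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Tilted-camera corner detections mapped back to a square board:
src = [(10, 12), (190, 20), (200, 210), (5, 195)]
dst = [(0, 0), (200, 0), (200, 200), (0, 200)]
H = homography(src, dst)
p = H @ np.array([10, 12, 1.0])
print([round(v) for v in (p[:2] / p[2])])  # first corner maps to (0, 0)
```

Warping every pixel through `H` (e.g., with `cv2.warpPerspective`) would yield the calibrated still image.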
- The graying unit 210 is further configured to perform grayscale processing on the calibrated still image to obtain the corresponding grayscale image.
- the calibrated still image is converted from a color map to a grayscale image.
- The skeleton extraction unit 230 is configured to extract a skeleton image of the binarized image, specifically: the skeleton extraction unit traverses each pixel point in the binarized image to acquire, as the skeleton image, single pixel points or the line segments, curved segments, or contour lines of closed figures formed by connecting a series of single pixels; a single pixel point is an intermediate pixel point of a non-single-pixel-connected image.
- The skeleton image is extracted from the binarized image.
- Skeleton extraction applies an algorithm that deletes the redundant edge pixels of the content drawn by the user in the image, retaining only the skeleton of the drawn content.
- One implementation is as follows: initialize the iteration count and take the binarized image as the image to be processed; traverse all foreground points in the image to be processed, mark those that meet a preset first culling condition, and after the traversal remove the marked foreground points to obtain a first processed image; traverse all foreground points in the first processed image, mark those that meet a preset second culling condition, and after the traversal remove the marked foreground points to obtain a second processed image, which completes one iteration; update the iteration count; while the iteration count is less than the preset maximum number of iterations, take the second processed image as the image to be processed and start a new iteration. Through multiple rounds of culling, only the skeleton pixels remain.
- As shown in FIG. 10, each pixel has eight reference pixels around it, located above (P2), below (P6), to the left (P8), to the right (P4), and at the upper-left (P9), lower-left (P7), upper-right (P3), and lower-right (P5) positions. If the pixel lies on the image boundary, some of these eight reference pixels are absent; the values of absent reference pixels are treated as 0.
- If all eight reference pixels around a pixel are background points, the pixel is an isolated point; if only one of the eight reference pixels is a foreground point, the pixel is an end point; if only one or two of the eight reference pixels are background points, the pixel is an inner point.
- The first condition: the pixel is not an isolated point, not an end point, and not an inner point;
- The second condition: among the eight reference pixels of the pixel, exactly one reference pixel group exists. A reference pixel group is a pair of adjacent reference pixels whose values, taken in the clockwise direction, are the background value followed by the foreground value. As shown in FIG. 11, with the central pixel as the core, two such reference pixel groups exist in the clockwise direction among its eight reference pixels, so the central pixel does not satisfy the second condition.
- The third condition: as shown in FIG. 10, if at least one of the pixels P2, P4, and P6 is a background point, and at least one of the pixels P4, P6, and P8 is a background point, then the pixel P1 satisfies the third condition;
- The fourth condition: as shown in FIG. 10, if at least one of the pixels P2, P4, and P8 is a background point, and at least one of the pixels P2, P6, and P8 is a background point, then the pixel P1 satisfies the fourth condition;
- A foreground point that simultaneously satisfies the first, second, and third conditions is a pixel meeting the preset first culling condition; a foreground point that simultaneously satisfies the first, second, and fourth conditions is a pixel meeting the preset second culling condition.
- The second processed image is then taken as the image to be processed, all foreground points in it are traversed again, all pixels meeting the preset first culling condition are removed, and then all pixels meeting the preset second culling condition are removed. This loops until the updated iteration count equals the preset maximum number of iterations; the foreground points remaining in the processed image then constitute the skeleton image of the binarized image.
- Unnecessary pixels are thus deleted over multiple iterations while the skeleton pixels are retained, realizing the skeleton extraction of the image.
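The two-subiteration culling described above corresponds to the classic Zhang-Suen thinning scheme; a direct sketch follows (an early-exit convergence check is added alongside the preset maximum iteration count, as a minor practical variation):

```python
import numpy as np

def zhang_suen_thin(img, max_iter=100):
    """Skeletonize a binary image (1 = foreground) with the two
    alternating culling conditions described in the text."""
    img = img.copy()
    for _ in range(max_iter):
        changed = False
        rows, cols = img.shape
        for step in (0, 1):
            marked = []
            for r in range(1, rows - 1):
                for c in range(1, cols - 1):
                    if img[r, c] != 1:
                        continue
                    # reference pixels P2..P9, clockwise from the top
                    p = [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
                         img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]
                    B = sum(p)  # number of foreground reference pixels
                    # A: background->foreground pairs in clockwise order
                    A = sum((p[k] == 0 and p[(k + 1) % 8] == 1) for k in range(8))
                    # condition 1 (not isolated/end/inner) and condition 2
                    if not (2 <= B <= 6) or A != 1:
                        continue
                    # condition 3 (first culling) / condition 4 (second culling)
                    if step == 0 and p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0:
                        marked.append((r, c))
                    elif step == 1 and p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0:
                        marked.append((r, c))
            for r, c in marked:
                img[r, c] = 0
                changed = True
        if not changed:
            break
    return img

# A 3-pixel-thick horizontal bar thins down to a single-pixel-wide line:
bar = np.zeros((7, 9), dtype=np.uint8)
bar[2:5, 1:8] = 1
print(int(zhang_suen_thin(bar).sum()))
```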
- a stroke extracting unit 250 configured to extract stroke data of the original still image according to the contour information, specifically: traversing the contour information, and eliminating a contour line repeated between contours in the contour information; The remaining contours are respectively saved in the form of strokes to obtain stroke data of the original still image.
- All contours are detected from the extracted skeleton image. All contours are traversed, and repeated contours are eliminated according to the coincidence of contour point positions. Taking the stick figure 2 of FIG. 9, the duckling, as an example: the duckling has two contours at the foot, and the two contours overlap, so the two outlines repeat each other and one can be deleted.
- The remaining contours are saved in stroke form (a stroke number plus the corresponding stroke points); the stroke number corresponds to a contour and the stroke points correspond to the contour points, thereby completing the extraction of the stroke data.
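The duplicate-contour elimination by positional coincidence might be sketched as follows; the 0.9 overlap threshold is an illustrative assumption, since the text only speaks of coincidence of contour point positions:

```python
def dedup_contours(contours, overlap=0.9):
    """Keep a contour only if fewer than `overlap` of its points coincide
    with points of contours already kept; a heavily overlapping contour is
    treated as the same line detected twice (like the duckling's foot)."""
    kept, seen = [], set()
    for contour in contours:
        pts = set(contour)
        if pts and len(pts & seen) / len(pts) < overlap:
            kept.append(contour)
            seen |= pts
    return kept

foot = [(0, 0), (1, 0), (2, 0)]
contours = [foot, list(reversed(foot)), [(5, 5), (6, 5)]]
print(len(dedup_contours(contours)))  # the repeated foot contour is dropped
```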
- The Chinese character 3 ("言") to be drawn on the drawing board 4 is taken as an example.
- After the stroke data is extracted, five strokes are obtained, numbered 1 to 5; each stroke contains several stroke points, and each stroke point corresponds to a plane coordinate value.
- The specific data is shown in Table 1 below:

| Stroke number | Corresponding part of "言" | Number of stroke points | Stroke point coordinates (x, y) |
|---|---|---|---|
| 1 | Dot "丶" | 173 | (264,122)(265,122)…(319,150) |
| 2 | Horizontal "一" | 754 | (420,209)(419,210)…(119,232) |
| 3 | Horizontal "一" | 308 | (372,272)(371,273)…(226,282) |
| 4 | Horizontal "一" | 324 | (376,334)(375,335)…(220,345) |
| 5 | Enclosure "口" | 575 | (348,397)(347,398)…(210,401) |
- Stroke 1 represents the dot "丶" at the top of the character "言" and contains 173 stroke points.
- Each stroke point corresponds to one set of plane coordinates (that is, the stroke contains 173 coordinate values). Each stroke point is in fact a pixel, so the number of stroke points extracted for a stroke is determined by the image resolution: for the same stroke, the more pixels the image has, the more stroke points are extracted.
- the stick figure 2 to be drawn on the drawing board 4 is taken as an example.
- Nine strokes are obtained, numbered 1 to 9; each stroke contains a number of stroke points, and each stroke point corresponds to a plane coordinate value. The specific data is shown in Table 2 below:
| Stroke number | Corresponding part of "Little Duck" | Number of stroke points | Stroke point coordinates (x, y) |
|---|---|---|---|
| 1 | Foot (front) | 393 | (312,534)(313,533)…(195,554) |
| 2 | Foot (rear) | 455 | (422,520)(423,519)…(297,599) |
| 3 | Torso | 1157 | (345,250)(346,249)…(641,343) |
| 4 | Wing | 614 | (349,276)(348,277)…(506,353) |
| 5 | Neck | 298 | (212,251)(213,250)…(131,319) |
| 6 | Head | 625 | (182,30)(183,29)…(285,75) |
| 7 | Mouth | 362 | (358,70)(359,69)…(267,166) |
| 8 | Eye | 38 | (248,86)(249,86)…(251,101) |
| 9 | Nostril (dot on the mouth) | 12 | (305,91)(305,92)…(304,96) |
- Stroke 1 represents the front foot of the stick figure "Little Duck". It contains 393 stroke points and, correspondingly, 393 sets of plane coordinates; together these coordinate points form the front foot of "Little Duck".
- a machine vision based drawing system includes:
- the image acquisition module 100 is configured to collect an original still image of the content to be drawn.
- The stroke extraction module 200 is configured to process the original still image and extract the stroke data of the original still image, including:
- the calibration unit 260 is configured to calibrate the original still image according to a preset positioning block to obtain a calibrated still image
- the gradation unit 210 is configured to perform gradation processing on the calibrated still image to obtain a corresponding grayscale image
- a binarization unit 220 configured to perform threshold processing on the grayscale image to obtain a binarized image
- a skeleton extraction unit 230 configured to extract a skeleton image of the binarized image, specifically: the skeleton extraction unit traverses each pixel point in the binarized image to acquire, as the skeleton image, single pixel points or the line segments, curved segments, or contour lines of closed figures formed by connecting a series of single pixels; a single pixel point is an intermediate pixel point of a non-single-pixel-connected image;
- An edge detecting unit 240 configured to obtain corresponding contour information according to the skeleton image
- a stroke extracting unit 250 configured to extract stroke data of the original still image according to the contour information, specifically: traversing the contour information, and eliminating a contour line repeated between contours in the contour information; The remaining contours are respectively saved in the form of strokes, and each stroke is cropped according to a preset precision to obtain stroke data of the original still image.
- Each stroke is cropped according to the preset precision. For example, with the precision set to 0.5 mm, only the two endpoint stroke points are retained within every 0.5 mm segment; the stroke points of each stroke are cropped at this precision, and the cropped result is used as the final stroke data.
- The higher the precision setting (i.e., the smaller the precision value), the more stroke points are retained per stroke, and the more faithfully the user's drawn content is reproduced.
- the state variable generating module 300 is configured to obtain a corresponding list of robot state variables according to the stroke data of the original still image, including:
- the motion trajectory data generating unit 310 is configured to generate motion trajectory data of the corresponding robot according to the stroke data of the original still image.
- The stroke data of the original still image is reorganized to obtain the motion track data.
- Each stroke is saved according to a new data structure, and the data is stored as a linked list, yielding the motion trajectory data.
- the new data structure is:
- the motion_number corresponds to the track number (from the stroke number)
- the point_number corresponds to the number of track points contained under the track number (derived from the stroke points)
- the point is a two-dimensional array, and the coordinate values of each track point are saved.
- the value of n represents the number of track points.
- the kinematics solving unit 320 is configured to separately perform kinematic calculation on each track point corresponding to each track number in the motion track data of the robot, and obtain a robot state variable corresponding to each track point.
- each track point of each track number is traversed, and each track point is kinematically solved to obtain a robot state variable corresponding to the track point.
- The kinematic solution may use, without being limited to, geometric methods, algebraic methods, analytical methods, intelligent composite algorithms, and the like; different robots can choose a suitable algorithm. The robot state variable corresponding to each track point is saved.
- Each track point is kinematically solved to obtain a set of robot state variables, which here is a set of five angle values.
- Each angle value corresponds to one joint of the robot; after its five joints move according to the five angle values, the robot reaches the corresponding track point.
- the state variable list generating unit 330 is configured to obtain the robot state variable list corresponding to the stroke data according to the robot state variable corresponding to all track points;
- the path planning module 400 is configured to perform motion path planning according to the list of the robot state variables, and generate a motion message sequence, including:
- the path planning unit 410 is configured to call a pre-configured robot motion planning library, perform motion path planning on each robot state variable in the robot state variable list, and obtain a motion message corresponding to each robot state variable;
- a message sequence generating unit 420 configured to generate a robot motion message sequence according to the motion message corresponding to each robot state variable
- the drawing module 500 is configured to perform a drawing action according to the motion message sequence.
- the robot motion planning refers to generating a motion path from one state variable to another state variable according to different state variables of the robot.
- This motion path needs to meet certain external constraints, such as obstacle avoidance, shortest path, and minimum energy consumption.
- the robot motion planning method proposed in this embodiment is completed based on ROS (Robot Operating System), and the specific steps are as follows:
- Step 1 Modeling the robotic ROS system.
- ROS system modeling can be carried out purely by programming or by importing an existing 3D model of the robot.
- The description languages used in ROS modeling are scripting languages:
- URDF (Unified Robot Description Format), a robot description format
- XACRO (XML Macros), an XML macro language
- Step 2 ROS system MoveIt module configuration.
- MoveIt is a module of the ROS system that integrates several open-source motion planning libraries; its framework enables motion planning and simulation for the robot. Further, the configuration process uses the Setup Assistant tool to load the robot model description file generated in the previous step, then configures the motion planning group, collision detection, initial state, motion planning library, and the like, finally generating a ROS configuration file package.
- Step 3 Reading the state variable list. Using the ROS system libraries, a MoveIt interface program is written, and each state variable value in the robot state variable list is read one by one.
- Step 4 Calling the motion planning library.
- a pre-configured robot motion planning library is called for each state variable value to perform motion planning of the state variables.
- the motion planning library is called to generate a set of motion messages.
- Step 5 Robot motion message sequence generation.
- the motion messages generated in the previous step are saved one by one to generate a robot motion message sequence.
- The robot motion message sequence specifies the amount of motion for each joint of the robot and can be sent directly to the robot drive module for execution.
- Step 6 The motion message sequence is sent.
- The robot motion message sequence generated in the previous step is packaged and sent to the robot driver module, which can execute the motion messages after receiving them.
- Communication between the two can be carried out in a wired or a wireless manner.
Abstract
Description
| Stroke number | Corresponding part of "言" | Number of stroke points | Stroke point coordinates (x, y) |
|---|---|---|---|
| 1 | Dot "丶" | 173 | (264,122)(265,122)…(319,150) |
| 2 | Horizontal "一" | 754 | (420,209)(419,210)…(119,232) |
| 3 | Horizontal "一" | 308 | (372,272)(371,273)…(226,282) |
| 4 | Horizontal "一" | 324 | (376,334)(375,335)…(220,345) |
| 5 | Enclosure "口" | 575 | (348,397)(347,398)…(210,401) |

| Stroke number | Corresponding part of "Little Duck" | Number of stroke points | Stroke point coordinates (x, y) |
|---|---|---|---|
| 1 | Foot (front) | 393 | (312,534)(313,533)…(195,554) |
| 2 | Foot (rear) | 455 | (422,520)(423,519)…(297,599) |
| 3 | Torso | 1157 | (345,250)(346,249)…(641,343) |
| 4 | Wing | 614 | (349,276)(348,277)…(506,353) |
| 5 | Neck | 298 | (212,251)(213,250)…(131,319) |
| 6 | Head | 625 | (182,30)(183,29)…(285,75) |
| 7 | Mouth | 362 | (358,70)(359,69)…(267,166) |
| 8 | Eye | 38 | (248,86)(249,86)…(251,101) |
| 9 | Nostril (dot on the mouth) | 12 | (305,91)(305,92)…(304,96) |
Claims (16)
- A machine vision based drawing method, characterized by comprising: step S100, collecting an original still image of the content to be drawn; step S200, processing the original still image and extracting stroke data of the original still image; step S300, obtaining a corresponding robot state variable list according to the stroke data of the original still image; step S400, performing motion path planning according to the robot state variable list to generate a motion message sequence; and step S500, performing a drawing action according to the motion message sequence.
- The machine vision based drawing method according to claim 1, wherein step S200 specifically comprises: step S210, processing the original still image to obtain a corresponding grayscale image; step S220, performing threshold processing on the grayscale image to obtain a binarized image; step S230, extracting a skeleton image of the binarized image; step S240, obtaining corresponding contour information according to the skeleton image; and step S250, extracting the stroke data of the original still image according to the contour information.
- The machine vision based drawing method according to claim 2, wherein step S210 comprises: step S211, calibrating the original still image according to a preset positioning block to obtain a calibrated still image; and step S212, performing grayscale processing on the calibrated still image to obtain the corresponding grayscale image.
- The machine vision based drawing method according to claim 2, wherein step S230 comprises: step S231, traversing each pixel point in the binarized image to acquire, as the skeleton image, single pixel points or the line segments, curved segments, or contour lines of closed figures formed by connecting a series of single pixels, the single pixel point being an intermediate pixel point of a non-single-pixel-connected image.
- The machine vision based drawing method according to claim 2, wherein step S250 comprises: step S251, traversing the contour information and eliminating contour lines repeated between contours in the contour information; and step S252, saving the remaining contours respectively in the form of strokes to obtain the stroke data of the original still image.
- The machine vision based drawing method according to claim 5, wherein step S252 further comprises: step S2521, saving the remaining contours respectively in the form of strokes and cropping each stroke according to a preset precision to obtain the stroke data of the original still image.
- The machine vision based drawing method according to claim 1, wherein step S300 comprises: step S310, generating motion trajectory data of the corresponding robot according to the stroke data of the original still image; step S320, respectively performing a kinematic solution on each track point corresponding to each track number in the motion trajectory data of the robot to obtain a robot state variable corresponding to each track point; and step S330, obtaining the robot state variable list corresponding to the stroke data according to the robot state variables corresponding to all track points.
- The machine vision based drawing method according to claim 1, wherein step S400 comprises: step S410, calling a pre-configured robot motion planning library and performing motion path planning for each robot state variable in the robot state variable list to obtain a motion message corresponding to each robot state variable; and step S420, generating a robot motion message sequence according to the motion message corresponding to each robot state variable.
- A machine vision based drawing system, characterized by comprising: an image acquisition module, configured to collect an original still image of the content to be drawn; a stroke extraction module, electrically connected to the image acquisition module and configured to process the original still image and extract stroke data of the original still image; a state variable generation module, electrically connected to the stroke extraction module and configured to obtain a corresponding robot state variable list according to the stroke data of the original still image; a path planning module, electrically connected to the state variable generation module and configured to perform motion path planning according to the robot state variable list to generate a motion message sequence; and a drawing module, electrically connected to the path planning module and configured to perform a drawing action according to the motion message sequence.
- The machine vision based drawing system according to claim 9, wherein the stroke extraction module comprises: a graying unit, configured to process the original still image to obtain a corresponding grayscale image; a binarization unit, configured to perform threshold processing on the grayscale image to obtain a binarized image; a skeleton extraction unit, configured to extract a skeleton image of the binarized image; an edge detection unit, configured to obtain corresponding contour information according to the skeleton image; and a stroke extraction unit, configured to extract the stroke data of the original still image according to the contour information.
- The machine vision based drawing system according to claim 10, wherein the stroke extraction module further comprises: a calibration unit, configured to calibrate the original still image according to a preset positioning block to obtain a calibrated still image; the graying unit being further configured to perform grayscale processing on the calibrated still image to obtain the corresponding grayscale image.
- The machine vision based drawing system according to claim 10, wherein the skeleton extraction unit extracting the skeleton image of the binarized image specifically comprises: the skeleton extraction unit traversing each pixel point in the binarized image to acquire, as the skeleton image, single pixel points or the line segments, curved segments, or contour lines of closed figures formed by connecting a series of single pixels, the single pixel point being an intermediate pixel point of a non-single-pixel-connected image.
- The machine vision based drawing system according to claim 10, wherein the stroke extraction unit extracting the stroke data of the original still image according to the contour information specifically comprises: traversing the contour information and eliminating contour lines repeated between contours in the contour information; and saving the remaining contours respectively in the form of strokes to obtain the stroke data of the original still image.
- The machine vision based drawing system according to claim 13, wherein the stroke extraction unit is further configured to save the remaining contours respectively in the form of strokes and crop each stroke according to a preset precision to obtain the stroke data of the original still image.
- The machine vision based drawing system according to claim 9, wherein the state variable generation module comprises: a motion trajectory data generation unit, configured to generate motion trajectory data of the corresponding robot according to the stroke data of the original still image; a kinematics solving unit, configured to respectively perform a kinematic solution on each track point corresponding to each track number in the motion trajectory data of the robot to obtain a robot state variable corresponding to each track point; and a state variable list generation unit, configured to obtain the robot state variable list corresponding to the stroke data according to the robot state variables corresponding to all track points.
- The machine vision based drawing system according to claim 9, wherein the path planning module comprises: a path planning unit, configured to call a pre-configured robot motion planning library and perform motion path planning for each robot state variable in the robot state variable list to obtain a motion message corresponding to each robot state variable; and a message sequence generation unit, configured to generate a robot motion message sequence according to the motion message corresponding to each robot state variable.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810297791.7A CN108460369B (zh) | 2018-04-04 | 2018-04-04 | Machine-vision-based drawing method and system |
CN201810297791.7 | 2018-04-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019192149A1 true WO2019192149A1 (zh) | 2019-10-10 |
Family
ID=63235051
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/106790 WO2019192149A1 (zh) | 2018-04-04 | 2018-09-20 | Machine-vision-based drawing method and system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108460369B (zh) |
WO (1) | WO2019192149A1 (zh) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108460369B (zh) * | 2018-04-04 | 2020-04-14 | 南京阿凡达机器人科技有限公司 | Machine-vision-based drawing method and system |
CN109940611B (zh) * | 2019-02-26 | 2022-01-21 | 深圳市越疆科技有限公司 | Trajectory reproduction method, system, and terminal device |
CN109993680A (zh) * | 2019-04-04 | 2019-07-09 | 中科云创(厦门)科技有限公司 | Handwriting imitation method and device, electronic apparatus, and computer-readable medium |
CN110033498B (zh) * | 2019-04-18 | 2021-03-30 | 吉林大学 | Pattern processing method for a one-stroke elliptical/rectangular spiral effect |
CN110524549A (zh) * | 2019-08-19 | 2019-12-03 | 广东智媒云图科技股份有限公司 | Painting method, device, and system based on a robotic arm and a rivet gun |
CN110587620A (zh) * | 2019-08-30 | 2019-12-20 | 重庆智能机器人研究院 | Industrial robot writing and drawing method and system, workpiece machining method, and computer equipment |
CN111125403B (zh) * | 2019-11-27 | 2022-07-05 | 浙江大学 | Artificial-intelligence-assisted design drawing method and system |
CN111047671B (zh) * | 2019-12-24 | 2023-05-16 | 成都来画科技有限公司 | Method for optimizing the drawing path of a hand-drawn picture, and storage medium |
CN111185902B (zh) * | 2019-12-30 | 2021-05-28 | 深圳市越疆科技有限公司 | Visual-recognition-based robot text writing method, device, and writing system |
CN111251309B (zh) * | 2020-01-08 | 2021-06-15 | 杭州未名信科科技有限公司 | Method and device for controlling a robot to draw an image, robot, and medium |
CN111185903B (zh) * | 2020-01-08 | 2022-05-13 | 杭州未名信科科技有限公司 | Method and device for controlling a robotic arm to draw a portrait, and robot system |
CN111195912B (zh) * | 2020-01-08 | 2021-06-15 | 杭州未名信科科技有限公司 | Method and device for drawing a portrait with a robotic arm, robot, and storage medium |
CN111168676B (zh) * | 2020-01-08 | 2021-06-15 | 杭州未名信科科技有限公司 | Robotic-arm hand-eye cooperative drawing method and device, drawing robot, and medium |
CN112959320A (zh) * | 2021-02-08 | 2021-06-15 | 广州富港万嘉智能科技有限公司 | Method and device for controlling a manipulator to write automatically, manipulator, and system |
CN112975958B (zh) * | 2021-02-08 | 2023-04-28 | 广州富港生活智能科技有限公司 | Method and device for generating path points for character stroke order, and manipulator control system |
CN114332985B (zh) * | 2021-12-06 | 2024-06-18 | 上海大学 | Intelligent portrait contour drawing method based on dual-robotic-arm cooperation |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1710608A (zh) * | 2005-07-07 | 2005-12-21 | 上海交通大学 | Image processing method for a robot drawing face caricatures |
CN105945947A (zh) * | 2016-05-20 | 2016-09-21 | 西华大学 | Gesture-controlled robot writing system and control method therefor |
CN106826822A (zh) * | 2017-01-25 | 2017-06-13 | 南京阿凡达机器人科技有限公司 | Visual positioning and robotic arm grasping implementation method based on the ROS system |
CN107127753A (zh) * | 2017-05-05 | 2017-09-05 | 燕山大学 | Bionic writing manipulator system for writing Chinese characters based on offline character recognition |
CN108460369A (zh) * | 2018-04-04 | 2018-08-28 | 南京阿凡达机器人科技有限公司 | Machine-vision-based drawing method and system |
2018
- 2018-04-04: CN application CN201810297791.7A (patent CN108460369B, status: Active)
- 2018-09-20: PCT application PCT/CN2018/106790 (publication WO2019192149A1, status: Application Filing)
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111152234A (zh) * | 2019-12-27 | 2020-05-15 | 深圳市越疆科技有限公司 | Calligraphy copying method and device for a robot, and robot |
CN111275049A (zh) * | 2020-01-19 | 2020-06-12 | 佛山市国方识别科技有限公司 | Method and device for obtaining skeleton feature descriptors of character images |
CN111275049B (zh) * | 2020-01-19 | 2023-07-21 | 佛山市国方识别科技有限公司 | Method and device for obtaining skeleton feature descriptors of character images |
CN112950535A (zh) * | 2021-01-22 | 2021-06-11 | 北京达佳互联信息技术有限公司 | Video processing method and device, electronic apparatus, and storage medium |
CN112950535B (zh) * | 2021-01-22 | 2024-03-22 | 北京达佳互联信息技术有限公司 | Video processing method and device, electronic apparatus, and storage medium |
CN113487697A (zh) * | 2021-07-20 | 2021-10-08 | 维沃移动通信(杭州)有限公司 | Sketch generation method and device, electronic apparatus, and storage medium |
CN114407047A (zh) * | 2022-03-01 | 2022-04-29 | 蓝宙(江苏)技术有限公司 | Drawing robot and control method therefor |
CN115756175A (zh) * | 2023-01-06 | 2023-03-07 | 山东维创精密电子有限公司 | Data processing system based on virtual-reality data |
CN116795222A (zh) * | 2023-06-20 | 2023-09-22 | 广东工业大学 | Digital writing brush based on OpenCV image recognition |
CN116795222B (zh) * | 2023-06-20 | 2024-03-29 | 广东工业大学 | Digital writing brush based on OpenCV image recognition |
Also Published As
Publication number | Publication date |
---|---|
CN108460369A (zh) | 2018-08-28 |
CN108460369B (zh) | 2020-04-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019192149A1 (zh) | Machine-vision-based drawing method and system | |
CN109664300B (zh) | Robot multi-style calligraphy copying method based on force-sense learning | |
CN109376582B (zh) | Interactive face cartooning method based on a generative adversarial network | |
JP3441690B2 (ja) | Method for recognizing a physical object using a template | |
CN108656107B (zh) | Image-processing-based robotic arm grasping system and method | |
US20200034971A1 (en) | Image Object Segmentation Based on Temporal Information | |
CN109746916B (zh) | Method and system for robotic calligraphy writing | |
CN105500370B (zh) | Robot offline teaching programming system and method based on somatosensory technology | |
Mueller et al. | Robotic calligraphy—learning how to write single strokes of Chinese and Japanese characters | |
CN113927597B (zh) | Deep-learning-based six-degree-of-freedom pose estimation system for robot connectors | |
CN104951788B (zh) | Method for extracting single-character strokes from calligraphy works | |
CN111723789A (zh) | Deep-learning-based method for locating text coordinates in images | |
Thalhammer et al. | Pyrapose: Feature pyramids for fast and accurate object pose estimation under domain shift | |
CN110084890A (zh) | Mixed-reality-based robotic arm text copying method and device | |
CN112381783A (zh) | Weld seam trajectory extraction method based on a red line laser | |
CN110232337B (zh) | Method and system for stroke extraction from Chinese character images based on a fully convolutional neural network | |
CN114782645A (zh) | Virtual digital human production method, related device, and readable storage medium | |
CN113793385A (zh) | Fish head and fish tail localization method and device | |
Gan et al. | Towards a robotic Chinese calligraphy writing framework | |
CN111860515A (zh) | Interactive intelligent 2D semantic segmentation system, method, storage medium, and device | |
TWI817680B (zh) | Image augmentation method and device | |
CN111078008A (zh) | Control method for an early-education robot | |
US20230071291A1 (en) | System and method for a precise semantic segmentation | |
EP4155036A1 (en) | A method for controlling a grasping robot through a learning phase and a grasping phase | |
CN109591012B (zh) | Reinforcement learning method, robot, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18913596 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18913596 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 23.03.2021) |
|