WO2019192149A1 - A machine vision-based drawing method and system - Google Patents

Info

Publication number: WO2019192149A1
Authority: WIPO (PCT)
Prior art keywords: image, stroke, robot, motion, state variable
Application number: PCT/CN2018/106790
Other languages: English (en), French (fr)
Inventor: 张光肖
Original Assignee: 南京阿凡达机器人科技有限公司
Application filed by 南京阿凡达机器人科技有限公司 filed Critical 南京阿凡达机器人科技有限公司
Publication of WO2019192149A1

Classifications

    • B25J11/003 Manipulators for entertainment
    • B25J9/1612 Programme controls characterised by the hand, wrist, grip control
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/30 Noise filtering
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V20/10 Terrestrial scenes
    • G06V30/347 Sampling; Contour coding; Stroke extraction
    • G05B2219/40083 Pick up pen and robot hand writing

Definitions

  • the invention relates to the field of robots, in particular to a drawing method and system based on machine vision.
  • as robots become more intelligent, they can adapt to more kinds of work. They are therefore not only widely used in industry but are also spreading into the consumer field, for example home companionship, teaching, and science and technology exhibitions.
  • the prior art is based on pre-processing techniques to achieve writing or drawing preset content.
  • the TTF font is used to extract the contour points of the Chinese characters to be written, and the contours are converted into spline curves, and then the curves are processed in the background and converted into the end trajectories of the robot arms.
  • the control robot arm is operated according to the preset trajectory.
  • the writing function of the robot arm has the following limitations: 1. only Chinese characters can be written, and sketches cannot be drawn; 2. standard fonts need to be preset in advance, so only standard fonts can be processed, and non-standard, personalized fonts cannot be reproduced.
  • the object of the present invention is to provide a drawing method and system based on machine vision, which collects the content drawn by the user, extracts the stroke data and draws it, so that the robot can analyze and imitate in real time, realizing personalized handwriting imitation with stronger interactivity.
  • a drawing method based on machine vision includes: step S100, collecting an original still image of the content to be drawn; step S200, processing the original still image to extract stroke data of the original still image; step S300, obtaining a corresponding robot state variable list according to the stroke data of the original still image; step S400, performing motion path planning according to the robot state variable list to generate a motion message sequence; and step S500, performing a drawing action according to the motion message sequence.
  • image processing technology is used to extract the stroke data, a state variable list is obtained from it, and motion planning then generates a motion message sequence whose execution completes the drawing action. Since the strokes are extracted from the drawn content itself, no standard font library is required, and stick figures with strokes can be drawn, realizing personalized handwriting imitation with stronger interactivity.
  • step S200 specifically includes: step S210, processing the original still image to obtain a corresponding grayscale image; step S220, performing threshold processing on the grayscale image to obtain a binarized image; step S230, extracting a skeleton image of the binarized image; step S240, obtaining corresponding contour information according to the skeleton image; and step S250, extracting stroke data of the original still image according to the contour information.
  • step S210 includes: step S211, calibrating the original still image according to a preset positioning block to obtain a calibrated still image; and step S212, performing grayscale processing on the calibrated still image to obtain a corresponding grayscale image.
  • step S230 includes: step S231, traversing each pixel in the binarized image to obtain the line segments, curved segments or contours of closed graphics formed by single pixel points or series of connected single pixels as the skeleton image, a single pixel point being an intermediate pixel point of a non-single-pixel connected image.
  • eliminating redundant pixel points in this way avoids image blurring, reduces the number of strokes, and improves the drawing speed.
  • step S250 includes: step S251 traversing the contour information, culling the repeated contour lines between the contours in the contour information; and step S252, respectively saving the remaining contours in the form of strokes to obtain the original Stroke data for still images.
  • step S252 further includes: in step S2521, the remaining contours are respectively saved in the form of strokes, and each stroke is trimmed according to a preset precision to obtain stroke data of the original still image.
  • step S300 includes: step S310, generating motion trajectory data of the corresponding robot according to the stroke data of the original still image; step S320, kinematically solving each track point corresponding to each track number in the motion trajectory data of the robot to obtain a robot state variable corresponding to each track point; and step S330, obtaining the robot state variable list corresponding to the stroke data according to the robot state variables corresponding to all the track points.
  • the robot state variable list is obtained according to the stroke data of the image, so that the robot can perform the drawing action.
  • step S400 includes: step S410, calling a pre-configured robot motion planning library and performing motion path planning on each robot state variable in the robot state variable list to obtain a motion message corresponding to each robot state variable; and step S420, generating a robot motion message sequence according to the motion message corresponding to each robot state variable.
  • the robot motion message sequence is obtained according to the robot state variable list and the motion planning library, so that the robot performs painting according to the motion message sequence to complete the imitation of the user's painting.
  • the present invention also provides a machine vision-based drawing system, comprising: an image acquisition module for collecting an original still image of the content to be drawn; a stroke extraction module, electrically connected to the image acquisition module, for processing the original still image and extracting the stroke data of the original still image; a state variable generating module, electrically connected to the stroke extraction module, for obtaining a corresponding robot state variable list according to the stroke data of the original still image; a path planning module, electrically connected to the state variable generating module, for performing motion path planning according to the robot state variable list and generating a motion message sequence; and a drawing module, electrically connected to the path planning module, for performing a drawing action according to the motion message sequence.
  • image processing technology is used to extract the stroke data, a state variable list is obtained from it, and motion planning then generates a motion message sequence whose execution completes the drawing action. Since the strokes are extracted from the drawn content itself, no standard font library is required, and stick figures with strokes can be drawn, realizing personalized handwriting imitation with stronger interactivity.
  • the stroke extraction module includes: a grayscale unit configured to process the original still image to obtain a corresponding grayscale image; and a binarization unit configured to perform threshold processing on the grayscale image to obtain a binarized image; a skeleton extracting unit, configured to extract a skeleton image of the binarized image; an edge detecting unit, configured to obtain corresponding contour information according to the skeleton image; and a stroke extracting unit, configured to The contour information is extracted, and the stroke data of the original still image is extracted.
  • the stroke extraction module further includes: a calibration unit, configured to calibrate the original still image according to a preset positioning block to obtain a calibrated still image; and the grayscale unit is further configured to perform grayscale processing on the calibrated still image to obtain a corresponding grayscale image.
  • the skeleton extracting unit is configured to extract a skeleton image of the binarized image, specifically: the skeleton extracting unit traverses each pixel point in the binarized image to acquire a single pixel point or A series of single-pixel connected line segments, curved segments, or closed graphics contours are used as skeleton images, and the single pixel points are intermediate pixel points of non-single pixel connected images.
  • eliminating redundant pixel points can avoid image blurring, reduce the number of strokes, and improve the drawing speed.
  • the stroke extracting unit is configured to extract the stroke data of the original still image according to the contour information, specifically: traversing the contour information, and eliminating the repeated contour lines between the contours in the contour information; The remaining contours are respectively saved in the form of strokes to obtain stroke data of the original still image.
  • the stroke extracting unit is further configured to save the remaining contours in the form of strokes, and perform cropping on each stroke according to a preset precision to obtain stroke data of the original still image.
  • the state variable generating module includes: a motion trajectory data generating unit, configured to generate motion trajectory data of the corresponding robot according to the stroke data of the original still image; and a kinematics solving unit, configured to respectively respectively Each of the track points corresponding to each track number in the motion track data is kinematically solved to obtain a robot state variable corresponding to each track point; the state variable list generating unit is configured to use the robot state corresponding to all track points The variable obtains the list of the robot state variables corresponding to the stroke data.
  • the robot state variable list is obtained according to the stroke data of the image, so that the robot can perform the drawing action.
  • the path planning module includes: a path planning unit, configured to call a pre-configured robot motion planning library and perform motion path planning on each robot state variable in the robot state variable list to obtain a motion message corresponding to each robot state variable; and a motion message generating unit, configured to generate a robot motion message sequence according to the motion message corresponding to each robot state variable.
  • the robot motion message sequence is obtained according to the robot state variable list and the motion planning library, so that the robot performs painting according to the motion message sequence to complete the imitation of the user's painting.
  • the machine vision-based drawing method and system provided by the invention bring the following beneficial effects: by collecting the content drawn by the user, extracting the stroke data and then drawing, the robot can analyze and imitate in real time, realizing personalized handwriting imitation with stronger interactivity.
  • FIG. 1 is a flow chart of one embodiment of a machine vision based drawing method of the present invention
  • FIG. 2 is a flow chart of another embodiment of a machine vision based drawing method of the present invention.
  • FIG. 3 is a flow chart of another embodiment of a machine vision based drawing method of the present invention.
  • FIG. 4 is a flow chart of another embodiment of a machine vision based drawing method of the present invention.
  • Figure 5 is a schematic structural view of an embodiment of a machine vision based drawing system of the present invention.
  • FIG. 6 is a schematic structural view of another embodiment of a machine vision based drawing system of the present invention.
  • Figure 7 is a schematic structural view of another embodiment of a machine vision based drawing system of the present invention.
  • Figure 8 is a block diagram showing another embodiment of a machine vision based drawing system of the present invention.
  • FIG. 9 is a schematic structural view of a content to be drawn on a drawing board in the corresponding embodiment of FIG. 3 and FIG. 7;
  • FIG. 10 is a schematic structural diagram of one pixel point and eight reference pixel points in the corresponding embodiment of FIG. 3 and FIG. 7;
  • FIG. 11 is a schematic structural view of a pixel point and its reference pixel group in the corresponding embodiment of FIGS. 3 and 7.
  • 100. Image acquisition module, 200. Stroke extraction module, 300. State variable generation module, 400. Path planning module, 500. Drawing module, 600. Image calibration module, 210. Grayscale unit, 220. Binarization unit, 230. Skeleton extraction unit, 240. Edge detection unit, 250. Stroke extraction unit, 260. Calibration unit, 310. Motion trajectory data generation unit, 320. Kinematics solution unit, 330. State variable list generation unit, 410. Path planning unit, 420. Message sequence generation unit; 1. Positioning corner, 2. Stick figure, 3. Chinese character, 4. Drawing board.
  • a machine vision based drawing method includes:
  • Step S100 collects an original still image of the content to be drawn.
  • the content to be drawn refers to the content drawn by the user on the drawing board, including Chinese characters and/or stick figures.
  • the image to be drawn is collected by a camera mounted on the robot to obtain an original still image of the content to be drawn.
  • Step S200 processes the original still image to extract stroke data of the original still image.
  • the original still image is processed, for example by graying and binarizing the image, then extracting the skeleton from the binarized image, detecting all the contours from the skeleton, and saving them in the form of strokes, thereby obtaining the stroke data of the original still image.
  • the stroke data may have only one stroke; for example, when the user draws a circle, the circle is a single stroke. There may also be multiple strokes, such as the Chinese character "言" ("word"), which is written with five strokes.
  • Step S300 obtains a corresponding list of robot state variables according to the stroke data of the original still image.
  • the motion trajectory data of the robot is generated, and then the kinematics calculation is performed and finally the robot state variable list is generated.
  • Step S400 performs motion path planning according to the robot state variable list to generate a motion message sequence.
  • motion path planning is performed to generate a motion message sequence that can be executed by the robot.
  • Step S500 performs a drawing action according to the motion message sequence.
  • the drawing action is performed according to the motion message sequence, thereby completely mimicking the user's handwriting and style to complete the drawn content.
  • the machine vision-based drawing method of the present invention, after acquiring the original still image of the content to be drawn, extracts stroke data using image processing technology, obtains a state variable list from it, performs motion planning to generate a motion message sequence, and executes the message sequence to complete the drawing action. Since the strokes are extracted from the drawn content itself, no preset standard font library is required and stick figures with strokes can be drawn, realizing personalized handwriting imitation, improving the drawing speed, and offering stronger interactivity and real-time performance.
  • in this embodiment, step S200 is refined into the following steps:
  • Step S210 processes the original still image to obtain a corresponding grayscale image.
  • the original still image is processed, and the original still image is converted from a color map to a grayscale image.
  • the conversion of the grayscale image is realized by the function of the image processing library OpenCV.
  • Step S220 performs threshold processing on the grayscale image to obtain a binarized image.
  • the grayscale image is binarized. For example, with the threshold set to 100, each pixel in the grayscale image is traversed: a pixel whose value is below the threshold is set to the background value, such as 0, and a pixel whose value is above the threshold is set to the foreground value, such as 255.
  • the noise in the image can be removed, and the pixel values of the content drawn by the user are unified, which is convenient for improving the accuracy of skeleton extraction and edge detection.
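  • A minimal sketch of steps S210 and S220 using the OpenCV functions the text mentions; the file name follows no source detail and the threshold of 100 follows the example above:

```python
import cv2

# Load the captured board image (path is illustrative).
image = cv2.imread("board_capture.png")

# Step S210: convert the colour image to a grayscale image.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Step S220: threshold at 100 as in the example above; pixels below the
# threshold become the background value 0, pixels above it the foreground 255.
_, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY)
```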
  • Step S230 extracts a skeleton image of the binarized image.
  • skeleton extraction uses an algorithm to delete the useless edge pixels of the user-drawn content in the image, retaining only the skeleton portion of the drawn content.
  • the skeleton image is a collection of single pixel connected images.
  • Step S240 obtains corresponding contour information according to the skeleton image.
  • edge detection is implemented by using the image processing library OpenCV, and all contours are detected from the extracted skeleton image.
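  • A sketch of step S240, assuming the two-value return signature of OpenCV 4's findContours; RETR_LIST and CHAIN_APPROX_NONE keep every contour and every contour point:

```python
import cv2

# `skeleton` is the single-pixel binary image produced by step S230.
skeleton = cv2.imread("skeleton.png", cv2.IMREAD_GRAYSCALE)

# Step S240: detect all contours; RETR_LIST returns them without hierarchy and
# CHAIN_APPROX_NONE keeps every contour point (needed later as stroke points).
contours, _ = cv2.findContours(skeleton, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
```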
  • Step S250 extracts stroke data of the original still image according to the contour information.
  • a stroke here refers to a single-pixel-wide continuous line, which can be a straight line segment, a curved segment, the boundary line of a closed region, or an isolated point. The combination of all strokes, arranged by position, constitutes the overall outline of the user's drawing.
  • eliminating redundant pixel points in this way avoids image blurring, reduces the number of strokes, and improves the drawing speed.
  • in this embodiment, step S210 is replaced by steps S211 to S212, step S230 is replaced by step S231, and step S250 is replaced by steps S251 to S252.
  • the step S210 includes:
  • Step S211 calibrates the original still image according to a preset positioning block to obtain a calibrated still image.
  • the present embodiment allows the camera to tilt at an angle to the tablet without having to be completely perpendicular to the plane of the tablet.
  • the accuracy of the subsequent extracted content is not affected.
  • positioning blocks are preset on the drawing board; for example, two rectangular blocks are used to form a positioning corner.
  • four positioning corners 1 are set on the drawing board 4, and the exact size and spacing of the four positioning corners 1 are fixed in advance.
  • the deformation of the four positioning corners 1 in the captured image can therefore be measured, and the image is calibrated according to this deformation, yielding a more accurate image.
  • if the camera is already perpendicular to the plane of the drawing board, the original still image does not need to be calibrated.
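  • A minimal sketch of the calibration in step S211, assuming the pixel positions of the four positioning corners have already been detected (src_pts) and that their true layout on the board is known (dst_pts); both coordinate sets and the output size are illustrative:

```python
import cv2
import numpy as np

image = cv2.imread("board_capture.png")

# Detected corner positions in the captured image (illustrative values).
src_pts = np.float32([[102, 87], [596, 95], [88, 471], [604, 462]])
# Where those corners should sit in an undistorted, fronto-parallel view.
dst_pts = np.float32([[0, 0], [640, 0], [0, 480], [640, 480]])

# Estimate the deformation of the positioning corners and warp the captured
# image back onto the plane of the drawing board.
matrix = cv2.getPerspectiveTransform(src_pts, dst_pts)
calibrated = cv2.warpPerspective(image, matrix, (640, 480))
```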
  • Step S212 performs grayscale processing on the calibrated still image to obtain a corresponding grayscale image.
  • the calibrated still image is converted from a color map to a grayscale image.
  • the step S230 includes:
  • Step S231 traverses each pixel in the binarized image to obtain the line segments, curved segments or contours of closed figures formed by single pixel points or series of connected single pixels as the skeleton image, a single pixel point being an intermediate pixel point of a non-single-pixel connected image.
  • the skeleton is extracted from the binarized image.
  • skeleton extraction uses an algorithm to delete the useless edge pixels of the user-drawn content in the image, retaining only the skeleton portion of the drawn content.
  • one implementation is as follows: initialize the iteration count and take the binarized image as the image to be processed; traverse all foreground points in the image to be processed, mark those meeting the preset first culling condition, and after the traversal remove the marked foreground points to obtain the first processed image; traverse all foreground points in the first processed image, mark those meeting the preset second culling condition, and after the traversal remove the marked foreground points to obtain the second processed image, which completes one iteration; update the iteration count; while the iteration count is less than the preset maximum number of iterations, take the second processed image as the image to be processed and begin a new iteration of the same culling actions. Through multiple rounds of iteration, the redundant pixels are removed and only the skeleton pixels remain.
  • there are eight reference pixel points around each pixel, as shown in FIG. 10, located above (P2), below (P6), to the left (P8), to the right (P4), upper left (P9), lower left (P7), upper right (P3) and lower right (P5) of the pixel; if the pixel lies on the image boundary, some of its eight reference pixels fall outside the image, and the value of each absent reference pixel is treated as 0.
  • the first condition: if all eight reference pixels around a pixel are background points, the pixel is an isolated point; if only one of the eight reference pixels around it is a foreground point, the pixel is an end point; if only one or two of the eight reference pixels around it are background points, the pixel is an inner point. A pixel satisfies the first condition when it is not an isolated point, not an end point and not an inner point;
  • the second condition: among the eight reference pixel points of the pixel, exactly one reference pixel group exists. A reference pixel group is a pair of adjacent reference pixels whose values, read in the clockwise direction, are the background value followed by the foreground value. As shown in FIG. 11, with the central pixel as the core there are two reference pixel groups in the clockwise direction among its eight reference pixels, so the central pixel does not satisfy the second condition.
  • the third condition: as shown in FIG. 10, if at least one of the pixels P2, P4 and P6 is a background point, and at least one of the pixels P4, P6 and P8 is a background point, the pixel P1 satisfies the third condition;
  • the fourth condition: as shown in FIG. 10, if at least one of the pixels P2, P4 and P8 is a background point, and at least one of the pixels P2, P6 and P8 is a background point, the pixel P1 satisfies the fourth condition;
  • a foreground point that simultaneously satisfies the first, second and third conditions is a pixel meeting the preset first culling condition; a foreground point that simultaneously satisfies the first, second and fourth conditions is a pixel meeting the preset second culling condition.
  • the second processed image is then taken as the image to be processed, all foreground points in it are traversed again, all pixels meeting the preset first culling condition are removed, and then all pixels meeting the preset second culling condition are removed. This loops until the updated iteration count equals the preset maximum number of iterations, at which point the foreground points remaining in the processed image constitute the skeleton image of the binarized image.
  • redundant pixels are thus deleted over multiple iterations while the skeleton pixels are preserved, realizing the skeleton extraction of the image.
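  • The iteration described above follows the pattern of Zhang-Suen-style thinning; a compact sketch is given below. Encoding "not isolated, not an end point, not an inner point" as 2 to 6 foreground neighbours is the standard Zhang-Suen bound, and the fixed iteration cap is an assumption:

```python
import numpy as np

def neighbours(img, y, x):
    """Return the eight reference pixels P2..P9, clockwise from the top;
    reference pixels outside the image boundary are treated as 0."""
    h, w = img.shape
    at = lambda j, i: int(img[j, i]) if 0 <= j < h and 0 <= i < w else 0
    return [at(y-1, x), at(y-1, x+1), at(y, x+1), at(y+1, x+1),
            at(y+1, x), at(y+1, x-1), at(y, x-1), at(y-1, x-1)]

def thin(binary, max_iterations=20):
    """Iteratively cull foreground points (value 1) that satisfy the
    conditions in the text, leaving a single-pixel skeleton."""
    img = (binary > 0).astype(np.uint8)
    for _ in range(max_iterations):
        changed = False
        # First sub-step uses the third condition, second sub-step the fourth.
        for triples in (((0, 2, 4), (2, 4, 6)),   # P2,P4,P6 and P4,P6,P8
                        ((0, 2, 6), (0, 4, 6))):  # P2,P4,P8 and P2,P6,P8
            marked = []
            for y, x in zip(*np.nonzero(img)):
                p = neighbours(img, y, x)
                # First condition: not isolated, not an end point, not inner.
                if not 2 <= sum(p) <= 6:
                    continue
                # Second condition: exactly one background-to-foreground
                # reference pixel group in the clockwise direction.
                if sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8)) != 1:
                    continue
                # Third/fourth condition: each triple must contain a background point.
                if any(all(p[i] for i in t) for t in triples):
                    continue
                marked.append((y, x))
            for y, x in marked:   # cull only after the traversal completes
                img[y, x] = 0
                changed = True
        if not changed:
            break
    return img
```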
  • the step S250 includes:
  • Step S251 traverses the contour information, and eliminates the repeated contour lines between the contours in the contour information.
  • Step S252 saves the remaining contours in the form of strokes to obtain stroke data of the original still image.
  • all contours are detected from the extracted skeleton image. All contours are traversed, and repeated contours are eliminated according to the degree of coincidence of their contour point positions. Taking the stick figure 2 shown in FIG. 9, the duckling, as an example: the duckling has two contours at the foot, and the two contours overlap, so one of the two repeated outlines can be deleted.
  • the remaining contours are saved according to the stroke form (including the stroke number and the corresponding stroke point), the stroke number corresponds to the contour, and the stroke point corresponds to the contour point, thereby completing the extraction of the stroke data.
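  • A sketch of steps S251 and S252, under the assumption that "coincidence of contour point positions" is measured as the fraction of a contour's points already covered by previously kept contours; the threshold value is illustrative:

```python
def contours_to_strokes(contours, overlap_threshold=0.9):
    """Drop contours whose points mostly coincide with contours already
    kept, then number the rest as strokes."""
    kept = []
    seen = set()
    for contour in contours:
        points = [tuple(pt[0]) for pt in contour]   # OpenCV contours: (N, 1, 2)
        overlap = sum(p in seen for p in points) / max(len(points), 1)
        if overlap > overlap_threshold:
            continue        # a repeated contour, e.g. the duckling's doubled foot
        seen.update(points)
        kept.append(points)
    # Stroke number corresponds to the contour, stroke points to contour points.
    return {number: points for number, points in enumerate(kept, start=1)}
```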
  • the Chinese character 3 to be drawn on the drawing board 4, "言" ("word"), is taken as an example.
  • after the stroke data is extracted, five strokes are obtained, numbered 1 to 5; each stroke contains several stroke points, and each stroke point corresponds to a plane coordinate value.
  • The specific data is shown in Table 1 below:

    Stroke number | Corresponding part of "言" | Number of stroke points | Stroke point coordinates (x, y)
    1 | Dot "丶"        | 173 | (264,122)(265,122)...(319,150)
    2 | Horizontal "一" | 754 | (420,209)(419,210)...(119,232)
    3 | Horizontal "一" | 308 | (372,272)(371,273)...(226,282)
    4 | Horizontal "一" | 324 | (376,334)(375,335)...(220,345)
    5 | Mouth "口"      | 575 | (348,397)(347,398)...(210,401)
  • stroke 1 represents the top dot "丶" of the character "言" and has 173 stroke points, each corresponding to a set of plane coordinates (that is, the stroke contains 173 coordinate values). Each stroke point is actually a pixel, so the number of stroke points extracted for a stroke is determined by the resolution of the image: for the same stroke, the more pixels the image has, the more stroke points are extracted.
  • the stick figure 2 to be drawn on the drawing board 4 is taken as an example.
  • nine strokes are obtained, numbered 1 to 9; each stroke contains a number of stroke points, and each stroke point corresponds to a plane coordinate value. The specific data is shown in Table 2 below:
    Stroke number | "Little Duck" part | Number of stroke points | Stroke point coordinates (x, y)
    1 | Foot (front) | 393  | (312,534)(313,533)...(195,554)
    2 | Foot (back)  | 455  | (422,520)(423,519)...(297,599)
    3 | Torso        | 1157 | (345,250)(346,249)...(641,343)
    4 | Wings        | 614  | (349,276)(348,277)...(506,353)
    5 | Neck         | 298  | (212,251)(213,250)...(131,319)
    6 | Head         | 625  | (182,30)(183,29)...(285,75)
    7 | Mouth        | 362  | (358,70)(359,69)...(267,166)
    8 | Eye          | 38   | (248,86)(249,86)...(251,101)
    9 | Nostril (point on the mouth) | 12 | (305,91)(305,92)...(304,96)
  • stroke 1 represents the front foot of the stick figure "Little Duck". It contains 393 stroke points, i.e. 393 sets of plane coordinates; together these coordinate points form the front-foot part of the "Little Duck".
  • a machine vision based drawing method includes:
  • Step S100 collects an original still image of the content to be drawn.
  • Step S211 calibrates the original still image according to a preset positioning block to obtain a calibrated still image.
  • Step S212 performs grayscale processing on the calibrated still image to obtain a corresponding grayscale image.
  • Step S220 performs threshold processing on the grayscale image to obtain a binarized image.
  • Step S231 traverses each pixel in the binarized image to obtain the line segments, curved segments or contours of closed figures formed by single pixel points or series of connected single pixels as the skeleton image, a single pixel point being an intermediate pixel point of a non-single-pixel connected image.
  • Step S240 obtains corresponding contour information according to the skeleton image.
  • Step S251 traverses the contour information, and eliminates the repeated contour lines between the contours in the contour information.
  • step S2521 the remaining contours are respectively saved in the form of strokes, and each stroke is cropped according to a preset precision to obtain stroke data of the original still image.
  • each stroke is trimmed according to a preset precision; for example, with the precision set to 0.5 mm, only the data of the two endpoints is retained within each 0.5 mm interval. The stroke points in each stroke are cropped at this precision, and the trimmed data is taken as the final stroke data.
  • the higher the precision setting (i.e. the smaller the precision value), the more stroke points are retained in each stroke, and the more faithfully the content drawn by the user is reproduced.
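  • A sketch of the trimming in step S2521, assuming the precision is expressed in the same units as the stroke coordinates: walking along the stroke, a point is kept only once it is at least one precision interval away from the last kept point, so within each interval only the endpoints survive:

```python
import math

def trim_stroke(points, precision=0.5):
    """Thin a stroke's point list down to roughly one point per
    `precision` interval, always keeping both endpoints."""
    if len(points) < 2:
        return list(points)
    trimmed = [points[0]]
    for point in points[1:-1]:
        # Keep the point once it has moved a full interval away.
        if math.dist(point, trimmed[-1]) >= precision:
            trimmed.append(point)
    trimmed.append(points[-1])   # always keep the final endpoint
    return trimmed
```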
  • Step S310 generates motion trajectory data of the corresponding robot according to the stroke data of the original still image.
  • the stroke data of the original still image is reorganized to obtain the motion track data.
  • Each stroke is saved according to the new data structure, and the data is saved as a linked list, that is, motion trajectory data is obtained.
  • the new data structure is:
  • the motion_number corresponds to the track number (from the stroke number)
  • the point_number corresponds to the track point contained in the track number (from the stroke point)
  • the point is a two-dimensional array, and the coordinate values of each track point are saved.
  • the value of n represents the number of track points.
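  • Expressed as a data type, a sketch of the per-stroke trajectory record described above; the field names motion_number, point_number and point follow the text, while the container types are assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MotionTrack:
    motion_number: int       # track number, from the stroke number
    point_number: int        # n, the number of track points
    point: List[Tuple[float, float]] = field(default_factory=list)  # n x 2 coordinates

# The motion trajectory data is the list (the text's linked list) of one
# MotionTrack per stroke.
trajectory: List[MotionTrack] = []
```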
  • Step S320 respectively performs kinematic calculation on each track point corresponding to each track number in the motion track data of the robot, and obtains a robot state variable corresponding to each track point.
  • each track point of each track number is traversed, and each track point is kinematically solved to obtain a robot state variable corresponding to the track point.
  • the kinematic solution may use, but is not limited to, geometric methods, algebraic methods, analytical methods, or intelligent composite algorithms; different robots can use whichever algorithm is appropriate. The robot state variable corresponding to each track point is saved.
  • each track point is kinematically solved to obtain a set of robot state variables, which here is a set of five angle values. Each angle value corresponds to one joint of the robot, and the robot reaches the corresponding track point after its five joints move according to the five angle values.
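  • As an illustration of the geometric method, a minimal sketch for a planar two-link arm is given below; it is not the patent's five-joint solver, and the link lengths are assumed values:

```python
import math

def planar_two_link_ik(x, y, l1=0.2, l2=0.2):
    """Geometric inverse kinematics for a planar two-link arm with link
    lengths l1 and l2: return joint angles placing the tip at (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("track point out of reach")
    elbow = math.acos(cos_elbow)
    # Shoulder angle: target direction minus the offset caused by the elbow.
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```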
  • Step S330 obtains the list of the robot state variables corresponding to the stroke data according to the robot state variables corresponding to all the track points.
  • Each of the data in the robot status list is a data structure, as shown below:
  • state_number represents the state variable number of the robot and corresponds to motion_number; there are generally as many state variable numbers as there are strokes.
  • point_number represents the number of state points contained under the state variable number, and is generally equal to the point_number of the motion track list.
  • state represents the state variable values of the robot and is an array of j rows and k columns: the k columns indicate that the robot has k joints, and j, the number of robot state variables, equals the number n of track points in the corresponding motion track list.
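  • The corresponding sketch of the state-list record; field names follow the text, container types are assumed:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RobotState:
    state_number: int     # corresponds to motion_number
    point_number: int     # number of state points, equals the track's n
    state: List[List[float]] = field(default_factory=list)  # j x k joint-angle array

# The robot state variable list holds one RobotState per stroke.
state_variable_list: List[RobotState] = []
```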
  • Step S410 calls a pre-configured robot motion planning library, and performs motion path planning on each robot state variable in the robot state variable list to obtain a motion message corresponding to each robot state variable.
  • Step S420 generates a robot motion message sequence according to the motion message corresponding to each robot state variable.
  • Step S500 performs a drawing action according to the motion message sequence.
  • the robot motion planning refers to generating a motion path from one state variable to another state variable according to different state variables of the robot.
  • this motion path needs to meet certain external constraints, such as obstacle avoidance, shortest path, and minimum energy consumption.
  • the robot motion planning method proposed in this embodiment is completed based on ROS (Robot Operating System), and the specific steps are as follows:
  • Step 1: model the robot in the ROS system.
  • ROS system modeling can be carried out purely by programming, or by importing an existing 3D model of the robot.
  • the languages used in ROS modeling are description languages: URDF (Unified Robot Description Format) and XACRO (XML Macros, an XML macro language).
  • Step 2: ROS MoveIt module configuration.
  • MoveIt is a module of the ROS system that integrates several open source motion planning libraries; within its framework the motion planning and simulation of the robot can be realized. The configuration process uses the Setup Assistant tool to load the robot model description file generated in the previous step, then configures the motion planning group, collision detection, initial state, motion planning library and the like, and finally generates a ROS configuration file package.
  • Step 3: read the list of state variables. A MoveIt interface program is written using the ROS system libraries, and each state variable value in the robot state variable list is read one by one.
  • Step 4: call the motion planning library.
  • a pre-configured robot motion planning library is called for each state variable value to perform motion planning of the state variables.
  • the motion planning library is called to generate a set of motion messages.
  • Step 5 Robot motion message sequence generation.
  • the motion messages generated in the previous step are saved one by one to generate a robot motion message sequence.
  • the motion message sequence of the robot is the amount of motion of each joint of the robot and can be directly sent to the robot drive module for execution.
  • Step 6 The motion message sequence is sent.
  • the robot motion message sequence generated in the previous step is packaged and sent to the robot driver module, which executes the motion messages upon receipt.
  • the communication between the two modules can be carried out in a wired or in a wireless manner.
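  • A minimal sketch of steps 3 to 5 using the moveit_commander Python interface, assuming a MoveIt configuration package has been generated for the robot; the node name, planning group name "arm" and the contents of state_variable_list are all assumptions:

```python
import sys
import rospy
import moveit_commander

rospy.init_node("drawing_motion_planner")
moveit_commander.roscpp_initialize(sys.argv)
group = moveit_commander.MoveGroupCommander("arm")  # assumed planning group name

# Illustrative five-joint state variables (one list of angles per track point).
state_variable_list = [[0.0, 0.4, -0.2, 0.1, 0.3]]

# Steps 3-4: read each state variable and ask the configured planning
# library for a motion plan reaching it.
message_sequence = []
for joint_angles in state_variable_list:
    group.set_joint_value_target(joint_angles)
    plan = group.plan()              # return shape differs between MoveIt releases
    message_sequence.append(plan)    # Step 5: collect the motion messages
```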
  • a machine vision based drawing system includes:
  • the image acquisition module 100 is configured to collect an original still image of the content to be drawn.
  • the content to be drawn refers to the content drawn by the user on the drawing board, including Chinese characters and/or stick figures.
  • the image acquisition module collects the content to be drawn through a camera mounted on the robot, and obtains an original still image of the content to be drawn.
  • the stroke extraction module 200 is electrically connected to the image acquisition module 100 for processing the original still image and extracting stroke data of the original still image.
  • the original still image is processed, for example by graying and binarizing the image, then extracting the skeleton from the binarized image, detecting all the contours from the skeleton, and saving them in the form of strokes, thereby obtaining the stroke data of the original still image.
  • the stroke data may have only one stroke; for example, when the user draws a circle, the circle is a single stroke. There may also be multiple strokes, such as the Chinese character "言" ("word"), which is written with five strokes.
  • the state variable generation module 300 is electrically connected to the stroke extraction module 200 for obtaining a corresponding robot state variable list according to the stroke data of the original still image.
  • the motion trajectory data of the robot is generated, and then the kinematics calculation is performed and finally the robot state variable list is generated.
  • the path planning module 400 is electrically connected to the state variable generating module 300, and is configured to perform motion path planning according to the robot state variable list to generate a motion message sequence.
  • motion path planning is performed to generate a motion message sequence that can be executed by the robot.
  • the drawing module 500 is electrically connected to the path planning module 400, and is configured to perform a drawing action according to the motion message sequence.
  • the drawing action is performed according to the motion message sequence, thereby completely mimicking the user's handwriting and style to complete the drawn content.
  • the machine vision-based drawing system of the present invention, after acquiring the original still image of the content to be drawn, extracts stroke data using image processing technology, obtains a state variable list from it, performs motion planning to generate a motion message sequence, and executes the message sequence to complete the drawing action. Since the strokes are extracted from the drawn content itself, no preset standard font library is required and stick figures with strokes can be drawn, realizing personalized handwriting imitation, improving the drawing speed, and offering stronger interactivity and real-time performance.
  • the stroke extraction module 200 includes:
  • a grayscale unit 210, configured to process the original still image to obtain a corresponding grayscale image
  • the original still image is processed, and the original still image is converted from a color map to a grayscale image.
  • the conversion of the grayscale image is realized by the function of the image processing library OpenCV.
  • a binarization unit 220 configured to perform threshold processing on the grayscale image to obtain a binarized image
  • the grayscale image is binarized. For example, with the threshold set to 100, each pixel in the grayscale image is traversed: a pixel whose value is below the threshold is set to the background value, such as 0, and a pixel whose value is above the threshold is set to the foreground value, such as 255.
  • the noise in the image can be removed, and the pixel values of the content drawn by the user are unified, which is convenient for improving the accuracy of skeleton extraction and edge detection.
  • the skeleton extracting unit 230 is configured to extract a skeleton image of the binarized image.
  • skeleton extraction uses an algorithm to delete the useless edge pixels of the user-drawn content in the image, retaining only the skeleton portion of the drawn content.
  • the skeleton image is a collection of single pixel connected images.
  • the edge detecting unit 240 is configured to obtain corresponding contour information according to the skeleton image.
  • edge detection is implemented by using the image processing library OpenCV, and all contours are detected from the extracted skeleton image.
  • the stroke extracting unit 250 is configured to extract stroke data of the original still image according to the contour information.
  • a stroke here refers to a single-pixel-wide continuous line, which can be a straight line segment, a curved segment, the boundary line of a closed region, or an isolated point. The combination of all strokes, arranged by position, constitutes the overall outline of the user's drawing.
  • eliminating redundant pixel points in this way avoids image blurring, reduces the number of strokes, and improves the drawing speed.
  • the stroke extraction module 200 of the corresponding embodiment of FIG. 6 is further refined:
  • the stroke extraction module 200 further includes:
  • the calibration unit 260 is configured to calibrate the original still image according to a preset positioning block to obtain a calibrated still image.
  • the present embodiment allows the camera to tilt at an angle to the tablet without having to be completely perpendicular to the plane of the tablet.
  • the accuracy of the subsequent extracted content is not affected.
  • positioning blocks are preset on the drawing board; for example, two rectangular blocks are used to form a positioning corner.
  • four positioning corners 1 are set on the drawing board 4, and the exact size and spacing of the four positioning corners 1 are fixed in advance.
  • the deformation of the four positioning corners 1 in the captured image can therefore be measured, and the image is calibrated according to this deformation, yielding a more accurate image.
  • if the camera is already perpendicular to the plane of the drawing board, the original still image does not need to be calibrated.
  • the grayscale unit 210 is configured to perform grayscale processing on the calibrated still image to obtain a corresponding grayscale image.
  • the calibrated still image is converted from a color map to a grayscale image.
  • a skeleton extracting unit 230 configured to extract a skeleton image of the binarized image, specifically: the skeleton extracting unit traverses each pixel point in the binarized image to acquire a single pixel point or a series of A line segment, a curved segment, or a contour of a closed figure in which a single pixel is connected is a skeleton image, and the single pixel point is an intermediate pixel point of a non-single pixel connected image.
  • the skeleton is extracted from the binarized image.
  • skeleton extraction uses an algorithm to delete the useless edge pixels of the user-drawn content in the image, retaining only the skeleton portion of the drawn content.
  • one implementation is as follows: initialize the iteration count and take the binarized image as the image to be processed; traverse all foreground points in the image to be processed, mark those meeting the preset first culling condition, and after the traversal remove the marked foreground points to obtain the first processed image; traverse all foreground points in the first processed image, mark those meeting the preset second culling condition, and after the traversal remove the marked foreground points to obtain the second processed image, which completes one iteration; update the iteration count; while the iteration count is less than the preset maximum number of iterations, take the second processed image as the image to be processed and begin a new iteration of the same culling actions. Through multiple rounds of iteration, the redundant pixels are removed and only the skeleton pixels remain.
  • there are eight reference pixel points around each pixel, as shown in FIG. 10, located above (P2), below (P6), to the left (P8), to the right (P4), upper left (P9), lower left (P7), upper right (P3) and lower right (P5) of the pixel; if the pixel lies on the image boundary, some of its eight reference pixels fall outside the image, and the value of each absent reference pixel is treated as 0.
  • the first condition: if all eight reference pixels around a pixel are background points, the pixel is an isolated point; if only one of the eight reference pixels around it is a foreground point, the pixel is an end point; if only one or two of the eight reference pixels around it are background points, the pixel is an inner point. A pixel satisfies the first condition when it is not an isolated point, not an end point and not an inner point;
  • the second condition: among the eight reference pixel points of the pixel, exactly one reference pixel group exists. A reference pixel group is a pair of adjacent reference pixels whose values, read in the clockwise direction, are the background value followed by the foreground value. As shown in FIG. 11, with the central pixel as the core there are two reference pixel groups in the clockwise direction among its eight reference pixels, so the central pixel does not satisfy the second condition.
  • the third condition: as shown in FIG. 10, if at least one of the pixels P2, P4 and P6 is a background point, and at least one of the pixels P4, P6 and P8 is a background point, the pixel P1 satisfies the third condition;
  • the fourth condition: as shown in FIG. 10, if at least one of the pixels P2, P4 and P8 is a background point, and at least one of the pixels P2, P6 and P8 is a background point, the pixel P1 satisfies the fourth condition;
  • a foreground point that simultaneously satisfies the first, second and third conditions is a pixel meeting the preset first culling condition; a foreground point that simultaneously satisfies the first, second and fourth conditions is a pixel meeting the preset second culling condition.
  • the second processed image is then taken as the image to be processed, all foreground points in it are traversed again, all pixels meeting the preset first culling condition are removed, and then all pixels meeting the preset second culling condition are removed. This loops until the updated iteration count equals the preset maximum number of iterations, at which point the foreground points remaining in the processed image constitute the skeleton image of the binarized image.
  • redundant pixels are thus deleted over multiple iterations while the skeleton pixels are preserved, realizing the skeleton extraction of the image.
  • a stroke extracting unit 250 configured to extract stroke data of the original still image according to the contour information, specifically: traversing the contour information, and eliminating a contour line repeated between contours in the contour information; The remaining contours are respectively saved in the form of strokes to obtain stroke data of the original still image.
  • all contours are detected from the extracted skeleton image. All contours are traversed, and repeated contours are eliminated according to the degree of coincidence of their contour point positions. Taking the stick figure 2 shown in FIG. 9, the duckling, as an example: the duckling has two contours at the foot, and the two contours overlap, so one of the two repeated outlines can be deleted.
  • the remaining contours are saved according to the stroke form (including the stroke number and the corresponding stroke point), the stroke number corresponds to the contour, and the stroke point corresponds to the contour point, thereby completing the extraction of the stroke data.
  • the Chinese character 3 to be drawn on the drawing board 4, "言" ("word"), is taken as an example.
  • after the stroke data is extracted, five strokes are obtained, numbered 1 to 5; each stroke contains several stroke points, and each stroke point corresponds to a plane coordinate value.
  • The specific data is shown in Table 1 below:

    Stroke number | Corresponding part of "言" | Number of stroke points | Stroke point coordinates (x, y)
    1 | Dot "丶"        | 173 | (264,122)(265,122)...(319,150)
    2 | Horizontal "一" | 754 | (420,209)(419,210)...(119,232)
    3 | Horizontal "一" | 308 | (372,272)(371,273)...(226,282)
    4 | Horizontal "一" | 324 | (376,334)(375,335)...(220,345)
    5 | Mouth "口"      | 575 | (348,397)(347,398)...(210,401)
  • stroke 1 represents the top dot "丶" of the character "言" and has 173 stroke points, each corresponding to a set of plane coordinates (that is, the stroke contains 173 coordinate values). Each stroke point is actually a pixel, so the number of stroke points extracted for a stroke is determined by the resolution of the image: for the same stroke, the more pixels the image has, the more stroke points are extracted.
  • the stick figure 2 to be drawn on the drawing board 4 is taken as an example.
  • nine strokes are obtained, numbered 1 to 9; each stroke contains a number of stroke points, and each stroke point corresponds to a plane coordinate value. The specific data is shown in Table 2 below:
    Stroke number | "Little Duck" part | Number of stroke points | Stroke point coordinates (x, y)
    1 | Foot (front) | 393  | (312,534)(313,533)...(195,554)
    2 | Foot (back)  | 455  | (422,520)(423,519)...(297,599)
    3 | Torso        | 1157 | (345,250)(346,249)...(641,343)
    4 | Wings        | 614  | (349,276)(348,277)...(506,353)
    5 | Neck         | 298  | (212,251)(213,250)...(131,319)
    6 | Head         | 625  | (182,30)(183,29)...(285,75)
    7 | Mouth        | 362  | (358,70)(359,69)...(267,166)
    8 | Eye          | 38   | (248,86)(249,86)...(251,101)
    9 | Nostril (point on the mouth) | 12 | (305,91)(305,92)...(304,96)
  • stroke 1 represents the front foot of the stick figure "Little Duck". It contains 393 stroke points, i.e. 393 sets of plane coordinates; together these coordinate points form the front-foot part of the "Little Duck".
  • a machine vision based drawing system includes:
  • the image acquisition module 100 is configured to collect an original still image of the content to be drawn.
  • the stroke extraction module 200 is configured to process the original still image, and extract stroke data of the original static image, including:
  • the calibration unit 260 is configured to calibrate the original still image according to a preset positioning block to obtain a calibrated still image
  • the grayscale unit 210 is configured to perform grayscale processing on the calibrated still image to obtain a corresponding grayscale image
  • a binarization unit 220 configured to perform threshold processing on the grayscale image to obtain a binarized image
  • a skeleton extracting unit 230 configured to extract a skeleton image of the binarized image, specifically: the skeleton extracting unit traverses each pixel point in the binarized image to acquire a single pixel point or a series of a single pixel connected line segment, a curved segment or a closed graphic outline is used as a skeleton image, and the single pixel point is an intermediate pixel point of a non-single pixel connected image;
  • An edge detecting unit 240 configured to obtain corresponding contour information according to the skeleton image
  • a stroke extracting unit 250 configured to extract stroke data of the original still image according to the contour information, specifically: traversing the contour information, and eliminating a contour line repeated between contours in the contour information; The remaining contours are respectively saved in the form of strokes, and each stroke is cropped according to a preset precision to obtain stroke data of the original still image.
  • each stroke is trimmed according to a preset precision; for example, with the precision set to 0.5 mm, only the data of the two endpoints is retained within each 0.5 mm interval. The stroke points in each stroke are cropped at this precision, and the trimmed data is taken as the final stroke data.
  • the higher the precision setting (i.e. the smaller the precision value), the more stroke points are retained in each stroke, and the more faithfully the content drawn by the user is reproduced.
  • the state variable generating module 300 is configured to obtain a corresponding list of robot state variables according to the stroke data of the original still image, including:
  • the motion trajectory data generating unit 310 is configured to generate motion trajectory data of the corresponding robot according to the stroke data of the original still image.
  • the stroke data of the original still image is reorganized to obtain the motion track data.
  • Each stroke is saved according to the new data structure, and the data is saved as a linked list, that is, motion trajectory data is obtained.
  • the new data structure is:
  • the motion_number corresponds to the track number (from the stroke number)
  • the point_number corresponds to the track point contained in the track number (from the stroke point)
  • the point is a two-dimensional array, and the coordinate values of each track point are saved.
  • the value of n represents the number of track points.
  • the kinematics solving unit 320 is configured to perform a kinematic solution for each trajectory point corresponding to each trajectory number in the robot's motion trajectory data, obtaining the robot state variable corresponding to each trajectory point.
  • each trajectory point of each trajectory number is traversed and kinematically solved to obtain the robot state variable corresponding to that trajectory point.
  • the kinematic solution may use, without being limited to, geometric, algebraic, analytical, or intelligent composite algorithms; different robots may use whichever algorithm is suitable. The robot state variable corresponding to each trajectory point is saved.
  • taking a five-degree-of-freedom robot as an example, solving the kinematics of each trajectory point yields a set of robot state variables: a set of five angle values, each corresponding to one joint of the robot; after the five joints move according to these five angle values, the robot reaches the corresponding trajectory point.
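The patent does not fix a particular solver. Purely as an illustration of the geometric method, the following C++ sketch solves the inverse kinematics of a simplified two-link planar arm (not the five-joint robot of the example); the link-length parameters, names, and the elbow branch choice are assumptions.

#include <cmath>
#include <optional>

struct JointAngles { double theta1, theta2; };  // radians

// Geometric inverse kinematics for a 2-link planar arm with link
// lengths l1 and l2: returns joint angles that place the pen tip at
// (x, y), or nothing when the point is out of reach.
std::optional<JointAngles> solveIK(double x, double y,
                                   double l1, double l2) {
    double r2 = x * x + y * y;
    double c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2);  // cos(theta2)
    if (c2 < -1.0 || c2 > 1.0) return std::nullopt;          // unreachable
    double theta2 = std::acos(c2);                           // elbow angle
    double theta1 = std::atan2(y, x)
                  - std::atan2(l2 * std::sin(theta2),
                               l1 + l2 * std::cos(theta2));  // shoulder angle
    return JointAngles{theta1, theta2};
}

A five-joint arm would use a richer solver of the same flavour, returning five angle values per trajectory point as described above.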
  • the state variable list generating unit 330 is configured to obtain the robot state variable list corresponding to the stroke data according to the robot state variables corresponding to all trajectory points;
  • the path planning module 400 is configured to perform motion path planning according to the list of the robot state variables, and generate a motion message sequence, including:
  • the path planning unit 410 is configured to call a pre-configured robot motion planning library, perform motion path planning on each robot state variable in the robot state variable list, and obtain a motion message corresponding to each robot state variable;
  • a message sequence generating unit 420 configured to generate a robot motion message sequence according to the motion message corresponding to each robot state variable
  • the drawing module 500 is configured to perform a drawing action according to the motion message sequence.
  • the robot motion planning refers to generating a motion path from one state variable to another state variable according to different state variables of the robot.
  • This motion path must satisfy certain external constraints, such as obstacle avoidance, shortest path, and minimum energy consumption.
  • the robot motion planning method proposed in this embodiment is implemented on ROS (Robot Operating System); the specific steps are as follows:
  • Step 1: ROS system modeling of the robot.
  • ROS system modeling can be carried out purely by programming, or the robot's existing 3D model can be imported and programmed on top of.
  • the modeling languages used in ROS are script-type languages; currently URDF (Unified Robot Description Format) and XACRO (XML Macros, an XML macro language) are supported.
  • the ROS modeling process specifies characteristics such as the robot's geometric dimensions, joint types, motion ranges, and obstacle-avoidance conditions, and finally generates the robot's model description file.
  • Step 2: ROS system Moveit module configuration.
  • Moveit is a module of the ROS system that integrates several open-source motion planning libraries; the framework of this module enables robot motion planning and simulation. Further, the configuration process uses the Setup Assistant tool to load the robot model description file generated in the previous step, then configures the motion planning group, collision detection, initial state, motion planning library, and other information, and finally generates a ROS configuration file package.
  • Step 3: Read the state variable list. A Moveit interface program is written using the ROS system libraries, and each state variable value in the robot state variable list is read one by one.
  • Step 4: Motion planning library invocation.
  • for each state variable value, the pre-configured robot motion planning library is called to plan the motion of that state variable.
  • each time a state variable value is read, the motion planning library is called once, generating a set of motion messages.
  • Step 5: Robot motion message sequence generation.
  • the motion messages generated in the previous step are saved one by one to generate the robot motion message sequence.
  • the robot's motion message sequence consists of the motion amounts of each of its joints and can be sent directly to the robot driver module for execution.
  • Step 6: Motion message sequence sending.
  • the robot motion message sequence generated in the previous step is packaged and sent to the robot driver module, which executes the motion messages upon receipt.
  • communication between the two may be wired or wireless. A hedged sketch of steps 3 to 5 follows.
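To make steps 3 to 5 concrete, the following is a sketch against the ROS1/Moveit C++ interface (MoveGroupInterface, as in the Moveit tutorials of that era); the planning group name "arm" and the in-memory state list are assumptions, and reading the list from storage and packaging the messages for the driver module are left out.

#include <moveit/move_group_interface/move_group_interface.h>
#include <ros/ros.h>
#include <vector>

int main(int argc, char** argv) {
    ros::init(argc, argv, "drawing_planner");
    ros::AsyncSpinner spinner(1);
    spinner.start();

    moveit::planning_interface::MoveGroupInterface group("arm");

    // Each entry is one robot state variable: one angle per joint.
    std::vector<std::vector<double>> state_list = /* step 3: read the list */ {};

    for (const auto& joint_angles : state_list) {
        group.setJointValueTarget(joint_angles);
        moveit::planning_interface::MoveGroupInterface::Plan plan;
        if (group.plan(plan) ==                        // step 4: plan motion
            moveit::planning_interface::MoveItErrorCode::SUCCESS) {
            // step 5: plan.trajectory_ holds the joint motion messages;
            // append it to the outgoing robot motion message sequence here.
        }
    }
    return 0;
}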

Abstract

The present invention provides a machine-vision-based drawing method and system, comprising: step S100, capturing an original still image of content to be drawn; step S200, processing the original still image to extract stroke data of the original still image; step S300, obtaining a corresponding robot state variable list according to the stroke data of the original still image; step S400, performing motion path planning according to the robot state variable list to generate a motion message sequence; and step S500, performing a drawing action according to the motion message sequence. By capturing the content drawn by the user, extracting its stroke data, and then drawing it, the present invention lets the robot analyze and imitate in real time, thereby achieving personalized handwriting imitation with stronger interactivity.

Description

A machine-vision-based drawing method and system
This application claims priority to Chinese patent application No. 201810297791.7, filed on April 4, 2018 and entitled "A machine-vision-based drawing method and system", the entire contents of which are incorporated herein.
Technical Field
The present invention relates to the field of robotics, and in particular to a machine-vision-based drawing method and system.
Background Art
With the rapid development of robotics, and in particular of machine vision and precision control, robots have become more intelligent and able to take on more kinds of work. Robots are therefore now widely used not only in industry but are also increasingly common in the consumer field, for example in home companionship, hands-on teaching, and science and technology exhibitions.
In the consumer robot field, applications are becoming ever richer; robots can even master "music, chess, calligraphy, and painting". For robots with manipulator arms, controlling them to "write" and "draw" is a common application scenario.
The prior art writes or draws preset content on the basis of preprocessing. First, a TTF font library is used to extract the outline points of the Chinese character to be written, and the outline is converted into spline curves; these curves are then processed in the background and converted into end-effector trajectories for the manipulator; finally, the manipulator is controlled to run along the preset trajectory, thereby realizing its writing function. The drawbacks of this method are: 1. it can only write Chinese characters and cannot draw simple pictures; 2. a standard font library must be preset in advance, so only standard fonts can be handled and non-standard, personalized handwriting cannot be reproduced.
Summary of the Invention
The object of the present invention is to provide a machine-vision-based drawing method and system that capture the content drawn by a user, extract its stroke data, and then draw it, letting the robot analyze and imitate in real time, thereby achieving personalized handwriting imitation with stronger interactivity.
The technical solution provided by the present invention is as follows:
A machine-vision-based drawing method, comprising: step S100, capturing an original still image of content to be drawn; step S200, processing the original still image to extract stroke data of the original still image; step S300, obtaining a corresponding robot state variable list according to the stroke data of the original still image; step S400, performing motion path planning according to the robot state variable list to generate a motion message sequence; and step S500, performing a drawing action according to the motion message sequence.
In the above technical solution, after the original still image of the content to be drawn is captured, image processing is used to extract the stroke data and thereby obtain the state variable list; motion planning is then performed to generate a motion message sequence, and this sequence is executed to complete the drawing action. Since the strokes are extracted from the content being drawn, no standard font library needs to be preset, and simple stroke drawings can be drawn as well, achieving personalized handwriting imitation with stronger interactivity.
Further, step S200 specifically comprises: step S210, processing the original still image to obtain a corresponding grayscale image; step S220, performing threshold processing on the grayscale image to obtain a binarized image; step S230, extracting a skeleton image of the binarized image; step S240, obtaining corresponding contour information according to the skeleton image; and step S250, extracting the stroke data of the original still image according to the contour information.
In the above technical solution, redundant pixels are removed and the contour information of the original still image is extracted to obtain the stroke data, which both avoids image blurring and reduces the number of strokes, increasing drawing speed.
Further, step S210 comprises: step S211, calibrating the original still image according to preset positioning blocks to obtain a calibrated still image; and step S212, performing grayscale processing on the calibrated still image to obtain a corresponding grayscale image.
In the above technical solution, calibrating the original still image lowers the requirements on shooting: the drawing board plane need not be exactly perpendicular to the camera, and deformation of the original still image is prevented from degrading the quality of the drawing imitated by the robot.
Further, step S230 comprises: step S231, traversing each pixel in the binarized image and taking, as the skeleton image, single pixels or the line segments, curve segments, or closed-figure outlines formed by chains of single pixels, a single pixel being an intermediate pixel of an image that is not single-pixel connected.
In the above technical solution, redundant pixels are removed to obtain the skeleton image, which both avoids image blurring and reduces the number of strokes, increasing drawing speed.
Further, step S250 comprises: step S251, traversing the contour information and removing contour lines duplicated between contours in the contour information; and step S252, saving each remaining contour in the form of a stroke to obtain the stroke data of the original still image.
In the above technical solution, duplicated contour lines are removed to prevent repeated contours from blurring the image.
Further, step S252 further comprises: step S2521, saving each remaining contour in the form of a stroke and cropping each stroke according to a preset precision to obtain the stroke data of the original still image.
In the above technical solution, appropriately cropping the strokes reduces the amount of drawing data and increases drawing speed.
Further, step S300 comprises: step S310, generating corresponding robot motion trajectory data according to the stroke data of the original still image; step S320, performing a kinematic solution for each trajectory point corresponding to each trajectory number in the robot motion trajectory data to obtain the robot state variable corresponding to each trajectory point; and step S330, obtaining the robot state variable list corresponding to the stroke data according to the robot state variables corresponding to all trajectory points.
In the above technical solution, the robot state variable list is obtained from the stroke data of the image, which makes it convenient for the robot to perform the drawing action.
Further, step S400 comprises: step S410, calling a pre-configured robot motion planning library and performing motion path planning for each robot state variable in the robot state variable list to obtain the motion message corresponding to each robot state variable; and step S420, generating a robot motion message sequence according to the motion message corresponding to each robot state variable.
In the above technical solution, the robot motion message sequence is obtained from the robot state variable list and the motion planning library, so that the robot can draw according to the motion message sequence and complete the imitation of the user's drawing.
The present invention also provides a machine-vision-based drawing system, comprising: an image capture module for capturing an original still image of content to be drawn; a stroke extraction module, electrically connected to the image capture module, for processing the original still image and extracting the stroke data of the original still image; a state variable generation module, electrically connected to the stroke extraction module, for obtaining a corresponding robot state variable list according to the stroke data of the original still image; a path planning module, electrically connected to the state variable generation module, for performing motion path planning according to the robot state variable list and generating a motion message sequence; and a drawing module, electrically connected to the path planning module, for performing a drawing action according to the motion message sequence.
In the above technical solution, after the original still image of the content to be drawn is captured, image processing is used to extract the stroke data and thereby obtain the state variable list; motion planning is then performed to generate a motion message sequence, and this sequence is executed to complete the drawing action. Since the strokes are extracted from the content being drawn, no standard font library needs to be preset, and simple stroke drawings can be drawn as well, achieving personalized handwriting imitation with stronger interactivity.
Further, the stroke extraction module comprises: a grayscaling unit for processing the original still image to obtain a corresponding grayscale image; a binarization unit for performing threshold processing on the grayscale image to obtain a binarized image; a skeleton extraction unit for extracting a skeleton image of the binarized image; an edge detection unit for obtaining corresponding contour information according to the skeleton image; and a stroke extraction unit for extracting the stroke data of the original still image according to the contour information.
In the above technical solution, redundant pixels are removed and the contour information of the original still image is extracted to obtain the stroke data, which both avoids image blurring and reduces the number of strokes, increasing drawing speed.
Further, the stroke extraction module further comprises: a calibration unit for calibrating the original still image according to preset positioning blocks to obtain a calibrated still image; the grayscaling unit is further configured to perform grayscale processing on the calibrated still image to obtain a corresponding grayscale image.
In the above technical solution, calibrating the original still image lowers the requirements on shooting: the drawing board plane need not be exactly perpendicular to the camera, and deformation of the original still image is prevented from degrading the quality of the drawing imitated by the robot.
Further, the skeleton extraction unit extracts the skeleton image of the binarized image specifically as follows: the skeleton extraction unit traverses each pixel in the binarized image and takes, as the skeleton image, single pixels or the line segments, curve segments, or closed-figure outlines formed by chains of single pixels, a single pixel being an intermediate pixel of an image that is not single-pixel connected.
In the above technical solution, removing redundant pixels both avoids image blurring and reduces the number of strokes, increasing drawing speed.
Further, the stroke extraction unit extracts the stroke data of the original still image according to the contour information specifically as follows: it traverses the contour information and removes contour lines duplicated between contours in the contour information; and it saves each remaining contour in the form of a stroke to obtain the stroke data of the original still image.
In the above technical solution, duplicated contour lines are removed to prevent repeated contours from blurring the image.
Further, the stroke extraction unit is further configured to save each remaining contour in the form of a stroke and crop each stroke according to a preset precision to obtain the stroke data of the original still image.
In the above technical solution, appropriately cropping the strokes reduces the amount of drawing data and increases drawing speed.
Further, the state variable generation module comprises: a motion trajectory data generation unit for generating corresponding robot motion trajectory data according to the stroke data of the original still image; a kinematic solution unit for performing a kinematic solution for each trajectory point corresponding to each trajectory number in the robot motion trajectory data to obtain the robot state variable corresponding to each trajectory point; and a state variable list generation unit for obtaining the robot state variable list corresponding to the stroke data according to the robot state variables corresponding to all trajectory points.
In the above technical solution, the robot state variable list is obtained from the stroke data of the image, which makes it convenient for the robot to perform the drawing action.
Further, the path planning module comprises: a path planning unit for calling a pre-configured robot motion planning library and performing motion path planning for each robot state variable in the robot state variable list to obtain the motion message corresponding to each robot state variable; and a message sequence generation unit for generating a robot motion message sequence according to the motion message corresponding to each robot state variable.
In the above technical solution, the robot motion message sequence is obtained from the robot state variable list and the motion planning library, so that the robot can draw according to the motion message sequence and complete the imitation of the user's drawing.
The machine-vision-based drawing method and system provided by the present invention bring the following beneficial effects: by capturing the content drawn by the user, extracting its stroke data, and then drawing it, the robot analyzes and imitates in real time, achieving personalized handwriting imitation with stronger interactivity.
Brief Description of the Drawings
The above characteristics, technical features, and advantages of the machine-vision-based drawing method and system, and the ways of realizing them, are further explained below in a clear and easily understandable manner with reference to the drawings, which describe preferred embodiments.
Fig. 1 is a flowchart of one embodiment of the machine-vision-based drawing method of the present invention;
Fig. 2 is a flowchart of another embodiment of the machine-vision-based drawing method of the present invention;
Fig. 3 is a flowchart of another embodiment of the machine-vision-based drawing method of the present invention;
Fig. 4 is a flowchart of another embodiment of the machine-vision-based drawing method of the present invention;
Fig. 5 is a schematic structural diagram of one embodiment of the machine-vision-based drawing system of the present invention;
Fig. 6 is a schematic structural diagram of another embodiment of the machine-vision-based drawing system of the present invention;
Fig. 7 is a schematic structural diagram of another embodiment of the machine-vision-based drawing system of the present invention;
Fig. 8 is a schematic structural diagram of another embodiment of the machine-vision-based drawing system of the present invention;
Fig. 9 is a schematic diagram of the content to be drawn on the drawing board in the embodiments corresponding to Figs. 3 and 7;
Fig. 10 is a schematic diagram of a pixel and its eight reference pixels in the embodiments corresponding to Figs. 3 and 7;
Fig. 11 is a schematic diagram of a pixel and its reference pixel group in the embodiments corresponding to Figs. 3 and 7.
Description of reference numerals:
100. image capture module; 200. stroke extraction module; 300. state variable generation module; 400. path planning module; 500. drawing module; 600. image calibration module; 210. grayscaling unit; 220. binarization unit; 230. skeleton extraction unit; 240. edge detection unit; 250. stroke extraction unit; 260. calibration unit; 310. motion trajectory data generation unit; 320. kinematic solution unit; 330. state variable list generation unit; 410. path planning unit; 420. message sequence generation unit; 1. positioning corner; 2. simple drawing; 3. Chinese character; 4. drawing board.
Detailed Description of the Embodiments
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, specific embodiments of the present invention are described below with reference to the drawings. Obviously, the drawings described below show only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings, and other embodiments, from them without creative effort.
To keep the figures concise, each figure only schematically shows the parts relevant to the present invention; they do not represent the actual structure of a product. In addition, to keep the figures concise and easy to understand, where several components in a figure have the same structure or function, only one of them is schematically drawn or labeled. Herein, "one" does not only mean "exactly one" but can also mean "more than one".
In one embodiment of the present invention, as shown in Fig. 1, a machine-vision-based drawing method comprises:
Step S100: capture an original still image of the content to be drawn.
Specifically, the content to be drawn is what the user has drawn on the drawing board, including Chinese characters and/or simple drawings. The content is captured through a camera mounted on the robot, obtaining the original still image of the content to be drawn.
Step S200: process the original still image and extract the stroke data of the original still image.
Specifically, the original still image is processed, for example by first converting it to grayscale and binarizing it, then extracting the skeleton of the binarized image, detecting all contours in the skeleton, and saving them in stroke form, thereby obtaining the stroke data of the original still image. The stroke data may contain only a single stroke, for example when the user draws a circle, which takes one stroke; it may also contain several strokes, for example the Chinese character "言", which has 5 strokes.
Step S300: obtain the corresponding robot state variable list according to the stroke data of the original still image.
Specifically, robot motion trajectory data is generated from the stroke data, a kinematic solution is performed, and the robot state variable list is finally generated.
Step S400: perform motion path planning according to the robot state variable list and generate a motion message sequence.
Specifically, motion path planning is performed on the generated robot state variable list to produce a motion message sequence that the robot can execute.
Step S500: perform the drawing action according to the motion message sequence.
Specifically, the drawing action is executed according to the motion message sequence, so that the drawn content completely imitates the user's handwriting and style.
Unlike the prior art, the machine-vision-based drawing method of the present invention, after capturing the original still image of the content to be drawn, uses image processing to extract the stroke data and thereby obtain the state variable list, then performs motion planning to generate a motion message sequence, which is executed to complete the drawing action. Since the strokes are extracted from the content being drawn, no standard font library needs to be preset and simple stroke drawings can also be drawn, achieving personalized handwriting imitation, higher drawing speed, and stronger interactivity and real-time performance.
In another embodiment of the present invention, as shown in Fig. 2, on the basis of the previous embodiment, step S200 is replaced by the following steps:
Step S210: process the original still image to obtain a corresponding grayscale image.
Specifically, the original still image is processed so that it is converted from a color image into a grayscale image, for example using functions of the image processing library OpenCV.
Step S220: perform threshold processing on the grayscale image to obtain a binarized image.
Specifically, the grayscale image is binarized. For example, the threshold is set to 100 and each pixel of the grayscale image is traversed: when the pixel value is below the threshold it is set to the background value, e.g. 0; when it is above the threshold it is set to the foreground value, e.g. 255. Binarization removes noise from the image and unifies the pixel values of the content drawn by the user, improving the accuracy of skeleton extraction and edge detection.
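A minimal sketch of the two steps just described, using the OpenCV functions the text refers to (the file names are illustrative assumptions):

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat color = cv::imread("board.png");        // the (calibrated) image
    if (color.empty()) return 1;                    // image not found
    cv::Mat gray, binary;
    cv::cvtColor(color, gray, cv::COLOR_BGR2GRAY);  // step S210: grayscale
    // Step S220: pixels above the threshold (100, as in the text) become
    // the foreground value 255, the rest the background value 0.
    cv::threshold(gray, binary, 100, 255, cv::THRESH_BINARY);
    cv::imwrite("binary.png", binary);
    return 0;
}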
Step S230: extract the skeleton image of the binarized image.
Specifically, skeleton extraction uses an algorithm to delete the useless edge pixels of the content drawn by the user, keeping only the skeleton of the drawn content. The skeleton image is a set of single-pixel connected images.
Step S240: obtain the corresponding contour information according to the skeleton image.
Specifically, edge detection is implemented with the image processing library OpenCV, and all contours are detected in the extracted skeleton image.
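A minimal sketch of this contour detection with OpenCV (the retrieval and approximation flags are reasonable assumptions, not fixed by the patent):

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<std::vector<cv::Point>> detectContours(const cv::Mat& skeleton) {
    std::vector<std::vector<cv::Point>> contours;
    // RETR_LIST retrieves every contour; CHAIN_APPROX_NONE keeps every
    // contour point, so strokes retain one point per pixel as described.
    cv::findContours(skeleton, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);
    return contours;
}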
Step S250: extract the stroke data of the original still image according to the contour information.
Specifically, all contours and the contour points of each contour are saved in stroke form (comprising a stroke number and the corresponding stroke points), the stroke number corresponding to a contour and the stroke points to its contour points, thereby obtaining the stroke data of the binarized image. A stroke here is a single-pixel-wide continuous line; it may be a straight segment, a curved segment, the boundary line of a closed region, an isolated point, and so on. Combining all strokes according to their positions forms the overall outline of the content drawn by the user.
In this embodiment, redundant pixels are removed and the contour information of the original still image is extracted to obtain the stroke data, which both avoids image blurring and reduces the number of strokes, increasing drawing speed.
In another embodiment of the present invention, as shown in Fig. 3, on the basis of the previous embodiment, step S210 is replaced by steps S211 and S212, step S230 by step S231, and step S250 by steps S251 and S252.
Step S210 comprises:
Step S211: calibrate the original still image according to the preset positioning blocks to obtain a calibrated still image.
Specifically, this embodiment allows the camera to be tilted at an angle to the drawing board; it need not be exactly perpendicular to the board plane. The captured original still image is calibrated so that the accuracy of the subsequently extracted content is unaffected. Positioning blocks are preset on the drawing board, for example two rectangular blocks forming a positioning corner; as shown in Fig. 9, four positioning corners 1 are arranged on the drawing board 4, and their exact dimensions and spacing are known in advance. From the dimensions and spacing of the four positioning corners 1 in the photograph, compared with their dimensions and spacing before shooting, the deformation of the four positioning corners 1 can be determined, and the captured image is calibrated according to this deformation, giving a more accurate image.
If the camera faces the drawing board straight on, shooting at 90 degrees to the board, the original still image contains no shooting-induced deformation and does not need to be calibrated.
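A hedged sketch of the calibration: assuming the four positioning corners have been located in the photograph (src) and their true layout on the board is known (dst), an OpenCV perspective transform removes the tilt-induced deformation. All names are illustrative.

#include <opencv2/opencv.hpp>
#include <vector>

// src and dst each hold exactly four corner positions.
cv::Mat calibrate(const cv::Mat& photo,
                  const std::vector<cv::Point2f>& src,   // corners in photo
                  const std::vector<cv::Point2f>& dst,   // known board layout
                  const cv::Size& boardSize) {
    cv::Mat H = cv::getPerspectiveTransform(src, dst);   // deformation model
    cv::Mat rectified;
    cv::warpPerspective(photo, rectified, H, boardSize); // undo the tilt
    return rectified;
}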
Step S212: perform grayscale processing on the calibrated still image to obtain a corresponding grayscale image.
Specifically, the calibrated still image is converted from a color image into a grayscale image.
Step S230 comprises:
Step S231: traverse each pixel in the binarized image and take, as the skeleton image, single pixels or the line segments, curve segments, or closed-figure outlines formed by chains of single pixels, a single pixel being an intermediate pixel of an image that is not single-pixel connected.
Specifically, skeleton extraction is performed on the binarized image. Skeleton extraction uses an algorithm to delete the useless edge pixels of the content drawn by the user, keeping only the skeleton of the drawn content. One implementation is as follows: initialize the iteration count and take the binarized image as the image to be processed; traverse all foreground points of the image to be processed and mark those meeting a preset first removal condition; when the traversal is finished, remove the marked foreground points to obtain the first processed image; traverse all foreground points of the first processed image and mark those meeting a preset second removal condition; when the traversal is finished, remove the marked foreground points to obtain the second processed image, which completes one iteration; update the iteration count; while the iteration count is below the preset maximum, take the second processed image as the image to be processed and repeat the removal above, starting a new iteration. Over multiple iterations, useless edge pixels are deleted step by step from the outside in. When the iteration count equals the preset maximum, the foreground points remaining in the processed image constitute the skeleton image of the binarized image. These remaining foreground points may be isolated single pixels, or the line segments, curve segments, closed figures, and other outlines formed by chains of single pixels; such single pixels are the intermediate pixels of images that were originally not single-pixel connected.
Example: first the original still image is converted into a grayscale image, and the grayscale image is then binarized. For example, with the threshold set to 100, each pixel of the grayscale image is traversed: when the pixel value is below the threshold it is set to the background value, e.g. 0; when it is above the threshold it is set to the foreground value, e.g. 1. This yields the binarized image, in which a pixel value of 0 indicates a background point and a value of 1 a foreground point.
Each pixel is surrounded by eight reference pixels, as shown in Fig. 10, located above it (P2), below it (P6), to its left (P8), to its right (P4), upper-left (P9), lower-left (P7), upper-right (P3), and lower-right (P5). If the pixel lies on the image boundary, not all eight reference pixels are present in the image, and the values of absent reference pixels are treated as 0.
Among the foreground points: if all eight reference pixels are background points, the pixel is an isolated point; if exactly one of the eight reference pixels is a foreground point, the pixel is an endpoint; if only one or two of the eight reference pixels are background points, the pixel is an inner point.
First condition: the pixel is not an isolated point, not an endpoint, and not an inner point.
Second condition: among the pixel's eight reference pixels there is exactly one reference pixel group, a reference pixel group being two adjacent reference pixels whose values, counted clockwise, are the background value followed by the foreground value. As shown in Fig. 11, around the central pixel there are, counted clockwise, two reference pixel groups with values 0, 1, so that central pixel does not satisfy the second condition.
Third condition: as shown in Fig. 10, if at least one of P2, P4, P6 is a background point and at least one of P4, P6, P8 is a background point, then pixel P1 satisfies the third condition.
Fourth condition: as shown in Fig. 10, if at least one of P2, P4, P8 is a background point and at least one of P2, P6, P8 is a background point, then pixel P1 satisfies the fourth condition.
A foreground point that simultaneously satisfies the first, second, and third conditions is a pixel meeting the preset first removal condition; a foreground point that simultaneously satisfies the first, second, and fourth conditions is a pixel meeting the preset second removal condition.
The binarized image is taken as the image to be processed and all of its foreground points are traversed; pixels meeting the preset first removal condition are marked and, when the traversal is finished, removed, giving the first processed image. All foreground points of the first processed image are then traversed; pixels meeting the preset second removal condition are marked and, when the traversal is finished, removed, giving the second processed image. Removal means changing the pixel value from 1 to 0, i.e. turning a foreground point into a background point. This counts as one iteration, and the iteration count is incremented by 1. While the updated iteration count is below the preset maximum, the second processed image is taken as the image to be processed, all of its foreground points are traversed again, and all pixels meeting the first removal condition and then all pixels meeting the second removal condition are removed. The loop continues until the updated iteration count equals the preset maximum, at which point the foreground points remaining in the processed image constitute the skeleton image of the binarized image.
The preset maximum iteration count is set empirically, for example N = 10. Deleting useless pixels over multiple iterations while keeping the skeleton pixels accomplishes the skeleton extraction of the image.
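The iteration described above corresponds closely to the classical Zhang-Suen thinning scheme. A compact C++ sketch follows, assuming an 8-bit single-channel image whose foreground pixels have value 1 (e.g. the binarized image divided by 255); the exact bound on the neighbour count is taken from Zhang-Suen.

#include <opencv2/opencv.hpp>

// One sub-iteration of the thinning. `pass` 0 applies the first removal
// condition (third condition above), `pass` 1 the second (fourth condition).
static bool thinningPass(cv::Mat& img, int pass) {
    cv::Mat marker = cv::Mat::zeros(img.size(), CV_8UC1);
    bool changed = false;
    for (int y = 1; y < img.rows - 1; ++y) {
        for (int x = 1; x < img.cols - 1; ++x) {
            if (img.at<uchar>(y, x) != 1) continue;   // foreground only
            uchar p2 = img.at<uchar>(y-1, x),   p3 = img.at<uchar>(y-1, x+1);
            uchar p4 = img.at<uchar>(y,   x+1), p5 = img.at<uchar>(y+1, x+1);
            uchar p6 = img.at<uchar>(y+1, x),   p7 = img.at<uchar>(y+1, x-1);
            uchar p8 = img.at<uchar>(y,   x-1), p9 = img.at<uchar>(y-1, x-1);
            int B = p2+p3+p4+p5+p6+p7+p8+p9;          // foreground neighbours
            int A = (p2==0 && p3==1) + (p3==0 && p4==1)   // clockwise 0->1
                  + (p4==0 && p5==1) + (p5==0 && p6==1)   // transitions
                  + (p6==0 && p7==1) + (p7==0 && p8==1)
                  + (p8==0 && p9==1) + (p9==0 && p2==1);
            // B in [2,6]: not isolated, not an endpoint, not an inner point
            // (first condition); A == 1: exactly one reference pixel group
            // (second condition).
            int c1 = (pass == 0) ? p2*p4*p6 : p2*p4*p8;
            int c2 = (pass == 0) ? p4*p6*p8 : p2*p6*p8;
            if (B >= 2 && B <= 6 && A == 1 && c1 == 0 && c2 == 0) {
                marker.at<uchar>(y, x) = 1;           // mark for removal
                changed = true;
            }
        }
    }
    img -= marker;                                    // remove marked pixels
    return changed;
}

// Alternate the two passes for at most maxIter rounds (N = 10 in the text).
void extractSkeleton(cv::Mat& binary01, int maxIter = 10) {
    for (int i = 0; i < maxIter; ++i) {
        bool a = thinningPass(binary01, 0);
        bool b = thinningPass(binary01, 1);
        if (!a && !b) break;                          // nothing left to thin
    }
}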
Step S250 comprises:
Step S251: traverse the contour information and remove contour lines duplicated between contours in the contour information.
Step S252: save each remaining contour in the form of a stroke to obtain the stroke data of the original still image.
Specifically, all contours are detected in the extracted skeleton image. All contours are traversed, and duplicated contour lines are removed according to the positional overlap of their contour points. Taking the simple drawing 2 shown in Fig. 9, the little duck, as an example: the duck's foot has 2 contours, and these 2 contours overlap, so there are 2 duplicated contour lines and deleting 1 of them suffices.
The remaining contours are saved in stroke form (comprising a stroke number and the corresponding stroke points), the stroke number corresponding to a contour and the stroke points to its contour points, which completes the extraction of the stroke data.
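The patent does not fix how positional overlap is measured. One plausible reading, sketched here purely as an assumption, treats a contour as a duplicate when most of its points lie within a small tolerance of an already-kept contour; the 0.8 ratio and 1.5-pixel tolerance are illustrative choices.

#include <cmath>
#include <vector>
#include <opencv2/opencv.hpp>

// True when contour b duplicates contour a: most of b's points lie
// within `tol` pixels of some point of a.
static bool duplicates(const std::vector<cv::Point>& a,
                       const std::vector<cv::Point>& b, double tol = 1.5) {
    if (b.empty()) return true;
    int hits = 0;
    for (const auto& pb : b) {
        for (const auto& pa : a) {
            if (std::hypot(double(pa.x - pb.x), double(pa.y - pb.y)) <= tol) {
                ++hits;
                break;
            }
        }
    }
    return hits >= 0.8 * static_cast<double>(b.size());
}

// Keep only contours that do not duplicate an earlier one; for the duck's
// foot above, one of the two overlapping contours would be dropped.
std::vector<std::vector<cv::Point>>
removeDuplicateContours(const std::vector<std::vector<cv::Point>>& in) {
    std::vector<std::vector<cv::Point>> kept;
    for (const auto& c : in) {
        bool dup = false;
        for (const auto& k : kept)
            if (duplicates(k, c)) { dup = true; break; }
        if (!dup) kept.push_back(c);
    }
    return kept;
}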
Example: as shown in Fig. 9, taking the Chinese character 3 to be drawn on the drawing board 4, "言", as an example, stroke data extraction yields 5 strokes, numbered 1 to 5; each stroke in turn contains a number of stroke points, and each stroke point corresponds to a planar coordinate value. The data are shown in Table 1 below:
Table 1
Stroke no.  Part of "言"  Number of stroke points  Stroke point coordinates (x, y)
1  dot "丶"  173  (264,122)(265,122)…(319,150)
2  horizontal "一"  754  (420,209)(419,210)…(119,232)
3  horizontal "一"  308  (372,272)(371,273)…(226,282)
4  horizontal "一"  324  (376,334)(375,335)…(220,345)
5  mouth "口"  575  (348,397)(347,398)…(210,401)
Taking stroke 1 as an example: it represents the dot "丶" at the top of "言". This dot has 173 stroke points in total, each corresponding to a pair of planar coordinates (i.e. this stroke contains 173 coordinate values). Each stroke point is in fact a pixel, so the number of stroke points extracted for a stroke is determined by the image's pixel count: the same stroke yields more stroke points in an image with more pixels.
As shown in Fig. 9, taking the simple drawing 2 to be drawn on the drawing board 4, the little duck, as an example, stroke data extraction yields 9 strokes, numbered 1 to 9; each stroke contains a number of stroke points, each corresponding to a planar coordinate value. The data are shown in Table 2 below:
Table 2
Stroke no.  "Little duck" part  Number of stroke points  Stroke point coordinates (x, y)
1  foot (front)  393  (312,534)(313,533)…(195,554)
2  foot (rear)  455  (422,520)(423,519)…(297,599)
3  torso  1157  (345,250)(346,249)…(641,343)
4  wing  614  (349,276)(348,277)…(506,353)
5  neck  298  (212,251)(213,250)…(131,319)
6  head  625  (182,30)(183,29)…(285,75)
7  beak  362  (358,70)(359,69)…(267,166)
8  eye  38  (248,86)(249,86)…(251,101)
9  nostril (dot on the beak)  12  (305,91)(305,92)…(304,96)
Taking stroke 1 as an example: it represents the front-foot part of the "little duck" simple drawing and contains 393 stroke points, correspondingly 393 pairs of planar coordinates, which together make up the duck's front foot.
In another embodiment of the present invention, as shown in Fig. 4, a machine-vision-based drawing method comprises:
Step S100: capture an original still image of the content to be drawn.
Step S211: calibrate the original still image according to the preset positioning blocks to obtain a calibrated still image.
Step S212: perform grayscale processing on the calibrated still image to obtain a corresponding grayscale image.
Step S220: perform threshold processing on the grayscale image to obtain a binarized image.
Step S231: traverse each pixel in the binarized image and take, as the skeleton image, single pixels or the line segments, curve segments, or closed-figure outlines formed by chains of single pixels, a single pixel being an intermediate pixel of an image that is not single-pixel connected.
Step S240: obtain the corresponding contour information according to the skeleton image.
Step S251: traverse the contour information and remove contour lines duplicated between contours in the contour information.
Step S2521: save each remaining contour in stroke form and crop each stroke according to a preset precision to obtain the stroke data of the original still image.
Specifically, since current cameras generally have a high resolution, each stroke contains many stroke points, and robot drawing can be completed without so many of them, so the strokes can be cropped. The implementation is: traverse the strokes saved in the previous step and crop the data of each stroke according to the preset precision; for example, with the precision set to 0.5 mm, only the two endpoints are retained within any 0.5 mm span. The stroke points within each stroke are cropped at this precision, and the trimmed data is taken as the final stroke data. The higher the precision setting (i.e. the smaller the precision value), the more stroke points each stroke retains and the more faithfully the user's drawing is reproduced. Cropping the strokes at a reasonable precision removes redundant data while preserving imitation accuracy, reducing the data volume and the computational load of this solution and increasing drawing speed.
Step S310: generate corresponding robot motion trajectory data according to the stroke data of the original still image.
Specifically, the stroke data of the original still image is organized to obtain the motion trajectory data. Each stroke is saved according to a new data structure, and these data are saved as a linked list, which yields the motion trajectory data.
The new data structure is:
struct robot_motion
{
   int motion_number;
   int point_number;
   double point[n][2];
};
Here motion_number corresponds to the trajectory number (derived from the stroke number); point_number corresponds to the trajectory points contained under that trajectory number (derived from the stroke points); point is a two-dimensional array holding the coordinate values of each trajectory point; and the value of n represents the number of trajectory points. A compilable variant and the list assembly are sketched below.
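For a compilable variant, std::vector can replace the variable-length array, and the linked list the text mentions maps naturally onto std::list; the names beyond those in the description are illustrative.

#include <array>
#include <list>
#include <vector>

struct RobotMotion {
    int motion_number;                          // trajectory number
    int point_number;                           // number of trajectory points
    std::vector<std::array<double, 2>> point;   // coordinates of each point
};

// Organize stroke data (one vector of (x, y) points per stroke) into the
// linked list of trajectories described in the text.
std::list<RobotMotion> toTrajectories(
        const std::vector<std::vector<std::array<double, 2>>>& strokes) {
    std::list<RobotMotion> motions;
    int id = 1;                                 // trajectory numbers start at 1
    for (const auto& s : strokes)
        motions.push_back({id++, static_cast<int>(s.size()), s});
    return motions;
}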
Step S320: perform a kinematic solution for each trajectory point corresponding to each trajectory number in the robot motion trajectory data, obtaining the robot state variable corresponding to each trajectory point.
Specifically, each trajectory point of each trajectory number is traversed and a kinematic solution is computed for it, giving the robot state variable corresponding to that trajectory point. The kinematic solution may use, without being limited to, geometric, algebraic, analytical, or intelligent composite algorithms; different robots may use a suitable algorithm. The robot state variable corresponding to each trajectory point is saved.
Taking a robot with five degrees of freedom (five joints, i.e. five rotation axes) as an example, the kinematic solution of each trajectory point yields a set of robot state variables, namely a set of data containing five angle values; each angle value corresponds to one joint of the robot, and after the robot's five joints move according to these five angle values it reaches the corresponding trajectory point.
Step S330: obtain the robot state variable list corresponding to the stroke data according to the robot state variables corresponding to all trajectory points.
Specifically, the robot state variable corresponding to each trajectory point is saved and finally organized into the robot state variable list, in which every entry is a data structure as follows:
struct robot_state
{
   int state_number;
   int point_number;
   double state[j][k];
};
Here state_number denotes the robot's state variable number and corresponds to motion_number; in general there are as many state variable numbers as there are strokes. point_number denotes the number of state points contained under that state variable number and is generally equal to point_number in the motion trajectory list. state denotes the robot's state variable values and is an array of j rows and k columns: the k columns mean the robot has k joints, and j is the number of robot state variables, corresponding to (i.e. equal to) the number n of trajectory points in the motion trajectory list.
Step S410: call the pre-configured robot motion planning library and perform motion path planning for each robot state variable in the robot state variable list, obtaining the motion message corresponding to each robot state variable.
Step S420: generate a robot motion message sequence according to the motion message corresponding to each robot state variable.
Step S500: perform the drawing action according to the motion message sequence.
Specifically, robot motion planning means generating, according to the robot's different state variables, a motion path from one state variable to another. This motion path must satisfy certain external constraints, such as obstacle avoidance, shortest path, and minimum energy consumption.
The robot motion planning method proposed in this embodiment is implemented on ROS (Robot Operating System); the specific steps are as follows:
Step 1: ROS system modeling of the robot. ROS system modeling can be carried out purely by programming, or the robot's existing 3D model can be imported and programmed on top of. The modeling languages used in ROS are script-type languages; currently URDF (Unified Robot Description Format) and XACRO (XML Macros, an XML macro language) are supported. Further, the ROS modeling process specifies characteristics such as the robot's geometric dimensions, joint types, motion ranges, and obstacle-avoidance conditions, and finally generates the robot's model description file.
Step 2: ROS system Moveit module configuration. Moveit is a module of the ROS system that integrates several open-source motion planning libraries; the framework of this module enables robot motion planning, simulation, and so on. Further, the configuration process uses the Setup Assistant tool to load the robot model description file generated in the previous step, then configures the motion planning group, collision detection, initial state, motion planning library, and other information, and finally generates a ROS configuration file package.
Step 3: read the state variable list. A Moveit interface program is written using the ROS system libraries, and each state variable value in the robot state variable list is read one by one.
Step 4: motion planning library invocation. For each state variable value, the pre-configured robot motion planning library is called to plan the motion of that state variable. Each time a state variable value under one of the robot's state variable numbers is read, the motion planning library is called once, generating a set of motion messages.
Step 5: robot motion message sequence generation. The motion messages generated in the previous step are saved one by one to generate the robot motion message sequence. The robot's motion message sequence is the motion amount of each of the robot's joints and can be sent directly to the robot driver module for execution.
Step 6: motion message sequence sending. The robot motion message sequence generated in the previous step is packaged and sent to the robot driver module, which executes the motion messages upon receipt. Communication between the two may be wired or wireless.
In another embodiment of the present invention, as shown in Fig. 5, a machine-vision-based drawing system comprises:
An image capture module 100 for capturing an original still image of the content to be drawn.
Specifically, the content to be drawn is what the user has drawn on the drawing board, including Chinese characters and/or simple drawings. The image capture module captures the content through a camera mounted on the robot, obtaining the original still image of the content to be drawn.
A stroke extraction module 200, electrically connected to the image capture module 100, for processing the original still image and extracting the stroke data of the original still image.
Specifically, the original still image is processed, for example by first converting it to grayscale and binarizing it, then extracting the skeleton of the binarized image, detecting all contours in the skeleton, and saving them in stroke form, thereby obtaining the stroke data of the original still image. The stroke data may contain only a single stroke, for example when the user draws a circle, which takes one stroke; it may also contain several strokes, for example the Chinese character "言", which has 5 strokes.
A state variable generation module 300, electrically connected to the stroke extraction module 200, for obtaining the corresponding robot state variable list according to the stroke data of the original still image.
Specifically, robot motion trajectory data is generated from the stroke data, a kinematic solution is performed, and the robot state variable list is finally generated.
A path planning module 400, electrically connected to the state variable generation module 300, for performing motion path planning according to the robot state variable list and generating a motion message sequence.
Specifically, motion path planning is performed on the generated robot state variable list to produce a motion message sequence that the robot can execute.
A drawing module 500, electrically connected to the path planning module 400, for performing the drawing action according to the motion message sequence.
Specifically, the drawing action is executed according to the motion message sequence, so that the drawn content completely imitates the user's handwriting and style.
Unlike the prior art, the machine-vision-based drawing system of the present invention, after capturing the original still image of the content to be drawn, uses image processing to extract the stroke data and thereby obtain the state variable list, then performs motion planning to generate a motion message sequence, which is executed to complete the drawing action. Since the strokes are extracted from the content being drawn, no standard font library needs to be preset and simple stroke drawings can also be drawn, achieving personalized handwriting imitation, higher drawing speed, and stronger interactivity and real-time performance.
In another embodiment of the present invention, as shown in Fig. 6, on the basis of the previous embodiment, the stroke extraction module 200 of the embodiment corresponding to Fig. 5 is refined:
The stroke extraction module 200 comprises:
A grayscaling unit 210 for processing the original still image to obtain a corresponding grayscale image.
Specifically, the original still image is converted from a color image into a grayscale image, for example using functions of the image processing library OpenCV.
A binarization unit 220 for performing threshold processing on the grayscale image to obtain a binarized image.
Specifically, the grayscale image is binarized. For example, the threshold is set to 100 and each pixel of the grayscale image is traversed: when the pixel value is below the threshold it is set to the background value, e.g. 0; when it is above the threshold it is set to the foreground value, e.g. 255. Binarization removes noise from the image and unifies the pixel values of the content drawn by the user, improving the accuracy of skeleton extraction and edge detection.
A skeleton extraction unit 230 for extracting the skeleton image of the binarized image.
Specifically, skeleton extraction uses an algorithm to delete the useless edge pixels of the content drawn by the user, keeping only the skeleton of the drawn content. The skeleton image is a set of single-pixel connected images.
An edge detection unit 240 for obtaining the corresponding contour information according to the skeleton image.
Specifically, edge detection is implemented with the image processing library OpenCV, and all contours are detected in the extracted skeleton image.
A stroke extraction unit 250 for extracting the stroke data of the original still image according to the contour information.
Specifically, all contours and the contour points of each contour are saved in stroke form (comprising a stroke number and the corresponding stroke points), the stroke number corresponding to a contour and the stroke points to its contour points, thereby obtaining the stroke data of the binarized image. A stroke here is a single-pixel-wide continuous line; it may be a straight segment, a curved segment, the boundary line of a closed region, an isolated point, and so on. Combining all strokes according to their positions forms the overall outline of the content drawn by the user.
In this embodiment, redundant pixels are removed and the contour information of the original still image is extracted to obtain the stroke data, which both avoids image blurring and reduces the number of strokes, increasing drawing speed.
In another embodiment of the present invention, as shown in Fig. 7, on the basis of the previous embodiment, the stroke extraction module 200 of the embodiment corresponding to Fig. 6 is further refined:
The stroke extraction module 200 further comprises:
A calibration unit 260 for calibrating the original still image according to the preset positioning blocks to obtain a calibrated still image.
Specifically, this embodiment allows the camera to be tilted at an angle to the drawing board; it need not be exactly perpendicular to the board plane. The captured original still image is calibrated so that the accuracy of the subsequently extracted content is unaffected. Positioning blocks are preset on the drawing board, for example two rectangular blocks forming a positioning corner; as shown in Fig. 9, four positioning corners 1 are arranged on the drawing board 4, and their exact dimensions and spacing are known in advance. From the dimensions and spacing of the four positioning corners 1 in the photograph, compared with their dimensions and spacing before shooting, the deformation of the four positioning corners 1 can be determined, and the captured image is calibrated according to this deformation, giving a more accurate image.
If the camera faces the drawing board straight on, shooting at 90 degrees to the board, the original still image contains no shooting-induced deformation and does not need to be calibrated.
The grayscaling unit 210 is configured to perform grayscale processing on the calibrated still image to obtain a corresponding grayscale image.
Specifically, the calibrated still image is converted from a color image into a grayscale image.
The skeleton extraction unit 230 extracts the skeleton image of the binarized image specifically as follows: the skeleton extraction unit traverses each pixel in the binarized image and takes, as the skeleton image, single pixels or the line segments, curve segments, or closed-figure outlines formed by chains of single pixels, a single pixel being an intermediate pixel of an image that is not single-pixel connected.
Specifically, skeleton extraction is performed on the binarized image. Skeleton extraction uses an algorithm to delete the useless edge pixels of the content drawn by the user, keeping only the skeleton of the drawn content. One implementation is as follows: initialize the iteration count and take the binarized image as the image to be processed; traverse all foreground points of the image to be processed and mark those meeting a preset first removal condition; when the traversal is finished, remove the marked foreground points to obtain the first processed image; traverse all foreground points of the first processed image and mark those meeting a preset second removal condition; when the traversal is finished, remove the marked foreground points to obtain the second processed image, which completes one iteration; update the iteration count; while the iteration count is below the preset maximum, take the second processed image as the image to be processed and repeat the removal above, starting a new iteration. Over multiple iterations, useless edge pixels are deleted step by step from the outside in. When the iteration count equals the preset maximum, the foreground points remaining in the processed image constitute the skeleton image of the binarized image. These remaining foreground points may be isolated single pixels, or the line segments, curve segments, closed figures, and other outlines formed by chains of single pixels; such single pixels are the intermediate pixels of images that were originally not single-pixel connected.
Example: first the original still image is converted into a grayscale image, and the grayscale image is then binarized. For example, with the threshold set to 100, each pixel of the grayscale image is traversed: when the pixel value is below the threshold it is set to the background value, e.g. 0; when it is above the threshold it is set to the foreground value, e.g. 1. This yields the binarized image, in which a pixel value of 0 indicates a background point and a value of 1 a foreground point.
Each pixel is surrounded by eight reference pixels, as shown in Fig. 10, located above it (P2), below it (P6), to its left (P8), to its right (P4), upper-left (P9), lower-left (P7), upper-right (P3), and lower-right (P5). If the pixel lies on the image boundary, not all eight reference pixels are present in the image, and the values of absent reference pixels are treated as 0.
Among the foreground points: if all eight reference pixels are background points, the pixel is an isolated point; if exactly one of the eight reference pixels is a foreground point, the pixel is an endpoint; if only one or two of the eight reference pixels are background points, the pixel is an inner point.
First condition: the pixel is not an isolated point, not an endpoint, and not an inner point.
Second condition: among the pixel's eight reference pixels there is exactly one reference pixel group, a reference pixel group being two adjacent reference pixels whose values, counted clockwise, are the background value followed by the foreground value. As shown in Fig. 11, around the central pixel there are, counted clockwise, two reference pixel groups with values 0, 1, so that central pixel does not satisfy the second condition.
Third condition: as shown in Fig. 10, if at least one of P2, P4, P6 is a background point and at least one of P4, P6, P8 is a background point, then pixel P1 satisfies the third condition.
Fourth condition: as shown in Fig. 10, if at least one of P2, P4, P8 is a background point and at least one of P2, P6, P8 is a background point, then pixel P1 satisfies the fourth condition.
A foreground point that simultaneously satisfies the first, second, and third conditions is a pixel meeting the preset first removal condition; a foreground point that simultaneously satisfies the first, second, and fourth conditions is a pixel meeting the preset second removal condition.
The binarized image is taken as the image to be processed and all of its foreground points are traversed; pixels meeting the preset first removal condition are marked and, when the traversal is finished, removed, giving the first processed image. All foreground points of the first processed image are then traversed; pixels meeting the preset second removal condition are marked and, when the traversal is finished, removed, giving the second processed image. Removal means changing the pixel value from 1 to 0, i.e. turning a foreground point into a background point. This counts as one iteration, and the iteration count is incremented by 1. While the updated iteration count is below the preset maximum, the second processed image is taken as the image to be processed, all of its foreground points are traversed again, and all pixels meeting the first removal condition and then all pixels meeting the second removal condition are removed. The loop continues until the updated iteration count equals the preset maximum, at which point the foreground points remaining in the processed image constitute the skeleton image of the binarized image.
The preset maximum iteration count is set empirically, for example N = 10. Deleting useless pixels over multiple iterations while keeping the skeleton pixels accomplishes the skeleton extraction of the image.
The stroke extraction unit 250 extracts the stroke data of the original still image according to the contour information specifically as follows: it traverses the contour information and removes contour lines duplicated between contours in the contour information, and it saves each remaining contour in the form of a stroke to obtain the stroke data of the original still image.
Specifically, all contours are detected in the extracted skeleton image. All contours are traversed, and duplicated contour lines are removed according to the positional overlap of their contour points. Taking the simple drawing 2 shown in Fig. 9, the little duck, as an example: the duck's foot has 2 contours, and these 2 contours overlap, so there are 2 duplicated contour lines and deleting 1 of them suffices.
The remaining contours are saved in stroke form (comprising a stroke number and the corresponding stroke points), the stroke number corresponding to a contour and the stroke points to its contour points, which completes the extraction of the stroke data.
Example: as shown in Fig. 9, taking the Chinese character 3 to be drawn on the drawing board 4, "言", as an example, stroke data extraction yields 5 strokes, numbered 1 to 5; each stroke in turn contains a number of stroke points, and each stroke point corresponds to a planar coordinate value. The data are shown in Table 1 below:
Table 1
Stroke no.  Part of "言"  Number of stroke points  Stroke point coordinates (x, y)
1  dot "丶"  173  (264,122)(265,122)…(319,150)
2  horizontal "一"  754  (420,209)(419,210)…(119,232)
3  horizontal "一"  308  (372,272)(371,273)…(226,282)
4  horizontal "一"  324  (376,334)(375,335)…(220,345)
5  mouth "口"  575  (348,397)(347,398)…(210,401)
Taking stroke 1 as an example: it represents the dot "丶" at the top of "言". This dot has 173 stroke points in total, each corresponding to a pair of planar coordinates (i.e. this stroke contains 173 coordinate values). Each stroke point is in fact a pixel, so the number of stroke points extracted for a stroke is determined by the image's pixel count: the same stroke yields more stroke points in an image with more pixels.
As shown in Fig. 9, taking the simple drawing 2 to be drawn on the drawing board 4, the little duck, as an example, stroke data extraction yields 9 strokes, numbered 1 to 9; each stroke contains a number of stroke points, each corresponding to a planar coordinate value. The data are shown in Table 2 below:
Table 2
Stroke no.  "Little duck" part  Number of stroke points  Stroke point coordinates (x, y)
1  foot (front)  393  (312,534)(313,533)…(195,554)
2  foot (rear)  455  (422,520)(423,519)…(297,599)
3  torso  1157  (345,250)(346,249)…(641,343)
4  wing  614  (349,276)(348,277)…(506,353)
5  neck  298  (212,251)(213,250)…(131,319)
6  head  625  (182,30)(183,29)…(285,75)
7  beak  362  (358,70)(359,69)…(267,166)
8  eye  38  (248,86)(249,86)…(251,101)
9  nostril (dot on the beak)  12  (305,91)(305,92)…(304,96)
Taking stroke 1 as an example: it represents the front-foot part of the "little duck" simple drawing and contains 393 stroke points, correspondingly 393 pairs of planar coordinates, which together make up the duck's front foot.
In another embodiment of the present invention, as shown in Fig. 8, a machine-vision-based drawing system comprises:
An image capture module 100 for capturing an original still image of the content to be drawn.
A stroke extraction module 200 for processing the original still image and extracting the stroke data of the original still image, comprising:
a calibration unit 260 for calibrating the original still image according to the preset positioning blocks to obtain a calibrated still image;
a grayscaling unit 210 for performing grayscale processing on the calibrated still image to obtain a corresponding grayscale image;
a binarization unit 220 for performing threshold processing on the grayscale image to obtain a binarized image;
a skeleton extraction unit 230 for extracting the skeleton image of the binarized image, specifically: the skeleton extraction unit traverses each pixel in the binarized image and takes, as the skeleton image, single pixels or the line segments, curve segments, or closed-figure outlines formed by chains of single pixels, a single pixel being an intermediate pixel of an image that is not single-pixel connected;
an edge detection unit 240 for obtaining the corresponding contour information according to the skeleton image;
a stroke extraction unit 250 for extracting the stroke data of the original still image according to the contour information, specifically: it traverses the contour information and removes contour lines duplicated between contours in the contour information, saves each remaining contour in stroke form, and crops each stroke according to a preset precision, obtaining the stroke data of the original still image.
Specifically, since current cameras generally have a high resolution, each stroke contains many stroke points, and robot drawing can be completed without so many of them, so the strokes can be cropped. The implementation is: traverse the strokes saved in the previous step and crop the data of each stroke according to the preset precision; for example, with the precision set to 0.5 mm, only the two endpoints are retained within any 0.5 mm span. The stroke points within each stroke are cropped at this precision, and the trimmed data is taken as the final stroke data. The higher the precision setting (i.e. the smaller the precision value), the more stroke points each stroke retains and the more faithfully the user's drawing is reproduced. Cropping the strokes at a reasonable precision removes redundant data while preserving imitation accuracy, reducing the data volume and the computational load of this solution and increasing drawing speed.
A state variable generation module 300 for obtaining the corresponding robot state variable list according to the stroke data of the original still image, comprising:
A motion trajectory data generation unit 310 for generating corresponding robot motion trajectory data according to the stroke data of the original still image.
Specifically, the stroke data of the original still image is organized to obtain the motion trajectory data. Each stroke is saved according to a new data structure, and these data are saved as a linked list, which yields the motion trajectory data.
The new data structure is:
struct robot_motion
{
   int motion_number;
   int point_number;
   double point[n][2];
};
Here motion_number corresponds to the trajectory number (derived from the stroke number); point_number corresponds to the trajectory points contained under that trajectory number (derived from the stroke points); point is a two-dimensional array holding the coordinate values of each trajectory point; and the value of n represents the number of trajectory points.
A kinematic solution unit 320 for performing a kinematic solution for each trajectory point corresponding to each trajectory number in the robot motion trajectory data, obtaining the robot state variable corresponding to each trajectory point.
Specifically, each trajectory point of each trajectory number is traversed and a kinematic solution is computed for it, giving the robot state variable corresponding to that trajectory point. The kinematic solution may use, without being limited to, geometric, algebraic, analytical, or intelligent composite algorithms; different robots may use a suitable algorithm. The robot state variable corresponding to each trajectory point is saved.
Taking a robot with five degrees of freedom (five joints, i.e. five rotation axes) as an example, the kinematic solution of each trajectory point yields a set of robot state variables, namely a set of data containing five angle values; each angle value corresponds to one joint of the robot, and after the robot's five joints move according to these five angle values it reaches the corresponding trajectory point.
A state variable list generation unit 330 for obtaining the robot state variable list corresponding to the stroke data according to the robot state variables corresponding to all trajectory points;
A path planning module 400 for performing motion path planning according to the robot state variable list and generating a motion message sequence, comprising:
a path planning unit 410 for calling the pre-configured robot motion planning library and performing motion path planning for each robot state variable in the robot state variable list, obtaining the motion message corresponding to each robot state variable;
a message sequence generation unit 420 for generating the robot motion message sequence according to the motion message corresponding to each robot state variable;
A drawing module 500 for performing the drawing action according to the motion message sequence.
Specifically, robot motion planning means generating, according to the robot's different state variables, a motion path from one state variable to another. This motion path must satisfy certain external constraints, such as obstacle avoidance, shortest path, and minimum energy consumption.
The robot motion planning method proposed in this embodiment is implemented on ROS (Robot Operating System); the specific steps are as follows:
Step 1: ROS system modeling of the robot. ROS system modeling can be carried out purely by programming, or the robot's existing 3D model can be imported and programmed on top of. The modeling languages used in ROS are script-type languages; currently URDF (Unified Robot Description Format) and XACRO (XML Macros, an XML macro language) are supported. Further, the ROS modeling process specifies characteristics such as the robot's geometric dimensions, joint types, motion ranges, and obstacle-avoidance conditions, and finally generates the robot's model description file.
Step 2: ROS system Moveit module configuration. Moveit is a module of the ROS system that integrates several open-source motion planning libraries; the framework of this module enables robot motion planning, simulation, and so on. Further, the configuration process uses the Setup Assistant tool to load the robot model description file generated in the previous step, then configures the motion planning group, collision detection, initial state, motion planning library, and other information, and finally generates a ROS configuration file package.
Step 3: read the state variable list. A Moveit interface program is written using the ROS system libraries, and each state variable value in the robot state variable list is read one by one.
Step 4: motion planning library invocation. For each state variable value, the pre-configured robot motion planning library is called to plan the motion of that state variable. Each time a state variable value under one of the robot's state variable numbers is read, the motion planning library is called once, generating a set of motion messages.
Step 5: robot motion message sequence generation. The motion messages generated in the previous step are saved one by one to generate the robot motion message sequence. The robot's motion message sequence is the motion amount of each of the robot's joints and can be sent directly to the robot driver module for execution.
Step 6: motion message sequence sending. The robot motion message sequence generated in the previous step is packaged and sent to the robot driver module, which executes the motion messages upon receipt. Communication between the two may be wired or wireless.
It should be noted that the above embodiments can be freely combined as needed. The above are only preferred embodiments of the present invention; it should be pointed out that a person of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (16)

  1. A machine-vision-based drawing method, characterized by comprising:
    step S100: capturing an original still image of content to be drawn;
    step S200: processing the original still image to extract stroke data of the original still image;
    step S300: obtaining a corresponding robot state variable list according to the stroke data of the original still image;
    step S400: performing motion path planning according to the robot state variable list to generate a motion message sequence;
    step S500: performing a drawing action according to the motion message sequence.
  2. The machine-vision-based drawing method according to claim 1, characterized in that step S200 specifically comprises:
    step S210: processing the original still image to obtain a corresponding grayscale image;
    step S220: performing threshold processing on the grayscale image to obtain a binarized image;
    step S230: extracting a skeleton image of the binarized image;
    step S240: obtaining corresponding contour information according to the skeleton image;
    step S250: extracting the stroke data of the original still image according to the contour information.
  3. The machine-vision-based drawing method according to claim 2, characterized in that step S210 comprises:
    step S211: calibrating the original still image according to preset positioning blocks to obtain a calibrated still image;
    step S212: performing grayscale processing on the calibrated still image to obtain a corresponding grayscale image.
  4. The machine-vision-based drawing method according to claim 2, characterized in that step S230 comprises:
    step S231: traversing each pixel in the binarized image and taking, as the skeleton image, single pixels or the line segments, curve segments, or closed-figure outlines formed by chains of single pixels, a single pixel being an intermediate pixel of an image that is not single-pixel connected.
  5. The machine-vision-based drawing method according to claim 2, characterized in that step S250 comprises:
    step S251: traversing the contour information and removing contour lines duplicated between contours in the contour information;
    step S252: saving each remaining contour in the form of a stroke to obtain the stroke data of the original still image.
  6. The machine-vision-based drawing method according to claim 5, characterized in that step S252 further comprises:
    step S2521: saving each remaining contour in the form of a stroke and cropping each stroke according to a preset precision to obtain the stroke data of the original still image.
  7. The machine-vision-based drawing method according to claim 1, characterized in that step S300 comprises:
    step S310: generating corresponding robot motion trajectory data according to the stroke data of the original still image;
    step S320: performing a kinematic solution for each trajectory point corresponding to each trajectory number in the robot motion trajectory data to obtain the robot state variable corresponding to each trajectory point;
    step S330: obtaining the robot state variable list corresponding to the stroke data according to the robot state variables corresponding to all trajectory points.
  8. The machine-vision-based drawing method according to claim 1, characterized in that step S400 comprises:
    step S410: calling a pre-configured robot motion planning library and performing motion path planning for each robot state variable in the robot state variable list to obtain the motion message corresponding to each robot state variable;
    step S420: generating a robot motion message sequence according to the motion message corresponding to each robot state variable.
  9. A machine-vision-based drawing system, characterized by comprising:
    an image capture module for capturing an original still image of content to be drawn;
    a stroke extraction module, electrically connected to the image capture module, for processing the original still image and extracting stroke data of the original still image;
    a state variable generation module, electrically connected to the stroke extraction module, for obtaining a corresponding robot state variable list according to the stroke data of the original still image;
    a path planning module, electrically connected to the state variable generation module, for performing motion path planning according to the robot state variable list and generating a motion message sequence;
    a drawing module, electrically connected to the path planning module, for performing a drawing action according to the motion message sequence.
  10. The machine-vision-based drawing system according to claim 9, characterized in that the stroke extraction module comprises:
    a grayscaling unit for processing the original still image to obtain a corresponding grayscale image;
    a binarization unit for performing threshold processing on the grayscale image to obtain a binarized image;
    a skeleton extraction unit for extracting a skeleton image of the binarized image;
    an edge detection unit for obtaining corresponding contour information according to the skeleton image;
    a stroke extraction unit for extracting the stroke data of the original still image according to the contour information.
  11. The machine-vision-based drawing system according to claim 10, characterized in that the stroke extraction module further comprises:
    a calibration unit for calibrating the original still image according to preset positioning blocks to obtain a calibrated still image;
    the grayscaling unit being further configured to perform grayscale processing on the calibrated still image to obtain a corresponding grayscale image.
  12. The machine-vision-based drawing system according to claim 10, characterized in that:
    the skeleton extraction unit extracts the skeleton image of the binarized image specifically by traversing each pixel in the binarized image and taking, as the skeleton image, single pixels or the line segments, curve segments, or closed-figure outlines formed by chains of single pixels, a single pixel being an intermediate pixel of an image that is not single-pixel connected.
  13. The machine-vision-based drawing system according to claim 10, characterized in that:
    the stroke extraction unit extracts the stroke data of the original still image according to the contour information specifically by traversing the contour information, removing contour lines duplicated between contours in the contour information, and saving each remaining contour in the form of a stroke to obtain the stroke data of the original still image.
  14. The machine-vision-based drawing system according to claim 13, characterized in that:
    the stroke extraction unit is further configured to save each remaining contour in the form of a stroke and crop each stroke according to a preset precision to obtain the stroke data of the original still image.
  15. The machine-vision-based drawing system according to claim 9, characterized in that the state variable generation module comprises:
    a motion trajectory data generation unit for generating corresponding robot motion trajectory data according to the stroke data of the original still image;
    a kinematic solution unit for performing a kinematic solution for each trajectory point corresponding to each trajectory number in the robot motion trajectory data to obtain the robot state variable corresponding to each trajectory point;
    a state variable list generation unit for obtaining the robot state variable list corresponding to the stroke data according to the robot state variables corresponding to all trajectory points.
  16. The machine-vision-based drawing system according to claim 9, characterized in that the path planning module comprises:
    a path planning unit for calling a pre-configured robot motion planning library and performing motion path planning for each robot state variable in the robot state variable list to obtain the motion message corresponding to each robot state variable;
    a message sequence generation unit for generating a robot motion message sequence according to the motion message corresponding to each robot state variable.