CN111251309A - Method and device for controlling robot to draw image, robot and medium - Google Patents

Method and device for controlling robot to draw image, robot and medium

Info

Publication number
CN111251309A
Authority
CN
China
Prior art keywords
contour
track point
image
point sequence
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010016613.XA
Other languages
Chinese (zh)
Other versions
CN111251309B (en)
Inventor
俞泽远
高飞
李鹏
朱静洁
王韬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced Institute of Information Technology AIIT of Peking University
Hangzhou Weiming Information Technology Co Ltd
Original Assignee
Advanced Institute of Information Technology AIIT of Peking University
Hangzhou Weiming Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced Institute of Information Technology AIIT of Peking University, Hangzhou Weiming Information Technology Co Ltd filed Critical Advanced Institute of Information Technology AIIT of Peking University
Priority to CN202010016613.XA priority Critical patent/CN111251309B/en
Publication of CN111251309A publication Critical patent/CN111251309A/en
Application granted granted Critical
Publication of CN111251309B publication Critical patent/CN111251309B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The application provides a method and a device for controlling a robot to draw an image, the robot and a computer readable medium. The method comprises the following steps: acquiring a target image to be drawn; decomposing the target image into a plurality of layers of contours from outside to inside step by step, and generating a contour track point sequence corresponding to each layer of contour; and performing data cleaning on the contour track point sequence according to a preset drawing precision range, and inputting the cleaned contour track point sequence into a mechanical arm for drawing. This scheme generates a drawing trajectory according to the input image and the effective drawing precision of the actual mechanical arm, can draw the input image completely, and optimizes the mechanical arm's drawing process, thereby meeting users' diverse requirements.

Description

Method and device for controlling robot to draw image, robot and medium
Technical Field
The application relates to the technical field of robot control, in particular to a method and a device for controlling a robot to draw an image, the robot and a computer readable medium.
Background
With the development of science and technology, robots have become more and more capable and their applications more and more extensive. Today many people are still keen on painting, and a painting robot that can render pictures as drawn images is popular with a growing audience.
A drawing robot is dedicated to solving the problem of converting a target image into a trajectory sequence that a mechanical arm can draw. At present, most painting robots generate images in a printing-like manner, and this single mode of interaction with users cannot meet users' diverse requirements.
Disclosure of Invention
The application aims to provide a method and a device for controlling a robot to draw an image, the robot and a computer readable medium.
A first aspect of the application provides a method of controlling a robot to render an image, comprising:
acquiring a target image to be drawn;
decomposing the target image into a plurality of layers of contours from outside to inside step by step, and generating a contour track point sequence corresponding to each layer of contour;
and carrying out data cleaning on the contour track point sequence according to a preset drawing precision range, and inputting the contour track point sequence subjected to data cleaning into a mechanical arm for drawing.
In some embodiments of the present application, the decomposing the target image into multiple layers of contours from outside to inside step by step and generating a sequence of contour track points corresponding to each layer of contours includes:
step S1, retrieving the outermost contour of the target image and recording the retrieved initial contour track point sequence;
step S2, adjusting the initial contour track point sequence according to a preset stroke width, and generating the contour track point sequence corresponding to the outermost contour;
step S3, subtracting the outermost contour from the target image to obtain a remaining image;
step S4, performing the same processing of steps S1 to S3 on the remaining image until the search-ending condition is triggered, the search-ending condition being: the area of the remaining image is calculated, and if it is smaller than a preset area threshold, the search ends.
In some embodiments of the present application, the performing data cleaning on the contour track point sequence according to a preset drawing precision range includes:
starting from the initial track point of each layer of contour, calculating the distance between the current track point and the next track point;
if the distance is smaller than the preset drawing precision range, filtering out the next track point;
if the distance is larger than the preset drawing precision range, continuing the search from that next track point, until all track points on the current contour have been traversed, completing the preliminary filtering;
and after the preliminary filtering, if only one track point remains in the current contour, deleting the contour.
In some embodiments of the present application, before the step of decomposing the target image into the multi-layer contour from outside to inside, the method further includes:
and preprocessing the target image.
A second aspect of the present application provides an apparatus for controlling a robot to draw an image, comprising:
the acquisition module is used for acquiring a target image to be drawn;
the generating module is used for decomposing the target image into a plurality of layers of contours from outside to inside step by step and generating a contour track point sequence corresponding to each layer of contour;
and the data cleaning module is used for carrying out data cleaning on the contour track point sequence according to a preset drawing precision range and inputting the contour track point sequence subjected to data cleaning into the mechanical arm for drawing.
In some embodiments of the present application, the generating module is specifically configured to:
S1, retrieve the outermost contour of the target image, and record the retrieved initial contour track point sequence;
S2, adjust the initial contour track point sequence according to a preset stroke width, and generate the contour track point sequence corresponding to the outermost contour;
S3, subtract the outermost contour from the target image to obtain a remaining image;
S4, perform the same processing of S1 to S3 on the remaining image until the search-ending condition is triggered, the search-ending condition being: the area of the remaining image is calculated, and if it is smaller than a preset area threshold, the search ends.
In some embodiments of the present application, the data cleaning module is specifically configured to:
starting from the initial track point of each layer of contour, calculate the distance between the current track point and the next track point;
if the distance is smaller than the preset drawing precision range, filter out the next track point;
if the distance is larger than the preset drawing precision range, continue the search from that next track point, until all track points on the current contour have been traversed, which completes the preliminary filtering;
and after the preliminary filtering, if only one track point remains in the current contour, delete the contour.
In some embodiments of the present application, the apparatus further comprises:
and the preprocessing module is used for preprocessing the target image before the generating module decomposes the target image step by step into a plurality of layers of contours from outside to inside.
A third aspect of the present application provides a robot comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the computer program to perform the method of the first aspect of the application.
A fourth aspect of the present application provides a computer readable medium having computer readable instructions stored thereon which are executable by a processor to implement the method of the first aspect of the present application.
Compared with the prior art, the method, the device, the robot and the medium for controlling a robot to draw an image provided by the present application acquire a target image to be drawn; decompose the target image step by step into a plurality of layers of contours from outside to inside, and generate a contour track point sequence corresponding to each layer of contour; and perform data cleaning on the contour track point sequences according to a preset drawing precision range, and input the cleaned sequences into a mechanical arm for drawing. This scheme generates a drawing trajectory according to the input image and the effective drawing precision of the actual mechanical arm, can draw the input image completely, and optimizes the mechanical arm's drawing process, thereby meeting users' diverse requirements.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 illustrates a flow chart of a method of controlling a robot to render an image, provided by some embodiments of the present application;
fig. 2 shows a flowchart of step S102 provided by some embodiments of the present application;
FIG. 3 illustrates a schematic diagram of an apparatus for controlling a robot to render an image, provided by some embodiments of the present application;
FIG. 4 illustrates a schematic view of a robot provided by some embodiments of the present application;
FIG. 5 illustrates a schematic diagram of a computer-readable medium provided by some embodiments of the present application.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which this application belongs.
In addition, the terms "first" and "second", etc. are used to distinguish different objects, rather than to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the application provides a method and a device for controlling a robot to draw an image, the robot and a computer readable medium, which are described below with reference to the accompanying drawings.
Referring to fig. 1, which illustrates a flowchart of a method for controlling a robot to draw an image according to some embodiments of the present application; as shown in the figure, the method may include the following steps:
step S101: acquiring a target image to be drawn;
in this embodiment, the target image may be any image, for example, a portrait, a forest landscape, or the like, a color image, or a binary image such as black and white.
In order to better meet user requirements, in some embodiments of the present application, before step S102 the method may further include: preprocessing the target image.
For example, after a portrait is obtained, different machine-vision preprocessing is applied to it according to the user's actual needs, such as sharpening a blurred image, adaptively filtering a noisy image, or smoothing an image with rough lines.
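As a hedged sketch of such preprocessing (the patent does not prescribe specific operators), the OpenCV calls below are illustrative choices; the mode switch and all parameter values are assumptions made for this example:

```python
import cv2
import numpy as np

def preprocess(image_bgr, mode="sharpen"):
    """Illustrative machine-vision preprocessing for the target image."""
    if mode == "sharpen":      # sharpen a blurred image with a Laplacian-style kernel
        kernel = np.array([[0, -1, 0],
                           [-1, 5, -1],
                           [0, -1, 0]], dtype=np.float32)
        return cv2.filter2D(image_bgr, -1, kernel)
    if mode == "denoise":      # filter a noisy image (non-local means denoising)
        return cv2.fastNlMeansDenoisingColored(image_bgr, None, 10, 10, 7, 21)
    if mode == "smooth":       # smooth an image with rough lines, preserving edges
        return cv2.bilateralFilter(image_bgr, 9, 75, 75)
    return image_bgr
```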
Step S102: decomposing the target image into a plurality of layers of contours from outside to inside step by step, and generating a contour track point sequence corresponding to each layer of contour;
In this embodiment, as shown in fig. 2, step S102 may specifically be implemented as follows:
Step S201, retrieving the outermost contour of the target image, and recording the retrieved initial contour track point sequence;
For example, when searching all the contours of a portrait from outside to inside, the outermost contour of the portrait is searched first; a large number of contour track points may be retrieved, and the retrieved initial contour track point sequence α1 is recorded, each group of points representing one contour.
Step S202, adjusting the initial contour track point sequence according to a preset stroke width, and generating the contour track point sequence corresponding to the outermost contour;
Specifically, the preset stroke width is the stroke width of the pen actually being used. According to that stroke width, the total number of image pixels covered by one stroke is calculated as K; using K as the width, the contour track points are connected one by one, generating the corresponding contour track point sequence L1.
Step S203, subtracting the outermost contour from the target image to obtain the remaining image, and calculating the area of the remaining image;
Step S204, judging whether the area of the remaining image is smaller than a preset area threshold; if not, performing the same processing of steps S201 to S203 on the remaining image; if yes, ending the search.
For example, the contour track point sequence L1 obtained in step S202 is subtracted from the original portrait image, and the same processing of steps S201 to S203 is applied repeatedly to the remaining image, decomposing the original portrait into the contour track point sequences (α1, α2, …, αn) of the outermost contour α1, the second outermost contour α2, …, and the innermost contour αn, until the search-ending condition is triggered and the search is completed. The search-ending condition is: the area of the remaining image is calculated, and if it is smaller than the preset area threshold, the search ends.
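As a minimal illustrative sketch of this outside-in decomposition (not the patented implementation), the loop below assumes a binary image whose drawable pixels are nonzero; OpenCV's RETR_EXTERNAL retrieval stands in for the outermost-contour search, and stroke_width_k and area_threshold are assumed parameters:

```python
import cv2

def decompose_contours(binary, stroke_width_k=3, area_threshold=50):
    """Peel contours layer by layer, outermost first (cf. steps S201-S204).

    binary: uint8 image, drawable pixels nonzero.
    Returns a list of layers, each a list of (N, 2) track point arrays.
    """
    remaining = binary.copy()
    layers = []
    # Search-ending condition: stop once the remaining area drops below the threshold.
    while cv2.countNonZero(remaining) >= area_threshold:
        # Retrieve only the outermost contours of what is left (OpenCV >= 4).
        contours, _ = cv2.findContours(remaining, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        if not contours:
            break
        layers.append([c.reshape(-1, 2) for c in contours])
        # Subtract the outermost layer: erase a stroke of width K along each contour.
        cv2.drawContours(remaining, contours, -1, color=0,
                         thickness=stroke_width_k)
    return layers
```

Erasing with thickness K mirrors the role of the preset stroke width: each pass removes roughly what one pen stroke would cover, so the next pass finds the next inner contour.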
Step S103: performing data cleaning on the contour track point sequences according to a preset drawing precision range, and inputting the cleaned contour track point sequences into the mechanical arm for drawing.
Specifically, step S103 may be implemented as follows:
starting from the initial track point of each layer of contour, calculate the distance between the current track point and the next track point; if the distance is smaller than the preset drawing precision range, filter out the next track point; if the distance is larger than the preset drawing precision range, continue the search from that next track point, until all track points on the current contour have been traversed, which completes the preliminary filtering; after the preliminary filtering, if only one track point remains in the current contour, delete the contour.
Specifically, before the contour track point sequences (α1, α2, …, αn) are input to the mechanical arm, preprocessing such as data conversion is required: if the distance between adjacent contour track points is smaller than the arm's precision range, the arm cannot draw them correctly. The contour track point sequences are therefore subjected to data cleaning in advance. For example, starting from the starting point of each contour track, the distance between the current track point a and the next track point a+1 is calculated; if the distance is smaller than the arm's precision range (the preset drawing precision range), the next point is filtered out, and if the distance is larger than that range, the search resumes from point a+1, until all points on the contour have been traversed. After this preliminary filtering, if only one contour point remains in a contour, the contour is deleted, to prevent it from spoiling the aesthetics of the final painting result.
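A minimal sketch of the distance-based cleaning just described, where eps stands for the mechanical arm's effective drawing precision and is an assumed parameter:

```python
import numpy as np

def clean_track_points(contour, eps):
    """Preliminary filter: drop points closer than eps to the last kept point.

    contour: (N, 2) array of track points for one contour layer.
    Returns the filtered array, or None when at most one point survives,
    in which case the whole contour should be deleted.
    """
    if len(contour) == 0:
        return None
    kept = [contour[0]]
    for point in contour[1:]:
        # Keep a point only if it is at least eps away from the last kept point.
        if np.linalg.norm(point - kept[-1]) >= eps:
            kept.append(point)
    if len(kept) <= 1:  # a lone leftover point would spoil the final drawing
        return None
    return np.asarray(kept)
```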
Compared with the prior art, the method for controlling a robot to draw an image provided by this embodiment generates a drawing trajectory according to the input image and the actual effective drawing precision of the mechanical arm, can draw the input image completely, and optimizes the mechanical arm's drawing process, thereby meeting users' diverse requirements.
In the above embodiment, a method for controlling a robot to draw an image is provided, and correspondingly, the present application also provides an apparatus for controlling a robot to draw an image. The device for controlling the robot to draw the image can implement the method for controlling the robot to draw the image, and the device for controlling the robot to draw the image can be implemented by software, hardware or a combination of software and hardware. For example, the apparatus for controlling a robot to draw an image may include integrated or separate functional modules or units to perform the corresponding steps of the above-described methods. Referring to fig. 3, a schematic diagram of an apparatus for controlling a robot to draw an image according to some embodiments of the present disclosure is shown. Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for relevant points. The device embodiments described below are merely illustrative.
As shown in fig. 3, the apparatus 10 for controlling a robot to draw an image may include:
an obtaining module 101, configured to obtain a target image to be drawn;
the generating module 102 is configured to gradually decompose the target image into multiple layers of contours from outside to inside, and generate a contour track point sequence corresponding to each layer of contour;
and the data cleaning module 103 is used for performing data cleaning on the contour track point sequence according to a preset drawing precision range, and inputting the contour track point sequence subjected to data cleaning into the mechanical arm for drawing.
In some implementations of the embodiments of the present application, the generating module 102 is specifically configured to:
S1, retrieve the outermost contour of the target image, and record the retrieved initial contour track point sequence;
S2, adjust the initial contour track point sequence according to a preset stroke width, and generate the contour track point sequence corresponding to the outermost contour;
S3, subtract the outermost contour from the target image to obtain a remaining image;
S4, perform the same processing of S1 to S3 on the remaining image until the search-ending condition is triggered, the search-ending condition being: the area of the remaining image is calculated, and if it is smaller than a preset area threshold, the search ends.
In some implementations of the embodiments of the present application, the data cleaning module 103 is specifically configured to:
starting from the initial track point of each layer of contour, calculate the distance between the current track point and the next track point;
if the distance is smaller than the preset drawing precision range, filter out the next track point;
if the distance is larger than the preset drawing precision range, continue the search from that next track point, until all track points on the current contour have been traversed, which completes the preliminary filtering;
and after the preliminary filtering, if only one track point remains in the current contour, delete the contour.
In some implementations of embodiments of the present application, the apparatus 10 further comprises:
and the preprocessing module is used for preprocessing the target image before the generating module decomposes the target image step by step into a plurality of layers of contours from outside to inside.
The apparatus 10 for controlling a robot to draw an image according to the embodiment of the present application has the same advantages as the method for controlling a robot to draw an image according to the previous embodiment of the present application.
The embodiment of the present application further provides a robot corresponding to the method for controlling the robot to draw an image provided in the foregoing embodiments, please refer to fig. 4, which shows a schematic diagram of a robot provided in some embodiments of the present application. As shown in fig. 4, the robot 20 includes: the system comprises a processor 200, a memory 201, a bus 202 and a communication interface 203, wherein the processor 200, the communication interface 203 and the memory 201 are connected through the bus 202; the memory 201 stores a computer program that can be executed on the processor 200, and the processor 200 executes the method for controlling the robot to draw the image provided by any one of the foregoing embodiments when executing the computer program.
The memory 201 may include a high-speed Random Access Memory (RAM) and may further include a non-volatile memory, such as at least one disk memory. The communication connection between the network element of this system and at least one other network element is realized through at least one communication interface 203 (which may be wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, and the like.
Bus 202 can be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 201 is used for storing a program, and the processor 200 executes the program after receiving an execution instruction, and the method for controlling the robot to draw an image disclosed in any embodiment of the present application may be applied to the processor 200, or implemented by the processor 200.
The processor 200 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits in hardware or by instructions in the form of software in the processor 200. The processor 200 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by it. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in RAM, flash memory, ROM, PROM or EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory 201, and the processor 200 reads the information in the memory 201 and completes the steps of the method in combination with its hardware.
The robot provided by the embodiment of the application and the method for controlling the robot to draw the image provided by the embodiment of the application have the same beneficial effects as the method adopted, operated or realized by the robot.
Referring to fig. 5, a computer-readable storage medium is shown in the form of an optical disc 30, on which a computer program (i.e., a program product) is stored; when executed by a processor, the computer program performs the method for controlling a robot to draw an image provided by any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above-mentioned embodiment of the present application and the method for controlling the robot to draw the image provided by the embodiment of the present application have the same beneficial effects as the method adopted, run or implemented by the application program stored in the computer-readable storage medium.
It should be noted that the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the present application, and shall be covered by the claims and the specification.

Claims (10)

1. A method of controlling a robot to render an image, comprising:
acquiring a target image to be drawn;
decomposing the target image into a plurality of layers of contours from outside to inside step by step, and generating a contour track point sequence corresponding to each layer of contour;
and carrying out data cleaning on the contour track point sequence according to a preset drawing precision range, and inputting the contour track point sequence subjected to data cleaning into a mechanical arm for drawing.
2. The method according to claim 1, wherein the decomposing the target image into a plurality of layers of contours from outside to inside step by step and generating a contour track point sequence corresponding to each layer of contour comprises:
step S1, retrieving the outermost contour of the target image and recording the retrieved initial contour track point sequence;
step S2, adjusting the initial contour track point sequence according to a preset stroke width, and generating the contour track point sequence corresponding to the outermost contour;
step S3, subtracting the outermost contour from the target image to obtain a remaining image;
step S4, performing the same processing of steps S1 to S3 on the remaining image until the search-ending condition is triggered, the search-ending condition being: the area of the remaining image is calculated, and if it is smaller than a preset area threshold, the search ends.
3. The method according to claim 1, wherein the performing data cleaning on the contour track point sequence according to a preset drawing precision range comprises:
starting from the initial track point of each layer of contour, calculating the distance between the current track point and the next track point;
if the distance is smaller than the preset drawing precision range, filtering out the next track point;
if the distance is larger than the preset drawing precision range, continuing the search from that next track point, until all track points on the current contour have been traversed, completing the preliminary filtering;
and after the preliminary filtering, if only one track point remains in the current contour, deleting the contour.
4. The method according to any one of claims 1 to 3, wherein before the step of decomposing the target image into a plurality of layers of contours from outside to inside, the method further comprises:
and preprocessing the target image.
5. An apparatus for controlling a robot to draw an image, comprising:
the acquisition module is used for acquiring a target image to be drawn;
the generating module is used for decomposing the target image into a plurality of layers of contours from outside to inside step by step and generating a contour track point sequence corresponding to each layer of contour;
and the data cleaning module is used for carrying out data cleaning on the contour track point sequence according to a preset drawing precision range and inputting the contour track point sequence subjected to data cleaning into the mechanical arm for drawing.
6. The apparatus of claim 5, wherein the generating module is specifically configured to:
S1, retrieve the outermost contour of the target image, and record the retrieved initial contour track point sequence;
S2, adjust the initial contour track point sequence according to a preset stroke width, and generate the contour track point sequence corresponding to the outermost contour;
S3, subtract the outermost contour from the target image to obtain a remaining image;
S4, perform the same processing of S1 to S3 on the remaining image until the search-ending condition is triggered, the search-ending condition being: the area of the remaining image is calculated, and if it is smaller than a preset area threshold, the search ends.
7. The apparatus of claim 5, wherein the data cleaning module is specifically configured to:
starting from the initial track point of each layer of contour, calculate the distance between the current track point and the next track point;
if the distance is smaller than the preset drawing precision range, filter out the next track point;
if the distance is larger than the preset drawing precision range, continue the search from that next track point, until all track points on the current contour have been traversed, completing the preliminary filtering;
and after the preliminary filtering, if only one track point remains in the current contour, delete the contour.
8. The apparatus of any one of claims 5 to 7, further comprising:
and the preprocessing module is used for preprocessing the target image before the generating module decomposes the target image step by step into a plurality of layers of contours from outside to inside.
9. A robot, comprising: memory, processor and computer program stored on the memory and executable on the processor, characterized in that the processor executes the computer program to implement the method according to any of claims 1 to 4.
10. A computer readable medium having computer readable instructions stored thereon which are executable by a processor to implement the method of any one of claims 1 to 4.
CN202010016613.XA 2020-01-08 2020-01-08 Method and device for controlling robot to draw image, robot and medium Active CN111251309B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010016613.XA CN111251309B (en) 2020-01-08 2020-01-08 Method and device for controlling robot to draw image, robot and medium

Publications (2)

Publication Number Publication Date
CN111251309A true CN111251309A (en) 2020-06-09
CN111251309B CN111251309B (en) 2021-06-15

Family

ID=70943923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010016613.XA Active CN111251309B (en) 2020-01-08 2020-01-08 Method and device for controlling robot to draw image, robot and medium

Country Status (1)

Country Link
CN (1) CN111251309B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1710608A (en) * 2005-07-07 2005-12-21 上海交通大学 Picture processing method for robot drawing human-face cartoon
CN101101203A (en) * 2006-07-05 2008-01-09 三星电子株式会社 Apparatus, method, medium and mobile robot using characteristic point for dividing area
CN101133962A (en) * 2006-09-01 2008-03-05 西门子公司 Method for reconstructing a three-dimensional image volume and x-ray devices
US20110251719A1 (en) * 2010-04-08 2011-10-13 Vodafone Holding Gmbh Method and device for actuating a key of a keyboard with a tracer finger of a robot
CN104574457A (en) * 2014-12-05 2015-04-29 杭州新松机器人自动化有限公司 Shorter-path robot image drawing method
CN108460369A (en) * 2018-04-04 2018-08-28 南京阿凡达机器人科技有限公司 A kind of drawing practice and system based on machine vision

Also Published As

Publication number Publication date
CN111251309B (en) 2021-06-15


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Room 101, building 1, block C, Qianjiang Century Park, ningwei street, Xiaoshan District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Weiming Information Technology Co.,Ltd.

Applicant after: Institute of Information Technology, Zhejiang Peking University

Address before: Room 288-1, 857 Xinbei Road, Ningwei Town, Xiaoshan District, Hangzhou City, Zhejiang Province

Applicant before: Institute of Information Technology, Zhejiang Peking University

Applicant before: Hangzhou Weiming Information Technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200609

Assignee: Zhejiang smart video security Innovation Center Co.,Ltd.

Assignor: Institute of Information Technology, Zhejiang Peking University

Contract record no.: X2022330000930

Denomination of invention: Method, device, robot and medium for controlling robot to draw image

Granted publication date: 20210615

License type: Common License

Record date: 20221229