CN104637076A - Robot portrait drawing system and robot portrait drawing method - Google Patents
- Publication number
- CN104637076A CN104637076A CN201310574206.0A CN201310574206A CN104637076A CN 104637076 A CN104637076 A CN 104637076A CN 201310574206 A CN201310574206 A CN 201310574206A CN 104637076 A CN104637076 A CN 104637076A
- Authority
- CN
- China
- Prior art keywords
- robot
- image
- face
- portrait
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses a robot portrait drawing system and a robot portrait drawing method. The system comprises a near-infrared light source, a near-infrared industrial camera, a robot, and a processing unit, all connected in communication with one another. The near-infrared light source emits electromagnetic waves toward the human face; the near-infrared industrial camera captures the near-infrared light reflected from the face to complete the preliminary acquisition of the face image; the processing unit receives the generated face image, extracts the facial contour from it, and generates face drawing trajectory data; the robot receives the drawing trajectory data from the processing unit and completes the portrait according to the calculated movement distance for each step. The system adapts well to ambient light and can extract the facial contour stably and accurately even when the lighting changes, enabling the robot to complete the drawing task.
Description
Technical field
The invention belongs to the field of robot automation and relates generally to a robot portrait drawing system and method.
Background technology
In the prior art, the process by which a robot draws a human-face portrait is as follows: the sitter stands or sits upright in front of a camera, the camera acquires the sitter's face image, the acquired image is processed to extract the facial contour, and the extracted contour is sent to the robot controller after trajectory planning, whereupon the industrial robot completes the face portrait. A robot laboratory in Germany has demonstrated automatic portrait drawing in Europe using a six-axis industrial robot. The technique they adopt mainly uses edge extraction in image processing to obtain the outline of the face, which is then converted into vector points for the robot to draw. This method places very high demands on the lighting, and the system is unstable during drawing.
Summary of the invention
The main purpose of the present invention is to provide a robot portrait drawing system and method that overcome the defects of the prior art. By using a near-infrared light source, stable and reliable portrait drawing is ensured even when the ambient light changes. At the same time, off-line programming is used to complete the trajectory planning for the robot to draw the facial contour, which simplifies the robot programming process and improves programming efficiency.
To achieve the above object, the present invention adopts the following technical scheme:
A robot portrait drawing system comprises a near-infrared light source, a near-infrared industrial camera, a robot, and a processing unit, all connected in communication with one another, wherein:
The near-infrared light source emits electromagnetic waves toward the human face;
The near-infrared industrial camera captures the near-infrared light reflected from the face, completing the preliminary acquisition of the face image;
The processing unit receives the generated face image, extracts the facial contour from the image, and generates face drawing trajectory data;
The robot receives the drawing trajectory data from the processing unit and, according to the calculated movement distance for each step, completes the drawing of the portrait.
Preferably, the processing unit further comprises an image correction unit, a binarization unit, a contour extraction unit, and a trajectory planning (off-line programming) unit:
The image correction unit removes the shadows from the face in the image and simultaneously improves the overall brightness of the face image;
The binarization unit binarizes the corrected face image;
The contour extraction unit extracts the facial contour from the binarized image;
The trajectory planning unit uses off-line programming to plan the robot's drawing trajectory according to the extracted facial contour.
Preferably, the near-infrared light source consists of 40 identical near-infrared LEDs arranged in two concentric squares (a "回" shape).
Preferably, the wavelength of the electromagnetic wave is 830 nm to 870 nm.
Preferably, the near-infrared industrial camera is located at the center of the 40 near-infrared LEDs.
Preferably, the robot portrait drawing system further comprises a drawing board and a paper changing unit.
The drawing board supports the paper on which the robot paints.
The paper changing unit automatically changes the paper each time the robot finishes drawing a face on the drawing board.
A robot portrait drawing method comprises the following steps:
the near-infrared light source emits electromagnetic waves toward the human face;
the near-infrared light reflected from the face is captured, completing the preliminary acquisition of the face image;
the shadows on the face are removed from the image and the overall brightness of the face image is improved, yielding a corrected face image;
the corrected face image is binarized;
the facial contour is extracted from the binarized image;
off-line programming is used to plan the robot's drawing trajectory from the extracted facial contour;
the movement distance of the robot for each step is calculated, completing the drawing of the portrait.
Preferably, extracting the facial contour from the binarized image comprises locating the positions of the facial features from the positional relationship between the valley points of the horizontal or vertical projection, as follows:
a horizontal integral projection is computed for the entire image;
in the horizontal integral projection of the entire image, the trough points on either side of the center point are found and the corresponding ordinates in the original image are recorded; the trough point above the center is taken as the row position of the eyes, and the trough point below the center as the row position of the mouth;
once the eye and mouth positions are determined, the vertical distance between them is known and the face region can be determined, with the eye region at the top of the face region and the mouth region below.
Preferably, the approximate width of the face can be determined from the positional relationship between the peak and valley points of the horizontal or vertical projection, as follows:
a vertical integral projection is computed for the entire image according to the mathematical model;
in the vertical integral projection of the entire image, the trough points on either side of the center point are found and the corresponding abscissas in the original image are recorded; the trough point to the left of the center is taken as the left edge of the face, and the trough point to the right of the center as the right edge of the face.
With the above technical scheme, the invention has the following advantages:
1. The system adapts well to ambient light; even when the lighting changes, it can extract the facial contour stably and accurately, enabling the robot to complete the drawing task.
2. The system uses off-line programming to complete the trajectory planning for drawing the facial contour, which simplifies the robot programming process and improves programming efficiency.
Accompanying drawing explanation
Fig. 1 is a structural block diagram of a robot portrait drawing system according to the present invention;
Fig. 2 is a flowchart of a robot portrait drawing method according to the present invention.
Embodiment
To make the objects, technical scheme, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.
Referring to Fig. 1, a robot portrait drawing system comprises a near-infrared light source 10, a near-infrared industrial camera 20, a robot 30, and a processing unit 40, all connected in communication with one another.
The near-infrared light source 10 emits electromagnetic waves toward the human face. It consists of 40 identical near-infrared LEDs arranged in two concentric squares (a "回" shape, not shown). The wavelength of the electromagnetic wave is 830 nm to 870 nm, with the best results at 850 nm. The position of the near-infrared light source can be adjusted according to the shape and size of the face.
The near-infrared industrial camera 20 is located at the center of the 40 near-infrared LEDs, i.e. the center of the "回" shape, and captures the near-infrared light reflected from the face, completing the preliminary acquisition of the face image. After capturing the reflected light, the camera generates an image file of the face, in bmp or jpg format. Compared with a visible-light image, the acquired near-infrared image contains only grayscale information and no color information; compared with the mid- and far-infrared images of thermal imaging, the near-infrared image retains more image detail. For example, from a captured near-infrared picture the human eye can clearly judge the identity of the person photographed, as well as personal attributes such as sex and age. In this respect, near-infrared imaging is closer to visible-light imaging. An optical filter 21 is mounted on the near-infrared industrial camera 20 to filter out interference from light outside the 830-870 nm band.
The processing unit 40 receives the generated face image and extracts the facial contour from it.
The processing unit 40 further comprises an image correction unit 41, a binarization unit 42, a contour extraction unit 43, and a trajectory planning unit 44.
The image correction unit 41 removes the shadows from the face in the image and simultaneously improves the overall brightness, yielding a corrected face image.
In this embodiment, mean-variance normalization is used to remove the shadows from the face in the image, and a Gamma method is used to correct the face image to improve its overall brightness.
To eliminate the influence of illumination variation on the face image, the image is preprocessed with mean-variance normalization, which has two steps: a grey-level transform and a grey stretch.
Specifically, let f(x, y) be the value of a pixel in the image. For an image containing M × N pixels, the mean of all pixel values is aver = (1 / (M·N)) · Σ_x Σ_y f(x, y), and the standard deviation is σ = sqrt( (1 / (M·N)) · Σ_x Σ_y (f(x, y) − aver)² ).
The grey-level transform can then be expressed as f′(x, y) = (f(x, y) − aver) / σ, i.e. the grey value of every pixel is recalculated by this formula. After transforming all pixels, the range of grey values has changed and no longer lies in the expected 0-255 range, so a grey stretch is also needed.
After the grey-level transform, let max be the largest of all pixel grey values and min the smallest; the grey stretch can then be expressed as f″(x, y) = (f′(x, y) − min) × 255 / (max − min). After this preprocessing, the illumination of the image is very even compared with the original and the shadows are well removed.
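The two-step preprocessing above (grey-level transform, then grey stretch) can be sketched as follows. This is an illustrative sketch only: the function name and the flat-list representation of the image are assumptions, not part of the patent.

```python
import math

def normalize_and_stretch(pixels):
    """Mean-variance normalization followed by a grey stretch back to 0-255.

    pixels: a flat list of grey values (0-255) representing the image.
    """
    n = len(pixels)
    aver = sum(pixels) / n
    sigma = math.sqrt(sum((p - aver) ** 2 for p in pixels) / n)
    # Grey-level transform: f'(x, y) = (f(x, y) - aver) / sigma.
    # The result no longer lies in 0-255, so a stretch must follow.
    t = [(p - aver) / sigma for p in pixels]
    lo, hi = min(t), max(t)
    # Grey stretch: f''(x, y) = (f'(x, y) - min) * 255 / (max - min).
    return [(v - lo) * 255.0 / (hi - lo) for v in t]
```

Running the sketch on any non-uniform list yields values that exactly span the 0-255 range, which is what the stretch step is for.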
A connection between pixel values and Gamma values is established through interval mappings. Specifically, let P denote the pixel-value interval [0, 255], let Ω denote the angle interval [0, π], let Γ denote the Gamma-value interval, let x denote the value of a pixel (x ∈ P), and let x_m be the midpoint of the interval P. The linear mapping from P to Ω is defined by φ(x) = πx / (2·x_m). The mapping from Ω to Γ is h: Ω → Γ, Γ = {r | r = h(x)}, where h(x) = 1 + f1(x) + f2(x) + f3(x), and a ∈ (0, 1) is a weighting coefficient.
Under real illumination conditions an image always contains transitional regions between highlights and shadows. Because the pixels in a transitional region are not directly lit by the light source, they often degrade the quality of the image. To apply different correction strengths to the highlight, transitional, and shadow regions, f2(x) and f3(x) realize a correction that is weak in the transitional region and strong in the highlight and shadow regions.
f2(x) = (K(x) + b)·cos α + x·sin α, with K(x) = ρ·sin(4πx/255) and α = arctan(−b/x_m). The function f2(x) makes the correction deviate from the middle function f1(x): weak in the middle and strong at both ends, extending the Gamma-value interval while applying different correction strengths to the highlight, transitional, and shadow regions of the image. Here ρ is the amplitude of K(x), b determines the maximum range of variation of f2(x), and α is the deflection angle of K(x).
f3(x) = R(x)·cos(3πx/255), with R(x) = c·|x/x_m − 1|. The curve of f3(x) has three zero crossings in the pixel range, dividing it into four regions, in each of which f3(x) corrects the Gamma value to a different degree: it strengthens the variation of the Gamma value in the two end regions of the pixel range while slowing it in the two middle regions. This not only effectively improves the illumination in the highlight and shadow regions of the image but also helps preserve the true pixel information in the transitional regions. c is the amplitude of R(x) and bounds the maximum extent to which f3(x) affects the Gamma value.
The Gamma correction function is g(x) = 255 · (x/255)^(1/h(x)), where g(x) is the corrected value of pixel x. The corrected value of a pixel is thus tied to the value of the pixel itself, meeting the requirement of correcting the image without prior knowledge of its illumination.
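The core correction g(x) = 255·(x/255)^(1/h(x)) can be sketched minimally. Note the hedge: h is passed in as a precomputed value rather than built from f1, f2, f3, because the text leaves their tuning parameters (a, b, ρ, c) open; the function name is an assumption.

```python
def gamma_correct(x, h):
    """Pixel-dependent Gamma correction g(x) = 255 * (x/255)^(1/h).

    x: pixel value in [0, 255]; h: the value of h(x) for this pixel,
    assumed already computed from f1, f2, f3 (parameters left open above).
    h > 1 brightens dark pixels, as the Gamma method intends.
    """
    return 255.0 * (x / 255.0) ** (1.0 / h)
```

With h > 1 a mid-grey pixel is pushed upward while the endpoints 0 and 255 are fixed, which matches the stated goal of raising overall brightness without clipping.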
The binarization unit 42 binarizes the corrected face image.
To segment the feature points from the face image, the choice of binarization threshold is crucial. This embodiment uses the OTSU algorithm (also called the maximum between-class variance method) to obtain the optimal threshold for segmenting the image.
Specifically, let the grayscale image have L grey levels, so the grey range is [0, L−1]. The optimal threshold computed by the OTSU algorithm is:

t* = argmax over t of [ w0(t)·(u0(t) − u)² + w1(t)·(u1(t) − u)² ]

The variables are as follows: for a candidate threshold t, w0 is the background proportion, u0 the background mean, w1 the foreground proportion, u1 the foreground mean, and u is the mean of the entire image. The t that maximizes the expression above is the optimal threshold for segmenting the image.
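The exhaustive search for the OTSU threshold can be sketched directly from the formula above. This is a naive O(L·n) sketch for clarity (practical implementations use a histogram); the function name and list representation are assumptions.

```python
def otsu_threshold(pixels, levels=256):
    """Return the t maximizing w0*(u0-u)^2 + w1*(u1-u)^2 over all t."""
    n = len(pixels)
    u = sum(pixels) / n                  # mean of the entire image
    best_t, best_var = 0, -1.0
    for t in range(levels):
        bg = [p for p in pixels if p <= t]   # background class
        fg = [p for p in pixels if p > t]    # foreground class
        if not bg or not fg:
            continue                     # skip degenerate splits
        w0, w1 = len(bg) / n, len(fg) / n
        u0, u1 = sum(bg) / len(bg), sum(fg) / len(fg)
        var = w0 * (u0 - u) ** 2 + w1 * (u1 - u) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a clearly bimodal image (e.g. dark face features on a bright background), the returned t falls between the two modes, which is exactly what the binarization step needs.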
The contour extraction unit 43 extracts the facial contour from the binarized image.
Here, extraction relies mainly on two common operators in mathematical morphology: erosion and dilation.
In morphological operations, erosion removes boundary points of an object, while dilation merges background points adjacent to the object into it. When an erosion is performed in image processing, the value of a pixel p is set to the minimum of all pixels covered by the kernel centered at p; likewise, for a dilation, the minimum is replaced by the maximum.
Morphological processing effectively distinguishes the face region from non-face regions, so erosion and dilation serve as effective auxiliary means of determining the facial contour.
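The min/max formulation of erosion and dilation described above can be sketched as follows. This is a didactic sketch with a square (2k+1)×(2k+1) kernel and edge clamping; the helper name and 2-D list representation are assumptions.

```python
def _window_filter(img, k, select):
    """Apply min (erosion) or max (dilation) over a (2k+1)x(2k+1) window."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = select(
                img[j][i]
                for j in range(max(0, y - k), min(h, y + k + 1))
                for i in range(max(0, x - k), min(w, x + k + 1)))
    return out

def erode(img, k=1):
    """Each pixel becomes the minimum under the kernel covering it."""
    return _window_filter(img, k, min)

def dilate(img, k=1):
    """Same window, maximum instead of minimum."""
    return _window_filter(img, k, max)
```

Eroding a lone foreground pixel removes it (a boundary point is eliminated), while dilating it merges the surrounding background points into the object, matching the description above.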
The approximate location of the face in the whole image is obtained by projecting the face image.
The horizontal integral projection of row y is H(y) = Σ_{x=1}^{N} I(x, y), where (x, y) is the position of a pixel, I(x, y) is its grey value, and N is the number of pixels in the row; the horizontal projection thus sums the grey values of all pixels in each row and displays the result.
The vertical integral projection of column x is V(x) = Σ_{y=1}^{N} I(x, y), where N is the number of pixels in the column; the vertical projection thus sums the grey values of all pixels in each column and displays the result.
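The two projections are simple row and column sums; a minimal sketch (function names and 2-D list representation assumed):

```python
def horizontal_projection(img):
    """H(y): sum of the grey values of all pixels in row y."""
    return [sum(row) for row in img]

def vertical_projection(img):
    """V(x): sum of the grey values of all pixels in column x."""
    return [sum(col) for col in zip(*img)]
```

On a binarized face image, rows crossing the dark eye and mouth bands produce troughs in H(y), and columns crossing the face edges produce troughs in V(x), which is what the localization below exploits.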
From the positional relationship between the valley points of the horizontal or vertical projection, the row positions of organs such as the eyes and mouth can be located accurately. The localization proceeds as follows:
A horizontal integral projection is computed for the entire image.
In the horizontal integral projection of the entire image, the trough points on either side of the center point are found and the corresponding ordinates in the original image are recorded. The trough point above the center is taken as the row position of the eyes, and the trough point below the center as the row position of the mouth.
Once the eye and mouth rows are determined, the vertical distance between them is known and the face region can be determined, with the eye region at the top of the face region and the mouth region below.
From the positional relationship between the peak and valley points of the horizontal or vertical projection, the approximate width of the face can be determined. The localization proceeds as follows:
A vertical integral projection is computed for the entire image according to the mathematical model.
In the vertical integral projection of the entire image, the trough points on either side of the center point are found and the corresponding abscissas in the original image are recorded. The trough point to the left of the center is taken as the left edge of the face, and the trough point to the right as the right edge.
Once the left and right positions of the face are located, the horizontal distance between them is known, determining the width of the face region, whose left and right coordinates are thereby fixed.
Finally, from the key points obtained by the vertical and horizontal integral projections, the facial contour can be located accurately.
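The "trough point on either side of the center" search used in both localization steps can be sketched as follows. This is a minimal sketch under the assumption that the deepest local minimum on each side of the profile's midpoint is the wanted trough; names are illustrative, not from the patent.

```python
def valleys_around_center(profile):
    """Return (left_idx, right_idx): the deepest local minima on either
    side of the centre of a projection profile. For a horizontal profile
    these are the eye and mouth rows; for a vertical profile, the left
    and right face edges. Returns None on a side with no local minimum."""
    mid = len(profile) // 2

    def local_minima(seg, offset):
        return [offset + i for i in range(1, len(seg) - 1)
                if seg[i] < seg[i - 1] and seg[i] < seg[i + 1]]

    left = local_minima(profile[:mid + 1], 0)
    right = local_minima(profile[mid:], mid)
    pick = lambda idxs: min(idxs, key=lambda i: profile[i]) if idxs else None
    return pick(left), pick(right)
```

Given a profile with one dip on each side of the center, the function returns the two dip indices, from which the distance between the features (and hence the face region) follows.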
The trajectory planning unit 44 uses off-line programming to plan the robot's drawing trajectory according to the extracted facial contour.
The robot 30 receives the drawing trajectory data from the processing unit 40 and, according to the calculated movement distance for each step, completes the drawing of the portrait.
The distance the robot moves per pixel is determined from the resolution of the selected near-infrared camera and the fixed size of the A3 drawing paper. The camera resolution is initially set to 640 × 480, and the starting point of the robot's portrait corresponds to one edge of the A3 paper; the tool coordinate system at the pen tip of the robot's brush then coincides with the coordinate system established at the edge of the A3 paper. From these quantities, the number of millimetres the industrial robot must move in X and in Y for each one-pixel step in the image can be computed.
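The per-pixel movement computation can be sketched under explicit assumptions: the patent names A3 paper and a 640 × 480 camera but gives neither the paper orientation nor the exact formula, so landscape A3 (420 × 297 mm per ISO 216) and an edge-to-edge mapping are assumed here.

```python
# Assumed constants: landscape A3 sheet and the 640x480 camera from the text.
A3_MM = (420.0, 297.0)
CAMERA_PX = (640, 480)

def mm_per_pixel():
    """Millimetres the robot moves per one-pixel step, per axis."""
    return A3_MM[0] / CAMERA_PX[0], A3_MM[1] / CAMERA_PX[1]

def pixel_to_mm(px, py):
    """Map an image pixel coordinate to robot X/Y offsets in millimetres,
    with the tool frame origin at the paper edge as described above."""
    sx, sy = mm_per_pixel()
    return px * sx, py * sy
```

Under these assumptions each pixel step corresponds to about 0.656 mm in X and 0.619 mm in Y, so the full 640 × 480 image spans the whole sheet.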
The robot portrait drawing system further comprises a drawing board 50 and a paper changing unit 60.
The drawing board 50 supports the paper on which the robot 30 paints.
The paper changing unit 60 receives the robot's completion instruction and automatically changes the paper.
Referring to Fig. 2, a robot portrait drawing method comprises the following steps:
S10: the near-infrared light source emits electromagnetic waves toward the human face.
S20: the near-infrared light reflected from the face is captured, completing the preliminary acquisition of the face image.
S30: the shadows on the face are removed from the image and the overall brightness of the face image is improved, yielding a corrected face image.
In this embodiment, mean-variance normalization is used to remove the shadows from the face in the image, and a Gamma method is used to correct the face image to improve its overall brightness.
S40: the corrected face image is binarized.
To segment the feature points from the face image, the choice of binarization threshold is crucial. This embodiment uses the OTSU algorithm (also called the maximum between-class variance method) to obtain the optimal threshold for segmenting the image.
Specifically, let the grayscale image have L grey levels, so the grey range is [0, L−1]. The optimal threshold computed by the OTSU algorithm is:

t* = argmax over t of [ w0(t)·(u0(t) − u)² + w1(t)·(u1(t) − u)² ]

The variables are as follows: for a candidate threshold t, w0 is the background proportion, u0 the background mean, w1 the foreground proportion, u1 the foreground mean, and u is the mean of the entire image. The t that maximizes the expression above is the optimal threshold for segmenting the image.
S50: the facial contour is extracted from the binarized image.
Here, extraction relies mainly on two common operators in mathematical morphology: erosion and dilation.
In morphological operations, erosion removes boundary points of an object, while dilation merges background points adjacent to the object into it. When an erosion is performed in image processing, the value of a pixel p is set to the minimum of all pixels covered by the kernel centered at p; likewise, for a dilation, the minimum is replaced by the maximum.
Morphological processing effectively distinguishes the face region from non-face regions, so erosion and dilation serve as effective auxiliary means of determining the facial contour.
The approximate location of the face in the whole image is obtained by projecting the face image.
The horizontal integral projection of row y is H(y) = Σ_{x=1}^{N} I(x, y), where (x, y) is the position of a pixel, I(x, y) is its grey value, and N is the number of pixels in the row; the horizontal projection sums the grey values of all pixels in each row.
The vertical integral projection of column x is V(x) = Σ_{y=1}^{N} I(x, y), where N is the number of pixels in the column; the vertical projection sums the grey values of all pixels in each column.
From the positional relationship between the valley points of the horizontal or vertical projection, the row positions of organs such as the eyes and mouth can be located accurately, as follows:
A horizontal integral projection is computed for the entire image.
In the horizontal integral projection of the entire image, the trough points on either side of the center point are found and the corresponding ordinates in the original image are recorded. The trough point above the center is taken as the row position of the eyes, and the trough point below the center as the row position of the mouth.
Once the eye and mouth rows are determined, the vertical distance between them is known and the face region can be determined, with the eye region at the top of the face region and the mouth region below.
From the positional relationship between the peak and valley points of the horizontal or vertical projection, the approximate width of the face can be determined, as follows:
A vertical integral projection is computed for the entire image according to the mathematical model.
In the vertical integral projection of the entire image, the trough points on either side of the center point are found and the corresponding abscissas in the original image are recorded. The trough point to the left of the center is taken as the left edge of the face, and the trough point to the right as the right edge.
Once the left and right positions of the face are located, the horizontal distance between them is known, determining the width of the face region, whose left and right coordinates are thereby fixed.
Finally, from the key points obtained by the vertical and horizontal integral projections, the facial contour can be located accurately.
S60: off-line programming is used to plan the robot's drawing trajectory from the extracted facial contour.
S70: the movement distance of the robot for each step is calculated, completing the drawing of the portrait.
The distance the robot moves per pixel is determined from the resolution of the selected near-infrared camera and the fixed size of the A3 drawing paper. The camera resolution is initially set to 640 × 480, and the starting point of the robot's portrait corresponds to one edge of the A3 paper; the tool coordinate system at the pen tip of the robot's brush then coincides with the coordinate system established at the edge of the A3 paper. From these quantities, the number of millimetres the industrial robot must move in X and in Y for each one-pixel step in the image can be computed.
The robot portrait drawing method further comprises: each time the robot finishes drawing a face on the drawing board, the paper is changed automatically.
The above is merely a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that a person skilled in the art can readily conceive of within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the protection scope of the claims.
Claims (9)
1. A robot portrait drawing system, characterized in that it comprises a near-infrared light source, a near-infrared industrial camera, a robot, and a processing unit, the near-infrared light source, near-infrared industrial camera, robot, and processing unit being connected in communication with one another, wherein:
the near-infrared light source emits electromagnetic waves toward the human face;
the near-infrared industrial camera captures the near-infrared light reflected from the face, completing the preliminary acquisition of the face image;
the processing unit receives the generated face image, extracts the facial contour from the image, and generates face drawing trajectory data;
the robot receives the drawing trajectory data from the processing unit and, according to the calculated movement distance for each step, completes the drawing of the portrait.
2. The robot portrait drawing system of claim 1, characterized in that the processing unit further comprises an image correction unit, a binarization unit, a contour extraction unit, and a trajectory planning (off-line programming) unit:
the image correction unit removes the shadows from the face in the image and simultaneously improves the overall brightness of the face image;
the binarization unit binarizes the corrected face image;
the contour extraction unit extracts the facial contour from the binarized image;
the trajectory planning unit uses off-line programming to plan the robot's drawing trajectory according to the extracted facial contour.
3. The robot portrait drawing system of claim 1, characterized in that the near-infrared light source consists of 40 identical near-infrared LEDs arranged in two concentric squares (a "回" shape).
4. The robot portrait drawing system of claim 1, characterized in that the wavelength of the electromagnetic wave is 830 nm to 870 nm.
5. The robot portrait drawing system of claim 3, characterized in that the near-infrared industrial camera is located at the center of the 40 near-infrared LEDs.
6. The robot portrait drawing system of claim 1, characterized in that it further comprises a drawing board and a paper changing unit:
the drawing board supports the paper on which the robot paints;
the paper changing unit automatically changes the paper each time the robot finishes drawing a face on the drawing board.
7. A robot portrait drawing method, characterized in that:
the near-infrared light source emits electromagnetic waves toward the human face;
the near-infrared light reflected from the face is captured, completing the preliminary acquisition of the face image;
the shadows on the face are removed from the image and the overall brightness of the face image is improved, yielding a corrected face image;
the corrected face image is binarized;
the facial contour is extracted from the binarized image;
off-line programming is used to plan the robot's drawing trajectory from the extracted facial contour;
the movement distance of the robot for each step is calculated, completing the drawing of the portrait.
8. The robot portrait drawing method as claimed in claim 7, characterized in that extracting the facial contour from the binarized image comprises locating the horizontal positions of the facial features from the positional relationship between the valley points of the horizontal or vertical projection, specifically as follows:
horizontal integral projection is performed on the entire image;
in the horizontal integral projection of the entire image, the valley points on either side of the center point are found and the ordinates of the corresponding points in the original image are recorded, the valley point above the center point being taken as the horizontal position of the eyes and the valley point below the center point as the horizontal position of the mouth;
once the horizontal positions of the eyes and mouth are determined, the vertical distance between them is known, and the face region can then be delimited, with the eye region falling in the upper part of the face region and the mouth region in the lower part.
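The localisation steps above can be sketched as follows (an illustrative NumPy reading of the claim, assuming a binarized image in which facial features are dark on a bright face, so feature rows appear as valleys in the row-sum projection; the synthetic "face" is purely for demonstration):

```python
import numpy as np

def locate_eyes_and_mouth(binary: np.ndarray):
    """Find eye/mouth rows as projection valleys above/below the image center."""
    proj = binary.sum(axis=1).astype(float)             # horizontal integral projection
    center = len(proj) // 2
    eye_row = int(np.argmin(proj[:center]))             # valley above the center -> eyes
    mouth_row = center + int(np.argmin(proj[center:]))  # valley below the center -> mouth
    return eye_row, mouth_row, mouth_row - eye_row      # rows and their vertical distance

# Synthetic face: bright image with dark bands standing in for eyes and mouth
face = np.full((100, 100), 255, dtype=np.uint8)
face[28:32, 20:80] = 0   # "eyes" band
face[68:72, 30:70] = 0   # "mouth" band
eyes, mouth, dist = locate_eyes_and_mouth(face)
```

With the two rows and their separation known, the face region can be boxed as the claim describes; a real implementation would smooth the projection before taking valleys.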
9. The robot portrait drawing method as claimed in claim 7, characterized in that the approximate width of the face is determined from the positional relationship between the peak and valley points of the horizontal or vertical projection, specifically as follows:
vertical integral projection is performed on the entire image according to the mathematical model;
in the vertical integral projection of the entire image, the valley points on either side of the center point are found and the abscissas of the corresponding points in the original image are recorded, the valley point to the left of the center point being taken as the left edge of the face and the valley point to the right of the center point as the right edge of the face.
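The face-width estimate can be sketched the same way (same illustrative assumptions as for the horizontal projection: dark edges produce valleys in the column-sum projection; the synthetic image is not from the patent):

```python
import numpy as np

def face_left_right(binary: np.ndarray):
    """Estimate the face's left/right edges from vertical projection valleys."""
    proj = binary.sum(axis=0).astype(float)            # vertical integral projection
    center = len(proj) // 2
    left = int(np.argmin(proj[:center]))               # valley left of center -> left edge
    right = center + int(np.argmin(proj[center:]))     # valley right of center -> right edge
    return left, right, right - left                   # edges and approximate face width

# Synthetic image: bright background with dark vertical bands at the face edges
img = np.full((100, 100), 255, dtype=np.uint8)
img[:, 18:22] = 0   # left face edge
img[:, 78:82] = 0   # right face edge
left, right, width = face_left_right(img)
```

Together with the eye/mouth rows of claim 8, this yields a bounding box of the face from projections alone, without any trained detector.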
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310574206.0A CN104637076A (en) | 2013-11-13 | 2013-11-13 | Robot portrait drawing system and robot portrait drawing method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104637076A true CN104637076A (en) | 2015-05-20 |
Family
ID=53215782
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310574206.0A Pending CN104637076A (en) | 2013-11-13 | 2013-11-13 | Robot portrait drawing system and robot portrait drawing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104637076A (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105291108A (en) * | 2015-09-13 | 2016-02-03 | 常州大学 | Intelligent full-filling and laser-engraving plotting technology |
CN105437768A (en) * | 2015-09-13 | 2016-03-30 | 常州大学 | Machine-vision-based intelligent artistic paint robot |
CN106113045A (en) * | 2016-08-29 | 2016-11-16 | 昆山塔米机器人有限公司 | The portrait robot that can remotely manage and operational approach thereof |
CN106485765A (en) * | 2016-10-13 | 2017-03-08 | 中国科学院半导体研究所 | A kind of method of automatic description face stick figure |
CN106651988A (en) * | 2016-10-13 | 2017-05-10 | 中国科学院半导体研究所 | Automatic drawing system for face line paint |
CN108230238A (en) * | 2017-12-22 | 2018-06-29 | 苏州灵猴机器人有限公司 | Robot human face sketch system and its drawing practice |
CN108335423A (en) * | 2017-12-08 | 2018-07-27 | 广东数相智能科技有限公司 | A kind of system for drawing portrait, method and storage medium |
CN108614994A (en) * | 2018-03-27 | 2018-10-02 | 深圳市智能机器人研究院 | A kind of Human Head Region Image Segment extracting method and device based on deep learning |
WO2021139556A1 (en) * | 2020-01-08 | 2021-07-15 | 杭州未名信科科技有限公司 | Method and apparatus for controlling robotic arm to draw portrait, and robot system |
CN114157847A (en) * | 2021-11-11 | 2022-03-08 | 深圳市普渡科技有限公司 | Projection method, system, terminal device, robot and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1710608A (en) * | 2005-07-07 | 2005-12-21 | 上海交通大学 | Picture processing method for robot drawing human-face cartoon |
CN101404060A (en) * | 2008-11-10 | 2009-04-08 | 北京航空航天大学 | Human face recognition method based on visible light and near-infrared Gabor information amalgamation |
Non-Patent Citations (1)
Title |
---|
Song Yu et al., "A Method for Extracting Facial Features of the Human Face" (《一种人脸面部特征的提取方法》), Journal of Changchun University of Technology (Natural Science Edition) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104637076A (en) | Robot portrait drawing system and robot portrait drawing method | |
CN105701437B (en) | System for drawing portrait based on robot | |
CN105760826B (en) | Face tracking method and device and intelligent terminal | |
CN105138965B (en) | A kind of near-to-eye sight tracing and its system | |
CN104978012B (en) | One kind points to exchange method, apparatus and system | |
CN106066696B (en) | Sight tracing under natural light based on projection mapping correction and blinkpunkt compensation | |
CN101872237B (en) | Method and system for pupil tracing as well as correction method and module for pupil tracing | |
CN101339606B (en) | Human face critical organ contour characteristic points positioning and tracking method and device | |
CN111310760B (en) | Method for detecting alpha bone inscription characters by combining local priori features and depth convolution features | |
CN104063700B (en) | The method of eye center point location in natural lighting front face image | |
CN103530618A (en) | Non-contact sight tracking method based on corneal reflex | |
CN105069389A (en) | Two-dimensional code partitioning decoding method and system | |
CN104598878A (en) | Multi-modal face recognition device and method based on multi-layer fusion of gray level and depth information | |
CN105046252A (en) | Method for recognizing Renminbi (Chinese currency yuan) crown codes | |
CN103679175A (en) | Fast 3D skeleton model detecting method based on depth camera | |
CN104361353A (en) | Application of area-of-interest positioning method to instrument monitoring identification | |
CN108960235B (en) | Method for identifying filling and coating block of answer sheet | |
CN105006003A (en) | Random projection fern based real-time target tracking algorithm | |
CN102938060A (en) | Dynamic gesture recognition system and method | |
Ferhat et al. | A cheap portable eye-tracker solution for common setups | |
CN109978940A (en) | A kind of SAB air bag size vision measuring method | |
CN107341811A (en) | The method that hand region segmentation is carried out using MeanShift algorithms based on depth image | |
CN104820999B (en) | A kind of method that natural image is converted into ink and wash style image | |
CN104331885A (en) | Circular target detection method based on voting line clustering | |
CN103996020A (en) | Head mounted eye tracker detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20150520 |