CN109313710A - Target recognition model training method, target recognition method, device and robot - Google Patents

Target recognition model training method, target recognition method, device and robot

Info

Publication number
CN109313710A
CN109313710A (application CN201880002216.8A)
Authority
CN
China
Prior art keywords
identified
target
identification
model
cargo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880002216.8A
Other languages
Chinese (zh)
Inventor
张浩
吴启帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Blue Fat Robot Co Ltd
Original Assignee
Shenzhen Blue Fat Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Blue Fat Robot Co Ltd filed Critical Shenzhen Blue Fat Robot Co Ltd
Publication of CN109313710A publication Critical patent/CN109313710A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/56 - Extraction of image or video features relating to colour

Abstract

This application discloses a target recognition model training method, a target recognition method, a device, and a robot. The training method includes: acquiring a sampled image of a target to be identified; inputting the sampled image into a recognition model and outputting segmentation data after recognition; comparing the segmentation data with standard data of a standard image to obtain an identification error; and feeding the identification error back to the recognition model to correct the recognition model. When the standard image is acquired, the edges of the target are coated with a fluorescent material, and the standard data of the target to be identified are obtained from the color developed by the fluorescent material in the standard image. In this way, the application can improve the speed and efficiency of model training.

Description

Target recognition model training method, target recognition method, device and robot
Technical field
This application relates to the field of target recognition, and more particularly to a target recognition model training method, a target recognition method, a device, and a robot.
Background art
In current target recognition, a recognition model is trained by first inputting an acquired target sample image into the model, outputting a labeled image, comparing the labeled image with a standard image, feeding the resulting error back to the recognition model, and correcting the model. The target position is then changed and the process is repeated until the model is trained, after which the model is used for target recognition. In this training process, the target contour in the acquired image has to be outlined manually, which is time-consuming and inefficient.
Summary of the invention
The present application provides a target recognition model training method, a target recognition method, a device, and a robot, which solve the problem that obtaining standard images by manually outlining contours is time-consuming and inefficient.
To solve the above problems, a first technical solution adopted by the application is to provide a target recognition model training method, comprising: acquiring a sampled image of a target to be identified; inputting the sampled image into a recognition model and outputting segmentation data after recognition; comparing the segmentation data after recognition with standard data of a standard image to obtain an identification error; and feeding the identification error back to the recognition model to correct it. When the standard image is acquired, the edges of the target to be identified are coated with a fluorescent material, and the standard data of the target are obtained from the color developed by the fluorescent material in the standard image.
To solve the above problems, a second technical solution adopted by the application is to provide a target recognition method, comprising: acquiring an image of goods to be identified; inputting the image into a trained target recognition model and outputting segmentation data of the goods after recognition; and using the segmentation data to obtain data of the goods, the data being used to plan a position and/or posture for grasping the goods. The target recognition model is trained by the target recognition model training method described above.
To solve the above problems, a third technical solution adopted by the application is to provide a target recognition device, comprising: a communication circuit and a processor connected to each other. The communication circuit is used to acquire a sampled image of a target to be identified; the processor is used to input the sampled image into a recognition model, output segmentation data after recognition, compare the segmentation data with standard data of a standard image to obtain an identification error, feed the identification error back to the recognition model, and correct the recognition model. When the standard image is acquired, the edges of the target are coated with a fluorescent material, and the standard data are obtained from the color developed by the fluorescent material in the standard image.
To solve the above problems, a fourth technical solution adopted by the application is to provide a robot, comprising: a mechanical arm and the target recognition device described above, connected to each other. The mechanical arm is used to plan a position and/or posture for grasping a target according to the target data identified by the target recognition device, so as to operate on the target object.
The beneficial effect of the application is that, in contrast to the prior art, in some embodiments of the application, when the standard image is acquired during training of the target recognition model, the target edges are coated with a fluorescent material, and the standard data of the target to be identified can be obtained from the color developed by the fluorescent material in the standard image. The standard data can therefore be obtained directly from the acquired standard image, without having to manually outline the target after acquiring an image in order to obtain a standard image and then the standard data. This saves the time spent on manual outlining and improves the speed and efficiency of model training.
Brief description of the drawings
Fig. 1 is a flow diagram of a first embodiment of the target recognition model training method of the application;
Fig. 2 is a flow diagram of a second embodiment of the target recognition model training method of the application;
Fig. 3 is a schematic diagram of an application scenario for acquiring the standard image in the second embodiment of the target recognition model training method of the application;
Fig. 4 is a schematic diagram of another application scenario for acquiring the standard image in the second embodiment of the target recognition model training method of the application;
Fig. 5 is a flow diagram of an embodiment of the target recognition method of the application;
Fig. 6 is a structural schematic diagram of a first embodiment of the target recognition device of the application;
Fig. 7 is a structural schematic diagram of a second embodiment of the target recognition device of the application;
Fig. 8 is a structural schematic diagram of a third embodiment of the target recognition device of the application;
Fig. 9 is a structural schematic diagram of an embodiment of the robot of the application.
Detailed description of the embodiments
The application is described in detail with reference to the accompanying drawings and examples.
As shown in Fig. 1, a first embodiment of the target recognition model training method of the application includes:
S11: acquire a sampled image of the target to be identified.
The target to be identified includes, but is not limited to, goods, people, or animals, and may also be any other object that needs to be identified. The number of targets to be identified may be one, or two or more; this is not specifically limited here. In this application, the target to be identified is illustrated by taking goods as an example.
S12: input the sampled image into the recognition model and output segmentation data after recognition.
The recognition model is a model for identifying the target to be identified; its type may be chosen according to actual needs, for example a neural network recognition model. The segmentation data after recognition may be a region coordinate sequence of the target segmented from the sampled image, an edge coordinate sequence of the target, or an orthographic-projection edge coordinate sequence of the target; they may also be the sampled image with the target marked on it, or a region image of the segmented target. This is not specifically limited here.
S13: compare the segmentation data after recognition with the standard data of a standard image to obtain an identification error.
The standard image is an image, obtained in advance, in which the target to be identified has been marked, or from which the target to be identified has been segmented. When the standard image is acquired, the target edges are coated with a fluorescent material, and the standard data of the actual target can be obtained from the color developed by the fluorescent material in the standard image. The standard data may use the same description form as the segmentation data: a region coordinate sequence of the target, an edge coordinate sequence of the target, or an orthographic-projection edge coordinate sequence of the target; they may also be the captured standard image showing the developed fluorescent color, or a region image in which the developed color marks out the target. In other words, the standard data of the goods to be identified can be obtained from the color developed by the fluorescent material. The fluorescent material may be a material that shows a preset color under ordinary light, the preset color differing from the non-edge regions, for example black or red; it may also be a material that is colorless under ordinary light but develops a preset color, for example red, under light of a certain frequency band such as infrared light. The specific choice of fluorescent material depends on actual needs and is not limited here.
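The following is an illustrative sketch, not part of the disclosure, of how standard data could be derived from the developed color; OpenCV, the HSV range for a red-developing material, and the function name are all assumptions:

```python
import cv2
import numpy as np

def extract_standard_data(standard_image_path,
                          hsv_lo=(0, 120, 120), hsv_hi=(10, 255, 255)):
    """Derive edge-coordinate standard data from the color developed by the
    fluorescent edge coating. The HSV bounds assume a red-developing material
    and would need to be tuned for the actual material and lighting."""
    img = cv2.imread(standard_image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    edge_mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # Each connected contour of developed color is treated as the edge
    # coordinate sequence of one target to be identified (OpenCV >= 4 API).
    contours, _ = cv2.findContours(edge_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    return [c.reshape(-1, 2) for c in contours]  # list of (N, 2) sequences
```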
The identification error may be the difference between the segmentation data output by the recognition model and the standard data of the target in the standard image. For example, if the segmentation data and the standard data are both goods region data, goods edge data, or goods orthographic-projection edge data, the difference may be the difference between the two coordinate sequences.
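As a sketch of one possible error measure, the "difference of the coordinate sequences" could be computed as a mean nearest-point distance between the predicted and standard edge sequences; this particular metric is an assumption, not prescribed by the disclosure:

```python
import numpy as np

def identification_error(seg_coords, std_coords):
    """Mean distance from each predicted edge point to its nearest standard
    edge point. seg_coords and std_coords are (M, 2) and (N, 2) arrays."""
    seg = np.asarray(seg_coords, dtype=float)
    std = np.asarray(std_coords, dtype=float)
    # Pairwise distances between every predicted point and every standard point.
    d = np.linalg.norm(seg[:, None, :] - std[None, :, :], axis=-1)
    return d.min(axis=1).mean()
```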
S14: feed the identification error back to the recognition model and correct the recognition model.
Correcting the recognition model may mean correcting its parameters; for example, when the recognition model is a neural network model, the weights of the neural network are corrected.
Specifically, in an application example, a photographing device such as a camera, video camera, or visual sensor can be used to capture sampled images of the target to be identified in real time, for example images of a pile of goods stacked manually or by machine. The sampled image is then input into the recognition model, which segments the image, dividing the pile of goods into the multiple pieces of goods it contains so that each piece can be operated on later. The target in the sampled image is identified by methods such as limb recognition, face recognition, or region recognition to obtain its segmentation data, for example an edge coordinate sequence or a region coordinate sequence. The segmentation data are then compared with the standard data of a standard image of the same pile of goods acquired from the same viewing angle. In the standard image, the edges of the goods have been coated in advance with a fluorescent material, so that when the pile is photographed, standard data in which the goods edges show the preset color are obtained directly. It should be understood that the standard data are the recognition result the trained model is expected to reach; that is, they provide the recognition model with the standard segmentation of the target to be identified. It can then be checked whether the segmentation data and the standard data of the standard image are consistent, for example whether the edge coordinate sequences or region coordinate sequences are consistent; if not, the difference between them, for example the difference of the coordinate sequences, is calculated and fed back to the recognition model to correct its parameters and improve its recognition accuracy. In other embodiments, one may instead check whether the segmentation data fall within an allowable error interval of the standard data, and if not, calculate the difference between them or the amount by which the allowable error is exceeded. After adjusting the position of the goods or the shooting angle of the target, the above steps can be repeated to keep correcting the recognition model until it meets a preset requirement, for example an accuracy greater than a preset threshold (such as 80%), which indicates that the model has been trained; target recognition can then be carried out directly with the trained model.
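A minimal sketch of this training loop is shown below, assuming a PyTorch segmentation network, pixel-wise standard masks derived from the standard images, a binary cross-entropy error, and the 80% accuracy threshold used as an example above; none of these choices are fixed by the disclosure:

```python
import torch
import torch.nn.functional as F

def train_recognition_model(model, optimizer, loader, acc_threshold=0.8):
    """loader yields (sampled_image, standard_mask) pairs, where standard_mask
    is a float tensor in [0, 1] built from the developed fluorescent edges."""
    model.train()
    while True:
        accuracy_sum, batches = 0.0, 0
        for sampled_image, standard_mask in loader:
            seg_logits = model(sampled_image)           # segmentation data
            error = F.binary_cross_entropy_with_logits(seg_logits, standard_mask)
            optimizer.zero_grad()
            error.backward()                            # feed the error back
            optimizer.step()                            # correct the weights
            pred = seg_logits.sigmoid() > 0.5
            accuracy_sum += (pred == standard_mask.bool()).float().mean().item()
            batches += 1
        if accuracy_sum / batches > acc_threshold:      # preset requirement met
            break
```

The outer loop corresponds to repeating the steps after the goods position or shooting angle has been changed and new sampled/standard pairs have been added to the data.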
The target to be identified may be all objects contained in the image, certain objects, or one specific object, and can be set flexibly according to the needs of the application scenario.
In this embodiment, when the standard image is acquired during training of the target recognition model, the target edges are coated with a fluorescent material and the standard data of the target are obtained from the color developed by the fluorescent material in the standard image. The standard data can therefore be obtained directly from the acquired standard image, rather than obtaining a standard image by manual outlining after image acquisition and then deriving the standard data. This embodiment thus saves the time spent on manual outlining and improves the speed and efficiency of model training. Moreover, the sampled image and the standard image can be acquired together with the above photographing device, so that the training of the target recognition model can be performed directly, without having to spend a large amount of time manually outlining the goods after the sampled image is acquired in order to obtain the standard image.
As shown in Fig. 2, a second embodiment of the target recognition model training method of the application, based on the first embodiment, further includes, before step S13:
S131: photograph, with a photographing device, the target to be identified whose edges are coated with the fluorescent material, so as to obtain the standard image.
The fluorescent material may be a material that directly shows a certain color (such as black) under natural light, or a colorless fluorescent material that appears colorless under natural light but shows a specific color, such as red, under light of a certain frequency band. The photographing device may be an ordinary camera combined with a specific light source arranged so that the fluorescent material shows the required color, or a camera with a specific function, for example a camera equipped with a light source that makes the fluorescent material develop its color.
Specifically, in an application example, when the fluorescent material shows a certain color (such as black) under natural light, a light source device can be turned on to provide the specific light so that the fluorescent material shows the particular color, and an ordinary camera can be used directly to photograph the target to be identified, for example a pile of goods whose edges are coated with the fluorescent material. In the captured image, the edges of the goods will show the color of the fluorescent material, so that the image can be used directly as the standard image.
Optionally, as shown in Fig. 2, when the fluorescent material is a colorless fluorescent material (such as a colorless fluorescent ink), the method further comprises, before step S131:
S130: irradiate the fluorescent material with light of a preset frequency band so that the fluorescent material shows a preset color.
The preset frequency band is the frequency band of light that makes the fluorescent material show the preset color. The same fluorescent material may show different colors under light of different frequency bands, and different types of fluorescent material may show different colors under light of the same frequency band, so the specific value of the preset frequency band can be selected according to the type and characteristics of the fluorescent material.
Specifically, with reference to Fig. 3, in an application example, when the standard image of the goods to be identified is acquired, a light source 301 can first irradiate a placed pile of goods 302 to be identified with light of the preset frequency band, so that the fluorescent material coated on the edges of the pile of goods 302 shows the preset color (for example red); a photographing device 303 (such as a camera) can then directly obtain a standard image with developed edges.
In another application example, where the standard image contains at least two targets to be identified, different fluorescent materials can be applied to the edges of adjacent targets, such that under a light source of a specific frequency band the different materials show different colors, or such that under light sources of different frequency bands the different materials develop their colors separately. The edges of adjacent targets therefore show different colors, so that when the edge contours of adjacent targets are compared, the adjacent targets can be distinguished more easily, further improving the recognition accuracy of the target recognition model.
For example, with reference to Fig. 4, consider the part 401 circled in Fig. 4. If the same type of ink is used to coat the edges of the goods, the edges of several adjacently placed goods A, B and C overlap in part 401, making it difficult for the recognition model to judge which goods the overlapping coordinate sequence belongs to. The goods to be identified then cannot be segmented accurately, which affects the recognition accuracy of the model and, in turn, the grasp pose subsequently planned by the robot. For instance, the robot may interpret a closed outline as one face of a single piece of goods: it may assume that the closed dashed line 402 is a face of goods A, whereas the closed dashed line 402 is in fact only part of a face of goods A, a portion of which is occluded. If the system or robot plans a grasp based on this face, it may approach perpendicular to the face outlined by the closed dashed line 402 and retreat vertically, so that the occluded part of goods A collides with the goods C in front, which may cause the task to fail. In this application example, different types of ink can therefore be applied to the edges of the adjacently placed goods (such as A, B and C), so that when the light of a given frequency band emitted by the light source 403 irradiates goods A, B and C, their edge portions show different colors. In the standard image, the edges of different goods can then be distinguished by color, yielding standard data that more precisely separate each piece of goods, so that the identification error can be obtained more accurately when training the recognition model, which in turn improves the recognition accuracy of the model.
Of course, in other application examples, fluorescent materials that develop their colors under light sources of different frequency bands can be applied, and the fluorescent materials are irradiated with light of the different frequency bands when the standard image is acquired, so that the materials on adjacently placed goods develop their colors under light sources of different frequency bands. For example, some boxes may be coated with a fluorescent material that develops red under infrared light, while other boxes are coated with a material that develops red, or another color, under ultraviolet light. With the boxes stacked in a fixed position and the photographing device at a fixed shooting angle, infrared light can be applied to obtain standard image A and ultraviolet light to obtain standard image B (or the ultraviolet image B can be obtained first and then the infrared image A); the standard data of the different boxes can then be obtained from standard image A and standard image B respectively. It can be understood that fluorescent materials developing under a variety of light source frequency bands can be applied to different boxes in order to obtain the standard data of each box accurately. Further, with the boxes stacked in a fixed position and the shooting angle fixed, the step of acquiring the sampled image can be performed before, between, or after the acquisitions of the standard images corresponding to the multiple light source frequency bands.
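A sketch of how the standard data from standard images A and B could be merged is given below; OpenCV, the HSV ranges, and the function name are assumptions:

```python
import cv2
import numpy as np

def merge_standard_data(img_a, img_b, range_a, range_b):
    """img_a: standard image captured under infrared light; img_b: standard
    image captured under ultraviolet light; range_a / range_b: (lo, hi) HSV
    bounds for the colors the two fluorescent materials develop. Adjacent
    boxes are coated with different materials, so their edges never overlap
    within a single mask."""
    edges = []
    for img, (lo, hi) in ((img_a, range_a), (img_b, range_b)):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        edges.extend(c.reshape(-1, 2) for c in contours)
    return edges  # one edge-coordinate sequence per box
```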
After a pile of goods has been stacked and the photographing device has been set to an acquisition viewing angle, the sampled image and the standard image can be obtained together. For example, after the sampled image is captured under a nonspecific light source, the specific light source can be turned on immediately to obtain the standard image; the order of the two acquisitions is not limited. This effectively reduces the time needed to obtain the standard image. The stacking manner or acquisition viewing angle can then be changed to obtain the next group of sampled and standard images, improving the overall training efficiency.
When the fluorescent material is transparent under natural light, or has a color consistent with that of the goods to be identified, presenting a different color only under the light source of the specific frequency band further prevents the sampled image from showing feature data caused by the fluorescent material, and thus prevents the recognition model from using such feature data as a condition for outputting segmentation data. The trained recognition model can therefore more accurately identify goods that carry no fluorescent material in practical applications. It can be understood that, in other embodiments, if in practical applications the goods to be identified are objects that do carry fluorescent material, the recognition model can be trained to use the characteristics of the fluorescent material as the segmentation condition.
As shown in Fig. 5, an embodiment of the target recognition method of the application includes:
S21: acquire an image of the goods to be identified.
S22: input the image into the trained target recognition model and output the segmentation data of the goods to be identified after recognition.
S23: use the segmentation data to obtain data of the goods to be identified, the data being used to plan a position and/or posture for grasping the goods.
The target recognition model is trained by the method provided in the first or second embodiment of the target recognition model training method of the application.
Specifically, in an application example, before grasping goods, the robot needs to obtain data of the goods, such as spatial data and/or the face to be grasped, so that a grasp pose can be planned. Before grasping, the goods must therefore first be identified: a photographing device (such as a camera) captures an image of the goods to be identified, the image is input into the trained target recognition model, and the model outputs the segmentation data of the goods. From the segmentation data, the robot can obtain the spatial data of each piece of goods, such as the position and/or posture, the edge contours of each face, and dimension information such as length, width and height, and can then plan, based on the spatial data, which face of the goods to grasp and the position and/or posture with which the robot grasps the goods.
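A sketch of this recognition step is shown below, assuming a PyTorch model that outputs a per-pixel mask and using SciPy's connected-component labeling to split the mask into individual goods; the names, the 0.5 threshold, and the summary fields are assumptions:

```python
import numpy as np
import torch
from scipy import ndimage

@torch.no_grad()
def identify_cargo(model, image_tensor, prob_threshold=0.5):
    """Run the trained recognition model on one image of goods and summarize
    each segmented region into data usable for grasp planning."""
    model.eval()
    mask = (model(image_tensor).sigmoid() > prob_threshold).squeeze().cpu().numpy()
    labels, n = ndimage.label(mask)          # one label per connected goods region
    cargo_data = []
    for i in range(1, n + 1):
        ys, xs = np.where(labels == i)
        cargo_data.append({
            "bbox": (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())),
            "centroid": (float(xs.mean()), float(ys.mean())),  # candidate grasp point
        })
    return cargo_data
```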
In this embodiment, during the training of the recognition model used for target recognition, when the standard image is acquired the target edges are coated with a fluorescent material, and the standard data of the target are obtained from the color developed by the fluorescent material in the standard image. The standard data can therefore be obtained directly from the acquired standard image, rather than by manually outlining after image acquisition to obtain a standard image and then the standard data. This method saves the time spent on manual outlining, improves the speed and efficiency of model training, and ultimately facilitates planning the grasp pose for the goods, improving grasping efficiency.
As shown in Fig. 6, a first embodiment of the target recognition device 60 of the application includes: a communication circuit 601 and a processor 602 connected to each other.
The communication circuit 601 is used to acquire a sampled image of the target to be identified; the processor 602 is used to input the sampled image into a recognition model, output segmentation data after recognition, compare the segmentation data after recognition with the standard data of a standard image to obtain an identification error, feed the identification error back to the recognition model, and correct the recognition model.
When the standard image is acquired, the edges of the target to be identified are coated with a fluorescent material, and the standard data of the target are obtained from the color developed by the fluorescent material in the standard image.
The processor 602 controls the operation of the target recognition device 60 and may also be called a CPU (Central Processing Unit). The processor 602 may be an integrated circuit chip with signal processing capability. The processor 602 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Optionally, the processor 602 is further configured to: input the sampled image of the target to be identified into a neural network recognition model, identify the target using the neural network recognition model, and output the segmentation data of the target. The segmentation data include: a region coordinate sequence of the target, an edge coordinate sequence of the target, or an orthographic-projection edge coordinate sequence of the target; the sampled image with the target marked on it; or a region image of the segmented target.
Optionally, the processor 602 may also be configured to: return to the step of acquiring a sampled image of the target to be identified, so as to obtain a sampled image of the target after the position has been changed or the acquisition viewing angle of the shot has been changed, until the accuracy of the recognition model is greater than a preset threshold, at which point the training of the recognition model is complete.
Optionally, the communication circuit 601 is further used to acquire an image of goods to be identified; the processor 602 is further configured to input the image into the trained recognition model, output the segmentation data of the goods after recognition, and then use the segmentation data to obtain data of the goods, the data being used to plan a pose for grasping the goods.
The data of the goods to be identified include the face to be grasped and/or the spatial data of the goods, for example the coordinate information of the goods in space, and length, width and height information describing the space.
Optionally, the processor 602 is also configured to mark the edges of the goods to be identified in the image and to output the marked image.
The target recognition device 60 may obtain the sampled image and the standard image of the target to be identified from other external devices or systems through the communication circuit 601. The specific process by which the target recognition device 60 trains the target recognition model using the processor 602 can refer to the methods provided in the first or second embodiment of the target recognition model training method of the application and to the embodiment of the target recognition method of the application, and is not repeated here.
In this embodiment, during the training of the target recognition model by the target recognition device, when the standard image is acquired the target edges are coated with a fluorescent material, and the standard data of the target are obtained from the color developed by the fluorescent material in the standard image. The standard data can therefore be obtained directly from the acquired standard image, rather than by manually outlining after image acquisition to obtain a standard image and then the standard data. This saves the time spent on manual outlining and improves the speed and efficiency of model training.
In other embodiments, the target recognition device may also obtain the standard image and the sampled image using a photographing device connected to the communication circuit.
Specifically, as shown in Fig. 7, the structure of the second embodiment of the target recognition device of the application is similar to that of the first embodiment and is not described again here; the difference is that the target recognition device 70 of this embodiment further includes a photographing device 603, connected to the communication circuit 601, for photographing the target to be identified whose edges are coated with the fluorescent material, so as to obtain the standard image.
The photographing device 603 may be an ordinary camera, a video camera, or a 3D camera, or a camera with a specific function, for example a camera equipped with a light source that makes the fluorescent material develop its color. The fluorescent material may directly show a certain color (such as black) under natural light, or may be a colorless fluorescent material that appears colorless under natural light but shows a specific color, such as red, under light of a certain frequency band.
The number of targets to be identified in the standard image may be one, or two or more. When there are multiple targets, different fluorescent materials are applied to the edges of adjacent targets, such that under a light source of a specific frequency band the different materials show different colors, or such that under light sources of different frequency bands the different materials develop their colors separately. The edges of adjacent targets can thus show different colors, so that when the edge contours of adjacent targets are compared, the adjacent targets can be distinguished more easily, further improving the recognition accuracy of the target recognition model.
Optionally, the photographing device 603 may also be used to capture the sampled image of the target to be identified and transfer the sampled image to the communication circuit 601.
The specific functions of the photographing device 603 can refer to the content of step S131 of the second embodiment of the target recognition model training method of the application and are not repeated here.
Of course, in other embodiments, the sampled image and the standard image of the target to be identified may also be obtained with different photographing devices.
In other embodiments, the target recognition device may also first use a light source to emit light of a preset frequency band so that the fluorescent material on the edges of the target to be identified shows a preset color, and then obtain the standard image.
Specifically, as shown in Fig. 8, the structure of the third embodiment of the target recognition device of the application is similar to that of the second embodiment and is not described again here; the difference is that the target recognition device 80 of this embodiment further includes a light source 604, connected to the photographing device 603, for generating light of a preset frequency band to irradiate the fluorescent material, so that the fluorescent material shows the preset color when the photographing device 603 captures the image of the target to be identified.
The preset frequency band is the frequency band of light that makes the fluorescent material show the preset color. The same fluorescent material may show different colors under light of different frequency bands, and different types of fluorescent material may show different colors under light of the same frequency band, so the specific value of the preset frequency band can be selected according to the type and characteristics of the fluorescent material.
In this embodiment, the specific process by which the target recognition device 80 irradiates the target to be identified with the light source 604 and obtains the standard image with the photographing device 603 can refer to step S130 of the second embodiment of the target recognition model training method of the application and is not repeated here.
As shown in Fig. 9, an embodiment of the robot 90 of the application includes: a mechanical arm 901 and a target recognition device 902 connected to each other.
The structure and functions of the target recognition device 902 can refer to any of the first to third embodiments of the target recognition device of the application and are not repeated here.
The mechanical arm 901 is used to plan a pose for grasping the target according to the target data identified by the target recognition device 902, so as to grasp the target object.
The mechanical arm 901 may be provided with an end effector (not shown). After the robot 90 identifies the spatial data of the target through the target recognition device 902, it can plan the best pose for grasping the target and control the end effector of the mechanical arm 901 to grasp the target.
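Purely as an illustration of how the identified data might feed the planner, and not as the planner described here, the sketch below turns a 3D goods centroid (assumed to come from a depth camera, in camera coordinates) into a top-down grasp position and a pre-grasp position:

```python
import numpy as np

def plan_top_down_grasp(point_cam, camera_to_base, approach_height=0.10):
    """point_cam: 3D centroid of the identified goods in camera coordinates.
    camera_to_base: 4x4 homogeneous transform from camera to robot base.
    Returns (pre_grasp_position, grasp_position) in base coordinates."""
    p = camera_to_base @ np.append(np.asarray(point_cam, dtype=float), 1.0)
    grasp_position = p[:3]
    # Approach straight down: start approach_height metres above the goods.
    pre_grasp_position = grasp_position + np.array([0.0, 0.0, approach_height])
    return pre_grasp_position, grasp_position
```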
In this embodiment, during the training of the target recognition model by the robot using the target recognition device, when the standard image is acquired the target edges are coated with a fluorescent material, and the standard data of the target to be identified are obtained from the color developed by the fluorescent material in the standard image. The standard data can therefore be obtained directly from the acquired standard image, rather than by manually outlining after image acquisition to obtain a standard image and then the standard data. This method saves the time spent on manual outlining and improves the speed and efficiency of model training.
The above are only embodiments of the present application and do not limit the scope of the patent. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the application, whether applied directly or indirectly in other related technical fields, is likewise included in the scope of patent protection of the application.

Claims (20)

1. A target recognition model training method, characterized in that it comprises:
acquiring a sampled image of a target to be identified;
inputting the sampled image into a recognition model and outputting segmentation data after recognition;
comparing the segmentation data with standard data of a standard image to obtain an identification error;
feeding the identification error back to the recognition model and correcting the recognition model;
wherein, when the standard image is acquired, the edges of the target to be identified are coated with a fluorescent material, and the standard data of the target to be identified are obtained according to the color developed by the fluorescent material in the standard image.
2. The target recognition model training method according to claim 1, characterized in that, before the segmentation data are compared with the standard data of the standard image, the method comprises:
photographing, with a photographing device, the target to be identified whose edges are coated with the fluorescent material, so as to obtain the standard image.
3. The target recognition model training method according to claim 2, characterized in that the fluorescent material is a colorless fluorescent material; before the target to be identified whose edges are coated with the fluorescent material is photographed with the photographing device to obtain the standard image, the method further comprises:
irradiating the fluorescent material with light of a preset frequency band, so that the fluorescent material shows a preset color.
4. The target recognition model training method according to claim 1, characterized in that the number of targets to be identified in the standard image is at least two; different fluorescent materials are applied to the edges of adjacent targets to be identified, so that the edges of the adjacent targets to be identified show different colors.
5. The target recognition model training method according to claim 1, characterized in that inputting the sampled image into the recognition model and outputting segmentation data after recognition comprises:
inputting the sampled image into a neural network recognition model and identifying the target to be identified using the neural network recognition model;
outputting the segmentation data of the target to be identified, the segmentation data comprising a region coordinate sequence of the target to be identified, an edge coordinate sequence of the target to be identified, or an orthographic-projection edge coordinate sequence of the target to be identified, or the sampled image with the target to be identified marked on it, or a region image of the segmented target to be identified.
6. The target recognition model training method according to claim 1, characterized in that, after the recognition model is corrected, the method comprises:
returning to the step of acquiring a sampled image of the target to be identified, so as to obtain a sampled image of the target to be identified after the position has been changed and/or the acquisition angle has been changed, until the accuracy of the recognition model is greater than a preset threshold, at which point the training of the recognition model is complete.
7. A target recognition method, characterized in that it comprises:
acquiring an image of goods to be identified;
inputting the image into a trained target recognition model and outputting segmentation data of the goods to be identified after recognition;
using the segmentation data to obtain data of the goods to be identified, the data of the goods to be identified being used to plan a position and/or posture for grasping the goods to be identified;
wherein the target recognition model is trained by the target recognition model training method according to any one of claims 1 to 6.
8. The target recognition method according to claim 7, characterized in that the segmentation data comprise: a region coordinate sequence of the target to be identified, an edge coordinate sequence of the target to be identified, or an orthographic-projection edge coordinate sequence of the target to be identified, or the sampled image with the target to be identified marked on it, or a region image of the segmented target to be identified.
9. The target recognition method according to claim 7, characterized in that the data of the goods to be identified comprise the face to be grasped of the goods to be identified and/or the spatial data of the goods to be identified.
10. A target recognition device, characterized in that it comprises: a communication circuit and a processor connected to each other;
the communication circuit is used to acquire a sampled image of a target to be identified;
the processor is used to input the sampled image into a recognition model, output segmentation data after recognition, compare the segmentation data with standard data of a standard image to obtain an identification error, feed the identification error back to the recognition model, and correct the recognition model;
wherein, when the standard image is acquired, the edges of the target to be identified are coated with a fluorescent material, and the standard data of the target to be identified are obtained according to the color developed by the fluorescent material in the standard image.
11. The device according to claim 10, characterized in that it further comprises: a photographing device, connected to the communication circuit, for obtaining an image of the target to be identified whose edges are coated with the fluorescent material, so as to obtain the standard image.
12. The device according to claim 11, characterized in that it further comprises: a light source, connected to the photographing device, for generating light of a preset frequency band to irradiate the fluorescent material, so that the fluorescent material shows a preset color when the photographing device captures the image of the target to be identified.
13. The device according to claim 11, characterized in that the photographing device is further used to capture the sampled image of the target to be identified and to transfer the sampled image to the communication circuit.
14. The device according to claim 10, characterized in that the number of targets to be identified in the standard image is at least two; different fluorescent materials are applied to the edges of adjacent targets to be identified, so that the edges of the adjacent targets to be identified show different colors.
15. The device according to claim 10, characterized in that the processor is further configured to: input the sampled image into a neural network recognition model and identify the target to be identified using the neural network recognition model; compare the segmentation data of the recognized target to be identified with the standard data of the target to be identified in the standard image to obtain an identification error; and feed the identification error back to the neural network recognition model to correct the weights of the neural network recognition model.
16. The device according to claim 10, characterized in that the processor is further configured to: return to the step of acquiring a sampled image of the target to be identified, so as to obtain a sampled image of the target to be identified after the position has been changed and/or the acquisition viewing angle has been changed, until the accuracy of the recognition model is greater than a preset threshold, at which point the training of the recognition model is complete.
17. The device according to claim 16, characterized in that:
the communication circuit is further used to acquire an image of goods to be identified;
the processor is further configured to input the image into the trained recognition model, output segmentation data of the goods to be identified after recognition, and use the segmentation data to obtain data of the goods to be identified, the data of the goods to be identified being used to plan a pose for grasping the goods to be identified.
18. The device according to claim 17, characterized in that the processor is further configured to mark the edges of the goods to be identified in the image and to output the marked image.
19. The device according to claim 17, characterized in that the data of the goods to be identified comprise the face to be grasped of the goods to be identified and/or the spatial data of the goods to be identified.
20. A robot, characterized in that it comprises: a mechanical arm and the target recognition device according to any one of claims 10 to 19, connected to each other;
the mechanical arm is used to plan a position and/or posture for grasping a target according to the data of the target identified by the target recognition device, so as to operate on the target object.
CN201880002216.8A 2018-02-02 2018-02-02 Target recognition model training method, target recognition method, device and robot Pending CN109313710A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/075134 WO2019148453A1 (en) 2018-02-02 2018-02-02 Method for training target recognition model, target recognition method, apparatus, and robot

Publications (1)

Publication Number Publication Date
CN109313710A true CN109313710A (en) 2019-02-05

Family

ID=65221748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880002216.8A Pending CN109313710A (en) 2018-02-02 2018-02-02 Model of Target Recognition training method, target identification method, equipment and robot

Country Status (2)

Country Link
CN (1) CN109313710A (en)
WO (1) WO2019148453A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308003B (en) * 2020-11-06 2023-04-28 中冶赛迪信息技术(重庆)有限公司 Method, system, equipment and medium for identifying loading state of scrap steel wagon
CN113561181B (en) * 2021-08-04 2023-01-31 北京京东乾石科技有限公司 Target detection model updating method, device and system
CN114264607B (en) * 2021-12-29 2022-06-28 佛山市帆思科材料技术有限公司 Machine vision-based tile color difference online detection system and method
CN114310954B (en) * 2021-12-31 2024-04-16 北京理工大学 Self-adaptive lifting control method and system for nursing robot
CN115019300B (en) * 2022-08-09 2022-10-11 成都运荔枝科技有限公司 Method for automated warehouse goods identification

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1738426A (en) * 2005-09-09 2006-02-22 南京大学 Video motion goal division and track method
CN101354359B (en) * 2008-09-04 2010-11-10 湖南大学 Method for detecting, tracking and recognizing movement visible exogenous impurity in medicine liquid
CN105678332B (en) * 2016-01-08 2020-01-10 昆明理工大学 Converter steelmaking end point judgment method and system based on flame image CNN recognition modeling
CN106023220B (en) * 2016-05-26 2018-10-19 史方 A kind of vehicle appearance image of component dividing method based on deep learning

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101573959A (en) * 2006-11-01 2009-11-04 索尼株式会社 Segment tracking in motion picture
CN103827919A (en) * 2011-07-28 2014-05-28 医疗技术股份公司 Method for providing images of a tissue section
CN103914851A (en) * 2013-01-08 2014-07-09 彩滋公司 Using infrared imaging to create digital images for use in product customization
CN103186901A (en) * 2013-03-29 2013-07-03 中国人民解放军第三军医大学 Full-automatic image segmentation method
US20160125601A1 (en) * 2014-11-05 2016-05-05 Carestream Health, Inc. Detection of tooth condition using reflectance images with red and green fluorescence
US20170154212A1 (en) * 2015-11-30 2017-06-01 International Business Machines Corporation System and method for pose-aware feature learning
CN105654045A (en) * 2015-12-29 2016-06-08 大连楼兰科技股份有限公司 Method applied in active driving technology for identifying traffic control personnel
CN105787482A (en) * 2016-02-26 2016-07-20 华北电力大学 Specific target outline image segmentation method based on depth convolution neural network
CN106874914A (en) * 2017-01-12 2017-06-20 华南理工大学 A kind of industrial machinery arm visual spatial attention method based on depth convolutional neural networks
CN107392895A (en) * 2017-07-14 2017-11-24 深圳市唯特视科技有限公司 A kind of 3D blood vessel structure extracting methods based on convolution loop network

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046545A (en) * 2019-03-05 2019-07-23 深兰科技(上海)有限公司 A kind of laying for goods system, method, apparatus, electronic equipment and storage medium
CN110222235A (en) * 2019-06-11 2019-09-10 百度在线网络技术(北京)有限公司 3 D stereo content display method, device, equipment and storage medium
CN111746169A (en) * 2020-05-26 2020-10-09 湖南天琪智慧印刷有限公司 Positioning mark thermal sensitive paper and its positioning method and making method
CN111708794A (en) * 2020-06-22 2020-09-25 中国平安财产保险股份有限公司 Data comparison method and device based on big data platform and computer equipment
CN111708794B (en) * 2020-06-22 2024-05-03 中国平安财产保险股份有限公司 Data comparison method and device based on big data platform and computer equipment
CN112388655A (en) * 2020-12-04 2021-02-23 齐鲁工业大学 Grabbed object identification method based on fusion of touch vibration signals and visual images
CN112388655B (en) * 2020-12-04 2021-06-04 齐鲁工业大学 Grabbed object identification method based on fusion of touch vibration signals and visual images
WO2022121766A1 (en) * 2020-12-07 2022-06-16 天津天瞳威势电子科技有限公司 Method and apparatus for detecting free space
CN114911221A (en) * 2021-02-09 2022-08-16 北京小米移动软件有限公司 Robot control method and device and robot
CN114911221B (en) * 2021-02-09 2023-11-28 北京小米机器人技术有限公司 Robot control method and device and robot
CN116645413A (en) * 2023-06-02 2023-08-25 湖州丽天智能科技有限公司 Photovoltaic cell panel position identification method and system and photovoltaic robot

Also Published As

Publication number Publication date
WO2019148453A1 (en) 2019-08-08

Similar Documents

Publication Publication Date Title
CN109313710A (en) Target recognition model training method, target recognition method, device and robot
CN111145177B (en) Image sample generation method, specific scene target detection method and system thereof
CN104484648B (en) Robot variable visual angle obstacle detection method based on outline identification
CN103942796A (en) High-precision projector and camera calibration system and method
CN105147311B (en) For the visualization device sub-scanning localization method and system in CT system
CN108701234A (en) Licence plate recognition method and cloud system
JP3930482B2 (en) 3D visual sensor
CN109087315B (en) Image identification and positioning method based on convolutional neural network
CN109341591A (en) A kind of edge detection method and system based on handheld three-dimensional scanner
CN109242835A (en) Vehicle bottom defect inspection method, device, equipment and system based on artificial intelligence
CN106625713A (en) Method of improving gumming accuracy of gumming industrial robot
CN108898634A (en) Pinpoint method is carried out to embroidery machine target pinprick based on binocular camera parallax
CN109993086A (en) Method for detecting human face, device, system and terminal device
CN111721259A (en) Underwater robot recovery positioning method based on binocular vision
CN105678710B (en) Color correction also original system and color correction restoring method
JP2014524074A (en) Method for processing multiple images of the same scene
CN104361580A (en) Projected image real-time correction method based on planar screen
CN114049557A (en) Garbage sorting robot visual identification method based on deep learning
CN115816471B (en) Unordered grabbing method, unordered grabbing equipment and unordered grabbing medium for multi-view 3D vision guided robot
CN109308702A (en) A kind of real-time recognition positioning method of target
CN110147162A (en) A kind of reinforced assembly teaching system and its control method based on fingertip characteristic
CN208254424U (en) A kind of laser blind hole depth detection system
CN110646431A (en) Automatic teaching method of gluing sensor
CN105939474A (en) Test equipment and test method for testing camera
CN106023319A (en) Laser point cloud ground target structural characteristic repairing method based on CCD picture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190205