CN101807244B - machine recognition and reconstruction method - Google Patents


Info

Publication number
CN101807244B
Authority
CN
China
Prior art keywords
model
characteristic
object model
image
projection
Prior art date
Legal status
Expired - Fee Related
Application number
CN2009100780799A
Other languages
Chinese (zh)
Other versions
CN101807244A (en)
Inventor
王晨升
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN2009100780799A
Publication of CN101807244A
Application granted
Publication of CN101807244B
Legal status: Expired - Fee Related
Anticipated expiration

Abstract

The invention discloses a machine recognition and reconstruction method for recognizing and reconstructing an object. The method comprises the following steps: acquiring an image of the object to be recognized; extracting image features; retrieving an object model from an object model knowledge base and extracting the features of the model; comparing the model features with the image features; if they match, recognizing the object and retrieving the relevant information in the object model knowledge base to reconstruct the object to be recognized; if not, retrieving the next model and repeating the above operations until a matching model is found. Using this method, a robot can easily recognize and reconstruct objects.

Description

Machine recognition and reconstruction method
Technical field
The present invention relates to the field of machine vision. More specifically, it relates to a machine recognition and reconstruction method.
Background technology
Recognizing and reconstructing target objects is an extremely important technical problem in computer vision and robot recognition. For example, at a robot-operated site in industrial manufacturing, a robot needs to identify the objects present at the site, reconstruct the recognized objects on the basis of that recognition, and then take the appropriate action for each object according to the recognition result. How to achieve correct recognition and reconstruction of objects has long been a technical difficulty hampering the development of machine vision.
In the prior art there are mainly two kinds of recognition methods: one based on structural descriptions of objects, and one based on images. Recognizers based on structural descriptions mainly use Marr's computational theory of vision, which treats object recognition as a multi-level process: simple local features are recognized first, and complex three-dimensional objects are then recognized step by step. Because recognition starts from local features, this method performs poorly on the global features of complex three-dimensional objects, and its results often differ greatly from the real structure. Image-based recognizers mainly apply human cognitive strategies to recognize the global shape of an object, thereby remedying the deficiencies of the structure-description-based method.
Reconstructing the object on the basis of recognition has always been a research focus in this field. Existing object reconstruction methods are mainly based either on projection images or on geometric projection features. Because of the complexity of the three-dimensional composition of objects and of their topological relations, the prior art has not yet proposed an object reconstruction method of practical scale.
Summary of the invention
The purpose of the present invention is therefore to provide a machine recognition method for the recognition and reconstruction of objects, so that a robot can recognize an object and reconstruct it on the basis of that recognition.
To this end, the present invention provides a machine recognition method for the recognition and/or reconstruction of an object, comprising:
Acquiring an image of the object to be recognized;
Performing feature extraction on the acquired object image, i.e. extracting the image features;
Providing an object model knowledge base comprising N object models, where N ≥ 1; retrieving a first object model from the object model knowledge base;
Performing feature extraction on the retrieved object model, i.e. extracting the model features;
Comparing the image features with the model features;
If the matching rate between the image features and the model features is not less than a set critical value, recording the retrieved object model as a candidate model;
If the matching rate between the image features and the model features is less than the set critical value, retrieving from the object model knowledge base a second object model different from the first object model, and repeating the model feature extraction and feature comparison steps, traversing the third, fourth, ..., Nth object models in the knowledge base until an object model whose model features match the image features of the acquired image is found.
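The traversal over the knowledge base described above can be sketched as follows. All names (`recognize`, `matching_rate`, the toy `overlap` measure) are illustrative placeholders; the patent prescribes no particular data structures or similarity formula:

```python
def recognize(image_feats, model_kb, matching_rate, threshold=0.9):
    """Traverse the object model knowledge base and return the
    best-matching model, or None if no model reaches `threshold`.
    `model_kb` maps model names to precomputed model features."""
    candidates = []
    for name, model_feats in model_kb.items():
        rate = matching_rate(image_feats, model_feats)
        if rate >= threshold:          # matching rate not below the critical value
            candidates.append((rate, name))
    if not candidates:
        return None                    # traversed all N models, no match found
    return max(candidates)[1]          # highest-rate candidate wins

# Toy similarity measure: overlap fraction of two feature sets.
overlap = lambda a, b: len(a & b) / len(a | b)
kb = {"wrench": {1, 2, 3, 4}, "hammer": {5, 6, 7, 8}}
best = recognize({1, 2, 3, 9}, kb, overlap, threshold=0.5)
```

Here the candidate with the highest matching rate is returned, matching the embodiment in which several models exceed the critical value and the best one is chosen.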
Compared with the prior art, because the present invention establishes an object model knowledge base, machine vision only requires extracting features from the image to be recognized and comparing them with the features of the models in the knowledge base. If the matching rate between the two reaches a certain threshold, the object to be recognized is identified as the matching object model, achieving recognition of the object; by simultaneously retrieving the information about the model stored in the knowledge base, reconstruction of the object to be recognized is achieved. This recognition method can be applied in various technical fields, such as factory automation, space exploration, and modern medicine. In factory automation, for example, a robot often needs to recognize a limited set of tools or work objects at a site, such as picking a tool from a tool rack and selecting a suitable work object on which to operate. According to the method of the invention, if models of this limited set of objects are built in advance and stored in the object model knowledge base, the robot can use the recognition and reconstruction method of the present invention to identify objects at the site and act accordingly.
These and other features and advantages of the present invention will become clear from the description of the specific embodiments below.
Description of drawings
The present invention is described below with reference to the accompanying drawings, which show preferred embodiments of the invention by way of example only. In the figures:
Fig. 1 is a flowchart of an exemplary embodiment of the machine recognition method according to the present invention;
Fig. 2 is a flowchart of another exemplary embodiment of the machine recognition method according to the present invention.
Embodiment
In the following, the present invention is explained more fully with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. The invention may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth here; rather, these embodiments are provided so that the disclosure is thorough and complete and fully conveys the scope of the invention to those of ordinary skill in the art.
Referring to Fig. 1, a flowchart of an exemplary embodiment of the machine recognition method according to the present invention is shown. As shown in the figure, the machine recognition method for recognizing and reconstructing an object comprises:
S101, acquiring an image of the object to be recognized. In this step, various image capture devices (such as cameras or video cameras) can be used to obtain images of objects in the scene. In one embodiment, for example in the field of remote control, a teleoperated robot uses a camera to obtain photographs of the site and sends them to a console over a wireless network; an operator can then select, through a graphical interface, the image of the target object the robot is to operate on. In another embodiment, for example in factory automation, a machining robot selects the object image to be recognized from the captured scene under program control, according to the manufacturing process flow.
S102, preprocessing the image. In this step, operations such as filtering, denoising, and distortion correction are applied to the selected object image to remove various kinds of noise and facilitate feature extraction. In one embodiment, this step can be omitted.
S103, performing feature extraction on the acquired object image, i.e. extracting the image features. In this step, feature extraction uses methods common in the prior art, for example the Canny algorithm and its improved variants for edge features, or the SIFT algorithm and its variants for structural features. For brevity, these are not elaborated here.
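As an illustration of this step only (not the Canny or SIFT algorithms themselves), the following minimal sketch marks pixels whose gradient magnitude exceeds a threshold, which is the core idea behind edge-based feature extraction. The list-of-lists image representation and the threshold value are assumptions:

```python
def edge_features(img, thresh=1.0):
    """Minimal stand-in for an edge detector such as Canny: collect
    pixels whose gradient magnitude exceeds a threshold.
    `img` is a grayscale image given as a list of lists of numbers."""
    h, w = len(img), len(img[0])
    edges = set()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]  # central difference in x
            gy = img[y + 1][x] - img[y - 1][x]  # central difference in y
            if (gx * gx + gy * gy) ** 0.5 >= thresh:
                edges.add((x, y))
    return edges

# A 5x5 image with a bright 3x3 square in the centre: the square's
# border shows up as edge pixels, its flat interior does not.
img = [[0, 0, 0, 0, 0],
       [0, 9, 9, 9, 0],
       [0, 9, 9, 9, 0],
       [0, 9, 9, 9, 0],
       [0, 0, 0, 0, 0]]
feats = edge_features(img, thresh=4.0)
```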
S104, providing the object model knowledge base, which comprises N object models, where N ≥ 1. In this step, a plurality of object models can be set up in the knowledge base in advance. In one embodiment, for example in a factory-automation robot operation scene, the tools or objects the robot needs to touch or operate are relatively limited, so this limited set of tools or objects can be modeled and stored in the object model knowledge base. In addition, object models can also be added on demand during recognition.
S105, retrieving a first object model from the object model knowledge base.
S106, performing feature extraction on the retrieved object model, i.e. extracting the model features. Model features are extracted using prior-art methods, for example the Canny algorithm. Structural features, shape features, projection features, boundary features, and so on can be extracted, for example using the methods mentioned in the background section.
S107, comparing the image features with the model features. The comparison judges the similarity between the features of the image and the features of the model. For convenience of description, this similarity is expressed as a matching rate, which describes the degree of similarity between two sets of features: the higher the matching rate, the more similar they are, and a matching rate of 100% means they are identical. In the machine vision process, a critical value (or threshold) for the matching rate can be set, for example 70%, 80%, 90%, 95%, or 99%. This accelerates the matching judgment: a correct conclusion can be reached without requiring all features to match completely, which saves time and improves efficiency.
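One illustrative way to compute such a matching rate (the patent does not prescribe a formula) is the fraction of image feature points that have a model feature point within some tolerance:

```python
def matching_rate(image_feats, model_feats, tol=1.0):
    """Fraction of image feature points that have a model feature
    point within Euclidean distance `tol`. This is only one possible
    definition of the matching rate described in the text."""
    if not image_feats:
        return 0.0
    def near(p, qs):
        return any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tol * tol
                   for q in qs)
    hits = sum(1 for p in image_feats if near(p, model_feats))
    return hits / len(image_feats)

# Three of the four image points coincide with model points; the
# outlier (5, 5) finds no model point within tolerance.
rate = matching_rate({(0, 0), (1, 0), (2, 0), (5, 5)},
                     {(0, 0), (1, 0), (2, 0)})
```

The rate is then compared against the set critical value, as in steps S108 and S109.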
The comparison result is judged, and different steps are carried out depending on the decision:
S108, if the matching rate between the image features and the model features is not less than the set critical value, the object to be recognized is identified as the retrieved object model, or the model is recorded as a candidate model.
S109, if the matching rate between the image features and the model features is less than the set critical value, a second object model different from the first is retrieved from the object model knowledge base. Before retrieving it, it is first judged whether the first object model is the last model in the knowledge base; if not, step S110 is executed to retrieve the next model, and the model feature extraction of S106 and the feature comparison of S107 are repeated, traversing the third, fourth, ..., Nth object models in the knowledge base until an object model whose model features match the image features of the acquired image is found.
In one embodiment, the critical value is adjustable, for example via software settings.
In another embodiment, when the critical value is set small, for example 85%, several models matching the image features of the object to be recognized may be found in the knowledge base; in that case the retrieved object model with the highest matching rate is chosen as the recognized object.
In yet another embodiment, when the critical value is set large, for example 99.999%, it is possible that no object model is found in the knowledge base; the critical value can then be lowered, for example to 80%, and the method carried out again.
Referring to Fig. 2, which shows a flowchart of another exemplary embodiment of the machine recognition method of the present invention.
The embodiment shown in Fig. 2 differs from the embodiment in Fig. 1 mainly in the feature extraction of the object model: what is extracted are projection features, and the following steps are carried out:
Step S206, projecting the model in a selected direction; S207, extracting the projection features, for example using the methods described above; S208, comparing the extracted model features with the image features, i.e. judging whether the features match.
A judgment is then made according to the result of the feature comparison:
If the matching rate between the projection features and the features of the object image reaches the set critical value, the object to be recognized is identified as the object model;
If the matching rate between the projection features and the features of the object image is less than the set critical value, step S211 is executed: the viewing direction of the model is changed to a second direction.
The above process is then repeated: if the matching rate between the projection features and the features of the object image reaches the set critical value, the change of direction stops; otherwise the viewing direction is changed to a third, fourth, ..., Mth direction, where M is a value set according to some search strategy, until the matching rate is not less than the set critical value; otherwise the process continues with the next object model.
Because the observed object (i.e., the retrieved object model) can, for example, be placed at the origin of a spatial coordinate system, different projections can be observed from different viewing angles. To improve the efficiency of the projection search, the present invention also provides a preferred search method.
First, a spatial coordinate system is established.
Then the viewing direction of the model is changed in increments, for example in angle increments; the angle increment can be any value between 1° and 10°, more preferably 3°-5°, though other angle values are possible.
While changing the viewing direction of the model, the matching rate between the projection features in a given direction and the features of the object image is compared with the matching rate at the next increment of that direction. If the matching rate increases, the viewing direction is advanced to the next increment; if it decreases, the viewing direction is moved one increment in the opposite direction, and the matching rates are again compared. This process is repeated until a direction with a maximal matching rate is found. If the matching rate in that direction is not less than the set critical value, the object to be recognized matches the model and the loop ends; otherwise the viewing direction is turned to another coordinate direction of the spatial coordinate system and the process is repeated, until a direction is found whose maximal matching rate is not less than the set critical value. If all directions have been traversed without finding one, it is judged that the model does not match the object to be recognized, and the loop ends.
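The incremental search just described is a one-dimensional hill climb over the viewing angle. A minimal sketch, assuming a `score` callable that stands in for projecting the model in a direction and comparing the projection features with the image features (both names are placeholders):

```python
def hill_climb(score, start_deg=0, step_deg=5, max_steps=72):
    """1-D hill climbing over a viewing angle: advance while the
    matching rate grows, reverse once when it shrinks, and stop at
    a local maximum. Returns (best_angle, best_rate)."""
    angle, rate = start_deg, score(start_deg)
    direction = +1
    for _ in range(max_steps):
        nxt = angle + direction * step_deg
        nxt_rate = score(nxt)
        if nxt_rate > rate:            # rate grows: keep going this way
            angle, rate = nxt, nxt_rate
        elif direction == +1:          # rate shrinks: try the other way
            direction = -1
        else:                          # shrinks both ways: local maximum
            break
    return angle, rate

# Toy matching-rate curve peaking at a viewing angle of 40 degrees.
curve = lambda a: max(0.0, 1.0 - abs(a - 40) / 100.0)
best_angle, best_rate = hill_climb(curve, start_deg=0, step_deg=5)
```

The found `best_rate` would then be compared against the critical value; if it falls short, the search moves to another coordinate direction, as the text describes.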
It should be understood that projections can be obtained in many ways; the method above is only exemplary, and the present invention is not limited to it. For example, an arbitrary point in space can be chosen, and the line through this point and the model (i.e. the viewing direction) used as an axis moving by arbitrary angles within some spatial plane, repeatedly obtaining projections and comparing the projection features with the features of the object to be recognized; after moving through 360° in this plane, the viewing direction is rotated by an angle and moved within another coordinate plane, until the whole coordinate space has been traversed. The maximum matching rate is then found and compared with the set critical value to make the judgment. Many other search algorithms can likewise improve the efficiency of the search.
In one embodiment, the features of each model's projections in different directions are extracted in advance and stored in the object model knowledge base. For example, a model can be projected in the spatial coordinate system while the viewing direction is changed by some angle increment; the features of these projections are extracted and then stored in the knowledge base. During feature matching, the features in the knowledge base are compared in order and a judgment is made. The benefit of this approach is that it saves the time spent on projection and extraction, at the cost of larger storage space. In another embodiment, projection and feature extraction can be performed in real time and the results then compared. The present invention places no specific limitation on this.
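The "extracted in advance" variant can be sketched as precomputing a store keyed by model and viewing angle; `project` and `feats` are placeholders for the real projection and feature-extraction routines, and the single 360° sweep is a simplification:

```python
def precompute_projections(models, project, feats, step_deg=5):
    """Build the precomputed store: for every model and every viewing
    angle (in `step_deg` increments over one 360-degree sweep, for
    simplicity), extract and store the projection features."""
    store = {}
    for name, model in models.items():
        for angle in range(0, 360, step_deg):
            store[(name, angle)] = feats(project(model, angle))
    return store

# Toy stand-ins: a "model" is a set of ints, its "projection" keeps
# the elements smaller than the angle, and the "features" are the
# projected set itself, frozen so it can be stored and compared.
project = lambda model, angle: {v for v in model if v < angle}
feats = lambda proj: frozenset(proj)
store = precompute_projections({"cube": {10, 200}}, project, feats, step_deg=90)
```

At match time, lookups in `store` replace the projection and extraction work, which is the time-for-space trade-off the text describes.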
When the object to be recognized is identified as the retrieved object model, the method may further comprise retrieving the model information in the object model knowledge base, thereby reconstructing the object to be recognized. The model information comprises geometric configuration, color, material, physicochemical characteristics, composition characteristics, and action information.
If all models in the object model knowledge base have been traversed and no object model whose model features match the image features of the acquired image has been found, the object to be recognized is judged to be a new object, and the user adds the model information corresponding to this new object to the object model knowledge base.
The present invention has been described above by way of example; it should be understood that various modifications and changes can be made to the invention without departing from its spirit and scope.

Claims (8)

1. A machine recognition method for the recognition and/or reconstruction of an object, characterized in that it comprises:
Acquiring an image of the object to be recognized;
Performing feature extraction on the acquired object image, i.e. extracting the image features;
Providing an object model knowledge base comprising N object models, where N ≥ 1;
Retrieving a first object model from the object model knowledge base;
Performing feature extraction on the retrieved object model, i.e. extracting the model features;
Comparing the image features with the model features;
If the matching rate between the image features and the model features is not less than a set critical value, recording the retrieved object model as a candidate model;
If the matching rate between the image features and the model features is less than the set critical value, retrieving from the object model knowledge base a second object model different from the first object model, and repeating said model feature extraction and said feature comparison, traversing the third, fourth, ..., Nth object models in the knowledge base until an object model is found whose model features match the acquired image features with a matching rate not less than the critical value, i.e. an object model matching the object to be recognized;
Wherein, in the step of performing feature extraction on the retrieved object model, the viewing direction of the model is changed so as to project the model in different directions, and feature extraction is performed on the projection in each direction;
The step of changing the viewing direction of the model comprises:
Establishing a spatial coordinate system;
Changing the viewing direction of the model in angle increments, specifically:
Comparing the matching rate between the projection features in a given direction and the features of the object image with the matching rate at the next angle increment of that direction; if the matching rate increases, advancing the viewing direction to the next angle increment; if it decreases, moving the viewing direction one angle increment in the opposite direction and comparing the matching rates; repeating this process until a direction is found in which the matching rate between the projection features and the features of the object image is not less than the set critical value, at which point the object to be recognized matches the model; otherwise turning the viewing direction to another coordinate direction of the spatial coordinate system and repeating the process; if all directions are traversed without finding such a direction, judging that the model does not match the object to be recognized.
2. The machine recognition method according to claim 1, wherein said critical value is adjustable.
3. The machine recognition method according to claim 2, wherein, when said critical value is set small and several candidate models matching the image features of the object to be recognized are found in the model knowledge base, the object model with the highest matching rate is chosen as the recognized object.
4. The machine recognition method according to claim 1, wherein the step of performing feature extraction on the retrieved object model further comprises: comparing the extracted projection features of the model in each direction with the features of the object image; if the matching rate between the projection features of the model in a direction and the features of the object image is not less than the set critical value, identifying the object to be recognized as the retrieved object model; if the matching rate in that direction is less than the set critical value, changing the viewing direction of the retrieved model and repeating the projection, feature extraction, and feature comparison steps, until the matching rate between the projection features in the current direction and the features of the object image is not less than the set critical value or an Mth direction is reached, where M is a value set according to some search strategy.
5. The machine recognition method according to claim 4, wherein the features of each model's projections in different directions are extracted in advance and stored in the object model knowledge base, or the features of each model's projections in different directions are extracted in real time during machine recognition.
6. The machine recognition method according to claim 1, wherein, if the object to be recognized is identified as the retrieved object model, the method further comprises retrieving the model information in the object model knowledge base.
7. The machine recognition method according to claim 1, wherein, if all models in the object model knowledge base have been traversed and no object model matching the image features of the acquired object image has been found, the object to be recognized is judged to be a new object, and the user adds the model information corresponding to this new object to the object model knowledge base.
8. The machine recognition method according to claim 6, wherein said model information comprises one or more of geometric configuration, color, physicochemical characteristics, and action information.
CN2009100780799A 2009-02-13 2009-02-13 machine recognition and reconstruction method Expired - Fee Related CN101807244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100780799A CN101807244B (en) 2009-02-13 2009-02-13 machine recognition and reconstruction method


Publications (2)

Publication Number Publication Date
CN101807244A CN101807244A (en) 2010-08-18
CN101807244B true CN101807244B (en) 2012-02-08

Family

ID=42609032

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100780799A Expired - Fee Related CN101807244B (en) 2009-02-13 2009-02-13 machine recognition and reconstruction method

Country Status (1)

Country Link
CN (1) CN101807244B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6795567B1 (en) * 1999-09-16 2004-09-21 Hewlett-Packard Development Company, L.P. Method for efficiently tracking object models in video sequences via dynamic ordering of features
CN1698067A (en) * 2003-04-28 2005-11-16 索尼株式会社 Image recognition device and method, and robot device
CN101271469A (en) * 2008-05-10 2008-09-24 深圳先进技术研究院 Two-dimension image recognition based on three-dimensional model warehouse and object reconstruction method


Non-Patent Citations (1)

Title
JP H10-312463 A, 1998-11-24

Also Published As

Publication number Publication date
CN101807244A (en) 2010-08-18


Legal Events

Date Code Title Description
C06 / PB01: Publication
C10 / SE01: Entry into substantive examination (entry into force of request for substantive examination)
C14 / GR01: Patent granted
C17 / CF01: Cessation of patent right (termination due to non-payment of annual fee)

Granted publication date: 20120208

Termination date: 20130213