CN103824085B - Device and method for image processing, image recognition and image classification - Google Patents

Device and method for image processing, image recognition and image classification

Info

Publication number
CN103824085B
Authority
CN
China
Prior art keywords
target
candidate
layer
initial stage
image
Prior art date
Legal status
Active
Application number
CN201410102668.7A
Other languages
Chinese (zh)
Other versions
CN103824085A (en)
Inventor
门洪涛
Current Assignee
Daqiang Vision Technology Huzhou Co ltd
Original Assignee
SUZHOU BITSTRONG CO Ltd
Priority date
Filing date
Publication date
Application filed by SUZHOU BITSTRONG CO Ltd filed Critical SUZHOU BITSTRONG CO Ltd
Priority to CN201410102668.7A priority Critical patent/CN103824085B/en
Publication of CN103824085A publication Critical patent/CN103824085A/en
Priority to JP2015050161A priority patent/JP2015179511A/en
Application granted granted Critical
Publication of CN103824085B publication Critical patent/CN103824085B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a device and method for image processing, image recognition and image classification that apply the necessary transformation to targets extracted from image data of an unspecified object, such as a photographed depiction of an unknown object, so that new targets corresponding to a known object can be extracted. A similarity measurement unit reads a similarity judgment program and extracts initial candidate targets. A candidate target transformation unit reads a thickening/thinning target selection program and a candidate target thickening/thinning program and transforms the initial candidate targets. A target determination unit then measures the similarity of the transformed targets, repeats the transformation action as necessary, and determines the targets to be extracted. In this way, new targets corresponding to the known object can be extracted.

Description

Device and method for image processing, image recognition and image classification
Technical field
The present invention relates to a device and method for image processing, image recognition and image classification that transform a target extracted from the image data of an unknown object to obtain a new target corresponding to a known object.
Background art
At present, when an image is processed by an image processing apparatus such as a computer and a partial image is extracted from image data such as a photographed depiction of an unknown object, the portion corresponding to the unknown object is not always extracted appropriately. For example, a partial image that should be extracted may be deleted from the result, or, conversely, an unneeded partial image that should be deleted may be included in it. If the extraction were performed by a human based on the image data, the necessary portions could be judged appropriately from the person's experience and knowledge, and suitable image data could be extracted. Such human judgment of the extraction range, however, is abstract, broad and diverse, and it is not easy for an image processing apparatus controlled by a computer or the like to make the same judgment as a human.
For example, a technique is known for extracting a target (partial image) with high accuracy from a general depiction: for an edge image divided into a plurality of tone bands (band-tone edge images), whether to extract a target in the image is decided according to the number of regions of band-tone edge images belonging to other bands that are adjacent to and correspond to the band-tone edge image of one band (see Patent Document 1: Japanese Unexamined Patent Publication No. H11-25271). In Patent Document 1, however, the edge portions of the image serve as the criterion for deciding whether to extract a target, and it is difficult to judge whether an object should be extracted relying solely on the edge information of the image; therefore, when a partial image is extracted, the target in the image cannot always be extracted appropriately.
Summary of the invention
To overcome the above drawbacks, the present invention provides a device and method for image processing, image recognition and image classification that apply the necessary transformation to a target extracted from an unknown image, such as a photographed depiction of an unknown object, so that a new target corresponding to a known object can be extracted. Here, a target means a contiguous partial image, formed by a group of pixel information within a certain region of the corresponding image portion.
The technical solution adopted by the present invention to solve its technical problem is as follows:
The image processing apparatus of the present invention extracts a target from the unknown image data of a photographed unknown object, transforms it, and extracts a new target corresponding to a known object. The image processing apparatus comprises the following units: a layer generation unit that generates layers, each layer being formed from an aggregate of divided images containing targets extracted from the unknown image data; an initial candidate target determination unit that determines initial candidate targets, i.e., targets contained in the layers generated by the layer generation unit that are to be transformed as initial candidates; and a candidate target transformation unit that transforms the initial candidate targets using targets other than the initial candidate targets contained in the layers generated by the layer generation unit, thereby generating new candidate targets.
With the above image processing apparatus, the candidate target transformation unit transforms the initial candidate targets and generates new candidate targets. By applying the necessary transformation to targets extracted from unknown image data such as a photographed depiction of an unknown object, new targets corresponding to a known object can be extracted.
In a specific aspect of the present invention, the apparatus further comprises: a similarity measurement unit that measures the similarity between a target and the known object, and a target determination unit that, based on the similarity measured by the similarity measurement unit, saves as a determined target either the new candidate target generated by the transformation of the candidate target transformation unit or a target selected from the candidate targets before transformation. In this case, based on the similarity measurement, it can be determined more definitely whether a transformed target is the target that should be extracted.
In another aspect of the present invention, the initial candidate target determination unit determines the initial candidate targets based on the measurement result of the similarity measurement unit. This promotes early determination of the targets to be extracted.
In another aspect of the present invention, the target determination unit determines the targets to be saved as determined targets based on a threshold defined in advance for the similarity measured by the similarity measurement unit. In this case, for example, all targets exceeding the predetermined threshold are saved as determined targets and treated as extraction objects, which reduces the possibility that a target to be extracted is omitted and left unextracted.
In another aspect of the present invention, the target determination unit compares, based on the similarity measured by the similarity measurement unit, the new candidate target generated by the transformation of the candidate target transformation unit with the candidate target before transformation, judges whether to repeat the transformation action of the candidate target transformation unit or to stop it, and saves the candidate target obtained from this judgment as the determined target. Through the transformation action, the target considered most appropriate can be extracted.
In another aspect of the present invention, among the various targets of the layers generated by the layer generation unit, the initial candidate target determination unit takes those whose similarity, as measured by the similarity measurement unit, exceeds a threshold as initial candidate targets. By selecting targets of relatively high similarity as initial candidate targets, the amount of computation in the transformation processing can be controlled.
In another aspect of the present invention, the similarity measurement unit measures similarity with respect to the known object, using as the measurement criterion a similarity (certainty factor) that quantifies how closely the extracted target approximates the known object. Based on this measurement, the degree of approximation can be judged by numerical comparison.
In another aspect of the present invention, the layer generation unit has a first layer generation unit that generates candidate layers, each formed from an aggregate of targets including initial candidate targets, and a second layer generation unit that generates special layers formed from aggregates of targets containing no initial candidate targets. The second layer generation unit generates the special layers, i.e., auxiliary layers composed of auxiliary targets: targets that were excluded from the targets forming the candidate layers during the image processing in which targets are extracted from the unknown image data, and that are attached to other targets in the transformation performed by the candidate target transformation unit. In this case, for example, even information about a partial image that is missing when the candidate layers are determined can be treated as an auxiliary target and become part of the finally determined target.
In another aspect of the present invention, the layer generation unit further has a third layer generation unit that generates other candidate layers formed from aggregates of other candidate targets related to divided images extracted for other known objects, and the candidate target transformation unit excludes the other candidate targets contained in the other candidate layers from the objects of the transformation action when generating new candidate targets. In this way, the objects of transformation processing can be narrowed down in advance.
In another aspect of the present invention, the layers generated by the layer generation unit include a layer group with a hierarchical structure, consisting of layers in relatively upper positions and layers in relatively lower positions. An upper-side layer is composed of parent targets, and each parent target contains, as child targets, the targets that form the lower-side layer. Because of the hierarchical structure, the options for transformation processing are organized and the processing can proceed smoothly.
In another aspect of the present invention, in the layer group of hierarchical structure generated by the layer generation unit, the initial candidate target determination unit determines initial candidate targets from the targets of the lower-side layers, and the candidate target transformation unit attaches other auxiliary targets to the initial candidate targets to generate new candidate targets. In this case, by proceeding bottom-up from the lower-side layers to the upper-side layers, the intended targets can be extracted.
In another aspect of the present invention, in the layer group of hierarchical structure generated by the layer generation unit, the initial candidate target determination unit determines initial candidate targets from the targets belonging to the upper-side layers, and the candidate target transformation unit removes some of the child targets contained in the initial candidate targets to generate new candidate targets. In this case, by proceeding top-down from the upper-side layers to the lower-side layers, the intended targets can be extracted.
In another aspect of the present invention, the layer generation unit measures the size of each divided image among the existing divided images, compares the image data size of the measured divided image with the image data size of the corresponding known object, divides that divided image again as necessary, and generates a layer based on the new targets. This avoids, for example, the situation in which a target that should be extracted is embedded in a bulk target and overlooked.
To achieve the above object, the present invention also provides an image processing method that transforms targets extracted from the unknown image data of a photographed unknown object and extracts new targets corresponding to a known object. The method comprises the following steps: (1) a layer generation step of generating layers, each formed from an aggregate of divided images containing targets extracted from the unknown image data; (2) an initial candidate target determination step of determining, from the targets contained in the layers generated in the layer generation step, the initial candidate targets to be transformed as initial candidates; and (3) a candidate target transformation step of transforming the initial candidate targets using targets other than the initial candidate targets contained in the layers generated in the layer generation step, thereby generating new candidate targets.
With the above image processing method, the initial candidates, i.e., the initial candidate targets, are extracted based on the similarity measured in the similarity measurement step; then, in the candidate target transformation step, the initial candidate targets are repeatedly transformed, the similarity of the transformed targets is measured, and the targets to be extracted are determined. In this way, by applying the necessary transformation to targets extracted from unknown image data such as a photographed depiction of an unknown object, new targets corresponding to a known object can be extracted.
To achieve the above object, the image recognition device of the present invention performs recognition related to a known object on the unknown image data of a photographed unknown object. It comprises the image processing apparatus of any of the above descriptions, and judges the presence or absence of the known object based on the new targets extracted from the unknown image data by that image processing apparatus.
With the above image recognition device, new targets corresponding to the known object are extracted appropriately, so more accurate image recognition can be performed.
To achieve the above object, the image recognition method of the present invention performs recognition related to a known object on the unknown image data of a photographed unknown object. It has a judgment step that includes the above-described image processing method and, according to that image processing method, judges the presence or absence of the known object based on the new targets extracted from the unknown image data.
To achieve the above object, the image classification device of the present invention performs classification related to a known object on the unknown image data of a photographed unknown object. It comprises the image processing apparatus of any of the above descriptions and classifies the known object based on the new targets extracted from the unknown image data by that image processing apparatus.
With the above image classification device, new targets corresponding to the known object are extracted appropriately, so image classification can be performed more definitely.
To achieve the above object, the image classification method of the present invention performs classification related to a known object on the unknown image data of a photographed unknown object. It has a classification step that includes the above-described image processing method and, according to that image processing method, classifies the known object based on the new targets extracted from the unknown image data.
The beneficial effect of the invention is that the device and method for image processing, image recognition and image classification of the present invention provide new apparatuses and methods for image processing, image recognition and image classification, which can judge the presence or absence of a known object from the unknown image data of an unknown object more quickly and accurately than the human eye.
Brief description of the drawings
【Fig. 1】(A) and (B) are diagrams conceptually illustrating the layer structure in the image processing apparatus of Embodiment 1.
【Fig. 2】(A)–(D) are conceptual diagrams illustrating an example of extracting initial candidate targets from each layer, and (E) is a diagram conceptually illustrating the targets that are the objects of the initial candidate target transformation.
【Fig. 3】A diagram conceptually illustrating the transformation of targets and the comparative judgment.
【Fig. 4】(A) and (B) are diagrams illustrating the processing when targets overlap.
【Fig. 5】A diagram conceptually illustrating the target transformation and comparative judgment in Fig. 4.
【Fig. 6】A diagram conceptually illustrating the layer structure in the image processing method using the image processing apparatus of Embodiment 2.
【Fig. 7】A flowchart conceptually illustrating the candidate layer generation processing in the image processing method.
【Fig. 8】(A) and (B) are diagrams conceptually illustrating the division of targets in the candidate layer generation processing.
【Fig. 9】(A)–(E) are diagrams conceptually illustrating the formation of the hierarchical structure of targets in the candidate layer generation processing.
【Fig. 10】(A) and (B) are diagrams conceptually illustrating variations of the image processing method using the image processing apparatus of Embodiment 2.
【Fig. 11】(A) and (B) are diagrams conceptually illustrating the layer structure.
【Fig. 12】(A) is a block diagram conceptually illustrating the structure of the image recognition device of Embodiment 5, and (B) is a block diagram conceptually illustrating the configuration of the storage device in the image recognition device.
【Fig. 13】The first diagram illustrating the image recognition method.
【Fig. 14】The second diagram illustrating the image recognition method.
【Fig. 15】(A)–(G) are the third set of diagrams illustrating the image recognition method.
【Fig. 16】(A) is an example of an unknown image that is the processing object of the image classification device of Embodiment 6, and (B)–(E) are diagrams illustrating the test-strip data related to the classification.
Embodiment
A preferred embodiment of the present invention is described in detail below with reference to the accompanying drawings. However, the protection scope of the present invention is not limited to the following embodiments; any equivalent changes and modifications made within the scope of the claims and description of the present invention still fall within the scope covered by the patent of the present invention.
Embodiment 1:
The image processing apparatus of this embodiment has a CPU, a storage device, a display device, an input device, a bus and so on.
The image processing apparatus operates the CPU and the like and, based on the similarity (certainty factor) between a known specific object and the image of an unknown object, extracts from the image data of the unknown object the image portion corresponding to the known specific object as a target.
Here, a target means an image portion having a certain contiguity. That is, in addition to image portions obtained by extracting parts of the unknown image data of the unknown object image, portions obtained by joining such image portions together are also called targets. Among the targets extracted from the unknown image data, a target that can by itself correspond to a known specific object is called a candidate target. A target that cannot by itself become a candidate target, but may be attached to another target and become part of a new candidate target, is called an auxiliary target (i.e., a special, non-candidate target).
In the above image processing apparatus, the CPU, the storage device, the display device and the input device can exchange data with one another via the bus. The CPU reads specified programs and data from the storage device in accordance with operating instructions from the input device, and executes various kinds of processing based on these programs and data.
Specifically, the CPU collects image information read into the storage device in advance and, based on the image information in the storage device, extracts various targets and makes various judgments. The image information may be images captured by an imaging device, another photographing apparatus, or a combination of both. For example, the camera may use a solid-state image sensor such as a CCD, and the images detected by this solid-state camera can be output as digital image signals.
The storage device has a program storage unit and a data storage unit. The program storage unit has a program area in which the various programs for operating the image processing apparatus are stored. The data storage unit has a data area in which input instructions, output data, processing results and the like are temporarily stored.
The display device is composed of a display drive circuit, an image display unit and the like, and performs the necessary display based on command signals from the CPU. The display drive circuit generates drive signals based on the data input from the CPU, and the image display unit performs the necessary display based on the drive signals input from the display drive circuit.
The input device is composed of a keyboard and the like, and outputs to the CPU command signals reflecting the operator's instructions for operating the image processing apparatus.
The storage device stores the following programs in the program storage unit, which is one of its components: a pre-processing program that performs pre-processing before the object is processed; layer generation programs that generate layers (for example, a 1st candidate layer generation program, a 2nd candidate layer generation program, ..., an n-th candidate layer generation program, a 1st auxiliary layer generation program, a 2nd auxiliary layer generation program, ..., an m-th auxiliary layer generation program, and so on); a similarity (certainty factor) judgment program that judges the similarity between an extracted target and the known object; a thickening/thinning target selection program that selects the targets used when transforming the selected candidate targets; and a candidate target thickening/thinning program that transforms candidate targets based on the selected thickening/thinning targets.
Here, a layer is a hypothetical sheet carrying the targets obtained by dividing the image; one layer is generated for each division pattern, so there are ultimately multiple layers. In other words, the aggregate of targets obtained by dividing the whole image according to one image division pattern constitutes one layer, and because there are multiple image division patterns, multiple corresponding layers can be generated. In this embodiment, multiple layers are generated by various known division methods. Among the layers, a candidate layer is a layer containing candidate targets, whereas an auxiliary layer is a layer composed only of auxiliary targets (i.e., special, non-candidate targets).
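To make the layer/target bookkeeping concrete, the following sketch models a layer as the aggregate of targets produced by one division pattern. All names (Target, Layer, build_layers) and the pixel-set representation are illustrative assumptions, not structures taken from the patent.

```python
# Minimal sketch of layers and targets, assuming targets are contiguous pixel sets.
from dataclasses import dataclass, field

@dataclass
class Target:
    pixels: frozenset          # set of (x, y) coordinates forming one contiguous region
    is_candidate: bool = False # set True once selected as an (initial) candidate target

@dataclass
class Layer:
    targets: list = field(default_factory=list)

def build_layers(image, segmentation_methods):
    """One layer per division pattern: each method splits the whole image
    into an aggregate of targets (used for candidate and auxiliary layers)."""
    layers = []
    for segment in segmentation_methods:   # each 'segment' is some known division method
        regions = segment(image)           # -> list of pixel sets
        layers.append(Layer([Target(frozenset(r)) for r in regions]))
    return layers
```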
The storage device stores the following in the data storage unit, which is one of its components: an unknown image data storage unit that stores the image information of the unknown object, i.e., the unknown image data; 1st to n-th candidate layer data storage units that store data related to the candidate layers; 1st to m-th auxiliary layer data storage units that store data related to the auxiliary layers; and target data storage units that store data related to candidate targets and the like (for example, an initial candidate target data storage unit, a tentative candidate target data storage unit and a determined target data storage unit). The storage device also has a special image data storage unit in the data storage unit, in which the known-object data required for judgments by the similarity (certainty factor) judgment program is stored.
In the program storage unit, the pre-processing program performs the various preliminary processing necessary for image analysis, for example applying various processing to the unknown image data stored in the unknown image data storage unit of the data storage unit. The 1st to n-th candidate layer generation programs generate the respective candidate layers from the image data read from the unknown image data storage unit and pre-processed by the pre-processing program. For example, one candidate layer generation program divides the image of the whole unknown image data and forms an aggregate of divided images, i.e., forms an aggregate of multiple targets from one whole image.
Here, in each candidate layer, a target that becomes a candidate target is called an initial candidate target. In other words, the 1st to n-th candidate layer generation programs generate candidate layers composed of various targets including initial candidate targets. The determination of initial candidate targets is performed later, based on the similarity (certainty factor) judgment program.
The 1st to n-th candidate layer data storage units of the data storage unit temporarily store the data related to each candidate layer generated by the 1st to n-th candidate layer generation programs. For example, information about the targets forming the candidate layer generated by the 1st candidate layer generation program is stored in the 1st candidate layer data storage unit.
In the program storage unit, the 1st to m-th auxiliary layer generation programs generate the respective auxiliary layers. Auxiliary targets are targets that cannot become candidate targets; specifically, they are, for example, targets excluded from the various targets forming the candidate layers during the image processing in which the 1st to n-th candidate layer generation programs and the like extract targets from the unknown image data stored in the unknown image data storage unit. The image processing of the 1st candidate layer generation program and the like mainly performs image division and takes various forms. In this processing, for example, targets in the divided image smaller than a threshold are sometimes deleted wholesale; that is, small targets are sometimes deleted because they cannot become candidate targets for the known object image to be captured. However, even if such a target cannot be called an intended target by itself, it may form part of an intended target, so it should not always be deleted completely. In other words, even a deleted target may be attached to another target and reused in order to capture the intended target reliably. In this embodiment, as described above, non-candidate targets excluded from the candidates in the various image processing steps are reused as special (auxiliary) targets, so that even an image processing apparatus such as a computer that processes everything uniformly can extract the intended targets more accurately.
The 1st to m-th auxiliary layer data storage units of the data storage unit temporarily store the data of each auxiliary layer generated by the 1st to m-th auxiliary layer generation programs.
In the program storage unit, the similarity (certainty factor) judgment program is a judgment-criterion program that numerically judges whether a target extracted from the unknown image data as a candidate target corresponds to the specific object. For example, the CPU, acting as the similarity measurement unit, reads the similarity (certainty factor) judgment program from the program storage unit, reads targets from the 1st to n-th candidate layer data storage units, further reads the known-object data from the special image data storage unit, and calculates certainty factor values by comparing the two sets of data based on the similarity (certainty factor) judgment program. Then the CPU, acting as the initial candidate target determination unit, takes the targets of high similarity (targets exceeding a predetermined threshold) as initial candidate targets according to the calculation results. Information about the initial candidate targets is stored in the initial candidate target data storage unit. In addition, the CPU, acting as the target determination unit, reads candidate target data from the tentative candidate target data storage unit that stores tentative candidate targets, and judges how close each candidate target is to the object. These tentative candidate targets are obtained by transforming the initial candidate targets stored in the initial candidate target data storage unit.
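The patent does not specify how the certainty factor is computed, only that it quantifies how closely a target approximates the known object. As a stand-in, the sketch below scores a target pixel set against a known-object mask by overlap; any numeric score in [0, 1] could play the same role as the certainty factor judgment program.

```python
# Placeholder certainty (similarity) measure: intersection over union of pixel sets.
# Purely illustrative; the actual metric is not defined in the patent text.
def certainty(target_pixels, known_object_pixels):
    inter = len(target_pixels & known_object_pixels)
    union = len(target_pixels | known_object_pixels)
    return inter / union if union else 0.0

# Example: a 2x2 target compared against a 2x3 known-object mask.
t = frozenset({(0, 0), (0, 1), (1, 0), (1, 1)})
k = frozenset({(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)})
print(round(certainty(t, k), 2))   # 0.67
```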
In the program storage unit, the thickening/thinning target selection program selects the thickening/thinning targets, i.e., the targets involved in transforming and deforming the candidate target under judgment. The candidate target thickening/thinning program then decides, when deforming a candidate target, which of the thickening/thinning targets selected by the selection program are used for thickening and which are used for thinning.
An initial candidate target read from the initial candidate target data storage unit is transformed and deformed by the processing of the candidate target thickening/thinning program; after a new target is generated, it is temporarily stored in the tentative candidate target data storage unit as a tentative candidate target. The tentative candidate target temporarily stored there is further transformed and deformed based on the candidate target thickening/thinning program. The final target obtained by repeating this action is stored in the determined target data storage unit as a determined target. The image data of the targets stored in the determined target data storage unit is the target data that should finally be extracted.
Below, with reference to Fig. 1, the formation of layers and the determination of initial candidate targets in the image processing method using the image processing apparatus configured as above are conceptually illustrated by example. Figs. 1(A) and 1(B) conceptually illustrate the structure of each layer.
First, based on the above layer generation programs and the like, many layers, each having one or more targets OB, are generated as shown in Fig. 1(A). That is, with programs corresponding to image division processing, the respective layer generation programs generate the candidate layers and auxiliary layers, each with a multi-layer structure. Here, n candidate layers (n >= 1) and m auxiliary layers (m >= 1) are generated. Then, as shown in Fig. 1(B), initial candidate targets IB (the shaded portions in the figure) are selected, based on the certainty factor, from the targets OB contained in the n candidate layers. As described above, initial candidate targets are not selected from the auxiliary targets AO forming the m auxiliary layers. Each selected initial candidate target IB is transformed by thickening, i.e., adding other targets, by thinning, i.e., removing targets contained inside the initial candidate target IB, or by both thickening and thinning, so that the intended target can be extracted more accurately.
The image processing method using the image processing apparatus configured as above is described below.
First, the CPU, acting as the layer generation unit that generates each layer, reads the respective layer generation programs from the storage device and performs the layer generation processing (see Fig. 1(A)): the first step (step S1), the generation processing of each candidate layer (the processing of extracting each candidate target), and the second step (step S2), the generation processing of each auxiliary layer (the processing of extracting each auxiliary target). More specifically, the CPU, acting as the first layer generation unit, generates the n candidate layers based on the respective candidate layer generation programs and, acting as the second layer generation unit, generates the m auxiliary layers based on the respective auxiliary layer generation programs.
Next, the CPU selects the initial candidate targets in the third step (step S3): initial candidate target selection processing (selecting targets whose certainty factor exceeds the threshold). In the third step, the CPU first reads the target data forming the candidate layers from the 1st to n-th candidate layer data storage units of the data storage unit (step S301) and analyzes the information of this data (step S302). Next, while reading the certainty factor judgment program from the storage device, the CPU also reads the necessary data, such as the known-object data and the certainty factor threshold stored in the special image data storage unit (step S303), and judges, based on the data analyzed in step S302 and the data read in step S303, whether each target forming the candidate layers exceeds the certainty factor threshold (step S304). In step S304, if the target under measurement exceeds the threshold (step S304: Yes), the CPU stores that target in the initial candidate target data storage unit as an initial candidate target (step S305). On the other hand, if the target under measurement does not exceed the threshold (step S304: No), the CPU does not treat that target as an initial candidate target but stores it as a target in an auxiliary layer, i.e., as an ordinary target that can be referred to later when the transformation and deformation processing is performed (step S306). The CPU, acting as the initial candidate target determination unit, performs the above steps S301 to S306 for all targets in the 1st to n-th candidate layer data storage units of the data storage unit, and thereby determines the initial candidate targets.
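Assuming the layer/target representation sketched earlier and some certainty function, the threshold selection of steps S301 to S306 might look like the following; the function and variable names are illustrative only, not identifiers from the patent.

```python
# Minimal sketch of initial candidate selection (step S3, S301-S306).
# 'certainty' is whatever measure the certainty factor judgment program implements.
def select_initial_candidates(candidate_layers, known_object, certainty, threshold):
    initial_candidates = []   # -> initial candidate target storage (step S305)
    auxiliary_targets = []    # -> kept for later thickening/thinning (step S306)
    for layer in candidate_layers:            # S301: read targets of every candidate layer
        for target in layer.targets:
            score = certainty(target, known_object)   # S302-S304: measure and compare
            if score > threshold:
                initial_candidates.append((target, score))
            else:
                auxiliary_targets.append(target)
    return initial_candidates, auxiliary_targets
```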
After the initial candidate target selection processing described above ends in the third step, in the fourth step the CPU counts the number of selected initial candidate targets and numbers them (step S4). Here, N targets (N >= 1) are selected as the j-th initial candidate targets (1 <= j <= N). In the fifth step, initialization (1 → j) is performed; that is, among these N initial candidate targets, the target numbered 1, i.e., the 1st initial candidate target, is taken as the first processing object (step S5). Next, in the sixth step, the CPU checks whether all numbered initial candidate targets have been processed (whether j > N) (step S6). If not (step S6: No), then in the seventh step the CPU, acting as the candidate target transformation unit, reads the thickening/thinning target selection program from the storage device and performs the thickening/thinning target selection processing (step S7). The loaded thickening/thinning target selection program takes, as thickening/thinning targets, the targets contained in the candidate target under processing and the targets adjacent to that candidate target. The thickening/thinning targets include not only the targets forming the candidate layers that were not selected as initial candidate targets, but also the auxiliary targets forming the auxiliary layers.
In the seventh step, after the thickening/thinning targets are selected, in the eighth step the CPU reads the candidate target thickening/thinning program from the storage device and extracts the various transformation patterns (step S8): for example, for the candidate target under processing, a thickening target is added (attached) to generate a new target, a thickening target is removed to generate a new target, or a thickening target is added while another is removed to generate yet another new target, and so on; multiple new candidate targets are generated in this way and the transformation processing is performed (step S9a). For the multiple new targets transformed according to the various transformation patterns, i.e., the transformed targets, the certainty factor of each transformed target and of the target before transformation is measured (step S9b). Here, when a transformed target or the target before transformation exceeds the preset certainty factor threshold, the CPU, acting as the target determination unit, saves the target that should be extracted as a determined target in a field of the storage device, as part of the information related to the transformation of the j-th candidate target (step S10). Next, the CPU, acting as the target determination unit, compares the certainty factors of the new candidate targets, i.e., the transformed targets saved in steps S8 to S10, based on the certainty factor judgment program, and identifies the target with the highest certainty factor (step S11). Here, the transformed target with the highest certainty factor among the new candidate targets is called the best candidate target. Next, the CPU checks whether the certainty factor of the best candidate target identified in step S11 has risen compared with the certainty factor of the candidate target before transformation (step S12). If the certainty factor is confirmed to have risen, that is, if the best candidate target is judged to be provisionally the target of highest certainty factor among the transformed candidate targets (step S12: Yes), the best candidate target replaces the candidate target before transformation as the new processing object, i.e., the tentative candidate target; the candidate target information in the tentative candidate target data storage unit is rewritten (step S13), and the actions of steps S9 to S12 are repeated with this tentative candidate target as the processing object. On the other hand, if it is confirmed in step S12 that the certainty factor has not risen, that is, if the candidate target before transformation is judged to have the higher certainty factor (step S12: No), the candidate target before transformation is judged to be the best target; if that candidate target exceeds the certainty threshold, it is treated as one of the determined targets, the transformation processing of the j-th candidate target is terminated (step S14), processing moves on to the next initial candidate target (j+1 → j) (step S15), and the same processing is performed until the processing of all initial candidate targets is finished (step S16: Yes).
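The loop of steps S6 to S16 amounts to a hill climb on the certainty factor: generate transformation variants, save any that exceed the threshold, keep the best variant while the score rises, and stop otherwise. The sketch below illustrates that control flow under the assumption that propose_transforms() stands in for the thickening/thinning programs; it is not the patent's implementation.

```python
# Minimal sketch of the per-candidate transformation loop (steps S7-S16).
def refine_candidate(initial_candidate, known_object, certainty, propose_transforms,
                     save_threshold):
    current = initial_candidate
    current_score = certainty(current, known_object)
    determined = []                                    # targets saved in steps S10/S14
    while True:
        variants = propose_transforms(current)         # S7-S9a: thicken / thin / both
        scored = [(certainty(v, known_object), v) for v in variants]
        determined += [v for s, v in scored if s > save_threshold]       # S10
        best_score, best = max(scored, key=lambda sv: sv[0],
                               default=(float("-inf"), None))            # S11
        if best_score > current_score:                 # S12: certainty rose
            current, current_score = best, best_score  # S13: new tentative candidate
        else:                                          # S12: No -> stop (S14)
            if current_score > save_threshold:
                determined.append(current)
            break
    return determined
```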
In the above manner, the image processing apparatus transforms or deforms all N (N >= 1) initial candidate targets and completes the extraction of determined targets. At this point, the transformation of one candidate target may yield multiple determined targets, i.e., extraction objects saved in step S10. In that case, one determined target can finally be selected from the multiple determined targets, for example by another judgment method or by human inspection. By selecting multiple possible determined targets in the transformation of one candidate target, the possibility of omitting a target that should be extracted can be reduced.
Figs. 2 and 3 are schematic diagrams explaining the candidate target transformation process in the image processing of the above image processing apparatus in more detail. For example, a whole image HD having the rectangular extent shown in Fig. 2(A) is taken as the unknown image and subjected to image processing. The layers of divided images in which the regions are hierarchically superimposed (see, for example, Fig. 1(A)) are as shown in Fig. 2(B). Part of a target OB in a region of the whole image HD, shown in Fig. 2(C), is selected as an initial candidate target IB. Then, as shown in Fig. 2(D), the N initial candidate targets IB are numbered so that they can be processed in order. Fig. 2(E) shows an example of one of the N initial candidate targets (shown in the figure as the 1st initial candidate target IB1) and the thickening/thinning targets selected for it. In the example, the first target, shown with a dash-dot line, is a target contained within the 1st initial candidate target IB1; in other words, it is a child target CO whose parent target is the 1st initial candidate target IB1. The second target NO, shown with a dotted line, is an adjacent target NO adjoining the 1st initial candidate target IB1. Among the targets belonging to each candidate layer, not only the targets not used as initial candidate targets IB but also the auxiliary targets (special targets) belonging to each auxiliary layer can be used in such roles (for example, as the first target or the adjacent target). That is, during the candidate target transformation, by using these otherwise unused targets as thickening/thinning targets, a target that was not used as an initial candidate target can be reused as part of a target when the determined target is finally extracted as the intended target. More specifically, as shown in Fig. 3, as one transformation pattern (pattern A), one of the adjacent targets NO adjoining the 1st initial candidate target IB1 is added to generate a new first target A1 (the shaded portion in the figure). As another transformation pattern (pattern B), one of the child targets CO contained in the 1st initial candidate target IB1 is removed to generate a new second target B1 (the shaded portion in the figure). Further, as yet another transformation pattern (pattern C), an adjacent target NO adjoining the 1st initial candidate target IB1 is added and, at the same time, a contained child target CO is removed to generate a new third target C1 (the shaded portion in the figure). In this way, various (finite) deformation patterns are applied, similarity is measured for each, and by comparing these transformed candidate targets, as indicated by the double arrows, the best candidate target can be determined. Furthermore, by comparing these targets with the 1st initial candidate target IB1, it can be judged whether the certainty factor has risen before and after transformation. For the transformed candidate target (i.e., the tentative candidate target), the same action as above is repeated as necessary, and the determined target is finally obtained. A sketch of how such transformation patterns might be enumerated follows this paragraph.
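Under the pixel-set model used in the earlier sketches, the three patterns of Fig. 3 could be enumerated as follows; the helper is an assumption for illustration, not part of the patent.

```python
# Minimal sketch of enumerating the transformation patterns of Fig. 3:
# pattern A adds an adjacent target, pattern B removes a contained child target,
# pattern C does both. Targets are modeled as frozen pixel sets for illustration.
def transformation_patterns(candidate, adjacent_targets, child_targets):
    """Return new candidate pixel sets produced by thickening and/or thinning."""
    variants = []
    for adj in adjacent_targets:                    # pattern A: thicken
        variants.append(candidate | adj)
    for child in child_targets:                     # pattern B: thin
        variants.append(candidate - child)
    for adj in adjacent_targets:                    # pattern C: thicken and thin
        for child in child_targets:
            variants.append((candidate | adj) - child)
    return variants

# Example: a four-pixel candidate, one adjacent target, one contained child target.
cand = frozenset({(0, 0), (0, 1), (1, 0), (1, 1)})
adj = [frozenset({(0, 2)})]
child = [frozenset({(1, 1)})]
print(len(transformation_patterns(cand, adj, child)))  # 3 variants
```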
A variation of the above transformation patterns is described below with reference to Fig. 4 and the related figures. As shown by the two dash-dot lines in Fig. 4(A), another target may sometimes overlap an initial candidate target. When such an overlapping target SS exists, using the overlapping target SS can be considered when determining the transformation patterns. For example, as shown in Fig. 4(B), the target containing the first candidate target CC (shown in the figure as the 1st initial candidate target IB1) and the overlapping target SS is taken as the parent target PP, and five child targets CO are delimited by the outline of the 1st candidate target CC and the outline of the overlapping target SS. As shown in Fig. 5, based on the finite transformation patterns obtained by combining the child targets CO, new first, second and third targets D1, D2, D3 and so on can be generated, and the same certainty-factor-based comparison as above is performed. For example, a lower size threshold may be specified, and a child target CO whose size is below that lower threshold can be treated as an auxiliary target: an auxiliary layer containing that child target is set up, and the child target is made to belong to that auxiliary layer.
As above, in the image processing apparatus of this embodiment and the image processing method using that apparatus, the CPU, acting as the candidate target transformation unit, transforms the initial candidate targets and generates new candidate targets. That is, the necessary transformation is applied to targets extracted from unknown image data such as a photographed depiction of an unknown object, and new targets corresponding to a known object can be extracted. In particular, in the above, the CPU, acting as the similarity measurement unit, reads the certainty factor judgment program and extracts the initial candidates, i.e., the initial candidate targets, based on the measured similarity; then the CPU, acting as the candidate target transformation unit, reads the thickening/thinning target selection program and the candidate target thickening/thinning program and transforms the initial candidate targets. Furthermore, the CPU measures the similarity of the transformed targets, repeats the transformation action as necessary and, acting as the target determination unit, determines the targets to be extracted. In this way, the targets extracted from the unknown image data, such as the photographed depiction of the unknown object, undergo the necessary transformation, and new targets corresponding to the known object can be extracted.
As described above, the generation of auxiliary layers and auxiliary targets has been explained for the case of using the targets excluded when the 1st to n-th candidate layer generation programs generate the candidate layers, but the invention is not limited to this; various other methods can also be applied.
In addition, as described above, in step S10 the judgment that determines the determined targets to be extracted is made based on the predefined certainty factor, but it is not limited to the certainty factor threshold; for example, determined targets can also be extracted based on a contour layer. Here, a contour layer means, for example, a layer composed of contour-related information, where the contours delimit the divided regions that serve as the reference in the image division processing. Such a contour can represent the outer edge of the target that should be the extraction object, and such contour-related information can sometimes directly indicate the extent of a target. For example, when the outer edge of the target under judgment coincides with all or part of the contour, it can be judged to be the determined target that should be extracted. In this case, the required target can be extracted reliably based on the contour information.
In the above, the initial candidate targets are extracted based on the similarity measurement result produced by the certainty factor judgment program, but for the extraction of initial candidate targets, thresholds may also be set on various other data, such as target size or color, and the extraction performed based on those thresholds. That is, the initial candidate targets may be determined from other data values without using the concept of similarity, and the initial candidate targets may then be transformed without using the concept of similarity (for example, by using a contour layer), so that new targets can be extracted.
In addition, the information obtained in each of the above processing steps can also be used in other processing. For example, in processing that repeats threshold comparison, even if the threshold is never exceeded to the end, the information about the targets concerned can still be used in other processing, such as related information for image division processing.
Embodiment 2:
The image processing method using the image processing apparatus of the second embodiment of the present invention is described below with reference to Fig. 6. The image processing apparatus of this embodiment is a variation of the first embodiment; its structure is the same as that of the first embodiment, so its description is omitted here. The image processing procedure is the same as the flow of Embodiment 1 except where specifically noted.
Fig. 6 conceptually illustrates the layer structure in the image processing method of this embodiment. In other words, as the stage preceding the various kinds of processing for target extraction by thickening/thinning transformation and the like, Fig. 6 shows a state in which the composition of each layer has been determined.
In this embodiment, the multiple layers include a layer group with a hierarchical structure of upper layers and lower layers. More specifically, a target forming an upper layer contains targets forming a lower layer; conversely, the targets forming a lower layer are contained in a target forming the upper layer. That is, a target of the upper layer is a parent target, the targets of the lower layer are its child targets, and the upper layer and the lower layer form a parent-child structure. In particular, as shown in Fig. 6, the hierarchical structure is formed over all candidate layers. For example, all targets forming the p-th candidate layer are parent targets of all targets forming the (p+1)-th candidate layer, the layer immediately below, and all targets forming the (p+1)-th candidate layer are in turn parent targets of all targets forming the (p+2)-th candidate layer below it, and so on. When a hierarchical parent-child structure is formed in this way, with ancestors above, children below and grandchildren below them, the targets on the lower side are all contained in the targets on the upper side. For example, there is then no need to generate overlapping portions as shown in Fig. 4 of the first embodiment and no need to reconstruct each group, and thickening/thinning transformation can be performed directly. From another viewpoint, the processing for overlapping portions shown in Fig. 4 and so on may also be performed in advance to regenerate a structure having the parent-child relationship.
As one method of forming the above parent-child structure, for example, when selecting the candidate layer generation programs, programs based on image division that satisfies the above parent-child condition may be selected in advance, or, as in the example of Fig. 4 and so on, existing candidate layers may be re-formed so that the above parent-child structure is created.
Below, with reference to the flowchart shown in Fig. 7, an example of image processing that takes existing candidate layers and re-forms them into candidate layers having the above hierarchical structure is described: the targets are divided and re-divided so that the parent-child condition is satisfied and the size of each target falls within a prescribed range.
First, the CPU reads the information about the targets in the existing candidate layers, measures the size of each target (step S201), and checks whether the measured size exceeds a predetermined upper threshold (step S202). In step S202, when a target of the upper-threshold size or larger is detected (step S202: Yes), the CPU extracts that target (step S203) and performs division processing (step S204), repeating the division action until all target sizes fall below the upper threshold (step S205). Fig. 8 is a schematic diagram of this division processing. That is, in Fig. 8(A), when the size of a target MO belonging to the q-th candidate layer exceeds the threshold, this target MO is divided according to a predefined division method and subdivided into the multiple divided targets MOd shown in Fig. 8(B). For example, in view of the comparison with the image data size of the corresponding known object, these divided targets MOd may be divided again if necessary. The target MO before division and the divided targets MOd after division have a parent-child relationship. A lower size threshold may also be specified; among the divided targets MOd, a smaller target whose size is below that lower threshold can be treated as an auxiliary target.
Returning to Fig. 7, when no target of the threshold size or larger is detected in step S202 (step S202: No), or when all targets have become smaller than the threshold through the actions of steps S203 to S205 (step S205: Yes), these targets are saved (step S206). Next, the CPU extracts, from the saved targets, the targets that have no children (the minimum targets) (step S207); viewed as a parent-child structure with upper and lower relations, the targets located in the lowest layer are extracted. Next, the CPU checks whether the condition for forming a layer group with a parent-child structure is satisfied, that is, checks for each childless target whether it has a parent target (step S208). If it is judged in step S208 that the condition is not satisfied (there is no parent target) (step S208: No), the target without a parent is merged with targets in the same layer to generate a target that becomes a parent target (step S209), and an upper layer is generated from the newly generated parent targets (step S210). The above processing of generating upper layers is repeated until the goal condition is satisfied (step S208: Yes), and a newly re-formed candidate layer is thereby formed (step S211).
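The re-forming flow of Fig. 7 can be summarized as: split oversized targets, then group parentless leaves into parent targets layer by layer. The sketch below shows that shape under stated assumptions: the helper names (size, split, merge_group) are invented, and the stopping rule is simplified to merging until a single top layer remains.

```python
# Minimal sketch of the candidate layer re-forming flow of Fig. 7 (steps S201-S211).
def reform_candidate_layer(targets, upper_threshold, size, split, merge_group):
    # S201-S205: force-split oversized targets until every piece is under the threshold
    leaves = []
    stack = list(targets)
    while stack:
        t = stack.pop()
        if size(t) >= upper_threshold:
            stack.extend(split(t))        # S203-S204: subdivide and re-check the pieces
        else:
            leaves.append(t)              # S206: save targets below the upper threshold
    # S207-S210: group childless targets into parent targets, layer by layer
    layers = [leaves]
    current = leaves
    while len(current) > 1:
        current = merge_group(current)    # e.g. merge sibling / most-similar targets
        layers.append(current)
    return layers                         # S211: the re-formed hierarchical layer group
```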
By the above actions, processing that generates upper layers from the targets belonging to the lower layer is carried out. Fig. 9 schematically illustrates this processing, which forms the hierarchical structure shown in Fig. 9(A). First, as shown in Fig. 9(B), the lowest layer is generated around the lowest-level targets that have no child targets; this lowest layer is called the 1st layer here. Then, as shown in Fig. 9(C), the targets forming the lowest layer are merged to form their parent targets, and a layer gathering these parent targets is generated; the newly generated layer is called the 2nd layer here. Likewise, by merging the targets forming the 2nd layer, their parent targets are formed, and a new layer (the 3rd layer) is generated as shown in Figs. 9(D) and 9(E). As in step S208 above, a threshold is set, for example on target size, and the above action is repeated until the threshold is exceeded. A candidate layer with a parent-child structure composed of targets within a certain size range can thereby be formed. Various rules are possible for the merging of targets; for example, the most similar targets within the same layer may be merged. Alternatively, when a parent-child structure already exists, the parent-child relationship may be given priority; in other words, within the same layer, targets that originally had a common parent target, i.e., targets in a sibling relationship within that layer, are merged preferentially. This avoids the occurrence of overlapping targets as shown in Fig. 4 while retaining the existing parent-child structure. In addition, the parent-child structure is permitted to take a complexly interleaved form.
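A minimal sketch of this bottom-up layer building is shown below, assuming targets are sets of pixel coordinates; the pairing of neighbouring targets by size order is only a placeholder for the "most similar" or "sibling first" merge rules described above, and explicit parent-child bookkeeping is again omitted.

    def build_upper_layers(bottom_layer, size_threshold):
        """Merge targets layer by layer until the largest target reaches size_threshold."""
        layers = [bottom_layer]                       # 1st layer: the targets without children
        current = bottom_layer
        while len(current) > 1 and max(len(t) for t in current) < size_threshold:
            parents = []
            it = iter(sorted(current, key=len))       # placeholder merge rule: pair by size order
            for a in it:
                b = next(it, None)
                parents.append(a | b if b else a)     # the union of the pair forms a parent target
            layers.append(parents)                    # the newly generated upper layer
            current = parents
        return layers                                 # layer group with a parent-child structure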
In the above example, as a prelude to the reorganization of the layer, targets exceeding the threshold size are forcibly segmented in steps S201 to S204. This prevents a target that should be extracted from being buried and overlooked inside a bulky target. Various methods can be used for this segmentation, for example methods that reduce the target size by uniform grid-like segmentation.
Fig. 10 illustrates examples of target extraction in the case of a layer group having the above hierarchical (parent-child) structure. Fig. 10(A) is a schematic diagram of conversion processing performed in order starting from the lowest layer, the (p+2)-th candidate layer. Specifically, this is an extraction method in which a target with high similarity in the candidate layer located at the bottom is selected as the initial candidate target and then converted. In this case, for example, only thickening processing is performed within the same layer in the initial stage; if the similarity does not reach the set threshold within the same layer, thinning, i.e., reduction processing, is performed on the nearest upper-layer parent target that contains the initial candidate target as a child target. This action is repeated until the similarity reaches the set threshold, and when the similarity reaches the goal threshold, the target is extracted as the determined target. On the other hand, Fig. 10(B) illustrates an example in which conversion processing is carried out in order starting from the topmost layer, the 1st candidate layer. That is, a target with high similarity in the topmost layer is selected as the initial candidate target, and in the initial stage only reduction processing is performed on the child targets, in the lower layer nearest to the initial candidate target, that form the initial candidate target; afterwards, thickening and thinning are repeated as needed until the extracted target reaches the goal similarity and becomes the determined target. The actions of Figs. 10(A) and 10(B) may also be carried out simultaneously, and when a common target is extracted, that target is taken as the determined target.
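One way the bottom-up variant of Fig. 10(A) might be sketched is shown below; the helpers similarity, thicken and thin, and the parent_of mapping from each target to its nearest parent, are assumptions introduced for illustration (targets are treated as opaque hashable objects), not names used here.

    def extract_bottom_up(lowest_layer, parent_of, similarity, thicken, thin, threshold):
        initial = max(lowest_layer, key=similarity)       # initial candidate: highest similarity
        candidate = initial
        for other in lowest_layer:                        # initial stage: thicken within the same layer
            if other is initial:
                continue
            trial = thicken(candidate, other)
            if similarity(trial) > similarity(candidate):
                candidate = trial
            if similarity(candidate) >= threshold:
                return candidate                          # similarity reached: determined target
        parent = parent_of.get(initial)                   # nearest parent containing the initial candidate
        while parent is not None and similarity(candidate) < threshold:
            candidate = thin(parent)                      # reduction processing starting from the parent
            parent = parent_of.get(parent)                # move further up if still below threshold
        return candidate if similarity(candidate) >= threshold else None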
Embodiment 3:
The image processing apparatus according to the present embodiment is a variation of the 1st embodiment and the like. Since the construction of the image processing apparatus is identical to that of the image processing apparatus of the 1st embodiment, illustration and explanation are omitted here.
In the present embodiment, not only candidate targets and subsidiary targets are generated, but also other candidate layers composed of other candidate targets. In this point the present embodiment differs from the above embodiments. Here, the so-called other candidate targets are candidate targets that correspond to known objects, but to single objects that are not the extraction object. In other words, other candidate targets differ from candidate targets in that they cannot become targets of the intended unknown object. The other candidate targets are targets that are not to be obtained by the target extraction processing or the target conversion processing. That is, other candidate targets can neither serve as candidate targets nor be called subsidiary targets, and they are excluded from the processing objects.
The CPU, as the 1st layer generation unit, generates n candidate layers based on the respective layer generation programs. As the 2nd layer generation unit, it generates m attendant layers based on the attendant layer generation program. Furthermore, as the 3rd layer generation unit, it generates k other candidate layers based on the other candidate layer generation program, as shown in Fig. 11(A). These k other candidate layers, however, are excluded; that is, as layers they have nothing to do with the subsequent image processing. Specifically, while extracting the candidate targets of the candidate layers and the subsidiary targets of the attendant layers, the CPU also extracts the other candidate targets, and afterwards performs exclusion processing on the other candidate layers, which cannot become processing objects (see Fig. 11(B)). The other processing steps are the same as in Embodiment 1, and their explanation is omitted here.
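A minimal sketch of this three-way layer generation and exclusion is given below; the three generator callables stand in for the corresponding generation programs, targets are assumed to be hashable, and the names are illustrative only.

    def generate_layers(image, gen_candidate, gen_attendant, gen_other, n, m, k):
        candidate_layers = [gen_candidate(image, i) for i in range(n)]   # 1st layer generation unit
        attendant_layers = [gen_attendant(image, j) for j in range(m)]   # 2nd layer generation unit
        other_layers = [gen_other(image, h) for h in range(k)]           # 3rd layer generation unit
        excluded = {t for layer in other_layers for t in layer}          # other candidate targets
        # exclusion processing: other candidate layers take no part in later processing
        candidate_layers = [[t for t in layer if t not in excluded] for layer in candidate_layers]
        return candidate_layers, attendant_layers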
In the program storage unit, the other candidate layer generation program generates layers by extracting from the whole image the targets related to specific objects other than the determination object. As described above, the targets generated here are the other candidate targets, and the related information such as the other candidate layers composed of the other candidate targets is excluded from the subsequent processing. Depending on the attributes of the object to be extracted, the accuracy of the layers and so on, removing in advance, as in this embodiment, the other candidate targets that cannot become the object enables the subsequent thickening and thinning (reduction) processing to be performed more quickly.
In image processing in which a plurality of other candidate layers are generated according to the other candidate layer generation program, the excluded targets may also be used as subsidiary targets.
Embodiment 4:
Hereinafter, the image processing method of the image processing apparatus according to the 4th embodiment of the invention is described. The image processing apparatus according to the present embodiment is a variation of the 1st embodiment and the like; since its construction is identical to that of the image processing apparatus of the 1st embodiment, illustration and explanation are omitted here. The image processing procedure is also the same as in Embodiment 1 except where special explanation is needed. In the present embodiment, when the classification into known objects of a plurality of species is to be completed, it is desirable that the other candidate layer generation unit be utilized at some step, rather than targets being excluded (removed) in advance.
In the above embodiments, the known object was described as being of one species, but there are also cases in which known objects of two or more species are to be extracted from one whole image. For example, in one whole image it is sometimes desired to completely classify a plurality of targets of interest into known objects of a plurality of species. In the present embodiment, therefore, an image processing apparatus and method that perform extraction of images of a plurality of species are described. In other words, in contrast to the other candidate targets of the other candidate layers shown in the 3rd embodiment, the target extraction performed here for each candidate layer corresponds to carrying out the various target extraction processings, such as thickening and thinning, individually for each species.
In the image processing apparatus of the present embodiment, similarity (certainty factor) decision programs of z species (z >= 2) are stored in the program storage unit of the storage device. This point differs from the situation in the 1st embodiment. That is, the CPU processes objects of z kinds based on the z kinds of similarity (certainty factor) judgments, performing thickening, thinning and the like in order to extract the targets. In other words, for the z species of objects, one layer is generated for each species, the initial candidate target is determined for each, the conversion of the candidate target is carried out, and the determined target is extracted.
In this case, it can happen that a certain target is selected as the initial candidate target related to one known object (the 1st initial candidate target) while another target is selected as the initial candidate target related to another known object (the 2nd initial candidate target). At that time, the subsidiary targets belonging to each attendant layer can be used as part targets for thickening and thinning with respect to any of the different species of candidate targets. That is, each subsidiary target can be used as a thickening/thinning target for the 1st candidate target and can also be used as a thickening/thinning target for the 2nd candidate target.
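As one illustration of the per-species processing described above, the sketch below runs layer generation, initial candidate selection and conversion once for each species; species_programs, which maps each species to its similarity (certainty factor) function, and the helper callables passed as parameters are assumptions introduced for illustration. The sharing of subsidiary targets between species is left to the conversion helper.

    def extract_all_species(image, species_programs, generate_layer, pick_initial, convert):
        """Run layer generation, initial candidate selection and conversion once per species."""
        determined = {}
        for species, similarity_fn in species_programs.items():  # z >= 2 similarity judgments
            layer = generate_layer(image, species)                # one layer generated per species
            initial = pick_initial(layer, similarity_fn)          # initial candidate target per species
            determined[species] = convert(initial, layer, similarity_fn)  # conversion, then determination
        return determined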
Embodiment 5:
Hereinafter, an image recognition method is described which uses an image recognition device that includes the image processing apparatus according to the 5th embodiment of the invention. Fig. 12(A) is a block diagram conceptually illustrating the construction of the image recognition device 500 according to the present embodiment. Fig. 12(B) is a block diagram conceptually illustrating the composition of the storage device 512 in the image recognition device 500. The image recognition device 500 is formed by adding, to the various image processing programs and the like of the image processing apparatus shown in Embodiment 1, a program for performing image recognition processing. That is, the image recognition device 500 includes the functions of the image processing apparatus of Embodiment 1 and further uses those functions to perform image recognition processing. The points of the construction of the image recognition device 500 that are common to the image processing apparatus are therefore not explained here. In the present embodiment, image recognition is performed on the new target extracted by the image processing apparatus, so that image recognition processing can be carried out reliably and rapidly.
Hereinafter, the detailed configuration and operation of the image recognition device 500 are described. As shown in Fig. 12(A), the image recognition device 500 has a CPU 510, a storage device 512, a display device 513, an input device 514, a bus 550 and so on, processes the image data of a photographed unknown object, and performs recognition related to known objects. In the present embodiment, among the units composing the image recognition device 500, the input device 514 has an external data receiving unit 514a, which can receive data related to the original image of the object to be recognized as well as external image data, such as unknown image data, containing the objects whose presence is to be confirmed. The CPU 510 stores the various information received by the external data receiving unit 514a in the data storage unit DM of the storage device 512. By the above, the image recognition device 500 analyzes the various externally input image data and performs image recognition processing.
As shown in Fig. 12(B), the storage device 512 of the image recognition device 500 is composed of a program storage unit PM and a data storage unit DM. The program storage unit PM has a program area that stores various programs, and the data storage unit DM has a data area that stores various data. The components contained in the program storage unit PM include, in addition to the pre-processing program AP for pre-processing the image to be processed, a target data generation program DPX, an original image data conversion program OPX and a data comparison program DCX. Among these programs, the target data generation program DPX is a set of programs for generating new targets from unknown image data; specifically, it includes the various programs described in the 1st embodiment, such as the layer generation programs and the thickening/thinning programs. The original image data conversion program OPX converts the original image data containing the original image of the object to be recognized, and extracts the known image data in which that object's original image becomes the original image of the comparison object. The data comparison program DCX compares the recognition object data extracted by the original image data conversion program OPX with the data of the new targets extracted by the target data generation program DPX, and extracts, as recognition related data, various information including whether data corresponding to the recognition object data are contained in the new targets, if so at which position in the image, how many matching targets there are, and other information related to the recognition object data.
The storage device 512 has, among the components contained in its data storage unit, in addition to the unknown image data storage unit D0 that stores the unknown image data, i.e., the image information related to the unknown object, a target related data storage unit ODX, a recognition object original image data storage unit OIX, a recognition object data storage unit TDX and a recognition related data storage unit RDX. Among these, the target related data storage unit ODX stores the set of various data for generating new targets from the unknown image data; specifically, it includes the various data storage units described in the 1st embodiment, such as the layer generation data storage unit and the target data storage unit. The recognition object original image data storage unit OIX stores the original image data that is the conversion object of the original image data conversion program OPX. The recognition object data storage unit TDX stores the recognition object data extracted by the original image data conversion program OPX. The recognition related data storage unit RDX stores the recognition related data extracted by the data comparison program DCX.
Hereinafter, an outline of the image recognition method of the image recognition device 500 is given. First, the CPU 510 of the image recognition device 500 reads from the storage device 512 the original image data of the recognition object (the recognition object original image data) contained in the recognition object original image data storage unit OIX (step S501), reads the original image data conversion program OPX, extracts the recognition object data and stores them in the recognition object data storage unit TDX (step S502). Next, the CPU 510 appropriately reads the target data generation program DPX and the like, reads the information of the unknown image data (step S503), and extracts various targets (step S504). The various division processing methods of the above embodiments can be applied to the processing of steps S503 and S504. Next, the CPU 510 reads the data comparison program DCX, compares the known image data of the original image serving as the comparison object stored in step S502, i.e., the recognition object data, with the data of the targets related to the unknown image data extracted in steps S503 and S504, and recognizes whether data corresponding to the recognition object data are contained in the new targets (step S505). That is, based on the new targets extracted from the unknown image data, at least the presence or absence of the known object is determined. Finally, the CPU 510 stores the various information obtained in step S505 as recognition related data in the recognition related data storage unit RDX (step S506).
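A minimal sketch of this flow is shown below, with the storage units modelled as keys of a plain dictionary and the programs OPX, DPX and DCX passed in as callables; all names are illustrative assumptions.

    def recognize(storage, extract_recognition_object, generate_targets, compare):
        original = storage["OIX"]                        # step S501: recognition object original image
        recognition_data = extract_recognition_object(original)
        storage["TDX"] = recognition_data                # step S502: store recognition object data
        unknown = storage["D0"]                          # step S503: read the unknown image data
        new_targets = generate_targets(unknown)          # step S504: extract various targets
        result = compare(recognition_data, new_targets)  # step S505: data comparison
        storage["RDX"] = result                          # step S506: store recognition related data
        return result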
Hereinafter, with reference to Fig. 13, the 1st example of the comparison processing in step S505 of the above image recognition method is described. As illustrated, here the recognition object image data AS (or recognition object data AS1) are extracted from the whole image PI of one object as the recognition object. The recognition object data AS here are a partial image, i.e., one kind of target, extracted by a contour specified on the whole image PI. Various manual, automatic and other methods can be applied to the contour extraction of the recognition object data AS. For the specified recognition object data AS, the CPU 510 selects the most suitable target (candidate target EO3 in the illustrated example) as the corresponding target XO from among the multiple candidate targets EO (EO1, EO2, ..., EO10, ...) shown by broken lines in the figure. The selected target XO is not limited to one; a plurality of targets may also be selected. In addition, as for the information on the position of the selected target XO, the target to be obtained can be selected appropriately as needed with respect to the recognition object. In this case, in other words, image recognition is performed by comparing (contrasting) the respective targets, and various known methods can be applied as the comparison method.
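As one possible illustration of such a comparison, the sketch below ranks candidate patches against the recognition object data using a simple normalized pixel difference; the patches are assumed to be equal-sized 2-D lists of gray values, and the scoring rule merely stands in for whichever known matching method is actually used.

    def match_score(patch_a, patch_b):
        """Similarity of two equal-sized gray-value patches (higher is more similar)."""
        diff = sum(abs(a - b) for row_a, row_b in zip(patch_a, patch_b)
                              for a, b in zip(row_a, row_b))
        return 1.0 / (1.0 + diff)

    def select_target(recognition_object, candidate_targets, top_k=1):
        """Pick the best-matching candidate(s) EO as the corresponding target(s) XO."""
        ranked = sorted(candidate_targets,
                        key=lambda t: match_score(recognition_object, t), reverse=True)
        return ranked[:top_k]          # one or several targets may be selected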
Hereinafter, with reference to Fig. 14, the 2nd example of the comparison processing in step S505 of the image recognition method is described. In Fig. 13, the recognition object data AS were a target extracted from the original image data, i.e., the whole image PI, and in the recognition processing each target was compared. Here, on the other hand, as illustrated, a field of the whole image PI is designated, and the target XO is selected by pattern matching against the specific image of this field. In this case, the recognition processing of step S505 can be carried out only for the designated field DL. On the other hand, because the background portion is included together with the image portion of the object, some noise may arise in the recognition.
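A minimal sketch of such field-restricted matching is given below; the image and template are 2-D lists of gray values, the designated field DL is modelled as a (top, left, height, width) tuple, and the sum-of-absolute-differences score is an illustrative stand-in for the actual pattern matching used.

    def match_in_field(image, template, field):
        """Return the best-matching position of template inside the designated field."""
        top, left, height, width = field
        th, tw = len(template), len(template[0])
        best_pos, best_score = None, float("inf")
        for y in range(top, top + height - th + 1):
            for x in range(left, left + width - tw + 1):
                score = sum(abs(image[y + dy][x + dx] - template[dy][dx])
                            for dy in range(th) for dx in range(tw))
                if score < best_score:            # a lower difference means a better match
                    best_pos, best_score = (y, x), score
        return best_pos, best_score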
Hereinafter, with reference to Fig. 15, the 3rd example of the comparison processing in step S505 is described. The 3rd example is an application of the 1st: whereas the 1st captures one target, in this example recognition is performed based on the relative combination of a plurality of targets. Among Figs. 15(A) to 15(G), Fig. 15(A) is a typical image sample related to the image that is the recognition object. Here, data images of objects of 3 species, i.e., recognition object data AS1, AS2 and AS3, exist in the whole image. As shown in Fig. 15(B), these recognition objects have the directions and positional relationships shown by arrows AR1 to AR3. That is, it is recognized here whether the 3 targets corresponding to the 3 recognition object data AS1, AS2 and AS3 exist with the positional relationship shown in Fig. 15(B). Specifically, in order to grasp the above positional relationship, the shaded portion MK shown in Fig. 15(C) and the fields AR1 to AR3 extracted according to the shaded portion MK are specified, and in each of the fields AR1 to AR3 it is recognized whether a target corresponding to the recognition object data AS1, AS2 or AS3 exists. Figs. 15(D) to 15(G) show the data of the above recognition operation for an unknown image. First, as shown in Fig. 15(D), among the 1st to 3rd fields UA1 to UA3 corresponding to the fields AR1 to AR3, it is determined in the 1st field UA1 whether a target matching the recognition object data AS1 (see Fig. 15(C)) exists. If, as shown in Fig. 15(E), the matching target XO1 exists, it is then determined in the 2nd field UA2 whether a target matching the recognition object data AS2 (see Fig. 15(C)) exists. Further, as shown in Fig. 15(F), it is determined whether a target matching the recognition object data AS3 (see Fig. 15(C)) exists. As shown in Fig. 15(G), when the matching target XO3 exists, it is determined that the image matches the recognition object.
Hereinafter, the image recognition procedure of the image recognition device in the above 3rd example is described. First, the CPU 510 performs processing to determine the object fields (the 1st field UA1 to the 3rd field UA3) based on the data related to the recognition object original image (step S601). Next, the CPU 510 appropriately reads the recognition object data and performs the recognition processing of the 1st field UA1 (step S602). When the CPU 510 determines that the data of the corresponding target exist in the 1st field UA1 (step S603: Yes), it temporarily saves the recognition result of the 1st field UA1 (step S604) and starts the recognition processing of the 2nd field UA2 (step S605). When the CPU determines that the data of the corresponding target exist in the 2nd field UA2 (step S606: Yes), it temporarily saves the recognition result of the 2nd field UA2 (step S607) and starts the recognition of the 3rd field UA3 (step S608). When the CPU 510 determines that the data of the corresponding target exist in the 3rd field UA3 (step S609: Yes), it determines that matching data exist and saves the respective recognition results of the 1st to 3rd fields UA1 to UA3 (step S610). On the other hand, when it is determined in the recognition processing of any of the fields UA1 to UA3 (steps S602, S605, S608) that no matching data exist (step S603: No, step S606: No, step S609: No), information indicating that no matching target exists is saved (step S611).
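A minimal sketch of this sequential, per-field check is given below; fields is assumed to be an ordered list of (field, recognition_data) pairs and recognize_in_field an assumed helper that returns the matching target or None.

    def recognize_combination(fields, recognize_in_field):
        results = []
        for field, recognition_data in fields:        # steps S602 / S605 / S608
            target = recognize_in_field(field, recognition_data)
            if target is None:                        # steps S603 / S606 / S609: No
                return {"matched": False}             # step S611: save "no matching target"
            results.append(target)                    # steps S604 / S607: keep the interim result
        return {"matched": True, "targets": results}  # step S610: save all field results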
According to the image recognition device of the present embodiment, the object can be captured more accurately among the extracted multiple segmented images, and targets that cannot be captured by normal pattern matching can be recognized with precision.
Embodiment 6:
Hereinafter, an image classification method using the image classification device of the 6th embodiment, which includes the image processing apparatus of the invention, is described. The composition of the image classification device in the present embodiment is identical to that of the image recognition device of the 5th embodiment shown in Figs. 12(A) and 12(B); detailed illustration and explanation are omitted here, and reference should be made to Fig. 12(A) and the like as needed.
Fig. 16(A) is an example of the processing object of the image classification device according to the present embodiment, i.e., the unknown image UI. Here, the image classification performed by the image classification device means determining, in an unknown image UI such as that of Fig. 16(A), how many images of the specified known objects of a plurality of species exist and where they are located. For example, known objects of 4 species are specified. Figs. 16(B) to 16(E) show the knowledge data DI1 to DI4 required for the above classification. Specifically, each of the knowledge data DI1 to DI4 contains a plurality of image data on one known object. Based on these knowledge data DI1 to DI4, the image classification device determines to which of the known objects shown in the 4 knowledge data DI1 to DI4 the unknown image belongs, or whether it belongs to none of them. The knowledge data DI1 to DI4 are stored as recognition object data in the recognition object data storage unit TDX of Fig. 12(B).
Hereinafter, an outline of the image classification method of the image classification device is given. As described above, Fig. 12(A) is borrowed for the composition of the image classification device. First, the CPU 510 sets the knowledge data DI1 to DI4 related to the objects of a plurality of species (here 4 species) (step S701); that is, the known objects into which the unknown image is to be classified are determined. Next, the CPU 510 reads the information of the unknown image data, i.e., the unknown image UI (step S702), and extracts various targets (step S703). The various division processing methods shown in the above embodiments are applicable to the processing of steps S702 and S703; at this time, for example, the segmentation of the object set corresponding to Embodiment 1 can also be used. Next, the CPU 510 reads the knowledge data DI1 to DI4 related to the objects of the plurality of species and performs data comparison on the various extracted targets (step S704). That is, the CPU 510 determines whether targets matching the 4 objects exist and which target matches which object (steps S705 to S706). In step S705 it is confirmed whether matching data exist; when it is determined that they do (step S705: Yes), the data of the matching target are saved (step S706), and when it is determined that they do not (step S705: No), information indicating that there is no matching target is saved (step S707). By performing the above data-saving processing for the various targets, the objects of the plurality of species in the unknown image handled by the image classification device can be classified.
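A minimal sketch of this classification loop is shown below; knowledge_data maps each species to its example images DI1 to DI4, and matches is an assumed helper deciding whether a target fits one species' knowledge data.

    def classify_image(targets, knowledge_data, matches):
        classified, unmatched = {}, []
        for target in targets:                                  # compare each extracted target
            for species, examples in knowledge_data.items():    # knowledge data set in step S701
                if matches(target, examples):
                    classified.setdefault(species, []).append(target)  # step S706: save the match
                    break
            else:
                unmatched.append(target)                        # step S707: no matching object
        return classified, unmatched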
According to the image classification device of the present embodiment, when different objects (objects of a plurality of species) exist in one image, for example, the number and distribution of these different objects can be grasped. More specifically, on an image of blood, for example, an analysis of the proportions in which the respective components exist in the blood can be carried out more correctly. In addition, for a certain object (a component in the blood), an analysis of the proportions in which normal ones and abnormal ones exist can be carried out.

Claims (10)

1. An image processing apparatus which converts a target extracted from unknown image data of a photographed unknown object and extracts a new target corresponding to a known object, characterized by comprising the following units:
a layer generation unit, which generates a layer composed of an aggregate of segmented images including the target extracted from the unknown image data;
an initial candidate target determining unit, which determines, with respect to a target included in the layer generated by the layer generation unit and a target obtained by converting that target, whether it is to serve as the initial candidate target;
a candidate target conversion unit, which converts the initial candidate target using targets other than said initial candidate target included in the layer generated by the layer generation unit, thereby generating a new candidate target;
a similarity measuring unit, which measures the similarity between a target and the known object;
a target determination unit, which, based on the similarity measurement result measured by the similarity measuring unit, decides which of the new candidate target generated by the conversion of the candidate target conversion unit and the candidate target selected before the conversion is to be saved as the determined target;
wherein the initial candidate target determining unit determines said initial candidate target based on the measurement result of the similarity measuring unit; the target determination unit determines, based on the similarity measured by the similarity measuring unit and on a threshold prescribed in advance, whether a target is to be kept as the determined target;
the target determination unit, based on the similarity measurement result measured by the similarity measuring unit, compares the new candidate target generated by the conversion of the candidate target conversion unit with the candidate target before the conversion, judges whether to repeat the conversion action of the candidate target conversion unit or to terminate that action, and saves the candidate target obtained by that judgment as the determined target;
the initial candidate target determining unit takes, among the various targets generated by the layer generation unit, a target whose similarity exceeds the similarity threshold of the similarity measuring unit as the initial candidate target, and the similarity measuring unit uses, as the criterion for judgment, a degree of certainty obtained by quantifying the similarity between the extracted target and said known object.
2. The image processing apparatus according to claim 1, characterized in that: the layer generation unit has a 1st layer generation unit that generates a layer composed of a target complex including said initial candidate target, and a 2nd layer generation unit that generates a special layer composed only of a target complex not including said initial candidate target; the 2nd layer generation unit generates, as said special layer, an attendant layer composed only of subsidiary targets, the subsidiary targets being targets which are excluded from the various targets forming the candidate layer in the image processing by which targets are extracted from said unknown image data, and which are attached to other targets in the conversion by the candidate target conversion unit.
3. The image processing apparatus according to claim 2, characterized in that: the layer generation unit further has a 3rd layer generation unit, which generates other candidate layers, the other candidate layers being composed of aggregates of segmented images related to other candidate targets extracted for other known objects different from said known object; when generating a new candidate target, the candidate target conversion unit excludes said other candidate targets included in said other candidate layers from the objects of the conversion action.
4. The image processing apparatus according to claim 2, characterized in that: the layers generated by the layer generation unit form a layer group having a hierarchical structure, containing relative upper and lower layers, the upper layer containing parent targets, each parent target containing, as child targets, the multiple targets that form the lower layer; the initial candidate target determining unit determines the initial candidate target from the targets belonging to a lower layer in the layer group of the hierarchical structure generated by the layer generation unit, and the candidate target conversion unit generates a new candidate target by attaching other targets to said initial candidate target; the initial candidate target determining unit also determines the initial candidate target from the targets belonging to said upper layer in the layer group of the hierarchical structure generated by the layer generation unit, and the candidate target conversion unit, with respect to that initial candidate target, generates a new candidate target by removing a part of the child targets contained in said initial candidate target.
5. The image processing apparatus according to claim 4, characterized in that: for an existing segmented image, the layer generation unit measures the size of each segmented image, compares the measured image data size of the segmented image with the image data size of the corresponding known object, further segments the segmented image as needed, and generates a layer based on the new targets.
6. An image processing method which converts a target extracted from unknown image data of a photographed unknown object and extracts a new target corresponding to a known object, having the following processes:
a layer generation process, which generates a layer composed of an aggregate of segmented images including the target extracted from said unknown image data;
an initial candidate target determining process, which determines, from the targets contained in the layer generated by the layer generation process, the initial candidate target, i.e., the processing object to be converted as the initial candidate;
a candidate target conversion process, which generates a new candidate target by converting said initial candidate target using targets other than said initial candidate target included in the layer generated by the layer generation process;
a similarity measuring process, which measures the similarity between a target and the known object;
a target determination process, which, based on the similarity measurement result measured by the similarity measuring process, decides which of the new candidate target generated by the conversion of the candidate target conversion process and the candidate target selected before the conversion is to be saved as the determined target;
wherein the initial candidate target determining process determines said initial candidate target based on the measurement result of the similarity measuring process; the target determination process determines, based on the similarity measured by the similarity measuring process and on a threshold prescribed in advance, whether a target is to be kept as the determined target;
the target determination process, based on the similarity measurement result measured by the similarity measuring process, compares the new candidate target generated by the conversion of the candidate target conversion process with the candidate target before the conversion, judges whether to repeat the conversion action of the candidate target conversion process or to terminate that action, and saves the candidate target obtained by that judgment as the determined target;
the initial candidate target determining process takes, among the various targets generated by the layer generation process, a target whose similarity exceeds the similarity threshold of the similarity measuring process as the initial candidate target, and the similarity measuring process uses, as the criterion for judgment, a degree of certainty obtained by quantifying the similarity between the extracted target and said known object.
7. An image recognition device comprising the image processing apparatus according to any one of claims 1 to 5, the image recognition device determining, based on the new target extracted by the image processing apparatus from said unknown image data, whether a known object is present.
8. An image recognition method for performing recognition of a known object from unknown image data of a photographed unknown object, comprising the image processing method according to claim 6 and a determining process of determining, based on the new target extracted from said unknown image data by the image processing method, whether a known object is present.
9. An image classification device for performing classification of known objects from unknown image data of a photographed unknown object, comprising the image processing apparatus according to any one of claims 1 to 5, and performing the classification of known objects based on the new target extracted from said unknown image data by the image processing apparatus.
10. An image classification method for performing classification of known objects from unknown image data of a photographed unknown object, comprising the image processing method according to claim 6, and performing the classification of known objects based on the new target extracted from said unknown image data by the method.