CN108664840A - Image-recognizing method and device - Google Patents
- Publication number: CN108664840A (application CN201710188186.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- target
- recognizing method
- identification
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/001—Industrial image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
- G06F18/2193—Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/179—Human faces, e.g. facial parts, sketches or expressions metadata assisted face recognition
Abstract
The image-recognizing method and device provided by the invention detect a target object in a received image and determine a target image from that image; perform quality evaluation on the target image to determine a corresponding evaluation value, and judge whether the evaluation value meets a preset threshold; and, when the evaluation value is not less than the preset threshold, compare the target image with pre-stored target data to identify the target and determine a target identification result. In particular, when the quality evaluation yields an evaluation value less than the preset threshold, identification of the current image is abandoned and the next received image is processed in the same way as early as possible. This substantially reduces image-processing time, preserves identification accuracy, and speeds up the identification process. Moreover, the model used for detecting the target object is the same as the evaluation model, which saves model storage space, avoids configuring a second model, and reduces cost, making the invention well suited to identifying target objects, especially to face recognition.
Description
Technical field
The present invention relates to the technical field of image identification, and more particularly to an image-recognizing method and device.
Background technology
Existing image recognition is widely used across industries, typically to detect objects of interest. Detection is computer-based: the provided image is analyzed and processed so that the object to be detected can be extracted from it. Image recognition technology is applied in video, identity verification, image-based interface search, monitoring systems, and various other fields. However, because the target is identified by a non-contact method, the shooting environment or the capture equipment can degrade the captured image, for example through blur, low illumination, or backlighting. Degradation of the image itself can be screened out by existing image-quality evaluation methods, but degradation of the image content cannot, for example damage to the content caused by algorithm failure, or an excessive facial angle during face recognition. When a device attempts to screen and identify an image whose content has degraded, it cannot produce a result, yet it spends a great deal of time trying. This not only wastes time and reduces the identification accuracy of the device, but may also produce an erroneous match.
Summary of the invention
Based on this, it is necessary to address at least one of the problems mentioned above by providing an image-recognizing method and, correspondingly, an image-recognition device.
An image-recognizing method, including:
Detecting a target object in a received image, and determining from the image a target image corresponding to the target object;
Performing quality evaluation on the target image;
Performing identification processing on the target image to determine a target identification result.
Further, the step of performing quality evaluation on the target image includes: determining an evaluation value of the target image, and judging whether the evaluation value meets a preset threshold.
Further, the step of performing quality evaluation on the target image to determine the corresponding evaluation value, and judging whether the evaluation value meets the preset threshold, further includes:
When the evaluation value of the target image is less than the preset threshold, exiting identification of the target object in the current frame image and proceeding to the next frame image for target-object identification;
When the evaluation value of the target image is not less than the preset threshold, performing identification processing on the target image to determine a target identification result.
In one embodiment, the step of detecting a target object in the received image and determining from the image a target image corresponding to the target object further includes:
Receiving an image, detecting a target object in the image, and obtaining the target region where the target object is located in the image;
Obtaining a target-region image according to the target region, and extracting at least one piece of target feature information from the target-region image;
Preprocessing the target-region image according to the target feature information to obtain a normalized target image.
Further, the step of receiving an image, detecting a target object in the image, and obtaining the target region where the target object is located further includes:
Performing target-object detection on the image to determine that a target object is present in the image;
Performing detection of specified target features on the image to determine the target region in the image.
In one embodiment, the step of obtaining the target region where the target object is located in the image includes: obtaining that target region with a first target-object model.
In one embodiment, the step of performing quality evaluation on the target image includes: obtaining the evaluation value of the target image with a second target-object model.
Further, the first target-object model and the second target-object model are the same model, namely the same Adaboost (adaptive boosting) classifier used for detection.
Further, the target feature information includes the relative positions of the target features within one or more target regions and/or the area each feature occupies in the target region.
In one embodiment, the step of preprocessing the target-region image according to the target feature information to obtain the normalized target image includes:
Computing on the target-region image according to the target feature information to obtain an initially processed image;
Performing size normalization on the initially processed image to obtain a size-normalized target image;
Performing illumination normalization on the size-normalized target image to obtain the normalized target image.
Further, the step of performing size normalization on the initially processed image to obtain the size-normalized target image further includes:
Performing target-feature-information calibration on the initially processed image, where the calibration includes size calibration and/or angle calibration of the target feature information.
In one embodiment, the target image has a preset pixel size.
In one embodiment, the Adaboost classifier uses MB-LBP (multi-block local binary pattern) features, a region-based local binary pattern, to detect and extract the target feature information in the image and the target image.
In one embodiment, the Adaboost classifier uses multi-branch trees (multiway trees) as weak classifiers.
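The MB-LBP feature named in this embodiment compares the mean intensity of a centre block against its eight neighbouring blocks of the same size and packs the comparisons into an 8-bit code. A minimal sketch follows; the bit ordering and the >= comparison convention are illustrative choices, not mandated by the patent:

```python
def block_mean(img, top, left, bh, bw):
    """Mean intensity of the bh x bw block whose top-left corner is (top, left)."""
    total = sum(img[top + i][left + j] for i in range(bh) for j in range(bw))
    return total / (bh * bw)

def mb_lbp(img, top, left, bh, bw):
    """MB-LBP code for the 3x3 grid of bh x bw blocks at (top, left):
    each neighbour block whose mean is >= the centre mean sets one bit."""
    means = [[block_mean(img, top + r * bh, left + c * bw, bh, bw)
              for c in range(3)] for r in range(3)]
    centre = means[1][1]
    # clockwise from top-left; the bit order is a convention, not mandated
    neighbours = [means[0][0], means[0][1], means[0][2], means[1][2],
                  means[2][2], means[2][1], means[2][0], means[1][0]]
    code = 0
    for bit, m in enumerate(neighbours):
        if m >= centre:
            code |= 1 << bit
    return code
```

In practice the block means are computed from an integral image so that every feature costs a constant number of additions, which is what makes scanning many windows feasible.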
An image-recognition device, including:
a detection module for detecting a target object in a received image and determining from the image a target image corresponding to the target object;
an evaluation module for performing quality evaluation on the target image;
an identification module for performing identification processing on the target image to determine a target identification result.
Compared with the prior art, the image-recognizing method and device provided by the invention detect a target object in the received image to determine a target image, perform quality evaluation on the determined target image to obtain an evaluation value, and compare the evaluation value with a preset threshold. When the evaluation value is not less than the preset threshold, the subsequent identification step is carried out and a recognition result is obtained; when the evaluation value is less than the preset threshold, identification of the current image is abandoned and the next received image is processed in the same way as early as possible. This substantially reduces image-processing time, preserves identification accuracy, and speeds up the identification process.
Further, preferred embodiments of the invention can achieve the following advantages:
1. After combining computer vision with the subsequent image-processing flow, the invention adds a quality evaluation step. This step considers not only degradation of the image itself but also the influence of content degradation on facial-image identification, avoiding face deformation caused by the preprocessing algorithm, large-angle faces, and non-face regions, and thereby avoiding erroneous matches.
2. A manually set threshold rates the degree of match between the facial image and the face model: the higher the score, the better the facial image conforms to facial characteristics; the lower the score, the further it deviates from them. This improves the precision of face recognition. Meanwhile, when the score is less than the preset threshold, the identification process for the current facial image can be exited and processing of the next received image can begin as early as possible, improving the speed of face recognition.
3. The same face model is used for face detection and for face quality evaluation, which saves model storage space, avoids configuring a second model, and reduces cost.
4. The face model is trained with Adaboost, which not only makes the face-detection step faster but also lets the affinity score output by the model reflect the quality of the facial image.
Description of the drawings
Fig. 1 is a schematic diagram of the principle of image recognition in one embodiment of the invention;
Fig. 2 is a schematic diagram of the face model in one embodiment of the invention;
Fig. 3 is a structural diagram of the MB-LBP feature in one embodiment of the invention;
Fig. 4 is a structural schematic diagram of image recognition in one embodiment of the invention.
Detailed description of the embodiments
Embodiments of the invention are described in detail below, with examples shown in the accompanying drawings, in which identical or similar labels denote identical or similar elements, or elements with identical or similar functions, throughout. The embodiments described with reference to the drawings are exemplary; they serve only to explain the invention and are not to be construed as limiting the claims.
Those skilled in the art will appreciate that, unless expressly stated otherwise, the singular forms "a", "an", "said" and "the" used herein may also include the plural. It should be further understood that the wording "comprising" used in this specification refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. When an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intermediate elements may be present. In addition, "connection" or "coupling" as used herein may include wireless connection or wireless coupling. The wording "and/or" as used herein includes all of, or any unit of, one or more of the associated listed items, in combination.
Those skilled in the art will appreciate that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by a person of ordinary skill in the field of the invention. Terms such as those defined in a general dictionary should be understood to have meanings consistent with their meaning in the context of the prior art and, unless specifically defined as here, will not be interpreted with idealized or overly formal meanings.
Image recognition is typically performed by a terminal on an image it has just captured or that other equipment has transmitted, mainly to identify the target object in the image. Before identification, the object extracted from the image needs to be preprocessed to avoid the influence of the capture equipment, the shooting environment, and the content of the shot on image quality, and thereby to improve the precision of image recognition. In particular, this can target face recognition, realizing face recognition in different industries and fields such as monitoring, inquiry, security, and human-computer interaction.
In one embodiment of the invention, as shown in Figure 1, an image-recognizing method is provided, which specifically includes the following steps:
S100: Detecting a target object in the received image, and determining from the image a target image corresponding to the target object.
After a terminal device of any type, such as a server or computer, receives an original image of some scene, it detects whether the object to be detected is present in that original image. Once the target object is detected, the image corresponding to it is extracted from the original image to obtain the target image. The extraction method is based on a model constructed for objects of the same type: the model covers the forms of same-type objects at different angles, i.e., the target-object model is a set of models of different forms of the same object type. The target-object model is a mathematical model established by analysis, using statistics combined with the attributes of same-type objects. Before the target image is obtained, it needs to be preprocessed in order to facilitate further evaluation and identification and improve identification precision; the specific processing uses a classifier trained with the Adaboost algorithm.
Specifically, after a terminal device of any type, such as a server or computer, receives the original image of some scene, it detects whether a face is present in the original image and separates the facial image from the original image, obtaining the facial image needed for subsequent identification. Face detection is performed according to a face model constructed in advance, which covers faces of different shapes and at different angles. Before the facial image is obtained it needs to be preprocessed for the subsequent evaluation, so that the evaluation is more precise. The specific face detection and facial-image preprocessing are described in detail below; the processing uses the Adaboost algorithm, and the face model is a mathematical model established by analysis combining the principles of biostatistics.
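The detect-then-extract flow just described can be sketched, in highly simplified form, as a sliding-window search followed by cropping the best-scoring region. The `patch_score` function below is a hypothetical stand-in for the trained Adaboost face model, not the patent's classifier:

```python
def patch_score(image, r, c, size):
    """Stand-in for the trained detector: here, mean brightness of the patch.
    A real Adaboost cascade would return a classifier confidence instead."""
    total = 0
    for i in range(r, r + size):
        for j in range(c, c + size):
            total += image[i][j]
    return total / (size * size)

def detect_target(image, size, threshold):
    """Slide a size x size window over the image; return the best-scoring
    window as (row, col, score), or None if no window clears the threshold."""
    rows, cols = len(image), len(image[0])
    best = None
    for r in range(rows - size + 1):
        for c in range(cols - size + 1):
            s = patch_score(image, r, c, size)
            if s >= threshold and (best is None or s > best[2]):
                best = (r, c, s)
    return best

def crop(image, r, c, size):
    """Extract the target-region image from the original image."""
    return [row[c:c + size] for row in image[r:r + size]]
```

A production detector scans multiple window scales and evaluates a cascade of boosted stages per window; the structure above only illustrates the search-then-crop control flow.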
S200: Performing quality evaluation on the target image.
To improve the accuracy and speed of image recognition and avoid spending a great deal of time identifying images whose content has degraded, the target image obtained in step S100 is assessed and an evaluation value is obtained. The evaluation value is compared with a threshold set manually in advance to judge the quality of the target image: the higher the evaluation value, the better the quality of the target image; the lower the evaluation value, the poorer its quality. When assessing the target image, it is likewise input into the model constructed for same-type objects, which also contains models of different forms of the same object type, i.e., the target-object model is a set of models of different forms of the same object type. The model is a mathematical model established on the basis of statistics plus analysis of the object's attributes, and it is constructed with the Adaboost algorithm.
Specifically, in face recognition, the facial image of the target has been obtained through step S100. The facial image is input into the face model constructed manually in advance, which contains faces of different shapes and at different angles. The face model gives an evaluation value to the facial image input into it, and that value is compared with the preset threshold. When the evaluation value of the facial image is less than the threshold, the similarity between the facial image and the face model is low: the facial image matches the face model too poorly to be suitable for face recognition; or the facial image was damaged by an algorithm failure while being obtained (for example a large-angle face, face deformation, or a non-face region); or the evaluation value is low because the captured image itself is distorted, oddly posed, or affected by the environment, such as low illumination or blur, making it unsuitable for face recognition. When the evaluation value of the facial image is not less than the threshold, the quality of the facial image is suitable for face recognition. The face model is a mathematical model established by analysis using the principles of biostatistics and constructed with the Adaboost algorithm.
S300: Performing identification processing on the target image to determine a target identification result.
On the basis of step S200, if the evaluation value of the target image is not less than the preset threshold, the target image is compared with the pre-stored data of same-type objects for the field or occasion in which it is applied, the target is identified, and the recognition result of the target is thereby determined.
Specifically, on the basis of step S200, when the evaluation value of the facial image is not less than the preset threshold, the facial image is compared with a stored database of faces to obtain the recognition result for that face. For example, in the security field, if the evaluation value of the facial image of the person photographed is not less than the preset value, the facial image is compared with pre-stored face data to obtain the person's identity information, and thus to determine whether that person is allowed to enter.
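The comparison against pre-stored face data can be sketched as nearest-neighbour matching of feature vectors. The feature extractor is out of scope here, so the vectors below stand in for whatever embedding the deployed system produces; the gallery names and acceptance threshold are illustrative, not from the patent:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_face(query, gallery, accept_threshold=0.9):
    """Compare a query feature vector with each pre-stored entry and return
    (person_id, similarity) of the best match; return None if no entry is
    similar enough, e.g. to decide whether a person may enter."""
    best_id, best_sim = None, -1.0
    for person_id, vec in gallery.items():
        sim = cosine_similarity(query, vec)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    if best_sim >= accept_threshold:
        return best_id, best_sim
    return None
```

The acceptance threshold plays the same role as the quality threshold above: below it, the device reports no match rather than risk an erroneous one.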
Further, the step of performing quality evaluation on the target image includes: determining the evaluation value of the target image and judging whether the evaluation value meets the preset threshold.
To improve the accuracy and speed of image recognition and avoid spending a great deal of time identifying images with degraded content, the target image obtained in step S100 is assessed, and the evaluation value is compared with the preset threshold to judge the quality of the target image; the details are described above.
Further, step S200, performing quality evaluation on the target image to determine a corresponding evaluation value and judging whether the evaluation value meets the preset threshold, further includes:
S210: When the evaluation value of the target image is less than the preset threshold, exiting identification of the target object in the current frame image and proceeding to the next frame image for target-object identification.
Likewise on the basis of step S200, if the evaluation value of the target image is less than the preset threshold, identification of that image is abandoned and target-object identification proceeds with the next frame image. The next frame image may be the frame that follows in frame order within a burst of frames of the same object captured in one continuous shot, a frame that follows in a regular order among different frames of the same object captured at different times, or a frame image of another object of the same type. This saves the time spent recognizing the image, improving both the precision and the speed of identification; proceeding to the next frame means carrying out the above processing, i.e., steps S100-S300, on that frame.
Specifically, likewise on the basis of step S200, when the evaluation value of the facial image is less than the preset threshold, identification of that image is no longer carried out: identification of the current facial image is exited, and the next frame image enters target-object identification. The next frame image may be the frame that follows in frame order within a burst of frames of the same person captured in one continuous shot, a frame that follows in a regular order among frames of the same person captured at different times, or a frame image of another person. This saves identification time and prevents low-quality facial images from reducing identification precision and speed.
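The early-exit behaviour of steps S210 and S220 can be sketched as a per-frame loop in which frames whose evaluation value falls below the threshold are skipped without entering the costly comparison stage. The quality and recognition functions are stand-ins supplied by the caller, not the patent's models:

```python
def process_stream(frames, quality_fn, recognize_fn, threshold):
    """For each frame: evaluate quality (S200); if the evaluation value is
    below the threshold, skip to the next frame (S210); otherwise run the
    full identification step (S220). Returns (frame_index, result_or_None)
    pairs, None marking frames whose identification was exited early."""
    results = []
    for i, frame in enumerate(frames):
        score = quality_fn(frame)
        if score < threshold:
            results.append((i, None))  # exit identification for this frame
            continue
        results.append((i, recognize_fn(frame)))
    return results
```

The saving comes from `recognize_fn` (the database comparison) being far more expensive than `quality_fn` (a single pass through the already-loaded model).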
S220: When the evaluation value of the target image is not less than the preset threshold, performing identification processing on the target image to determine a target identification result.
On the basis of step S200, if the evaluation value of the target image is not less than the preset threshold, the target image is compared with the pre-stored data of same-type objects for the field or occasion in which it is applied, the target is identified, and the recognition result of the target is thereby determined; the details are described above.
In one embodiment, step S100, detecting a target object in the received image and determining from the image a target image corresponding to the target object, further includes:
S110: Receiving an image, detecting a target object in the image, and obtaining the target region where the target object is located in the image.
As above, after a terminal device of any type, such as a server or computer, receives the original image of some scene and detects a target object in it, the region of the target object is extracted from the original image, i.e., the image of the target part is extracted from the original image. Both detecting the target object and extracting the target region are performed according to the target-object model constructed in advance as described above.
Specifically, after a terminal device such as a server or computer receives the original image of some scene and detects that a face is present in it, the face region is extracted from the original image, i.e., the image of the face part of the person is extracted; detecting the face and extracting the face region are carried out according to the face model constructed in advance as described above.
S120: Obtaining the target-region image according to the target region, and extracting at least one piece of target feature information from the target-region image.
According to the target region extracted in step S110, the image of the target region is obtained, and at least one piece of target feature information for identification is extracted from it. Target feature information consists of manually designated distinctive target features that clearly identify the target object, including the target features themselves, their contours, the area they occupy in the target-region image, and the relative positions of the features.
Specifically, as in the face recognition above, after the face region is extracted on the basis of step S110, the image of the face region, i.e., the facial image, is obtained as the target region, and the manually designated facial feature information is extracted from the facial image. The facial feature information includes the eyes, nose, contour, lips, eyebrows, etc., and their shapes, and further includes the area each facial feature occupies in the face region and the relative positions between the features. The information on each feature is mainly obtained with SDM (the Supervised Descent Method). SDM is mainly used to minimize a nonlinear least-squares function; here it performs facial feature-point detection through optimization of an objective function, i.e., the function describing each facial feature is solved by optimization.
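Once feature points such as the two eye centres have been located (by SDM or otherwise), the rotation angle and scale factor needed by the size and angle calibration of step S130 follow from simple geometry. This small sketch assumes two eye-centre coordinates and a desired inter-ocular distance as hypothetical inputs:

```python
import math

def alignment_params(left_eye, right_eye, target_eye_dist):
    """From two eye centres (x, y), derive the in-plane rotation angle (in
    degrees) that makes the eye line horizontal, and the scale factor that
    brings the inter-ocular distance to target_eye_dist."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.degrees(math.atan2(dy, dx))  # rotate by -angle to level the eyes
    dist = math.hypot(dx, dy)
    scale = target_eye_dist / dist
    return angle, scale
```

Applying the resulting rotation and scaling (plus a translation fixing the eye positions) yields the standardized, size-normalized face that the later steps assume.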
S130: Preprocessing the target-region image according to the target feature information to obtain a normalized target image.
According to the target feature information extracted in step S120 (the target features, their contours, the area they occupy in the target-region image, and the relative positions of the features), the target-region image is preprocessed. Preprocessing includes correcting variations in the size, illumination, and rotation angle of the target features, so as to obtain a standardized target image. Correction methods include computing on the target-region image with a scaling formula, and preprocessing the illumination of the image by gamma transformation, difference-of-Gaussian filtering, and contrast equalization; the specific methods are described in detail later.
Specifically, according to the facial feature information extracted in step S120 (the eyes, nose, contour, lips, eyebrows, etc., and their shapes, together with the area each feature occupies in the face region and the relative positions between the features), the face-region image is preprocessed. This includes correcting the size of the eye region, nose, contour, lips, eyebrows, etc., the illumination of the whole face-region image, the rotation angle of each feature, and the rotation angle of the whole face-region image, so as to obtain a standardized frontal facial image. Correction methods include computing on the target-region image with a scaling formula, and preprocessing the illumination of the image by gamma transformation, difference-of-Gaussian filtering, and contrast equalization; the specific methods are described in detail later.
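The three illumination-preprocessing operations named above (gamma transformation, difference-of-Gaussian filtering, and contrast equalization) can be sketched on a plain 2-D list of intensities. The parameter values below are common defaults, not values specified by the patent:

```python
import math

def gamma_transform(img, gamma=0.2):
    """Power-law compression of the intensity range (values in [0, 1])."""
    return [[v ** gamma for v in row] for row in img]

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with a truncated kernel and edge clamping."""
    radius = max(1, int(3 * sigma))
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    s = sum(kernel)
    kernel = [k / s for k in kernel]
    h, w = len(img), len(img[0])

    def conv1d(get, n):
        out = []
        for i in range(n):
            acc = 0.0
            for k, kv in enumerate(kernel):
                j = min(max(i + k - radius, 0), n - 1)  # clamp at borders
                acc += kv * get(j)
            out.append(acc)
        return out

    rows = [conv1d(lambda j, r=r: img[r][j], w) for r in range(h)]
    cols = [conv1d(lambda j, c=c: rows[j][c], h) for c in range(w)]
    return [[cols[c][r] for c in range(w)] for r in range(h)]

def dog_filter(img, sigma1=1.0, sigma2=2.0):
    """Difference of Gaussians: suppresses slow illumination gradients."""
    a, b = gaussian_blur(img, sigma1), gaussian_blur(img, sigma2)
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def contrast_equalize(img, alpha=0.1):
    """Rescale by a robust global contrast measure, mean(|v|^alpha)^(1/alpha)."""
    flat = [abs(v) for row in img for v in row]
    norm = (sum(v ** alpha for v in flat) / len(flat)) ** (1.0 / alpha)
    return [[v / (norm + 1e-8) for v in row] for row in img]
```

Chained in that order (gamma, then DoG, then contrast equalization) this is the standard shape of an illumination-normalization pipeline; production code would use optimized filtering routines rather than the nested loops above.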
Further, step S110, receiving an image, detecting a target object in the image, and obtaining the target region where the target object is located, further includes:
S111: Performing target-object detection on the image to determine that a target object is present in the image.
As in the detection of the target object in steps S100 and S110 above, the target object to be detected is first detected in the original image using the Adaboost algorithm; once the presence of the target object is detected, processing proceeds to the subsequent step. The specific detection method is described in detail later.
Specifically, for facial images in steps S100 and S110, the face to be detected is first detected in the original image using the Adaboost algorithm; once a face is detected, processing proceeds to the next determination. The specific detection method is described in detail later.
S112: Performing detection of specified target features on the image to determine the target region in the image.
In step S112, the target object is detected using the Adaboost algorithm; on the basis of the detected target object, the AdaBoost algorithm is applied again to detect the target features, and the basic principle of target-feature detection is the same as that of target-object detection.
Specifically, in step S112 the face is detected using the Adaboost algorithm; on the basis of the detected face, the AdaBoost algorithm is applied to eye detection, and the basic principle of eye detection is the same as that of face detection.
In one embodiment, step S100, obtaining the target region where the target object is located in the image, includes: obtaining the target region where the target object is located in the image with a first target object model.
Steps S100 and S110 have outlined the detection of the target image and the detection and extraction of the target region, in which target object models constructed with the AdaBoost algorithm are used; likewise, in steps S100 and S110 the AdaBoost algorithm can construct a face model for detecting the face region and separating it out.
In one embodiment, step S200, performing quality assessment on the target image, includes: obtaining the evaluation value of the target image with a second target object model.
Step S200 has outlined the target object model and explained that it is constructed by the AdaBoost algorithm; specifically, the mathematical model established in the face model by analysis based on the principles of biostatistics is likewise constructed by the AdaBoost algorithm.
Further, the first target object model and the second target object model are the same model, namely the same AdaBoost (adaptive boosting) classifier used for detection.
AdaBoost is an iterative algorithm whose core idea is to combine a set of weak classifiers into a stronger classifier. The specific method is as follows. Among all candidate weak classifiers, the weak classifier that minimizes the classification error function is selected. If a sample point is classified accurately, its weight is decreased in the training set used to construct the next classifier; conversely, if a sample point is not classified accurately, its weight is increased. The sample set with the updated weights is then used to train the next weak classifier, and the whole training process proceeds iteratively in this way.
The error function may be defined as:
J = Σ_i w_i (y_i − f(x_i))², i = 1, …, N
where N is the number of samples, w_i is the weight of sample i, y_i is the label of sample i, f is a weak classifier, and x_i is the feature of sample i. The final strong classifier is defined as:
F(x) = Σ_m f_m(x)
where f_m is the weak classifier selected in iteration m.
Training stops when the detection rate and accuracy of the strong classifier on the test sample set reach given thresholds, or when all weak classifiers have been used. The strong classifier score F(x) distinguishes the target object from the background and can also serve as a similarity score between the object and the target object model; specifically, the strong classifier score F(x) distinguishes the face from the background and also serves as the similarity score between the image and the face model.
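The iterative training loop described above can be sketched as follows. This is an illustrative reimplementation of generic discrete AdaBoost with threshold "stump" weak classifiers on one-dimensional features, not the patented implementation (whose weak classifiers are the multi-branch trees over MB-LBP codes described later):

```python
import math

def train_adaboost(xs, ys, rounds):
    """Discrete AdaBoost on 1-D features with threshold stumps.

    xs: list of scalar features; ys: labels in {-1, +1}.
    Returns a list of (alpha, threshold, polarity) weak classifiers.
    """
    n = len(xs)
    w = [1.0 / n] * n                        # uniform initial sample weights
    thresholds = sorted(set(xs))
    cuts = [t - 0.5 for t in thresholds] + [thresholds[-1] + 0.5]
    model = []
    for _ in range(rounds):
        best = None                          # (error, threshold, polarity)
        for t in cuts:
            for pol in (+1, -1):             # pol=+1: predict +1 when x > t
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if (pol if x > t else -pol) != y)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)    # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)  # weight of this weak classifier
        model.append((alpha, t, pol))
        # Increase weights of misclassified samples, decrease the rest.
        preds = [(pol if x > t else -pol) for x in xs]
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, preds)]
        s = sum(w)
        w = [wi / s for wi in w]
    return model

def strong_score(model, x):
    """F(x): weighted sum of the weak classifier votes."""
    return sum(a * (pol if x > t else -pol) for a, t, pol in model)
```

Here F(x) plays the role described above: its sign separates object from background, and its magnitude can be used as a similarity or quality score.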
Specifically, after face detection, facial target feature extraction and face image correction, the face image enters the face quality assessment step S200. When face image quality is assessed, the model trained for face detection is used directly, and the score output by AdaBoost is taken as the similarity score between the image and the face model. In the flow shown in Fig. 2, the preset threshold A below which a face image is unusable for identification is 450. When the quality assessment value B of the face image satisfies B < A, the face image is considered not to meet the requirements of subsequent face recognition; when B ≥ A, the face image is considered to meet the requirements of subsequent face recognition and enters the subsequent identification step, e.g. step S300. The value 450 was obtained by testing: at this value, as many low-quality images as possible are removed while clear face images are retained.
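The gating logic of Fig. 2 can be sketched as follows; the threshold 450 is the patent's empirically chosen value, while `score_fn` and `recognize_fn` are hypothetical callables standing in for the AdaBoost scorer and the downstream recognition step:

```python
QUALITY_THRESHOLD_A = 450  # empirically chosen threshold from the patent's tests

def process_frame(face_image, score_fn, recognize_fn):
    """Gate a detected face image on its quality score B.

    score_fn:     returns the AdaBoost strong-classifier score B for the image.
    recognize_fn: the subsequent recognition step (e.g. step S300).
    Returns the recognition result, or None when the frame is rejected
    (the caller then moves on to the next frame).
    """
    b = score_fn(face_image)
    if b < QUALITY_THRESHOLD_A:          # B < A: quality insufficient
        return None                      # exit recognition for this frame
    return recognize_fn(face_image)      # B >= A: proceed to recognition
```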
Further, the target feature information includes the relative positions of the target features within one or any number of target regions and/or the area each feature occupies within the target region.
The target feature information has been outlined in step S130, where the target features specifically include those within one or any number of the target regions. This is mainly because the target object's region may be captured incompletely, or the image content may be damaged by the algorithm before the target feature information is extracted, so that the target region does not contain all of the target features and only part of the target feature information can be extracted. The target feature information includes the relative positions of the target features within the target region and the area each feature occupies in the target region; more specifically, it further includes the shape and size of the target features.
Specifically, as with the face recognition outlined in step S130, target feature information is extracted from the face region on the basis of step S120, namely the information on the eyes, nose, contour, lips, eyebrows and their shapes described in step S120, as well as the area each facial feature occupies in the face region and the relative positions between the features; this information is used for the calculation and correction of the subsequent steps.
In one embodiment, step S130, pre-processing the target region image according to the target feature information to obtain the normalized target image, includes:
S131: calculating the target region image according to the target feature information to obtain an initially processed image;
The target region image is calculated according to the target feature information to obtain an initially processed target image within a preset specification range. Because images used for conventional identification must lie within a set range, only an image whose size is within that range can be used for target object identification. If the original image is larger, the target region image and the target features can be calculated according to a scaling formula or the like, so that the shape of the target features, their size and the area they occupy in the target region are scaled proportionally, yielding a target image within the standardized size range for quality assessment and subsequent identification. In the target image computed directly from the scaling formula, some mapped source coordinates may not be integers, so the corresponding pixel positions cannot be found and further approximation is required. The main approximation methods are nearest-neighbor interpolation, bilinear interpolation, higher-order interpolation, Lagrange interpolation, Newton interpolation, etc. Specifically, in face recognition the face region image is calculated according to the previously described target feature information extracted from the face region: the face region image and the target features are calculated according to the aforementioned scaling formula or the like, so that the shapes of the target features (eyes, nose, contour, lips, eyebrows), the face size and the area occupied in the target region are scaled proportionally, yielding a target image within the standardized size range for quality assessment and subsequent identification.
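The approximation step can be illustrated with bilinear interpolation, one of the methods listed above. A minimal sketch for a grayscale image stored as a list of rows (illustrative, not the patented scaling formula):

```python
def bilinear_resize(img, out_h, out_w):
    """Resize a grayscale image (list of rows) with bilinear interpolation.

    Maps each destination pixel back to fractional source coordinates and
    blends the four surrounding source pixels; assumes out_h, out_w >= 2.
    """
    in_h, in_w = len(img), len(img[0])
    out = []
    for r in range(out_h):
        sy = r * (in_h - 1) / (out_h - 1)      # fractional source row
        y0 = int(sy)
        y1 = min(y0 + 1, in_h - 1)
        fy = sy - y0
        row = []
        for c in range(out_w):
            sx = c * (in_w - 1) / (out_w - 1)  # fractional source column
            x0 = int(sx)
            x1 = min(x0 + 1, in_w - 1)
            fx = sx - x0
            top = (1 - fx) * img[y0][x0] + fx * img[y0][x1]
            bot = (1 - fx) * img[y1][x0] + fx * img[y1][x1]
            row.append((1 - fy) * top + fy * bot)
        out.append(row)
    return out
```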
S132: performing size normalization on the initially processed image to obtain the size-normalized target image;
Size normalization of the initially processed image re-calibrates the angle and size of the target features in the target region. The angle calibration includes rotation around the origin and image rotation centered on an arbitrary point. Rotating the initially processed image may increase its pixel count and enlarge the image beyond the prescribed range of images used for identification, so the image then needs to be cropped to keep the size of the target image within the set range. Specifically, once the aforementioned face image has been calculated to obtain a target image within the standardized size range, if the angle of the face image is not within the standardized range, rotational correction is applied to the face image so that its angle stays within the standardized range, which facilitates subsequent identification; the rotation may be performed around the origin or centered on an arbitrary point.
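Rotation centered on an arbitrary point can be sketched with inverse mapping plus nearest-neighbor approximation (an illustrative sketch; the patent does not fix a particular implementation):

```python
import math

def rotate(img, angle_deg, cx, cy):
    """Rotate a grayscale image (list of rows) by angle_deg around (cx, cy).

    Uses inverse mapping: each destination pixel is traced back to its
    source position and the nearest source pixel is taken; destinations
    that fall outside the source image are filled with 0.
    """
    h, w = len(img), len(img[0])
    cos_t = math.cos(math.radians(angle_deg))
    sin_t = math.sin(math.radians(angle_deg))
    out = [[0] * w for _ in range(h)]
    for r in range(h):          # r is the destination row (y)
        for c in range(w):      # c is the destination column (x)
            # inverse rotation of (c, r) about the center (cx, cy)
            sx = cos_t * (c - cx) + sin_t * (r - cy) + cx
            sy = -sin_t * (c - cx) + cos_t * (r - cy) + cy
            sr, sc = round(sy), round(sx)   # nearest-neighbor approximation
            if 0 <= sr < h and 0 <= sc < w:
                out[r][c] = img[sr][sc]
    return out
```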
S133: performing illumination normalization on the size-normalized target image to obtain the normalized target image.
After the size-normalized target image is obtained, because the shooting environment, intermediate processing and other factors leave the target image with low illumination, low contrast and the like, illumination normalization is applied so that the lighting condition of the target image becomes suitable for identification. Illumination normalization includes gamma transformation, difference-of-Gaussians filtering and contrast equalization. Gamma transformation is a nonlinear transformation within a gray-level range: by varying the parameter gamma, the gray levels of the original image are transformed so as to effectively enhance the gray-level range of low-light regions in the image while compressing the gray-level range of highlight regions, thereby controlling the overall brightness of the image. Its basic principle is that the intensity of light reflected by a subject is determined by the incident light intensity and the surface reflectance, and the reflectance carries the structural information of the object; a logarithmic transformation of the image can therefore yield object structure information independent of illumination. Difference-of-Gaussians filtering achieves a band-pass filtering effect similar to the second derivative of a Gaussian function: taking the difference of two Gaussian responses, one for the center and one for the surround, amounts to Gaussian smoothing followed by second-order differentiation of the smoothed result, which realizes image edge detection and localization. Difference-of-Gaussians filtering can also eliminate or reduce the noise, distortion and degradation introduced into the image, improving image quality and thereby the accuracy of face recognition. The purpose of contrast equalization is to re-adjust the gray levels of the whole image so that the overall contrast and brightness variation of the image tend toward a standard.
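The gamma transformation and contrast equalization steps can be sketched as follows. This is a simplified illustration for a grayscale image as a list of rows; the patent does not fix parameter values, and the target mean and spread used here are assumptions:

```python
def gamma_transform(img, gamma):
    """Nonlinear gray-level mapping: out = 255 * (in / 255) ** gamma.

    gamma < 1 expands the gray-level range of dark regions and compresses
    highlights; gamma > 1 does the opposite.
    """
    return [[round(255 * (p / 255) ** gamma) for p in row] for row in img]

def contrast_equalize(img, target_mean=127.0, target_std=32.0):
    """Re-center and re-scale the global gray-level distribution."""
    pixels = [p for row in img for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5 or 1.0                     # avoid division by zero

    def remap(p):
        v = (p - mean) / std * target_std + target_mean
        return min(255, max(0, round(v)))       # clip to the 8-bit range

    return [[remap(p) for p in row] for row in img]
```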
Specifically, face image information is easily shifted away from its essential colors by the color of the light source and the color deviation of the image capture device, leaving the image color too warm, too cold, too blue, too yellow and so on; the processes of computer storage, transmission and processing also tend to produce color distortion. These factors cause large changes in the gray values and contrast of the image, which often affect the performance of face recognition, so effective illumination normalization must be carried out in the face pre-processing stage. After size normalization, the aforementioned face image undergoes the gamma transformation, difference-of-Gaussians filtering and contrast equalization of illumination normalization, which reduce the influence of the light source color and of the color deviation of the image capture device, so as to obtain the visual factors necessary for face recognition.
Further, step S132, performing size normalization on the initially processed image to obtain the size-normalized target image, further includes:
Step S1321: performing target feature information calibration on the initially processed image, the target feature information calibration including calibration of target feature size and calibration of angle.
This step mainly corrects the size and angle of the target feature information in the target image, so that the target image becomes a "standard image" within the standardized range, such as the standardized frontal face image described above. The concrete implementation follows the preceding method flow and is not repeated here.
In one embodiment, the target image pixel value is a preset pixel value.
For ease of calculation and identification, in the normalization processing, e.g. steps S131, S132 and S133, the pixel values of the target image are preset; the target image pixel value may be a fixed value or a pixel value range. For example, after steps S131, S132 and S133 the pixel values of the aforementioned face image are adjusted into the preset pixel value range or to the preset fixed pixel value.
In one embodiment, the AdaBoost detects and extracts the target feature information in the image and in the target image using MB-LBP, a region-based local binary pattern.
In the present embodiment, MB-LBP features serve as the input of the weak classifiers. MB-LBP is a 3×3 grid of regions that can be placed at any position of the image. The structure of MB-LBP can be as shown in Fig. 3.
MB-LBP computes the average pixel value of each region in the grid, compares the average of each surrounding region with the average of the central region, and encodes the comparison results in binary. Each MB-LBP value is therefore a number between 0 and 255:
MB-LBP = Σ_{i=0..7} s(g_i − g_c) · 2^i
where g_c is the mean of the central region, g_i is the mean of the i-th surrounding region, and s(x) = 1 if x ≥ 0 and 0 otherwise.
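The encoding can be sketched as follows for a 3×3 grid of equally sized blocks placed at (top, left). The clockwise neighbor ordering is an assumed convention, since the patent does not fix the bit order:

```python
def mb_lbp(img, top, left, bh, bw):
    """Compute the MB-LBP code of a 3x3 block grid placed at (top, left).

    img is a grayscale image as a list of rows; bh, bw are the block height
    and width. Each surrounding block's mean is compared with the central
    block's mean, and the comparisons are packed into an 8-bit code.
    """
    def block_mean(br, bc):
        r0, c0 = top + br * bh, left + bc * bw
        vals = [img[r][c] for r in range(r0, r0 + bh)
                          for c in range(c0, c0 + bw)]
        return sum(vals) / len(vals)

    center = block_mean(1, 1)
    # clockwise from the top-left block (ordering is an assumed convention)
    neighbors = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for i, (br, bc) in enumerate(neighbors):
        if block_mean(br, bc) >= center:   # s(g_i - g_c) = 1 when g_i >= g_c
            code |= 1 << i
    return code
```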
In one embodiment, the AdaBoost classifier uses a multi-branch tree as the weak classifier.
Since MB-LBP feature values are categorical rather than metric (their integer codes cannot be meaningfully ordered or thresholded), a multi-branch tree is used as the weak classifier. The multi-branch tree is defined as:
f(x) = a_j, if x^k = j (j = 0, …, 255)
where x^k is the MB-LBP feature value and a_j (j = 0, …, 255) are the classifier parameters. The classifier parameters are obtained by the following formula (δ(·) equals 1 when its condition holds and 0 otherwise):
a_j = Σ_i w_i y_i δ(x_i^k = j) / Σ_i w_i δ(x_i^k = j)
where w_i is the weight of sample i, x_i^k is the MB-LBP feature value of sample i, and y_i is the label of sample i.
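The parameter formula amounts to the weighted mean label of all samples whose code falls into branch j (the weighted least-squares minimizer for that branch); a minimal sketch:

```python
def fit_multibranch_tree(codes, labels, weights):
    """Fit multi-branch tree parameters a_j for MB-LBP codes 0..255.

    a_j is the weighted mean label of the samples whose code equals j;
    branches that receive no samples default to 0.
    """
    num = [0.0] * 256   # per-code sum of w_i * y_i
    den = [0.0] * 256   # per-code sum of w_i
    for x, y, w in zip(codes, labels, weights):
        num[x] += w * y
        den[x] += w
    return [num[j] / den[j] if den[j] > 0 else 0.0 for j in range(256)]
```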
An image recognition apparatus, as shown in Fig. 4, includes:
S10: a detection module, configured to detect a target object from a received image and to determine from the image a target image corresponding to the target object;
S20: an evaluation module, configured to perform quality assessment on the target image;
S30: an identification module, configured to perform identification processing on the target image to determine a target identification result.
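The three modules compose into a simple pipeline. A hypothetical sketch in which the detector, evaluator and identifier are injected as callables (names and interfaces are illustrative, not from the patent):

```python
class ImageRecognizer:
    """Pipeline mirroring the detection / evaluation / identification modules."""

    def __init__(self, detect, evaluate, identify, threshold):
        self.detect = detect        # S10: image -> target image, or None
        self.evaluate = evaluate    # S20: target image -> quality score
        self.identify = identify    # S30: target image -> identification result
        self.threshold = threshold  # minimum acceptable quality score

    def recognize(self, image):
        """Return the identification result, or None if no usable target."""
        target = self.detect(image)
        if target is None:
            return None                        # no target object in the image
        if self.evaluate(target) < self.threshold:
            return None                        # quality too low; skip this frame
        return self.identify(target)
```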
Those skilled in the art will appreciate that each block of these structural diagrams and/or block diagrams and/or flow diagrams, and combinations of blocks therein, can be implemented by computer program instructions. Those skilled in the art will appreciate that these computer program instructions can be supplied to a general-purpose computer, a special-purpose computer or another programmable data processing method for implementation, so that the schemes specified in one or more blocks of the structural diagrams and/or block diagrams and/or flow diagrams disclosed by the present invention are executed by the processor of the computer or other programmable data processing method.
Those skilled in the art will appreciate that the steps, measures and schemes in the various operations, methods and flows discussed in the present invention may be alternated, changed, combined or deleted. Further, other steps, measures and schemes in the various operations, methods and flows discussed in the present invention may also be alternated, changed, rearranged, decomposed, combined or deleted. Further, steps, measures and schemes in the prior art that have operations, methods and flows disclosed in the present invention may also be alternated, changed, rearranged, decomposed, combined or deleted.
The above are only some embodiments of the present invention. It should be noted that those of ordinary skill in the art may make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (15)
1. An image recognition method, characterized by comprising:
detecting a target object from a received image, and determining from the image a target image corresponding to the target object;
performing quality assessment on the target image;
performing identification processing on the target image to determine a target identification result.
2. The image recognition method according to claim 1, characterized in that the step of performing quality assessment on the target image comprises: determining an evaluation value of the target image, and judging whether the evaluation value meets a predetermined threshold.
3. The image recognition method according to claim 2, characterized in that the step of performing quality assessment on the target image to determine the corresponding evaluation value of the target image and judge whether the evaluation value meets the predetermined threshold further comprises:
when the evaluation value of the target image is less than the predetermined threshold, exiting the identification of the target object for the current frame image and proceeding to target object identification for the next frame image;
when the evaluation value of the target image is not less than the predetermined threshold, performing identification processing on the target image to determine the target identification result.
4. The image recognition method according to claim 1, 2 or 3, characterized in that the step of detecting a target object from the received image and determining from the image a target image corresponding to the target object further comprises:
receiving an image, detecting a target object from the image, and obtaining the target region where the target object is located in the image;
obtaining the target region image according to the target region, and extracting at least one piece of target feature information from the target region image;
pre-processing the target region image according to the target feature information to obtain a normalized target image.
5. The image recognition method according to claim 4, characterized in that the step of receiving an image, detecting a target object from the image and obtaining the target region where the target object is located in the image further comprises:
performing target object detection on the image to determine that a target object is present in the image;
performing specified target feature detection on the image to determine the target region in the image.
6. The image recognition method according to claim 4, characterized in that the step of obtaining from the image the target region where the target object is located comprises:
obtaining the target region where the target object is located in the image with a first target object model.
7. The image recognition method according to claim 1, characterized in that the step of performing quality assessment on the target image comprises:
obtaining the evaluation value of the target image with a second target object model.
8. The image recognition method according to claim 6 or 7, characterized in that the first target object model and the second target object model are the same model, namely the same AdaBoost (adaptive boosting) classifier used for detection.
9. The image recognition method according to claim 4, characterized in that the target feature information comprises the relative positions of the target features within one or any number of target regions and/or the area each feature occupies within the target region.
10. The image recognition method according to claim 4, characterized in that the step of pre-processing the target region image according to the target feature information to obtain the normalized target image comprises:
calculating the target region image according to the target feature information to obtain an initially processed image;
performing size normalization on the initially processed image to obtain the size-normalized target image;
performing illumination normalization on the size-normalized target image to obtain the normalized target image.
11. The image recognition method according to claim 10, characterized in that the step of performing size normalization on the initially processed image to obtain the size-normalized target image further comprises:
performing target feature information calibration on the initially processed image, the target feature information calibration comprising calibration of target feature size and/or calibration of angle.
12. The image recognition method according to claim 1, characterized in that the target image pixel value is a preset pixel value.
13. The image recognition method according to claim 8, characterized in that the AdaBoost detects and extracts the target feature information in the image and in the target image using MB-LBP, a region-based local binary pattern.
14. The image recognition method according to claim 8, characterized in that the AdaBoost classifier uses a multi-branch tree as the weak classifier.
15. An image recognition apparatus, characterized by comprising:
a detection module, configured to detect a target object from a received image and to determine from the image a target image corresponding to the target object;
an evaluation module, configured to perform quality assessment on the target image;
an identification module, configured to perform identification processing on the target image to determine a target identification result.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710188186.1A CN108664840A (en) | 2017-03-27 | 2017-03-27 | Image-recognizing method and device |
KR1020180000698A KR102641115B1 (en) | 2017-03-27 | 2018-01-03 | A method and apparatus of image processing for object detection |
US15/926,161 US10977509B2 (en) | 2017-03-27 | 2018-03-20 | Image processing method and apparatus for object detection |
US17/227,704 US11908117B2 (en) | 2017-03-27 | 2021-04-12 | Image processing method and apparatus for object detection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710188186.1A CN108664840A (en) | 2017-03-27 | 2017-03-27 | Image-recognizing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108664840A true CN108664840A (en) | 2018-10-16 |
Family
ID=63785471
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710188186.1A Pending CN108664840A (en) | 2017-03-27 | 2017-03-27 | Image-recognizing method and device |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102641115B1 (en) |
CN (1) | CN108664840A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109635142A (en) * | 2018-11-15 | 2019-04-16 | 北京市商汤科技开发有限公司 | Image-selecting method and device, electronic equipment and storage medium |
CN109784274A (en) * | 2018-12-29 | 2019-05-21 | 杭州励飞软件技术有限公司 | Identify the method trailed and Related product |
CN110189300A (en) * | 2019-04-22 | 2019-08-30 | 中国科学院微电子研究所 | Detection method, detection device, storage medium and the processor of pass structure processing quality |
CN110473181A (en) * | 2019-07-31 | 2019-11-19 | 天津大学 | Screen content image based on edge feature information without ginseng quality evaluating method |
CN110837821A (en) * | 2019-12-05 | 2020-02-25 | 深圳市亚略特生物识别科技有限公司 | Identity recognition method, equipment and electronic system based on biological characteristics |
CN111340140A (en) * | 2020-03-30 | 2020-06-26 | 北京金山云网络技术有限公司 | Image data set acquisition method and device, electronic equipment and storage medium |
CN111339363A (en) * | 2020-02-28 | 2020-06-26 | 钱秀华 | Image recognition method and device and server |
CN111462069A (en) * | 2020-03-30 | 2020-07-28 | 北京金山云网络技术有限公司 | Target object detection model training method and device, electronic equipment and storage medium |
CN112101448A (en) * | 2020-09-10 | 2020-12-18 | 敬科(深圳)机器人科技有限公司 | Screen image recognition method, device and system and readable storage medium |
CN112308055A (en) * | 2020-12-30 | 2021-02-02 | 北京沃东天骏信息技术有限公司 | Evaluation method and device of face retrieval system, electronic equipment and storage medium |
CN112613492A (en) * | 2021-01-08 | 2021-04-06 | 哈尔滨师范大学 | Data processing method and device |
CN112967467A (en) * | 2021-02-24 | 2021-06-15 | 九江学院 | Cultural relic anti-theft method, system, mobile terminal and storage medium |
CN113933294A (en) * | 2021-11-08 | 2022-01-14 | 中国联合网络通信集团有限公司 | Concentration detection method and device |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102386150B1 (en) * | 2020-03-25 | 2022-04-12 | 경일대학교산학협력단 | Monitoring device and monitoring method |
KR102246471B1 (en) | 2020-05-13 | 2021-04-30 | 주식회사 파이리코 | Apparatus for detecting nose of animal in image and method thereof |
KR102243466B1 (en) | 2020-05-13 | 2021-04-22 | 주식회사 파이리코 | Method and apparatus for detecting eye of animal in image |
KR102649806B1 (en) * | 2021-12-01 | 2024-03-21 | 주식회사 포딕스시스템 | Object Image standardization apparatus and method thereof |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101661557A (en) * | 2009-09-22 | 2010-03-03 | 中国科学院上海应用物理研究所 | Face recognition system and face recognition method based on intelligent card |
US20100086182A1 (en) * | 2008-10-07 | 2010-04-08 | Hui Luo | Diagnostic image processing with automatic self image quality validation |
CN103810466A (en) * | 2012-11-01 | 2014-05-21 | 三星电子株式会社 | Apparatus and method for face recognition |
CN104978550A (en) * | 2014-04-08 | 2015-10-14 | 上海骏聿数码科技有限公司 | Face recognition method and system based on large-scale face database |
CN105139003A (en) * | 2015-09-17 | 2015-12-09 | 桂林远望智能通信科技有限公司 | Dynamic face identification system and method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012109712A1 (en) * | 2011-02-18 | 2012-08-23 | National Ict Australia Limited | Image quality assessment |
KR101621157B1 (en) * | 2014-08-20 | 2016-05-13 | 세종대학교산학협력단 | Apparatus for recongnizing face using mct and method thereof |
2017
- 2017-03-27 CN CN201710188186.1A patent/CN108664840A/en active Pending
2018
- 2018-01-03 KR KR1020180000698A patent/KR102641115B1/en active IP Right Grant
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100086182A1 (en) * | 2008-10-07 | 2010-04-08 | Hui Luo | Diagnostic image processing with automatic self image quality validation |
CN101661557A (en) * | 2009-09-22 | 2010-03-03 | 中国科学院上海应用物理研究所 | Face recognition system and face recognition method based on intelligent card |
CN103810466A (en) * | 2012-11-01 | 2014-05-21 | 三星电子株式会社 | Apparatus and method for face recognition |
CN104978550A (en) * | 2014-04-08 | 2015-10-14 | 上海骏聿数码科技有限公司 | Face recognition method and system based on large-scale face database |
CN105139003A (en) * | 2015-09-17 | 2015-12-09 | 桂林远望智能通信科技有限公司 | Dynamic face identification system and method |
Non-Patent Citations (4)
Title |
---|
Editorial Department of the Encyclopedia of China Publishing House; Beihang University Press *
Kong Xuedong et al.: "Proceedings of the 14th Youth Academic Annual Conference of the Chinese Institute of Electronics" *
Li Xiaodong: "Research on Face Recognition Algorithms Based on Subspace and Manifold Learning" *
Zhao Shouxiang et al.: "Big Data Analysis and Application" *
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109635142B (en) * | 2018-11-15 | 2022-05-03 | 北京市商汤科技开发有限公司 | Image selection method and device, electronic equipment and storage medium |
CN109635142A (en) * | 2018-11-15 | 2019-04-16 | 北京市商汤科技开发有限公司 | Image-selecting method and device, electronic equipment and storage medium |
CN109784274A (en) * | 2018-12-29 | 2019-05-21 | 杭州励飞软件技术有限公司 | Identify the method trailed and Related product |
CN110189300A (en) * | 2019-04-22 | 2019-08-30 | 中国科学院微电子研究所 | Detection method, detection device, storage medium and the processor of pass structure processing quality |
CN110189300B (en) * | 2019-04-22 | 2021-03-09 | 中国科学院微电子研究所 | Detection method and detection device for process quality of hole-shaped structure, storage medium and processor |
CN110473181A (en) * | 2019-07-31 | 2019-11-19 | 天津大学 | Screen content image based on edge feature information without ginseng quality evaluating method |
CN110837821A (en) * | 2019-12-05 | 2020-02-25 | 深圳市亚略特生物识别科技有限公司 | Identity recognition method, equipment and electronic system based on biological characteristics |
CN111339363A (en) * | 2020-02-28 | 2020-06-26 | 钱秀华 | Image recognition method and device and server |
CN111340140A (en) * | 2020-03-30 | 2020-06-26 | 北京金山云网络技术有限公司 | Image data set acquisition method and device, electronic equipment and storage medium |
CN111462069A (en) * | 2020-03-30 | 2020-07-28 | 北京金山云网络技术有限公司 | Target object detection model training method and device, electronic equipment and storage medium |
CN111462069B (en) * | 2020-03-30 | 2023-09-01 | 北京金山云网络技术有限公司 | Training method and device for target object detection model, electronic equipment and storage medium |
CN112101448A (en) * | 2020-09-10 | 2020-12-18 | 敬科(深圳)机器人科技有限公司 | Screen image recognition method, device and system and readable storage medium |
CN112308055A (en) * | 2020-12-30 | 2021-02-02 | 北京沃东天骏信息技术有限公司 | Evaluation method and device of face retrieval system, electronic equipment and storage medium |
CN112613492B (en) * | 2021-01-08 | 2022-02-11 | 哈尔滨师范大学 | Data processing method and device |
CN112613492A (en) * | 2021-01-08 | 2021-04-06 | 哈尔滨师范大学 | Data processing method and device |
CN112967467A (en) * | 2021-02-24 | 2021-06-15 | 九江学院 | Cultural relic anti-theft method, system, mobile terminal and storage medium |
CN113933294A (en) * | 2021-11-08 | 2022-01-14 | 中国联合网络通信集团有限公司 | Concentration detection method and device |
CN113933294B (en) * | 2021-11-08 | 2023-07-18 | 中国联合网络通信集团有限公司 | Concentration detection method and device |
Also Published As
Publication number | Publication date |
---|---|
KR102641115B1 (en) | 2024-02-27 |
KR20180109665A (en) | 2018-10-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108664840A (en) | Image-recognizing method and device | |
CN112380952B (en) | Power equipment infrared image real-time detection and identification method based on artificial intelligence | |
EP2905722B1 (en) | Method and apparatus for detecting salient region of image | |
CN101142584B (en) | Method for facial features detection | |
CN108734283B (en) | Neural network system | |
KR100772506B1 (en) | Method for classification of geological materials using image processing and apparatus thereof | |
CN110781836A (en) | Human body recognition method and device, computer equipment and storage medium | |
CN111507426B (en) | Non-reference image quality grading evaluation method and device based on visual fusion characteristics | |
CN103488974A (en) | Facial expression recognition method and system based on simulated biological vision neural network | |
US10803116B2 (en) | Logo detection system for automatic image search engines | |
CN111126366B (en) | Method, device, equipment and storage medium for distinguishing living human face | |
CN103034838A (en) | Special vehicle instrument type identification and calibration method based on image characteristics | |
CN109711322A (en) | A kind of people's vehicle separation method based on RFCN | |
CN109389105B (en) | Multitask-based iris detection and visual angle classification method | |
CN110059607B (en) | Living body multiplex detection method, living body multiplex detection device, computer equipment and storage medium | |
CN112307937A (en) | Deep learning-based identity card quality inspection method and system | |
Arafah et al. | Face recognition system using Viola Jones, histograms of oriented gradients and multi-class support vector machine | |
CN106682604B (en) | Blurred image detection method based on deep learning | |
CN117197700A (en) | Intelligent unmanned inspection contact net defect identification system | |
CN113052234A (en) | Jade classification method based on image features and deep learning technology | |
CN106169086B (en) | High-resolution optical image under navigation data auxiliary damages method for extracting roads | |
CN110210314B (en) | Face detection method, device, computer equipment and storage medium | |
CN112001336A (en) | Pedestrian boundary crossing alarm method, device, equipment and system | |
Anagnostopoulos et al. | Using sliding concentric windows for license plate segmentation and processing | |
CN108288041B (en) | Preprocessing method for removing false detection of pedestrian target |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||