CN109117773A - Image feature point detection method, terminal device and storage medium - Google Patents

Image feature point detection method, terminal device and storage medium

Info

Publication number
CN109117773A
CN109117773A (application CN201810865350.2A)
Authority
CN
China
Prior art keywords
point
initial
image
current class
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810865350.2A
Other languages
Chinese (zh)
Other versions
CN109117773B (en)
Inventor
张弓 (Zhang Gong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810865350.2A (patent CN109117773B)
Publication of CN109117773A
Priority to PCT/CN2019/093685 (WO2020024744A1)
Application granted
Publication of CN109117773B
Active legal status
Anticipated expiration legal status


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Abstract

The present application is applicable to the field of image recognition, and provides an image feature point detection method, a terminal device and a computer-readable storage medium. The method includes: obtaining initial images of natural scenes of multiple categories; for the natural scene of each category, extracting initial feature points from the initial images; according to the correspondence of the initial feature points across the initial images of the current natural scene, taking the initial feature points that meet a preset condition as the target feature points of the current category; taking the initial images containing the target feature points as the training images of the current category; training a constructed deep neural network with the training images in the training sample set, to obtain a trained deep neural network; and detecting an image to be detected based on the trained deep neural network, to obtain the feature points in the image to be detected. The present application can improve the detection accuracy of feature points in scene detection.

Description

Image feature point detection method, terminal device and storage medium
Technical field
The present application belongs to the field of image recognition, and in particular relates to an image feature point detection method, a terminal device and a computer-readable storage medium.
Background technique
With the continuous development of computer vision and the rising demands of users, many image processing techniques have emerged. When applying various kinds of processing to an image, it is sometimes necessary to identify the scene of the image in order to obtain a good processing result.
At present, detection and identification of an image scene is mostly done by computing a certain response value of image pixels one by one in the image scale space, and taking local extrema over pixel position and scale as the feature point detection result. However, the detection accuracy of this way of detecting image feature points is low.
Summary of the invention
In view of this, embodiments of the present application provide an image feature point detection method, a terminal device and a computer-readable storage medium, to solve the problem that the detection accuracy of current scene feature point detection is low.
A first aspect of the embodiments of the present application provides an image feature point detection method, comprising:
obtaining initial images of natural scenes of multiple categories, wherein the natural scene of each category includes multiple initial images;
for the natural scene of each category, extracting initial feature points from the initial images of the current category;
obtaining the correspondence of the initial feature points across the initial images of the current category;
based on the correspondence, obtaining, from the initial feature points of the initial images of the current category, the initial feature points that meet a preset condition as the target feature points of the current category;
taking, among the initial images of each category, the initial images that contain the target feature points of the current category as training images, to obtain a training sample set for the natural scenes of the multiple categories;
training a constructed deep neural network with the training images in the training sample set, to obtain a trained deep neural network;
detecting an image to be detected based on the trained deep neural network, to obtain the feature points in the image to be detected.
A second aspect of the embodiments of the present application provides a terminal device, comprising:
an initial image acquisition module, for obtaining initial images of natural scenes of multiple categories, wherein the natural scene of each category includes multiple initial images;
an initial feature point acquisition module, for extracting, for the natural scene of each category, initial feature points from the initial images of the current category;
a correspondence acquisition module, for obtaining the correspondence of the initial feature points across the initial images of the current category;
a target feature point acquisition module, for obtaining, based on the correspondence and from the initial feature points of the initial images of the current category, the initial feature points that meet a preset condition as the target feature points of the current category;
a training image acquisition module, for taking, among the initial images of each category, the initial images that contain the target feature points of the current category as training images, to obtain a training sample set for the natural scenes of the multiple categories;
a training module, for training a constructed deep neural network with the training images in the training sample set, to obtain a trained deep neural network;
a detection module, for detecting an image to be detected based on the trained deep neural network, to obtain the feature points in the image to be detected.
A third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method provided by the first aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, the computer-readable storage medium storing a computer program which, when executed by one or more processors, implements the steps of the method provided by the first aspect of the embodiments of the present application.
A fifth aspect of the embodiments of the present application provides a computer program product, the computer program product including a computer program which, when executed by one or more processors, implements the steps of the method provided by the first aspect of the embodiments of the present application.
The embodiments of the present application provide a method for detecting feature points when detecting the scene of an image. First, initial images of natural scenes of multiple categories are obtained, and initial feature points are extracted for the natural scene of each category; then the correspondence of the acquired initial feature points across different initial images is obtained, and according to the correspondence, the target feature points that can characterize the current natural scene are screened out of the initial images; the initial images including the target feature points are used as training images to train a constructed deep neural network model, so that the trained deep neural network model is equipped with the ability to detect scene feature points of an image. Since, in the embodiments of the present application, the training images used to train the deep neural network model are the images containing the target feature points screened out of the initial feature points as those able to characterize the natural scene of each category, the detection accuracy of feature points in scene detection can be improved.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by a person of ordinary skill in the art without creative effort.
Fig. 1 is a schematic flowchart of an image feature point detection method provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of another image feature point detection method provided by an embodiment of the present application;
Fig. 3 is a schematic block diagram of a terminal device provided by an embodiment of the present application;
Fig. 4 is a schematic block diagram of another terminal device provided by an embodiment of the present application.
Specific embodiment
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits and methods are omitted, so as not to obscure the description of the present application with unnecessary details.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, wholes, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or collections thereof.
It should also be understood that the terms used in this specification are merely for the purpose of describing specific embodiments and are not intended to limit the present application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to any combination and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when" or "once" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, to mean "once it is determined" or "in response to determining" or "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
To illustrate the technical solutions described herein, the application scenario of the embodiments of the present application is first introduced. The present application can be applied to scene detection for images. For example, scene categories such as land, stream, cloud, after-rain and snow mountain can be set by default; of course, in practical applications, natural scenes may also be classified in other ways, which is not restricted here. Detecting the scene of an image means detecting the category of the natural scene in the image. Detection of the scene category relies on the detection of feature points. However, unlike face detection, where facial parts with obvious characteristics can serve as feature points and face detection can be realized by obtaining feature points based on specific facial parts, in scene detection it is difficult to manually calibrate feature points with obviously specific characteristics in the training images. Therefore, a certain response of image pixels is usually computed one by one in the image scale space, and local extrema are sought in the three-dimensional space jointly formed by pixel position and scale to obtain the feature point detection result. This kind of feature point detection is inaccurate, and the detected points may fail to represent the features of the scene. The embodiments of the present application first obtain target feature points that can represent the features of a scene, then calibrate the target feature points in the images, train a deep neural network with the images in which the target feature points have been calibrated, and detect the scene feature points of an image with the trained deep neural network, thereby obtaining the scene of the image. Specific embodiments are described below.
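The conventional approach criticized above, computing a response value over the scale space and taking local extrema over pixel position and scale, can be sketched as follows. This is a minimal illustration on a toy response volume, not the method of the present application; the 26-neighborhood test and the strict-maximum rule are common conventions assumed here.

```python
import numpy as np

def local_extrema_3d(resp, thresh=0.0):
    """Find strict local maxima over the 26-neighborhood of a
    (scale, y, x) response volume, mimicking classic scale-space
    feature point detection."""
    pts = []
    S, H, W = resp.shape
    for s in range(1, S - 1):
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                v = resp[s, y, x]
                if v <= thresh:
                    continue
                nb = resp[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                # v must be the unique maximum of its 3x3x3 neighborhood
                if v >= nb.max() and (nb == v).sum() == 1:
                    pts.append((s, y, x))
    return pts

# Toy volume with a single peak at (scale=1, y=2, x=3)
vol = np.zeros((3, 5, 7))
vol[1, 2, 3] = 1.0
print(local_extrema_3d(vol))  # [(1, 2, 3)]
```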
Fig. 1 is a schematic flowchart of an image feature point detection method provided by an embodiment of the present application. As shown in the figure, the method may include the following steps:
Step S101: obtain initial images of natural scenes of multiple categories, wherein the natural scene of each category includes multiple initial images.
In the embodiments of the present application, in order for the trained deep neural network to be able to identify natural scenes of multiple categories, the acquired training images need to include images corresponding to the natural scenes of multiple categories. In practical applications, the deep neural network could also be trained with images of a natural scene of a single category; in that case, when performing scene detection, the trained deep neural network is only able to detect feature points of images of that single category of natural scene.
If the trained deep neural network is required to perform feature point detection for scenes of multiple categories, initial images of natural scenes of multiple categories can be obtained. For example, if 5 natural scenes are set, a large number of initial images corresponding to each natural scene need to be collected.
Step S102: for the natural scene of each category, extract initial feature points from the initial images of the current category.
In the embodiments of the present application, methods of extracting initial feature points from the initial images include but are not limited to Harris, SUSAN, SIFT, SURF, FAST, MSER, and so on. Taking Harris corners as an example, the image is first divided into M × M blocks, the Harris corner response is computed for each block, and the N points with the largest corner response in each block are extracted as feature points, so that at most M × M × N feature points are extracted from one image. It can be understood that, in practical applications, other methods of extracting image feature points may also be used.
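The block-wise Harris extraction described above can be sketched as follows. The 3 × 3 box smoothing, the value of k, and the bright-square test image are assumptions for illustration, not details specified by the present application.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Plain-NumPy Harris corner response: det(M) - k * trace(M)^2,
    with the structure tensor M smoothed by a crude 3x3 box filter."""
    Iy, Ix = np.gradient(img.astype(float))
    def box(a):
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    tr = Sxx + Syy
    return det - k * tr * tr

def top_points_per_block(resp, m=4, n=2):
    """Split the response map into an m-by-m grid and keep the n strongest
    responses in each block, so at most m*m*n feature points per image."""
    H, W = resp.shape
    pts = []
    for bi in range(m):
        for bj in range(m):
            y0, y1 = bi * H // m, (bi + 1) * H // m
            x0, x1 = bj * W // m, (bj + 1) * W // m
            block = resp[y0:y1, x0:x1]
            idx = np.argsort(block, axis=None)[-n:]   # top-n flat indices
            ys, xs = np.unravel_index(idx, block.shape)
            pts += [(y0 + int(y), x0 + int(x)) for y, x in zip(ys, xs)]
    return pts

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                 # a bright square: its corners respond
pts = top_points_per_block(harris_response(img), m=4, n=2)
print(len(pts))  # 32  (4*4 blocks, 2 points each)
```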
Step S103: obtain the correspondence of the initial feature points across the initial images of the current category.
In the embodiments of the present application, for the natural scene of a certain category, the initial feature points of the initial images of that scene may be partly identical and partly different. For example, initial feature points a1, a2, a3 and a4 are extracted from initial image A, and initial feature points b1, b2 and b3 are extracted from initial image B. Since the feature points are extracted from different initial images, initial feature point a1 and initial feature point b2 may be feature points of the same type; that is, initial feature point a1 in initial image A corresponds to initial feature point b2 in initial image B, and a1 and b2 can thus be labeled as initial feature points of the same type. When judging the correspondence of initial feature points across different initial images, the judgment can be made according to the feature information of the initial feature points.
Step S104: based on the correspondence, obtain, from the initial feature points of the initial images of the current category, the initial feature points that meet a preset condition as the target feature points of the current category.
In the embodiments of the present application, after the correspondence of the initial feature points has been determined, it can be determined how many types of initial feature points in total are contained in the initial images of the current natural scene.
For a more intuitive understanding, take face detection as an example. Suppose that initial feature point a1 (left eye), initial feature point a2 (right eye), initial feature point a3 (nose) and initial feature point a4 (a point of the face) are extracted from initial image A of the current natural scene, and initial feature point b1 (nose), initial feature point b2 (left eye) and initial feature point b3 (right eye) are extracted from initial image B. Then the types of initial feature points of the current natural scene are not a1, a2, a3, a4, b1, b2, b3; the types of initial feature points of the current natural scene should be: left eye, right eye, nose, and a point of the face. This is because initial feature point a1 corresponds to initial feature point b2, both representing the left eye; initial feature point a2 corresponds to initial feature point b3, both representing the right eye; initial feature point a3 corresponds to initial feature point b1, both representing the nose; and initial feature point a4 represents a point of the face. Unlike face images, feature points in natural scene images do not have obvious characteristics, so if the correspondence is not determined, the problem of representing the same scene feature point with different initial feature points will arise.
After it has been determined how many types of initial feature points in total are contained in the initial images of the current natural scene, the initial feature points that meet a preset condition can be chosen from the current initial feature points as the target feature points. For example, the initial feature points that appear with a higher frequency across different initial images can be chosen as target feature points, or the initial feature points that satisfy a preset feature can be taken as target feature points. The process of obtaining target feature points from the initial feature points is, in effect, obtaining as target feature points the initial feature points that can represent the features of the current natural scene. For example, it is also possible to choose, from the initial feature points of the current scene, the initial feature points whose difference from the initial feature points of other natural scenes is greater than a threshold as the target feature points of the current scene. Of course, in practical applications, other preset conditions can also be set to obtain the target feature points.
Step S105: take, among the initial images of each category, the initial images that contain the target feature points of the current category as training images, to obtain a training sample set for the natural scenes of the multiple categories.
In the embodiments of the present application, the acquired target feature points are the feature points that can represent the current natural scene, so the target feature points can be marked in the initial images that contain them, and the initial images in which the target feature points have been marked are used as training images. The natural scene of each category goes through this process of choosing target feature points from the initial feature points; in this way, the training images corresponding to each natural scene are obtained, and thus the training sample set for the natural scenes of the multiple categories is obtained.
Step S106: train the constructed deep neural network with the training images in the training sample set, to obtain the trained deep neural network.
In the embodiments of the present application, the deep neural network may be a VGG neural network model. The process of training the deep neural network with the training images in which the target feature points have been calibrated may be: input a training image into the deep neural network to obtain an output image; construct a loss function according to the difference between the feature points detected in the output image and the target feature points; based on the loss function, update the parameters of each layer in the deep neural network by back-propagation, until the feature points detected by the deep neural network approach the calibrated target feature points, i.e. the deep neural network converges, at which point the trained deep neural network is obtained. Of course, in practice, other training methods may also be used.
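As a hedged illustration of the construct-loss-and-update loop described above, the following replaces the VGG-style network with a single linear layer that regresses K calibrated target feature point coordinates under a mean-squared loss. Only the loop structure matches the description; the toy dimensions, learning rate, and random data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the deep network: one linear layer mapping a flattened
# image to K feature-point coordinates (x, y). The application names a
# VGG-style CNN; this only illustrates the loss and parameter update.
D, K = 64, 4                          # input dim, number of target points
W = rng.normal(0, 0.01, (2 * K, D))   # the layer "parameters"

def forward(x):                       # predicted (x, y) for each point
    return (W @ x).reshape(K, 2)

def mse_loss(pred, target):           # loss from pred/target differences
    return float(np.mean((pred - target) ** 2))

x = rng.normal(size=D)                # one flattened training "image"
target = rng.normal(size=(K, 2))      # calibrated target feature points

lr = 0.01
for _ in range(200):                  # gradient steps against the loss
    pred = forward(x)
    grad_pred = 2 * (pred - target) / pred.size
    W -= lr * grad_pred.reshape(2 * K, 1) @ x.reshape(1, D)

print(mse_loss(forward(x), target) < 1e-3)  # True: predictions converge
```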
As another embodiment of the present application, before training the constructed deep neural network with the training images in the training sample set to obtain the trained deep neural network, the method further includes:
for each training image, calibrating the natural scene and the target feature points of the training image.
In the embodiments of the present application, not only can the target feature points be calibrated for the training images, but the natural scene corresponding to each training image can also be calibrated. In this way, a classifier can be added at the end when the deep neural network is set up, for classifying the natural scene of the image according to the detected feature points. Accordingly, the deep neural network with the added classifier can obtain the natural scene of the image to be detected while detecting its feature points.
Step S107: detect the image to be detected based on the trained deep neural network, to obtain the feature points in the image to be detected.
In the embodiments of the present application, the trained deep neural network is equipped with the ability to detect feature points that closely approach the target feature points. Therefore, after the image to be detected is input into the trained deep neural network, the feature points that can characterize the scene of the image to be detected can be obtained.
Since, in the embodiments of the present application, the training images used to train the deep neural network model are the images containing the target feature points screened out of the initial feature points as those able to characterize the natural scene of each category, the detection accuracy of feature points in scene detection can be improved.
Fig. 2 is a schematic flowchart of another image feature point detection method provided by an embodiment of the present application. On the basis of the embodiment shown in Fig. 1, this embodiment describes the process of obtaining the target feature points, and may include the following steps:
Step S201: obtain initial images of natural scenes of multiple categories, wherein the natural scene of each category includes multiple initial images.
Step S202: for the natural scene of each category, extract initial feature points from the initial images of the current category.
The content of steps S201 to S202 is consistent with that of steps S101 to S102; for details, refer to the description of steps S101 to S102, which is not repeated here.
Step S203: obtain the three-dimensional model of the natural scene of the current category.
In the embodiments of the present application, the three-dimensional model of the natural scene may be pre-established, or may be established from the initial images of the current natural scene.
As another embodiment of the present application, obtaining the three-dimensional model of the natural scene of the current category includes:
based on an image reconstruction algorithm, establishing the three-dimensional model of the natural scene of the current category from the initial images of the current category.
In the embodiments of the present application, establishing the three-dimensional model of the natural scene of the current category based on the initial images may be establishing the three-dimensional model from an image sequence formed by the multiple initial images. First, the initial images are sorted according to the similarity between any two initial images, so that each initial image has the highest similarity with its two adjacent images. Then, starting from the head of the image sequence, for the adjacent first and second initial images, the SIFT features of each initial image are obtained and matched against each other to obtain a three-dimensional reconstruction of the first and second initial images; next, according to the SIFT feature matching between the second and third initial images, the three-dimensional reconstruction of the first and second initial images is corrected and expanded, to obtain a three-dimensional reconstruction over the first, second and third initial images; according to the SIFT feature matching between the third and fourth initial images, the three-dimensional reconstruction over the first three initial images is corrected and expanded, to obtain a three-dimensional reconstruction over the first four initial images; and so on, until the three-dimensional reconstruction result of all initial images under the current natural scene is obtained.
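The first step of this incremental reconstruction, sorting the initial images so that each image is most similar to its neighbors, can be sketched with a greedy nearest-neighbor ordering. Representing each image by a single descriptor vector and starting from image 0 are simplifying assumptions; any pairwise similarity measure would do.

```python
import numpy as np

def order_by_similarity(feats):
    """Greedily order images so that consecutive images are most similar:
    repeatedly append the remaining image closest to the last one.
    `feats` holds one descriptor vector per image (an assumption);
    similarity is taken as negative Euclidean distance."""
    n = len(feats)
    remaining = set(range(1, n))
    order = [0]                       # start arbitrarily from image 0
    while remaining:
        last = feats[order[-1]]
        nxt = min(remaining,
                  key=lambda i: np.linalg.norm(feats[i] - last))
        order.append(nxt)
        remaining.discard(nxt)
    return order

# Descriptors on a line: greedy ordering recovers the path 0 -> 1 -> 2 -> 3
feats = [np.array([float(v)]) for v in (0.0, 2.0, 1.0, 3.0)]
print(order_by_similarity(feats))  # [0, 2, 1, 3]
```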
It should be noted that the above process of performing three-dimensional reconstruction on multiple initial images to obtain a three-dimensional model is only an example; in practical applications, other three-dimensional reconstruction methods may also be used.
Step S204: based on the projection matrices of the initial images of the current category in the three-dimensional model, obtain the correspondence of the initial feature points across the initial images of the current category.
In the embodiments of the present application, taking one natural scene as an example, the initial images of the current natural scene can be mapped into the three-dimensional model to obtain the projection matrix of each initial image; this can also be understood as: imaging the three-dimensional model from one viewing angle yields one initial image. After the projection matrix of each initial image in the three-dimensional model is obtained, since the initial feature points are located in the initial images, the correspondence of the initial feature points across the initial images of the current category can be obtained according to the projection matrices of the initial images in the three-dimensional model. As described in the embodiment shown in Fig. 1, the process of obtaining the correspondence of the initial feature points across the initial images of the current category may be a process of matching the initial feature points, and the matching may be performed according to information such as the features and positions of the initial feature points.
As another embodiment of the present application, based on the projection matrices of the initial images of the current category in the three-dimensional model, obtaining the correspondence of the initial feature points across the initial images of the current category includes:
based on the projection matrices of the initial images of the current category in the three-dimensional model, obtaining the position of each initial feature point in the three-dimensional model;
based on the position of each initial feature point in the three-dimensional model, obtaining the correspondence of the initial feature points across the initial images of the current category.
In the embodiments of the present application, matching can be based on the positions of the initial feature points. For example, from the positions of the initial feature points in the initial images and the projection matrices of the initial images in the three-dimensional model, the positions of the initial feature points in the three-dimensional model can be obtained; based on the position of each initial feature point in the three-dimensional model, the correspondence of the initial feature points across the initial images of the current category is obtained.
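Position-based matching through the three-dimensional model can be sketched as follows: each model point is projected into two images with their 3 × 4 projection matrices, and the nearest initial feature point in each image (within a tolerance) is taken to be the same feature point, i.e. a correspondence. The toy cameras, the tolerance, and the nearest-neighbor rule are assumptions for illustration.

```python
import numpy as np

def project(P, X):
    """Project a 3-D point X with a 3x4 projection matrix P into pixel
    coordinates (homogeneous divide)."""
    u = P @ np.append(X, 1.0)
    return u[:2] / u[2]

def match_by_reprojection(P_a, P_b, model_points, pts_a, pts_b, tol=1.0):
    """For each 3-D model point, find the initial feature point in image A
    and in image B that it reprojects onto; each matched pair is the same
    feature point, i.e. they are in correspondence."""
    pairs = []
    for X in model_points:
        ua, ub = project(P_a, X), project(P_b, X)
        ia = min(range(len(pts_a)),
                 key=lambda i: np.linalg.norm(pts_a[i] - ua))
        ib = min(range(len(pts_b)),
                 key=lambda i: np.linalg.norm(pts_b[i] - ub))
        if (np.linalg.norm(pts_a[ia] - ua) < tol
                and np.linalg.norm(pts_b[ib] - ub) < tol):
            pairs.append((ia, ib))
    return pairs

# Two axis-aligned toy cameras observing one model point
P_a = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera A
P_b = np.hstack([np.eye(3), np.array([[1.], [0.], [0.]])])  # camera B, shifted
X = np.array([0.0, 0.0, 2.0])
pts_a = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
pts_b = [np.array([5.0, 5.0]), np.array([0.5, 0.0])]
print(match_by_reprojection(P_a, P_b, [X], pts_a, pts_b))  # [(0, 1)]
```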
Step S205: based on the correspondence, obtain the frequency with which each initial feature point appears in the initial images of the current category.
Step S206: take the initial feature points whose frequency meets a preset condition as the target feature points of the current category.
In the embodiments of the present application, the frequency with which an initial feature point appears in the initial images of the current category may be used as the condition for screening target feature points. In other words, if an initial feature point a1 appears in N initial images, the frequency of the initial feature point a1 is recorded as N, the number of initial images in which it appears.
As another embodiment of the present application, taking the initial feature points whose frequency meets the preset condition as the target feature points of the current category includes:
taking, among the initial feature points of the current category, the initial feature points whose frequency is greater than a preset frequency as the target feature points of the current category;
or, sorting the initial feature points of the current category by frequency, and selecting a preset number of initial feature points in order from high frequency to low frequency as the target feature points of the current category.
In the embodiments of the present application, the initial feature points that appear in more than a preset number of different initial images may be taken as the target feature points; alternatively, a preset number of initial feature points may be selected in order from high frequency to low frequency as the target feature points of the current category.
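The two screening rules can be sketched in a few lines of Python; the `correspondence` mapping (feature-point id to the set of initial images containing it) is an assumed data layout for illustration only:

```python
from collections import Counter

# Hypothetical correspondence: feature-point id -> initial images it appears in.
correspondence = {
    "a1": {"img1", "img2", "img3", "img4"},
    "a2": {"img1", "img3"},
    "a3": {"img2"},
}

# The frequency of a feature point is the number of initial images
# in which it appears (the N of the description above).
freq = Counter({fp: len(imgs) for fp, imgs in correspondence.items()})

# Rule 1: keep the feature points whose frequency exceeds a preset frequency.
preset_freq = 1
targets_by_threshold = {fp for fp, n in freq.items() if n > preset_freq}

# Rule 2: sort by frequency and keep a preset number, from high to low.
preset_count = 2
targets_by_rank = [fp for fp, _ in freq.most_common(preset_count)]
```

Either rule produces the target feature points of the current category; which one is used (and the preset frequency or preset number) is a design choice of the implementation.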
It should be understood that the serial numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
Fig. 3 is a schematic block diagram of a terminal device provided by an embodiment of the present application. For ease of description, only the parts relevant to the embodiments of the present application are shown.
The terminal device 3 may be a software unit, a hardware unit, or a unit combining software and hardware that is built into a terminal device such as a mobile phone, tablet computer, or notebook, or may be integrated into such a terminal device as an independent component.
The terminal device 3 includes:
an initial image acquisition module 31, configured to obtain initial images of natural scenes of multiple categories, wherein the natural scene of each category includes multiple initial images;
an initial feature point acquisition module 32, configured to, for the natural scene of each category, extract initial feature points from the initial images of the current category;
a correspondence acquisition module 33, configured to obtain the correspondence of the initial feature points across the initial images of the current category;
a target feature point acquisition module 34, configured to, based on the correspondence, obtain from the initial feature points of the initial images of the current category the initial feature points meeting a preset condition as the target feature points of the current category;
a training image acquisition module 35, configured to take the initial images that contain the target feature points of the current category, among the initial images of each category, as training images, to obtain a training sample set for the natural scenes of the multiple categories;
a training module 36, configured to train a constructed deep neural network with the training images in the training sample set, to obtain a trained deep neural network; and
a detection module 37, configured to detect an image to be detected based on the trained deep neural network, to obtain the feature points in the image to be detected.
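Taken together, modules 31 to 35 form a pipeline from per-category initial images to a training sample set. The following toy sketch shows only that control flow; the image ids, the set-valued "feature points", and the trivial stand-in functions are assumptions, and a real implementation would extract actual feature points (module 32) and train a deep neural network on the resulting set (modules 36 and 37):

```python
def build_training_set(categories, extract, correspond, select_targets):
    """Modules 31-35: from per-category initial images to a training set."""
    training_set = {}
    for category, images in categories.items():          # module 31
        points = {img: extract(img) for img in images}   # module 32
        corr = correspond(points)                        # module 33
        targets = select_targets(corr)                   # module 34
        # module 35: keep the images that contain a target feature point
        training_set[category] = [
            img for img in images if points[img] & targets
        ]
    return training_set

# Hypothetical stand-ins: images are ids, "feature points" are label sets.
categories = {"forest": ["f1", "f2", "f3"], "beach": ["b1", "b2"]}
fake_points = {"f1": {"p1"}, "f2": {"p1", "p2"}, "f3": {"p3"},
               "b1": {"q1"}, "b2": {"q1"}}
extract = fake_points.__getitem__
correspond = lambda pts: {p: {i for i, s in pts.items() if p in s}
                          for s in pts.values() for p in s}
select_targets = lambda corr: {p for p, imgs in corr.items() if len(imgs) > 1}

training_set = build_training_set(categories, extract, correspond, select_targets)
```

In this toy run, only the images containing a feature point seen in more than one initial image of the same category survive into the training sample set, mirroring the frequency-based screening described above.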
Optionally, the correspondence acquisition module 33 includes:
a three-dimensional model acquisition unit 331, configured to obtain a three-dimensional model of the natural scene of the current category; and
a correspondence acquisition unit 332, configured to obtain the correspondence of the initial feature points across the initial images of the current category based on the projection matrices of the initial images of the current category in the three-dimensional model.
Optionally, the three-dimensional model acquisition unit 331 is further configured to:
establish the three-dimensional model of the natural scene of the current category from the initial images of the current category based on an image reconstruction algorithm.
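The embodiment does not fix the image reconstruction algorithm; structure-from-motion is one typical choice (compare the cited SfM-based reconstruction literature). As a hedged, numpy-only sketch of its first step on synthetic data, the relative geometry between two initial images can be estimated from matched feature points with the classical eight-point algorithm; from the recovered essential matrix, camera poses and, via triangulation, the three-dimensional model can then be built up:

```python
import numpy as np

def essential_from_matches(x1, x2):
    """Eight-point estimate of the essential matrix from normalized
    image correspondences x1, x2 (Nx2 arrays, N >= 8)."""
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)        # null vector of A, reshaped row-major
    # Project onto the essential-matrix manifold: two equal singular
    # values, third singular value zero.
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

# Synthetic two-view setup with a known rotation R and translation t.
rng = np.random.default_rng(0)
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.2, 0.0])
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))  # scene points
x1 = X[:, :2] / X[:, 2:]                               # view 1 (identity pose)
Xc2 = (R @ X.T).T + t
x2 = Xc2[:, :2] / Xc2[:, 2:]                           # view 2

E = essential_from_matches(x1, x2)
```

With noise-free data the estimated `E` satisfies the epipolar constraint for every match; in practice the estimate is wrapped in RANSAC and followed by pose decomposition and triangulation to populate the three-dimensional model.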
Optionally, the correspondence acquisition unit 332 includes:
an initial feature point position acquisition subunit, configured to obtain the position of each initial feature point in the three-dimensional model based on the projection matrices of the initial images of the current category in the three-dimensional model; and
a correspondence acquisition subunit, configured to obtain the correspondence of the initial feature points across the initial images of the current category based on the position of each initial feature point in the three-dimensional model.
Optionally, the target feature point acquisition module 34 includes:
an initial feature point frequency acquisition unit 341, configured to obtain, based on the correspondence, the frequency with which each initial feature point appears in the initial images of the current category; and
a target feature point acquisition unit 342, configured to take the initial feature points whose frequency meets a preset condition as the target feature points of the current category.
Optionally, the target feature point acquisition unit 342 is further configured to:
take, among the initial feature points of the current category, the initial feature points whose frequency is greater than a preset frequency as the target feature points of the current category;
or, sort the initial feature points of the current category by frequency, and select a preset number of initial feature points in order from high frequency to low frequency as the target feature points of the current category.
Optionally, the terminal device 3 further includes:
a calibration module, configured to, before the constructed deep neural network is trained with the training images in the training sample set to obtain the trained deep neural network, calibrate, for each training image, the natural scene and the target feature points of the training image.
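The calibration amounts to attaching, to each training image, its natural-scene category and its target feature points. A minimal sketch of such a label record (the field names are assumptions, not the embodiment's data format):

```python
from dataclasses import dataclass, field

@dataclass
class TrainingLabel:
    """Calibration record for one training image."""
    image_id: str
    scene_category: str                 # natural-scene label of the image
    target_points: list = field(default_factory=list)  # (x, y) coordinates

label = TrainingLabel("img_001", "forest", [(12.0, 34.0), (56.0, 78.0)])
```

A training sample set is then simply a collection of such records, one per training image, consumed by the training module.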
It will be apparent to those skilled in the art that, for convenience and brevity of description, the division into the above functional units and modules is used only as an example. In practical applications, the above functions may be allocated to different functional units or modules as needed; that is, the internal structure of the terminal device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from one another and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above terminal device, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Fig. 4 is a schematic block diagram of a terminal device provided by another embodiment of the present application. As shown in Fig. 4, the terminal device 4 of this embodiment includes one or more processors 40, a memory 41, and a computer program 42 that is stored in the memory 41 and can be run on the processor 40. When executing the computer program 42, the processor 40 implements the steps of the above image feature point detection method embodiments, such as steps S101 to S107 shown in Fig. 1; alternatively, when executing the computer program 42, the processor 40 implements the functions of the modules/units in the above terminal device embodiments, such as the functions of modules 31 to 37 shown in Fig. 3.
Illustratively, the computer program 42 may be divided into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 42 in the terminal device 4. For example, the computer program 42 may be divided into an initial image acquisition module, an initial feature point acquisition module, a correspondence acquisition module, a target feature point acquisition module, a training image acquisition module, a training module, and a detection module:
an initial image acquisition module, configured to obtain initial images of natural scenes of multiple categories, wherein the natural scene of each category includes multiple initial images;
an initial feature point acquisition module, configured to, for the natural scene of each category, extract initial feature points from the initial images of the current category;
a correspondence acquisition module, configured to obtain the correspondence of the initial feature points across the initial images of the current category;
a target feature point acquisition module, configured to, based on the correspondence, obtain from the initial feature points of the initial images of the current category the initial feature points meeting a preset condition as the target feature points of the current category;
a training image acquisition module, configured to take the initial images that contain the target feature points of the current category, among the initial images of each category, as training images, to obtain a training sample set for the natural scenes of the multiple categories;
a training module, configured to train a constructed deep neural network with the training images in the training sample set, to obtain a trained deep neural network; and
a detection module, configured to detect an image to be detected based on the trained deep neural network, to obtain the feature points in the image to be detected.
For the other modules or units, reference may be made to the description of the embodiment shown in Fig. 3, which is not repeated here.
The terminal device includes, but is not limited to, the processor 40 and the memory 41. It will be understood by those skilled in the art that Fig. 4 is only an example of the terminal device 4 and does not constitute a limitation on the terminal device 4, which may include more or fewer components than shown, or combine certain components, or include different components; for example, the terminal device may further include an input device, an output device, a network access device, a bus, and the like.
The processor 40 may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may be an internal storage unit of the terminal device 4, such as a hard disk or memory of the terminal device 4. The memory 41 may also be an external storage device of the terminal device 4, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the terminal device 4. Further, the memory 41 may include both an internal storage unit and an external storage device of the terminal device 4. The memory 41 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
In the above embodiments, the descriptions of the embodiments each have their own emphasis. For parts not described or recorded in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal device and method may be implemented in other ways. For example, the terminal device embodiments described above are only schematic; the division of the modules or units is only a logical function division, and there may be other division manners in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Moreover, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated units may be implemented in the form of hardware or in the form of software functional units.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the processes in the methods of the above embodiments by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements for some of the technical features therein; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application, and should all be included within the protection scope of the present application.

Claims (10)

1. An image feature point detection method, characterized by comprising:
obtaining initial images of natural scenes of multiple categories, wherein the natural scene of each category includes multiple initial images;
for the natural scene of each category, extracting initial feature points from the initial images of the current category;
obtaining the correspondence of the initial feature points across the initial images of the current category;
based on the correspondence, obtaining, from the initial feature points of the initial images of the current category, the initial feature points meeting a preset condition as the target feature points of the current category;
taking the initial images that contain the target feature points of the current category, among the initial images of each category, as training images, to obtain a training sample set for the natural scenes of the multiple categories;
training a constructed deep neural network with the training images in the training sample set, to obtain a trained deep neural network; and
detecting an image to be detected based on the trained deep neural network, to obtain the feature points in the image to be detected.
2. The image feature point detection method of claim 1, characterized in that obtaining the correspondence of the initial feature points across the initial images of the current category comprises:
obtaining a three-dimensional model of the natural scene of the current category; and
obtaining the correspondence of the initial feature points across the initial images of the current category based on the projection matrices of the initial images of the current category in the three-dimensional model.
3. The image feature point detection method of claim 2, characterized in that obtaining the three-dimensional model of the natural scene of the current category comprises:
establishing the three-dimensional model of the natural scene of the current category from the initial images of the current category based on an image reconstruction algorithm.
4. The image feature point detection method of claim 2, characterized in that obtaining the correspondence of the initial feature points across the initial images of the current category based on the projection matrices of the initial images of the current category in the three-dimensional model comprises:
obtaining the position of each initial feature point in the three-dimensional model based on the projection matrices of the initial images of the current category in the three-dimensional model; and
obtaining the correspondence of the initial feature points across the initial images of the current category based on the position of each initial feature point in the three-dimensional model.
5. The image feature point detection method of claim 1, characterized in that, based on the correspondence, obtaining, from the initial feature points of the initial images of the current category, the initial feature points meeting the preset condition as the target feature points of the current category comprises:
obtaining, based on the correspondence, the frequency with which each initial feature point appears in the initial images of the current category; and
taking the initial feature points whose frequency meets the preset condition as the target feature points of the current category.
6. The image feature point detection method of claim 5, characterized in that taking the initial feature points whose frequency meets the preset condition as the target feature points of the current category comprises:
taking, among the initial feature points of the current category, the initial feature points whose frequency is greater than a preset frequency as the target feature points of the current category;
or, sorting the initial feature points of the current category by frequency, and selecting a preset number of initial feature points in order from high frequency to low frequency as the target feature points of the current category.
7. The image feature point detection method of claim 1, characterized in that, before training the constructed deep neural network with the training images in the training sample set to obtain the trained deep neural network, the method further comprises:
calibrating, for each training image, the natural scene and the target feature points of the training image.
8. A terminal device, characterized by comprising:
an initial image acquisition module, configured to obtain initial images of natural scenes of multiple categories, wherein the natural scene of each category includes multiple initial images;
an initial feature point acquisition module, configured to, for the natural scene of each category, extract initial feature points from the initial images of the current category;
a correspondence acquisition module, configured to obtain the correspondence of the initial feature points across the initial images of the current category;
a target feature point acquisition module, configured to, based on the correspondence, obtain from the initial feature points of the initial images of the current category the initial feature points meeting a preset condition as the target feature points of the current category;
a training image acquisition module, configured to take the initial images that contain the target feature points of the current category, among the initial images of each category, as training images, to obtain a training sample set for the natural scenes of the multiple categories;
a training module, configured to train a constructed deep neural network with the training images in the training sample set, to obtain a trained deep neural network; and
a detection module, configured to detect an image to be detected based on the trained deep neural network, to obtain the feature points in the image to be detected.
9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, and the computer program, when executed by one or more processors, implements the steps of the method of any one of claims 1 to 7.
CN201810865350.2A 2018-08-01 2018-08-01 Image feature point detection method, terminal device and storage medium Active CN109117773B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810865350.2A CN109117773B (en) 2018-08-01 2018-08-01 Image feature point detection method, terminal device and storage medium
PCT/CN2019/093685 WO2020024744A1 (en) 2018-08-01 2019-06-28 Image feature point detecting method, terminal device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810865350.2A CN109117773B (en) 2018-08-01 2018-08-01 Image feature point detection method, terminal device and storage medium

Publications (2)

Publication Number Publication Date
CN109117773A true CN109117773A (en) 2019-01-01
CN109117773B CN109117773B (en) 2021-11-02

Family

ID=64863925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810865350.2A Active CN109117773B (en) 2018-08-01 2018-08-01 Image feature point detection method, terminal device and storage medium

Country Status (2)

Country Link
CN (1) CN109117773B (en)
WO (1) WO2020024744A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110322399A * 2019-07-05 2019-10-11 深圳开立生物医疗科技股份有限公司 Ultrasound image adjustment method, system, device and computer storage medium
WO2020024744A1 (en) * 2018-08-01 2020-02-06 Oppo广东移动通信有限公司 Image feature point detecting method, terminal device, and storage medium
CN110942063A (en) * 2019-11-21 2020-03-31 望海康信(北京)科技股份公司 Certificate text information acquisition method and device and electronic equipment
CN113240031A (en) * 2021-05-25 2021-08-10 中德(珠海)人工智能研究院有限公司 Panoramic image feature point matching model training method and device and server

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470110A (en) * 2020-03-30 2021-10-01 北京四维图新科技股份有限公司 Distance measuring method and device
CN112907726B (en) * 2021-01-25 2022-09-20 重庆金山医疗技术研究院有限公司 Image processing method, device, equipment and computer readable storage medium
CN113361363B (en) * 2021-05-31 2024-02-06 北京百度网讯科技有限公司 Training method, device, equipment and storage medium for face image recognition model
CN114898354A (en) * 2022-03-24 2022-08-12 中德(珠海)人工智能研究院有限公司 Measuring method and device based on three-dimensional model, server and readable storage medium
CN115953567B (en) * 2023-03-14 2023-06-30 广州市玄武无线科技股份有限公司 Method and device for detecting quantity of stacked boxes, terminal equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101859326A (en) * 2010-06-09 2010-10-13 南京大学 Image searching method
CN104008400A (en) * 2014-06-16 2014-08-27 河南科技大学 Object recognition method with combination of SIFT and BP network
CN105488541A (en) * 2015-12-17 2016-04-13 上海电机学院 Natural feature point identification method based on machine learning in augmented reality system
CN106446930A (en) * 2016-06-28 2017-02-22 沈阳工业大学 Deep convolutional neural network-based robot working scene identification method
US20170061198A1 (en) * 2015-08-25 2017-03-02 Research Cooperation Foundation Of Yeungnam University System and method for detecting feature points of face

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010170184A (en) * 2009-01-20 2010-08-05 Seiko Epson Corp Specifying position of characteristic portion of face image
CN103578093B (en) * 2012-07-18 2016-08-17 成都理想境界科技有限公司 Method for registering images, device and augmented reality system
CN103310445A (en) * 2013-06-01 2013-09-18 吉林大学 Parameter estimation method of virtual view point camera for drawing virtual view points
CN103617432B (en) * 2013-11-12 2017-10-03 华为技术有限公司 A kind of scene recognition method and device
CN105184271A (en) * 2015-09-18 2015-12-23 苏州派瑞雷尔智能科技有限公司 Automatic vehicle detection method based on deep learning
CN109117773B (en) * 2018-08-01 2021-11-02 Oppo广东移动通信有限公司 Image feature point detection method, terminal device and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101859326A (en) * 2010-06-09 2010-10-13 南京大学 Image searching method
CN104008400A (en) * 2014-06-16 2014-08-27 河南科技大学 Object recognition method with combination of SIFT and BP network
US20170061198A1 (en) * 2015-08-25 2017-03-02 Research Cooperation Foundation Of Yeungnam University System and method for detecting feature points of face
CN105488541A (en) * 2015-12-17 2016-04-13 上海电机学院 Natural feature point identification method based on machine learning in augmented reality system
CN106446930A (en) * 2016-06-28 2017-02-22 沈阳工业大学 Deep convolutional neural network-based robot working scene identification method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HIROKI YOSHIHARA et al.: "Automatic Feature Point Detection Using Deep Convolutional Networks for Quantitative Evaluation of Facial Paralysis", 2016 9th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI) *
WANG NAN: "Research and Application of SfM-based Three-dimensional Building Reconstruction Technology", China Master's Theses Full-text Database, Information Science and Technology (Monthly), Computer Software and Computer Applications *
CHEN WENWU: "Research on Augmented Reality Furniture Arrangement Based on Mobile Devices", China Master's Theses Full-text Database, Information Science and Technology (Monthly), Computer Software and Computer Applications *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020024744A1 (en) * 2018-08-01 2020-02-06 Oppo广东移动通信有限公司 Image feature point detecting method, terminal device, and storage medium
CN110322399A * 2019-07-05 2019-10-11 深圳开立生物医疗科技股份有限公司 Ultrasound image adjustment method, system, device and computer storage medium
CN110322399B (en) * 2019-07-05 2023-05-05 深圳开立生物医疗科技股份有限公司 Ultrasonic image adjustment method, system, equipment and computer storage medium
CN110942063A (en) * 2019-11-21 2020-03-31 望海康信(北京)科技股份公司 Certificate text information acquisition method and device and electronic equipment
CN110942063B (en) * 2019-11-21 2023-04-07 望海康信(北京)科技股份公司 Certificate text information acquisition method and device and electronic equipment
CN113240031A (en) * 2021-05-25 2021-08-10 中德(珠海)人工智能研究院有限公司 Panoramic image feature point matching model training method and device and server

Also Published As

Publication number Publication date
CN109117773B (en) 2021-11-02
WO2020024744A1 (en) 2020-02-06

Similar Documents

Publication Publication Date Title
CN109117773A Image feature point detection method, terminal device and storage medium
CN108319953B Target object occlusion detection method and apparatus, electronic device and storage medium
CN109784186B Pedestrian re-identification method and device, electronic equipment and computer-readable storage medium
WO2020119527A1 Human action recognition method and apparatus, and terminal device and storage medium
CN108765278A Image processing method, mobile terminal and computer-readable storage medium
CN108776819A Target recognition method, mobile terminal and computer-readable storage medium
CN109492627B Scene text erasing method based on a fully convolutional network depth model
CN109816769A Depth-camera-based scene map generation method, apparatus and device
CN112052186B Target detection method, device, equipment and storage medium
CN109583449A Character recognition method and related product
CN108038459A Aquatic organism detection and recognition method, terminal device and storage medium
CN107958230B Facial expression recognition method and device
CN106485196A Visual search method, corresponding system, equipment and computer program
CN110381369A Method, apparatus, device and storage medium for determining a recommendation information insertion position
CN109948397A Face image correction method, system and terminal device
CN110175980A Image definition recognition method, image definition recognition device and terminal device
CN109840881A 3D special effect image generation method, device and equipment
CN109101946A Image feature extraction method, terminal device and storage medium
CN106709404A Image processing device and image processing method
CN106408037A Image recognition method and apparatus
CN110400338A Depth map processing method, device and electronic equipment
JP2002203242A Plant recognition system
CN110321761A Behavior recognition method, terminal device and computer-readable storage medium
CN111067522A Brain addiction structural map assessment method and device
WO2019119396A1 Facial expression recognition method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant