CN109165572A - Method and apparatus for generating information - Google Patents

Method and apparatus for generating information

Info

Publication number
CN109165572A
CN109165572A
Authority
CN
China
Prior art keywords
submatrix
matched
eigenmatrix
sample
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810879258.1A
Other languages
Chinese (zh)
Other versions
CN109165572B (en)
Inventor
王健
李甫
李旭斌
孙昊
文石磊
丁二锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810879258.1A priority Critical patent/CN109165572B/en
Publication of CN109165572A publication Critical patent/CN109165572A/en
Application granted granted Critical
Publication of CN109165572B publication Critical patent/CN109165572B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 - Matching configurations of points or features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Abstract

Embodiments of the present application disclose a method and apparatus for generating information. One specific embodiment of the method includes: extracting the feature matrix of a target object in an image to be recognized, where the dimensions of the feature matrix include a first dimension and a second dimension, the first dimension characterizing image features of the target object's image within its image region, and the second dimension characterizing the position, within the image region, of the image corresponding to those features; splitting the feature matrix along the second dimension to obtain at least two submatrices; and, for each submatrix of the at least two submatrices: obtaining, from the feature matrix of an object to be matched, the submatrix whose position in the image region matches that of the given submatrix, and generating a similarity between the object to be matched and the target object based on the given submatrix and the obtained submatrix. This embodiment provides a mechanism for generating similarity information based on local features, enriching methods of information generation.

Description

Method and apparatus for generating information
Technical field
Embodiments of the present application relate to the field of computer technology, and more particularly to a method and apparatus for generating information.
Background technique
With the rapid development of computer technology, digital image processing has advanced quickly and penetrated every aspect of daily life. Target recognition, as an important research topic in the field of digital image processing, is widely applied in areas such as national defense, public transport, social security, and commercial applications. Target recognition means identifying a specific target in an image; for example, accurately identifying a pedestrian in camera video while the pedestrian changes posture, or re-identifying the pedestrian after occlusion. Existing target recognition is mainly based on global features of the target.
Summary of the invention
The embodiment of the present application proposes the method and apparatus for generating information.
In a first aspect, an embodiment of the present application provides a method for generating information. The method comprises: extracting the feature matrix of a target object in an image to be recognized, where the dimensions of the feature matrix include a first dimension and a second dimension, the first dimension characterizing image features of the target object's image within its image region and the second dimension characterizing the position, within the image region, of the image corresponding to those features; splitting the feature matrix along the second dimension to obtain at least two submatrices; and, for each submatrix of the at least two submatrices: obtaining, from the feature matrix of an object to be matched, the submatrix whose position in the image region matches that of the given submatrix, where the feature matrix of the object to be matched is split in the same way as the feature matrix of the target object; and generating a similarity between the object to be matched and the target object based on the given submatrix and the obtained submatrix.
In some embodiments, the second dimension includes a horizontal dimension and/or a vertical dimension, and splitting the feature matrix along the second dimension to obtain at least two submatrices comprises: splitting the feature matrix evenly along the horizontal dimension and/or the vertical dimension into a preset number of submatrices.
In some embodiments, extracting the feature matrix of the target object in the image to be recognized comprises: performing target detection on the image to be recognized to obtain location information of the image region in which the target object lies; and performing feature extraction, using a deep neural network, on the image within that image region to obtain the feature matrix of the target object.
In some embodiments, generating the similarity between the object to be matched and the target object based on the given submatrix and the obtained submatrix comprises: inputting the two submatrices into a pre-trained classification-and-metric-learning model specific to the submatrix's position in the image region, and obtaining the identifier of the object to be matched that matches the target object together with the similarity between the target object and the object to be matched. The classification-and-metric-learning model characterizes the correspondence between the input submatrices of the target object and of the object to be matched, on the one hand, and the identifier of the matching object to be matched and the similarity between the target object and the object to be matched, on the other.
In some embodiments, the method further includes: obtaining a sample set, where each sample in the set includes an image of a sample target object within its image region, an image of a sample object to be matched within its image region, and the identifier of the object to be matched that matches the sample target object; and selecting a sample from the sample set and executing the following training step: extracting the feature matrices of the selected sample target object and sample object to be matched from their respective images; splitting the extracted feature matrices along the second dimension to obtain at least two submatrices of the sample target object's feature matrix and at least two submatrices of the sample object to be matched's feature matrix; inputting the submatrices occupying the same position in the image region into the initial classification-and-metric-learning model for that position, and adjusting the relevant parameters of that model according to its output and the identifier of the object to be matched that matches the selected sample target object; determining whether the initial classification-and-metric-learning model for that position has finished training; in response to determining that it has, using it as the trained model for that position; and in response to determining that it has not, selecting another sample from the sample set and continuing the training step with the adjusted model serving as the initial classification-and-metric-learning model for that position.
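The training step above can be sketched as a loop. The "model" below is only a scalar bias per position, a deliberately tiny stand-in for the classification-and-metric-learning network, which the summary leaves unspecified; the sample format (per-position scores plus a match label) and the learning rule are likewise invented for illustration.

```python
import random

def train_per_position_models(sample_pairs, n_positions, steps=200, lr=0.1):
    """Toy sketch of the per-position training loop described above.

    `sample_pairs` holds (per_position_scores, label) tuples, with
    label 1 for a matching pair and 0 otherwise.
    """
    models = [0.0] * n_positions
    for _ in range(steps):
        scores, label = random.choice(sample_pairs)  # select a sample
        for pos in range(n_positions):
            pred = models[pos] + scores[pos]      # per-position prediction
            models[pos] += lr * (label - pred)    # adjust toward the label
    return models

random.seed(0)
# two toy samples: a matching pair with high part scores, a non-match with low
samples = [([0.9, 0.8, 0.7], 1), ([0.1, 0.2, 0.1], 0)]
trained = train_per_position_models(samples, n_positions=3)
print(len(trained))  # one trained "model" per position: prints 3
```

A real embodiment would replace the scalar bias with a network per position and the fixed step count with the convergence test the text mentions.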
In a second aspect, an embodiment of the present application provides an apparatus for generating information. The apparatus includes: an extraction unit configured to extract the feature matrix of a target object in an image to be recognized, where the dimensions of the feature matrix include a first dimension and a second dimension, the first dimension characterizing image features of the target object's image within its image region and the second dimension characterizing the position, within the image region, of the image corresponding to those features; a splitting unit configured to split the feature matrix along the second dimension to obtain at least two submatrices; and a generation unit configured, for each submatrix of the at least two submatrices, to obtain from the feature matrix of an object to be matched the submatrix whose position in the image region matches that of the given submatrix, where the feature matrix of the object to be matched is split in the same way as the feature matrix of the target object, and to generate a similarity between the object to be matched and the target object based on the given submatrix and the obtained submatrix.
In some embodiments, the second dimension includes a horizontal dimension and/or a vertical dimension, and the splitting unit is further configured to split the feature matrix evenly along the horizontal dimension and/or the vertical dimension into a preset number of submatrices.
In some embodiments, the extraction unit includes: a detection subunit configured to perform target detection on the image to be recognized and obtain location information of the image region in which the target object lies; and an extraction subunit configured to perform feature extraction, using a deep neural network, on the image within that image region, obtaining the feature matrix of the target object.
In some embodiments, the generation unit is further configured to: input the given submatrix and the obtained submatrix into a pre-trained classification-and-metric-learning model specific to the submatrix's position in the image region, and obtain the identifier of the object to be matched that matches the target object together with the similarity between the target object and the object to be matched, where the classification-and-metric-learning model characterizes the correspondence between the input submatrices of the target object and of the object to be matched, on the one hand, and the identifier of the matching object to be matched and the similarity between the target object and the object to be matched, on the other.
In some embodiments, the apparatus further includes: an acquisition unit configured to obtain a sample set, where each sample in the set includes an image of a sample target object within its image region, an image of a sample object to be matched within its image region, and the identifier of the object to be matched that matches the sample target object; and a training unit configured to select a sample from the sample set and execute the following training step: extracting the feature matrices of the selected sample target object and sample object to be matched from their respective images; splitting the extracted feature matrices along the second dimension to obtain at least two submatrices of the sample target object's feature matrix and at least two submatrices of the sample object to be matched's feature matrix; inputting the submatrices occupying the same position in the image region into the initial classification-and-metric-learning model for that position, and adjusting the relevant parameters of that model according to its output and the identifier of the object to be matched that matches the selected sample target object; determining whether the initial classification-and-metric-learning model for that position has finished training; in response to determining that it has, using it as the trained model for that position; and in response to determining that it has not, selecting another sample from the sample set and continuing the training step with the adjusted model serving as the initial classification-and-metric-learning model for that position.
In a third aspect, an embodiment of the present application provides a device comprising: one or more processors; and a storage apparatus on which one or more programs are stored, the one or more programs, when executed by the one or more processors, causing the one or more processors to implement the method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the method of the first aspect.
The method and apparatus for generating information provided by the embodiments of the present application extract the feature matrix of a target object in an image to be recognized, split the feature matrix along its second dimension to obtain at least two submatrices, then, for each submatrix of the at least two submatrices, obtain from the feature matrix of an object to be matched the submatrix whose position in the image region matches, and finally generate a similarity between the object to be matched and the target object based on the given submatrix and the obtained submatrix. This provides a mechanism for generating similarity information based on local features, enriching methods of information generation.
Detailed description of the invention
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for generating information according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for generating information according to the present application;
Fig. 4 is a flowchart of another embodiment of the method for generating information according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for generating information according to the present application;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing the server or terminal of the embodiments of the present application.
Specific embodiment
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the relevant invention and are not a restriction of that invention. It should also be noted that, for ease of description, the drawings show only the parts relevant to the invention.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features of those embodiments may be combined with one another. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which embodiments of the method or apparatus for generating information of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 serves as the medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired links, wireless communication links, or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages. Various applications may be installed on the terminal devices 101, 102, 103, such as security-monitoring applications, image-acquisition applications, image-processing applications, communication client applications, and search applications.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices with display screens, including but not limited to smartphones, tablet computers, laptop portable computers, and desktop computers. When they are software, they may be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module; no specific limitation is made here.
The server 105 may be a server providing various services, for example a background server providing support for the applications installed on the terminal devices 101, 102, 103. The server 105 may extract the feature matrix of a target object in an image to be recognized, where the dimensions of the feature matrix include a first dimension and a second dimension, the first dimension characterizing image features of the target object's image within its image region and the second dimension characterizing the position, within the image region, of the image corresponding to those features; split the feature matrix along the second dimension to obtain at least two submatrices; for each submatrix of the at least two submatrices, obtain from the feature matrix of an object to be matched the submatrix whose position in the image region matches, where the feature matrix of the object to be matched is split in the same way as the feature matrix of the target object; and generate a similarity between the object to be matched and the target object based on the given submatrix and the obtained submatrix.
It should be noted that the method for generating information provided by the embodiments of the present application may be executed by the server 105 or by the terminal devices 101, 102, 103; correspondingly, the apparatus for generating information may be set in the server 105 or in the terminal devices 101, 102, 103.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, for providing distributed services) or as a single piece of software or software module; no specific limitation is made here.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative; there may be any number of terminal devices, networks, and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating information according to the present application is shown. The method for generating information comprises the following steps:
Step 201: extract the feature matrix of a target object in an image to be recognized.
In the present embodiment, the execution body of the method for generating information (for example, the server or terminal shown in Fig. 1) may first extract the feature matrix of the target object in the image to be recognized. The execution body may use one or more convolutional neural networks to extract the image features. The image to be recognized may be any picture to be recognized, for example a picture of a pedestrian taken by a security camera. The target object may be a human or an animal, or a body part of a human or animal.
Here, the dimensions of the feature matrix include a first dimension and a second dimension; the first dimension characterizes image features of the target object's image within its image region, and the second dimension characterizes the position, within the image region, of the image corresponding to those features. As an example, the feature matrix may be a three-dimensional matrix of dimensions C*H*W, where C may be used to characterize the image features of the target object's image in the image region, H may be used to characterize the vertical position within the image region of the image corresponding to the features, and W may be used to characterize the horizontal position within the image region of the image corresponding to the features.
In some optional implementations of the present embodiment, extracting the feature matrix of the target object in the image to be recognized comprises: performing target detection on the image to be recognized to obtain location information of the image region in which the target object lies; and performing feature extraction, using a deep neural network, on the image within that image region to obtain the feature matrix of the target object.
In this implementation, target detection may identify a target by analyzing the target's features in an image or video. The location information of the image region in which the target object lies may be any information that can characterize the position of that image region, for example the coordinates of the image region. When the target region is a rectangle, the location information may include the abscissa and ordinate of the upper-right vertex and the abscissa and ordinate of the lower-left vertex, or it may characterize the abscissa and ordinate of the center of the region together with the region's length and width. In addition, feature extraction methods based on templates, edges, grayscale, and the like may also be used; the present application does not limit this.
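The two location encodings mentioned above (corner vertices versus center plus size) are interchangeable. As a hedged illustration, a hypothetical helper converting a corner-encoded box to center form might look like this; the function name and min/max corner convention are assumptions, not something the patent specifies.

```python
def corners_to_center(x_min, y_min, x_max, y_max):
    """Convert a corner-encoded box to (center_x, center_y, width, height).

    Hypothetical helper illustrating the two location encodings the text
    mentions; the patent does not mandate either representation.
    """
    width = x_max - x_min
    height = y_max - y_min
    return (x_min + width / 2.0, y_min + height / 2.0, width, height)

print(corners_to_center(10, 20, 50, 100))  # (30.0, 60.0, 40, 80)
```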
Step 202: split the feature matrix along the second dimension to obtain at least two submatrices.
In the present embodiment, the above execution body may split the feature matrix extracted in step 201 along the second dimension to obtain at least two submatrices. The execution body may split the feature matrix evenly or according to a preset ratio, which may be configured according to actual needs.
In some optional implementations of the present embodiment, the second dimension includes a horizontal dimension and/or a vertical dimension, and splitting the feature matrix along the second dimension to obtain at least two submatrices comprises: splitting the feature matrix evenly along the horizontal dimension and/or the vertical dimension into a preset number of submatrices.
In this implementation, when the second dimension includes the horizontal dimension, the feature matrix may be split evenly along the horizontal dimension into a preset number of submatrices; when the second dimension includes the vertical dimension, the feature matrix may be split evenly along the vertical dimension into a preset number of submatrices.
As an example, the feature matrix may be split evenly into 6 submatrices along the vertical dimension, or into 2 submatrices along the horizontal dimension. The specific numbers may be configured according to actual needs.
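The even split described above can be sketched as follows. This is a minimal sketch assuming the feature matrix is a nested (C, H, W) list and H is divisible by the preset number; a real implementation would slice framework tensors instead.

```python
def split_evenly_vertical(feature, n_parts):
    """Split a (C, H, W) feature matrix into n_parts equal slices along H.

    `feature` is a nested list indexed feature[c][h][w]; H is assumed
    divisible by n_parts for this sketch.
    """
    h_dim = len(feature[0])
    assert h_dim % n_parts == 0, "H must divide evenly into n_parts"
    step = h_dim // n_parts
    return [
        [channel[i * step:(i + 1) * step] for channel in feature]
        for i in range(n_parts)
    ]

# toy feature matrix: C=2 channels, H=6 rows, W=4 columns
feat = [[[c * 100 + h * 10 + w for w in range(4)] for h in range(6)]
        for c in range(2)]
parts = split_evenly_vertical(feat, 6)
print(len(parts), len(parts[0][0]))  # 6 slices, each 1 row tall: prints "6 1"
```

Splitting along the horizontal dimension works the same way, slicing each row instead of each channel.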
Step 203: obtain, from the feature matrix of an object to be matched, the submatrix whose position in the image region matches that of the given submatrix.
In the present embodiment, for each submatrix among the at least two submatrices obtained in step 202, the above execution body may obtain, from the feature matrix of the object to be matched, the submatrix whose position in the image region matches that of the given submatrix. Here, the feature matrix of the object to be matched is split in the same way as the feature matrix of the target object. As an example, if the splitting divides the feature matrix evenly into 2 submatrices along the horizontal dimension, then the left submatrix of the object to be matched's feature matrix matches, in position within the image region, the left submatrix of the target object's feature matrix. If the splitting divides the feature matrix evenly into 6 submatrices along the vertical dimension, then the 2nd submatrix from the top in the object to be matched's feature matrix matches, in position within the image region, the 2nd submatrix from the top in the target object's feature matrix.
Step 204: generate a similarity between the object to be matched and the target object based on the given submatrix and the obtained submatrix.
In the present embodiment, the above execution body may generate the similarity between the object to be matched and the target object based on the given submatrix and the submatrix obtained in step 203. The similarity may be determined by a metric learning algorithm, or by a distance between the feature matrices; distances here include but are not limited to Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, standardized Euclidean distance, Mahalanobis distance, cosine of the included angle, Hamming distance, Jaccard distance, correlation distance, information entropy, and other inter-matrix distances currently known or developed in the future.
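As a concrete illustration of the distance-based option, the following minimal sketch flattens position-matched slices, scores each pair with the cosine of the included angle, and averages the per-part scores. The averaging step is an assumption for illustration; the patent does not fix how per-part scores are combined.

```python
import math

def flatten(sub):
    # flatten a (C, h, W) slice into a single vector
    return [v for channel in sub for row in channel for v in row]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def part_based_similarity(target_parts, candidate_parts):
    # compare each slice with the position-matched slice of the candidate
    scores = [cosine_similarity(flatten(t), flatten(c))
              for t, c in zip(target_parts, candidate_parts)]
    return sum(scores) / len(scores)  # average the per-part scores

a = [[[[1.0, 0.0]]], [[[0.0, 1.0]]]]  # two 1-channel slices, each 1x2
print(round(part_based_similarity(a, a), 6))  # identical parts -> 1.0
```

Any of the other listed distances could be substituted for the cosine score without changing the part-wise structure.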
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for generating information according to the present embodiment. In the application scenario of Fig. 3, a server 301 may first extract the feature matrix of a target object 3021 in an image to be recognized 302, where the dimensions of the feature matrix include a first dimension and a second dimension, the first dimension characterizing image features of the image within the target object's image region and the second dimension characterizing the position of the corresponding image within the image region; then split the feature matrix along the second dimension to obtain at least two submatrices; for each submatrix of the at least two submatrices, obtain from the feature matrices of objects to be matched 3031, 3032, and 3033 in an image 303 the submatrices whose positions in the image region match; and generate the similarities between the objects to be matched 3031, 3032, 3033 and the target object 3021 based on the given submatrix and the obtained submatrices.
The method provided by the above embodiment of the present application extracts the feature matrix of a target object in an image to be recognized, where the dimensions of the feature matrix include a first dimension and a second dimension, the first dimension characterizing image features of the target object's image within its image region and the second dimension characterizing the position, within the image region, of the image corresponding to those features; splits the feature matrix along the second dimension to obtain at least two submatrices; for each submatrix of the at least two submatrices, obtains from the feature matrix of an object to be matched the submatrix whose position in the image region matches, where the feature matrix of the object to be matched is split in the same way as the feature matrix of the target object; and generates a similarity between the object to be matched and the target object based on the given submatrix and the obtained submatrix. This provides a mechanism for generating similarity information based on local features, enriching methods of information generation.
With further reference to Fig. 4, it illustrates the processes 400 of another embodiment of the method for generating information.The use In the process 400 for the method for generating information, comprising the following steps:
Step 401, the eigenmatrix of target object in images to be recognized is extracted.
It in the present embodiment, can for generating the method executing subject (such as server shown in FIG. 1 or terminal) of information To extract the eigenmatrix of target object in images to be recognized first.
Step 402: slicing the feature matrix along the second dimension to obtain at least two submatrices.
In the present embodiment, the executing body may slice, along the second dimension, the feature matrix extracted in step 401 to obtain at least two submatrices.
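As an illustrative sketch (not the patent's own implementation), slicing a feature matrix of shape (channels, height, width) into equal horizontal strips along the height axis — one possible reading of "slicing along the second dimension" — might look like the following; the function name and shapes are assumptions for illustration:

```python
import numpy as np

def slice_feature_matrix(feat, num_parts):
    """Split a (C, H, W) feature matrix into num_parts equal
    horizontal strips along the height axis."""
    c, h, w = feat.shape
    assert h % num_parts == 0, "height must be divisible for equal slicing"
    step = h // num_parts
    return [feat[:, i * step:(i + 1) * step, :] for i in range(num_parts)]

feat = np.zeros((256, 8, 4))            # hypothetical feature matrix
parts = slice_feature_matrix(feat, 4)   # four (256, 2, 4) submatrices
```

Each strip keeps all channels but covers only a band of positions, so later comparisons operate on local features.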
Step 403: obtaining, from the feature matrix of the to-be-matched object, the submatrix whose position in the image region matches that of the submatrix.
In the present embodiment, for each submatrix of the at least two submatrices sliced in step 402, the executing body may obtain, from the feature matrix of the to-be-matched object, the submatrix whose position in the image region matches that of the submatrix.
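A minimal sketch of this position matching, under the assumption that both feature matrices are sliced in the same way: slice the candidate's feature matrix identically and pair the submatrices that occupy the same position. The helper below is hypothetical:

```python
import numpy as np

def position_matched_pairs(target_feat, candidate_feat, num_parts):
    """Slice both (C, H, W) feature matrices identically along the
    height axis and return (target_part, candidate_part) pairs whose
    positions in the image region match."""
    def split(feat):
        step = feat.shape[1] // num_parts
        return [feat[:, i * step:(i + 1) * step, :] for i in range(num_parts)]
    return list(zip(split(target_feat), split(candidate_feat)))

pairs = position_matched_pairs(np.ones((64, 6, 3)), np.zeros((64, 6, 3)), 3)
```

Because the slicing modes are identical, the i-th target submatrix is always compared against the i-th candidate submatrix.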
Step 404: inputting the submatrix and the acquired submatrix into a pre-trained classification and metric learning model for the position of the submatrix in the image region, to obtain the identifier of the to-be-matched object matching the target object and the similarity between the target object and the to-be-matched object.
In the present embodiment, the executing body may input the submatrix and the submatrix obtained in step 403 into a pre-trained classification and metric learning model for the position of the submatrix in the image region, to obtain the identifier of the to-be-matched object matching the target object and the similarity between the target object and the to-be-matched object. The identifier may be any information capable of distinguishing different objects. As an example, when the target object is a pedestrian, the identifier of a to-be-matched object may be a pedestrian ID detected in an image captured by a camera.
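Once a per-candidate similarity is available, selecting the matching identifier reduces to an argmax over candidates. A trivial illustrative helper (the pedestrian IDs below are made up):

```python
def best_match(similarities):
    """Given a mapping {candidate_id: similarity}, return the identifier
    of the most similar to-be-matched object and its similarity score."""
    cid = max(similarities, key=similarities.get)
    return cid, similarities[cid]

match = best_match({"ped_01": 0.31, "ped_02": 0.87, "ped_03": 0.12})
```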
As an example, the classification and metric learning model may include a correspondence table between, on the one hand, the submatrices of target objects and the submatrices of to-be-matched objects and, on the other hand, the identifiers of the to-be-matched objects matching the target objects and the similarities between the target objects and the to-be-matched objects. The correspondence table may be pre-established by technicians based on statistics, over a large number of submatrices of target objects and submatrices of to-be-matched objects, of the identifiers of the to-be-matched objects matching the target objects and of the similarities between the target objects and the to-be-matched objects, and stores these correspondences for multiple target objects and to-be-matched objects.
In some optional implementations of the present embodiment, the method further includes: obtaining a sample set, where a sample in the sample set includes an image of the image region where a sample target object is located, an image of the image region where a sample to-be-matched object is located, and the identifier of the to-be-matched object matching the sample target object; and selecting a sample from the sample set and performing the following training steps: extracting, from the image of the image region where the selected sample target object is located and the image of the image region where the sample to-be-matched object is located, the feature matrices of the selected sample target object and sample to-be-matched object; slicing, along the second dimension of the extracted feature matrices, the feature matrices of the selected sample target object and sample to-be-matched object, to obtain at least two submatrices of the feature matrix of the sample target object and at least two submatrices of the feature matrix of the sample to-be-matched object; inputting the sliced submatrices having the same position in the image region into an initial classification and metric learning model for that position, and adjusting relevant parameters of the initial classification and metric learning model for that position according to the output of the model and the identifier of the to-be-matched object matching the selected sample target object; determining whether the training of the initial classification and metric learning model for that position is completed; in response to determining that the training of the initial classification and metric learning model for that position is completed, using the initial classification and metric learning model for that position as the trained classification and metric learning model; and in response to determining that the training of the initial classification and metric learning model for that position is not completed, selecting a sample from the sample set again, using the adjusted initial classification and metric learning model for that position as the initial classification and metric learning model for that position, and continuing to perform the training steps.
In this implementation, the initial classification and metric learning model may be a combination of a classification model and a metric learning model commonly used in this field. The loss functions of the classification model and the metric learning model may be combined, for example by computing a weighted sum of the two loss functions, to train the initial classification and metric learning model. Whether the training of the classification and metric learning model is completed may be determined by whether the value of the loss function is below a preset threshold, or by whether all samples in the sample set have been selected. The model parameters may be adjusted using methods such as the back propagation algorithm (BP algorithm) and gradient descent methods (e.g., the stochastic gradient descent algorithm); the present application does not limit this.
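As a sketch of the weighted-sum idea mentioned above — one reasonable concrete form assumed for illustration, not the patent's prescribed loss — a classification loss (softmax cross-entropy over identity logits) and a metric-learning loss (a contrastive loss on a pair of part embeddings) can be combined as:

```python
import numpy as np

def combined_loss(logits, label, emb_a, emb_b, same, margin=1.0, alpha=0.5):
    """Weighted sum of a classification loss and a metric-learning loss.
    alpha weights softmax cross-entropy over identity logits; (1 - alpha)
    weights a contrastive loss on the distance between two embeddings."""
    z = logits - logits.max()                    # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum())
    cls_loss = -log_probs[label]
    d = np.linalg.norm(emb_a - emb_b)
    metric_loss = d ** 2 if same else max(0.0, margin - d) ** 2
    return alpha * cls_loss + (1 - alpha) * metric_loss
```

Training would then backpropagate this scalar through both heads; the margin, the weight alpha, and the contrastive form are all tunable assumptions.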
In the present embodiment, the operations of step 401, step 402 and step 403 are substantially the same as those of step 201, step 202 and step 203, and are not repeated here.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for generating information in the present embodiment obtains, through a pre-trained classification and metric learning model for the position of the submatrix in the image region, the identifier of the to-be-matched object matching the target object and the similarity between the target object and the to-be-matched object. The scheme described in the present embodiment thereby further improves the accuracy of the generated information.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for generating information. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for generating information of the present embodiment includes: an extraction unit 501, a slicing unit 502 and a generation unit 503. The extraction unit is configured to extract the feature matrix of the target object in the to-be-recognized image, the dimensions of the feature matrix including a first dimension and a second dimension, where the first dimension is used to characterize image features of the image in the image region where the target object is located, and the second dimension is used to characterize the position, within the image region, of the image corresponding to an image feature. The slicing unit is configured to slice the feature matrix along the second dimension to obtain at least two submatrices. The generation unit is configured to, for each submatrix of the at least two submatrices: obtain, from the feature matrix of the to-be-matched object, the submatrix whose position in the image region matches that of the submatrix, where the slicing mode of the submatrices of the feature matrix of the to-be-matched object is identical to that of the submatrices of the feature matrix of the target object; and generate the similarity between the to-be-matched object and the target object based on the submatrix and the acquired submatrix.
In the present embodiment, for the specific processing of the extraction unit 501, the slicing unit 502 and the generation unit 503 of the apparatus 500 for generating information, reference may be made to step 201, step 202, step 203 and step 204 in the embodiment corresponding to Fig. 2.
In some optional implementations of the present embodiment, the second dimension includes a horizontal-direction dimension and/or a vertical-direction dimension; and the slicing unit is further configured to evenly slice the feature matrix, along the horizontal-direction dimension and/or the vertical-direction dimension, into a preset number of submatrices.
In some optional implementations of the present embodiment, the extraction unit includes: a detection subunit, configured to perform target detection on the to-be-recognized image to obtain position information of the image region where the target object is located; and an extraction subunit, configured to perform, using a deep neural network, feature extraction on the image of the image region where the target object is located, to obtain the feature matrix of the target object.
In some optional implementations of the present embodiment, the generation unit is further configured to: input the submatrix and the acquired submatrix into a pre-trained classification and metric learning model for the position of the submatrix in the image region, to obtain the identifier of the to-be-matched object matching the target object and the similarity between the target object and the to-be-matched object, where the classification and metric learning model is used to characterize the correspondence between, on the one hand, the input submatrix of the target object and submatrix of the to-be-matched object and, on the other hand, the identifier of the to-be-matched object matching the target object and the similarity between the target object and the to-be-matched object.
In some optional implementations of the present embodiment, the apparatus further includes: an acquiring unit, configured to obtain a sample set, where a sample in the sample set includes an image of the image region where a sample target object is located, an image of the image region where a sample to-be-matched object is located, and the identifier of the to-be-matched object matching the sample target object; and a training unit, configured to select a sample from the sample set and perform the following training steps: extracting, from the image of the image region where the selected sample target object is located and the image of the image region where the sample to-be-matched object is located, the feature matrices of the selected sample target object and sample to-be-matched object; slicing, along the second dimension of the extracted feature matrices, the feature matrices of the selected sample target object and sample to-be-matched object, to obtain at least two submatrices of the feature matrix of the sample target object and at least two submatrices of the feature matrix of the sample to-be-matched object; inputting the sliced submatrices having the same position in the image region into an initial classification and metric learning model for that position, and adjusting relevant parameters of the initial classification and metric learning model for that position according to the output of the model and the identifier of the to-be-matched object matching the selected sample target object; determining whether the training of the initial classification and metric learning model for that position is completed; in response to determining that the training of the initial classification and metric learning model for that position is completed, using the initial classification and metric learning model for that position as the trained classification and metric learning model; and in response to determining that the training of the initial classification and metric learning model for that position is not completed, selecting a sample from the sample set again, using the adjusted initial classification and metric learning model for that position as the initial classification and metric learning model for that position, and continuing to perform the training steps.
In the apparatus provided by the above embodiment of the present application, the feature matrix of the target object in the to-be-recognized image is extracted, the dimensions of the feature matrix including a first dimension and a second dimension, where the first dimension is used to characterize image features of the image in the image region where the target object is located, and the second dimension is used to characterize the position, within the image region, of the image corresponding to an image feature; the feature matrix is sliced along the second dimension to obtain at least two submatrices; for each submatrix of the at least two submatrices, the submatrix whose position in the image region matches that of the submatrix is obtained from the feature matrix of the to-be-matched object, where the slicing mode of the submatrices of the feature matrix of the to-be-matched object is identical to that of the submatrices of the feature matrix of the target object; and the similarity between the to-be-matched object and the target object is generated based on the submatrix and the acquired submatrix. This provides an information generating mechanism based on the similarity of local features, thereby enriching information generating methods.
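One plausible way (an assumption for illustration, not the patent's prescribed formula) to turn the position-matched submatrix comparisons described above into an overall similarity is to average cosine similarities over the part pairs:

```python
import numpy as np

def overall_similarity(target_parts, candidate_parts):
    """Average cosine similarity over position-matched submatrix pairs,
    turning local (part-level) comparisons into one similarity score."""
    sims = []
    for a, b in zip(target_parts, candidate_parts):
        va, vb = a.ravel(), b.ravel()
        sims.append(float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))))
    return sum(sims) / len(sims)

parts = [np.ones((8, 2, 2)), np.full((8, 2, 2), 2.0)]
score = overall_similarity(parts, parts)   # identical parts: similarity 1.0
```

Because each pair covers only one position, a candidate must resemble the target part by part, which is the point of comparing local rather than global features.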
Referring now to Fig. 6, a schematic structural diagram of a computer system 600 suitable for implementing the server or the terminal of the embodiments of the present application is shown. The server or terminal shown in Fig. 6 is merely an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required by the operations of the system 600. The CPU 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, etc.; an output portion 607 including a cathode-ray tube (CRT), a liquid crystal display (LCD), a speaker, etc.; a storage portion 608 including a hard disk, etc.; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, may be mounted on the driver 610 as needed, so that a computer program read therefrom can be installed into the storage portion 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flow chart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-mentioned functions defined in the method of the present application are performed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable medium, or any combination of the two. A computer-readable medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable medium, which can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
Computer program code for executing the operations of the present application may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk and C++, and also include conventional procedural programming languages such as the C language or similar programming languages. The program code may be executed entirely on a user's computer, partly on a user's computer, as a stand-alone software package, partly on a user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flow charts and block diagrams in the accompanying drawings illustrate the architectures, functions and operations of possible implementations of the systems, methods and computer program products according to the various embodiments of the present application. In this regard, each block in a flow chart or block diagram may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions denoted in the blocks may occur in an order different from that denoted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in a block diagram and/or flow chart, and combinations of blocks in a block diagram and/or flow chart, may be implemented by a dedicated hardware-based system executing the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software or by means of hardware. The described units may also be provided in a processor; for example, a processor may be described as comprising an extraction unit, a slicing unit and a generation unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the extraction unit may also be described as "a unit configured to extract the feature matrix of the target object in the to-be-recognized image".
As another aspect, the present application further provides a computer-readable medium. The computer-readable medium may be included in the apparatus described in the above embodiments, or may exist alone without being assembled into the apparatus. The computer-readable medium carries one or more programs, which, when executed by the apparatus, cause the apparatus to: extract the feature matrix of the target object in the to-be-recognized image, the dimensions of the feature matrix including a first dimension and a second dimension, where the first dimension is used to characterize image features of the image in the image region where the target object is located, and the second dimension is used to characterize the position, within the image region, of the image corresponding to an image feature; slice the feature matrix along the second dimension to obtain at least two submatrices; and for each submatrix of the at least two submatrices: obtain, from the feature matrix of the to-be-matched object, the submatrix whose position in the image region matches that of the submatrix, where the slicing mode of the submatrices of the feature matrix of the to-be-matched object is identical to that of the submatrices of the feature matrix of the target object; and generate the similarity between the to-be-matched object and the target object based on the submatrix and the acquired submatrix.
The above description is merely a description of the preferred embodiments of the present application and of the applied technical principles. It should be appreciated by those skilled in the art that the scope of the invention involved in the present application is not limited to the technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features having similar functions disclosed in the present application.

Claims (12)

1. A method for generating information, comprising:
extracting the feature matrix of a target object in a to-be-recognized image, the dimensions of the feature matrix comprising a first dimension and a second dimension, the first dimension being used to characterize image features of the image in the image region where the target object is located, and the second dimension being used to characterize the position, within the image region, of the image corresponding to an image feature;
slicing the feature matrix along the second dimension to obtain at least two submatrices; and
for each submatrix of the at least two submatrices: obtaining, from the feature matrix of a to-be-matched object, the submatrix whose position in the image region matches that of the submatrix, wherein the slicing mode of the submatrices of the feature matrix of the to-be-matched object is identical to that of the submatrices of the feature matrix of the target object; and generating the similarity between the to-be-matched object and the target object based on the submatrix and the acquired submatrix.
2. The method according to claim 1, wherein the second dimension comprises a horizontal-direction dimension and/or a vertical-direction dimension; and
the slicing the feature matrix along the second dimension to obtain at least two submatrices comprises:
evenly slicing the feature matrix, along the horizontal-direction dimension and/or the vertical-direction dimension, into a preset number of submatrices.
3. The method according to claim 1, wherein the extracting the feature matrix of the target object in the to-be-recognized image comprises:
performing target detection on the to-be-recognized image to obtain position information of the image region where the target object is located; and
performing, using a deep neural network, feature extraction on the image of the image region where the target object is located, to obtain the feature matrix of the target object.
4. The method according to any one of claims 1-3, wherein the generating the similarity between the to-be-matched object and the target object based on the submatrix and the acquired submatrix comprises:
inputting the submatrix and the acquired submatrix into a pre-trained classification and metric learning model for the position of the submatrix in the image region, to obtain the identifier of the to-be-matched object matching the target object and the similarity between the target object and the to-be-matched object, the classification and metric learning model being used to characterize the correspondence between, on the one hand, the input submatrix of the target object and submatrix of the to-be-matched object and, on the other hand, the identifier of the to-be-matched object matching the target object and the similarity between the target object and the to-be-matched object.
5. The method according to claim 4, wherein the method further comprises:
obtaining a sample set, wherein a sample in the sample set comprises an image of the image region where a sample target object is located, an image of the image region where a sample to-be-matched object is located, and the identifier of the to-be-matched object matching the sample target object; and
selecting a sample from the sample set, and performing the following training steps: extracting, from the image of the image region where the selected sample target object is located and the image of the image region where the sample to-be-matched object is located, the feature matrices of the selected sample target object and sample to-be-matched object; slicing, along the second dimension of the extracted feature matrices, the feature matrices of the selected sample target object and sample to-be-matched object, to obtain at least two submatrices of the feature matrix of the sample target object and at least two submatrices of the feature matrix of the sample to-be-matched object; inputting the sliced submatrices having the same position in the image region into an initial classification and metric learning model for that position, and adjusting relevant parameters of the initial classification and metric learning model for that position according to the output of the initial classification and metric learning model for that position and the identifier of the to-be-matched object matching the selected sample target object; determining whether the training of the initial classification and metric learning model for that position is completed; in response to determining that the training of the initial classification and metric learning model for that position is completed, using the initial classification and metric learning model for that position as the trained classification and metric learning model; and in response to determining that the training of the initial classification and metric learning model for that position is not completed, selecting a sample from the sample set again, using the adjusted initial classification and metric learning model for that position as the initial classification and metric learning model for that position, and continuing to perform the training steps.
6. An apparatus for generating information, comprising:
an extraction unit, configured to extract the feature matrix of a target object in a to-be-recognized image, the dimensions of the feature matrix comprising a first dimension and a second dimension, the first dimension being used to characterize image features of the image in the image region where the target object is located, and the second dimension being used to characterize the position, within the image region, of the image corresponding to an image feature;
a slicing unit, configured to slice the feature matrix along the second dimension to obtain at least two submatrices; and
a generation unit, configured to, for each submatrix of the at least two submatrices: obtain, from the feature matrix of a to-be-matched object, the submatrix whose position in the image region matches that of the submatrix, wherein the slicing mode of the submatrices of the feature matrix of the to-be-matched object is identical to that of the submatrices of the feature matrix of the target object; and generate the similarity between the to-be-matched object and the target object based on the submatrix and the acquired submatrix.
7. The apparatus according to claim 6, wherein the second dimension comprises a horizontal-direction dimension and/or a vertical-direction dimension; and
the slicing unit is further configured to:
evenly slice the feature matrix, along the horizontal-direction dimension and/or the vertical-direction dimension, into a preset number of submatrices.
8. The apparatus according to claim 6, wherein the extraction unit comprises:
a detection subunit, configured to perform target detection on the to-be-recognized image to obtain position information of the image region where the target object is located; and
an extraction subunit, configured to perform, using a deep neural network, feature extraction on the image of the image region where the target object is located, to obtain the feature matrix of the target object.
9. The apparatus according to any one of claims 6-8, wherein the generation unit is further configured to:
input the submatrix and the acquired submatrix into a pre-trained classification and metric learning model for the position of the submatrix in the image region, to obtain the identifier of the to-be-matched object matching the target object and the similarity between the target object and the to-be-matched object, the classification and metric learning model being used to characterize the correspondence between, on the one hand, the input submatrix of the target object and submatrix of the to-be-matched object and, on the other hand, the identifier of the to-be-matched object matching the target object and the similarity between the target object and the to-be-matched object.
10. The apparatus according to claim 9, wherein the apparatus further comprises:
an acquiring unit, configured to obtain a sample set, wherein a sample in the sample set comprises an image of the image region where a sample target object is located, an image of the image region where a sample to-be-matched object is located, and the identifier of the to-be-matched object matching the sample target object; and
a training unit, configured to select a sample from the sample set, and perform the following training steps: extracting, from the image of the image region where the selected sample target object is located and the image of the image region where the sample to-be-matched object is located, the feature matrices of the selected sample target object and sample to-be-matched object; slicing, along the second dimension of the extracted feature matrices, the feature matrices of the selected sample target object and sample to-be-matched object, to obtain at least two submatrices of the feature matrix of the sample target object and at least two submatrices of the feature matrix of the sample to-be-matched object; inputting the sliced submatrices having the same position in the image region into an initial classification and metric learning model for that position, and adjusting relevant parameters of the initial classification and metric learning model for that position according to the output of the initial classification and metric learning model for that position and the identifier of the to-be-matched object matching the selected sample target object; determining whether the training of the initial classification and metric learning model for that position is completed; in response to determining that the training of the initial classification and metric learning model for that position is completed, using the initial classification and metric learning model for that position as the trained classification and metric learning model; and in response to determining that the training of the initial classification and metric learning model for that position is not completed, selecting a sample from the sample set again, using the adjusted initial classification and metric learning model for that position as the initial classification and metric learning model for that position, and continuing to perform the training steps.
11. An electronic device, comprising:
one or more processors; and
a storage device, on which one or more programs are stored;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-5.
12. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
CN201810879258.1A 2018-08-03 2018-08-03 Method and apparatus for generating information Active CN109165572B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810879258.1A CN109165572B (en) 2018-08-03 2018-08-03 Method and apparatus for generating information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810879258.1A CN109165572B (en) 2018-08-03 2018-08-03 Method and apparatus for generating information

Publications (2)

Publication Number Publication Date
CN109165572A true CN109165572A (en) 2019-01-08
CN109165572B CN109165572B (en) 2022-02-08

Family

ID=64898914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810879258.1A Active CN109165572B (en) 2018-08-03 2018-08-03 Method and apparatus for generating information

Country Status (1)

Country Link
CN (1) CN109165572B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113590857A (en) * 2021-08-10 2021-11-02 北京有竹居网络技术有限公司 Key value matching method and device, readable medium and electronic equipment
CN114638774A (en) * 2020-12-01 2022-06-17 珠海碳云智能科技有限公司 Image data processing method and device, and nonvolatile storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040052413A1 (en) * 1999-02-18 2004-03-18 Mihoko Kunii Method of object recognition, apparatus of the same and recording medium therefor
US20110150284A1 (en) * 2009-12-22 2011-06-23 Samsung Electronics Co., Ltd. Method and terminal for detecting and tracking moving object using real-time camera motion
US9466109B1 (en) * 2015-06-30 2016-10-11 Gopro, Inc. Image stitching in a multi-camera array
CN106886771A (en) * 2017-03-15 2017-06-23 同济大学 The main information extracting method of image and face identification method based on modularization PCA
CN107679466A (en) * 2017-09-21 2018-02-09 百度在线网络技术(北京)有限公司 Information output method and device
CN108154196A (en) * 2018-01-19 2018-06-12 百度在线网络技术(北京)有限公司 For exporting the method and apparatus of image

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040052413A1 (en) * 1999-02-18 2004-03-18 Mihoko Kunii Method of object recognition, apparatus of the same and recording medium therefor
US20110150284A1 (en) * 2009-12-22 2011-06-23 Samsung Electronics Co., Ltd. Method and terminal for detecting and tracking moving object using real-time camera motion
US9466109B1 (en) * 2015-06-30 2016-10-11 Gopro, Inc. Image stitching in a multi-camera array
CN106886771A (en) * 2017-03-15 2017-06-23 同济大学 The main information extracting method of image and face identification method based on modularization PCA
CN107679466A (en) * 2017-09-21 2018-02-09 百度在线网络技术(北京)有限公司 Information output method and device
CN108154196A (en) * 2018-01-19 2018-06-12 百度在线网络技术(北京)有限公司 For exporting the method and apparatus of image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Worapan Kusakunniran et al.: "Support vector regression for multi-view gait recognition based on local motion feature selection", 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition *
Wang Yimin: "Research on person re-identification technology for surveillance video", China Doctoral Dissertations Full-text Database, Information Science and Technology *
Zhao Yulan et al.: "Accurate image registration algorithm based on maximum a posteriori coupled mutual information", Computer Engineering and Design *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114638774A (en) * 2020-12-01 2022-06-17 珠海碳云智能科技有限公司 Image data processing method and device, and nonvolatile storage medium
CN114638774B (en) * 2020-12-01 2024-02-02 珠海碳云智能科技有限公司 Image data processing method and device and nonvolatile storage medium
CN113590857A (en) * 2021-08-10 2021-11-02 北京有竹居网络技术有限公司 Key value matching method and device, readable medium and electronic equipment

Also Published As

Publication number Publication date
CN109165572B (en) 2022-02-08

Similar Documents

Publication Publication Date Title
CN108038469B (en) Method and apparatus for detecting human body
CN108898186B (en) Method and device for extracting image
CN109902659B (en) Method and apparatus for processing human body image
CN108154196B (en) Method and apparatus for exporting image
CN108898185A (en) Method and apparatus for generating image recognition model
CN110288049A (en) Method and apparatus for generating image recognition model
CN109086719A (en) Method and apparatus for output data
CN108494778A (en) Identity identifying method and device
CN108073910A (en) For generating the method and apparatus of face characteristic
CN108280477A (en) Method and apparatus for clustering image
CN109446990A (en) Method and apparatus for generating information
CN108989882A (en) Method and apparatus for exporting the snatch of music in video
CN108509921A (en) Method and apparatus for generating information
CN108229485A (en) For testing the method and apparatus of user interface
CN110363084A (en) A kind of class state detection method, device, storage medium and electronics
CN110428399A (en) Method, apparatus, equipment and storage medium for detection image
CN109034069A (en) Method and apparatus for generating information
CN109241934A (en) Method and apparatus for generating information
CN108229375B (en) Method and device for detecting face image
CN108062544A (en) For the method and apparatus of face In vivo detection
CN108509916A (en) Method and apparatus for generating image
CN109086780A (en) Method and apparatus for detecting electrode piece burr
CN108510084A (en) Method and apparatus for generating information
CN109887077A (en) Method and apparatus for generating threedimensional model
CN107729928A (en) Information acquisition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant