CN110147708A - Image processing method and related apparatus - Google Patents

Image processing method and related apparatus

Info

Publication number
CN110147708A
Authority
CN
China
Prior art keywords
corner point
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811276686.1A
Other languages
Chinese (zh)
Other versions
CN110147708B (en)
Inventor
郑克松 (Zheng Kesong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201811276686.1A
Publication of CN110147708A
Application granted
Publication of CN110147708B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention disclose an image processing method and related apparatus. The method includes: obtaining a first image frame containing a target object at a first moment, and obtaining first position information of a plurality of first corner points from the first image frame; obtaining a second image frame containing the target object at a second moment, and obtaining second position information of a plurality of second corner points in the second image frame; transforming the plurality of second corner points into the first image frame, and determining third position information of the plurality of third corner points obtained by the transformation in the first image frame; determining a target transformation matrix according to the first position information of the plurality of first corner points and the third position information of the plurality of third corner points, and determining an image region of the target object in the second image frame according to the target transformation matrix. With the present invention, the accuracy of tracking the target object can be improved.

Description

Image processing method and related apparatus
Technical field
The present invention relates to the field of Internet technology, and in particular to an image processing method and related apparatus.
Background Art
Current target object tracking schemes (such as face tracking) mainly use general-purpose tracking algorithms, for example the Kernel Correlation Filter (KCF) and Tracking-Learning-Detection (TLD) algorithms. When a face is tracked with these algorithms, an object detector must be trained continuously, so that the trained object detector can check whether a face exists at the predicted position in the next frame, thereby tracking the face. However, a face moves flexibly and with a large degree of freedom (for example, it can freely turn left or right); when a partial region of the face is occluded because of such movement, the face can no longer be tracked, so the face may be lost. In addition, while these tracking algorithms track a face, tracking errors accumulate over time, so that the tracking region (i.e., the face box) obtained by the above tracking algorithms drifts considerably and can no longer fit the region where the face actually is, which reduces the accuracy of face tracking.
Summary of the invention
Embodiments of the present invention provide an image processing method and related apparatus, which can improve the accuracy of target object tracking.
In one aspect, an embodiment of the present invention provides an image processing method, including:
obtaining a first image frame containing a target object at a first moment, and obtaining first position information of a plurality of first corner points from the first image frame;
obtaining a second image frame containing the target object at a second moment, and obtaining second position information of a plurality of second corner points in the second image frame;
transforming the plurality of second corner points into the first image frame, and determining third position information of the plurality of third corner points obtained by the transformation in the first image frame;
determining a target transformation matrix according to the first position information of the plurality of first corner points and the third position information of the plurality of third corner points, and determining an image region of the target object in the second image frame according to the target transformation matrix.
Wherein, the obtaining a first image frame containing a target object at a first moment and obtaining first position information of a plurality of first corner points from the first image frame includes:
obtaining the first image frame containing the target object at the first moment, and determining a first image region associated with the target object from the first image frame;
determining a plurality of first corner points from the first image region, and determining the first position information of each first corner point in the first image frame.
Wherein, the target object includes a face;
the obtaining a first image frame containing a target object at a first moment and determining a first image region associated with the target object from the first image frame includes:
obtaining the first image frame containing the face at the first moment, and determining a facial region corresponding to the face in the first image frame;
determining face key points associated with the face from the facial region based on a neural network model, and determining the first image region of the face in the first image frame according to position information of the face key points in the facial region.
Wherein, the determining a plurality of first corner points from the first image region and determining the first position information of each first corner point in the first image frame includes:
dividing the first image region evenly into M subregions, where M is a natural number greater than or equal to 2;
extracting N first corner points from each of the M subregions to obtain M × N first corner points, where N is a natural number greater than or equal to 3;
determining the first position information of each of the M × N first corner points in the first image frame.
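A minimal sketch of this grid-based corner extraction, assuming OpenCV's Shi-Tomasi detector; the claim only specifies an even M-subregion split with N corner points per subregion, so the detector choice, function names and default grid here are illustrative:

```python
import cv2
import numpy as np

def extract_grid_corners(gray, region, rows=2, cols=2, n_per_cell=4):
    """Divide `region` (x, y, w, h) evenly into rows*cols subregions and
    extract up to n_per_cell corner points from each subregion."""
    x, y, w, h = region
    cell_w, cell_h = w // cols, h // rows
    corners = []
    for r in range(rows):
        for c in range(cols):
            cx, cy = x + c * cell_w, y + r * cell_h
            cell = gray[cy:cy + cell_h, cx:cx + cell_w]
            pts = cv2.goodFeaturesToTrack(cell, maxCorners=n_per_cell,
                                          qualityLevel=0.01, minDistance=5)
            if pts is not None:
                # shift cell-local coordinates back into frame coordinates
                corners.extend((p[0][0] + cx, p[0][1] + cy) for p in pts)
    return np.float32(corners).reshape(-1, 1, 2)   # first position information
```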
Wherein, the obtaining a second image frame containing the target object at a second moment and obtaining second position information of a plurality of second corner points in the second image frame includes:
obtaining the second image frame containing the target object at the second moment, mapping each first corner point to the second image frame based on the first position information of each first corner point, and determining, in the second image frame, the second position information of the plurality of mapped second corner points, the second moment being the moment following the first moment.
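The claim does not name the mapping technique, though the description later contrasts the method with a single optical-flow tracking algorithm; a common realization of this forward mapping, shown here as an assumption and continuing the imports of the sketch above, is pyramidal Lucas-Kanade optical flow:

```python
# gray1/gray2: consecutive grayscale frames; p1: first corner points, shape (N, 1, 2).
# For each input point, calcOpticalFlowPyrLK returns the matched point in the
# second frame plus a status flag (1 = successfully tracked).
p2, status, err = cv2.calcOpticalFlowPyrLK(gray1, gray2, p1, None,
                                           winSize=(21, 21), maxLevel=3)
tracked = status.ravel() == 1
p1, p2 = p1[tracked], p2[tracked]   # keep only corner points mapped into frame 2
```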
Wherein, the transforming the plurality of second corner points into the first image frame includes:
obtaining a plurality of mapping parameters based on the first position information of each first corner point in the plurality of first corner points and the second position information of each second corner point in the plurality of second corner points, where one mapping parameter in the plurality of mapping parameters is obtained based on the first position information of one first corner point and the second position information of one second corner point;
generating a transformation matrix corresponding to the plurality of first corner points according to the plurality of mapping parameters, obtaining an inverse matrix of the transformation matrix, and determining the inverse matrix as a first transformation matrix;
transforming each second corner point into the first image frame according to the first transformation matrix and the second position information of each second corner point.
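A sketch of this inverse-transform step, continuing the same sketch; modeling the frame-1 to frame-2 mapping as a partial 2D affine matrix is a choice of this sketch, not something the claim fixes:

```python
# Estimate the frame-1 -> frame-2 transform from the corner correspondences,
# invert it (the "first transformation matrix"), and map the second corner
# points back into the first image frame as third corner points.
A, inliers = cv2.estimateAffinePartial2D(p1, p2)
A_inv = cv2.invertAffineTransform(A)
p3 = cv2.transform(p2, A_inv)       # third position information in frame 1
```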
Wherein, the determining a target transformation matrix according to the first position information of the plurality of first corner points and the third position information of the plurality of third corner points includes:
determining the third corner point corresponding to each first corner point in the plurality of first corner points, where the third corner point corresponding to any first corner point is determined based on the second corner point to which that first corner point is mapped;
removing bad points in the first image region according to the first position information of each first corner point and the third position information of each third corner point; the bad points are one or more first corner points in the plurality of first corner points for which the first position information of the first corner point does not match the third position information of the third corner point corresponding to that first corner point;
determining, in the first image region, the first corner points remaining after the bad points are removed as a plurality of first updated corner points, where one first updated corner point in the plurality of first updated corner points is determined based on one first corner point among the first corner points remaining after the bad points are removed;
mapping each first updated corner point to the second image frame according to the first position information of each first updated corner point, and determining, in the second image frame, second updated position information of the plurality of mapped second updated corner points;
generating the target transformation matrix according to the first position information of each first updated corner point and the second updated position information of each second updated corner point.
Wherein, the method further includes:
counting the number of bad points removed from the first image region, and if the counted number is greater than or equal to a count threshold, executing the step of determining, in the first image region, the first corner points remaining after the bad points are removed as the plurality of first updated corner points.
Wherein, the removing bad points in the first image region according to the first position information of each first corner point and the third position information of each third corner point includes:
obtaining a plurality of position errors according to the first position information of each first corner point and the third position information of each third corner point, where any position error in the plurality of position errors is determined based on the first position information of one first corner point and the third position information of the third corner point corresponding to that first corner point;
calculating the sum of the plurality of position errors;
removing the one or more first corner points in the first image region if the sum of the plurality of position errors is greater than an error threshold.
Wherein, the method further includes:
determining the transformation matrix corresponding to the plurality of first corner points as the target transformation matrix if the sum of the plurality of position errors is less than or equal to the error threshold.
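Taken together, these claims describe a forward-backward consistency check; a compact sketch under the same OpenCV assumptions as above, with illustrative thresholds:

```python
def compute_target_matrix(gray1, gray2, p1, point_err=1.0, sum_err=10.0):
    """Track p1 forward, transform the result back, and either accept the
    transform (small total error) or remove bad points and re-estimate it."""
    p2, st, _ = cv2.calcOpticalFlowPyrLK(gray1, gray2, p1, None)
    A, _ = cv2.estimateAffinePartial2D(p1, p2)
    p3 = cv2.transform(p2, cv2.invertAffineTransform(A))    # third corner points
    fb_err = np.linalg.norm((p1 - p3).reshape(-1, 2), axis=1)
    if fb_err.sum() <= sum_err:
        return A                   # positions match: A is the target matrix
    keep = (st.ravel() == 1) & (fb_err < point_err)         # remove bad points
    p2_upd, _, _ = cv2.calcOpticalFlowPyrLK(gray1, gray2, p1[keep], None)
    target, _ = cv2.estimateAffinePartial2D(p1[keep], p2_upd)
    return target
```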
In one aspect, an embodiment of the present invention provides an image data processing apparatus, including:
a first position determining module, configured to obtain a first image frame containing a target object at a first moment, and obtain first position information of a plurality of first corner points from the first image frame;
a second position determining module, configured to obtain a second image frame containing the target object at a second moment, and obtain second position information of a plurality of second corner points in the second image frame;
a corner point transformation module, configured to transform the plurality of second corner points into the first image frame;
a third position determining module, configured to determine, in the first image frame, third position information of the plurality of third corner points obtained by the transformation;
a target matrix generation module, configured to determine a target transformation matrix according to the first position information of the plurality of first corner points and the third position information of the plurality of third corner points;
a region determining module, configured to determine an image region of the target object in the second image frame according to the target transformation matrix.
Wherein, the first position determining module includes:
a first region determining unit, configured to obtain the first image frame containing the target object at the first moment, and determine a first image region associated with the target object from the first image frame;
a first position determining unit, configured to determine a plurality of first corner points from the first image region, and determine the first position information of each first corner point in the first image frame.
Wherein, the target object includes a face;
the first region determining unit includes:
a facial region determining subunit, configured to obtain the first image frame containing the face at the first moment, and determine the facial region corresponding to the face in the first image frame;
a key point determining subunit, configured to determine face key points associated with the face from the facial region based on a neural network model, and determine the first image region of the face in the first image frame according to position information of the face key points in the facial region.
Wherein, the first position determining unit includes:
a subregion dividing subunit, configured to divide the first image region evenly into M subregions, where M is a natural number greater than or equal to 2;
a corner point extraction subunit, configured to extract N first corner points from each of the M subregions to obtain M × N first corner points, where N is a natural number greater than or equal to 3;
a position determining subunit, configured to determine the first position information of each of the M × N first corner points in the first image frame.
Wherein, the second position determining module is specifically configured to obtain the second image frame containing the target object at the second moment, map each first corner point to the second image frame based on the first position information of each first corner point, and determine the second position information of the plurality of mapped second corner points in the second image frame, the second moment being the moment following the first moment.
Wherein, the corner point transformation module includes:
a mapping parameter generation unit, configured to obtain a plurality of mapping parameters based on the first position information of each first corner point and the second position information of each second corner point, where one mapping parameter in the plurality of mapping parameters is obtained based on the first position information of one first corner point and the second position information of one second corner point;
a transformation matrix generation unit, configured to generate a transformation matrix corresponding to the plurality of first corner points according to the plurality of mapping parameters, obtain an inverse matrix of the transformation matrix, and determine the inverse matrix as a first transformation matrix;
a corner point transformation unit, configured to transform each second corner point into the first image frame according to the first transformation matrix and the second position information of each second corner point.
Wherein, the target matrix generation module includes:
a correspondence determining unit, configured to determine the third corner point corresponding to each first corner point in the plurality of first corner points, where the third corner point corresponding to any first corner point is determined based on the second corner point to which that first corner point is mapped;
a bad point removal unit, configured to remove bad points in the first image region according to the first position information of each first corner point and the third position information of each third corner point; the bad points are one or more first corner points in the plurality of first corner points for which the first position information of the first corner point does not match the third position information of the third corner point corresponding to that first corner point;
a first updating unit, configured to determine, in the first image region, the first corner points remaining after the bad points are removed as a plurality of first updated corner points, where one first updated corner point in the plurality of first updated corner points is determined based on one first corner point among the first corner points remaining after the bad points are removed;
a second updating unit, configured to map each first updated corner point to the second image frame according to the first position information of each first updated corner point, and determine, in the second image frame, second updated position information of the plurality of mapped second updated corner points;
a target matrix generation unit, configured to generate the target transformation matrix according to the first position information of each first updated corner point and the second updated position information of each second updated corner point.
Wherein, the target matrix generation module further includes:
a count statistics unit, configured to count the number of bad points removed from the first image region, and if the counted number is greater than or equal to a count threshold, notify the first updating unit to execute the step of determining, in the first image region, the first corner points remaining after the bad points are removed as the plurality of first updated corner points.
Wherein, the bad point removal unit includes:
an error obtaining subunit, configured to obtain a plurality of position errors according to the first position information of each first corner point and the third position information of each third corner point, where any position error in the plurality of position errors is determined based on the first position information of one first corner point and the third position information of the third corner point corresponding to that first corner point;
a computing subunit, configured to calculate the sum of the plurality of position errors;
a corner point removal subunit, configured to remove the one or more first corner points in the first image region if the sum of the plurality of position errors is greater than an error threshold.
Wherein, the bad point removal unit further includes:
a matrix determining subunit, configured to determine the transformation matrix corresponding to the plurality of first corner points as the target transformation matrix if the sum of the plurality of position errors is less than or equal to the error threshold.
In one aspect, an embodiment of the present invention provides an image data processing apparatus, including a processor and a memory;
the processor is connected to the memory, the memory is configured to store program code, and the processor is configured to call the program code to perform the method in the first aspect of the embodiments of the present invention.
In one aspect, an embodiment of the present invention provides a computer storage medium storing a computer program, the computer program including program instructions which, when executed by a processor, perform the method in the first aspect of the embodiments of the present invention.
In the embodiments of the present invention, the first position information of each of the plurality of first corner points in the first image frame is tracked, so that the second position information of the second corner point to which each first corner point is mapped can be found in the second image frame. Because the face moves flexibly, to ensure that as many of the first corner points in the first image frame as possible are mapped one-to-one into the second image frame, each mapped second corner point can further be transformed back into the first image frame to check how well the position of each first corner point matches: each second corner point obtained in the second image frame is inversely transformed into the first image frame, the corner point into which each second corner point is transformed is determined as a third corner point in the first image frame, and the third position information of the resulting plurality of third corner points is further determined in the first image frame. A target transformation matrix can then be determined from the first position information of each first corner point and the third position information of the third corner point corresponding to each first corner point. The target transformation matrix may be the matrix obtained when the positions of any first corner point and its corresponding third corner point match; optionally, it may be the matrix obtained when the positions of any one or more first corner points in the plurality of first corner points and their corresponding third corner points do not match. The image region (i.e., the face box) appearing in the first image frame can therefore be further tracked according to the target transformation matrix. It should be understood that the inverse transformation avoids the drift of the face box that a single optical-flow tracking algorithm produces during tracking, so that the region fitting the face can be located accurately in the second image frame, i.e., an image region fitting the face box can be tracked in the second image frame, thereby improving the accuracy of tracking the face.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
Fig. 1 is a schematic diagram of a scenario of an image processing method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of obtaining a first image region according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of obtaining a plurality of first corner points according to an embodiment of the present invention;
Fig. 5 is another schematic diagram of obtaining a plurality of first corner points according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of mapping relationships among a plurality of corner points according to an embodiment of the present invention;
Fig. 7 is a schematic flowchart of another image processing method according to an embodiment of the present invention;
Fig. 8 is an overall flow framework diagram of obtaining a second image region according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of an image data processing apparatus according to an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of another image data processing apparatus according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Refer to Fig. 1, which is a schematic diagram of a scenario of an image processing method according to an embodiment of the present invention. As shown in Fig. 1, with the front camera of a target terminal (for example, a smartphone) turned on, a target user can use the video recording function to obtain video frames containing the target user (the video frames include multiple items of image data). It should be understood that each item of image data constituting the video frames can be distributed serially along the time axis shown in Fig. 1, so that each item of image data in the video frames is displayed in the corresponding display interface in the time order shown in Fig. 1. Any two adjacent moments on the time axis shown in Fig. 1 can be referred to as a first moment and a second moment, where the second moment is the moment following the first moment; accordingly, the image data corresponding to the first moment can be referred to as a first image frame, and the image data corresponding to the second moment can be referred to as a second image frame.
For ease of understanding, the embodiment of the present invention takes the image data corresponding to the 1st moment and the 2nd moment on the time axis shown in Fig. 1 as an example to illustrate the detailed process of tracking the face of the target user. The target terminal can display the image data acquired at the 1st moment shown in Fig. 1 in display interface 1a shown in Fig. 1; similarly, the target terminal can display the image data acquired at the 2nd moment shown in Fig. 1 in display interface 1b shown in Fig. 1. Assuming the 1st moment is the first moment, the target terminal can further refer to the image data in display interface 1a corresponding to the 1st moment as the first image frame, and can determine from this first image frame the face box in display interface 1a shown in Fig. 1. The face box can be the first image region formed by four key points among the detected face key points associated with the face (i.e., the target object) of the target user (the face box in display interface 1a shown in Fig. 1 can be understood as the region, determined from the first image frame, that can be used for face tracking). The four key points constituting the first image region can be the user's left eye corner, right eye corner, left mouth corner and right mouth corner. It should be understood that the first image region can be the minimum rectangular region containing these four key points, i.e., the determined first image region is unique. Because the 2nd moment on the time axis shown in Fig. 1 is the moment following the 1st moment, the 2nd moment can be referred to as the second moment, and the image data acquired by the target terminal at the 2nd moment (i.e., the image data in display interface 1b shown in Fig. 1) can further be referred to as the second image frame. Then, according to the face box in display interface 1a shown in Fig. 1, the target terminal can further extract a plurality of first corner points from the face box (the image region corresponding to the face box in the first image frame can be referred to as the first image region), so that the target terminal can subsequently track the second corner points to which these extracted first corner points are mapped; that is, the first position information of each of the extracted first corner points can be mapped into the second image frame, and the second position information of the plurality of mapped second corner points can be determined in the second image frame, so that a plurality of mapping parameters for generating the transformation matrix can be obtained according to the tracked second position information of each second corner point and the first position information of the corresponding first corner point (a mapping parameter can be understood as the positional mapping relationship determined by the first position information of each first corner point and the second position information of the corresponding second corner point). Further, according to the inverse matrix of the transformation matrix, each second corner point can be inversely transformed into the first image frame, and the third position information of the resulting plurality of third corner points can be determined in the first image frame. Further, from the first position information of each first corner point and the third position information of the corresponding third corner point, a target transformation matrix can be determined (the target transformation matrix can be understood as the transformation matrix finally obtained by the target terminal), and the second image region in which the face of the target user appears in the second image frame, i.e., the face box in display interface 1b shown in Fig. 1, can then be determined according to the target transformation matrix. It should be understood that this face box is determined based on the four key points in the face box in display interface 1a shown in Fig. 1 and the target transformation matrix. As shown in Fig. 1, because of the movement of the face, the position information of the four key points in the face box in display interface 1b differs from the position information of the four key points in the face box in display interface 1a.
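The last step of this scenario, locating the new face box, amounts to applying the target transformation matrix to the four key points of the old box; a minimal sketch under the same OpenCV assumptions as above (variable names are illustrative):

```python
# box_pts: the four key points (eye corners and mouth corners) of the face box
# in display interface 1a; target_matrix: the 2x3 target transformation matrix
# estimated between the two frames.
box_pts = np.float32(box_pts).reshape(-1, 1, 2)
new_box_pts = cv2.transform(box_pts, target_matrix)   # face box in interface 1b
```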
It should be understood that when the target terminal determines the face box corresponding to the face in the second image frame (the image region corresponding to the face box in the second image frame can be referred to as the second image region), it can further take this second image frame as a new first image frame and take the face box in display interface 1b as a new first image region. The target terminal can then determine a plurality of new first corner points from the new first image region, so that it can subsequently track, in a new second image frame (i.e., the image data in display interface 1c collected at the 3rd moment on the time axis shown in Fig. 1), the new second corner points to which these new first corner points are mapped, and can further obtain a new target transformation matrix according to the above detailed process of determining the target transformation matrix, so as to locate the face box of the face in display interface 1c according to the new target transformation matrix. The detailed process by which the target terminal tracks the face box in display interface 1b can refer to the description of the detailed process by which the target terminal tracks the face box in display interface 1a. The target terminal can thus further determine, in the new second image frame, the image region associated with the face of the target user (i.e., the new second image region), namely the face box in display interface 1c shown in Fig. 1. It should be understood that the face box in display interface 1c is determined based on the four key points in the face box in display interface 1b shown in Fig. 1 and the new target transformation matrix; that is, the position information of the four key points in display interface 1b and that of the four key points in display interface 1c may be identical, or the positions of four new key points may be obtained because of the movement of the face.
For another example, the target terminal shown in Fig. 1 can refer to the (n-1)-th moment on the time axis shown in Fig. 1 as the first moment, and refer to the image data containing the user's face collected at the (n-1)-th moment as the first image frame (the image data in display interface 1m shown in Fig. 1 can be referred to as the first image frame). Because the n-th moment on the time axis shown in Fig. 1 is the moment following the (n-1)-th moment, the n-th moment can be referred to as the second moment, and the image data containing the user's face collected at the n-th moment can be referred to as the second image frame (the image data in display interface 1n shown in Fig. 1 can be referred to as the second image frame). The detailed process by which the target terminal performs face tracking on the face box in display interface 1m can refer to the description of the detailed process by which the target terminal tracks the face box in display interface 1a; the target terminal can then obtain the face box in display interface 1n shown in Fig. 1 (i.e., the second image region of the face in the second image frame). It should be understood that this face box is determined based on the four key points in the face box in display interface 1m shown in Fig. 1 and the new target transformation matrix; the position information of the four key points in the face box in display interface 1m and that of the four key points in display interface 1n may be identical.
The detailed processes by which the target terminal obtains the first image region, determines the target transformation matrix and determines the second image region can refer to the implementations provided by the embodiments corresponding to Fig. 2 to Fig. 8 below.
Refer to Fig. 2, which is a schematic flowchart of an image processing method according to an embodiment of the present invention. As shown in Fig. 2, the method provided by this embodiment of the present invention may include the following steps:
Step S101: obtain a first image frame containing a target object at a first moment, and obtain first position information of a plurality of first corner points from the first image frame.
Specifically, the target terminal can obtain a first image frame containing the face (i.e., the target object) of a target user; the first image frame can be a frame of image data in the video data collected by an image data acquisition device integrated in the target terminal when the current moment is the first moment. Optionally, the first image frame can also be a frame of image data, received by the target terminal, in the video data collected when the current moment is the first moment by an image data acquisition device that has a data connection with the target terminal. The target terminal can then determine the facial region corresponding to the face in the first image frame, determine face key points associated with the face from the facial region based on a neural network model, and determine the first image region of the face in the first image frame according to the position information of the face key points in the facial region. Further, the target terminal can determine a plurality of first corner points from the first image region, and determine the first position information of each first corner point in the first image frame.
The image data acquisition device can be a device independent of the target terminal, for example a scanner, a sensor or another device with an image data acquisition function; such devices can transmit, in a wired or wireless manner, the frame of image data at the first moment in the collected video data containing the face of the target user to the target terminal, so that the target terminal can take the received frame of image data as the first image frame.
Optionally, the data acquisition device can also be a device integrated in the target terminal, for example a front or rear camera built into the terminal. Therefore, when the target terminal turns on the camera function, the video data containing the face of the target user can be collected by the front or rear camera; the video data can consist of multiple image frames collected within a continuous time period, so the target terminal can take the image frame acquired at the first moment in the video data as the first image frame.
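As a minimal sketch of this acquisition step, assuming OpenCV's VideoCapture as the camera interface (an illustrative choice; the patent only requires some image data acquisition device):

```python
import cv2

cap = cv2.VideoCapture(0)      # front or rear camera of the target terminal
ok, frame1 = cap.read()        # first image frame, acquired at the first moment
ok, frame2 = cap.read()        # second image frame, acquired at the next moment
gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
```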
The target terminal can be the target terminal in the embodiment corresponding to Fig. 1 above, and can include smart terminals with a camera function such as smartphones, tablet computers, desktop computers and smart televisions.
Further, refer to Fig. 3, which is a schematic diagram of obtaining a first image region according to an embodiment of the present invention. As shown in Fig. 3, suppose that on an identity verification platform (for example, a banking/finance platform) a target user is using the target terminal shown in Fig. 3 to perform face recognition on collected image data containing the target user's face, so that the identity information of the target user holding the target terminal can subsequently be verified. Before the face recognition, the target terminal shown in Fig. 3 needs to first call the camera application in the terminal and collect the image data containing the face of the target user through the camera corresponding to the camera application (for example, the front camera built into the target terminal); at this point, the target terminal can refer to the image data acquired at the first moment as the first image frame. The target terminal can then perform image processing on the obtained first image frame in the background, for example segmenting the foreground and background regions in the first image frame, so as to extract, from the first image frame shown in Fig. 3, the target region corresponding to the overall silhouette of the target user shown in Fig. 3. The foreground region is the image region corresponding to the overall silhouette of the target user, and the background region is the image region remaining after the target user is extracted from the first image frame.
The identity verification platform can also include identity verification platforms that need to perform face recognition, such as access control, attendance, transportation, community and pension qualification verification platforms.
It should be understood that filtering out the background region in the first image frame can prevent interference from the pixels in the background region, thereby improving the accuracy of subsequent face recognition. The target terminal can then further recognize the face (i.e., the target object) in the target region shown in Fig. 3 to obtain the facial region of the face shown in Fig. 3. The target terminal shown in Fig. 3 can then determine, based on a neural network model (for example, a multi-task convolutional neural network), the face key points associated with the face from the facial region shown in Fig. 3, and determine the first image region shown in Fig. 3 according to the position information of the face key points in the facial region; that is, the first image region is the minimum rectangular region formed by the face key points associated with the face of the target user, so that the first image region of the face in the first image frame can be obtained. The face key points can be understood as feature points that can characterize the facial parts of the target user.
To improve the accuracy of recognizing the key points of the face data in the facial region, the facial region corresponding to the face (i.e., the target object) can first be taken as a region to be processed, the region to be processed can further be adjusted to a fixed size, and the image data in the resized region to be processed can then be input into the input layer of the multi-task convolutional neural network. The multi-task convolutional neural network may include an input layer, convolutional layers, pooling layers, fully connected layers and an output layer, where the parameter size of the input layer equals the size of the resized region to be processed. After the image data in the region to be processed is input into the network, it enters the convolutional layers: a small block of the image data in the region to be processed is first sampled randomly, some feature information is learned from this small sample, and the sample is then slid as a window over all pixel regions of the region to be processed; that is, convolution is performed between the feature information learned from the sample and the image data in the region to be processed, thereby obtaining the most significant feature information of the image data in the region to be processed at different positions (i.e., the multi-task convolutional neural network can locate the feature points corresponding to each facial part of the target user in the region to be processed). After the convolution is finished, the feature information of the image data in the region to be processed has been extracted, but the number of features extracted by convolution alone is large; to reduce the amount of computation, pooling is also needed, that is, the feature information extracted from the region to be processed by convolution is transmitted to the pooling layers, and aggregate statistics are computed on the extracted feature information. The order of magnitude of these statistical features is much lower than that of the features extracted by convolution, and the classification effect is also improved. Common pooling methods mainly include average pooling and max pooling: average pooling computes an average feature in a feature information set to represent the set, while max pooling extracts the maximum feature in a feature information set to represent the set. Through the convolution processing of the convolutional layers and the pooling processing of the pooling layers, the static structural feature information of the image data in the region to be processed can be extracted, i.e., the feature information corresponding to the facial parts in the region to be processed can be obtained.
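As a toy illustration of the two pooling operations described above (the values are purely illustrative):

```python
import numpy as np

window = np.array([[1.0, 3.0],
                   [2.0, 4.0]])
window.max()     # max pooling over this 2x2 window     -> 4.0
window.mean()    # average pooling over this 2x2 window -> 2.5
```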
Then, the target terminal can further use the classifier in the multi-task convolutional neural network to identify the matching degrees between the static structural feature information of the image data in the region to be processed and the multiple attribute type features in the multi-task convolutional neural network, and associate the maximum matching degree among the multiple matching degrees output by the classifier with the label information corresponding to the respective attribute type feature, so as to obtain the label information set corresponding to the facial parts in the facial region; that is, the label information set contains matching degrees as well as the label information associated with the maximum matching degree. The label information refers to the attribute type corresponding to each of the multiple attribute type features of the multi-task convolutional neural network. The eyes, nose and mouth of the target user can then be identified through each piece of label information in the label set corresponding to the facial parts, and the feature points of each part of the face can further be located; therefore, the located feature points of the eyes and mouth can further be referred to as the face key points associated with the face.
The face key points can be the located feature points that characterize the corresponding facial parts, i.e., the face key points can be the feature points corresponding to salient facial parts such as the mouth and the eyes. For example, for the mouth, the face key points can be the two feature points at the left and right mouth corners of the target user; for the eyes, the face key points can be the two feature points at the left and right eye corners of the target user.
The number and types of the attribute type features included in the multi-task convolutional neural network are determined by the number and types of the label information contained in the large training data set (the standard image set) used when training the multi-task convolutional neural network.
The multiple attribute type features included in the multi-task neural network can be an eye type feature, a nose type feature, a mouth type feature, an ear type feature and a face contour type feature, and each attribute type feature in the multi-task neural network corresponds to one piece of label information, so that in the multi-task neural network the matching degrees between the feature information corresponding to the facial parts of the face and the multiple attribute type features can be obtained. The target terminal can then further associate the maximum matching degree obtained by the multi-task neural network with the label information corresponding to the respective attribute type feature among the multiple attribute type features, so as to classify the facial parts in the facial region, locate in the first image frame the feature points that can characterize the eyes and mouth of the target user, obtain the first image region shown in Fig. 3 according to the position information of these feature points, and further display the first image region in the first image frame. For example, the first image region determined from the first image frame can be the face box in the embodiment corresponding to Fig. 1. By displaying the first image region in the first image frame, the target terminal can further execute step S102, i.e., subsequently determine in the first image frame the first position information of each first corner point extracted from the first image region.
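A sketch of this landmark step, assuming the MTCNN implementation from the facenet-pytorch package (one multi-task CNN among several; note it returns eye centers rather than the eye corners named here, so this is an approximation):

```python
import numpy as np
from facenet_pytorch import MTCNN

mtcnn = MTCNN()
boxes, probs, landmarks = mtcnn.detect(frame1_rgb, landmarks=True)
# landmarks[0]: five points per face - left eye, right eye, nose,
# left mouth corner, right mouth corner.
pts = np.delete(landmarks[0], 2, axis=0)        # drop the nose point
x0, y0 = pts.min(axis=0)
x1, y1 = pts.max(axis=0)
first_region = (int(x0), int(y0), int(x1 - x0), int(y1 - y0))  # minimum rectangle
```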
The specific process by which the target terminal obtains the first position information of the plurality of first corner points in the first image frame can be as follows: the target terminal can divide the first image region evenly into multiple subregions, can further extract multiple first corner points from each of the multiple subregions, and can further determine, in the first image frame, the first position information of each of all the extracted first corner points.
In other words, the target terminal can divide the first image region evenly into M subregions, where M is a natural number greater than or equal to 2, and can further extract N first corner points from each of the M subregions to obtain M × N first corner points, where N is a natural number greater than or equal to 3; the target terminal can then further determine the first position information of each of the M × N first corner points in the first image frame.
Further, refer to Fig. 4, which is a schematic diagram of obtaining a plurality of first corner points according to an embodiment of the present invention. The target terminal can obtain the first image region in display interface 100a shown in Fig. 4 based on the first image region in the embodiment corresponding to Fig. 3; the target terminal can then further divide this first image region (i.e., the face box corresponding to the face of the target user) into regions, obtaining the multiple subregions in display interface 100b shown in Fig. 4. Here the number of subregions is M = 4, i.e., the target terminal can divide the first image region evenly into 4 subregions, located respectively at the upper-left corner, the upper-right corner, the lower-left corner and the lower-right corner of the first image region. Further, the target terminal can extract multiple first corner points (for example, N = 4) from each subregion shown in Fig. 4, so as to obtain the multiple first corner points in display interface 100d shown in Fig. 4 (at this point, the number of first corner points in display interface 100d is M × N = 4 × 4 = 16).
As shown in Fig. 4, the target subregion in display interface 100c is one subregion in the display interface shown in Fig. 4, i.e., the target subregion can be the lower-right region among the 4 subregions in display interface 100b shown in Fig. 4. The target terminal can then obtain the 4 (N = 4) first corner points in display interface 100c in the target subregion through a corner detection algorithm. It should be understood that the detailed process by which the target terminal extracts each first corner point from each subregion can refer to the description of the detailed process by which the target terminal shown in Fig. 4 extracts each first corner point in the target subregion, and will not be repeated here. It should also be understood that the number (N) of first corner points that the target terminal extracts from each subregion is a natural number greater than or equal to 3. Further, the target terminal can display all the first corner points extracted from the first image region (i.e., the 16 first corner points in the display interface shown in Fig. 4) in the first image frame, so as to determine the first position information of each first corner point in the first image frame.
It should be understood that the number of subregions can be divided evenly according to the actual needs of the user; for example, the target terminal can divide the first display region evenly into two upper and lower subregions or two left and right subregions, i.e., the number M of subregions can be a natural number greater than or equal to 2. For ease of understanding, further refer to Fig. 5, which is another schematic diagram of obtaining a plurality of first corner points according to an embodiment of the present invention. In display interface 200a shown in Fig. 5, the first image region can be divided evenly into two upper and lower subregions, i.e., M = 2. As shown in Fig. 5, the upper half region divided out of the first image region can be the first subregion in display interface 200b, and 3 (i.e., N = 3) first corner points as shown in Fig. 5 can be extracted from the first subregion, namely corner points A1, A2 and A3 shown in display interface 200b; similarly, as shown in Fig. 5, the lower half region divided out of the first image region can be the second subregion in display interface 200b, and another 3 first corner points as shown in Fig. 5 can be extracted from the second subregion, namely corner points A4, A5 and A6 shown in display interface 200b. In other words, the target terminal can extract 6 (i.e., M × N = 2 × 3 = 6) first corner points from the first image region shown in Fig. 5 through the corner detection algorithm, and can further display these 6 extracted first corner points in the first image frame, so as to determine the first position information of each first corner point in display interface 200c.
Further, refer to Table 1, which is a distribution table, provided by an embodiment of the present invention, of the first position information of each first corner point determined in the first image frame.
As shown in Table 1, in the first image frame, the first position information of corner point A1 is determined to be the coordinates (C1, B1), the first position information of corner point A2 is the coordinates (C2, B2), the first position information of corner point A3 is the coordinates (C3, B3), the first position information of corner point A4 is the coordinates (C4, B4), the first position information of corner point A5 is the coordinates (C5, B5), and the first position information of corner point A6 is the coordinates (C6, B6).
Table 1
First corner point          | Corner point A1 | Corner point A2 | Corner point A3
First position information  | (C1, B1)        | (C2, B2)        | (C3, B3)
First corner point          | Corner point A4 | Corner point A5 | Corner point A6
First position information  | (C4, B4)        | (C5, B5)        | (C6, B6)
It should be understood that the embodiment of the present invention only enumerates, as shown in Fig. 5 above, the case of dividing the first image region into two subregions, and of determining in the first image frame, as shown in Table 1 above, the first location information of each first corner point extracted from these two subregions. As can be seen, for the M × N first corner points extracted from the first image region, the corresponding first location information of each first corner point can be determined accordingly in the first image frame; therefore, the embodiment of the present invention does not limit the specific values of M and N, i.e. M can be any natural number greater than or equal to 2, and N can be any natural number greater than or equal to 3.
Step S102: obtain the second image frame containing the above-mentioned target object at the second moment, and obtain the second location information of multiple second corner points in the second image frame.
Specifically, the target terminal can obtain the second image frame containing the target object at the second moment, map each of the above-mentioned first corner points to the second image frame based on the first location information of each first corner point, and determine the second location information of the multiple mapped second corner points in the second image frame.
Wherein, the target terminal can further obtain, at the second moment, the second image frame containing the face of the above-mentioned target user; this second image frame can be a frame of image data in the video data containing the face collected by the image data acquiring device integrated in the target terminal when the current time changes from the first moment to the second moment. Optionally, the second image frame can also be a frame of image data, received by the target terminal, in the video data containing the face collected at the second moment by an image data acquiring device that has a data connection relationship with the target terminal. In view of this, the target terminal can map each first corner point to the second image frame based on the first location information of each first corner point obtained in step S101 above, take the corner point mapped from each first corner point in the second image frame as a second corner point, and determine in the second image frame the second location information of the multiple mapped second corner points.
It should be understood that, for collected video data, the first image frame and the second image frame can be the image data of two consecutive adjacent moments in that video data. Optionally, for an authentication platform (for example, an authentication platform that needs to carry out face recognition, such as access control, attendance, traffic, community, or pension qualification authentication), the first image frame and the second image frame can be two frames of image data obtained within a time threshold range of each other, provided that within that time threshold range the image data processing apparatus has obtained no other image data containing the target object. In other words, the apparatus can refer to one frame of two continuously collected frames of image data containing the face of the same target object as the first image frame, and the other frame as the second image frame, so that, based on the face frame determined from the first image frame, the region where the face is located can be further tracked in the second image frame.
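For reference, a minimal sketch of obtaining two consecutive frames from an integrated image data acquiring device, assuming OpenCV's VideoCapture; the device index 0 is an illustrative assumption.

```python
# A minimal sketch of obtaining the first and second image frames from an
# integrated image data acquiring device, assuming OpenCV's VideoCapture;
# the device index 0 is an illustrative assumption.
import cv2

cap = cv2.VideoCapture(0)
ok1, frame1 = cap.read()   # first image frame, collected at the first moment
ok2, frame2 = cap.read()   # second image frame, collected at the second moment
cap.release()
if ok1 and ok2:
    gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
```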
Further, refer to Fig. 6, which is a schematic diagram of the mapping relations among multiple corner points provided by an embodiment of the present invention. Wherein, the multiple corner points include each first corner point in the first image frame, the second corner point mapped from each first corner point in the second image frame, and the third corner point mapped from each second corner point back in the first image frame. For ease of understanding, the embodiment of the present invention takes one first corner point (for example, corner point A1) among the multiple corner points determined in the first image frame as an example to describe the mapping relations between corner point A1 and corner point A1', and between corner point A1' and corner point a1, so as to further illustrate the mapping relations between each first corner point and the corresponding second corner point, and between each second corner point and the corresponding third corner point. As shown in Fig. 6, based on the first location information of each first corner point (for example, corner point A1 shown in Fig. 6) determined in the embodiment corresponding to Fig. 5 above, the target terminal can further track each first corner point in the display interface 300a shown in Fig. 6 through an image feature point tracking algorithm based on optical flow, i.e. an optical flow tracking algorithm, so as to obtain the second corner point mapped from each first corner point in the display interface 300b corresponding to the second image frame shown in Fig. 6. That is, corner point A1 can be mapped to the second image frame based on the optical flow tracking algorithm, the mapped second corner point (i.e. corner point A1' shown in Fig. 6) is determined in the second image frame, and the second location information of the second corner point mapped from each first corner point can further be determined in the second image frame. It should be understood that the specific process of tracking any other first corner point among the multiple first corner points shown in Fig. 6 (for example, any one of corner point A2, corner point A3, corner point A4, corner point A5 and corner point A6) is the same as the above-described process of tracking corner point A1 to obtain corner point A1', and the cases are not enumerated one by one here. As can be seen, through the optical flow tracking algorithm, the second corner point mapped from each first corner point can be found in the display interface 300b, and the second location information of each second corner point can be determined in the second image frame corresponding to the display interface 300b. In other words, the target terminal can track the second corner point mapped from each first corner point of the target object according to the first location information of that first corner point in the previous frame, so as to determine, in the display interface 300b, the second location information of the second corner point that each first corner point maps to when transformed into the next frame, i.e. one first corner point can be used to trace and locate one second corner point.
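For reference, the following is a minimal sketch of mapping the first corner points into the second image frame with pyramidal Lucas-Kanade optical flow, assuming OpenCV's calcOpticalFlowPyrLK; the patent does not prescribe a particular implementation of the optical flow tracking algorithm.

```python
# A minimal sketch of mapping first corner points into the second image frame
# with pyramidal Lucas-Kanade optical flow, assuming OpenCV's
# calcOpticalFlowPyrLK as the optical flow tracking algorithm.
import cv2
import numpy as np

def track_corners(gray1, gray2, first_pts):
    """first_pts: (K, 2) float32 first corner coordinates in the first frame.
    Returns the mapped second corner coordinates and a per-point success mask."""
    p0 = first_pts.reshape(-1, 1, 2).astype(np.float32)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(gray1, gray2, p0, None)
    ok = status.reshape(-1) == 1   # 1 where the flow for that corner was found
    return p1.reshape(-1, 2), ok
```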
Further, refer to Table 2, which is a mapping relation table, provided by an embodiment of the present invention, between each first corner point and its mapped second corner point. Wherein, the first location information of each first corner point can be found in the corresponding first location information of each first corner point listed in the embodiment corresponding to Table 1 above.
Table 2
First corner point    Corner point A1    Corner point A2    Corner point A3
Second corner point    Corner point A1'    Corner point A2'    Corner point A3'
First corner point    Corner point A4    Corner point A5    Corner point A6
Second corner point    Corner point A4'    Corner point A5'    Corner point A6'
As shown in Table 2 above, the second corner point mapped from corner point A1 is corner point A1', the second corner point mapped from corner point A2 is corner point A2', the second corner point mapped from corner point A3 is corner point A3', the second corner point mapped from corner point A4 is corner point A4', the second corner point mapped from corner point A5 is corner point A5', and the second corner point mapped from corner point A6 is corner point A6'. Further, refer to Table 3, which is a distribution table, provided by an embodiment of the present invention, of the second location information determined for each second corner point in the second image frame. In other words, when the target terminal tracks the second corner point corresponding to each first corner point in the second image frame through the optical flow tracking algorithm, it can further determine the second location information of each second corner point.
Table 3
Second corner point    Corner point A1'    Corner point A2'    Corner point A3'
Second location information    (X1, Y1)    (X2, Y2)    (X3, Y3)
Second corner point    Corner point A4'    Corner point A5'    Corner point A6'
Second location information    (X4, Y4)    (X5, Y5)    (X6, Y6)
As shown in Table 3 above, in the second image frame, the target terminal can further determine the second location information of each second corner point; the second location information of each second corner point can also be seen in the schematic diagram of the second corner points in the display interface 300b shown in Fig. 6. That is, in the second image frame, the target terminal can determine that the second location information of corner point A1' is the coordinates (X1, Y1), the second location information of corner point A2' is the coordinates (X2, Y2), the second location information of corner point A3' is the coordinates (X3, Y3), the second location information of corner point A4' is the coordinates (X4, Y4), the second location information of corner point A5' is the coordinates (X5, Y5), and the second location information of corner point A6' is the coordinates (X6, Y6). Then, according to the first location information of each determined first corner point and the second location information of the second corner point mapped from each first corner point, the target terminal can further execute step S103.
Step S103: transform the above-mentioned multiple second corner points to the first image frame, and determine in the first image frame the third location information of the multiple transformed third corner points.
Specifically, the target terminal can transform each of the multiple second corner points to the first image frame according to a first transformation matrix and the second location information of each second corner point, and determine in the first image frame the third location information of the multiple transformed third corner points.
Wherein, the first transformation matrix can be the inverse matrix corresponding to the transformation matrix determined based on the first location information of each first corner point and the second location information of each second corner point.
Wherein, the transformation matrix can be expressed as:

    T = | a  -b |
        | b   a |        (1.1)

where a = k·cosθ and b = k·sinθ.
In formula (1.1), the parameter k is a zooming parameter, and θ is the rotation angle, i.e. the angle by which the face rotates in transforming from the first image frame to the second image frame; since the movement of the face is relatively flexible, the specific value of the rotation angle of the face is not limited here.
Wherein, the element a and the element b in the transformation matrix can be obtained by least squares as:

    a = Σi (Ci·Xi + Bi·Yi) / Σi (Ci² + Bi²)
    b = Σi (Ci·Yi − Bi·Xi) / Σi (Ci² + Bi²)        (1.2)
In formula (1.2), the subscript i identifies any one first corner point among all the first corner points (i.e. the M × N first corner points) extracted from the first image region; the coordinates of any one first corner point in the first image frame can be expressed as (Ci, Bi), and the coordinates of the second corner point mapped from that first corner point in the second image frame can be expressed as (Xi, Yi). Therefore, multiple mapping parameters can be obtained through formula (1.2), wherein any one of the multiple mapping parameters is obtained based on the first location information of one first corner point and the second location information of one second corner point. The values of the element a and the element b in the transformation matrix can then be obtained from these multiple mapping parameters, and the target terminal can further substitute these two elements (i.e. the element a and the element b) into formula (1.1) to obtain the transformation matrix. It should be understood that through the transformation matrix T obtained by formula (1.1), each of the multiple first corner points in the first image frame can be mapped into the second image frame; in the second image frame, the corner point mapped from each first corner point in this way can be referred to as a fourth corner point, so as to distinguish it from the second corner points determined in the second image frame through the optical flow tracking algorithm. Wherein, in the second image frame, the location information of each fourth corner point can be referred to as the fourth location information of that fourth corner point. Since deviations accumulate over time in the process of tracking the face frame corresponding to the face through the optical flow tracking algorithm, the tracked face frame may exhibit a large offset. Therefore, in the second image frame, the fourth location information of a fourth corner point obtained by the target terminal through the transformation matrix T may differ from the second location information of the corresponding second corner point obtained through the optical flow tracking algorithm.
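For reference, the following is a minimal sketch of estimating the elements a and b and building the transformation matrix T, assuming the least-squares reading of formulas (1.1) and (1.2) reconstructed above; fit_similarity is a hypothetical helper name.

```python
# A minimal sketch of formulas (1.1)-(1.2) as reconstructed above: estimate
# a and b by least squares from the corner pairs, then assemble T.
# fit_similarity is a hypothetical name, not the patent's.
import numpy as np

def fit_similarity(first_pts, second_pts):
    """first_pts: (K, 2) coords (Ci, Bi); second_pts: (K, 2) coords (Xi, Yi)."""
    C, B = first_pts[:, 0], first_pts[:, 1]
    X, Y = second_pts[:, 0], second_pts[:, 1]
    denom = np.sum(C * C + B * B)
    a = np.sum(C * X + B * Y) / denom   # a = k*cos(theta), per formula (1.2)
    b = np.sum(C * Y - B * X) / denom   # b = k*sin(theta), per formula (1.2)
    return np.array([[a, -b],
                     [b,  a]])          # transformation matrix T, formula (1.1)
```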
In view of this, optionally, the target terminal can further carry out an inversion operation on the transformation matrix T to obtain the inverse matrix corresponding to the transformation matrix, and can determine this inverse matrix as the above-mentioned first transformation matrix.
It should be understood that, according to this first transformation matrix, each second corner point obtained through the optical flow tracking algorithm can be mapped back (i.e. transformed) into the first image frame; the corner point that each second corner point transforms to in the first image frame can be referred to as a third corner point, so that the third location information of the multiple third corner points can be determined in the first image frame. Further, see the schematic diagram of each third corner point in the display interface 300c shown in Fig. 6; after the target terminal transforms corner point A1' in the display interface 300b into the first image frame, it can find in the display interface 300c that the corner point transformed from corner point A1' is corner point a1. It should be understood that the target terminal can transform each second corner point in the display interface 300b into the first image frame to determine the third location information of the multiple transformed third corner points; the specific process of transforming each second corner point into the first image frame is the same as the above-described process of transforming corner point A1' into corner point a1, and is not repeated here.
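For reference, a minimal sketch of step S103 under the same assumptions: invert T to obtain the first transformation matrix and map each second corner point back into the first image frame as a third corner point; back_map is a hypothetical helper name.

```python
# A minimal sketch of the back-mapping in step S103 under the assumptions
# above; back_map is a hypothetical name, not the patent's.
import numpy as np

def back_map(T, second_pts):
    T_inv = np.linalg.inv(T)      # the first transformation matrix
    return second_pts @ T_inv.T   # (K, 2) third corner coordinates (xi, yi)
```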
As can be seen, based on the second location information of each second corner point and the first transformation matrix, the target terminal can transform (i.e. map back) each second corner point in the second image frame obtained through the optical flow tracking algorithm into the first image frame, so as to obtain the third location information of each third corner point in the first image frame, which in turn helps ensure the accuracy of face tracking.
Further, refer to Table 4, which is a mapping relation table, provided by an embodiment of the present invention, between each second corner point and its mapped third corner point. Wherein, the second location information of each second corner point can be found in the corresponding second location information of each second corner point listed in the embodiment corresponding to Table 3 above.
Table 4
Second corner point    Corner point A1'    Corner point A2'    Corner point A3'
Third corner point    Corner point a1    Corner point a2    Corner point a3
Second corner point    Corner point A4'    Corner point A5'    Corner point A6'
Third corner point    Corner point a4    Corner point a5    Corner point a6
As shown in Table 4 above, the third corner point mapped from corner point A1' is corner point a1, the third corner point mapped from corner point A2' is corner point a2, the third corner point mapped from corner point A3' is corner point a3, the third corner point mapped from corner point A4' is corner point a4, the third corner point mapped from corner point A5' is corner point a5, and the third corner point mapped from corner point A6' is corner point a6. Further, refer to Table 5, which is a distribution table, provided by an embodiment of the present invention, of the third location information determined for each third corner point in the first image frame. In other words, when the target terminal tracks the third corner point corresponding to each second corner point in the first image frame, it can further determine the third location information of each third corner point.
Table 5

Third corner point    Corner point a1    Corner point a2    Corner point a3
Third location information    (x1, y1)    (x2, y2)    (x3, y3)
Third corner point    Corner point a4    Corner point a5    Corner point a6
Third location information    (x4, y4)    (x5, y5)    (x6, y6)
As shown in Table 5 above, in the first image frame, the target terminal can further determine the third location information of each third corner point: the third location information of corner point a1 is the coordinates (x1, y1), the third location information of corner point a2 is the coordinates (x2, y2), the third location information of corner point a3 is the coordinates (x3, y3), the third location information of corner point a4 is the coordinates (x4, y4), the third location information of corner point a5 is the coordinates (x5, y5), and the third location information of corner point a6 is the coordinates (x6, y6).
Step S104: determine a target transformation matrix according to the first location information of the multiple first corner points and the third location information of the multiple third corner points, and determine the image region of the target object in the second image frame according to the target transformation matrix.
Specifically, the target terminal can further determine the correspondence between each first corner point and each third corner point among the multiple first corner points, i.e. determine the third corner point corresponding to each of the multiple first corner points, wherein the third corner point corresponding to any first corner point is determined based on the second corner point mapped from that first corner point; further, the target terminal can take the first location information of each first corner point and the third location information of the corresponding third corner point as the input parameters of an error calculation formula, so as to output the sum of location errors corresponding to the error calculation formula; then, according to the comparison result between the sum of location errors and an error threshold, the target terminal can further obtain the target transformation matrix, and determine the second image region of the target object in the second image frame according to the target transformation matrix.
Wherein, the error calculation formula can be:

    σ = Σi ‖ T·(Ci, Bi)ᵀ − (Xi, Yi)ᵀ ‖²        (1.3)
It should be understood that the sum of location errors in formula (1.3) is determined by the location errors between the fourth location information of each fourth corner point, obtained by the target terminal transforming each first corner point into the second image frame, and the second location information of the corresponding second corner point, i.e. it is the sum of location errors obtained by the target terminal based on formula (1.3). Since each first corner point in the first image frame and each second corner point in the second image frame have the mapping relations shown in Table 2 above, when the target terminal determines, among the second location information of the multiple second corner points in the second image frame, that the second location information of one or more second corner points does not match the fourth location information of the corresponding fourth corner points, it can obtain the one or more second corner points with the location-mismatch characteristic determined from the multiple second corner points; according to the mapping relations shown in Table 2, it can accordingly find in the first image frame the first corner points corresponding to these second corner points with the location-mismatch characteristic, and can then determine the one or more found first corner points in the first image frame as bad points that need to be filtered out.
Optionally, the error calculation formula can also be:

    σ = Σi σi,  where σi = ‖ (Ci, Bi)ᵀ − (xi, yi)ᵀ ‖² and (xi, yi)ᵀ = T⁻¹·(Xi, Yi)ᵀ        (1.4)
Wherein, the inverse matrix of the transformation matrix (i.e. the first transformation matrix) can be expressed as:

    T⁻¹ = 1/(a² + b²) · |  a   b |
                        | -b   a |
In formula (1.4), the sum of location errors is the sum of the location errors corresponding to the individual first corner points, i.e. σ = σ1 + σ2 + … + σ_{M×N}, where the value of i is the same as the value of i in formula (1.2), i.e. σi is the location error corresponding to any one first corner point among the multiple first corner points. Therefore, after comparing the sum of location errors with the error threshold, the target terminal can obtain a corresponding comparison result (for example, the two comparison results that the sum of location errors is greater than the error threshold, or that the sum of location errors is less than or equal to the error threshold). In other words, when the sum of location errors is greater than the error threshold, the target terminal can further judge, according to the location error corresponding to each first corner point, the location match condition between each first corner point in the first image frame and the corresponding third corner point. Wherein, the location match condition includes the two cases of location match and location mismatch.
Wherein, location match can be understood as: the first location information of a first corner point in the first image frame matches the third location information of the corresponding third corner point, i.e. the location error corresponding to that first corner point is within a certain error tolerance range; conversely, location mismatch can be understood as: the first location information of a first corner point in the first image frame does not match the third location information of the corresponding third corner point, i.e. the location error corresponding to that first corner point is greater than the error tolerance range.
Therefore, when the location error corresponding to each of the multiple first corner points is within the error tolerance range, it can further be obtained that the sum of location errors output by the error calculation formula in formula (1.4) is less than or equal to the error threshold, and the transformation matrix T in effect when the sum of location errors is less than or equal to the error threshold can then be determined as the target transformation matrix. For example, if the sum of location errors computed the first time is already less than or equal to the error threshold, the transformation matrix corresponding to the first transformation matrix (i.e. the transformation matrix corresponding to the multiple first corner points) can be determined as the target transformation matrix; if, on the other hand, the sum of location errors computed the first time is greater than the error threshold, the sum of location errors needs to be computed iteratively until the most recently obtained sum of location errors is less than or equal to the error threshold, at which point the most recently obtained transformation matrix can be determined as the target transformation matrix.
Optionally, when in the first image frame the first location information of one or more first corner points does not match the third location information of the corresponding third corner points, the target terminal can remove these one or more first corner points (i.e. the bad points) from the first image region, and can further determine the first corner points remaining in the first image region after the bad points are removed as multiple first updated corner points, wherein each first updated corner point is determined based on one of the first corner points remaining after the bad points are removed. Therefore, the target terminal can determine the first location information of each first updated corner point in the first image frame, can further re-map each first updated corner point to the second image frame, and can determine in the second image frame the second updated location information of the multiple mapped second updated corner points, i.e. the target terminal can correspondingly find in the second image frame the second updated location information of the second updated corner point mapped from each first updated corner point. In other words, once the target terminal detects that bad points exist in the first image region, the existing bad points need to be removed, so that after removing the bad points the target terminal can repeat the above-mentioned steps S102 to S104 and determine the target transformation matrix based on the most recently obtained first location information of the first corner points and the most recently obtained third location information. At this point, the target transformation matrix can be the most recent transformation matrix determined according to formula (1.1) and formula (1.2) after the bad points have been removed one or more times in the first image frame. Then, the target terminal can track the face frame appearing in the first image frame according to the target transformation matrix, i.e. the face key points of the face frame determined from the first image frame (i.e. the feature points of the left and right eye corners and the left and right mouth corners) can be mapped into the second image frame through the target transformation matrix, so that the second image region fitting the face is accurately located in the second image frame, wherein the determined second image region can be seen as the face frame in the display interface 1b in the embodiment corresponding to Fig. 1 above.
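For reference, the following is a minimal sketch of the iterative bad-point removal described above, assuming the fit_similarity and back_map helpers sketched earlier. It simplifies the patent's flow by reusing the already-tracked corner pairs instead of re-running optical flow after each removal; all threshold values are illustrative.

```python
# A minimal sketch of the iterative refinement in steps S102-S104, assuming
# the fit_similarity and back_map helpers sketched above; it reuses the
# tracked corner pairs rather than re-running optical flow, and the
# thresholds are illustrative values, not the patent's.
import numpy as np

def refine(first_pts, second_pts, err_threshold=4.0, point_tol=2.0, max_rounds=5):
    T = fit_similarity(first_pts, second_pts)
    for _ in range(max_rounds):
        third_pts = back_map(T, second_pts)
        per_point = np.sum((first_pts - third_pts) ** 2, axis=1)  # each sigma_i
        if per_point.sum() <= err_threshold:
            break                          # T becomes the target transformation matrix
        keep = per_point <= point_tol      # drop bad points outside the tolerance
        if keep.all() or keep.sum() < 3:   # keep at least 3 corner points (N >= 3)
            break
        first_pts, second_pts = first_pts[keep], second_pts[keep]
        T = fit_similarity(first_pts, second_pts)
    return T
```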
As can be seen, based on the location match condition between the first location information of each first corner point and the third location information of the corresponding third corner point, the target terminal can screen out, in the first image frame, any first corner point whose first location information does not match the third location information of the corresponding third corner point, and can refer to the one or more screened-out first corner points with mismatched location information as bad points, so as to remove the bad points from the first image region; this prevents a large offset between the tracked face frame region and the actual face, thereby ensuring that the face frame fitting the face is accurately tracked in the second image frame. In other words, the bad points can be one or more first corner points among the multiple first corner points in the first image region, for each of which the first location information does not match the third location information of the corresponding third corner point.
The embodiment of the present invention first tracks the first location information of each of the multiple first corner points in the first image frame, so that the second location information of the second corner point mapped from each first corner point can be found in the second image frame. Since the movement of the face is relatively flexible, in order to ensure that as many as possible of the multiple first corner points in the first image frame are mapped one by one into the second image frame, the second corner point mapped from each first corner point can further be inversely transformed to the first image frame to obtain the location match condition of each first corner point in the first image frame; that is, by inversely transforming each second corner point obtained in the second image frame into the first image frame, the corner point that each second corner point transforms to is determined in the first image frame as a third corner point, the third location information of the multiple obtained third corner points can further be determined in the first image frame, and the target transformation matrix can then be determined according to the obtained first location information of each first corner point and the third location information of the corresponding third corner point. Wherein, the target transformation matrix can be the matrix obtained when the locations of any first corner point and its corresponding third corner point match; optionally, the target transformation matrix can also be the matrix obtained when the locations of any one or more first corner points among the multiple first corner points and their corresponding third corner points do not match. Therefore, the image region appearing in the first image frame (i.e. the face frame) can further be tracked according to the target transformation matrix. It should be understood that the inverse transformation can avoid the offset of the face frame in the tracking process caused by a single optical flow tracking algorithm, so that the region fitting the face can be accurately located in the second image frame, i.e. the second image region corresponding to the first image region is tracked in the second image frame, which in turn can improve the accuracy of tracking the face.
Refer to Fig. 7, which is a flow diagram of another image processing method provided by an embodiment of the present invention. As shown in Fig. 7, the method includes the following steps:
Step S201: obtain the first image frame containing the target object at the first moment, and determine from the first image frame the first image region associated with the target object.
Step S202: determine multiple first corner points from the first image region, and determine the first location information of each first corner point in the first image frame.
For ease of understanding, the multiple first corner points extracted from the first image region can be stored in a first point set P1 corresponding to the first image frame, i.e. the multiple first corner points stored in the first point set P1 can be the multiple first corner points shown in the display interface 200c in the embodiment corresponding to Fig. 5 above, and the target terminal can further determine in the first image frame the first location information of these multiple first corner points (i.e. corner point A1, corner point A2, corner point A3, corner point A4, corner point A5 and corner point A6). Wherein, the first location information of each first corner point can be found in the distribution table shown in Table 1 above; i.e., as shown in Table 1, the first location information of corner point A1 is the coordinates (C1, B1), the first location information of corner point A2 is the coordinates (C2, B2), the first location information of corner point A3 is the coordinates (C3, B3), the first location information of corner point A4 is the coordinates (C4, B4), the first location information of corner point A5 is the coordinates (C5, B5), and the first location information of corner point A6 is the coordinates (C6, B6). For ease of understanding, each of the M × N first corner points extracted from the first image region and stored in the point set P1 can be uniformly expressed as corner point Ai, and the first location information of each first corner point can then be uniformly expressed as (Ci, Bi), where the value range of the subscript i is the natural numbers greater than or equal to 1 and less than or equal to M × N.
Step S203: obtain the second image frame containing the target object at the second moment, map each first corner point to the second image frame based on the first location information of each first corner point, and determine in the second image frame the second location information of the multiple mapped second corner points.
Wherein, the second moment is the moment following the first moment.
Similarly, the target terminal can further carry out corner point tracking on each first corner point in the first image frame through the optical flow tracking algorithm, so as to track in the second image frame the second corner point mapped from each first corner point; that is, each first corner point can be mapped into the second image frame, and the second location information of the multiple mapped second corner points is determined in the second image frame. Wherein, the mapping relations between each first corner point and each second corner point can be seen in the mapping relation table shown in Table 2 above. For ease of understanding, each second corner point obtained through the optical flow tracking algorithm can be stored in a point set P2 corresponding to the second image frame. Further, the target terminal can determine the second location information of each second corner point in the second image frame, i.e. obtain the second location information of each second corner point in the point set P2, wherein the second location information corresponding to each second corner point can be seen in the distribution table shown in Table 3 above. Similarly, the target terminal can uniformly express each of the multiple second corner points as corner point Ai', and the second location information of the second corner point mapped from each first corner point can be uniformly expressed as (Xi, Yi).
Wherein, the specific execution of the above-mentioned steps S201 to S203 can be seen in the description of steps S101 and S102 in the embodiment corresponding to Fig. 2 above, and is not repeated here.
Step S204: obtain multiple mapping parameters based on the first location information of each first corner point and the second location information of each second corner point.
Wherein, any one of the multiple mapping parameters is obtained based on the first location information of one first corner point and the second location information of one second corner point.
Wherein, each of the multiple mapping parameters can be determined through formula (1.2) above from the first location information of each first corner point (i.e. its position coordinates (Ci, Bi) in the first image frame) and the second location information of the second corner point mapped from that first corner point (i.e. its position coordinates (Xi, Yi) in the second image frame). It should be understood that formula (1.2) is, in the mathematical sense, a least squares method that can characterize the coordinate mapping relation between two points (i.e. a first corner point and a second corner point). Therefore, each of the multiple mapping parameters is determined by the target terminal through this least squares method from the coordinate mapping relation between each first corner point and the second corner point mapped from it, i.e. each mapping parameter can be understood as the positional mapping relation determined based on the first location information of each first corner point and the second location information of the corresponding second corner point.
Step S205: generate the transformation matrix corresponding to the multiple first corner points according to the multiple mapping parameters, obtain the inverse matrix corresponding to the transformation matrix, and determine the inverse matrix as the first transformation matrix.
Wherein, with the multiple mapping parameters obtained in step S204 above, the target terminal can further determine the values of the element a and the element b in the transformation matrix T, i.e. the values given by formula (1.2) above, so as to determine the transformation matrix corresponding to the multiple first corner points according to formula (1.1). The target terminal can then carry out an inversion operation on this transformation matrix to obtain the inverse matrix corresponding to it, thereby obtaining the first transformation matrix that can transform the second corner points in the second image frame to the first image frame.
Step S206: transform each second corner point to the first image frame according to the first transformation matrix and the second location information of each second corner point, and determine in the first image frame the third location information of the multiple transformed third corner points.
Specifically, with the inverse matrix corresponding to the transformation matrix generated in step S205 above (i.e. the first transformation matrix) and the second location information of each second corner point, the target terminal can transform each second corner point obtained through the optical flow tracking algorithm into (i.e. map it back to) the first image frame, so as to obtain in the first image frame the third location information of the third corner point transformed from each second corner point, and can further determine the third corner point corresponding to each of the multiple first corner points, i.e. determine in the first image frame the mapping relations between each first corner point and each third corner point. Further, refer to Fig. 8, which is an overall flow frame diagram of obtaining the second image region provided by an embodiment of the present invention. The mapping relations among each first corner point in the point set P1, each second corner point in the point set P2 and each third corner point in the point set P3 can also be seen in the schematic diagram of the mapping relations among the corner points shown in Fig. 6 above, and are not repeated here.
Step S207 believes according to the first location information of above-mentioned each first angle point and the third place of each third angle point Breath, determines object transformation matrix, and determine above-mentioned target object in above-mentioned second picture frame according to above-mentioned object transformation matrix Image-region.
Wherein, in the first image frame, the target terminal can obtain multiple location errors based on the first location information of each first corner point and the third location information of each third corner point, can then obtain the sum of the multiple location errors shown in Fig. 8 according to the error calculation formula in formula (1.4) above, and can compare the sum of the multiple location errors with the error threshold shown in Fig. 8 to obtain a corresponding comparison result (the comparison result can be one of the two comparison results described in step S104 above). If the comparison result is that the sum of the multiple location errors is greater than the error threshold, the target terminal can further find, among the individual location errors, the target location errors that are greater than the error tolerance range. Since one location error corresponds to one first corner point and the third corner point corresponding to that first corner point, the target terminal can determine, according to each target location error, the corresponding first corner point in the first image frame, so as to screen out the first corner points corresponding to the target location errors, and determine the one or more screened-out first corner points as bad points. As can be seen, the bad points can be one or more first corner points among the multiple first corner points for which the first location information does not match the third location information of the corresponding third corner point; each of the multiple location errors can thus be used to reflect the location match condition between the first location information of each first corner point and the third location information of the corresponding third corner point. In view of this, after determining the bad points in the first image region, the target terminal can further remove the bad points from the first image region, i.e. correspondingly filter out the determined bad points from the point set P1 corresponding to the first image frame, determine the first corner points remaining after the bad points are removed as multiple first updated corner points, and update the multiple first corner points in the point set P1 shown in Fig. 8 according to the multiple first updated corner points. Then, according to each first updated corner point stored in the updated point set P1, the target terminal continues to re-map, through the optical flow tracking algorithm, each first updated corner point into the second image frame, and determines in the second image frame the second updated location information of the multiple mapped second updated corner points; it can then recompute the latest transformation matrix according to the first location information of each first updated corner point and the second updated location information of each second updated corner point. The target terminal can refer to the inverse matrix corresponding to this latest transformation matrix as the second transformation matrix, i.e. this second transformation matrix can be understood as a new first transformation matrix, so that, following the specific steps for obtaining the third location information described above, the third updated location information of each of the multiple third updated corner points can be obtained. The target transformation matrix shown in Fig. 8 can then be generated according to the first location information of each first updated corner point and the third updated location information of each third updated corner point (at this point, the target transformation matrix can be the latest transformation matrix corresponding to the multiple first updated corner points after the bad points are removed), and the image region of the target object in the second image frame can further be determined according to the target transformation matrix and the face key points in the first image region shown in Fig. 8. It can be understood that the image region of the target object in the second image frame is the image region formed by the face key points in the second image frame, i.e. the second image region shown in Fig. 8 can be obtained. Wherein, it should be understood that if the first image region is the face frame in the display interface 1(n-1) in the embodiment corresponding to Fig. 1 above, the second image region corresponding to that face frame can be tracked in the second image frame in the display interface 1n, i.e. the second image region can be the face frame in the display interface 1n in the embodiment corresponding to Fig. 1 above.
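For reference, a minimal sketch of locating the second image region: the face key points from the first image frame are mapped with the target transformation matrix, and their bounding box is taken as the tracked face frame; treating the region as the key points' bounding box is an assumption made for illustration.

```python
# A minimal sketch of locating the second image region from the mapped face
# key points; taking their bounding box as the face frame is an illustrative
# assumption, and locate_face_region is a hypothetical name.
import numpy as np

def locate_face_region(T_target, keypoints):
    """keypoints: (K, 2) face key points (e.g. eye corners, mouth corners)."""
    mapped = keypoints @ T_target.T   # key points mapped into the second frame
    x0, y0 = mapped.min(axis=0)
    x1, y1 = mapped.max(axis=0)
    return x0, y0, x1 - x0, y1 - y0   # second image region as (x, y, w, h)
```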
It should be understood that each of the multiple location errors can be used to judge the location match condition between the first corner point associated with that location error in the first image frame and the third corner point corresponding to that first corner point, i.e. there are the two cases of location match and location mismatch shown in Fig. 8. Wherein, the specific process of determining the target transformation matrix can be seen in the description of that process in the embodiment corresponding to Fig. 2 above, and is not repeated here.
Optionally, in the process of removing the bad points in the first image region, the target terminal can also further determine, through formula (1.3) above, the second corner points with the location-mismatch characteristic in the second image frame, so as to indirectly find in the first image frame the first corner points corresponding to those second corner points, and thus indirectly find in the first image frame the bad points that need to be removed. In other words, after obtaining the transformation matrix through formula (1.1) and formula (1.2), the target terminal can map each second corner point in the point set P2 corresponding to the second image frame to the first image frame according to the inverse matrix of the transformation matrix, so as to look for the bad points existing in the first image frame based on the first transformation matrix as described above. Optionally, the target terminal can also directly transform each first corner point in the point set P1 corresponding to the first image frame into the second image frame according to the transformation matrix, thereby obtaining the fourth corner point produced from each first corner point through the transformation matrix (the fourth corner points can be uniformly expressed as ai'), and can determine the fourth location information of each fourth corner point in the second image frame, where the fourth location information can be uniformly expressed as (ci, bi). In other words, optionally, the target terminal can further determine, according to the transformation matrix and the first location information of each first corner point, the fourth location information of each fourth corner point that each first corner point transforms to in the second image frame. In the second image frame, the fourth location information of a fourth corner point may not match the second location information of the second corner point mapped from the same first corner point through the optical flow tracking algorithm. In view of this, the target terminal can further determine in the second image frame, according to the location match condition between the second location information of each second corner point and the fourth location information of each fourth corner point, the one or more second corner points for which the second location information does not match the fourth location information of the corresponding fourth corner point, and can then refer to the one or more determined second corner points as the second corner points with the location-mismatch characteristic; according to the mapping relations between the first corner points and the second corner points shown in Table 2 above, the first corner points corresponding to those second corner points can be found indirectly in the first image frame, so that the bad points that need to be removed are found indirectly in the first image frame. The transformation matrix can then be recomputed according to the first location information of the first corner points remaining after the bad points are removed (i.e. the multiple first updated corner points) and the second updated location information of the corresponding second updated corner points, so as to generate the target transformation matrix.
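For reference, a minimal sketch of the forward check via formula (1.3) under the same assumptions: each first corner point is mapped with T into the second image frame as a fourth corner point, and the second corner points whose tracked positions disagree are flagged; the tolerance value is illustrative.

```python
# A minimal sketch of the forward check via formula (1.3) as reconstructed
# above; the tolerance is an illustrative value, not the patent's.
import numpy as np

def forward_mismatch(T, first_pts, second_pts, tol=2.0):
    fourth_pts = first_pts @ T.T                          # coords (ci, bi)
    err = np.sum((fourth_pts - second_pts) ** 2, axis=1)  # per-pair error
    return err > tol   # True marks a pair with the location-mismatch characteristic
```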
Wherein, the specific process by which the target terminal determines, through formula (1.3), the second corner points with the location-mismatch characteristic in the second image frame can be as follows: based on the error calculation formula in formula (1.3), output the sum of location errors corresponding to that formula, i.e. the sum determined by summing the location error corresponding to each second corner point. Therefore, when the fourth location information of a fourth corner point does not match the second location information of the corresponding second corner point, the second corner points with the location-mismatch characteristic can be picked out in the second image frame; based on the mapping relations between each first corner point and each second corner point shown in Table 2 above, the first corner points corresponding to those second corner points are looked up in the first image frame, and the one or more found first corner points are determined as bad points, so that the bad points can be removed from the first image region.
In view of this, based on formula (1.3) the target terminal can indirectly determine the bad points in the first image region, and based on formula (1.4) it can also directly look up in the first image region the bad points that need to be removed. That is, by using both of these two approaches at the same time, the present invention can remove the bad points existing in the first image region more accurately and efficiently.
It should be understood that, in the process of removing the bad points, the target terminal can determine as the target transformation matrix the latest transformation matrix obtained when the location error sum finally obtained after removing the bad points is less than or equal to the error threshold. Optionally, the target terminal can also count the number of times the bad points in the first image region are removed; if the counted number of times is greater than or equal to a times threshold, the steps in the above-mentioned steps S201 to S207 can be executed repeatedly according to the flow frame diagram shown in Fig. 8, and the latest transformation matrix determined when the number of times is greater than or equal to the times threshold is determined as the target transformation matrix.
The embodiment of the present invention first tracks the first location information of each first corner point in the first image frame, so that the second location information of the second corner point mapped from each first corner point can be found in the second image frame. Since the movement of the face is relatively flexible, in order to ensure that as many as possible of the multiple first corner points in the first image frame are mapped one by one into the second image frame, the second corner point mapped from each first corner point can further be inversely transformed to the first image frame to obtain the location match condition of each first corner point in the first image frame; that is, by inversely transforming each second corner point obtained in the second image frame into the first image frame, the corner point that each second corner point transforms to is determined in the first image frame as a third corner point, the third location information of the multiple obtained third corner points can further be determined in the first image frame, and the target transformation matrix can then be determined according to the obtained first location information of each first corner point and the third location information of the corresponding third corner point. Wherein, the target transformation matrix can be the matrix obtained when the locations of any first corner point and its corresponding third corner point match; optionally, it can also be the matrix obtained when the locations of any one or more first corner points among the multiple first corner points and their corresponding third corner points do not match. Therefore, the image region appearing in the first image frame (i.e. the face frame) can further be tracked according to the target transformation matrix; it should be understood that the inverse transformation can avoid the offset of the face frame in the tracking process caused by a single optical flow tracking algorithm, so that the region fitting the face can be accurately located in the second image frame, i.e. the image region fitting the face frame is tracked in the second image frame, which in turn can improve the accuracy of tracking the face.
Further, referring to Fig. 9, which is a schematic structural diagram of an image data processing apparatus provided by an embodiment of the present invention. As shown in Fig. 9, the image data processing apparatus 1 may be the target terminal in the embodiment corresponding to Fig. 1 above. The image data processing apparatus 1 may include: a first position determining module 10, a second position determining module 20, a corner transformation module 30, a third position determining module 40, a target matrix determining module 50 and a region determining module 60.

The first position determining module 10 is configured to obtain a first image frame containing a target object at a first moment, and to obtain first location information of multiple first corners from the first image frame.

The first position determining module 10 may include: a first region determining unit 101 and a first position determining unit 102.

The first region determining unit 101 is configured to obtain the first image frame containing the target object at the first moment, and to determine a first image region associated with the target object from the first image frame.

Wherein, the target object includes a face.

The first region determining unit 101 includes: a face region determining subunit 1011 and a key point determining subunit 1012.

The face region determining subunit 1011 is configured to obtain the first image frame containing the face at the first moment, and to determine the face region corresponding to the face in the first image frame.

The key point determining subunit 1012 is configured to determine, based on a neural network model, face key points associated with the face from the face region, and to determine, according to the location information of the face key points in the face region, the first image region of the face in the first image frame.

The specific implementations of the face region determining subunit 1011 and the key point determining subunit 1012 may be found in the description of step S101 in the embodiment corresponding to Fig. 2 above and are not repeated here.
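As a minimal sketch of subunit 1012's output, the first image region can be derived from the detected face key points as their padded bounding rectangle; the padding ratio below is an assumed margin, not something this embodiment prescribes:

```python
import numpy as np

def region_from_keypoints(keypoints, pad=0.1):
    """keypoints: (K, 2) array of face key-point positions in the first image frame."""
    x0, y0 = keypoints.min(axis=0)
    x1, y1 = keypoints.max(axis=0)
    w, h = x1 - x0, y1 - y0
    # Pad the tight landmark box slightly (pad=0.1 is an assumption)
    # so the region covers the whole face.
    return (x0 - pad * w, y0 - pad * h, x1 + pad * w, y1 + pad * h)
```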
The first position determining unit 102 is configured to determine multiple first corners from the first image region, and to determine the first location information of each first corner in the first image frame.

The first position determining unit 102 includes: a subregion dividing subunit 1021, a corner extraction subunit 1022 and a position determining subunit 1023.

The subregion dividing subunit 1021 is configured to evenly divide the first image region into M subregions, where M is a natural number greater than or equal to 2.

The corner extraction subunit 1022 is configured to extract N first corners in each of the M subregions, obtaining M × N first corners, where N is a natural number greater than or equal to 3.

The position determining subunit 1023 is configured to determine, in the first image frame, the first location information of each of the M × N first corners.

The specific implementations of the subregion dividing subunit 1021, the corner extraction subunit 1022 and the position determining subunit 1023 may be found in the description of step S101 in the embodiment corresponding to Fig. 2 above and are not repeated here.
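A sketch of subunits 1021-1023 follows, assuming Shi-Tomasi corners via OpenCV's goodFeaturesToTrack and an m-by-m grid so that M = m*m; the detector and the square grid are illustrative choices, and the defaults satisfy M ≥ 2 and N ≥ 3:

```python
import numpy as np
import cv2

def grid_corners(gray, region, m=3, n=4):
    """Evenly split `region` (x0, y0, x1, y1) into m*m subregions, up to n corners each."""
    x0, y0, x1, y1 = map(int, region)
    sw, sh = (x1 - x0) // m, (y1 - y0) // m
    corners = []
    for i in range(m):
        for j in range(m):
            sx, sy = x0 + j * sw, y0 + i * sh
            patch = gray[sy:sy + sh, sx:sx + sw]
            # Shi-Tomasi detector is an illustrative choice of corner extractor.
            pts = cv2.goodFeaturesToTrack(patch, maxCorners=n,
                                          qualityLevel=0.01, minDistance=5)
            if pts is not None:
                # Shift patch coordinates back into full-frame coordinates.
                corners.append(pts.reshape(-1, 2) + np.array([sx, sy], dtype=np.float32))
    return np.vstack(corners).astype(np.float32)  # first location information of the corners
```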
The second position determining module 20 is configured to obtain a second image frame containing the target object at a second moment, and to obtain second location information of multiple second corners in the second image frame.

Specifically, the second position determining module 20 is configured to obtain the second image frame containing the target object at the second moment, to map each first corner to the second image frame based on the first location information of each first corner, and to determine, in the second image frame, the second location information of the multiple second corners so mapped; the second moment is the moment following the first moment.

The corner transformation module 30 is configured to transform the multiple second corners into the first image frame.

The corner transformation module 30 includes: a mapping parameter generating unit 301, a transformation matrix generating unit 302 and a corner transformation unit 303.

The mapping parameter generating unit 301 is configured to obtain multiple mapping parameters based on the first location information of each first corner and the second location information of each second corner, where one mapping parameter among the multiple mapping parameters is obtained based on the first location information of one first corner and the second location information of one second corner.

The transformation matrix generating unit 302 is configured to generate, according to the multiple mapping parameters, a transformation matrix corresponding to the multiple first corners, to obtain the inverse matrix of the transformation matrix, and to determine the inverse matrix as a first transformation matrix.

The corner transformation unit 303 is configured to transform each second corner into the first image frame according to the first transformation matrix and the second location information of each second corner.

The specific implementations of the mapping parameter generating unit 301, the transformation matrix generating unit 302 and the corner transformation unit 303 may be found in the description of step S103 in the embodiment corresponding to Fig. 2 above and are not repeated here.
The third position determining module 40 is configured to determine, in the first image frame, the third location information of the multiple third corners obtained by the transformation.

The target matrix determining module 50 is configured to determine a target transformation matrix according to the first location information of the multiple first corners and the third location information of the multiple third corners.

The target matrix determining module 50 includes: a correspondence determining unit 501, a bad point removing unit 502, a first updating unit 503, a second updating unit 504 and a target matrix generating unit 505; optionally, the target matrix determining module 50 may further include a count statistics unit 506.

The correspondence determining unit 501 is configured to determine the third corner corresponding to each first corner among the multiple first corners, where the third corner corresponding to any first corner is determined based on the second corner to which that first corner maps.

The bad point removing unit 502 is configured to remove the bad points in the first image region according to the first location information of each first corner and the third location information of each third corner; the bad points are one or more first corners among the multiple first corners for which the first location information of the first corner does not match the third location information of its corresponding third corner.

The bad point removing unit 502 includes: an error obtaining subunit 5021, a computing subunit 5022, a corner removing subunit 5023 and a matrix determining subunit 5024.

The error obtaining subunit 5021 is configured to obtain multiple position errors according to the first location information of each first corner and the third location information of each third corner; any position error among the multiple position errors is determined based on the first location information of one first corner and the third location information of the third corner corresponding to that first corner.

The computing subunit 5022 is configured to calculate the sum of the multiple position errors.

The corner removing subunit 5023 is configured to remove one or more first corners in the first image region if the sum of the multiple position errors is greater than the error threshold.

Optionally, the matrix determining subunit 5024 is configured to determine the transformation matrix corresponding to the multiple first corners as the target transformation matrix if the sum of the multiple position errors is less than or equal to the error threshold.

The specific implementations of the error obtaining subunit 5021, the computing subunit 5022, the corner removing subunit 5023 and the matrix determining subunit 5024 may be found in the description of the bad point removal process in the embodiment corresponding to Fig. 2 above and are not repeated here.
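Subunits 5021-5024 amount to the following check. How many of the worse-matching corners are removed when the summed error exceeds the threshold is not pinned down at this level of the description, so the mean-based cut here is an assumption:

```python
import numpy as np

def remove_bad_points(pts1, pts3, err_thresh):
    """pts1/pts3: (N, 2) first-corner and third-corner positions in the first image frame."""
    errors = np.linalg.norm(pts1 - pts3, axis=1)  # one position error per corner pair (5021)
    if errors.sum() <= err_thresh:                # 5022: summed error is small enough
        return pts1, False                        # 5024: keep the current transformation matrix
    keep = errors <= errors.mean()                # 5023: drop mismatched corners (assumed rule)
    return pts1[keep], True
```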
The first updating unit 503 is configured to determine, in the first image region, the first corners remaining after the bad points have been removed as multiple first update corners; each of the multiple first update corners is determined based on one of the first corners remaining after the bad points have been removed.

The second updating unit 504 is configured to map each first update corner to the second image frame according to the first location information of each first update corner, and to determine, in the second image frame, the second update location information of the multiple second update corners so mapped.

The target matrix generating unit 505 is configured to generate the target transformation matrix according to the first location information of each first update corner and the second update location information of each second update corner.

The specific implementations of the correspondence determining unit 501, the bad point removing unit 502, the first updating unit 503, the second updating unit 504 and the target matrix generating unit 505 may be found in the description of step S104 in the embodiment corresponding to Fig. 2 above and are not repeated here.

The count statistics unit 506 is configured to count the number of times bad points are removed from the first image region and, if the counted number is greater than or equal to the count threshold, to notify the first updating unit 503 to perform the step of determining, in the first image region, the first corners remaining after the bad points have been removed as multiple first update corners.

The specific implementation of the count statistics unit 506 may be found in the description of counting the number of bad point removals in the embodiment corresponding to Fig. 8 above and is not repeated here.

The region determining module 60 is configured to determine, according to the target transformation matrix, the image region of the target object in the second image frame.

The specific implementations of the first position determining module 10, the second position determining module 20, the corner transformation module 30, the third position determining module 40, the target matrix determining module 50 and the region determining module 60 may be found in the description of steps S101-S104 in the embodiment corresponding to Fig. 2 above and are not repeated here.
In the embodiment of the present invention, the first location information of each first corner in the first image frame is tracked first, so that the second location information of the second corner to which each first corner maps can be found in the second image frame. Because the face moves flexibly, and to ensure that as many of the first corners in the first image frame as possible are mapped into the second image frame, each mapped second corner can further be transformed back, one by one, into the first image frame to obtain the position-match status of each first corner. That is, by inversely transforming each second corner obtained in the second image frame back into the first image frame, the corner obtained from each second corner can be determined in the first image frame as a third corner, the third location information of the resulting third corners can be determined in the first image frame, and the target transformation matrix can then be determined from the first location information of each first corner and the third location information of its corresponding third corner. The target transformation matrix may be the matrix obtained when the positions of every first corner and its corresponding third corner match; optionally, it may also be the matrix obtained when the positions of one or more of the first corners and their corresponding third corners do not match. The image region appearing in the first image frame (i.e., the face box) can therefore be tracked according to the target transformation matrix. It will be understood that the reciprocal transformation avoids the face-box drift that a single optical-flow tracking algorithm can introduce during tracking, so that the region fitting the face can be located accurately in the second image frame, i.e., an image region fitting the face box can be tracked in the second image frame, thereby improving the accuracy of tracking the face.
Further, referring to Fig. 10, which is a schematic structural diagram of another image data processing apparatus provided by an embodiment of the present invention. As shown in Fig. 10, the image data processing apparatus 1000 may be applied to the target terminal in the embodiment corresponding to Fig. 1 above. The image data processing apparatus 1000 may include: a processor 1001, a network interface 1004 and a memory 1005; in addition, the image data processing apparatus 1000 may further include: a user interface 1003 and at least one communication bus 1002. The communication bus 1002 is used to realize connection and communication between these components. The user interface 1003 may include a display (Display) and a keyboard (Keyboard), and optionally may further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory, or a non-volatile memory, for example at least one disk memory. Optionally, the memory 1005 may also be at least one storage device located away from the aforementioned processor 1001. As shown in Fig. 10, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module and a device control application program.

In the image data processing apparatus 1000 shown in Fig. 10, the network interface 1004 can provide a network communication function; the user interface 1003 is mainly used to provide an input interface for the user; and the processor 1001 can be used to invoke the device control application program stored in the memory 1005, so as to realize:
obtaining a first image frame containing a target object at a first moment, and obtaining first location information of multiple first corners from the first image frame;

obtaining a second image frame containing the target object at a second moment, and obtaining second location information of multiple second corners in the second image frame;

transforming the multiple second corners into the first image frame, and determining third location information of multiple third corners obtained by the transformation in the first image frame;

determining a target transformation matrix according to the first location information of the multiple first corners and the third location information of the multiple third corners, and determining an image region of the target object in the second image frame according to the target transformation matrix.
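As a usage sketch of the final step, once the target transformation matrix is available, the image region in the second image frame follows by transforming the four corners of the first frame's face box; perspectiveTransform is one way to do this, not a routine the embodiment names:

```python
import numpy as np
import cv2

def region_in_second_frame(H_target, box1):
    """box1: (4, 2) float32 corners of the face box in the first image frame."""
    pts = box1.reshape(-1, 1, 2)
    # Carry the face box into the second frame with the target transformation matrix.
    return cv2.perspectiveTransform(pts, H_target).reshape(-1, 2)
```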
It should be appreciated that the image data processing apparatus 1000 described in this embodiment of the present invention can perform the description of the image processing method in the embodiments corresponding to Fig. 2 or Fig. 7 above, and can also perform the description of the image data processing apparatus 1 in the embodiment corresponding to Fig. 9 above, which will not be repeated here. In addition, the beneficial effects of using the same method are likewise not repeated.

In addition, it should be pointed out that an embodiment of the present invention further provides a computer storage medium, in which the computer program executed by the image data processing apparatus 1 mentioned above is stored, the computer program including program instructions; when the processor executes the program instructions, it can perform the description of the image processing method in the embodiments corresponding to Fig. 2 or Fig. 7 above, which will therefore not be repeated here. In addition, the beneficial effects of using the same method are likewise not repeated. For technical details not disclosed in the computer storage medium embodiment of the present invention, please refer to the description of the method embodiments of the present invention.

Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be completed by instructing relevant hardware through a computer program; the program can be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.

The above disclosure is only the preferred embodiments of the present invention, which certainly cannot be used to limit the scope of the rights of the present invention; therefore, equivalent changes made according to the claims of the present invention still fall within the scope of the present invention.

Claims (15)

1. An image processing method, characterized by comprising:

obtaining a first image frame containing a target object at a first moment, and obtaining first location information of multiple first corners from the first image frame;

obtaining a second image frame containing the target object at a second moment, and obtaining second location information of multiple second corners in the second image frame;

transforming the multiple second corners into the first image frame, and determining third location information of multiple third corners obtained by the transformation in the first image frame;

determining a target transformation matrix according to the first location information of the multiple first corners and the third location information of the multiple third corners, and determining an image region of the target object in the second image frame according to the target transformation matrix.
2. The method according to claim 1, characterized in that the obtaining a first image frame containing a target object at a first moment, and obtaining first location information of multiple first corners from the first image frame, comprises:

obtaining the first image frame containing the target object at the first moment, and determining a first image region associated with the target object from the first image frame;

determining multiple first corners from the first image region, and determining the first location information of each first corner in the first image frame.

3. The method according to claim 2, characterized in that the target object comprises a face;

the obtaining the first image frame containing the target object at the first moment, and determining a first image region associated with the target object from the first image frame, comprises:

obtaining the first image frame containing the face at the first moment, and determining a face region corresponding to the face in the first image frame;

determining face key points associated with the face from the face region based on a neural network model, and determining the first image region of the face in the first image frame according to location information of the face key points in the face region.
4. The method according to claim 2, characterized in that the determining multiple first corners from the first image region, and determining the first location information of each first corner in the first image frame, comprises:

evenly dividing the first image region into M subregions, where M is a natural number greater than or equal to 2;

extracting N first corners in each of the M subregions, obtaining M × N first corners, where N is a natural number greater than or equal to 3;

determining, in the first image frame, the first location information of each of the M × N first corners.

5. The method according to claim 4, characterized in that the obtaining a second image frame containing the target object at a second moment, and obtaining second location information of multiple second corners in the second image frame, comprises:

obtaining the second image frame containing the target object at the second moment, mapping each first corner to the second image frame based on the first location information of each first corner, and determining, in the second image frame, the second location information of the multiple second corners so mapped, the second moment being the moment following the first moment.
6. The method according to claim 1, characterized in that the transforming the multiple second corners into the first image frame comprises:

obtaining multiple mapping parameters based on the first location information of each first corner among the multiple first corners and the second location information of each second corner among the multiple second corners, where one mapping parameter among the multiple mapping parameters is obtained based on the first location information of one first corner and the second location information of one second corner;

generating a transformation matrix corresponding to the multiple first corners according to the multiple mapping parameters, obtaining an inverse matrix of the transformation matrix, and determining the inverse matrix as a first transformation matrix;

transforming each second corner into the first image frame according to the first transformation matrix and the second location information of each second corner.
7. The method according to claim 6, characterized in that the determining a target transformation matrix according to the first location information of the multiple first corners and the third location information of the multiple third corners comprises:

determining a third corner corresponding to each first corner among the multiple first corners, where the third corner corresponding to any first corner is determined based on the second corner to which that first corner maps;

removing bad points in the first image region according to the first location information of each first corner and the third location information of each third corner; the bad points being one or more first corners among the multiple first corners for which the first location information of the first corner does not match the third location information of the third corner corresponding to that first corner;

determining, in the first image region, the first corners remaining after the bad points have been removed as multiple first update corners; each first update corner among the multiple first update corners being determined based on one of the first corners remaining after the bad points have been removed;

mapping each first update corner to the second image frame according to the first location information of each first update corner, and determining, in the second image frame, second update location information of the multiple second update corners so mapped;

generating the target transformation matrix according to the first location information of each first update corner and the second update location information of each second update corner.

8. The method according to claim 7, characterized in that the method further comprises:

counting the number of times bad points are removed from the first image region, and, if the counted number is greater than or equal to a count threshold, performing the step of determining, in the first image region, the first corners remaining after the bad points have been removed as multiple first update corners.
9. The method according to claim 7, characterized in that the removing bad points in the first image region according to the first location information of each first corner and the third location information of each third corner comprises:

obtaining multiple position errors according to the first location information of each first corner and the third location information of each third corner; any position error among the multiple position errors being determined based on the first location information of one first corner and the third location information of the third corner corresponding to that first corner;

calculating the sum of the multiple position errors;

removing one or more first corners in the first image region if the sum of the multiple position errors is greater than an error threshold.

10. The method according to claim 9, characterized in that the method further comprises:

determining the transformation matrix corresponding to the multiple first corners as the target transformation matrix if the sum of the multiple position errors is less than or equal to the error threshold.
11. An image data processing apparatus, characterized by comprising:

a first position determining module, configured to obtain a first image frame containing a target object at a first moment, and to obtain first location information of multiple first corners from the first image frame;

a second position determining module, configured to obtain a second image frame containing the target object at a second moment, and to obtain second location information of multiple second corners in the second image frame;

a third position determining module, configured to transform the multiple second corners into the first image frame, and to determine third location information of multiple third corners obtained by the transformation in the first image frame;

a target matrix generating module, configured to determine a target transformation matrix according to the first location information of each first corner and the third location information of each third corner;

a region determining module, configured to determine an image region of the target object in the second image frame according to the target transformation matrix.

12. The apparatus according to claim 11, characterized in that the first position determining module comprises:

a first region determining unit, configured to obtain the first image frame containing the target object at the first moment, and to determine a first image region associated with the target object from the first image frame;

a first position determining unit, configured to determine multiple first corners from the first image region, and to determine the first location information of each first corner in the first image frame.
13. The apparatus according to claim 12, characterized in that the target object comprises a face;

the first region determining unit comprises:

a face region determining subunit, configured to obtain the first image frame containing the face at the first moment, and to determine a face region corresponding to the face in the first image frame;

a key point determining subunit, configured to determine, based on a neural network model, face key points associated with the face from the face region, and to determine, according to location information of the face key points in the face region, the first image region of the face in the first image frame.

14. An image data processing apparatus, characterized by comprising: a processor and a memory;

the processor being connected to the memory, wherein the memory is configured to store program code, and the processor is configured to call the program code to perform the method according to any one of claims 1-10.

15. A computer storage medium, characterized in that the computer storage medium stores a computer program, the computer program comprising program instructions which, when executed by a processor, perform the method according to any one of claims 1-10.
CN201811276686.1A 2018-10-30 2018-10-30 Image data processing method and related device Active CN110147708B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811276686.1A CN110147708B (en) 2018-10-30 2018-10-30 Image data processing method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811276686.1A CN110147708B (en) 2018-10-30 2018-10-30 Image data processing method and related device

Publications (2)

Publication Number Publication Date
CN110147708A true CN110147708A (en) 2019-08-20
CN110147708B CN110147708B (en) 2023-03-31

Family

ID=67588424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811276686.1A Active CN110147708B (en) 2018-10-30 2018-10-30 Image data processing method and related device

Country Status (1)

Country Link
CN (1) CN110147708B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2353126A1 (en) * 2008-11-05 2011-08-10 Imperial Innovations Limited Keypoint descriptor generation by complex wavelet analysis
WO2018137623A1 (en) * 2017-01-24 2018-08-02 深圳市商汤科技有限公司 Image processing method and apparatus, and electronic device
CN107016646A (en) * 2017-04-12 2017-08-04 长沙全度影像科技有限公司 One kind approaches projective transformation image split-joint method based on improved
CN107424176A (en) * 2017-07-24 2017-12-01 福州智联敏睿科技有限公司 A kind of real-time tracking extracting method of weld bead feature points
CN107633526A (en) * 2017-09-04 2018-01-26 腾讯科技(深圳)有限公司 A kind of image trace point acquisition methods and equipment, storage medium
CN108108694A (en) * 2017-12-21 2018-06-01 北京搜狐新媒体信息技术有限公司 A kind of man face characteristic point positioning method and device

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110647156A (en) * 2019-09-17 2020-01-03 中国科学院自动化研究所 Target object docking ring-based docking equipment pose adjusting method and system
CN110647156B (en) * 2019-09-17 2021-05-11 中国科学院自动化研究所 Target object docking ring-based docking equipment pose adjusting method and system
CN112732146A (en) * 2019-10-28 2021-04-30 广州极飞科技有限公司 Image display method and device and storage medium
CN112732146B (en) * 2019-10-28 2022-06-21 广州极飞科技股份有限公司 Image display method and device and storage medium
CN111372122A (en) * 2020-02-27 2020-07-03 腾讯科技(深圳)有限公司 Media content implantation method, model training method and related device
CN113642552A (en) * 2020-04-27 2021-11-12 上海高德威智能交通系统有限公司 Method, device and system for identifying target object in image and electronic equipment
CN113642552B (en) * 2020-04-27 2024-03-08 上海高德威智能交通系统有限公司 Method, device and system for identifying target object in image and electronic equipment
CN113221841A (en) * 2021-06-02 2021-08-06 云知声(上海)智能科技有限公司 Face detection and tracking method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110147708B (en) 2023-03-31

Similar Documents

Publication Publication Date Title
CN110147708A (en) A kind of image processing method and relevant apparatus
CN110826519B (en) Face shielding detection method and device, computer equipment and storage medium
US11232286B2 (en) Method and apparatus for generating face rotation image
WO2018205676A1 (en) Processing method and system for convolutional neural network, and storage medium
WO2022110638A1 (en) Human image restoration method and apparatus, electronic device, storage medium and program product
CN108898043A (en) Image processing method, image processing apparatus and storage medium
US20220222776A1 (en) Multi-Stage Multi-Reference Bootstrapping for Video Super-Resolution
CN105981050B (en) For extracting the method and system of face characteristic from the data of facial image
Zheng et al. Learning frequency domain priors for image demoireing
CN108701359A (en) Across the video frame tracking interest region with corresponding depth map
CN111091075B (en) Face recognition method and device, electronic equipment and storage medium
TWI689894B (en) Image segmentation method and apparatus
CN105279769B (en) A kind of level particle filter tracking method for combining multiple features
CN107239735A (en) A kind of biopsy method and system based on video analysis
CN110163111A (en) Method, apparatus of calling out the numbers, electronic equipment and storage medium based on recognition of face
CN109919971B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111723707B (en) Gaze point estimation method and device based on visual saliency
CN109840485B (en) Micro-expression feature extraction method, device, equipment and readable storage medium
CN111160202A (en) AR equipment-based identity verification method, AR equipment-based identity verification device, AR equipment-based identity verification equipment and storage medium
WO2021147437A1 (en) Identity card edge detection method, device, and storage medium
CN111340077A (en) Disparity map acquisition method and device based on attention mechanism
CN111814564A (en) Multispectral image-based living body detection method, device, equipment and storage medium
CN112417991B (en) Double-attention face alignment method based on hourglass capsule network
CN111353325A (en) Key point detection model training method and device
WO2021176899A1 (en) Information processing method, information processing system, and information processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant