CN108875548A - Person trajectory generation method and apparatus, storage medium, and electronic device - Google Patents


Info

Publication number
CN108875548A
CN108875548A (application CN201810348274.8A; granted as CN108875548B)
Authority
CN
China
Prior art keywords: target object, image, pedestrian, motion point, face
Prior art date
Legal status
Granted
Application number
CN201810348274.8A
Other languages
Chinese (zh)
Other versions
CN108875548B (en)
Inventor
陆磊
吴子扬
Current Assignee
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Priority date
Filing date
Publication date
Application filed by iFlytek Co Ltd filed Critical iFlytek Co Ltd
Priority to CN201810348274.8A
Publication of CN108875548A
Application granted
Publication of CN108875548B
Legal status: Active

Classifications

    • G06V40/166 - Human faces: detection, localisation, normalisation using acquisition arrangements
    • G06V20/63 - Scene text, e.g. street names
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/168 - Human faces: feature extraction, face representation
    • G06V20/625 - License plates

Abstract

The disclosure provides a person trajectory generation method and apparatus, a storage medium, and an electronic device. The method includes: searching a surveillance video for a target object according to feature information of the target object, obtaining motion points of the target object, and generating an initial person trajectory; finding an associated object that has a correspondence with the target object, and searching the surveillance video for the associated object according to its feature information to obtain the associated object's motion points, where the target object and the associated object have different attribute categories; and updating the initial person trajectory with the motion points of the associated object to obtain a final person trajectory. This scheme helps improve the accuracy of person trajectories.

Description

Person trajectory generation method and apparatus, storage medium, and electronic device
Technical field
The present disclosure relates to the field of video surveillance, and in particular to a person trajectory generation method and apparatus, a storage medium, and an electronic device.
Background art
Video surveillance is an important component of security systems and is widely used in many fields, such as intelligent transportation, public safety, and daily life. In practice, it is often necessary to query or retrieve a designated person in a video scene, for example to pursue a suspect or to search for a missing person.
Currently, retrieval of a designated person is mostly performed with a single modality, either face recognition or pedestrian recognition. Practical experience shows that face recognition alone places high demands on camera placement and orientation: only a sufficiently large and clear face can be recognized well. Pedestrian recognition alone describes a pedestrian inaccurately and can rarely establish the pedestrian's identity with certainty. In other words, single-modality retrieval performs unsatisfactorily, which inevitably degrades the accuracy of any person trajectory drawn from its search results.
Summary of the invention
A general object of the present disclosure is to provide a person trajectory generation method and apparatus, a storage medium, and an electronic device that help improve the accuracy of person trajectories.
To achieve the above object, the disclosure provides a person trajectory generation method, the method including:
searching a surveillance video for the target object according to feature information of the target object, obtaining motion points of the target object, and generating an initial person trajectory;
finding an associated object that has a correspondence with the target object, and searching the surveillance video for the associated object according to feature information of the associated object to obtain motion points of the associated object, the target object and the associated object having different attribute categories;
updating the initial person trajectory with the motion points of the associated object to obtain a final person trajectory.
Optionally, if the target object is contained in an image to be retrieved and the type of the image to be retrieved is a pedestrian image, the feature information of the target object is obtained as follows:
taking the image to be retrieved as input and, after processing by a pre-built pedestrian feature extraction model, outputting the feature information of the target object, where the feature information includes global feature information of the target object and/or local feature information of the target object.
Optionally, if the network structure of the pedestrian feature extraction model is a CNN and the feature information of the target object is local feature information of the target object, then outputting the feature information of the target object after processing by the pre-built pedestrian feature extraction model includes:
obtaining keypoint coordinates of the target object extracted from the image to be retrieved, and a feature map of the image to be retrieved;
downsampling the keypoint coordinates to map the keypoints onto the feature map;
using the keypoint coordinates mapped onto the feature map to partition at least one ROI (region of interest) in the feature map, each ROI indicating local feature information of one body part of the target object.
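The keypoint-to-ROI mapping described above can be sketched as follows; the stride, margin, and part names are illustrative assumptions, not values fixed by the disclosure:

```python
def keypoints_to_rois(keypoints, stride, margin):
    """Map image-space keypoints onto a CNN feature map and build ROIs.

    keypoints: dict of body-part name -> (x, y) in input-image pixels.
    stride: total downsampling factor of the CNN backbone (assumed, e.g. 16).
    margin: half-size of the square ROI, in feature-map cells (assumed).
    """
    rois = {}
    for part, (x, y) in keypoints.items():
        # Downsample the keypoint coordinates onto the feature map.
        fx, fy = int(x // stride), int(y // stride)
        # One square ROI per body part, centered on the mapped keypoint.
        rois[part] = (fx - margin, fy - margin, fx + margin, fy + margin)
    return rois

rois = keypoints_to_rois({"neck": (64, 40), "waist": (60, 160)}, stride=16, margin=2)
```

Each ROI would then be pooled (e.g. by ROI pooling) to yield the local feature vector for that body part.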
Optionally, the method further includes:
obtaining a rotation angle α based on the line through the target object's waist keypoints in the image to be retrieved and the line through the neck and the waist center point;
rotating the image to be retrieved by the rotation angle to align it, obtaining a rotated retrieval image;
in which case taking the image to be retrieved as input includes: taking the rotated retrieval image as input.
Optionally, the correspondence between the target object and the associated object is established as follows:
to establish a correspondence between a face and a license plate: locate the license plate box and the face box in an image to be processed; determine the position of the vehicle box in the image to be processed according to the license plate box; judge whether the face box lies inside the vehicle box, in the upper region away from the license plate; and if so, establish the correspondence between the face indicated by the face box and the license plate;
to establish a correspondence between a face and a pedestrian: locate the face box and the pedestrian box in an image to be processed; judge whether the ratio of the intersection area of the face box and the pedestrian box to the area of the face box is below a preset ratio; and if not, establish the correspondence between the face indicated by the face box and the pedestrian indicated by the pedestrian box.
Optionally, updating the initial person trajectory with the motion points of the associated object includes:
using the motion points of the associated object to perform add and/or delete operations on the motion points in the initial person trajectory.
Optionally, the attribute categories include license plate, face, and pedestrian.
Optionally, the method further includes:
obtaining the spatial topology of the image capture devices corresponding to the motion points of the target object and/or the associated object;
judging, according to the spatial topology of the image capture devices, whether the person's motion state between adjacent motion points matches a preset state;
and if it does not match, deleting the corresponding motion point.
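One minimal instance of such a consistency check, assuming each capture device's position is known in a shared map coordinate system and the preset state is simply a maximum plausible travel speed (both are assumptions; the disclosure leaves the check open):

```python
from math import hypot

def prune_implausible_points(points, max_speed_mps):
    """Drop motion points that would require implausibly fast travel.

    points: list of (timestamp_s, x_m, y_m) sorted by time, where (x_m, y_m)
    is the capture device's position in a shared map coordinate system.
    """
    kept = [points[0]]
    for t, x, y in points[1:]:
        t0, x0, y0 = kept[-1]
        dt = t - t0
        # Speed implied by moving between the two capture devices.
        if dt > 0 and hypot(x - x0, y - y0) / dt <= max_speed_mps:
            kept.append((t, x, y))
    return kept

pts = prune_implausible_points(
    [(0, 0.0, 0.0), (10, 30.0, 40.0), (11, 5000.0, 40.0)], max_speed_mps=10.0)
```

Here the third sighting would require covering about 5 km in one second, so it is treated as a mismatch and removed.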
The disclosure provides a person trajectory generation apparatus, the apparatus including:
an initial trajectory generation module, configured to search a surveillance video for the target object according to feature information of the target object, obtain motion points of the target object, and generate an initial person trajectory;
an associated-object motion point acquisition module, configured to find an associated object that has a correspondence with the target object and, according to feature information of the associated object, search the surveillance video for the associated object and obtain its motion points, the target object and the associated object having different attribute categories;
a trajectory update module, configured to update the initial person trajectory with the motion points of the associated object to obtain a final person trajectory.
Optionally, if the target object is contained in an image to be retrieved and the type of the image to be retrieved is a pedestrian image, the apparatus further includes:
a target-object feature extraction module, configured to take the image to be retrieved as input and, after processing by a pre-built pedestrian feature extraction model, output the feature information of the target object, the feature information including global feature information of the target object and/or local feature information of the target object.
Optionally, if the network structure of the pedestrian feature extraction model is a CNN and the feature information of the target object is local feature information of the target object, then:
the target-object feature extraction module is configured to obtain keypoint coordinates of the target object extracted from the image to be retrieved and a feature map of the image to be retrieved; downsample the keypoint coordinates to map the keypoints onto the feature map; and use the mapped keypoint coordinates to partition at least one ROI (region of interest) in the feature map, each ROI indicating local feature information of one body part of the target object.
Optionally, the apparatus further includes:
an image rotation module, configured to obtain a rotation angle α based on the line through the target object's waist keypoints in the image to be retrieved and the line through the neck and the waist center point, and to rotate the image to be retrieved by that angle for alignment, obtaining a rotated retrieval image;
the target-object feature extraction module being configured to take the rotated retrieval image as input and output the feature information of the target object after processing by the pedestrian feature extraction model.
Optionally, the apparatus further includes:
a first correspondence-building module, configured to locate the license plate box and the face box in an image to be processed; determine the position of the vehicle box in the image to be processed according to the license plate box; judge whether the face box lies inside the vehicle box, in the upper region away from the license plate; and if so, establish the correspondence between the face indicated by the face box and the license plate;
and/or
a second correspondence-building module, configured to locate the face box and the pedestrian box in an image to be processed; judge whether the ratio of the intersection area of the face box and the pedestrian box to the area of the face box is below a preset ratio; and if not, establish the correspondence between the face indicated by the face box and the pedestrian indicated by the pedestrian box.
Optionally, the trajectory update module is configured to use the motion points of the associated object to perform add and/or delete operations on the motion points in the initial person trajectory.
Optionally, the attribute categories include license plate, face, and pedestrian.
Optionally, the apparatus further includes:
a motion point removal module, configured to obtain the spatial topology of the image capture devices corresponding to the motion points of the target object and/or the associated object; judge, according to that spatial topology, whether the person's motion state between adjacent motion points matches a preset state; and, if it does not match, delete the corresponding motion point.
The disclosure provides a storage medium storing a plurality of instructions that, when loaded by a processor, perform the steps of the person trajectory generation method described above.
The disclosure provides an electronic device, the electronic device including:
the above storage medium; and
a processor, configured to execute the instructions in the storage medium.
In the disclosed scheme, an initial person trajectory can be drawn based on the feature information of the target object, and an associated object with an attribute category different from that of the target object can also be found. Associated retrieval based on the associated object's feature information helps discover correct motion points missing from the initial trajectory and/or delete spurious motion points from it, improving the accuracy of the generated person trajectory.
Other features and advantages of the disclosure are given in the detailed description below.
Brief description of the drawings
The accompanying drawings provide a further understanding of the disclosure and form part of the specification; together with the detailed embodiments below they serve to explain the disclosure, but do not limit it. In the drawings:
Fig. 1 is a flow diagram of the person trajectory generation method of the disclosed scheme;
Fig. 2 is a schematic diagram of alignment processing of the image to be retrieved in the disclosed scheme;
Fig. 3 is a flow diagram of establishing the face-to-license-plate correspondence in the disclosed scheme;
Fig. 4 is a flow diagram of establishing the face-to-pedestrian correspondence in the disclosed scheme;
Fig. 5 is a structural diagram of the network layers of the pedestrian feature extraction model in the disclosed scheme;
Fig. 6 is a composition diagram of the person trajectory generation apparatus of the disclosed scheme;
Fig. 7 is a structural diagram of the electronic device that performs person trajectory generation in the disclosed scheme.
Detailed description of the embodiments
Specific embodiments of the disclosure are described in detail below with reference to the drawings. It should be understood that the specific embodiments described here are only used to describe and explain the disclosure, not to limit it.
Referring to Fig. 1, a flow diagram of the person trajectory generation method of the disclosure is shown. The method may include the following steps.
S101: search a surveillance video for the target object according to feature information of the target object, obtain motion points of the target object, and generate an initial person trajectory.
As an example, the feature information of the target object in the disclosed scheme may be input directly from outside, or may be extracted automatically from an image to be retrieved; the disclosed scheme places no specific limitation on this.
Taking extraction of the target object's feature information from an image to be retrieved as an example, the process may proceed as follows.
For example, if the type of the image to be retrieved is a license plate image, the attribute category of the corresponding target object may be a license plate. The license plate information can be extracted with open-source toolkits such as the EasyPR or OpenALPR libraries. Taking the EasyPR pipeline as an example, license plate detection may first be performed with methods such as an SVM (Support Vector Machine) or SSD (Single Shot Detector) to select the license plate region from the image to be retrieved; text recognition is then performed on the plate region with an artificial neural network (ANN) to extract the specific license plate information. The detailed process can follow the related art and is not elaborated here. The network structure of the ANN model may be, for example, an MLP (Multi-Layer Perceptron) or a CNN (Convolutional Neural Network); the disclosed scheme places no specific limitation on this.
For example, if the type of the image to be retrieved is a face image, the attribute category of the corresponding target object may be a face. Schemes such as MTCNN (Multi-task Cascaded Convolutional Networks), FaceBox, or Faster R-CNN may first be used to select the face region from the image to be retrieved and perform face alignment; face features are then extracted with schemes such as FaceNet, SiameseNet, or CenterLoss. The detailed process can follow the related art and is not elaborated here.
For example, if the type of the image to be retrieved is a pedestrian image, the attribute category of the corresponding target object may be a pedestrian. The feature information of the target object can be extracted with a pre-built pedestrian feature extraction model. Specifically, the image to be retrieved is taken as input and, after processing by the pedestrian feature extraction model, the feature information of the target object is output; this feature information may include global feature information of the target object and/or local feature information of the target object. For an introduction to the pedestrian feature extraction model, see the explanation of Fig. 5 below; it is not detailed here.
As an example, before the target object's feature information is extracted with the pedestrian feature extraction model, the image to be retrieved may first be aligned. Specifically, keypoint localization is first performed on the target object in the image to be retrieved (the detailed process can follow the related art and is not elaborated here); a rotation angle α is then obtained from the line through the waist keypoints and the line through the neck and the waist center point (see the schematic diagram in Fig. 2); finally, the image is rotated by α for alignment, yielding the rotated retrieval image. The rotated retrieval image can then be taken as input, and the target object's feature information is extracted after model processing.
For example, each point of the rotated retrieval image can be expressed as:
x = cos α · x0 + sin α · y0
y = −sin α · x0 + cos α · y0
where (x0, y0) is the coordinate of a point in the image to be retrieved and (x, y) is the coordinate of the corresponding point in the rotated retrieval image.
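The coordinate rotation above can be sketched for a single point as follows (angle in degrees here for readability; a real alignment step would also resample pixel values, which this sketch omits):

```python
from math import sin, cos, radians

def rotate_point(x0, y0, alpha_deg):
    """Rotate an image point by angle alpha, matching the alignment formula:
    x = cos(a)*x0 + sin(a)*y0,  y = -sin(a)*x0 + cos(a)*y0."""
    a = radians(alpha_deg)
    x = cos(a) * x0 + sin(a) * y0
    y = -sin(a) * x0 + cos(a) * y0
    return x, y

x, y = rotate_point(1.0, 0.0, 90.0)  # a point on the x-axis, rotated 90 degrees
```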
After the feature information of the target object is obtained, the surveillance video can be searched for the target object on that basis. For example, candidate images can be cropped from the surveillance video, and the feature information of the object contained in each candidate extracted and compared with the feature information of the target object; if the two are identical or similar, the object in the candidate image is determined to be the target object. As an example, the location of the image capture device that captured the target object can be taken as one motion point of the target object. It should be understood that the location of an image capture device usually corresponds one-to-one with the device's identity, so the device identity can equally serve as the motion point of the target object; the disclosed scheme places no specific limitation on this.
In this way, once retrieval of the target object is completed over the surveillance video captured by multiple image capture devices, all motion points of the target object are obtained, and the initial person trajectory can be drawn by combining each motion point with its corresponding time.
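A minimal sketch of this step, assuming each detection is a (timestamp, capture-device) pair and device identities stand in for locations (the device names are illustrative):

```python
def build_trajectory(detections):
    """Build an initial trajectory from (timestamp, device_id) detections.

    Each capture device stands for one motion point; sorting by time and
    collapsing consecutive hits on the same device yields the trajectory.
    """
    trajectory = []
    for ts, device in sorted(detections):
        if not trajectory or trajectory[-1][1] != device:
            trajectory.append((ts, device))
    return trajectory

track = build_trajectory([(12, "cam_B"), (5, "cam_A"), (6, "cam_A"), (20, "cam_C")])
```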
As an example, the image capture devices in the disclosed scheme may be face capture cameras, checkpoint capture cameras, and the like; the disclosed scheme places no specific limitation on this.
S102: find the associated object that has a correspondence with the target object and, according to feature information of the associated object, search the surveillance video for the associated object to obtain its motion points, the target object and the associated object having different attribute categories.
Practical experience shows that single-modality retrieval in the prior art performs unsatisfactorily. To improve the accuracy of trajectory generation, the disclosed scheme can perform associated retrieval with associated objects of other attribute categories, which helps discover correct motion points missing from the initial person trajectory and delete spurious motion points from it.
As an example, the attribute category of the associated object may be at least one of license plate, face, and pedestrian, as long as it differs from that of the target object. For example, if the attribute category of the target object is a license plate, the attribute category of the associated object may be at least one of face and pedestrian.
During associated retrieval, the associated object corresponding to the target object is first determined from the pre-established correspondence; associated retrieval is then performed in the surveillance video according to the feature information of the associated object, obtaining the associated object's motion points, which are used to update the initial person trajectory.
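The disclosure leaves the add/delete policy of the update open; the following sketch shows only the "add" half, merging associated-object motion points into the initial trajectory (the trajectory representation as (timestamp, device) pairs is an assumption carried over from the earlier sketch):

```python
def update_trajectory(initial, associated):
    """Merge the associated object's motion points into the initial trajectory.

    Points seen via the associated object but missing from the initial
    trajectory are inserted in time order; duplicates are collapsed.
    """
    return sorted(set(initial) | set(associated))

final = update_trajectory([(5, "cam_A"), (20, "cam_C")],
                          [(12, "cam_B"), (20, "cam_C")])
```

A fuller implementation would also apply the delete operation, e.g. using the plausibility check described later to discard points contradicted by the associated object's sightings.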
In practice, a correspondence can be established between any two objects, for example between a license plate and a face, between a license plate and a pedestrian, or between a face and a pedestrian; the disclosed scheme places no specific limitation on this.
In general, over the same period of time a vehicle covers a larger range than a pedestrian, so establishing license-plate-to-face and license-plate-to-pedestrian correspondences, and performing license plate detection on that basis, helps chain the person trajectory over a wide area.
As an example, considering that face recognition is comparatively accurate, correspondences can be established with the face as the pivot: face-to-license-plate and face-to-pedestrian. It should be understood that, on this basis, chaining between a license plate and a pedestrian, or between different license plates, can also be achieved when needed.
It should be understood that the correspondence between objects may be known in advance and entered and saved beforehand, or may be established by automatic recognition; the disclosed scheme places no specific limitation on this. The process of automatically establishing the correspondence between objects is illustrated below.
Referring to Fig. 3, a flow diagram of establishing the face-to-license-plate correspondence in the disclosure is shown. The process may include the following steps.
S201: locate the license plate box and the face box in the image to be processed.
As an example, this scene mostly occurs where license plates can be captured, for example at the gate of a compound or in road surveillance snapshots.
After the image to be processed is obtained, the license plate box and the face box can be located in it; the detailed process can follow the related art and is not elaborated here. It should be understood that the image to be processed may come from the surveillance video used to generate the person trajectory, or from other video files, as long as the video contains the face and license plate for which the correspondence is to be established; the disclosed scheme places no specific limitation on the source of the image to be processed.
S202: determine the position of the vehicle box in the image to be processed according to the license plate box.
In the disclosed scheme, establishing the face-to-license-plate correspondence can be understood as determining who was photographed in the vehicle corresponding to the license plate. Therefore the vehicle box can first be determined from the license plate box, and the person seated in the vehicle then identified from the relative positions of the vehicle box and the face box.
As an example, after the license plate box is obtained, the vehicle box can be determined from the relationship between license plate size and vehicle size. For example, extending 5 to 6 plate heights upward from the plate gives the upper edge of the vehicle box, extending 1 to 2 plate heights downward gives its lower edge, and extending 1 to 2 plate widths to each side gives its left and right edges. The disclosed scheme does not fix these multipliers; they can be set according to the actual dimensions of the vehicle.
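The expansion above can be sketched as follows; the specific multipliers are picked from within the ranges the description gives and are otherwise arbitrary:

```python
def estimate_vehicle_box(plate, up=5.0, down=1.5, side=1.5):
    """Expand a license plate box (x1, y1, x2, y2; y grows downward) into an
    approximate vehicle box: `up` plate heights above, `down` below, and
    `side` plate widths to each side, per the description's 5-6 / 1-2 / 1-2
    ranges."""
    x1, y1, x2, y2 = plate
    w, h = x2 - x1, y2 - y1
    return (x1 - side * w, y1 - up * h, x2 + side * w, y2 + down * h)

box = estimate_vehicle_box((100, 200, 140, 210))  # a 40x10 plate
```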
S203: judge whether the face box lies inside the vehicle box, in the upper region away from the license plate.
S204: if so, establish the correspondence between the face indicated by the face box and the license plate.
It should be understood that a face box found in S201 may belong to a person inside the vehicle, to a passer-by outside the vehicle, or may be a false detection. The disclosed scheme therefore uses the relative positions of the vehicle box and the face box to identify the person inside the vehicle before establishing the face-to-license-plate correspondence.
Practical experience shows that the face of a person inside a vehicle usually lies in the upper part of the vehicle, which gives a criterion for whether a face box found in S201 belongs to a person inside the vehicle: if the face box lies inside the vehicle box, in the upper region away from the license plate, i.e. the upper part of the vehicle, the face belongs to a person inside the vehicle, and the correspondence between the face indicated by the face box and the license plate can be established.
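The in-vehicle test can be sketched as follows. Boxes use the image convention of y growing downward, so the region away from a bottom-mounted plate is the smaller-y part; the 50% cutoff is an assumption, since the description only says "upper region":

```python
def face_in_vehicle(face, vehicle, upper_frac=0.5):
    """Check whether a face box lies inside the vehicle box's upper region.

    face, vehicle: (x1, y1, x2, y2) boxes, y grows downward.
    upper_frac: assumed fraction of the vehicle box counted as "upper".
    """
    fx1, fy1, fx2, fy2 = face
    vx1, vy1, vx2, vy2 = vehicle
    cutoff = vy1 + upper_frac * (vy2 - vy1)
    inside = vx1 <= fx1 and fx2 <= vx2 and vy1 <= fy1 and fy2 <= vy2
    return inside and fy2 <= cutoff

ok = face_in_vehicle((60, 160, 80, 180), (40, 150, 200, 225))
```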
As an example, the disclosed scheme can establish a 1:1 association between a license plate and a face, for example between the license plate and the driver's face; or a 1:2 association, for example between the license plate and the faces of both the driver and the front passenger. The disclosed scheme places no specific limitation on this; it can be decided according to practical requirements.
In practice, if the number of faces that may correspond to a license plate is fixed, for example when a 1:1 association is made and at least two face boxes satisfying the face-vehicle relative position are found, one face box can be chosen at random, or the face box with the highest confidence can be chosen, and the correspondence with the license plate established; the disclosed scheme places no specific limitation on this.
Referring to Fig. 4, a flow diagram of establishing the face-to-pedestrian correspondence in the disclosure is shown. The process may include the following steps.
S301: locate the face box and the pedestrian box in the image to be processed.
As an example, which mostly occurs in ordinary road monitoring, the scene containing face snap machine, example Such as, the day net environment etc. of safe city.
After getting image to be processed, face frame and pedestrian's frame can be therefrom oriented.Detailed process can refer to related skill Art is realized, is not detailed herein.It is to be appreciated that image to be processed may come from the monitor video for generating personage track, It may come from other video files, as long as in video including face, the pedestrian for needing to establish corresponding relationship, disclosure side Case can be not specifically limited the source of image to be processed.
S302: judge whether the ratio of the intersection area of the face frame and the pedestrian frame to the area of the face frame is less than a preset ratio.
S303: if not, establish the correspondence between the face represented by the face frame and the pedestrian represented by the pedestrian frame.
As an example, the disclosed scheme can associate face and pedestrian through an intersection ratio, understood as the ratio of the intersection area of the face frame and the pedestrian frame to the area of the face frame. By computing this ratio, it is judged whether a correspondence exists between the face and the pedestrian, i.e., whether they are the same person. If the ratio is not less than the preset ratio, the match succeeds, and the correspondence between the face represented by the face frame and the pedestrian represented by the pedestrian frame can be established. For example, the preset ratio may be set to 0.9; the disclosed scheme places no specific limitation on this.
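The intersection ratio above is not a full IoU: the intersection area is divided by the face-frame area only, so a small face fully inside a large pedestrian frame scores 1.0. A minimal sketch, with boxes as (x1, y1, x2, y2) and the 0.9 example threshold:

```python
def face_pedestrian_match(face_box, ped_box, thresh=0.9):
    """Return True when the intersection of the two boxes covers at
    least `thresh` of the face-box area, i.e., face and pedestrian
    are judged to be the same person."""
    fx1, fy1, fx2, fy2 = face_box
    px1, py1, px2, py2 = ped_box
    iw = max(0, min(fx2, px2) - max(fx1, px1))  # intersection width
    ih = max(0, min(fy2, py2) - max(fy1, py1))  # intersection height
    face_area = (fx2 - fx1) * (fy2 - fy1)
    if face_area <= 0:
        return False
    return (iw * ih) / face_area >= thresh
```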
In practical applications, one frame of the image to be processed can be chosen as an association frame for establishing the above correspondence. For example, considering that pedestrian detection is slightly slower than face detection, a pedestrian detection frame can serve as the association frame: after the frame is chosen, if the intersection ratio of the face frame and the pedestrian frame in that frame is not less than the preset ratio, the correspondence between face and pedestrian can be established. Alternatively, multiple frames can be chosen as association frames. For example, with 100 association frames, the correspondence between face and pedestrian can be established if the number of successfully matched association frames is not less than a preset number, e.g., 80; the disclosed scheme places no specific limitation on this.
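The multi-frame variant can be sketched as a vote over the chosen association frames, using the same intersection-over-face-area test per frame; the 80-of-100 default mirrors the example counts in the text, and all names are illustrative:

```python
def associate_over_frames(frame_pairs, thresh=0.9, min_matches=80):
    """Link a face and a pedestrian when enough association frames
    match. frame_pairs holds one (face_box, ped_box) tuple per chosen
    frame, boxes as (x1, y1, x2, y2); a frame matches when the
    intersection area covers at least `thresh` of the face box."""
    matched = 0
    for (fx1, fy1, fx2, fy2), (px1, py1, px2, py2) in frame_pairs:
        iw = max(0, min(fx2, px2) - max(fx1, px1))
        ih = max(0, min(fy2, py2) - max(fy1, py1))
        face_area = (fx2 - fx1) * (fy2 - fy1)
        if face_area > 0 and iw * ih >= thresh * face_area:
            matched += 1
    return matched >= min_matches
```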
In practice, the association between face and pedestrian in the disclosed scheme is 1:1, so when S301 determines at least two face frames, besides using the intersection ratio to determine the better-matching face, the relative position of the face frame and the pedestrian frame, the confidence of the face frame, etc. may also be combined, so that finally one face frame is matched to each pedestrian frame. The disclosed scheme does not limit the way the unique face frame is determined; it can be decided according to the practical application scenario.
S103: update the initial person trajectory using the motion points of the associated object to obtain the final person trajectory.
As an example, the update in the disclosed scheme may be embodied as adding correct motion points and/or deleting erroneous motion points.
For example, if the image to be retrieved is a face image, then after the initial person trajectory S1 is generated as described above, the corresponding license plate information can be found based on the correspondence between face and license plate, so that vehicle retrieval can be performed by license plate number. If the license plate is retrieved at some motion point but no face is detected there, that motion point can be added to the trajectory S1 as a missed correct motion point; if a face is detected at some motion point but the license plate number at that point differs from the license plate number at the other points, that motion point can be deleted from S1 as a superfluous erroneous motion point. In this way a trajectory S2 containing both face and license plate is obtained as the final person trajectory.
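The S1-to-S2 update described above can be sketched as follows, under the assumption that each motion point is a small dict carrying a time, a detected plate (or None), and a face flag; the point structure and function name are illustrative, not from the disclosure:

```python
def update_track_with_plate(face_points, plate_points, track_plate):
    """Update an initial face trajectory S1 using plate retrievals:
    plate-only points (plate seen, no face) are added as missed
    correct motion points; points whose plate contradicts the track's
    plate are deleted as superfluous erroneous motion points."""
    track = list(face_points)
    # add missed points: the plate was retrieved but no face detected
    for p in plate_points:
        if not p.get("face"):
            track.append(p)
    # drop points whose detected plate differs from the track's plate
    track = [p for p in track if p.get("plate") in (None, track_plate)]
    return sorted(track, key=lambda p: p["time"])
```

As the text notes, a mismatched plate may also mean the person changed vehicles, so the deletion step could equally be skipped.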
It is to be appreciated that a motion point at which the face is detected but the license plate number differs may also be caused by the user changing vehicles; in view of this, such a motion point may also be retained rather than deleted. The disclosed scheme places no specific limitation on how this situation is handled; it can be decided according to the practical application requirements.
For example, after the initial person trajectory S1 is generated based on the face image, the corresponding pedestrian can also be found based on the correspondence between face and pedestrian, so that pedestrian retrieval can be performed by the pedestrian's characteristic information. If a pedestrian is detected at some motion point but no face is detected there, that motion point can be added to S1 as a missed correct motion point; if a face is detected at some motion point but the pedestrian's characteristic information at that point does not match the characteristic information at the other points, that motion point can be deleted from S1 as a superfluous erroneous motion point. In this way a trajectory S3 containing both face and pedestrian is obtained as the final person trajectory.
For example, on the basis of trajectory S1, pedestrian and license plate can also both be used to update the initial person trajectory, yielding a trajectory S4 containing face, license plate, and pedestrian as the final person trajectory. The specific process can refer to the introduction above and is not detailed here.
From the above examples it can be seen that, compared with the trajectory S1 retrieved from the face image alone, the disclosed scheme helps find missed correct motion points and delete superfluous erroneous motion points, making the final person trajectory more accurate.
As an example, the disclosed scheme may also provide the following way of updating the person trajectory, which helps further improve its accuracy. Specifically, the spatial topological relationship of the image capture devices corresponding to the motion points of the target object and/or the motion points of the associated object can be obtained; according to this spatial topological relationship, it is judged whether the person's motion state between adjacent motion points conforms to a preset state; if not, the corresponding motion point is deleted.
That is, the disclosed scheme can delete superfluous erroneous motion points in combination with the spatial topological relationship of the image capture devices. For example, on a trajectory where motion point A can reach motion points B and C, if the spatial topology shows that the person could not possibly have moved from point A to point B within the short time elapsed, i.e., the person's motion state between motion points A and B does not conform to the preset state, then motion point B can be determined to be a superfluous erroneous motion point and deleted from the trajectory.
As an example, the person's motion state between adjacent motion points may be the motion speed, with the corresponding preset state embodied as a preset speed; alternatively, it may be the motion duration, with the corresponding preset state embodied as a preset time. The disclosed scheme places no specific limitation on this.
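Taking the speed variant as an example, the feasibility check can be sketched as below, under the simplifying assumption that the camera topology is summarized as a distance (in km) along the network per point and the preset state is a speed limit in km/h; all names and the 120 km/h default are illustrative:

```python
def prune_infeasible_points(points, max_speed=120.0):
    """Delete motion points that violate the cameras' spatial
    topology: if the distance between the cameras of two adjacent
    points cannot be covered in the elapsed time, the later point is
    treated as a superfluous erroneous motion point and dropped.
    Each point is (time_hours, distance_km_along_topology)."""
    kept = [points[0]]
    for t, d in points[1:]:
        t0, d0 = kept[-1]
        dt = t - t0
        if dt > 0 and abs(d - d0) / dt <= max_speed:
            kept.append((t, d))
        # else: this point is unreachable from the previous one in
        # the elapsed time, so it is deleted from the trajectory
    return kept
```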
For example, after the initial person trajectory is obtained, the trajectory can first be updated according to the spatial topological relationship of the image capture devices corresponding to the motion points of the target object, and then updated using the motion points of the associated object to obtain the final person trajectory. Alternatively, the trajectory can first be updated using the motion points of the associated object, and then updated using the spatial topological relationship of the image capture devices corresponding to the motion points of the target object and/or the associated object. The disclosed scheme places no specific limitation on the order; it can be decided according to the practical application requirements.
The pedestrian feature extraction model in the disclosed scheme is explained below.
As an example, the pedestrian feature extraction model can extract the pedestrian's global feature information from the pedestrian image, and/or extract the pedestrian's local feature information. For example, the local feature information may be at least one of upper-body feature information and lower-body feature information; alternatively, it may be further subdivided into at least one of head feature information, chest feature information, abdomen feature information, leg feature information, etc. The disclosed scheme places no specific limitation on how the parts are divided.
Taking a pedestrian feature extraction model that can extract both global feature information and local feature information as an example, the model in the disclosed scheme is explained below.
As an example, the network structure of the pedestrian feature extraction model may be a CNN, e.g., the CNN structure of the deep residual network ResNet50; alternatively, it may be embodied as GoogleNet, VGG, SENet, or other network structures. The disclosed scheme places no specific limitation on this.
In practical applications, some network layer of the model, e.g., the Res4f layer, i.e., the last layer of the fourth residual block, can take the structure shown in Fig. 5, including a global branch for extracting global feature information and local branches for extracting local feature information, where N denotes the number of local parts, N >= 2. It is to be appreciated that the structure of Fig. 5 may also be placed at other network layers of the model, e.g., Res3d or Res5c; the disclosed scheme places no specific limitation on this.
Taking the structure of Fig. 5 placed at the Res4f layer as an example: after the image to be retrieved passes through the network layers before Res4f, the feature map of the image to be retrieved is obtained; based on this feature map, the global branch can extract the pedestrian's global semantic information as the pedestrian's global feature information.
For the local branches at the Res4f layer, the feature point coordinates of the target object extracted from the image to be retrieved, i.e., the coordinates of the feature points in the original image, can be used to cut the feature map, and local feature information, i.e., the pedestrian's local semantic information, is extracted based on the cut local feature maps.
Specifically, the feature point coordinates of the target object extracted from the image to be retrieved and the feature map of the image can be obtained; the feature point coordinates are down-sampled to map the feature points onto the feature map; then, using the mapped feature point coordinates, i.e., the coordinates of the feature points in the feature map, at least one ROI (region of interest) is marked off in the feature map, each ROI region being used to indicate the local feature information of one part of the target object.
Taking cutting the feature map into an upper-body feature map and a lower-body feature map by the waist feature point coordinates as an example, the processing of the local branches may be as follows. After the Res4f layer obtains the coordinates of the waist feature points in the image to be retrieved and the feature map of the image, the waist feature point coordinates are first down-sampled by the down-sampling multiple corresponding to the feature map, yielding the corresponding coordinates of the waist feature points on the feature map, i.e., completing the mapping of the waist feature points onto the feature map. Then, using these coordinates, two ROI regions are marked off in the feature map, namely the upper-body feature map and the lower-body feature map; the upper-body feature information of the target object can be extracted from the former, and the lower-body feature information from the latter.
Generally, once the network layer at which local feature information is extracted is determined, the down-sampling multiple applied to the feature map before that layer is known in advance. For example, if the size of the image to be retrieved is 80*80 and the feature map is down-sampled 8 times before the Res4f layer, the feature map obtained at Res4f is 10*10. To map the waist feature points onto the feature map, the waist feature point coordinates are down-sampled by 8, yielding the coordinates of the waist feature points on the feature map; cutting the feature map on this basis gives an upper-body feature map of size 10*5 and a lower-body feature map of size 10*5.
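The 80*80 image, 8x down-sampling, 10*10 map example above can be sketched as follows, assuming a (C, H, W) feature map and a single waist row coordinate; the function name and layout are illustrative:

```python
import numpy as np

def split_by_waist(feature_map, waist_y_img, downsample=8):
    """Map a waist keypoint from image coordinates onto the feature
    map and cut the map into upper-/lower-body ROIs. feature_map is
    (C, H, W); waist_y_img is the keypoint's row in the original
    image; downsample is the known multiple before this layer."""
    waist_y = int(round(waist_y_img / downsample))  # row on the map
    upper = feature_map[:, :waist_y, :]   # upper-body feature map
    lower = feature_map[:, waist_y:, :]   # lower-body feature map
    return upper, lower
```

With an 80-row image and the waist at row 40, the 10-row map splits into two 5-row ROIs, matching the 10*5 halves in the text.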
It is to be appreciated that, to make the features extracted by the local branches more discriminative, a ConvNet can also be applied to the cut local feature maps, with the convolution result taken as the pedestrian's local feature information. As an example, the ConvNet may complete the convolution with 2 to 5 convolutional layers; the disclosed scheme places no specific limitation on this.
As an example, to facilitate the subsequent convolution, the cut local feature maps can be adjusted to a fixed size before being input to the ConvNet. For example, the pooling idea of SPPNet (spatial pyramid pooling) can be used to realize the fixed-size adjustment; the disclosed scheme places no specific limitation on the fixed size.
In summary, after the pedestrian's global feature information is obtained through the global branch and the pedestrian's local feature information through the local branches, all the feature information can be linearly concatenated to obtain the characteristic information of the target object.
For example, the pedestrian's global feature information may be embodied as a K1-dimensional feature vector covering, e.g., the pedestrian's gender, ethnicity, hair color, top color, bottom color, etc.; the disclosed scheme places no specific limitation on the dimensional content or the number of dimensions of the global feature information. The pedestrian's local feature information may be embodied as a K2-dimensional feature vector, and the disclosed scheme likewise places no specific limitation on its dimensional content or number of dimensions.
If each piece of local feature information is a K2-dimensional feature vector, then after all the feature information is linearly concatenated, a (K1+N*K2)-dimensional feature vector is obtained. When pedestrian retrieval is performed based on the characteristic information of the target object, the similarity between two (K1+N*K2)-dimensional feature vectors can be computed; if the similarity exceeds a preset value, e.g., 0.8, the two feature vectors are considered to represent the same person.
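The concatenate-and-compare step can be sketched as below; cosine similarity is an assumption here (the text does not name the similarity measure), and the 0.8 threshold follows the example preset value:

```python
import numpy as np

def person_similarity(g1, locals1, g2, locals2):
    """Concatenate the K1-dim global vector with N K2-dim local
    vectors into one (K1 + N*K2)-dim descriptor per person, then
    compare the two descriptors by cosine similarity."""
    f1 = np.concatenate([g1] + list(locals1))
    f2 = np.concatenate([g2] + list(locals2))
    sim = float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2)))
    return sim, sim > 0.8  # same person when similarity exceeds 0.8
```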
As an example, if the training set for model training is small, then to accelerate convergence, the disclosed scheme may take a model pre-trained on ImageNet as the pre-training model, and use optimization methods such as SGD (stochastic gradient descent) or Adam (adaptive moment estimation) to train 20 to 30 epochs on the data set until the network converges.
For example, the loss functions used in the disclosed scheme to constrain model training may include SoftmaxLoss, VerifyLoss, and MutualLoss.
SoftmaxLoss is used to constrain the classification probability of pedestrian identification. When the classification probabilities of the global branch and of the local branches meet a predetermined probability, model training is considered finished.
SoftmaxLoss can be embodied as the following formula:

L = -Σ_j y_j · log(p_j)

where p_j denotes the predicted probability that a sample belongs to class j; y_j is an indicator of the sample's true class, with y_j = 1 when the sample belongs to class j and y_j = 0 when it does not; and L is the cross-entropy loss produced by the sample in prediction.
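Since y_j is 1 only for the true class, the sum reduces to -log(p_true). A minimal, numerically stabilised sketch for a single sample:

```python
import numpy as np

def softmax_loss(logits, true_class):
    """Cross-entropy L = -sum_j y_j * log(p_j) for one sample; with a
    one-hot y this is simply -log of the softmax probability assigned
    to the true class."""
    z = logits - np.max(logits)            # stabilised softmax
    p = np.exp(z) / np.sum(np.exp(z))
    return -np.log(p[true_class])
```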
VerifyLoss is used to constrain the distinction between different pedestrians. When two pedestrian images are of the same person and their characteristic information is close, the corresponding loss value is small; when the two images are of different people yet their characteristic information is close, the corresponding loss value is large. VerifyLoss thus makes the features of the same person aggregate more and the features of different people disperse more. When the loss values of the global branch and of the local branches meet a preset condition, model training is considered finished.
VerifyLoss can be embodied as the following contrastive formula:

L = (1 + y_ij)/2 · ||f_i - f_j||² + (1 - y_ij)/2 · max(0, m - ||f_i - f_j||)²

where f_i and f_j are the features of the two samples i and j respectively; m (margin) is a hyper-parameter adjusting the constraint on features of different classes, with a larger m pushing the features of different-class samples farther apart, and vice versa; y_ij = 1 indicates that the two samples belong to the same class, and y_ij = -1 indicates that the two samples belong to different classes.
MutualLoss is used to constrain the KL distance (Kullback-Leibler divergence, i.e., relative entropy) between the classification probabilities of the global branch and of the local branches. When the model contains both a global branch and local branches, and both branches use SoftmaxLoss, MutualLoss can additionally be used to constrain model training after the global feature information extracted by the global branch and the local feature information extracted by the local branches are linearly concatenated. Generally, a smaller KL distance indicates that the classification probabilities of the two branches are closer; when the KL distance meets a preset value, model training is considered finished.
MutualLoss can be embodied as the following formula:

D_KL(p¹ ‖ p²) = Σ_{m=1}^{M} p¹_m(x) · log( p¹_m(x) / p²_m(x) )

where M denotes the total number of classes to be classified (for pedestrians, M is the total number of pedestrian identities); p¹_m(x) denotes the probability that sample x is predicted as class m in one branch (global or local), and p²_m(x) denotes the probability that sample x is predicted as class m in the other branch. During training p¹ is held fixed so that p² learns p¹; since p¹ can represent either the global or the local probability, the formula above is exactly the process of the global and local branches learning from each other.
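A minimal sketch of the KL distance between the two branches' class distributions, with a small epsilon added for numerical safety (an implementation detail not taken from the disclosure):

```python
import numpy as np

def mutual_loss(p_fixed, p_learn, eps=1e-12):
    """KL distance D(p_fixed || p_learn) = sum_m p1 * log(p1 / p2)
    between the class probabilities of the two branches; the first
    distribution is held fixed so that the second learns to match
    it, and the loss shrinks as the distributions agree."""
    p1 = np.asarray(p_fixed) + eps
    p2 = np.asarray(p_learn) + eps
    return float(np.sum(p1 * np.log(p1 / p2)))
```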
It is to be appreciated that a loss function meeting the preset condition in the disclosed scheme may be embodied as the loss function reaching its minimum, or as the loss function being no greater than a predetermined value; the disclosed scheme places no specific limitation on this.
Referring to Fig. 6, a schematic diagram of the composition of the disclosed person trajectory generation apparatus is shown. The apparatus may include:
an initial person trajectory generation module 401, configured to search for the target object in the monitoring video according to the characteristic information of the target object, obtain the motion points of the target object, and generate the initial person trajectory;
an associated-object motion point obtaining module 402, configured to find the associated object having a correspondence with the target object, and to search for the associated object in the monitoring video according to the characteristic information of the associated object, obtaining the motion points of the associated object, the target object and the associated object having different attribute categories;
a person trajectory update module 403, configured to update the initial person trajectory using the motion points of the associated object, obtaining the final person trajectory.
Optionally, if the target object is contained in an image to be retrieved and the type of the image to be retrieved is a pedestrian image, the apparatus further includes:
a target object characteristic information extraction module, configured to take the image to be retrieved as input and, after processing by the pre-constructed pedestrian feature extraction model, output the characteristic information of the target object, the characteristic information of the target object including the global feature information of the target object and/or the local feature information of the target object.
Optionally, if the network structure of the pedestrian feature extraction model is a CNN and the characteristic information of the target object is the local feature information of the target object, then
the target object characteristic information extraction module is configured to obtain the feature point coordinates of the target object extracted from the image to be retrieved and the feature map of the image to be retrieved; down-sample the feature point coordinates to map the feature points onto the feature map; and, using the feature point coordinates mapped onto the feature map, mark off at least one ROI (region of interest) in the feature map, each ROI region being used to indicate the local feature information of one part of the target object.
Optionally, the apparatus further includes:
an image rotation module, configured to obtain a rotation angle α based on the line of the waist feature points of the target object in the image to be retrieved and the line between the neck and the waist center point, and to perform image alignment rotation on the image to be retrieved using the rotation angle, obtaining the rotated image to be retrieved;
the target object characteristic information extraction module then being configured to take the rotated image to be retrieved as input and, after processing by the pedestrian feature extraction model, output the characteristic information of the target object.
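The rotation angle α can be sketched as the angle that makes the neck-to-waist-centre line vertical (equivalently, the waist keypoint line horizontal); the exact geometric construction is an assumption here, since the text only names the two lines, and all names are illustrative:

```python
import math

def alignment_angle(neck, waist_center):
    """Angle of the torso axis (neck -> waist centre) measured from
    the vertical, in degrees; rotating the image by this angle
    uprights the pedestrian. Points are (x, y) image coordinates
    with y growing downward."""
    dx = waist_center[0] - neck[0]
    dy = waist_center[1] - neck[1]
    return math.degrees(math.atan2(dx, dy))
```

The actual image rotation would then apply an affine warp by this angle about, e.g., the waist centre.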
Optionally, the apparatus further includes:
a first correspondence establishing module, configured to locate the license plate position and the face frame position in the image to be processed; determine the position of the vehicle frame in the image to be processed according to the license plate position; judge whether the face frame is located inside the vehicle frame, in the upper region far from the license plate; and if so, establish the correspondence between the face represented by the face frame and the license plate;
and/or
a second correspondence establishing module, configured to locate the face frame and the pedestrian frame in the image to be processed; judge whether the ratio of the intersection area of the face frame and the pedestrian frame to the area of the face frame is less than a preset ratio; and if not, establish the correspondence between the face represented by the face frame and the pedestrian represented by the pedestrian frame.
Optionally, the person trajectory update module is configured to perform add and/or delete operations on the motion points in the initial person trajectory using the motion points of the associated object.
Optionally, the attribute categories include license plate, face, and pedestrian.
Optionally, the apparatus further includes:
a motion point deletion module, configured to obtain the spatial topological relationship of the image capture devices corresponding to the motion points of the target object and/or the motion points of the associated object; judge, according to the spatial topological relationship of the image capture devices, whether the person's motion state between adjacent motion points conforms to a preset state; and if not, delete the corresponding motion point.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiment of the related method and will not be elaborated here.
Referring to Fig. 7, a structural schematic diagram of an electronic device 500 of the disclosure for person trajectory generation is shown. The electronic device 500 may include at least a processor 501 and a storage medium 502; as an example, the processor 501 and the storage medium 502 may be connected by a bus or in other ways, connection by a bus being taken as the example in Fig. 7. The number of processors 501 may be one or more, one processor being taken as the example in Fig. 7. The storage medium 502 represents a storage resource for storing instructions executable by the processor 501, such as an application program. The processor 501 may be configured to load the instructions in the storage medium so as to execute the above person trajectory generation method.
The preferred embodiments of the disclosure have been described in detail above with reference to the drawings; however, the disclosure is not limited to the specific details of the above embodiments. Within the scope of the technical conception of the disclosure, a variety of simple variants can be made to the technical solution of the disclosure, and these simple variants all belong to the protection scope of the disclosure.
It should further be noted that the specific technical features described in the above specific embodiments can, where not contradictory, be combined in any suitable way; to avoid unnecessary repetition, the disclosure gives no further explanation of the various possible combinations.
In addition, the various different embodiments of the disclosure can also be combined arbitrarily; as long as a combination does not depart from the idea of the disclosure, it should likewise be regarded as content disclosed by the disclosure.

Claims (18)

1. a kind of personage's orbit generation method, which is characterized in that the method includes:
According to the characteristic information of target object, the target object is searched for from monitor video, obtains the fortune of the target object Dynamic point, generates initial personage track;
The affiliated partner that there is corresponding relationship with the target object is searched, and according to the characteristic information of the affiliated partner, from Search for the affiliated partner in the monitor video, obtain the movement point of the affiliated partner, the target object with it is described Affiliated partner has different attribute classifications;
The initial personage track is updated using the movement point of the affiliated partner, obtains final personage track.
2. the method according to claim 1, wherein if the target object is contained in image to be retrieved, And the type of the image to be retrieved is pedestrian image, then the mode for obtaining the characteristic information of the target object is:
Using the image to be retrieved as input, after pedestrian's Feature Selection Model processing through constructing in advance, the target is exported The characteristic information of object, the characteristic information of the target object include the global characteristics information and/or the target of target object The local feature information of object.
3. according to the method described in claim 2, it is characterized in that, if the network structure of pedestrian's Feature Selection Model is CNN, and the characteristic information of the target object is the local feature information of the target object, the then row through constructing in advance After the processing of people's Feature Selection Model, the characteristic information of the target object is exported, including:
Obtain the characteristic point coordinate of the target object extracted from the image to be retrieved and the feature of the image to be retrieved Figure;
Down-sampling is carried out to the characteristic point coordinate, the characteristic point is mapped in the characteristic pattern;
Using the characteristic point coordinate being mapped in the characteristic pattern, it is interested that at least one ROI is marked off in the characteristic pattern Region, the ROI region are used to indicate the local feature information of a component of the target object.
4. according to the method described in claim 2, it is characterized in that, the method also includes:
The line of waist characteristic point based on the target object in the image to be retrieved and the company of neck and waist central point Line obtains rotation angle [alpha];
Image alignment rotation is carried out to the image to be retrieved using the rotation angle, retrieves image after being rotated;
It is described using the image to be retrieved as input, including:Image will be retrieved after the rotation as input.
5. the method according to claim 1, wherein establish it is right between the target object and the affiliated partner The mode that should be related to is:
If establishing the corresponding relationship between face and license plate, mode is:Oriented in image to be processed license plate position and Face frame position;According to the license plate position, the position of vehicle frame is determined in the image to be processed;Judge the face Whether frame is located at the upper area of internal, the separate license plate of the vehicle frame;If it is, establishing the face that the face frame indicates With the corresponding relationship between the license plate;
If establishing the corresponding relationship between face and pedestrian, mode is:Face frame and row are oriented in image to be processed People's frame;Judge the intersection area of the face frame and pedestrian's frame and the area ratio of the face frame, if be less than default Ratio;If it is not, then establishing the corresponding relationship between the pedestrian that the face that the face frame indicates and pedestrian's frame indicate.
6. the method according to claim 1, wherein the movement point using the affiliated partner updates institute Initial personage track is stated, including:
Using the movement point of the affiliated partner, the movement point in the initial personage track is increased and/or deleted Except operation.
7. the method according to claim 1, wherein the attribute classification includes license plate, face and pedestrian.
8. The method according to any one of claims 1 to 7, wherein the method further comprises:
obtaining the spatial topology of the image capture devices corresponding to the motion points of the target object and/or the motion points of the associated object;
judging, according to the spatial topology of the image capture devices, whether the person's motion state between adjacent motion points conforms to a preset state;
if not, deleting the corresponding motion points.
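The topology check can be illustrated by treating the "preset state" as a maximum plausible travel speed between cameras; the speed threshold and the camera-distance table below are invented for the sketch:

```python
def filter_motion_points(points, distance_m, max_speed=10.0):
    """Drop motion points that imply a physically impossible transition.
    points:     time-ordered list of (timestamp_s, camera_id) tuples
    distance_m: dict mapping a camera-id pair to metres between cameras
    max_speed:  hypothetical plausibility threshold in m/s"""
    kept = [points[0]]
    for t, cam in points[1:]:
        t0, cam0 = kept[-1]
        dist = distance_m.get((cam0, cam), distance_m.get((cam, cam0), 0.0))
        dt = t - t0
        if dt > 0 and dist / dt <= max_speed:
            kept.append((t, cam))      # plausible transition: keep the point
        # otherwise the point is deleted as inconsistent with the topology
    return kept
```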
9. A person trajectory generation apparatus, wherein the apparatus comprises:
an initial person trajectory generation module, configured to search a surveillance video for the target object according to characteristic information of the target object, obtain the motion points of the target object, and generate an initial person trajectory;
an associated-object motion point acquisition module, configured to look up an associated object having a correspondence with the target object, search the surveillance video for the associated object according to characteristic information of the associated object, and obtain the motion points of the associated object, the target object and the associated object having different attribute categories;
a person trajectory update module, configured to update the initial person trajectory using the motion points of the associated object, to obtain a final person trajectory.
10. The apparatus according to claim 9, wherein if the target object is contained in an image to be retrieved and the type of the image to be retrieved is a pedestrian image, the apparatus further comprises:
a target object characteristic information extraction module, configured to take the image to be retrieved as input and, after processing by a pre-constructed pedestrian feature extraction model, output the characteristic information of the target object, the characteristic information of the target object including global characteristic information of the target object and/or local characteristic information of the target object.
11. The apparatus according to claim 10, wherein if the network structure of the pedestrian feature extraction model is a CNN and the characteristic information of the target object is the local characteristic information of the target object, then
the target object characteristic information extraction module is configured to obtain the feature point coordinates of the target object extracted from the image to be retrieved and the feature map of the image to be retrieved; down-sample the feature point coordinates to map the feature points into the feature map; and use the feature point coordinates mapped into the feature map to partition at least one ROI (region of interest) in the feature map, the ROI indicating the local characteristic information of one body part of the target object.
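The coordinate mapping in claim 11 can be sketched as integer division by the network stride followed by cropping a small window in the feature map. The stride of 16 and the 3×3 window are assumptions, and the feature map is a plain nested list to keep the sketch dependency-free:

```python
def map_keypoint(x, y, stride=16):
    """Down-sample an image-space coordinate into feature-map space."""
    return x // stride, y // stride

def roi_slice(fmap, fx, fy, half=1):
    """Cut a (2*half+1)-sided ROI around (fx, fy), clipped to the map.
    fmap is indexed [row][col]; each cell holds a feature vector."""
    h, w = len(fmap), len(fmap[0])
    y0, y1 = max(fy - half, 0), min(fy + half + 1, h)
    x0, x1 = max(fx - half, 0), min(fx + half + 1, w)
    return [row[x0:x1] for row in fmap[y0:y1]]
```

In a real CNN pipeline the same idea appears as ROI pooling / ROI align over the backbone's feature map.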
12. The apparatus according to claim 10, wherein the apparatus further comprises:
an image rotation module, configured to obtain a rotation angle α based on the line connecting the waist feature points of the target object in the image to be retrieved and the line connecting the neck and the waist center point, and to perform an alignment rotation on the image to be retrieved using the rotation angle, to obtain a rotated image to be retrieved;
the target object characteristic information extraction module being configured to take the rotated image to be retrieved as input and, after processing by the pedestrian feature extraction model, output the characteristic information of the target object.
13. The apparatus according to claim 9, wherein the apparatus further comprises:
a first correspondence establishing module, configured to locate a license plate position and a face frame position in an image to be processed; determine the position of a vehicle frame in the image to be processed according to the license plate position; judge whether the face frame lies inside the vehicle frame, in the upper region away from the license plate; and if so, establish a correspondence between the face indicated by the face frame and the license plate;
and/or
a second correspondence establishing module, configured to locate a face frame and a pedestrian frame in an image to be processed; judge whether the ratio of the intersection area of the face frame and the pedestrian frame to the area of the face frame is less than a preset ratio; and if not, establish a correspondence between the face indicated by the face frame and the pedestrian indicated by the pedestrian frame.
14. The apparatus according to claim 9, wherein
the person trajectory update module is configured to use the motion points of the associated object to add motion points to and/or delete motion points from the initial person trajectory.
15. The apparatus according to claim 9, wherein the attribute categories include license plate, face, and pedestrian.
16. The apparatus according to any one of claims 9 to 15, wherein the apparatus further comprises:
a motion point deletion module, configured to obtain the spatial topology of the image capture devices corresponding to the motion points of the target object and/or the motion points of the associated object; judge, according to the spatial topology of the image capture devices, whether the person's motion state between adjacent motion points conforms to a preset state; and if not, delete the corresponding motion points.
17. A storage medium storing a plurality of instructions, wherein the instructions are loaded by a processor to perform the steps of the method of any one of claims 1 to 8.
18. An electronic device, wherein the electronic device comprises:
the storage medium of claim 17; and
a processor, configured to execute the instructions in the storage medium.
CN201810348274.8A 2018-04-18 2018-04-18 Character track generation method and device, storage medium and electronic equipment Active CN108875548B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810348274.8A CN108875548B (en) 2018-04-18 2018-04-18 Character track generation method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN108875548A true CN108875548A (en) 2018-11-23
CN108875548B CN108875548B (en) 2022-02-01

Family

ID=64326981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810348274.8A Active CN108875548B (en) 2018-04-18 2018-04-18 Character track generation method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN108875548B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130035265A (en) * 2013-03-19 2013-04-08 (주)파슨텍 Image data search system
CN104717471A (en) * 2015-03-27 2015-06-17 成都逸泊科技有限公司 Distributed video monitoring parking anti-theft system
CN106448160A (en) * 2016-09-22 2017-02-22 江苏理工学院 Target person tracking method combining vehicle driving track and monitoring video data
CN106649487A (en) * 2016-10-09 2017-05-10 苏州大学 Image retrieval method based on target of interest
CN107563323A (en) * 2017-08-30 2018-01-09 华中科技大学 Video face feature point localization method

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291129A (en) * 2018-12-06 2020-06-16 浙江宇视科技有限公司 Target person tracking method and device based on multidimensional data research and judgment
CN111291129B (en) * 2018-12-06 2024-02-02 浙江宇视科技有限公司 Target person tracking method and device based on multidimensional data research and judgment
CN109753901B (en) * 2018-12-21 2023-03-24 上海交通大学 Indoor pedestrian tracing method and device based on pedestrian recognition, computer equipment and storage medium
CN109753901A (en) * 2018-12-21 2019-05-14 上海交通大学 Indoor pedestrian tracing method and device based on pedestrian recognition, computer equipment and storage medium
CN111429476B (en) * 2019-01-09 2023-10-20 杭州海康威视系统技术有限公司 Method and device for determining action track of target person
CN111429476A (en) * 2019-01-09 2020-07-17 杭州海康威视系统技术有限公司 Method and device for determining action track of target person
CN111582008A (en) * 2019-02-19 2020-08-25 富士通株式会社 Device and method for training classification model and device for classification by using classification model
EP3699813A1 (en) * 2019-02-19 2020-08-26 Fujitsu Limited Apparatus and method for training classification model and apparatus for classifying with classification model
CN111582008B (en) * 2019-02-19 2023-09-08 富士通株式会社 Device and method for training classification model and device for classifying by using classification model
US11113513B2 (en) 2019-02-19 2021-09-07 Fujitsu Limited Apparatus and method for training classification model and apparatus for classifying with classification model
WO2020172870A1 (en) * 2019-02-28 2020-09-03 深圳市大疆创新科技有限公司 Method and apparatus for determining motion trajectory of target object
CN110008379A (en) * 2019-03-19 2019-07-12 北京旷视科技有限公司 Monitoring image processing method and processing device
CN110309765B (en) * 2019-06-27 2021-08-24 浙江工业大学 High-efficiency detection method for video moving target
CN110458113A (en) * 2019-08-14 2019-11-15 旭辉卓越健康信息科技有限公司 Small face recognition method in non-cooperative face scenarios
CN110955794A (en) * 2019-10-12 2020-04-03 北京地平线机器人技术研发有限公司 Method and device for searching associated object and electronic equipment
CN110852269A (en) * 2019-11-11 2020-02-28 青岛海信网络科技股份有限公司 Cross-lens portrait correlation analysis method and device based on feature clustering
CN110852269B (en) * 2019-11-11 2022-05-20 青岛海信网络科技股份有限公司 Cross-lens portrait correlation analysis method and device based on feature clustering
CN111400550A (en) * 2019-12-30 2020-07-10 深圳市商汤科技有限公司 Target motion trajectory construction method and device and computer storage medium
CN111506691A (en) * 2020-04-20 2020-08-07 杭州数澜科技有限公司 Track matching method and system based on depth matching model
CN112069993A (en) * 2020-09-04 2020-12-11 西安西图之光智能科技有限公司 Dense face detection method and system based on facial features mask constraint and storage medium
CN112069993B (en) * 2020-09-04 2024-02-13 西安西图之光智能科技有限公司 Dense face detection method and system based on five-sense organ mask constraint and storage medium
CN112734802A (en) * 2020-12-31 2021-04-30 杭州海康威视系统技术有限公司 Track acquisition method and device
CN112734802B (en) * 2020-12-31 2024-02-09 杭州海康威视系统技术有限公司 Track acquisition method and device
CN113850243A (en) * 2021-11-29 2021-12-28 北京的卢深视科技有限公司 Model training method, face recognition method, electronic device and storage medium
CN116935305A (en) * 2023-06-20 2023-10-24 联城科技(河北)股份有限公司 Intelligent security monitoring method, system, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN108875548B (en) 2022-02-01

Similar Documents

Publication Publication Date Title
CN108875548A (en) Person trajectory generation method and device, storage medium, and electronic device
Tran et al. Rich image captioning in the wild
CN109657533A (en) Pedestrian re-identification method and related product
US7333963B2 (en) Cognitive memory and auto-associative neural network based search engine for computer and network located images and photographs
CN109886078A (en) Target object retrieval and localization method and device
CN109271832A (en) Crowd flow analysis method, crowd flow analysis device, and crowd flow analysis system
CN107506707A (en) Face detection using small-scale convolutional neural network modules in embedded systems
WO2019214453A1 (en) Content sharing system, method, labeling method, server and terminal device
CN109241349A (en) Deep-learning-based multi-target classification and retrieval method and system for surveillance video
CN109344404A (en) Context-aware dual-attention natural language inference method
CN105913507A (en) Attendance checking method and system
CN112734803B (en) Single target tracking method, device, equipment and storage medium based on character description
CN108960124A (en) Image processing method and device for pedestrian re-identification
Stylianou et al. Traffickcam: Crowdsourced and computer vision based approaches to fighting sex trafficking
CN112149494A (en) Multi-person posture recognition method and system
Hinami et al. Discriminative learning of open-vocabulary object retrieval and localization by negative phrase augmentation
Ou et al. Automatic drug pills detection based on convolution neural network
CN111582224A (en) Face recognition system and method
CN109784295A (en) Video stream feature recognition method, device, equipment, and storage medium
Chen et al. Smartphone based outdoor navigation and obstacle avoidance system for the visually impaired
CN113762331A (en) Relational self-distillation method, apparatus and system, and storage medium
Zhang et al. Human deep squat detection method based on MediaPipe combined with Yolov5 network
CN115018215B (en) Population residence prediction method, system and medium based on multi-modal cognitive atlas
CN116958512A (en) Target detection method, target detection device, computer readable medium and electronic equipment
US11783587B2 (en) Deep learning tattoo match system based

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant