CN109271842A - A general object detection method, system, terminal and storage medium based on key point regression - Google Patents

A general object detection method, system, terminal and storage medium based on key point regression

Info

Publication number
CN109271842A
CN109271842A (application CN201810833046.XA)
Authority
CN
China
Prior art keywords
key point
feature map
tracked target
object detection
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810833046.XA
Other languages
Chinese (zh)
Inventor
吴子章
王凡
唐锐
李坤仑
丁丽珠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Anchi Zongmu Intelligent Technology Co Ltd
Original Assignee
Beijing Anchi Zongmu Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Anchi Zongmu Intelligent Technology Co Ltd filed Critical Beijing Anchi Zongmu Intelligent Technology Co Ltd
Priority to CN201810833046.XA priority Critical patent/CN109271842A/en
Publication of CN109271842A publication Critical patent/CN109271842A/en
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The present invention provides a general object detection method, system, terminal and storage medium based on key point regression, comprising the following steps. S01: preset the key point labels of the tracked target, determine the detection region of the tracked target, and obtain the relative positions of the preset key point labels within the detection region, recorded as the key point first location information. S02: extract the feature map of the target detection region, and obtain the relative positions of the tracked target's key points in the feature map, recorded as the key point second location information. S03: taking the key point first location information and the key point second location information as input, compute a loss function to optimize the network structure. By performing multi-scale feature fusion on the corresponding feature maps and then carrying out loss-function regression in two stages, the invention reduces the regression difficulty and improves the performance of the network structure; moreover, tracked targets of multiple categories can be detected simultaneously, without interference between targets of different categories.

Description

A general object detection method, system, terminal and storage medium based on key point regression
Technical field
The present invention relates to the technical field of automotive electronics, and more particularly to a general object detection method, system, terminal and storage medium based on key point regression.
Background art
ADAS (advanced driver assistance systems), also known as active safety systems, mainly acquire and process images and radar data to obtain information such as the distance, position and shape of target objects. When tracking a target object, the appearance of the same object, and of objects of the same type, often differs greatly between images because of the object's own state and the surrounding scene; images taken at different times, resolutions, illuminations and poses are hard to match against each other. Key points are local extrema with orientation information detected in images across different scale spaces. While an autonomous vehicle is driving, its cameras capture the objects on and around the road; for objects such as vehicles, pedestrians, guideboards and light poles, a key point detection algorithm can regress the corresponding key points, and the key point information can then assist the autonomous vehicle in localization.
Summary of the invention
To solve the above and other potential technical problems, the present invention provides a general object detection method, system, terminal and storage medium based on key point regression. First, the multiple categories of tracked targets, and each individual within each category, are encoded; the encoding of each category occupies a fixed number of digits and only a portion of the total encoding, so the categories do not interfere with each other during learning. Second, the ROI (region of interest) boxes are expanded per category so that the ROIs are more valid. Third, the feature maps corresponding to the tracked targets undergo multi-scale feature fusion. Fourth, a standard feature map is used to improve image precision without affecting the running time. Fifth, loss-function regression is performed in two stages: the standard feature map is first down-sampled to obtain a down-sampled feature map. In stage one, learning is performed on the down-sampled feature map, and once learning is sufficient the key point positions in the down-sampled feature map are mapped into the standard feature map; in stage two, learning is performed on the mapped standard feature map, and a mask is used so that only the mapped positions of the key points are learned, reducing the regression difficulty.
A general object detection method based on key point regression, comprising the following steps:
S01: preset the key point labels of the tracked target, determine the detection region of the tracked target, and obtain the relative positions of the preset key point labels within the detection region, recorded as the key point first location information;
S02: extract the feature map of the target detection region, and obtain the relative positions of the tracked target's key points in the feature map, recorded as the key point second location information;
S03: taking the key point first location information and the key point second location information as input, compute a loss function to optimize the network structure.
Further, the tracked targets in step S01 belong to at least two categories, and the key point labels of the tracked targets are preset separately for each preset target category and for each preset target individual within each category.
Further, when the detection regions of the tracked targets are determined in step S01, a detection region is determined for each tracked target individual in each category according to its category, and the preset key point labels of each tracked target individual are then matched to the detection region containing that individual, yielding the key point information corresponding to each tracked target individual.
Further, when the key point labels of the tracked targets are preset in step S01 for each target category and each individual within each category, each key point includes the location of the pre-selection box of the tracked target it belongs to and a flag stating whether the key point is visible on the tracked target; when stating whether the key point is visible, the visibility of the key point is given a weight.
Further, when the feature map of the target detection region is extracted in step S02 and the relative positions of the tracked target's key points in the feature map are obtained, the relative position of a key point in the feature map is acquired as follows: the part containing the key point is located, via the mapping relations, in the feature map produced by each convolutional layer of the base network; the position of that part is marked in each feature map; the key point's position in each feature map is scored with the visibility weight of the key point in that feature map; and the key point's position in the highest-scoring feature map is recorded as the key point's location in the feature map.
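The selection of the highest-scoring feature map described above can be sketched as follows. This is only an illustrative reading of the claim, not the patent's implementation; the layer names, strides, visibility weights, and the floor-division coordinate mapping are all assumptions.

```python
# Sketch: map an image-space key point into each convolutional layer's
# feature map, score each position with that layer's visibility weight,
# and keep the highest-scoring layer's position as the key point's
# feature-map location. All names and numbers are illustrative.

def map_to_layer(x, y, stride):
    # Map an image-space key point into a feature map downsampled by `stride`.
    return (x // stride, y // stride)

def best_feature_map_position(keypoint, layers):
    # `layers` is a list of (layer_name, stride, visibility_weight) tuples.
    x, y = keypoint
    scored = []
    for name, stride, weight in layers:
        scored.append((weight, name, map_to_layer(x, y, stride)))
    weight, name, pos = max(scored)   # highest visibility-weight score wins
    return name, pos

layers = [("conv2", 4, 0.6), ("conv3", 8, 0.9), ("conv4", 16, 0.7)]
print(best_feature_map_position((666, 237), layers))  # → ('conv3', (83, 29))
```

In this sketch the weight plays the role of the per-feature-map visibility score the claim assigns before "selecting the highest-scoring" map.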
Giving weights to key point positions, compared with the traditional RCNN pipeline that performs bbox regression at the end, greatly simplifies the network structure. This is equivalent to moving the bbox regression inside the neural network, which, together with region proposal and classification, forms a multi-task model in which the tasks share convolutional features and promote each other.
Further, step S02 further includes, after extracting the feature map of the target detection region, a step S021 of fusing the features of the feature maps.
Further, the feature fusion in step S021 is limited to the feature maps of the middle and lower layers, i.e., among the convolutional layers of the neural network, only the feature maps of the middle and lower convolutional layers take part in the fusion.
Further, the feature fusion in step S021 is dense feature fusion, i.e., when selecting convolutional layers of the neural network for fusion, as many convolutional layers as possible are chosen.
Further, after the detection region of the tracked target is determined in step S01 and before the feature map of the target detection region is extracted in step S02, a scale-space conversion step S01a is included: the feature maps of all target detection regions of the same category are converted into standard feature maps of identical scale, and key point detection is then performed to obtain the relative positions of the key points in the feature maps. The conversion of the feature maps of all same-category target detection regions into standard feature maps of identical scale uses an ROI Pooling layer, which extracts a fixed-size feature representation for each region; type recognition is then performed through a normal softmax.
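The ROI Pooling idea used in step S01a can be sketched as follows: a variable-sized region is divided into a fixed grid of bins and the maximum of each bin is taken, so every region yields a feature map of identical scale. This is a minimal sketch; the floor-division bin edges are an assumption, not the patent's exact scheme.

```python
# Sketch of ROI Pooling: pool a variable-sized region into a fixed
# out_h x out_w grid by taking the max of each bin, so regions of any
# size produce feature maps of identical scale.

def roi_pool(region, out_h, out_w):
    in_h, in_w = len(region), len(region[0])
    pooled = []
    for i in range(out_h):
        # Row range of bin i (at least one row per bin).
        y0 = i * in_h // out_h
        y1 = max((i + 1) * in_h // out_h, y0 + 1)
        row = []
        for j in range(out_w):
            x0 = j * in_w // out_w
            x1 = max((j + 1) * in_w // out_w, x0 + 1)
            row.append(max(region[y][x] for y in range(y0, y1)
                                        for x in range(x0, x1)))
        pooled.append(row)
    return pooled

region = [[r * 10 + c for c in range(6)] for r in range(4)]  # a 4x6 region
print(roi_pool(region, 2, 2))  # → [[12, 15], [32, 35]]
```

Whatever the input region's size, the output grid is fixed, which is what lets the same-category regions share one standard feature map scale.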
Further, the standard feature map is a 56*56 feature map, and the size of the standard feature map is slightly larger than the size of the tracked target's candidate box, to prevent key points on the exposed edges of the tracked target from falling outside the standard feature map.
Further, before the relative positions of the tracked target's key points in the feature map are obtained, a step S01b of down-sampling the standard feature map is included: a down-sampled feature map is obtained and used as input to train a down-sampling localization network; the down-sampled feature map is then fed into the trained down-sampling localization network, and the output key point locations are mapped back into the standard feature map.
Further, the obtained 56*56 standard feature map is down-sampled to obtain a 7*7 down-sampled feature map; learning is performed on the 7*7 down-sampled feature map, and once learning is sufficient the obtained key point positions are mapped into the 56*56 standard feature map.
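The coordinate mapping between the two stages can be sketched as follows: a key point regressed on the coarse 7*7 map is lifted back into the 56*56 standard feature map, where stage two refines it. The center-of-cell convention used here is an assumption for illustration; only the 56*56 and 7*7 sizes come from the text.

```python
# Sketch of the two-stage coordinate mapping between the 7*7 down-sampled
# feature map and the 56*56 standard feature map.

STD, DOWN = 56, 7
SCALE = STD // DOWN   # each 7*7 cell covers an 8*8 patch of the 56*56 map

def coarse_to_standard(cx, cy):
    # Map a 7*7 cell (cx, cy) to the center of its 8*8 patch in the 56*56 map.
    return (cx * SCALE + SCALE // 2, cy * SCALE + SCALE // 2)

def standard_to_coarse(sx, sy):
    # Map a 56*56 position back to the coarse 7*7 cell that contains it.
    return (sx // SCALE, sy // SCALE)

print(coarse_to_standard(3, 5))    # stage-one estimate lifted to 56*56 → (28, 44)
print(standard_to_coarse(28, 44))  # and back to the coarse grid → (3, 5)
```

Stage one only has to localize to one of 7*7 = 49 cells; stage two then refines within the mapped 8*8 patch, which is the "reduced regression difficulty" the text claims.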
Further, a step S01c is included: a mask operation is applied to the standard feature map onto which the down-sampled key point positions have been mapped, and a standard-feature localization network is trained so that it learns only the mapped positions of the key points in the standard feature map.
Further, on the 56*56 standard feature map, a mask is used so that only the parts containing key points are learned, lowering the learning difficulty; learning is driven by a loss function.
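The mask operation of step S01c can be sketched as follows: a binary mask is set to 1 only around the mapped key point positions, and the loss is restricted to those cells. The 3*3 window radius and the plain absolute-difference loss are assumptions for illustration; the text specifies only that learning is confined to the parts containing key points.

```python
# Sketch of the mask operation: restrict an L1-style loss on the standard
# feature map to a small window around each mapped key point, so the
# network learns only the masked positions.

def masked_l1_loss(pred, target, keypoints, radius=1):
    h, w = len(pred), len(pred[0])
    mask = [[0] * w for _ in range(h)]
    for (kx, ky) in keypoints:                 # mark cells near each key point
        for y in range(max(0, ky - radius), min(h, ky + radius + 1)):
            for x in range(max(0, kx - radius), min(w, kx + radius + 1)):
                mask[y][x] = 1
    loss = sum(abs(pred[y][x] - target[y][x])
               for y in range(h) for x in range(w) if mask[y][x])
    return loss, sum(map(sum, mask))           # loss and number of learned cells

pred = [[0.0] * 5 for _ in range(5)]
target = [[1.0] * 5 for _ in range(5)]
loss, n = masked_l1_loss(pred, target, [(2, 2)])
print(loss, n)   # only the 3*3 window around (2, 2) contributes → 9.0 9
```

Of the 25 cells, only the 9 masked ones contribute to the loss, which is how the mask lowers the learning difficulty.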
Further, the target detection region is marked with its upper-left corner as origin, giving parameters (X, Y); the width of the target detection region is W and its height is H, giving the target detection region parameters (X, Y, W, H).
Further, in the network structure, the base section uses a resnet50 network structure and the detection section uses an rrc network structure.
Further, the network structure of the key point detection section obtains the lower-layer feature maps from the base section, passes the windows of each feature map through an RoI Pooling layer to produce fixed-size feature maps, fuses the fixed-size feature maps with a concat function, and obtains the standard feature map through at least one convolution and pooling operation; the standard feature map, together with the preset key point labels of the tracked target, is taken as input to generate the first loss function.
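The concat fusion step above can be sketched as follows: after RoI Pooling, every contributing layer's window has the same fixed spatial size, so their channels can simply be stacked before the convolution and pooling that produce the standard feature map. The channel counts and sizes below are illustrative assumptions.

```python
# Sketch of concat-based feature fusion: stack the channels of several
# fixed-size feature maps (all made the same spatial size by RoI Pooling)
# into one fused map.

def concat_channels(maps):
    # Each map is a list of channels; each channel is an h*w grid of floats.
    h, w = len(maps[0][0]), len(maps[0][0][0])
    assert all(len(ch) == h and len(ch[0]) == w for m in maps for ch in m), \
        "RoI Pooling must have produced identical spatial sizes"
    fused = []
    for m in maps:
        fused.extend(m)   # stack channels from every contributing layer
    return fused

def make_map(channels, h, w, value):
    return [[[value] * w for _ in range(h)] for _ in range(channels)]

low = make_map(2, 4, 4, 0.1)   # e.g. a lower conv layer, 2 channels
mid = make_map(3, 4, 4, 0.5)   # a middle conv layer, 3 channels
fused = concat_channels([low, mid])
print(len(fused), len(fused[0]), len(fused[0][0]))  # → 5 4 4
```

The spatial size is unchanged and only the channel count grows, which is why the fixed-size RoI Pooling output is a precondition for the concat.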
Further, the key point detection section performs three convolution and pooling operations before generating the standard feature map.
Further, the standard feature map undergoes at least one convolution and pooling operation to obtain the down-sampled feature map; the down-sampled feature map, together with the labels of the tracked target's key points in the feature map, then passes through the mask operation and is taken as input to generate the second loss function.
Further, the key point detection section performs three convolution and pooling operations before generating the down-sampled feature map.
A general object detection system based on key point regression, including a key point label marking module, a target detection module, a feature extraction module, a key point first location generation module, a key point second location generation module, and a loss function generation module;
the target detection module is used to acquire the tracked target in the original image and to obtain the detection region based on the tracked target;
the key point label marking module is used to mark the tracked target's key points and output the key point labels;
the feature extraction module is used to extract features from the detection region and generate the feature maps;
the key point first location generation module is used to generate the key point first position array from the pixel locations of the key point labels in the target detection module;
the key point second location generation module is used to generate the key point second position array from the grid-point locations of the key point labels in the feature map;
the loss function generation module is used to obtain the loss function as the sum, over corresponding digits of the key point first position array and the second position array, of the products of their differences and a coefficient, so as to correct the network structure.
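The loss function generation module described above can be sketched as follows: the loss is a coefficient-weighted sum of the differences between corresponding entries of the two position arrays. Using the absolute difference and a single scalar coefficient is an assumption for illustration; the array values below are made up.

```python
# Sketch of the loss function generation module: sum, over corresponding
# digits of the first and second position arrays, the product of their
# difference and a coefficient.

def position_loss(first, second, coef=1.0):
    assert len(first) == len(second), "position arrays must align digit by digit"
    return coef * sum(abs(a - b) for a, b in zip(first, second))

# First array: key point pixel positions from the detection region;
# second array: feature-map grid positions lifted back to pixel
# coordinates. The values are invented for illustration.
first = [666, 237, 1, 702, 240, 1]
second = [664, 239, 1, 700, 241, 1]
print(position_loss(first, second, coef=0.5))  # → 3.5
```

Minimizing this quantity drives the feature-map (second) positions toward the labeled (first) positions, which is the "correct the network structure" role of the module.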
Further, the system includes a feature fusion module, which fuses the middle- and lower-layer feature maps of the base section to generate the standard feature map.
Further, the system includes a scale-space conversion module, which converts each layer's feature map in the base section to the same size to generate the standard feature map.
Further, the system includes a down-sampling module, which down-samples each grid point in the standard feature map to generate a down-sampled feature map whose spatial size is smaller than that of the standard feature map.
Further, the system includes a mask module, which, during the mapping of the second key point location information from the down-sampled feature map into the standard feature map, applies a mask operation to the grid points of the standard feature map other than those related to the second key point location information.
A general object detection terminal based on key point regression, characterized by including a processor and a memory, the memory storing program instructions, the processor running the program instructions to realize the steps of the above method.
A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, realizes the steps of the above method.
As described above, the present invention has the following beneficial effects. First, the multiple categories of tracked targets, and each individual within each category, are encoded; the encoding of each category occupies a fixed number of digits and only a portion of the total encoding, so the categories do not interfere with each other during learning. Second, the ROI (region of interest) boxes are expanded per category so that the ROIs are more valid. Third, the feature maps corresponding to the tracked targets undergo multi-scale feature fusion. Fourth, a standard feature map is used to improve image precision without affecting the running time. Fifth, loss-function regression is performed in two stages: the standard feature map is first down-sampled to obtain a down-sampled feature map. In stage one, learning is performed on the down-sampled feature map, and once learning is sufficient the key point positions in the down-sampled feature map are mapped into the standard feature map; in stage two, learning is performed on the mapped standard feature map, and a mask is used so that only the mapped positions of the key points are learned, reducing the regression difficulty.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required in describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 shows a flow chart of the invention.
Fig. 2 shows a test effect diagram of the invention.
Fig. 3 shows a schematic diagram of the operation of the mask module of the present invention.
Fig. 4 shows the network structure of the key point detection section of the present invention.
Specific embodiment
The embodiments of the present invention are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present invention from the contents disclosed in this specification. The present invention can also be implemented or applied through other different specific embodiments, and the details in this specification can be modified or changed in various ways based on different viewpoints and applications without departing from the spirit of the invention. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments can be combined with each other.
It should be noted that the structures, ratios, sizes, etc. depicted in the drawings of this specification are only used to match the contents disclosed in the specification, for the understanding and reading of those skilled in the art, and are not intended to limit the conditions under which the invention can be implemented; they therefore have no essential technical significance, and any structural modification, change of proportional relationship or adjustment of size that does not affect the effects and purposes the invention can achieve shall still fall within the scope covered by the disclosed technical contents. Meanwhile, terms such as "upper", "lower", "left", "right", "middle" and "a" cited in this specification are only for convenience of description and are not intended to limit the implementable scope of the invention; changes or adjustments of their relative relations, without substantive change of the technical contents, shall also be regarded as within the implementable scope of the invention.
Referring to Figs. 1 to 4, a general object detection method based on key point regression comprises the following steps:
S01: preset the key point labels of the tracked target, determine the detection region of the tracked target, and obtain the relative positions of the preset key point labels within the detection region, recorded as the key point first location information;
S02: extract the feature map of the target detection region, and obtain the relative positions of the tracked target's key points in the feature map, recorded as the key point second location information;
S03: taking the key point first location information and the key point second location information as input, compute a loss function to optimize the network structure.
As a preferred embodiment, the tracked targets in step S01 belong to at least two categories, and the key point labels of the tracked targets are preset separately for each preset target category and for each preset target individual within each category.
As a preferred embodiment, when the detection regions of the tracked targets are determined in step S01, a detection region is determined for each tracked target individual in each category according to its category, and the preset key point labels of each tracked target individual are then matched to the detection region containing that individual, yielding the key point information corresponding to each tracked target individual.
As a preferred embodiment, when the key point labels of the tracked targets are preset in step S01 for each target category and each individual within each category, each key point includes the location of the pre-selection box of the tracked target it belongs to and a flag stating whether the key point is visible on the tracked target; when stating whether the key point is visible, the visibility of the key point is given a weight.
As a preferred embodiment, when the feature map of the target detection region is extracted in step S02 and the relative positions of the tracked target's key points in the feature map are obtained, the relative position of a key point in the feature map is acquired as follows: the part containing the key point is located, via the mapping relations, in the feature map produced by each convolutional layer of the base network; the position of that part is marked in each feature map; the key point's position in each feature map is scored with the visibility weight of the key point in that feature map; and the key point's position in the highest-scoring feature map is recorded as the key point's location in the feature map.
Giving weights to key point positions, compared with the traditional RCNN pipeline that performs bbox regression at the end, greatly simplifies the network structure. This is equivalent to moving the bbox regression inside the neural network, which, together with region proposal and classification, forms a multi-task model in which the tasks share convolutional features and promote each other.
The code sequence is the encoded representation of the multi-category tracked target encoding. In this embodiment there are three categories in total. Category one is guideboard; under the guideboard category there is one tracked target individual, and this guideboard individual contains four key points. Category two is light pole; under the light pole category there are two tracked target individuals, each containing two key points. Category three is electric pole; under the electric pole category there are two tracked target individuals, each containing two key points.
However, each such data list represents only one object: "category_id": 0 indicates that the current category is guideboard, and id: 2 indicates that this is the second guideboard in this image, as in the code below.
In the keypoints bracket, each key point is represented by three numbers: its x coordinate, its y coordinate, and whether the key point at that coordinate is visible. For example, the first number 666 and the second number 237 in the keypoints bracket are the x and y coordinates of the first key point, and the third number 1 indicates whether the key point at (666, 237) is visible: 1 means the point is visible, and 0 would mean the first key point is invisible. Since this list represents the attributes of a guideboard, only the first 12 digits carry data and the remaining digits are 0. The data of the light pole key points occupy digits 13-18 (six digits in total), and the data of the electric pole key points occupy digits 19-24 (six digits in total).
As a preferred embodiment, step S02 further includes, after extracting the feature map of the target detection region, a step S021 of fusing the features of the feature maps.
As a preferred embodiment, the feature fusion in step S021 is limited to the feature maps of the middle and lower layers, i.e., among the convolutional layers of the neural network, only the feature maps of the middle and lower convolutional layers take part in the fusion.
As a preferred embodiment, the feature fusion in step S021 is dense feature fusion, i.e., when selecting convolutional layers of the neural network for fusion, as many convolutional layers as possible are chosen.
As a preferred embodiment, after the detection region of the tracked target is determined in step S01 and before the feature map of the target detection region is extracted in step S02, a scale-space conversion step S01a is included: the feature maps of all target detection regions are converted into standard feature maps of identical scale, and key point detection is then performed to obtain the relative positions of the key points in the feature maps.
As a preferred embodiment, the standard feature map is a 56*56 feature map, and the size of the standard feature map is slightly larger than the size of the tracked target's candidate box, to prevent key points on the exposed edges of the tracked target from falling outside the standard feature map.
As a preferred embodiment, before the relative positions of the tracked target's key points in the feature map are obtained, a step S01b of down-sampling the standard feature map is included: a down-sampled feature map is obtained and used as input to train a down-sampling localization network; the down-sampled feature map is then fed into the trained down-sampling localization network, and the output key point locations are mapped back into the standard feature map.
As a preferred embodiment, the obtained 56*56 standard feature map is down-sampled to obtain a 7*7 down-sampled feature map; learning is performed on the 7*7 down-sampled feature map, and once learning is sufficient the obtained key point positions are mapped into the 56*56 standard feature map.
As a preferred embodiment, a step S01c is included: a mask operation is applied to the standard feature map onto which the down-sampled key point positions have been mapped, and a standard-feature localization network is trained so that it learns only the mapped positions of the key points in the standard feature map.
As a preferred embodiment, on the 56*56 standard feature map, a mask is used so that only the parts containing key points are learned, lowering the learning difficulty; learning is driven by a loss function.
As a preferred embodiment, the target detection region is marked with its upper-left corner as origin, giving parameters (X, Y); the width of the target detection region is W and its height is H, giving the target detection region parameters (X, Y, W, H).
As a preferred embodiment, in the network structure, the base section uses a resnet50 network structure and the detection section uses an rrc network structure.
As a preferred embodiment, the network structure of the key point detection section obtains the lower-layer feature maps from the base section, passes the windows of each feature map through an RoI Pooling layer to produce fixed-size feature maps, fuses the fixed-size feature maps with a concat function, and obtains the standard feature map through at least one convolution and pooling operation; the standard feature map, together with the preset key point labels of the tracked target, is taken as input to generate the first loss function.
As a preferred embodiment, the key point detection section performs three convolution and pooling operations before generating the standard feature map.
As a preferred embodiment, the standard feature map undergoes at least one convolution and pooling operation to obtain the down-sampled feature map; the down-sampled feature map, together with the labels of the tracked target's key points in the feature map, then passes through the mask operation and is taken as input to generate the second loss function.
As a preferred embodiment, the key point detection section performs three convolution and pooling operations before generating the down-sampled feature map.
A generic object detection system based on key point regression comprises a key point label annotation module, a target detection module, a feature extraction module, a key point first-position generation module, a key point second-position generation module, and a loss function generation module.
The target detection module is configured to obtain the tracking target in the original image and to obtain the detection area based on the tracking target.
The key point label annotation module is configured to annotate the tracking-target key points and output the key point labels.
The feature extraction module is configured to extract features from the detection area and generate the feature map.
The key point first-position generation module is configured to generate the key point first-position array from the position information of the pixels, in the target detection area, at which the key point labels are located.
The key point second-position generation module is configured to generate the key point second-position array from the position information of the grid cells of the feature map at which the key point labels are located.
The loss function generation module is configured to obtain the loss function as the product of a coefficient and the sum of the differences of the corresponding elements of the key point first-position array and second-position array, so as to correct the network structure.
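The loss just described — the element-wise differences of the two position arrays, summed and scaled by a coefficient — can be sketched as follows. Using the absolute difference is an assumption, since the description does not state whether the differences are signed:

```python
import numpy as np

def position_loss(first, second, coeff=1.0):
    """Loss from the element-wise differences of the key point
    first-position and second-position arrays, scaled by a coefficient.

    The absolute-difference form is an assumption; the description only
    states 'the sum of the differences times a coefficient'.
    """
    first = np.asarray(first, dtype=float)
    second = np.asarray(second, dtype=float)
    return coeff * float(np.sum(np.abs(first - second)))

# First-position array from pixel coordinates in the detection area,
# second-position array from feature-map grid cells (illustrative values).
loss = position_loss([10, 20, 30, 40], [12, 18, 30, 41], coeff=0.5)
```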
As a preferred embodiment, the system further includes a feature fusion module configured to fuse the low-level feature maps of the base network to generate the standard feature map.
As a preferred embodiment, the system further includes a scale space conversion module configured to convert the feature maps of each layer of the base network to the same size and generate the standard feature map.
As a preferred embodiment, the system further includes a down-sampling layer module configured to down-sample each grid cell of the standard feature map and generate a down-sampled-layer feature map whose size is smaller than that of the standard feature map.
As a preferred embodiment, the system further includes a mask module configured to perform, while the second key point position information of the down-sampled-layer feature map is mapped back to the standard feature map, a mask operation on the grid cells of the standard feature map other than those related to the second key point position information.
A generic object detection terminal based on key point regression comprises a processor and a memory, the memory storing program instructions; the processor runs the program instructions to realize the steps of the above method.
As a preferred embodiment, this embodiment also provides a terminal device capable of executing programs, such as a smart phone, a tablet computer, a laptop computer, a desktop computer, a rack server, a blade server, a tower server, or a cabinet server (including an independent server, or a server cluster composed of multiple servers). The terminal device of this embodiment at least includes, but is not limited to, a memory and a processor communicatively connected to each other via a system bus. It should be pointed out that a terminal device having a memory and a processor need not implement all of the components shown; more or fewer components may be implemented instead.
As a preferred embodiment, the memory (i.e., a readable storage medium) includes a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory may be an internal storage unit of a computer device, such as the hard disk or memory of the computer device 20. In other embodiments, the memory may also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the computer device. Of course, the memory may include both the internal storage unit of the computer device and its external storage device. In this embodiment, the memory is generally used to store the operating system and various application software installed on the computer device, for example the program code of the instance-segmentation-based target Re-ID of the embodiment. In addition, the memory may also be used to temporarily store various data that has been or is to be output.
In some embodiments, the processor may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor is generally used to control the overall operation of the computer device. In this embodiment, the processor is used to run the program code or process the data stored in the memory, for example to run the instance-segmentation-based target Re-ID program, so as to realize the functions of the instance-segmentation-based target Re-ID system of the embodiment.
A computer-readable storage medium has a computer program stored thereon, characterized in that the program, when executed by a processor, realizes the steps of the above method.
This embodiment also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, or an app store, on which a computer program is stored; the program realizes the corresponding functions when executed by a processor. The computer-readable storage medium of this embodiment is used to store the instance-segmentation-based target Re-ID program, which, when executed by a processor, realizes the instance-segmentation-based target Re-ID method of the embodiment.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit the present invention. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the present invention. Therefore, all equivalent modifications or changes completed by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall be covered by the claims of the present invention.

Claims (11)

1. A generic object detection method based on key point regression, characterized by comprising the following steps:
S01: presetting key point labels of the tracking target and determining the detection area of the tracking target; obtaining the relative position information of the preset tracking-target key point labels within the detection area, marked as the key point first position information;
S02: extracting the feature map of the object detection area and obtaining the relative position information of the tracking-target key points in the feature map, marked as the key point second position information;
S03: taking the key point first position information and the key point second position information as input, obtaining a loss function to optimize the network structure.
2. The generic object detection method based on key point regression according to claim 1, characterized in that in step S01 the tracking targets belong to at least two categories, and the preset key point labels of the tracking targets preset key points respectively for each preset tracking-target category and for each preset tracking-target individual within each category; when the detection area of the tracking target is determined in step S01, the detection area of each tracking-target individual in each category is determined separately according to the category of the tracking target, and the key point labels of each preset tracking-target individual are then matched with the detection area of that tracking-target individual, so as to obtain the key point information corresponding to each tracking-target individual; when the preset key point labels of the tracking targets in step S01 preset key points respectively for each preset tracking-target category and each preset tracking-target individual within each category, the key points include the position information of the pre-selection frame of the tracking target in which they lie and information stating whether the key points in the tracking target are visible; when stating whether the key points in the tracking target are visible, the visibility of the key points is given a weight.
3. The generic object detection method based on key point regression according to claim 2, characterized in that when the feature map of the object detection area is extracted in step S02 and the relative position information of the tracking-target key points in the feature map is obtained, the relative position information of the key points in the feature map is acquired as follows: the portions containing the key points in the feature maps obtained by each convolutional layer of the base network are found through the mapping relations, and the positions of the portions containing the key points in each feature map are marked; the positions of the key points in each feature map are scored with the visibility weights of the key points in each feature map; and the positions of the key points in the highest-scoring feature map are recorded as the position information of the key points in the feature map.
4. The generic object detection method based on key point regression according to claim 3, characterized in that after the detection area of the tracking target is determined in step S01 and before the feature map of the object detection area is extracted in step S02, the method further includes a scale space conversion step S01a: the feature maps of each object detection area are converted into standard feature maps of the same scale, and key point detection is then performed to obtain the relative position relations of the key points in the feature maps.
5. The generic object detection method based on key point regression according to claim 4, characterized in that before the relative position information of the tracking-target key points in the feature map is obtained, the method includes a standard feature map down-sampling step S01b: the down-sampled-layer feature map is obtained and taken as input to train a down-sampling localization network; the down-sampled-layer feature map is then input into the down-sampling localization network, and the output key point position information is mapped back into the standard feature map.
6. The generic object detection method based on key point regression according to claim 5, characterized by further comprising step S01c: performing a mask operation on the standard feature map onto which the down-sampled key point positions have been mapped, and training the standard-feature localization network so that it learns only the mapped positions of the key points in the standard feature map.
7. The generic object detection method based on key point regression according to claim 1, characterized in that in the network structure the base network uses a ResNet-50 network structure and the detection branch uses an RRC network structure.
8. The generic object detection method based on key point regression according to claim 1, characterized in that the network structure of the key point detection branch comprises: obtaining the low-level feature maps of the base network; passing the window of each feature map through an RoI pooling layer to generate fixed-size feature maps; fusing the fixed-size feature maps with a concat function; and applying at least one convolution and pooling operation to obtain the standard feature map, the standard feature map and the key point labels of the preset tracking target being input together to generate the first loss function; the standard feature map is processed by at least one convolution and pooling operation to obtain the down-sampled-layer feature map, and the down-sampled-layer feature map and the labels of the tracking-target key points in the feature map together serve as input and undergo a further mask operation, generating the second loss function.
9. A generic object detection system based on key point regression, characterized by comprising a key point label annotation module, a target detection module, a feature extraction module, a key point first-position generation module, a key point second-position generation module, and a loss function generation module;
the target detection module is configured to obtain the tracking target in the original image and to obtain the detection area based on the tracking target;
the key point label annotation module is configured to annotate the tracking-target key points and output the key point labels;
the feature extraction module is configured to extract features from the detection area and generate the feature map;
the key point first-position generation module is configured to generate the key point first-position array from the position information of the pixels, in the target detection area, at which the key point labels are located;
the key point second-position generation module is configured to generate the key point second-position array from the position information of the grid cells of the feature map at which the key point labels are located;
the loss function generation module is configured to obtain the loss function as the product of a coefficient and the sum of the differences of the corresponding elements of the key point first-position array and second-position array, so as to correct the network structure.
10. A generic object detection terminal based on key point regression, characterized by comprising a processor and a memory, the memory storing program instructions, the processor running the program instructions to realize the steps of the method according to any one of claims 1 to 8.
11. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, realizes the steps of the method according to any one of claims 1 to 8.
CN201810833046.XA 2018-07-26 2018-07-26 Generic object detection method, system, terminal and storage medium based on key point regression Pending CN109271842A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810833046.XA CN109271842A (en) 2018-07-26 2018-07-26 Generic object detection method, system, terminal and storage medium based on key point regression


Publications (1)

Publication Number Publication Date
CN109271842A (en) 2019-01-25

Family

ID=65153257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810833046.XA Pending CN109271842A (en) Generic object detection method, system, terminal and storage medium based on key point regression

Country Status (1)

Country Link
CN (1) CN109271842A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110245664A (en) * 2019-06-26 2019-09-17 深兰科技(上海)有限公司 Licence plate recognition method
CN111507334A (en) * 2019-01-30 2020-08-07 中国科学院宁波材料技术与工程研究所 Instance segmentation method based on key points
CN111523387A (en) * 2020-03-24 2020-08-11 杭州易现先进科技有限公司 Method and device for detecting hand key points and computer device
CN114241051A (en) * 2021-12-21 2022-03-25 盈嘉互联(北京)科技有限公司 Object pose estimation method for indoor complex scenes
CN111523387B (en) * 2020-03-24 2024-04-19 杭州易现先进科技有限公司 Method and device for detecting key points of hands and computer device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170262736A1 (en) * 2016-03-11 2017-09-14 Nec Laboratories America, Inc. Deep Deformation Network for Object Landmark Localization
CN107909005A (en) * 2017-10-26 2018-04-13 西安电子科技大学 Person posture recognition method in surveillance scenes based on deep learning
US20180137642A1 (en) * 2016-11-15 2018-05-17 Magic Leap, Inc. Deep learning system for cuboid detection
WO2018108129A1 (en) * 2016-12-16 2018-06-21 北京市商汤科技开发有限公司 Method and apparatus for use in identifying object type, and electronic device
CN108205655A (en) * 2017-11-07 2018-06-26 北京市商汤科技开发有限公司 Key point prediction method, apparatus, electronic device and storage medium
CN108230390A (en) * 2017-06-23 2018-06-29 北京市商汤科技开发有限公司 Training method, key point detection method, apparatus, storage medium and electronic device
CN108229488A (en) * 2016-12-27 2018-06-29 北京市商汤科技开发有限公司 Method, apparatus and electronic device for detecting object key points
CN108229490A (en) * 2017-02-23 2018-06-29 北京市商汤科技开发有限公司 Key point detection method, neural network training method, apparatus and electronic device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170262736A1 (en) * 2016-03-11 2017-09-14 Nec Laboratories America, Inc. Deep Deformation Network for Object Landmark Localization
US20180137642A1 (en) * 2016-11-15 2018-05-17 Magic Leap, Inc. Deep learning system for cuboid detection
WO2018108129A1 (en) * 2016-12-16 2018-06-21 北京市商汤科技开发有限公司 Method and apparatus for use in identifying object type, and electronic device
CN108229509A (en) * 2016-12-16 2018-06-29 北京市商汤科技开发有限公司 Method and apparatus for identifying object category, and electronic device
CN108229488A (en) * 2016-12-27 2018-06-29 北京市商汤科技开发有限公司 Method, apparatus and electronic device for detecting object key points
CN108229490A (en) * 2017-02-23 2018-06-29 北京市商汤科技开发有限公司 Key point detection method, neural network training method, apparatus and electronic device
CN108230390A (en) * 2017-06-23 2018-06-29 北京市商汤科技开发有限公司 Training method, key point detection method, apparatus, storage medium and electronic device
CN107909005A (en) * 2017-10-26 2018-04-13 西安电子科技大学 Person posture recognition method in surveillance scenes based on deep learning
CN108205655A (en) * 2017-11-07 2018-06-26 北京市商汤科技开发有限公司 Key point prediction method, apparatus, electronic device and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
THOMAS S. A. WALLIS et al.: "A parametric texture model based on deep convolutional features closely matches texture appearance for humans", Journal of Vision, vol. 17, no. 12, 31 October 2017 (2017-10-31), pages 1-29 *
YUE, WENPEI: "Research on salient-motion-based object detection algorithms in surveillance video", China Master's Theses Full-text Database, Information Science and Technology, no. 2018, 15 April 2018 (2018-04-15), pages 138-2342 *
PENG, YINGYING: "Design and implementation of a robust facial-expression key point localization algorithm based on deep learning", China Master's Theses Full-text Database, Information Science and Technology, no. 2018, 15 January 2018 (2018-01-15), pages 138-1794 *
LIANG, JUN: "Research on key theories and technologies of a vehicle rear-end collision warning system based on multi-agent and driving behavior", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II, no. 2016, 15 December 2016 (2016-12-15), pages 035-4 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111507334A (en) * 2019-01-30 2020-08-07 中国科学院宁波材料技术与工程研究所 Instance segmentation method based on key points
CN111507334B (en) * 2019-01-30 2024-03-12 中国科学院宁波材料技术与工程研究所 Instance segmentation method based on key points
CN110245664A (en) * 2019-06-26 2019-09-17 深兰科技(上海)有限公司 Licence plate recognition method
CN111523387A (en) * 2020-03-24 2020-08-11 杭州易现先进科技有限公司 Method and device for detecting hand key points and computer device
CN111523387B (en) * 2020-03-24 2024-04-19 杭州易现先进科技有限公司 Method and device for detecting key points of hands and computer device
CN114241051A (en) * 2021-12-21 2022-03-25 盈嘉互联(北京)科技有限公司 Object pose estimation method for indoor complex scenes

Similar Documents

Publication Publication Date Title
CN109117848B (en) Text line character recognition method, device, medium and electronic equipment
CN110148148A (en) A kind of training method, model and the storage medium of the lower edge detection model based on target detection
CN109118519A (en) Target Re-ID method, system, terminal and the storage medium of Case-based Reasoning segmentation
CN109190662A (en) A kind of three-dimensional vehicle detection method, system, terminal and storage medium returned based on key point
Min et al. New approach to vehicle license plate location based on new model YOLO‐L and plate pre‐identification
CN111814794A (en) Text detection method and device, electronic equipment and storage medium
CN109271842A (en) A kind of generic object detection method, system, terminal and storage medium returned based on key point
CN110176017A (en) A kind of Model for Edge Detection based on target detection, method and storage medium
CN111274981A (en) Target detection network construction method and device and target detection method
CN115512169B (en) Weak supervision semantic segmentation method and device based on gradient and region affinity optimization
CN112749606A (en) Text positioning method and device
CN110781856A (en) Heterogeneous face recognition model training method, face recognition method and related device
Fan Research and realization of video target detection system based on deep learning
Qin et al. A specially optimized one-stage network for object detection in remote sensing images
CN115830402A (en) Fine-grained image recognition classification model training method, device and equipment
CN115035367A (en) Picture identification method and device and electronic equipment
CN109190467A (en) A kind of more object detecting methods, system, terminal and storage medium returned based on key point
Nemade et al. Image segmentation using convolutional neural network for image annotation
Hu et al. MINet: Multilevel inheritance network-based aerial scene classification
Naiemi et al. Scene text detection using enhanced extremal region and convolutional neural network
Ke et al. Dense small face detection based on regional cascade multi‐scale method
CN113128496B (en) Method, device and equipment for extracting structured data from image
CN113096104A (en) Training method and device of target segmentation model and target segmentation method and device
Yang et al. Salient object detection based on global multi‐scale superpixel contrast
CN113449555A (en) Traffic sign recognition method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination