CN109190662A - Three-dimensional vehicle detection method, system, terminal and storage medium based on key point regression - Google Patents

Three-dimensional vehicle detection method, system, terminal and storage medium based on key point regression Download PDF

Info

Publication number
CN109190662A
Authority
CN
China
Prior art keywords
key point
feature map
tracking target
point
standard feature map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810834980.3A
Other languages
Chinese (zh)
Inventor
吴子章
王凡
唐锐
李坤仑
丁丽珠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Anchi Zongmu Intelligent Technology Co Ltd
Original Assignee
Beijing Anchi Zongmu Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Anchi Zongmu Intelligent Technology Co Ltd filed Critical Beijing Anchi Zongmu Intelligent Technology Co Ltd
Priority to CN201810834980.3A priority Critical patent/CN109190662A/en
Publication of CN109190662A publication Critical patent/CN109190662A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a three-dimensional vehicle detection method, system, terminal and storage medium based on key point regression, comprising the following steps: S01: preset key point labels for the tracked target, determine the detection region of the tracked target, and obtain the relative position of each preset key point label within the detection region, recorded as key point first location information; S02: extract the feature map of the target detection region and obtain the relative position of each key point of the tracked target in the feature map, recorded as key point second location information; S03: taking the key point first location information and the key point second location information as input, compute a loss function to optimize the network structure. By performing multi-scale feature fusion on the corresponding feature maps and then performing loss-function regression in two stages, the present invention reduces the regression difficulty and improves the performance of the network structure.

Description

Three-dimensional vehicle detection method, system, terminal and storage medium based on key point regression
Technical field
The present invention relates to the technical field of automotive electronics, and more particularly to a three-dimensional vehicle detection method, system, terminal and storage medium based on key point regression.
Background technique
ADAS (advanced driver assistance systems), also known as active safety systems, mainly acquire and process images and radar data to obtain information such as the distance, position and shape of target objects. When tracking a target object, the same object can look very different in different images because of its own state and the scene around it; across different times, resolutions, illumination conditions and poses, objects of the same type are hard to match between images. Key points are local extrema with directional information detected in images across different scale spaces. While an autonomous vehicle is driving, its cameras capture the objects on and around the road; for objects such as vehicles, pedestrians, road signs and light poles, a key point detection algorithm can regress the corresponding key points, and the key point information can then assist the autonomous vehicle in localization.
Summary of the invention
In order to solve the above and other potential technical problems, the present invention provides a three-dimensional vehicle detection method, system, terminal and storage medium based on key point regression. First, multi-scale feature fusion is performed on the standard feature map corresponding to the tracked target. Second, using the standard feature map improves image precision without affecting run time. Third, loss-function regression is performed in two stages: the standard feature map is first down-sampled to obtain a down-sampling-layer feature map. In stage one, learning is performed on the down-sampling-layer feature map; once learning is sufficient, the key point positions in the down-sampling-layer feature map are mapped back into the standard feature map. In stage two, learning is performed on the mapped standard feature map, and a mask is used so that only the mapped positions where key points lie are learned, which reduces the regression difficulty.
A three-dimensional vehicle detection method based on key point regression comprises the following steps:
S01: preset key point labels for the tracked target, determine the detection region of the tracked target, and obtain the relative position of each preset key point label within the detection region, recorded as key point first location information;
S02: extract the feature map of the target detection region and obtain the relative position of each key point of the tracked target in the feature map, recorded as key point second location information;
S03: taking the key point first location information and the key point second location information as input, compute a loss function to optimize the network structure.
Further, step S02 further comprises, after the feature map of the target detection region is extracted, a step S021 of fusing the feature maps.
Further, the feature fusion of step S021 is limited to the feature maps of the middle and low layers, i.e., among the convolutional layers of the neural network, only the feature maps of the middle- and low-layer convolutional layers are fused.
Further, the feature fusion in step S021 is dense feature fusion, i.e., when selecting convolutional layers of the neural network for fusion, as many convolutional layers as possible are chosen for feature fusion.
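The following is a minimal sketch of such dense fusion, written in PyTorch; the framework, the layer choices and the tensor sizes are assumptions for illustration and are not specified by the present disclosure.

```python
# Illustrative sketch only: dense multi-scale fusion of several middle/low-layer
# feature maps; shapes and layer names below are assumptions.
import torch
import torch.nn.functional as F

def dense_feature_fusion(feature_maps, out_size=(56, 56)):
    """Fuse feature maps taken from several middle/low convolutional layers.

    feature_maps: list of tensors shaped (N, C_i, H_i, W_i).
    """
    resized = [
        F.interpolate(fm, size=out_size, mode="bilinear", align_corners=False)
        for fm in feature_maps
    ]
    # Dense fusion: concatenate every selected layer along the channel axis.
    return torch.cat(resized, dim=1)

# Example with three hypothetical backbone stages of different resolutions.
c2 = torch.randn(1, 256, 112, 112)
c3 = torch.randn(1, 512, 56, 56)
c4 = torch.randn(1, 1024, 28, 28)
fused = dense_feature_fusion([c2, c3, c4])   # (1, 1792, 56, 56)
```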
Further, after the detection region of the tracked target is determined in step S01 and before the feature map of the target detection region is extracted in step S02, the method comprises a scale-space conversion step S01a: convert the feature map of each target detection region into a standard feature map of the same scale, then perform key point detection, and obtain the relative position of each key point in the feature map.
Further, the standard feature map has a size of 56*56, and the standard feature map is slightly larger than the tracked target candidate box, so that key points lying on the exposed edge of the tracked target do not fall outside the standard feature map.
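A possible implementation of this conversion to a 56*56 standard feature map is sketched below; the use of torchvision's roi_align, the feature stride and the box padding factor are assumptions, since the disclosure only fixes the 56*56 size and the slight enlargement of the candidate box.

```python
# Illustrative sketch only: map each detection region to a fixed 56x56 standard
# feature map via RoI alignment; stride and padding factor are assumptions.
import torch
from torchvision.ops import roi_align

def to_standard_feature_map(features, box_xywh, stride=4, pad=1.1):
    """features: (1, C, H, W) backbone feature map; box_xywh: (X, Y, W, H) in pixels."""
    x, y, w, h = box_xywh
    # Enlarge the candidate box slightly so edge key points stay inside the 56x56 map.
    cx, cy = x + w / 2.0, y + h / 2.0
    w, h = w * pad, h * pad
    box = torch.tensor([[0.0, cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2]])
    return roi_align(features, box, output_size=(56, 56), spatial_scale=1.0 / stride)

feat = torch.randn(1, 256, 128, 128)                        # backbone map, stride 4 assumed
std_map = to_standard_feature_map(feat, (40, 60, 120, 90))  # (1, 256, 56, 56)
```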
Further, before the relative position of each key point of the tracked target in the feature map is obtained, the method comprises a step S01b of down-sampling the standard feature map: obtain a down-sampling-layer feature map and take it as input to train a down-sampling localization network; then feed the down-sampling-layer feature map into the down-sampling localization network again, and map the output key point location information back into the standard feature map.
Further, the obtained 56*56 standard feature map is down-sampled to obtain a 7*7 down-sampling-layer feature map; learning is performed on the 7*7 down-sampling-layer feature map, and once learning is sufficient the obtained key point positions are mapped into the 56*56 standard feature map.
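The down-sampling from the 56*56 standard feature map to the 7*7 down-sampling-layer feature map, and the mapping of stage-one key point positions back into the standard feature map, could look as follows; the pooling operator and the simple cell-centre mapping are assumptions made here for illustration.

```python
# Illustrative sketch only: 56x56 -> 7x7 down-sampling and mapping coarse key point
# positions back onto the 56x56 standard feature map.
import torch
import torch.nn.functional as F

def downsample_standard_map(std_map):
    """std_map: (N, C, 56, 56) -> (N, C, 7, 7) down-sampling-layer feature map."""
    return F.adaptive_avg_pool2d(std_map, output_size=(7, 7))

def map_keypoints_back(coarse_xy):
    """coarse_xy: (K, 2) key point positions on the 7x7 grid (stage-one output).

    Each coarse position is mapped to the centre of the corresponding 8x8 block
    of the 56x56 map, which stage two then refines.
    """
    scale = 56.0 / 7.0
    return coarse_xy * scale + scale / 2.0

std_map = torch.randn(1, 256, 56, 56)
coarse = downsample_standard_map(std_map)          # (1, 256, 7, 7)
rough_kp = torch.tensor([[2.0, 5.0], [6.0, 1.0]])  # hypothetical stage-one key points
print(map_keypoints_back(rough_kp))                # positions in 56x56 coordinates
```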
Further, the method comprises a step S01c: apply a mask operation to the standard feature map onto which the down-sampled key point positions have been mapped, and train a standard-feature localization network so that it only learns the mapped positions in the standard feature map where the key points lie.
Further, on the 56*56 standard feature map, a mask is applied so that only the parts containing key points are learned, which lowers the learning difficulty; learning is driven by a loss function.
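A sketch of such a masked loss is given below; the window size around each mapped key point and the use of a smooth L1 loss are assumptions, since the disclosure only states that the mask restricts learning to the regions containing key points.

```python
# Illustrative sketch only: a regression loss that only penalises cells of the 56x56
# standard feature map near the mapped key point positions.
import torch
import torch.nn.functional as F

def masked_keypoint_loss(pred_heatmap, target_heatmap, mapped_xy, window=8):
    """pred_heatmap, target_heatmap: (N, K, 56, 56); mapped_xy: (K, 2) key point
    positions mapped back from the down-sampling layer (stage one)."""
    mask = torch.zeros_like(target_heatmap)
    for k, (x, y) in enumerate(mapped_xy.round().long().tolist()):
        x0, y0 = max(x - window // 2, 0), max(y - window // 2, 0)
        mask[:, k, y0:y0 + window, x0:x0 + window] = 1.0  # learn only around key point k
    return F.smooth_l1_loss(pred_heatmap * mask, target_heatmap * mask)

pred = torch.randn(1, 8, 56, 56, requires_grad=True)
target = torch.rand(1, 8, 56, 56)
kp = torch.randint(0, 56, (8, 2)).float()
loss = masked_keypoint_loss(pred, target, kp)
loss.backward()
```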
Further, the upper-left corner of the target detection region is taken as its reference point, giving the parameters (X, Y); the width of the target detection region is denoted W and its height H, so that the target detection region is described by the parameters (X, Y, W, H).
Further, in the network structure, the base section uses a resnet50 network structure and the detection section uses an RRC network structure.
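For illustration, the base section could expose its middle- and low-layer feature maps as follows; the use of torchvision's resnet50 (a recent torchvision is assumed) and the choice of layer1 to layer3 as the relevant layers are assumptions, not a specification of the disclosure.

```python
# Illustrative sketch only: pulling middle/low-layer feature maps out of a resnet50
# base section for later RoI pooling and fusion.
import torch
import torchvision

class ResNet50Base(torch.nn.Module):
    def __init__(self):
        super().__init__()
        r = torchvision.models.resnet50(weights=None)
        self.stem = torch.nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)
        self.layer1, self.layer2, self.layer3 = r.layer1, r.layer2, r.layer3

    def forward(self, x):
        x = self.stem(x)
        c2 = self.layer1(x)      # stride 4,  256 channels
        c3 = self.layer2(c2)     # stride 8,  512 channels
        c4 = self.layer3(c3)     # stride 16, 1024 channels
        return c2, c3, c4

base = ResNet50Base()
c2, c3, c4 = base(torch.randn(1, 3, 224, 224))
print(c2.shape, c3.shape, c4.shape)   # (1,256,56,56) (1,512,28,28) (1,1024,14,14)
```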
Further, the network structure of the key point detection section obtains the low-layer feature maps of the base section, passes the window of each feature map through an RoI pooling layer to generate fixed-size feature maps, fuses the fixed-size feature maps with a concat operation, and obtains the standard feature map through at least one convolution and pooling operation; the standard feature map and the preset key point labels of the tracked target are then input together to generate the first loss function.
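A possible key point detection section matching this description is sketched below; the channel counts, kernel sizes and number of key points are assumptions, the disclosure fixing only the RoI pooling, concat and convolution/pooling sequence and the 56*56 and 7*7 map sizes.

```python
# Illustrative sketch only: RoI pooling of base-section feature maps, concat fusion,
# then convolution/pooling down to a standard feature map and a down-sampling layer.
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class KeypointHead(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024), num_keypoints=8):
        super().__init__()
        fused = sum(in_channels)
        self.stage1 = nn.Sequential(                       # standard feature map branch
            nn.Conv2d(fused, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, num_keypoints, 3, padding=1),
        )
        self.stage2 = nn.Sequential(                       # down-sampling-layer branch
            nn.Conv2d(num_keypoints, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                               # 56 -> 28
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                               # 28 -> 14
            nn.Conv2d(64, num_keypoints, 3, padding=1),
            nn.MaxPool2d(2),                               # 14 -> 7
        )

    def forward(self, feature_maps, boxes, strides):
        pooled = [roi_align(fm, boxes, (56, 56), spatial_scale=1.0 / s)
                  for fm, s in zip(feature_maps, strides)]
        std_map = self.stage1(torch.cat(pooled, dim=1))    # (R, K, 56, 56)
        down_map = self.stage2(std_map)                    # (R, K, 7, 7)
        return std_map, down_map

head = KeypointHead()
feats = [torch.randn(1, c, 256 // s, 256 // s) for c, s in zip((256, 512, 1024), (4, 8, 16))]
boxes = torch.tensor([[0.0, 30.0, 40.0, 150.0, 130.0]])    # (batch_idx, x1, y1, x2, y2)
std_map, down_map = head(feats, boxes, strides=(4, 8, 16))
```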
Further, the key point detection section performs three convolution and pooling operations before generating the standard feature map.
Further, the standard feature map undergoes at least one convolution and pooling operation to obtain the down-sampling-layer feature map; the down-sampling-layer feature map and the labels of the tracked target key points in the feature map, after a mask operation, are taken together as input to generate the second loss function.
Further, the key point detection section performs three convolution and pooling operations before generating the down-sampling-layer feature map.
A three-dimensional vehicle detection system based on key point regression comprises a key point label marking module, a target detection module, a feature extraction module, a key point first location generation module, a key point second location information generation module, and a loss function generation module;
the target detection module is configured to obtain the tracked target in the original image and to derive the detection region from the tracked target;
the key point label marking module is configured to mark the key points of the tracked target and to output key point labels;
the feature extraction module is configured to extract features from the detection region and to generate feature maps;
the key point first location generation module is configured to generate a key point first location array from the pixel locations in the detection region at which the key point labels lie;
the key point second location generation module is configured to generate a key point second location array from the locations of the feature map grid cells at which the key point labels lie;
the loss function generation module is configured to obtain the loss function as the product of a coefficient and the sum of the differences between corresponding elements of the key point first location array and the key point second location array, so as to correct the network structure.
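The loss described for the loss function generation module could be sketched as follows; the use of absolute differences and the value of the coefficient are assumptions made here so that the example is usable.

```python
# Illustrative sketch only: a coefficient times the sum of the element-wise
# differences between the first and second key point location arrays.
import torch

def keypoint_location_loss(first_loc, second_loc, lam=1.0):
    """first_loc, second_loc: (K, 2) key point locations (labels vs. feature map)."""
    return lam * (first_loc - second_loc).abs().sum()

first = torch.tensor([[12.0, 30.0], [44.0, 18.0]])   # labelled positions
second = torch.tensor([[13.5, 28.0], [42.0, 20.5]])  # positions regressed on the feature map
print(keypoint_location_loss(first, second))         # tensor(8.)
```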
Further, the system comprises a feature fusion module, which is configured to fuse the middle- and low-layer feature maps of the base section to generate the standard feature map.
Further, the system comprises a scale-space conversion module, which is configured to convert the feature maps of the layers of the base section into the same size to generate the standard feature map.
Further, the system comprises a down-sampling layer module, which is configured to down-sample the grid cells of the standard feature map to generate a down-sampling-layer feature map smaller in size than the standard feature map.
Further, the system comprises a mask module, which is configured to, while the second key point location information is mapped from the down-sampling-layer feature map into the standard feature map, apply a mask operation to the grid cells of the standard feature map other than those related to the second key point location information.
A three-dimensional vehicle detection terminal based on key point regression comprises a processor and a memory, wherein the memory stores program instructions and the processor runs the program instructions to realize the steps of the above method.
A computer-readable storage medium has a computer program stored thereon, wherein the program, when executed by a processor, realizes the steps of the above method.
As described above, the present invention has the following beneficial effects. First, multi-scale feature fusion is performed on the standard feature map corresponding to the tracked target. Second, using the standard feature map improves image precision without affecting run time. Third, loss-function regression is performed in two stages: the standard feature map is first down-sampled to obtain a down-sampling-layer feature map; in stage one, learning is performed on the down-sampling-layer feature map, and once learning is sufficient the key point positions in the down-sampling-layer feature map are mapped into the standard feature map; in stage two, learning is performed on the mapped standard feature map, and a mask is used so that only the mapped positions where key points lie are learned, which reduces the regression difficulty.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the present invention.
Fig. 2 shows a test result of the present invention.
Fig. 3 is a schematic diagram of the operation of the mask module of the present invention.
Fig. 4 shows the network structure of the key point detection section of the present invention.
Specific embodiment
The embodiments of the present invention are described below through specific examples, and those skilled in the art can readily understand other advantages and effects of the present invention from the contents disclosed in this specification. The present invention may also be implemented or applied through other, different specific embodiments, and the details in this specification may be modified or changed in various ways based on different viewpoints and applications without departing from the spirit of the present invention. It should be noted that, provided there is no conflict, the following embodiments and the features in the embodiments may be combined with one another.
It should be noted that the structures, proportions, sizes and the like depicted in the drawings of this specification are provided only so that, together with the contents disclosed in the specification, they can be understood and read by those skilled in the art; they are not intended to limit the conditions under which the present invention may be practised and therefore have no essential technical meaning. Any structural modification, change of proportional relationship or adjustment of size that does not affect the effects the present invention can produce or the purposes it can achieve shall still fall within the scope covered by the technical contents disclosed by the present invention. Meanwhile, terms such as "upper", "lower", "left", "right", "middle" and "a" cited in this specification are used only for clarity of description and are not intended to limit the implementable scope of the present invention; changes or adjustments of their relative relationships, without substantial change of the technical content, shall also be regarded as within the implementable scope of the present invention.
Referring to Figs. 1 to 4, a three-dimensional vehicle detection method based on key point regression comprises the following steps:
S01: preset key point labels for the tracked target, determine the detection region of the tracked target, and obtain the relative position of each preset key point label within the detection region, recorded as key point first location information;
S02: extract the feature map of the target detection region and obtain the relative position of each key point of the tracked target in the feature map, recorded as key point second location information;
S03: taking the key point first location information and the key point second location information as input, compute a loss function to optimize the network structure.
As a preferred embodiment, step S02 further comprises, after the feature map of the target detection region is extracted, a step S021 of fusing the feature maps.
As a preferred embodiment, the feature fusion of step S021 is limited to the feature maps of the middle and low layers, i.e., among the convolutional layers of the neural network, only the feature maps of the middle- and low-layer convolutional layers are fused.
As a preferred embodiment, the feature fusion in step S021 is dense feature fusion, i.e., when selecting convolutional layers of the neural network for fusion, as many convolutional layers as possible are chosen for feature fusion.
As a preferred embodiment, after the detection region of the tracked target is determined in step S01 and before the feature map of the target detection region is extracted in step S02, the method comprises a scale-space conversion step S01a: convert the feature map of each target detection region into a standard feature map of the same scale, then perform key point detection, and obtain the relative position of each key point in the feature map.
As a preferred embodiment, the standard feature map has a size of 56*56, and the standard feature map is slightly larger than the tracked target candidate box, so that key points lying on the exposed edge of the tracked target do not fall outside the standard feature map.
As a preferred embodiment, before the relative position of each key point of the tracked target in the feature map is obtained, the method comprises a step S01b of down-sampling the standard feature map: obtain a down-sampling-layer feature map and take it as input to train a down-sampling localization network; then feed the down-sampling-layer feature map into the down-sampling localization network again, and map the output key point location information back into the standard feature map.
As a preferred embodiment, the obtained 56*56 standard feature map is down-sampled to obtain a 7*7 down-sampling-layer feature map; learning is performed on the 7*7 down-sampling-layer feature map, and once learning is sufficient the obtained key point positions are mapped into the 56*56 standard feature map.
As a preferred embodiment, the method further comprises a step S01c: apply a mask operation to the standard feature map onto which the down-sampled key point positions have been mapped, and train a standard-feature localization network so that it only learns the mapped positions in the standard feature map where the key points lie.
As a preferred embodiment, on the 56*56 standard feature map, a mask is applied so that only the parts containing key points are learned, which lowers the learning difficulty; learning is performed with a loss function.
As a preferred embodiment, the upper-left corner of the target detection region is taken as its reference point, giving the parameters (X, Y); the width of the target detection region is denoted W and its height H, so that the target detection region is described by the parameters (X, Y, W, H).
As a preferred embodiment, in the network structure, the base section uses a resnet50 network structure and the detection section uses an RRC network structure.
As a preferred embodiment, the network structure of the key point detection section obtains the low-layer feature maps of the base section, passes the window of each feature map through an RoI pooling layer to generate fixed-size feature maps, fuses the fixed-size feature maps with a concat operation, and obtains the standard feature map through at least one convolution and pooling operation; the standard feature map and the preset key point labels of the tracked target are then input together to generate the first loss function.
As a preferred embodiment, the key point detection section performs three convolution and pooling operations before generating the standard feature map.
As a preferred embodiment, the standard feature map undergoes at least one convolution and pooling operation to obtain the down-sampling-layer feature map; the down-sampling-layer feature map and the labels of the tracked target key points in the feature map, after a mask operation, are taken together as input to generate the second loss function.
As a preferred embodiment, the key point detection section performs three convolution and pooling operations before generating the down-sampling-layer feature map.
A three-dimensional vehicle detection system based on key point regression comprises a key point label marking module, a target detection module, a feature extraction module, a key point first location generation module, a key point second location information generation module, and a loss function generation module;
the target detection module is configured to obtain the tracked target in the original image and to derive the detection region from the tracked target;
the key point label marking module is configured to mark the key points of the tracked target and to output key point labels;
the feature extraction module is configured to extract features from the detection region and to generate feature maps;
the key point first location generation module is configured to generate a key point first location array from the pixel locations in the detection region at which the key point labels lie;
the key point second location generation module is configured to generate a key point second location array from the locations of the feature map grid cells at which the key point labels lie;
the loss function generation module is configured to obtain the loss function as the product of a coefficient and the sum of the differences between corresponding elements of the key point first location array and the key point second location array, so as to correct the network structure.
As a preferred embodiment, the system further comprises a feature fusion module, which is configured to fuse the low-layer feature maps of the base section to generate the standard feature map.
As a preferred embodiment, the system further comprises a scale-space conversion module, which is configured to convert the feature maps of the layers of the base section into the same size to generate the standard feature map.
As a preferred embodiment, the system further comprises a down-sampling layer module, which is configured to down-sample the grid cells of the standard feature map to generate a down-sampling-layer feature map smaller in size than the standard feature map.
As a preferred embodiment, the system further comprises a mask module, which is configured to, while the second key point location information is mapped from the down-sampling-layer feature map into the standard feature map, apply a mask operation to the grid cells of the standard feature map other than those related to the second key point location information.
A three-dimensional vehicle detection terminal based on key point regression comprises a processor and a memory, wherein the memory stores program instructions and the processor runs the program instructions to realize the steps of the above method.
As a preferred embodiment, this embodiment also provides a terminal device capable of executing programs, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server or a cabinet server (including an independent server, or a server cluster composed of multiple servers). The terminal device of this embodiment at least includes, but is not limited to, a memory and a processor that can be communicatively connected to each other through a system bus. It should be pointed out that a terminal device having the components memory and processor does not need to implement all of the components shown; more or fewer components may be implemented instead.
As a preferred embodiment, the memory (i.e. a readable storage medium) includes a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk and the like. In some embodiments, the memory may be an internal storage unit of the computer device, such as the hard disk or internal memory of the computer device. In other embodiments, the memory may also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the computer device. Of course, the memory may also include both the internal storage unit of the computer device and its external storage device. In this embodiment, the memory is generally used to store the operating system and various kinds of application software installed on the computer device, for example the program code of the key-point-regression-based three-dimensional vehicle detection method in the embodiments. In addition, the memory may also be used to temporarily store various kinds of data that have been output or are to be output.
In some embodiments, the processor may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor or another data processing chip. The processor is generally used to control the overall operation of the computer device. In this embodiment, the processor is used to run the program code stored in the memory or to process data, for example to run the program of the key-point-regression-based three-dimensional vehicle detection method, so as to realize the functions of the corresponding system in the embodiments.
A computer-readable storage medium has a computer program stored thereon, wherein the program, when executed by a processor, realizes the steps of the above method.
This embodiment also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server or an application store, on which a computer program is stored; the program realizes the corresponding functions when executed by a processor. The computer-readable storage medium of this embodiment is used to store the program of the key-point-regression-based three-dimensional vehicle detection method, which, when executed by a processor, realizes the method of the embodiments.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit the present invention. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the present invention. Therefore, all equivalent modifications or changes completed by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (11)

1. A three-dimensional vehicle detection method based on key point regression, characterized by comprising the following steps:
S01: preset key point labels for the tracked target, determine the detection region of the tracked target, and obtain the relative position of each preset key point label within the detection region, recorded as key point first location information;
S02: extract the feature map of the target detection region and obtain the relative position of each key point of the tracked target in the feature map, recorded as key point second location information;
S03: taking the key point first location information and the key point second location information as input, compute a loss function to optimize the network structure.
2. The three-dimensional vehicle detection method based on key point regression according to claim 1, characterized in that step S02 comprises, after the feature map of the target detection region is extracted, a step S021 of fusing the feature maps.
3. The three-dimensional vehicle detection method based on key point regression according to claim 2, characterized in that after the detection region of the tracked target is determined in step S01 and before the feature map of the target detection region is extracted in step S02, the method comprises a scale-space conversion step S01a: convert the feature map of each target detection region into a standard feature map of the same scale, then perform key point detection, and obtain the relative position of each key point in the feature map.
4. The three-dimensional vehicle detection method based on key point regression according to claim 3, characterized in that before the relative position of each key point of the tracked target in the feature map is obtained, the method comprises a step S01b of down-sampling the standard feature map: obtain a down-sampling-layer feature map and take it as input to train a down-sampling localization network; then feed the down-sampling-layer feature map into the down-sampling localization network again, and map the output key point location information back into the standard feature map.
5. The three-dimensional vehicle detection method based on key point regression according to claim 4, characterized by further comprising a step S01c: apply a mask operation to the standard feature map onto which the down-sampled key point positions have been mapped, and train a standard-feature localization network so that it only learns the mapped positions in the standard feature map where the key points lie.
6. The three-dimensional vehicle detection method based on key point regression according to claim 5, characterized in that the upper-left corner of the target detection region is taken as its reference point, giving the parameters (X, Y); the width of the target detection region is denoted W and its height H, so that the target detection region is described by the parameters (X, Y, W, H).
7. The three-dimensional vehicle detection method based on key point regression according to claim 1, characterized in that, in the network structure, the base section uses a resnet50 network structure and the detection section uses an RRC network structure.
8. The three-dimensional vehicle detection method based on key point regression according to claim 1, characterized in that the network structure of the key point detection section obtains the low-layer feature maps of the base section, passes the window of each feature map through an RoI pooling layer to generate fixed-size feature maps, fuses the fixed-size feature maps with a concat operation, and obtains the standard feature map through at least one convolution and pooling operation; the standard feature map and the preset key point labels of the tracked target are input together to generate the first loss function; the standard feature map undergoes at least one convolution and pooling operation to obtain the down-sampling-layer feature map; the down-sampling-layer feature map and the labels of the tracked target key points in the feature map, after a mask operation, are taken together as input to generate the second loss function.
9. A three-dimensional vehicle detection system based on key point regression, characterized by comprising a key point label marking module, a target detection module, a feature extraction module, a key point first location generation module, a key point second location information generation module, and a loss function generation module;
the target detection module is configured to obtain the tracked target in the original image and to derive the detection region from the tracked target;
the key point label marking module is configured to mark the key points of the tracked target and to output key point labels;
the feature extraction module is configured to extract features from the detection region and to generate feature maps;
the key point first location generation module is configured to generate a key point first location array from the pixel locations in the detection region at which the key point labels lie;
the key point second location generation module is configured to generate a key point second location array from the locations of the feature map grid cells at which the key point labels lie;
the loss function generation module is configured to obtain the loss function as the product of a coefficient and the sum of the differences between corresponding elements of the key point first location array and the key point second location array, so as to correct the network structure.
10. A three-dimensional vehicle detection terminal based on key point regression, characterized by comprising a processor and a memory, wherein the memory stores program instructions and the processor runs the program instructions to realize the steps of the method according to any one of claims 1 to 8.
11. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, realizes the steps of the method according to any one of claims 1 to 8.
CN201810834980.3A 2018-07-26 2018-07-26 Three-dimensional vehicle detection method, system, terminal and storage medium based on key point regression Pending CN109190662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810834980.3A CN109190662A (en) 2018-07-26 2018-07-26 Three-dimensional vehicle detection method, system, terminal and storage medium based on key point regression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810834980.3A CN109190662A (en) 2018-07-26 2018-07-26 Three-dimensional vehicle detection method, system, terminal and storage medium based on key point regression

Publications (1)

Publication Number Publication Date
CN109190662A true CN109190662A (en) 2019-01-11

Family

ID=64937632

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810834980.3A Pending CN109190662A (en) 2018-07-26 2018-07-26 Three-dimensional vehicle detection method, system, terminal and storage medium based on key point regression

Country Status (1)

Country Link
CN (1) CN109190662A (en)


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111507156A (en) * 2019-01-30 2020-08-07 斯特拉德视觉公司 Method and device for detecting occupation of vehicle by using key points of passengers
CN111507156B (en) * 2019-01-30 2023-09-15 斯特拉德视觉公司 Method and device for detecting vehicle occupancy by utilizing key points of passengers
CN109902629A (en) * 2019-03-01 2019-06-18 成都康乔电子有限责任公司 A kind of real-time vehicle target detection model under vehicles in complex traffic scene
CN112990050A (en) * 2021-03-26 2021-06-18 清华大学 Monocular 3D target detection method based on lightweight characteristic pyramid structure
CN112990050B (en) * 2021-03-26 2021-10-08 清华大学 Monocular 3D target detection method based on lightweight characteristic pyramid structure
CN113963060A (en) * 2021-09-22 2022-01-21 腾讯科技(深圳)有限公司 Vehicle information image processing method and device based on artificial intelligence and electronic equipment
CN113963060B (en) * 2021-09-22 2022-03-18 腾讯科技(深圳)有限公司 Vehicle information image processing method and device based on artificial intelligence and electronic equipment

Similar Documents

Publication Publication Date Title
CN109117848B (en) Text line character recognition method, device, medium and electronic equipment
CN109190662A (en) Three-dimensional vehicle detection method, system, terminal and storage medium based on key point regression
CN110148148A (en) A kind of training method, model and the storage medium of the lower edge detection model based on target detection
Min et al. New approach to vehicle license plate location based on new model YOLO‐L and plate pre‐identification
CN109271842A (en) Generic object detection method, system, terminal and storage medium based on key point regression
CN111291825A (en) Focus classification model training method and device, computer equipment and storage medium
CN110176017A (en) A kind of Model for Edge Detection based on target detection, method and storage medium
Shu et al. LVC-Net: Medical image segmentation with noisy label based on local visual cues
CN113808251B (en) Dense reconstruction method, system, device and medium based on semantic segmentation
CN115512169B (en) Weak supervision semantic segmentation method and device based on gradient and region affinity optimization
Xu et al. Fast ship detection combining visual saliency and a cascade CNN in SAR images
Panda et al. Kernel density estimation and correntropy based background modeling and camera model parameter estimation for underwater video object detection
CN115035367A (en) Picture identification method and device and electronic equipment
CN110879972A (en) Face detection method and device
CN109190467A (en) Multi-object detection method, system, terminal and storage medium based on key point regression
CN114972492A (en) Position and pose determination method and device based on aerial view and computer storage medium
CN113704276A (en) Map updating method and device, electronic equipment and computer readable storage medium
Ke et al. Dense small face detection based on regional cascade multi‐scale method
CN116843983A (en) Pavement disease recognition method, model training method, electronic equipment and medium
Li et al. Stereo neural vernier caliper
Dong et al. SiameseDenseU‐Net‐based Semantic Segmentation of Urban Remote Sensing Images
CN113096104A (en) Training method and device of target segmentation model and target segmentation method and device
CN114118127A (en) Visual scene mark detection and identification method and device
Barra et al. Can Existing 3D Monocular Object Detection Methods Work in Roadside Contexts? A Reproducibility Study
Tao et al. 3d semantic vslam of indoor environment based on mask scoring rcnn

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination