CN108985147A - Object detection method and device - Google Patents
- Publication number
- CN108985147A (publication number); CN201810552844.5A (application number)
- Authority
- CN
- China
- Prior art keywords
- region
- input picture
- candidate
- probability distribution
- candidate area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
An embodiment of the present invention provides an object detection method and device in the technical field of computer vision. The method and device obtain a target probability distribution map for each input image through multiple pre-established convolutional neural networks, and select from each input image the regions that satisfy a preset condition, i.e., the regions containing the target, as target candidate regions. Because the input images are screened repeatedly by multiple convolutional neural networks in the course of obtaining the final target candidate regions, the number of candidate regions handled during processing is reduced and the probability that the final output candidate regions contain the target is higher, so that detection efficiency is improved along with detection accuracy.
Description
Technical field
The present invention relates to the technical field of computer vision, and in particular to an object detection method and device.
Background technique
Hand detection, i.e., locating the position of a hand in an image, is an important research direction in the field of computer vision. Application areas such as home entertainment and intelligent control place ever higher demands on human-computer interaction, and controlling a device directly by hand improves the convenience and naturalness of that interaction.
Current hand detection mainly relies on methods that use skin color, contours, or depth maps. These methods have poor robustness and perform badly against complex backgrounds; some require high storage and computing capability, which places demands on hardware; and some require special acquisition equipment for imaging. These drawbacks have limited the adoption of the technology.
Summary of the invention
The purpose of the present invention is to provide an object detection method and device, so that the detection results for a target are more accurate and the detection process is faster.
To achieve the goals above, technical solution used in the embodiment of the present invention is as follows:
In a first aspect, an embodiment of the invention provides an object detection method. The object detection method includes:
obtaining multiple input images of different sizes;
obtaining a target probability distribution map of each input image based on multiple pre-established convolutional neural networks;
taking the regions that satisfy a preset condition, selected from each input image based on the target probability distribution map, as target candidate regions; and
outputting the target candidate regions.
In a second aspect, an embodiment of the invention also provides an object detection device. The object detection device includes:
a data acquisition unit for obtaining multiple input images of different sizes;
a target probability acquisition unit for obtaining a target probability distribution map of each input image based on multiple pre-established convolutional neural networks;
a target candidate region selection unit for taking the regions that satisfy a preset condition, selected from each input image based on the target probability distribution map, as target candidate regions; and
an output unit for outputting the target candidate regions.
In the object detection method and device provided by embodiments of the present invention, multiple pre-established convolutional neural networks produce a target probability distribution map for each input image, so that the regions selected from each input image as satisfying a preset condition, i.e., the regions containing the target, serve as target candidate regions. Because the input images are screened repeatedly by multiple convolutional neural networks in the course of obtaining the final target candidate regions, the number of candidate regions during processing is reduced and the probability that the final output candidate regions contain the target is higher, which improves detection efficiency while also improving detection accuracy.
To make the above objects, features, and advantages of the present invention clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Detailed description of the invention
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and are therefore not to be considered limiting of its scope. For those of ordinary skill in the art, other relevant drawings can be obtained from these drawings without creative effort.
Fig. 1 shows a functional block diagram of a server provided by an embodiment of the present invention.
Fig. 2 shows a flow chart of an object detection method provided by an embodiment of the present invention.
Fig. 3 shows one detailed flow chart of step S203 in Fig. 2 provided by an embodiment of the present invention.
Fig. 4 shows another detailed flow chart of step S203 in Fig. 2 provided by an embodiment of the present invention.
Fig. 5 shows a functional block diagram of an object detection device provided by an embodiment of the present invention.
Reference numerals: 100 - server; 111 - memory; 112 - processor; 113 - communication unit; 200 - object detection device; 210 - data acquisition unit; 220 - preprocessing unit; 230 - target probability distribution map generation unit; 240 - target candidate region determination unit; 250 - comparison unit; 260 - merging unit; 270 - output unit.
Specific embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. The components of the embodiments, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of different configurations. Accordingly, the following detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed invention but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
It should also be noted that similar labels and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings. In the description of the present invention, the terms "first", "second", and the like are used only to distinguish one description from another and are not to be understood as indicating or implying relative importance.
Referring to Fig. 1, Fig. 1 shows a functional block diagram of a server 100 that can be used in an embodiment of the present invention. The server includes an object detection device 200, a memory 111, a storage controller, one or more processors 112 (only one is shown in the figure), and a communication unit 113. These components communicate with one another through one or more communication buses/signal lines. The object detection device 200 includes at least one software functional unit that can be stored in the memory 111 in the form of software or firmware, or built into the operating system (OS) of the server 100.
The memory 111 can be used to store software programs and units, such as the program instructions/units corresponding to the object detection method and device in the embodiments of the present invention. The processor 112 runs the object detection device 200 stored in the memory 111, i.e., the software programs and units of the method, thereby executing various functional applications and data processing, such as the object detection method provided by the embodiments of the present invention. The memory 111 may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), or electrically erasable programmable read-only memory (EEPROM). Access to the memory 111 by the processor 112 and other possible components is carried out under the control of the storage controller.
The communication unit 113 is used to establish a communication connection between the server 100 and other communication terminals over a network, and to send and receive data over that network.
It should be understood that the structure shown in Fig. 1 is only illustrative; the server 100 may include more or fewer components than shown in Fig. 1, or have a configuration different from that shown in Fig. 1. Each component shown in Fig. 1 can be implemented in hardware, software, or a combination thereof.
First embodiment
An embodiment of the present invention provides an object detection method for detecting whether an input image contains target features. Referring to Fig. 2, which is a flow chart of the object detection method provided by an embodiment of the present invention, the object detection method includes the following steps.
Step S201: obtain multiple input images of different sizes.
It should be noted that, in the present embodiment, the multiple input images of different sizes are obtained by scaling the same image at different ratios.
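The multi-scale input described above amounts to an image pyramid. The helper below is a minimal illustrative sketch, not part of the patent's implementation: it uses nearest-neighbor resampling in NumPy, and the scale factors are assumptions chosen for demonstration.

```python
import numpy as np

def resize_nearest(img, scale):
    """Rescale an H x W (or H x W x C) image by nearest-neighbor sampling."""
    h, w = img.shape[:2]
    nh, nw = max(1, int(round(h * scale))), max(1, int(round(w * scale)))
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    return img[rows[:, None], cols[None, :]]

def image_pyramid(img, scales=(1.0, 0.7, 0.5)):
    """Build the 'multiple input images of different sizes' from one image."""
    return [resize_nearest(img, s) for s in scales]

image = np.arange(100 * 80, dtype=np.float32).reshape(100, 80)
pyramid = image_pyramid(image)
print([p.shape for p in pyramid])  # [(100, 80), (70, 56), (50, 40)]
```

Each pyramid level is then fed to the same detector, so targets of different sizes fall within the detector's fixed receptive field at some scale.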
Step S202: denoise and normalize each input image.
Since an input image is subject to interference from the imaging device and ambient noise during digitization and transmission, most input images contain noise.
Denoising each input image improves its quality and clarity, which reduces errors when the server 100 runs detection on the input images and makes the server's detection results more accurate.
Normalizing each input image constrains the sizes and value ranges of the multiple input images to a certain range, which helps the server 100 converge faster while detecting targets and improves detection efficiency.
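A minimal sketch of the denoising and normalization in step S202, under assumed choices: a 3x3 mean filter for denoising and a shift of 8-bit pixel values into [-1, 1] for normalization. The patent fixes neither the filter nor the range, so both are illustrative.

```python
import numpy as np

def denoise_mean3(img):
    """Suppress pixel noise with a 3x3 box (mean) filter; edges are padded."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

def normalize(img):
    """Map 8-bit pixel values into [-1, 1] (assumed input range 0..255)."""
    return (img.astype(np.float64) - 127.5) / 127.5

noisy = np.full((5, 5), 100.0)
noisy[2, 2] = 200.0                          # a single noise spike
clean = denoise_mean3(noisy)
print(clean[2, 2])                           # spike averaged down: ~111.1
print(normalize(np.array([0, 127.5, 255])))  # [-1.  0.  1.]
```

In practice a median filter or Gaussian blur could replace the box filter; the control flow is the same.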
Step S203: obtain a target probability distribution map of each input image based on multiple pre-established convolutional neural networks.
Referring to Fig. 3, which is a detailed flow chart of step S203, step S203 includes:
Sub-step S2031: obtain a first probability distribution map of each input image based on a pre-established first convolutional neural network.
It will be appreciated that, through the pre-established first convolutional neural network, a first probability distribution map corresponding to each input image can be obtained, giving the distribution of target features over the regions of the input image and making it convenient for the server 100 to further select, from the input image, the regions that contain the target with high probability.
Sub-step S2032: take the regions that satisfy a first preset condition, selected from each input image based on the first probability distribution map, as first candidate regions.
It should be noted that each first candidate region is a region containing target features cropped from one of the multiple input images. In addition, different first candidate regions may have different sizes: the size of a first candidate region depends on the size of the input image from which it is cropped. Specifically, the smaller the input image, the larger the corresponding first candidate region; the larger the input image, the smaller the corresponding first candidate region.
In a preferred embodiment, if the probability value corresponding to a region of the input image is greater than a preset first threshold, that region is chosen as a first candidate region.
Through this step, the server 100 can screen out regions of the input image that do not contain the target pattern, or whose probability of containing target features is low.
A region of the input image whose probability value is greater than the preset first threshold is relatively likely to contain target features and is therefore chosen as a first candidate region; this streamlines the server's detection of the input image and makes the final result more accurate. A region whose probability value is less than or equal to the threshold has a low probability of containing target features and is therefore rejected, which reduces the amount of computation in the server's detection of the input image and improves detection efficiency.
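Sub-step S2032 can be illustrated as thresholding a probability map. In the sketch below, the 12-pixel window and stride of 2 are assumptions in the style of sliding-window detectors, not numbers from the patent; each probability-map cell above the threshold is mapped back to a window in the original image by dividing by the pyramid scale, which is why a smaller input image yields a larger candidate region.

```python
import numpy as np

WIN, STRIDE = 12, 2   # assumed detector window and output stride

def candidates_from_prob_map(prob_map, scale, threshold):
    """Select cells above `threshold` and map them to boxes in the
    coordinates of the original (unscaled) image."""
    boxes = []
    ys, xs = np.where(prob_map > threshold)
    for y, x in zip(ys, xs):
        x0, y0 = x * STRIDE / scale, y * STRIDE / scale
        boxes.append((x0, y0, x0 + WIN / scale, y0 + WIN / scale,
                      float(prob_map[y, x])))
    return boxes

prob = np.zeros((4, 4))
prob[1, 2] = 0.9          # one confident cell
boxes = candidates_from_prob_map(prob, scale=0.5, threshold=0.6)
print(boxes)  # [(8.0, 4.0, 32.0, 28.0, 0.9)]
```

Note how the half-scale image (scale 0.5) produces a 24-pixel box in original coordinates from a 12-pixel detector window.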
Sub-step S2033: obtain a second probability distribution map of each first candidate region based on a pre-established second convolutional neural network.
It should be noted that the second convolutional neural network is larger than the first convolutional neural network and therefore has higher recognition accuracy; it can further screen the first candidate regions, achieving the effect of improving detection accuracy.
Sub-step S2034: take the regions that satisfy a second preset condition, selected from each first candidate region based on the second probability distribution map, as second candidate regions.
In a preferred embodiment, if the probability value corresponding to a region of a first candidate region is greater than a preset second threshold, that region is chosen as a second candidate region.
Through this step, the server 100 can further reduce the number of target candidate regions, thereby reducing the amount of computation and improving detection efficiency.
Sub-step S2035: obtain the target probability distribution map of each second candidate region based on a pre-established third convolutional neural network.
It should be noted that the third convolutional neural network is larger than the second convolutional neural network and therefore has higher recognition accuracy; it can further screen the second candidate regions, achieving the effect of improving detection accuracy.
It will be appreciated that, in the present embodiment, the convolutional neural networks include the first, second, and third convolutional neural networks, whose sizes increase in turn. It should be noted that, in other embodiments, more convolutional neural networks than the first, second, and third may also be included; no particular limitation is imposed here.
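The three-network cascade described above can be sketched as successive filters, each keeping only the regions its network scores above a threshold. The scoring functions below are stand-ins (assumptions) for the first, second, and third convolutional neural networks; the point is the control flow of the cascade, not the models themselves.

```python
def cascade(regions, stages):
    """Run regions through (score_fn, threshold) stages in order;
    each stage keeps only regions its network scores above the threshold."""
    for score_fn, threshold in stages:
        regions = [r for r in regions if score_fn(r) > threshold]
    return regions

# Stand-in "networks" scoring an (x0, y0, x1, y1, quality) tuple.
net1 = lambda r: r[4]            # small, cheap, coarse
net2 = lambda r: r[4] * 0.9      # larger net, stricter score
net3 = lambda r: r[4] * 0.8      # largest net, strictest score

regions = [(0, 0, 12, 12, 0.95), (4, 4, 16, 16, 0.70), (8, 8, 20, 20, 0.30)]
kept = cascade(regions, [(net1, 0.5), (net2, 0.55), (net3, 0.6)])
print(kept)  # [(0, 0, 12, 12, 0.95)]
```

Because the expensive later networks only see regions the cheap early networks let through, total computation drops while accuracy rises, which is the efficiency argument the patent makes.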
It should also be noted that, in a preferred embodiment, between sub-step S2032 and sub-step S2033, step S203 may further include (as shown in Fig. 4):
Sub-step S2036: compare the multiple first candidate regions with one another.
Sub-step S2037: merge any two first candidate regions whose degree of overlap is greater than a preset fifth threshold.
Through sub-steps S2036 and S2037, the server 100 can further reduce the number of first candidate regions and thus further reduce the amount of computation required when the second convolutional neural network processes the first candidate regions, improving the detection efficiency of the server 100.
Between sub-step S2034 and sub-step S2035, step S203 may further include:
Sub-step S2038: compare the multiple second candidate regions with one another.
Sub-step S2039: merge any two second candidate regions whose degree of overlap is greater than a preset sixth threshold.
Similarly, through sub-steps S2038 and S2039, the server 100 can further reduce the number of second candidate regions and thus further reduce the amount of computation required when the third convolutional neural network processes the second candidate regions, improving the detection efficiency of the server 100.
It will be appreciated that, in other preferred embodiments, step S203 may also omit sub-steps S2036, S2037, S2038, and S2039; no particular limitation is imposed here.
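The overlap-based merging in sub-steps S2036/S2037 (and likewise S2038/S2039) can be sketched using intersection-over-union as the degree of overlap. IoU is one common choice assumed here; the patent does not name a specific overlap measure. Boxes whose IoU exceeds the threshold are merged into their bounding union.

```python
def iou(a, b):
    """Intersection over union of boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def merge_overlapping(boxes, threshold):
    """Repeatedly merge any two boxes whose overlap exceeds `threshold`."""
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if iou(boxes[i], boxes[j]) > threshold:
                    a, b = boxes[i], boxes[j]
                    boxes[i] = (min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3]))
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
print(merge_overlapping(boxes, threshold=0.5))
# [(0, 0, 11, 11), (50, 50, 60, 60)]
```

Non-maximum suppression, which discards the lower-scoring box instead of taking the union, is the other standard way to realize this step.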
Step S204: take the regions that satisfy a preset condition, selected from each input image based on the target probability distribution map, as target candidate regions.
In a preferred embodiment, step S204 includes: taking the regions that satisfy a third preset condition, selected from each second candidate region based on the target probability distribution map, as target candidate regions.
In a preferred embodiment, satisfying the third preset condition means: if the probability value corresponding to a region of a second candidate region is greater than a preset third threshold, that region is chosen as a target candidate region.
Through this step, the server 100 can again reduce the number of target candidate regions to obtain accurate target candidate regions, thereby reducing the amount of computation and improving detection efficiency.
Step S205: compare the multiple target candidate regions with one another.
Step S206: merge any two target candidate regions whose degree of overlap is greater than a preset fourth threshold.
Similarly, through steps S205 and S206, the server 100 can further reduce the number of target candidate regions and thus improve its detection efficiency.
Step S207: output the target candidate regions.
Second embodiment
An embodiment of the present invention provides an object detection device 200 applied to the server 100. It should be noted that the basic principles and technical effects of the object detection device 200 provided by this embodiment are the same as those of the embodiment above; for brevity, where this embodiment does not mention a point, reference can be made to the corresponding content in the embodiment above.
Referring to Fig. 5, which is a functional block diagram of the object detection device 200, the object detection device 200 includes: a data acquisition unit 210, a preprocessing unit 220, a target probability distribution map generation unit 230, a target candidate region determination unit 240, a comparison unit 250, a merging unit 260, and an output unit 270.
The data acquisition unit 210 is used to obtain multiple input images of different sizes.
It will be appreciated that the data acquisition unit 210 can be used to execute step S201.
The preprocessing unit 220 is used to denoise and normalize each input image.
It will be appreciated that the preprocessing unit 220 can be used to execute step S202.
The target probability distribution map generation unit 230 is used to obtain the target probability distribution map of each input image based on multiple pre-established convolutional neural networks.
Specifically, the target probability distribution map generation unit 230 first obtains the first probability distribution map of each input image based on the pre-established first convolutional neural network.
It will be appreciated that, through the pre-established first convolutional neural network, a first probability distribution map corresponding to each input image can be obtained, giving the distribution of target features over the regions of the input image and making it convenient for the server 100 to further select, from the input image, the regions that contain the target with high probability.
The target probability distribution map generation unit 230 then takes the regions that satisfy the first preset condition, selected from each input image based on the first probability distribution map, as first candidate regions.
It should be noted that each first candidate region is a region containing target features cropped from one of the multiple input images. In addition, different first candidate regions may have different sizes: the size of a first candidate region depends on the size of the input image from which it is cropped. Specifically, the smaller the input image, the larger the corresponding first candidate region; the larger the input image, the smaller the corresponding first candidate region.
In a preferred embodiment, if the probability value corresponding to a region of the input image is greater than the preset first threshold, that region is chosen as a first candidate region.
Through this step, the server 100 can screen out regions of the input image that do not contain the target pattern, or whose probability of containing target features is low.
A region of the input image whose probability value is greater than the preset first threshold is relatively likely to contain target features and is therefore chosen as a first candidate region; this streamlines the server's detection of the input image and makes the final result more accurate. A region whose probability value is less than or equal to the threshold has a low probability of containing target features and is therefore rejected, which reduces the amount of computation in the server's detection of the input image and improves detection efficiency.
The target probability distribution map generation unit 230 then obtains the second probability distribution map of each first candidate region based on the pre-established second convolutional neural network.
The target probability distribution map generation unit 230 is also used to take the regions that satisfy the second preset condition, selected from each first candidate region based on the second probability distribution map, as second candidate regions.
The target probability distribution map generation unit 230 is also used to obtain the target probability distribution map of each second candidate region based on the pre-established third convolutional neural network.
It will be appreciated that the target probability distribution map generation unit 230 can be used to execute step S203 and sub-steps S2031, S2032, S2033, S2034, S2035, S2036, S2037, S2038, and S2039.
The target candidate region determination unit 240 is used to take the regions that satisfy the preset condition, selected from each input image based on the target probability distribution map, as target candidate regions.
In a preferred embodiment, the target candidate region determination unit 240 takes the regions that satisfy the third preset condition, selected from each second candidate region based on the target probability distribution map, as target candidate regions.
In a preferred embodiment, satisfying the third preset condition means: if the probability value corresponding to a region of a second candidate region is greater than the preset third threshold, that region is chosen as a target candidate region.
Through the target candidate region determination unit 240, the server 100 can again reduce the number of target candidate regions to obtain accurate target candidate regions, thereby reducing the amount of computation and improving detection efficiency.
It will be appreciated that the target candidate region determination unit 240 can be used to execute step S204.
The comparison unit 250 is used to compare the multiple target candidate regions with one another.
It will be appreciated that the comparison unit 250 can be used to execute step S205.
The merging unit 260 is used to merge any two target candidate regions whose degree of overlap is greater than the preset fourth threshold.
It will be appreciated that the merging unit 260 can be used to execute step S206.
The output unit 270 is used to output the target candidate regions.
It will be appreciated that the output unit 270 can be used to execute step S207.
It should be noted that, in the embodiments of the present invention, the target image refers to a hand image. Of course, in other embodiments, the target image may be something else.
In conclusion a kind of object detection method provided in an embodiment of the present invention and device, pass through multiple pre-established volumes
Product neural network obtains the destination probability distribution map of each input picture, so that meeting of filtering out from each input picture is pre-
If the region of condition as object candidate area, as includes mesh target area;Due to obtaining final object candidate area
During, multiplex screening is carried out to input picture by multiple convolutional neural networks, so that the target in treatment process
Candidate region quantity is reduced, and the object candidate area of final output include target probability it is bigger, to improve pair
While the detection accuracy of target, detection efficiency is also improved.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any other variants are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes it.
The foregoing is only a preferred embodiment of the present invention and is not intended to restrict it; for those skilled in the art, the invention may be modified and varied in many ways. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in its protection scope. It should also be noted that similar labels and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings.
Claims (10)
1. An object detection method, characterized in that the object detection method includes:
obtaining multiple input images of different sizes;
obtaining a target probability distribution map of each input image based on multiple pre-established convolutional neural networks;
taking the regions that satisfy a preset condition, selected from each input image based on the target probability distribution map, as target candidate regions; and
outputting the target candidate regions.
2. The object detection method according to claim 1, characterized in that the step of obtaining the target probability distribution map of each input image based on the pre-established convolutional neural networks includes:
obtaining a first probability distribution map of each input image based on a pre-established first convolutional neural network;
taking the regions that satisfy a first preset condition, selected from each input image based on the first probability distribution map, as first candidate regions;
obtaining a second probability distribution map of each first candidate region based on a pre-established second convolutional neural network;
taking the regions that satisfy a second preset condition, selected from each first candidate region based on the second probability distribution map, as second candidate regions; and
obtaining the target probability distribution map of each second candidate region based on a pre-established third convolutional neural network;
and the step of taking the regions that satisfy the preset condition, selected from each input image based on the target probability distribution map, as target candidate regions includes:
taking the regions that satisfy a third preset condition, selected from each second candidate region based on the target probability distribution map, as target candidate regions.
3. The object detection method according to claim 2, characterized in that the step of taking the regions screened from each input image based on the first probability distribution map that satisfy the first preset condition as the first candidate regions comprises:
if the probability value corresponding to a region contained in the input image is greater than a preset first threshold, selecting that region as a first candidate region;
the step of taking the regions screened from each first candidate region based on the second probability distribution map that satisfy the second preset condition as the second candidate regions comprises:
if the probability value corresponding to a region contained in the first candidate region is greater than a preset second threshold, selecting that region as a second candidate region;
and the step of taking the regions screened from each second candidate region based on the target probability distribution map that satisfy the third preset condition as the object candidate regions comprises:
if the probability value corresponding to a region contained in the second candidate region is greater than a preset third threshold, selecting that region as an object candidate region.
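As an illustrative sketch only (not the patented implementation), the three-stage screening of claims 2-3 can be rendered in Python, with each pre-established network stubbed out as a callable that maps a region to a probability value; the function names and default thresholds are hypothetical:

```python
def select_regions(prob_values, regions, threshold):
    # Keep each region whose probability value exceeds the preset threshold.
    return [r for r, p in zip(regions, prob_values) if p > threshold]

def cascade_detect(image_regions, net1, net2, net3, t1=0.6, t2=0.7, t3=0.8):
    # Stage 1: first probability map -> first candidate regions.
    first = select_regions([net1(r) for r in image_regions], image_regions, t1)
    # Stage 2: second probability map over the survivors only.
    second = select_regions([net2(r) for r in first], first, t2)
    # Stage 3: target probability map -> object candidate regions.
    return select_regions([net3(r) for r in second], second, t3)
```

Because each stage runs only on the survivors of the previous one, the later (and typically heavier) networks evaluate far fewer regions than the first.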
4. The object detection method according to claim 1, characterized in that, after the step of taking the regions screened from each input image based on the target probability distribution map that satisfy the preset condition as object candidate regions, the object detection method further comprises:
comparing the plurality of object candidate regions with one another;
merging any two object candidate regions whose degree of overlap is greater than a preset fourth threshold.
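The pairwise comparison and merging of claim 4 resembles the overlap handling used in non-maximum-suppression pipelines. A minimal sketch, assuming boxes are `(x1, y1, x2, y2)` tuples and using intersection-over-union as the degree of overlap (the claim does not specify the overlap measure or the merge rule; both are assumptions here):

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def merge_candidates(boxes, overlap_threshold=0.5):
    # Compare candidates pairwise; replace any two whose overlap exceeds
    # the fourth preset threshold with their enclosing box, and repeat
    # until no pair overlaps enough to merge.
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if iou(boxes[i], boxes[j]) > overlap_threshold:
                    a, b = boxes[i], boxes[j]
                    boxes[i] = (min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3]))
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes
```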
5. The object detection method according to claim 1, characterized in that, before the step of obtaining the target probability distribution map of each input image based on the pre-established plurality of convolutional neural networks, the object detection method further comprises:
denoising and normalizing each input image.
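Claim 5 leaves the denoising and normalization methods unspecified. One possible sketch, using a 3x3 mean filter for denoising and min-max scaling to [0, 1] for normalization (both concrete choices are assumptions, not from the patent), with images as nested lists of pixel values:

```python
def denoise(image):
    # 3x3 mean filter as a stand-in for the unspecified denoising step.
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def normalize(image):
    # Min-max normalize pixel values to the [0, 1] range.
    flat = [v for row in image for v in row]
    lo, hi = min(flat), max(flat)
    scale = (hi - lo) or 1.0  # guard against constant images
    return [[(v - lo) / scale for v in row] for row in image]

def preprocess(images):
    # Claim 5: denoise, then normalize, each input image.
    return [normalize(denoise(img)) for img in images]
```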
6. An object detection device, characterized in that the object detection device comprises:
a data acquisition unit, configured to acquire a plurality of input images of different sizes;
a target probability acquisition unit, configured to obtain the target probability distribution map of each input image based on a plurality of pre-established convolutional neural networks;
an object candidate region selection unit, configured to take the regions screened from each input image based on the target probability distribution map that satisfy a preset condition as object candidate regions;
and an output unit, configured to output the object candidate regions.
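Under the assumption of a pure-software embodiment, the four units of claim 6 could be mapped onto methods of a single class; `probability_fn` and `threshold` below are hypothetical stand-ins for the pre-established networks and the preset condition, and the claim itself does not dictate this division:

```python
class ObjectDetectionDevice:
    # Hypothetical sketch of the four claimed units as methods of one class.

    def __init__(self, probability_fn, threshold=0.5):
        self.probability_fn = probability_fn  # stand-in for the CNNs
        self.threshold = threshold            # stand-in for the preset condition

    def acquire(self, images):
        # Data acquisition unit: a plurality of input images of different sizes.
        return list(images)

    def target_probabilities(self, images):
        # Target probability acquisition unit.
        return [self.probability_fn(img) for img in images]

    def select_candidates(self, images):
        # Object candidate region selection unit: keep regions whose
        # probability satisfies the preset condition.
        probs = self.target_probabilities(images)
        return [img for img, p in zip(images, probs) if p > self.threshold]

    def output(self, images):
        # Output unit: run the full pipeline and return the candidates.
        return self.select_candidates(self.acquire(images))
```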
7. The object detection device according to claim 6, characterized in that the object candidate region selection unit is configured to obtain a first probability distribution map of each input image based on a pre-established first convolutional neural network;
the object candidate region selection unit is further configured to take the regions screened from each input image based on the first probability distribution map that satisfy a first preset condition as first candidate regions;
the object candidate region selection unit is further configured to obtain a second probability distribution map of each first candidate region based on a pre-established second convolutional neural network;
the object candidate region selection unit is further configured to take the regions screened from each first candidate region based on the second probability distribution map that satisfy a second preset condition as second candidate regions;
the object candidate region selection unit is further configured to obtain the target probability distribution map of each second candidate region based on a pre-established third convolutional neural network;
and the object candidate region selection unit is further configured to take the regions screened from each second candidate region based on the target probability distribution map that satisfy a third preset condition as the object candidate regions.
8. The object detection device according to claim 6, characterized in that the object candidate region selection unit is further configured to select a region contained in the input image as a first candidate region when the probability value corresponding to that region is greater than a preset first threshold, and to select a region contained in the first candidate region as a second candidate region when the probability value corresponding to that region is greater than a preset second threshold.
9. The object detection device according to claim 6, characterized in that the object detection device further comprises:
a comparing unit, configured to compare the plurality of object candidate regions with one another;
a merging unit, configured to merge any two object candidate regions whose degree of overlap is greater than a preset fourth threshold.
10. The object detection device according to claim 6, characterized in that the object detection device further comprises:
a preprocessing unit, configured to denoise and normalize each input image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810552844.5A CN108985147A (en) | 2018-05-31 | 2018-05-31 | Object detection method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810552844.5A CN108985147A (en) | 2018-05-31 | 2018-05-31 | Object detection method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108985147A true CN108985147A (en) | 2018-12-11 |
Family
ID=64540281
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810552844.5A Withdrawn CN108985147A (en) | 2018-05-31 | 2018-05-31 | Object detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108985147A (en) |
2018
- 2018-05-31 CN CN201810552844.5A patent/CN108985147A/en not_active Withdrawn
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107871102A (en) * | 2016-09-23 | 2018-04-03 | Beijing Eyecool Technology Co., Ltd. | Face detection method and device |
CN107871134A (en) * | 2016-09-23 | 2018-04-03 | Beijing Eyecool Technology Co., Ltd. | Face detection method and device |
CN106845406A (en) * | 2017-01-20 | 2017-06-13 | Shenzhen Infinova Technology Co., Ltd. | Head and shoulder detection method and device based on a multitask cascaded convolutional neural network |
CN107527053A (en) * | 2017-08-31 | 2017-12-29 | Beijing Xiaomi Mobile Software Co., Ltd. | Object detection method and device |
CN108038455A (en) * | 2017-12-19 | 2018-05-15 | Institute of Automation, Chinese Academy of Sciences | Image recognition method for a bionic robotic peacock based on deep learning |
Non-Patent Citations (1)
Title |
---|
CHEN Bingqi et al., "Practical Digital Image Processing and Analysis" (《实用数字图像处理与分析》), 28 February 2014 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111373436A (en) * | 2018-12-18 | 2020-07-03 | SZ DJI Technology Co., Ltd. | Image processing method, terminal device and storage medium |
CN110210474A (en) * | 2019-04-30 | 2019-09-06 | Beijing SenseTime Technology Development Co., Ltd. | Object detection method and device, equipment and storage medium |
CN110210474B (en) * | 2019-04-30 | 2021-06-01 | Beijing SenseTime Technology Development Co., Ltd. | Target detection method and device, equipment and storage medium |
US11151358B2 (en) | 2019-04-30 | 2021-10-19 | Beijing Sensetime Technology Development Co., Ltd. | Target detection method and apparatus, device, and storage medium |
CN110132966A (en) * | 2019-05-14 | 2019-08-16 | Satellite Environment Application Center, Ministry of Ecology and Environment | Method and system for evaluating the risk of the spatial position of a soil pollution source |
CN110132966B (en) * | 2019-05-14 | 2021-09-10 | Satellite Environment Application Center, Ministry of Ecology and Environment | Method and system for evaluating risk of spatial position of soil pollution source |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106934397B (en) | Image processing method and device and electronic equipment | |
CN106845406A (en) | Head and shoulder detection method and device based on a multitask cascaded convolutional neural network | |
CN106650662B (en) | Target object shielding detection method and device | |
CN107122806A (en) | Nude picture detection method and device | |
CN110443212B (en) | Positive sample acquisition method, device, equipment and storage medium for target detection | |
CN108985147A (en) | Object detection method and device | |
CN109902617B (en) | Picture identification method and device, computer equipment and medium | |
CN112883902B (en) | Video detection method and device, electronic equipment and storage medium | |
CN106680775A (en) | Method and system for automatically identifying radar signal modulation modes | |
CN108985148A (en) | Hand keypoint detection method and device | |
CN112949767B (en) | Sample image increment, image detection model training and image detection method | |
CN108416343B (en) | Face image recognition method and device | |
CN105718848B (en) | Quality evaluation method and device for fingerprint image | |
CN110659659A (en) | Method and system for intelligently identifying and early warning pests | |
CN109086734A (en) | Method and device for locating a pupil image in an eye image | |
CN108205685A (en) | Video classification methods, visual classification device and electronic equipment | |
CN109886087B (en) | Living body detection method based on neural network and terminal equipment | |
CN106530311B (en) | Sectioning image processing method and processing device | |
CN112215271B (en) | Anti-occlusion target detection method and equipment based on multi-head attention mechanism | |
CN107240078A (en) | Lens articulation Method for Checking, device and electronic equipment | |
CN111950345B (en) | Camera identification method and device, electronic equipment and storage medium | |
CN115273123B (en) | Bill identification method, device and equipment and computer storage medium | |
CN111444788B (en) | Behavior recognition method, apparatus and computer storage medium | |
CN114220097A (en) | Anti-attack-based image semantic information sensitive pixel domain screening method and application method and system | |
CN112149570A (en) | Multi-person living body detection method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
WW01 | Invention patent application withdrawn after publication | Application publication date: 20181211 | |