Embodiments
The system architectures and business scenarios described in the embodiments of the present application are intended to illustrate the technical solutions of the embodiments of the present application more clearly, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application. A person of ordinary skill in the art will understand that, with the evolution of system architectures and the emergence of new business scenarios, the technical solutions provided by the embodiments of the present application remain equally applicable to similar technical problems.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, an illustration, or a description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present application should not be construed as preferable or more advantageous than other embodiments or designs. Rather, the words "exemplary" or "for example" are intended to present a related concept in a concrete manner.
It should also be noted that, in the embodiments of the present application, the terms "of", "corresponding" (English: corresponding, relevant), and "corresponding" (English: corresponding) may sometimes be used interchangeably. When their differences are not emphasized, the meanings they express are consistent.
An embodiment of the present application provides an object detection system. As shown in Fig. 1, the system includes an image capture device 11 and an object detection apparatus 12. The image capture device 11 is configured to perform image acquisition on a region to be detected and to send the collected image to the object detection apparatus 12. Exemplarily, the image capture device 11 may be one or more cameras that obtain two-dimensional information of an image, binocular cameras that obtain three-dimensional information, or the like. The object detection apparatus 12 is configured to receive the image from the image capture device 11 and to analyze and process the received image so as to perform target detection; the object detection apparatus 12 may be a device with a processing function, such as a server. For the specific implementations of the image capture device 11 and the object detection apparatus 12, reference may be made to the prior art, and details are not described here.
An embodiment of the present application provides an object detection method, which may be applied to the system shown in Fig. 1. When applied to the system shown in Fig. 1, the executive entity of the method may be the object detection apparatus 12 shown in Fig. 1. The following description takes the object detection apparatus 12 as the executive entity. As shown in Fig. 2, the method includes:
Step 101: obtain an image to be detected and the depth values of multiple pixels in the image to be detected.
The image to be detected is an image obtained by photographing the region to be detected. It may be the directly obtained image, or an image obtained after the directly obtained color image is subjected to processing such as gray-scale conversion and denoising.
Exemplarily, an image obtained by photographing the region to be detected with a device such as an ordinary camera or a mobile phone with a photographing function can serve as the image to be detected referred to in the present application.
The depth values may be obtained by acquiring the depth image corresponding to the image to be detected, and determining the depth value of each pixel of the image to be detected from the depth image.
A depth image, also referred to as a range image, is an image in which the pixel values are the distances (or depths) from an image collector, such as a binocular camera, to the points in the region to be detected. It can directly reflect the geometry of the visible surfaces of objects, that is, it can directly determine the contour line of each object. In a depth image, each pixel represents the distance, at a specific (x, y) coordinate in the field of view of the image collector, from the object to the camera plane. Therefore, each pixel in the depth image corresponds to a depth value, which represents the depth of each object in the region to be detected. Common methods for obtaining a depth image include laser radar depth imaging, computer stereo vision imaging, coordinate measuring machine methods, moiré fringe methods, structured light methods, and the like. For the specific implementation of the depth image, reference may be made to the prior art, and details are not described here.
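As a minimal, hedged illustration of the relationship between a depth image and per-pixel depth values (the array below is invented toy data, not sensor output), a depth image can be treated as a two-dimensional array whose element at a (row, col) coordinate is the distance from the camera plane to the object imaged at that pixel:

```python
import numpy as np

def pixel_depths(depth_image: np.ndarray, pixels):
    """Return the depth value of each (row, col) pixel in a depth image.

    depth_image: 2D array; element [row, col] is the distance from the
    camera plane to the object imaged at that pixel.
    """
    return [float(depth_image[r, c]) for r, c in pixels]

# A toy 3x3 depth map: a near object (depth 1.0) in front of a far
# background (depth 5.0).
depth = np.array([[5.0, 5.0, 5.0],
                  [5.0, 1.0, 5.0],
                  [5.0, 5.0, 5.0]])
print(pixel_depths(depth, [(1, 1), (0, 0)]))  # [1.0, 5.0]
```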
Step 102: perform target detection on the image to be detected with reference to the depth value of each pixel.
In one implementation of step 102, pixels of the image to be detected whose depth values lie in the same range may be divided into the same candidate region.
Exemplarily, as shown in Fig. 3, the present application respectively illustrates the image to be detected, the depth image, and the candidate region 1 and candidate region 2 obtained by performing step 102.
After the candidate regions are determined according to the depth values of the pixels, feature extraction may be performed directly on the candidate regions, and finally the candidate regions are classified using a trained classifier, thereby realizing target detection. For the specific implementation of this step, reference may be made to the prior art, and details are not described here.
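This first implementation can be sketched as follows. The function name and the bin boundaries are illustrative assumptions, not parameters specified by the application: pixels whose depth values fall into the same depth interval are collected into the same candidate region:

```python
import numpy as np

def candidate_regions_by_depth(depth_image: np.ndarray, bin_edges):
    """Group pixels into candidate regions by depth interval.

    Pixels whose depth values fall into the same interval of `bin_edges`
    are assigned to the same candidate region. Returns a dict mapping
    interval index -> list of (row, col) pixel coordinates.
    """
    bins = np.digitize(depth_image, bin_edges)
    regions = {}
    for (r, c), b in np.ndenumerate(bins):
        regions.setdefault(int(b), []).append((r, c))
    return regions

# Toy depth map: a near surface (depth ~1) and a far surface (depth ~6).
depth = np.array([[1.0, 1.2, 6.0],
                  [1.1, 1.3, 6.2],
                  [6.1, 6.3, 6.4]])
regions = candidate_regions_by_depth(depth, bin_edges=[0.0, 3.0, 10.0])
# Two candidate regions emerge: 4 near pixels and 5 far pixels.
print(sorted(len(v) for v in regions.values()))  # [4, 5]
```

In practice the intervals would be chosen from the depth distribution of the scene rather than fixed in advance.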
In another implementation of step 102, the candidate regions determined according to the depth values may be used as initial candidate regions, and the initial candidate regions may be further split to obtain target candidate regions. As shown in Fig. 4, step 102 may specifically be implemented as:
Step 201: divide pixels whose depth values lie in the same range into the same initial candidate region.
Step 202: split the initial candidate regions into target candidate regions according to image features.
Step 203: determine targets in the target candidate regions.
The image features include any one or more of the following: a color feature, a texture feature, a structural feature, a face feature, or a contour feature. For the specific implementation of splitting an initial candidate region into target candidate regions according to image features, reference may be made to the region proposal process performed on a region in the prior art.
For example, when the image feature is a contour line feature, step 202 may specifically be implemented as: detecting the contour lines in each initial candidate region; and when there is at least one initial candidate region containing at least two mutually independent closed contours, splitting each such initial candidate region into at least two target candidate regions, so that each target candidate region contains at most one closed contour.
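A hedged sketch of this splitting step, using connected-component analysis as a stand-in for full contour detection (the application does not mandate a particular contour algorithm, and the helper below is an invented name): each independent foreground blob in an initial candidate region is taken to carry its own closed contour, so the region is split into one target candidate region per blob:

```python
def split_by_components(mask):
    """Split a binary initial candidate region into target candidate regions.

    `mask` is a list of lists of 0/1. Each 4-connected component of 1s is
    treated as one independent closed contour and therefore becomes one
    target candidate region (a list of (row, col) pixels).
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    targets = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill one connected component.
                stack, component = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    component.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                targets.append(component)
    return targets

# An initial candidate region whose foreground holds two separate blobs:
region = [[1, 1, 0, 0, 1],
          [1, 1, 0, 0, 1],
          [0, 0, 0, 0, 0]]
print(len(split_by_components(region)))  # 2
```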
With reference to the multiple initial candidate regions obtained in Fig. 3, after each initial candidate region is split according to the contour line feature, initial candidate region 1 can be split into two target candidate regions, yielding the target candidate regions shown in Fig. 5.
As another example, when the image feature is a color feature, step 202 may specifically be implemented as: detecting the color features in each initial candidate region; and when there is at least one initial candidate region containing at least two colors, splitting each such initial candidate region into at least two target candidate regions, so that each target candidate region contains one color feature.
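Analogously, a minimal sketch of the color-based splitting (illustrative only; a real implementation would cluster similar colors rather than match labels exactly, and the names below are assumptions): pixels of an initial candidate region are grouped by color, and each color group becomes one target candidate region:

```python
def split_by_color(pixels, colors):
    """Split an initial candidate region into target regions by color.

    pixels: list of (row, col) coordinates in the initial candidate region.
    colors: dict mapping (row, col) -> color label for those pixels.
    Each distinct color yields one target candidate region.
    """
    targets = {}
    for p in pixels:
        targets.setdefault(colors[p], []).append(p)
    return list(targets.values())

# An initial candidate region containing two colors is split in two.
pixels = [(0, 0), (0, 1), (1, 0), (1, 1)]
colors = {(0, 0): "red", (0, 1): "red", (1, 0): "blue", (1, 1): "blue"}
print(len(split_by_color(pixels, colors)))  # 2
```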
It should be noted that the above description takes the contour line feature and the color feature only as examples; in practical applications, multiple features, such as color and texture, may be combined when dividing the target candidate regions.
In yet another implementation of step 102, as shown in Fig. 6, the method includes:
Step 301: divide the image to be detected into initial candidate regions according to image features.
The image features include any one or more of the following: a color feature, a texture feature, a structural feature, a face feature, or a contour feature. For the specific implementation of dividing the image to be detected into initial candidate regions according to image features, reference may be made to the region proposal process performed on a region in the prior art.
Step 302: remove the initial candidate regions in which the depth values of the pixels do not lie in the same range.
Step 303: determine targets in the remaining initial candidate regions.
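This third implementation can be sketched as follows (the depth-spread threshold is an assumed parameter, not one specified by the application): initial candidate regions are first produced from image features; any region whose pixels do not lie in the same depth range is then removed, and targets are sought in the remainder:

```python
import numpy as np

def filter_regions_by_depth(regions, depth_image, max_spread=0.5):
    """Keep only regions whose pixel depth values lie in the same range.

    A region is kept when the spread (max - min) of its pixels' depth
    values does not exceed `max_spread`; otherwise it is removed, as in
    step 302.
    """
    kept = []
    for pixels in regions:
        depths = [float(depth_image[r, c]) for r, c in pixels]
        if max(depths) - min(depths) <= max_spread:
            kept.append(pixels)
    return kept

depth = np.array([[1.0, 1.1],
                  [1.0, 6.0]])
regions = [[(0, 0), (0, 1)],   # consistent depth -> kept
           [(1, 0), (1, 1)]]   # mixes near and far pixels -> removed
print(len(filter_regions_by_depth(regions, depth)))  # 1
```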
According to the solution provided by the present application, when target detection is performed, the image to be detected and the depth values of multiple pixels in the image to be detected are obtained first, and target detection is then performed on the image to be detected with reference to the depth value of each pixel. Compared with the prior art, in which target detection is performed based on information such as the texture, edges, and color of an image, the present application takes the depth values into consideration when performing target detection. Therefore, during target detection, target objects that are similar in color and texture but different in depth can be distinguished, and the precision of target detection can thereby be improved.
In addition, in general, the depth information contained in the region to be detected is in most cases less than the color information it contains. For example, one object may contain multiple colors, but the whole object may correspond to only one depth value. Therefore, the amount of computation for dividing candidate regions according to depth values is also greatly reduced.
Optionally, after a target is identified, the object detection apparatus 12 may also output information such as the category and contour of the target.
The method provided by the embodiments of the present application can be used in scenarios that require target recognition. For example, when applied to a mobile robot, the mobile robot can use the method of the present application to automatically identify objects in its environment and make decisions accordingly. The method can also be applied in processes that assist a user in finding a specific target object. Any process that requires detecting and identifying objects can apply the method provided by the present application.
A person skilled in the art should readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present application.
In the embodiments of the present application, the object detection apparatus and the like may be divided into functional modules according to the foregoing method examples. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware, or may be implemented in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present application is schematic and is merely a logical function division; there may be other division manners in actual implementation.
In the case where each functional module is divided corresponding to each function, Fig. 7 shows a possible schematic structural diagram of the object detection apparatus involved in the foregoing embodiments. The object detection apparatus includes an acquiring unit 401 and a target detection unit 402. The acquiring unit 401 is configured to support the object detection apparatus in performing process 101 in Fig. 2; the target detection unit 402 is configured to support the object detection apparatus in performing process 102 in Fig. 2, processes 201, 202 and 203 in Fig. 4, and processes 301, 302 and 303 in Fig. 6. All the related content of each step involved in the foregoing method embodiments can be cited in the function descriptions of the corresponding functional modules, and details are not repeated here.
In the case where an integrated unit is used, Fig. 8 shows a possible schematic structural diagram of the object detection apparatus involved in the foregoing embodiments. The object detection apparatus includes a processing module 501 and a communication module 502. The processing module 501 is configured to control and manage the actions of the object detection apparatus; for example, the processing module 501 is configured to support the object detection apparatus in performing processes 101, 102 and 103 in Fig. 2, processes 201, 202 and 203 in Fig. 4, processes 301, 302 and 303 in Fig. 6, and/or other processes of the techniques described herein. The communication module 502 is configured to support communication between the object detection apparatus and other network entities, for example, communication with the functional modules or network entities shown in Fig. 1. The object detection apparatus may further include a storage module 503, configured to store program code and data of the object detection apparatus.
The processing module 501 may be a processor or a controller, for example, a central processing unit (Central Processing Unit, CPU), a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It can implement or execute the various exemplary logical blocks, modules and circuits described in connection with the present disclosure. The processor may also be a combination that implements a computing function, for example, a combination including one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 502 may be a transceiver, a transceiver circuit, a communication interface, or the like. The storage module 503 may be a memory.
When the processing module 501 is a processor, the communication module 502 is a communication interface, and the storage module 503 is a memory, the object detection apparatus involved in the embodiments of the present application may be the electronic device shown in Fig. 9. As shown in Fig. 9, the electronic device includes a processor 601, a communication interface 602, a memory 603 and a bus 604. The communication interface 602, the processor 601 and the memory 603 are connected to one another through the bus 604. The bus 604 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, the bus is represented by only one thick line in Fig. 9, but this does not mean that there is only one bus or only one type of bus.
The steps of the method or algorithm described in connection with the present disclosure may be implemented by hardware, or may be implemented by a processor executing software instructions. The software instructions may consist of corresponding software modules, and the software modules may be stored in a random access memory (Random Access Memory, RAM), a flash memory, a read-only memory (Read-Only Memory, ROM), an erasable programmable read-only memory (Erasable Programmable ROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or a storage medium of any other form well known in the art. An exemplary storage medium is coupled to the processor, so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be a component of the processor. The processor and the storage medium may be located in an ASIC.
A person skilled in the art will appreciate that, in one or more of the above examples, the functions described herein may be implemented by hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium accessible to a general-purpose or special-purpose computer.
The foregoing embodiments further describe in detail the objectives, technical solutions and beneficial effects of the present application. It should be understood that the foregoing are merely embodiments of the present application and are not intended to limit the protection scope of the present application. Any modification, equivalent replacement, improvement, and the like made on the basis of the technical solutions of the present application shall be included within the protection scope of the present application.