CN109697444A - Object recognition method and apparatus, device, and storage medium based on depth image - Google Patents
Object recognition method and apparatus, device, and storage medium based on depth image
- Publication number
- CN109697444A CN201710982683.9A CN201710982683A
- Authority
- CN
- China
- Prior art keywords
- depth image
- identified
- histogram
- grouped
- peak value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
- G06V10/507—Summing image-intensity values; Histogram projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present invention disclose a depth-image-based object recognition method, apparatus, device, and storage medium, wherein the method includes: obtaining a captured depth image of objects to be identified in a target area; preprocessing the depth image to obtain a histogram corresponding to the depth image; and identifying the objects to be identified in the target area according to the histogram.
Description
Technical field
The present invention relates to image recognition technology, and in particular to a depth-image-based object recognition method, apparatus, device, and storage medium.
Background technique
In the prior art, occupancy recognition is generally performed by installing cameras indoors to collect images or video, parsing the collected data with machine learning algorithms such as neural networks, and then recognizing multi-person behavior. Indoor personnel positioning generally relies on several technologies, such as wireless communication, base-station positioning, and inertial navigation.
Existing schemes have the following drawbacks: 1) ordinary cameras compromise personal privacy; 2) multi-person behavior recognition algorithms are complex, susceptible to indoor lighting interference, and poor at rejecting interference; 3) positioning accuracy, performance, and generality are poor, and positioning is costly. In view of these drawbacks, methods that monitor multi-person behavior by parsing ordinary video, or that realize indoor positioning with wireless communication, base-station positioning, and inertial navigation, are difficult to popularize widely in practice.
Summary of the invention
In view of this, to solve at least one of the problems in the prior art, embodiments of the present invention provide a depth-image-based object recognition method, apparatus, device, and storage medium that protect personal privacy, are immune to lighting interference, and achieve accurate positioning.
The technical solutions of the embodiments of the present invention are achieved as follows:
An embodiment of the present invention provides a depth-image-based object recognition method, the method comprising:
obtaining a captured depth image of objects to be identified in a target area;
preprocessing the depth image to obtain a histogram corresponding to the depth image; and
identifying the objects to be identified in the target area according to the histogram.
An embodiment of the present invention provides a depth-image-based object recognition apparatus, the apparatus comprising:
an acquiring unit, configured to obtain a captured depth image of objects to be identified in a target area;
a preprocessing unit, configured to preprocess the depth image to obtain a histogram corresponding to the depth image; and
a recognition unit, configured to identify the objects to be identified in the target area according to the histogram.
An embodiment of the present invention provides a depth-image-based object recognition device, comprising a memory and a processor, the memory storing a computer program runnable on the processor, wherein the processor, when executing the program, implements the above depth-image-based object recognition method.
An embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the above depth-image-based object recognition method.
In the embodiments of the present invention, a captured depth image of objects to be identified in a target area is obtained; the depth image is preprocessed to obtain a corresponding histogram; and the objects to be identified in the target area are identified according to the histogram. In this way, personal privacy is protected, lighting interference is avoided, and accurate positioning is achieved.
Detailed description of the invention
Fig. 1 is a schematic flowchart of a depth-image-based object recognition method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of grouping a Kinect foreground depth image according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the Kinect-based indoor positioning principle according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a histogram according to an embodiment of the present invention;
Fig. 5 is a schematic flowchart of a Kinect-depth-image-based object recognition method according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a depth-image-based object recognition apparatus according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of a hardware entity of a computing device according to an embodiment of the present invention.
Specific embodiment
The technical solutions of the present invention are further elaborated below with reference to the drawings and specific embodiments.
An embodiment of the present invention provides a depth-image-based object recognition method. The method is applied to a computing device, and the functions realized by the method can be implemented by a processor of the computing device calling program code; the program code can, of course, be stored in a computer storage medium. The computing device thus includes at least a processor and a storage medium.
Fig. 1 is a schematic flowchart of the depth-image-based object recognition method of this embodiment. As shown in Fig. 1, the method comprises:
Step S101: obtain a captured depth image of objects to be identified in a target area.
Here, in practice the depth image can be captured by a depth camera, for example the widely used Kinect camera. The target area is the area covered by one or more depth cameras: if one depth camera is installed in a room, on a square, or on a road, the target area is the coverage of that camera; if a room is fully covered by multiple cameras, the target area is the room.
Here, the object to be identified is a configured target object. For example, when this embodiment is applied to people counting and positioning, the objects to be identified are people; when applied on a farm to recognize flocks of sheep, herds of cattle, or groups of chickens and ducks, the objects to be identified are those animals.
Step S102: preprocess the depth image to obtain a histogram corresponding to the depth image.
In other embodiments, the preprocessing includes normalizing the depth image, for example converting the depth image into a grayscale image and then expressing the grayscale image as a histogram.
Here, the histogram is formed from the pixel counts of the depth image: in practice, the depth image is partitioned into groups (bins), and the number of pixels in each group is counted, yielding a three-dimensional histogram.
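The partition-and-count step described above can be sketched in a few lines. This is an illustrative sketch only, assuming non-overlapping square groups over a binary foreground mask; the function name `grouped_histogram` and its parameters are hypothetical, not from the patent.

```python
import numpy as np

def grouped_histogram(foreground, group_size):
    """Count foreground pixels in each non-overlapping square group of the mask.

    foreground: 2D array, nonzero where a moving-object pixel was detected.
    group_size: side length of each square group, in pixels.
    Returns a 2D array of per-group pixel counts (the histogram surface).
    """
    h, w = foreground.shape
    rows, cols = h // group_size, w // group_size
    hist = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            block = foreground[r * group_size:(r + 1) * group_size,
                               c * group_size:(c + 1) * group_size]
            hist[r, c] = np.count_nonzero(block)
    return hist
```

Plotting the count of each group against its (row, column) position gives the three-dimensional histogram the text refers to.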
Step S103: identify the objects to be identified in the target area according to the histogram.
Here, in practice the characteristic information of the histogram can serve as the characteristic information of the objects to be identified. For example, the number of peaks in the histogram can be taken as the number of objects to be identified, and the positions of the peaks determine their locations: a histogram with N peaks indicates N objects to be identified, located at the positions of those peaks.
In this embodiment, the depth information of the target objects is captured by a depth camera, and recognition is performed using the depth information alone. Since depth information contains no private data about the target objects, even if the captured depth information is leaked, the privacy of the target objects is not compromised. By contrast, when an ordinary camera is used for target recognition, for example people counting, recognition is usually based on private data such as faces; capturing such images is itself an intrusion on personal privacy, and if the captured images are leaked, privacy is breached.
Second, depth sensing is generally implemented with an infrared CMOS camera and is immune to lighting interference, whereas an ordinary camera is easily disturbed by lighting; for example, an ordinary camera generally needs an additional light source at night to capture clearly recognizable images, a drawback infrared cameras do not share. Moreover, an ordinary camera captures two-dimensional information and its positioning accuracy is often poor, whereas depth information carries the location of the target object, so an accurate position can be obtained. Recognition of target objects based on depth information therefore protects personal privacy, is immune to lighting interference, and achieves accurate positioning.
The depth camera, also called a three-dimensional (3D) sensor, is described below; it enhances the machine's sensing capability through the visual data it acquires. Typical vendors include Apple, Microsoft, Google, Intel, Oculus, and Sony abroad, and domestic Chinese vendors such as Orbbec, IMI, and Percipio. Compared with 2D vision, 3D vision adds a dimension and enables more accurate object segmentation, reasonably accurate three-dimensional measurement, 3D data model reconstruction, and intelligent visual recognition and analysis. The most widely applied device at present is Microsoft's Kinect camera. The three main technical approaches to depth cameras, with representative companies, are: first, monocular structured light, represented by Apple, Microsoft Kinect-1, Intel RealSense, and Google Project Tango; second, binocular visible light, represented by Leap Motion; third, time-of-flight (TOF), represented by Microsoft Kinect-2.
The best-known consumer application of depth cameras is the motion-sensing camera, such as the Kinect of Microsoft's XBOX game console; Microsoft's HoloLens also relies heavily on depth cameras. In this embodiment, a depth camera is used to recognize the target objects (objects to be identified).
The main technical approaches to depth detection include the following:
1) Binocular ranging (two RGB cameras plus an optional illumination system). This approach uses the triangulation principle: the disparity d is inversely proportional to depth, and with focal length f and baseline T the depth information is obtained as Z = f·T / d, where the disparity d is the difference between the horizontal coordinates at which a target point is imaged in the left and right views.
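The triangulation relation above can be expressed directly in code. A minimal sketch, assuming a focal length in pixels and a baseline in metres; the function name and parameter names are illustrative, not from the patent.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from disparity by triangulation: Z = f * T / d.

    focal_px:     focal length f, in pixels.
    baseline_m:   baseline T, the separation of the two cameras, in metres.
    disparity_px: disparity d = x_left - x_right of the matched feature point.
    Returns the depth Z in metres; Z shrinks as disparity grows.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of both cameras")
    return focal_px * baseline_m / disparity_px
```

For example, with f = 700 px, T = 0.1 m, and d = 35 px, the point lies at Z = 2 m, and halving the disparity would double the computed depth.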
Binocular ranging is based entirely on image processing and the triangulation principle: matching points are found by locating the same feature points in the two images, from which the depth value is obtained. In binocular ranging the light source is ambient or white light, i.e. uncoded, so image matching depends entirely on the feature points of the photographed object itself; matching has therefore always been a difficulty of the binocular approach, since its precision and correctness are hard to guarantee, which is why structured light was introduced to solve the matching problem. The key point of binocular ranging is the stereo matching algorithm, whose general steps are: matching cost computation, matching cost aggregation, disparity computation, and disparity refinement (sub-pixel level). Because a structured light source provides many feature points, or an encoding, it supplies many corner points or direct code words for matching, so feature-point matching becomes easy; in other words, the feature points of the subject itself are not needed, and good matching results can be obtained.
2) Monocular structured light (RGB camera + structured-light projector (infrared) + structured-light depth sensor (CMOS)). Structured-light measurement differs in that the projected light source is coded, or characterized; what is photographed is the coded pattern projected onto the object, as modulated by the depth of the object surface.
The basic principle of structured light: a pre-designed pattern is projected as the reference image (the coded light source) onto the object, and a camera then captures the structured-light pattern reflected by the object surface. Two images of the surface are thereby obtained: one is the pre-designed reference image, and the other is the reflected pattern captured by the camera, which is necessarily deformed by the three-dimensional shape of the object. The spatial information of the object surface can therefore be computed from the position and degree of deformation of the pattern at the camera. Common structured-light methods still compute depth in part by the triangulation principle. Matching is likewise performed, but this method beats binocular ranging in that the reference image is not captured but specially designed, so its feature points are known and easier to extract from the test image.
3) Laser speckle (light coding) source. Unlike structured light, the light-coding technique uses continuous light (near infrared) to encode the measurement space; a sensor reads the coded light, and a chip decodes it to produce an image with depth. The key to light-coding technology is laser speckle: the diffraction spots formed randomly when a laser illuminates a rough object or passes through frosted glass. These speckles are highly random, and their pattern changes with distance; that is, the speckle patterns at any two places in space are different. As long as such light fills the space, the whole space is marked; put an object into this space, and its location can be known just by looking at the speckle pattern on it. Of course, the speckle patterns of the whole space must be recorded beforehand, so a light-source calibration is performed first. Light coding is solved through spatial geometric relations; its measurement accuracy depends only on the density of the reference planes taken during calibration — the denser the reference planes, the more accurate the measurement — and there is no need to widen the baseline to improve accuracy.
The widely used Kinect depth camera is described below. The Kinect has three cameras arranged side by side: the middle one is an RGB color camera, and the ones on the left and right are an infrared emitter and an infrared CMOS camera, respectively. The Kinect has focus tracking: its base motor moves and rotates to follow the focused object (the target object). The Kinect also has a built-in microphone array for speech recognition. The infrared CMOS camera generates a depth image stream at 30 frames per second, reproducing the surrounding environment in real-time 3D. When a moving object is found, the Kinect applies a segmentation strategy to separate the target object from the background environment and obtain the target object's depth image.
Based on the foregoing embodiments, an embodiment of the present invention provides a depth-image-based object recognition method. The method is applied to a computing device, and the functions realized by the method can be implemented by a processor of the computing device calling program code; the program code can, of course, be stored in a computer storage medium, so the computing device includes at least a processor and a storage medium. The method comprises:
Step S201: obtain a captured depth image of objects to be identified in a target area.
Here, in practice the depth image can be captured by a depth camera, for example the widely used Kinect camera; the object to be identified is a configured target object.
Step S202: separate the objects to be identified from the background in the depth image to obtain a foreground depth image containing the objects to be identified.
In other embodiments, a moving-object detection algorithm, such as the sample consensus (SACON) algorithm, can be used to obtain the foreground depth image of the objects to be identified, for example the moving pixels of indoor personnel.
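The sample-consensus idea can be sketched roughly as follows. This is a heavily simplified stand-in inspired by SACON, not the published algorithm: a bank of past depth values is kept per pixel, and a pixel counts as foreground when its current value agrees with too few stored samples. The function name, sample-bank shape, and thresholds are all assumptions for illustration.

```python
import numpy as np

def sacon_foreground(frame, samples, radius=10, min_matches=2):
    """Sample-consensus foreground test (simplified, in the spirit of SACON).

    frame:   2D depth frame of shape (H, W).
    samples: array of shape (N, H, W) holding N past depth values per pixel.
    A pixel is background if its current value lies within `radius` of at
    least `min_matches` stored samples; otherwise it is a moving pixel.
    Returns a boolean mask, True where the pixel is foreground.
    """
    matches = (np.abs(samples - frame[None, :, :]) <= radius).sum(axis=0)
    return matches < min_matches
```

In a real detector the sample bank would also be updated over time with confirmed background values; this sketch shows only the classification step.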
Step S203: group the foreground depth image to obtain the histogram.
Here, steps S202 and S203 actually provide one way of implementing step S102 above, "preprocess the depth image to obtain a histogram corresponding to the depth image".
Here, the histogram is formed from the pixel counts of the depth image: in practice, the depth image is partitioned into groups (bins), and the number of pixels in each group is counted, yielding a three-dimensional histogram.
Step S204: identify the objects to be identified in the target area according to the histogram.
Based on the foregoing embodiments, an embodiment of the present invention provides a depth-image-based object recognition method. The method is applied to a computing device, and the functions realized by the method can be implemented by a processor of the computing device calling program code; the program code can, of course, be stored in a computer storage medium, so the computing device includes at least a processor and a storage medium. The method comprises:
Step S301: obtain a captured depth image of objects to be identified in a target area.
Step S302: separate the objects to be identified from the background in the depth image to obtain a foreground depth image containing the objects to be identified.
Step S303: group the foreground depth image, count the number of pixels in each group, and record the pixel count of each group against the group's position.
Step S304: draw the histogram corresponding to the depth image according to the position and pixel count of each group.
Here, steps S303 and S304 actually provide one way of implementing step S203 above, "group the foreground depth image to obtain the histogram".
Here, the histogram is formed from the pixel counts of the depth image: in practice, the depth image is partitioned into groups (bins), and the number of pixels in each group is counted, yielding a three-dimensional histogram.
Step S305: identify the number and positions of the peaks in the histogram.
Step S306: determine the number of peaks as the number of objects to be identified, and determine the positions of the peaks as the positions of the objects to be identified.
Here, steps S305 and S306 actually provide one way of implementing step S204 above, "identify the objects to be identified in the target area according to the histogram".
Here, determining the number of peaks as the number of objects to be identified, and the positions of the peaks as the positions of the objects to be identified, comprises: determining the number of peaks that meet a preset threshold condition as the number of objects to be identified; and determining the positions of the peaks that meet the preset threshold condition as the positions of the objects to be identified.
The preset threshold in the threshold condition is the threshold for the object to be identified: a peak may be considered valid if it is greater than the threshold, or alternatively if it is less than the threshold, or if it lies within a threshold range. For example, in a room where the objects to be identified are people, cats and dogs must not be recognized as people; since people are generally taller than cats and dogs, peaks greater than the preset threshold can be taken as valid when the threshold is set, and the number of valid peaks is taken as the number of people. Conversely, if the objects to be identified in a room are cats and dogs, people must not be recognized as target objects; since people are taller than cats and dogs, peaks less than the preset threshold can be taken as valid, and the number of valid peaks is taken as the number of cats and dogs. As another example, if the objects to be identified in a room are children, then neither cats and dogs nor adults must be recognized as target objects; since adults are taller than children while cats and dogs are shorter, peaks within a preset threshold range can be taken as valid, and the number of valid peaks is taken as the number of children.
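The valid-peak selection just described amounts to filtering peaks by a height band. A minimal sketch, assuming peaks are given as (row, col, height) triples; the function name and the optional lower/upper bounds are hypothetical, not from the patent.

```python
def valid_peaks(peaks, lower=None, upper=None):
    """Keep only peaks whose height lies within the configured band.

    peaks: iterable of (row, col, height) triples from the histogram.
    For adults, pass a lower bound only; for pets, an upper bound only;
    for children, both bounds (a height range).
    """
    kept = []
    for r, c, h in peaks:
        if lower is not None and h <= lower:
            continue  # too small: e.g. a cat or dog when counting people
        if upper is not None and h >= upper:
            continue  # too large: e.g. an adult when counting children
        kept.append((r, c, h))
    return kept
```

The length of the returned list is the object count, and the (row, col) entries are the object positions.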
In other embodiments, grouping the foreground depth image comprises: determining the group distance used for grouping according to the number of pixels an object to be identified occupies in the depth image; and grouping the foreground depth image according to the group distance.
Here, the group distance is a parameter expressing a two-dimensional area: it may, for example, parameterize a circle or a rectangle. In general, a rectangle parameter divides the depth image more conveniently; the embodiments below use a square, so the group distance is a side length. Dividing the depth image also involves a moving step; to guarantee complete coverage of the depth image, the moving step is generally smaller than the diameter, the shortest side length, or the like.
In other embodiments, grouping the foreground depth image further comprises: determining a moving step, or an overlap ratio between adjacent groups, and grouping the foreground depth image according to the group distance and the overlap ratio, or according to the group distance and the moving step, where the moving step characterizes the distance the group is moved each time.
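Grouping with a moving step smaller than the group side produces overlapping groups, which is what makes adjacent counts vary smoothly. A sketch under the assumption of square groups and a uniform step; the names are illustrative. With `group_size=2` and `step=1`, adjacent groups share 2 of their 4 pixels, i.e. an overlap ratio of one half.

```python
import numpy as np

def sliding_histogram(foreground, group_size, step):
    """Per-group foreground pixel counts with overlapping square groups.

    foreground: 2D array, nonzero where a moving pixel was detected.
    group_size: side length of each square group, in pixels.
    step:       moving step; step < group_size makes adjacent groups overlap
                (overlap ratio = shared area / group area).
    """
    h, w = foreground.shape
    rows = (h - group_size) // step + 1
    cols = (w - group_size) // step + 1
    hist = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            y, x = r * step, c * step
            hist[r, c] = np.count_nonzero(
                foreground[y:y + group_size, x:x + group_size])
    return hist
```

Because each pixel is counted by several overlapping groups, the histogram surface rises and falls continuously rather than jumping between bins.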
Based on the foregoing embodiments, an embodiment of the present invention again provides a depth-image-based object recognition method. The method is applied to a computing device, and the functions realized by the method can be implemented by a processor of the computing device calling program code; the program code can, of course, be stored in a computer storage medium, so the computing device includes at least a processor and a storage medium.
This embodiment is illustrated with Kinect as the depth camera and a room as the target area. The conversion of the Kinect foreground image is introduced first. As shown in Fig. 2, the left figure is the person foreground image displayed after the Kinect capture is filtered by the moving-object detection algorithm. Group b consists of four pixels, i.e. one checkered block 21 made up of four cells, where one checkered block represents the group distance (the parameter expressing a two-dimensional area) and one cell represents one pixel. Group (b+1) likewise consists of four pixels, i.e. one checkered block 22 made up of four cells, and is obtained by sliding group b a distance of one pixel; that one-pixel slide is the moving step. Groups b and (b+1) share an area of 2 pixels, and each group has four pixels in total, so the overlap ratio, the shared area divided by the area of a single group, equals one half. The group distance of the histogram is determined by the size of a person in the Kinect top view; its best value is such that one group just contains one person. In practice, a certain number of depth images of the target object can be collected in advance, and the number of pixels occupied by one target object analyzed; if those pixels are represented by a rectangle, the side length of the rectangle can be taken as the group distance. For example, if the target object is a person, 100 depth images each containing a person can be collected in advance, the number of pixels occupied by one person analyzed, and, representing that pixel count as a rectangle, the rectangle's side length taken as the group distance. Referring to Fig. 2, the groups marked 23 in the middle of Fig. 2 correspond to the groups of the human figure 24 in the left figure.
The Kinect-based indoor positioning principle is described below. As shown in Fig. 3, a histogram 32 is obtained from the depth image 31; analyzing the histogram 32 yields the positions and number of the peaks 33, and the positions and number of the peaks yield the positioning and count of the personnel 34. The background-free depth image is sized and grouped, and after grouping the number of moving pixels in each group is counted. Since groups b and (b+1) partially overlap, the pixel counts of adjacent groups vary continuously; as shown in Fig. 4, peaks appear in the surface plot. The coordinate positions of the peaks are determined by a peak-seeking algorithm, thereby determining the specific locations of the indoor personnel, and the number of peaks determines the number of people in the room.
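The peak-seeking step can be sketched with a naive local-maximum search over the histogram surface; this is an illustrative stand-in, not the NPFinder algorithm mentioned later, and the function name and `min_height` parameter are assumptions.

```python
import numpy as np

def find_peaks_2d(hist, min_height=1):
    """Return (row, col) positions of local maxima in the histogram surface.

    A cell is a peak if its count is at least min_height and strictly
    greater than every one of its (up to 8) neighbours. The number of
    returned positions is the estimated object count.
    """
    peaks = []
    rows, cols = hist.shape
    for r in range(rows):
        for c in range(cols):
            v = hist[r, c]
            if v < min_height:
                continue
            neighbours = [hist[rr, cc]
                          for rr in range(max(0, r - 1), min(rows, r + 2))
                          for cc in range(max(0, c - 1), min(cols, c + 2))
                          if (rr, cc) != (r, c)]
            if all(v > n for n in neighbours):
                peaks.append((r, c))
    return peaks
```

Each returned coordinate maps back through the group positions to a location in the room; plateaus and very close peaks would need the more careful treatment a real peak finder provides.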
The Kinect-depth-image-based object recognition method is described below. Fig. 5 is a schematic flowchart of this method; as shown in Fig. 5, the method comprises:
Step S501: mount a Kinect vertically on the ceiling of each room, and capture depth images in real time when moving personnel are detected.
Step S502: apply a moving-object detection algorithm, such as the SACON (sample consensus) algorithm, to extract from the depth image the moving pixels of the indoor personnel, and convert the extracted depth image into a grayscale image.
Step S503: determine the group distance of the histogram according to the size of a person in the Kinect lens, group the grayscale image, and count the number of pixels in each group.
Step S504: make a surface plot from the position coordinates of the moving targets and the pixel count of each group, and determine the position of each peak with a peak-seeking algorithm, such as NPFinder (non-parametric peak finder), thereby realizing indoor personnel positioning.
Step S505: count the number of peaks in each room's histogram to determine the number of indoor personnel.
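Steps S501 to S505 can be composed into one end-to-end sketch. This is an illustrative pipeline under stated assumptions: simple frame differencing stands in for the SACON detector, a naive local-maximum search stands in for NPFinder, and all names and parameter values are hypothetical.

```python
import numpy as np

def count_and_locate(depth_frame, background, group_size=2, step=1,
                     diff_thresh=10, min_height=1):
    """End-to-end sketch of steps S501-S505: foreground extraction,
    overlapping grouping, and peak-based counting and positioning."""
    # S502 (simplified): moving pixels are those far from the background depth.
    fg = np.abs(depth_frame - background) > diff_thresh
    # S503: overlapping groups, counting moving pixels per group.
    h, w = fg.shape
    rows = (h - group_size) // step + 1
    cols = (w - group_size) // step + 1
    hist = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            y, x = r * step, c * step
            hist[r, c] = np.count_nonzero(fg[y:y + group_size, x:x + group_size])
    # S504-S505: peaks give positions; their count gives the headcount.
    peaks = []
    for r in range(rows):
        for c in range(cols):
            v = hist[r, c]
            if v < min_height:
                continue
            neigh = [hist[rr, cc]
                     for rr in range(max(0, r - 1), min(rows, r + 2))
                     for cc in range(max(0, c - 1), min(cols, c + 2))
                     if (rr, cc) != (r, c)]
            if all(v > n for n in neigh):
                peaks.append((r, c))
    return len(peaks), peaks
```

With one person-sized blob of changed depth in the frame, the pipeline reports a count of one at the blob's group coordinates.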
In this embodiment, the Kinect camera is first mounted vertically on the ceiling of the room. After the moving targets in the depth map are separated from the background by the moving-object detection algorithm, the number of moving pixels in each group of the motion histogram is counted, and the peak-finding algorithm determines the position of each peak, thereby identifying and localizing the indoor occupants. This provides technical support for later user-behavior recognition and abnormal-behavior early warning.
As can be seen from this embodiment, a pixel map of the indoor moving persons is obtained by means of the Kinect device and the moving-object detection algorithm, and a pixel-count histogram of the moving targets is obtained by classifying and counting the pixel map; a peak-finding algorithm then determines the number of peaks and the coordinate positions at which the peaks appear in the histogram. In this embodiment, the Kinect is chosen as the device for counting and localizing indoor occupants: the target-detection algorithm extracts the moving foreground pixels, the moving-target pixel map is divided at equal intervals, the pixels in each group are counted to obtain the motion histogram, and the peak-finding algorithm yields the positions and the number of the indoor occupants.
Compared with the prior art, this embodiment has the following advantages: using the Kinect for occupant counting and indoor localization does not intrude on personal privacy, is unaffected by indoor light intensity, and accomplishes both counting and localization at the same time, simplifying the recognition process.
Based on the foregoing embodiments, an embodiment of the present invention provides an object recognition apparatus based on a depth image. The units included in the apparatus, and the modules included in each unit, may be implemented by a processor in a computing device, or alternatively by a logic circuit. In implementation, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or the like.
Fig. 6 is a schematic diagram of the composition of the object recognition apparatus based on a depth image according to an embodiment of the present invention. As shown in Fig. 6, the apparatus 600 includes:
an acquiring unit 601, configured to obtain a depth image of an object to be identified in a captured target area;
a pre-processing unit 602, configured to pre-process the depth image to obtain a histogram corresponding to the depth image; and
a recognition unit 603, configured to identify the object to be identified in the target area according to the histogram.
In other embodiments, the pre-processing unit comprises:
an extraction module, configured to separate the object to be identified from the background in the depth image to obtain a foreground depth image containing the object to be identified; and
a grouping module, configured to group the foreground depth image to obtain the histogram.
In other embodiments, the grouping module comprises:
a grouping sub-module, configured to group the foreground depth image;
a statistics sub-module, configured to count the number of pixels in each group and record the pixel count of each group against the position of that group; and
a drawing sub-module, configured to draw the histogram corresponding to the depth image according to the position and pixel count of each group.
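The statistics and drawing sub-modules can be sketched as follows (the data layout is hypothetical; the patent does not fix how group positions and counts are stored):

```python
def histogram_records(groups):
    """Record each group's pixel count against the group's position,
    sorted by position so the histogram can be drawn directly.
    `groups` maps a group's position to the set of moving pixels
    that fell into that group."""
    return sorted((pos, len(pixels)) for pos, pixels in groups.items())

groups = {(0, 0): {(1, 2), (3, 4)}, (8, 0): {(9, 1)}}
records = histogram_records(groups)  # [((0, 0), 2), ((8, 0), 1)]
```

Each (position, count) record becomes one bar of the histogram, and in two dimensions one point of the surface plot.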
In other embodiments, the recognition unit comprises:
an identification module, configured to identify the number and the positions of the peaks in the histogram; and
a determining module, configured to determine the number of peaks as the number of objects to be identified, and the positions of the peaks as the positions of the objects to be identified.
In other embodiments, the determining module is configured to determine the number of peaks that meet a preset threshold condition as the number of objects to be identified, and to determine the positions of the peaks that meet the preset threshold condition as the positions of the objects to be identified.
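The threshold condition applied by the determining module can be sketched as follows (the threshold value is an illustrative assumption):

```python
def filter_peaks(peaks, threshold):
    """Apply the preset threshold condition: keep only peaks whose
    height reaches `threshold`, discarding small spurious maxima such
    as sensor noise. The surviving peaks give the number and positions
    of the objects to be identified."""
    return [(pos, height) for pos, height in peaks if height >= threshold]

peaks = [(2, 4), (7, 3), (11, 1)]
kept = filter_peaks(peaks, threshold=2)  # [(2, 4), (7, 3)]
```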
In other embodiments, the grouping sub-module is configured to determine the bin width for grouping according to the number of pixels occupied by the object to be identified in the depth image, and to group the foreground depth image according to the bin width.
In other embodiments, the grouping sub-module is further configured to determine a moving step or an overlap ratio between adjacent groups, and to group the foreground depth image according to the bin width and the overlap ratio, or according to the bin width and the moving step, where the moving step characterizes the distance moved each time the bin is shifted.
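The equivalence between the moving step and the overlap ratio can be sketched as follows (function names are illustrative):

```python
def step_from_overlap(bin_width, overlap_ratio):
    """The moving step and the overlap ratio describe the same sliding
    grouping: the step is how far the bin advances each time, and the
    overlap ratio is the fraction of the bin shared by adjacent groups."""
    return bin_width * (1 - overlap_ratio)

def overlap_from_step(bin_width, step):
    """Inverse relation: the overlap ratio implied by a moving step."""
    return 1 - step / bin_width

# A 10-pixel bin advanced by 5 pixels overlaps its neighbour by 50%
step = step_from_overlap(10, 0.5)   # 5.0
ratio = overlap_from_step(10, 5)    # 0.5
```

Specifying either quantity therefore fixes the other, which is why the claims allow grouping by bin width plus overlap ratio or by bin width plus moving step.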
In implementation, the computing device may be any of various types of devices with information-processing capability; for example, the electronic device may include a mobile phone, a tablet computer, a desktop computer, a personal digital assistant, and the like.
The above description of the apparatus embodiment is similar to the description of the method embodiment, and the apparatus embodiment has beneficial effects similar to those of the method embodiment. For technical details not disclosed in the apparatus embodiment of the present invention, please refer to the description of the method embodiment of the present invention.
It should be noted that, in the embodiments of the present invention, if the above object recognition method based on a depth image is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the method described in each embodiment of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk. Thus, the embodiments of the present invention are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present invention further provides an object recognition device (computing device) based on a depth image, comprising a memory and a processor, and further comprising a computer program stored in the memory and runnable on the processor, where the processor, when executing the program, implements the above object recognition method based on a depth image.
Correspondingly, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the above object recognition method based on a depth image.
The above descriptions of the storage medium and device embodiments are similar to the description of the method embodiment, and have beneficial effects similar to those of the method embodiment. For technical details not disclosed in the storage medium and device embodiments of the present invention, please refer to the description of the method embodiment of the present invention.
It should be noted that Fig. 7 is a schematic diagram of the hardware entity of a computing device in an embodiment of the present invention. As shown in Fig. 7, the hardware entity of the computing device 700 includes a processor 701, a communication interface 702, and a memory 703, where the processor 701 generally controls the overall operation of the computing device 700.
The communication interface 702 enables the computing device to communicate with other terminals or servers over a network.
The memory 703 is configured to store instructions and applications executable by the processor 701, and may also cache data to be processed or already processed by the processor 701 and by the modules in the computing device 700 (for example, image data, audio data, voice communication data, and video communication data); it may be implemented by a flash memory (FLASH) or a random access memory (RAM).
It should be understood that references throughout this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present invention. Thus, appearances of "in one embodiment" or "in an embodiment" in various places throughout the specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention. The serial numbers of the embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
It should be noted that, in this document, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes the element.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and there may be other division manners in actual implementation, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve individually as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions, and the aforementioned program may be stored in a computer-readable storage medium; when the program is executed, the steps of the above method embodiments are performed. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
Alternatively, if the above integrated unit of the present invention is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computing device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the method described in each embodiment of the present invention. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can be easily conceived by those familiar with the technical field within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. An object recognition method based on a depth image, characterized in that the method comprises:
obtaining a depth image of an object to be identified in a captured target area;
pre-processing the depth image to obtain a histogram corresponding to the depth image; and
identifying the object to be identified in the target area according to the histogram.
2. The method according to claim 1, characterized in that the pre-processing the depth image to obtain the histogram corresponding to the depth image comprises:
separating the object to be identified from the background in the depth image to obtain a foreground depth image containing the object to be identified; and
grouping the foreground depth image to obtain the histogram.
3. The method according to claim 2, characterized in that the grouping the foreground depth image to obtain the histogram comprises:
grouping the foreground depth image, counting the number of pixels in each group, and recording the pixel count of each group against the position of that group; and
drawing the histogram corresponding to the depth image according to the position and pixel count of each group.
4. The method according to claim 3, characterized in that the identifying the object to be identified in the target area according to the histogram comprises:
identifying the number and the positions of the peaks in the histogram; and
determining the number of peaks as the number of objects to be identified, and determining the positions of the peaks as the positions of the objects to be identified.
5. The method according to claim 4, characterized in that the determining the number of peaks as the number of objects to be identified and determining the positions of the peaks as the positions of the objects to be identified comprises:
determining the number of peaks that meet a preset threshold condition as the number of objects to be identified; and determining the positions of the peaks that meet the preset threshold condition as the positions of the objects to be identified.
6. The method according to claim 3, characterized in that the grouping the foreground depth image comprises:
determining a bin width for grouping according to the number of pixels occupied by the object to be identified in the depth image; and
grouping the foreground depth image according to the bin width.
7. The method according to claim 6, characterized in that the grouping the foreground depth image further comprises:
determining a moving step or an overlap ratio between adjacent groups, and grouping the foreground depth image according to the bin width and the overlap ratio, or according to the bin width and the moving step, wherein the moving step characterizes the distance moved each time the bin is shifted.
8. An object recognition apparatus based on a depth image, characterized in that the apparatus comprises:
an acquiring unit, configured to obtain a depth image of an object to be identified in a captured target area;
a pre-processing unit, configured to pre-process the depth image to obtain a histogram corresponding to the depth image; and
a recognition unit, configured to identify the object to be identified in the target area according to the histogram.
9. An object recognition device based on a depth image, comprising a memory and a processor, the memory storing a computer program runnable on the processor, characterized in that the processor, when executing the program, implements the object recognition method based on a depth image according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the object recognition method based on a depth image according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710982683.9A CN109697444B (en) | 2017-10-20 | 2017-10-20 | Object identification method and device based on depth image, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109697444A true CN109697444A (en) | 2019-04-30 |
CN109697444B CN109697444B (en) | 2021-04-13 |
Family
ID=66225217
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710982683.9A Active CN109697444B (en) | 2017-10-20 | 2017-10-20 | Object identification method and device based on depth image, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109697444B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110198411A (en) * | 2019-05-31 | 2019-09-03 | 努比亚技术有限公司 | Depth of field control method, equipment and computer readable storage medium during a kind of video capture |
CN110728259A (en) * | 2019-10-23 | 2020-01-24 | 南京农业大学 | Chicken group weight monitoring system based on depth image |
CN111210429A (en) * | 2020-04-17 | 2020-05-29 | 中联重科股份有限公司 | Point cloud data partitioning method and device and obstacle detection method and device |
CN112184722A (en) * | 2020-09-15 | 2021-01-05 | 上海传英信息技术有限公司 | Image processing method, terminal and computer storage medium |
CN113572958A (en) * | 2021-07-15 | 2021-10-29 | 杭州海康威视数字技术股份有限公司 | Method and equipment for automatically triggering camera to focus |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080240549A1 (en) * | 2007-03-29 | 2008-10-02 | Samsung Electronics Co., Ltd. | Method and apparatus for controlling dynamic depth of stereo-view or multi-view sequence images |
CN102803991A (en) * | 2009-06-03 | 2012-11-28 | 学校法人中部大学 | Object detection device |
CN103150559A (en) * | 2013-03-01 | 2013-06-12 | 南京理工大学 | Kinect three-dimensional depth image-based head identification and tracking method |
CN103337081A (en) * | 2013-07-12 | 2013-10-02 | 南京大学 | Shading judgment method and device based on depth layer |
CN103366355A (en) * | 2012-03-31 | 2013-10-23 | 盛乐信息技术(上海)有限公司 | Method and system for enhancing layering of depth map |
CN103544492A (en) * | 2013-08-06 | 2014-01-29 | Tcl集团股份有限公司 | Method and device for identifying targets on basis of geometric features of three-dimensional curved surfaces of depth images |
CN102831398B (en) * | 2012-07-24 | 2014-12-10 | 中国农业大学 | Tree apple recognition method based on depth image |
US8995739B2 (en) * | 2013-08-21 | 2015-03-31 | Seiko Epson Corporation | Ultrasound image object boundary localization by intensity histogram classification using relationships among boundaries |
CN105118073A (en) * | 2015-08-19 | 2015-12-02 | 南京理工大学 | Human body head target identification method based on Xtion camera |
Non-Patent Citations (1)
Title |
---|
YANG, LIN: "Human Target Detection and Tracking Based on Kinect", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110198411A (en) * | 2019-05-31 | 2019-09-03 | 努比亚技术有限公司 | Depth of field control method, equipment and computer readable storage medium during a kind of video capture |
CN110198411B (en) * | 2019-05-31 | 2021-11-02 | 努比亚技术有限公司 | Depth of field control method and device in video shooting process and computer readable storage medium |
CN110728259A (en) * | 2019-10-23 | 2020-01-24 | 南京农业大学 | Chicken group weight monitoring system based on depth image |
CN110728259B (en) * | 2019-10-23 | 2023-08-22 | 南京农业大学 | Chicken crowd heavy monitoring system based on depth image |
CN111210429A (en) * | 2020-04-17 | 2020-05-29 | 中联重科股份有限公司 | Point cloud data partitioning method and device and obstacle detection method and device |
CN112184722A (en) * | 2020-09-15 | 2021-01-05 | 上海传英信息技术有限公司 | Image processing method, terminal and computer storage medium |
CN112184722B (en) * | 2020-09-15 | 2024-05-03 | 上海传英信息技术有限公司 | Image processing method, terminal and computer storage medium |
CN113572958A (en) * | 2021-07-15 | 2021-10-29 | 杭州海康威视数字技术股份有限公司 | Method and equipment for automatically triggering camera to focus |
Also Published As
Publication number | Publication date |
---|---|
CN109697444B (en) | 2021-04-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20240046571A1 (en) | Systems and Methods for 3D Facial Modeling | |
CN109697444A (en) | Object identifying method and device, equipment, storage medium based on depth image | |
CN107392958B (en) | Method and device for determining object volume based on binocular stereo camera | |
CA2812117C (en) | A method for enhancing depth maps | |
CN107480613B (en) | Face recognition method and device, mobile terminal and computer readable storage medium | |
CN107466411B (en) | Two-dimensional infrared depth sensing | |
JP6295645B2 (en) | Object detection method and object detection apparatus | |
CN106897648B (en) | Method and system for identifying position of two-dimensional code | |
CN106909873B (en) | The method and apparatus of recognition of face | |
JP6793151B2 (en) | Object tracking device, object tracking method and object tracking program | |
JP2019075156A (en) | Method, circuit, device, and system for registering and tracking multifactorial image characteristic and code executable by related computer | |
CN110689577B (en) | Active rigid body pose positioning method in single-camera environment and related equipment | |
KR20160106514A (en) | Method and apparatus for detecting object in moving image and storage medium storing program thereof | |
JP4774818B2 (en) | Image processing apparatus and image processing method | |
JP7113013B2 (en) | Subject head tracking | |
CN107463659B (en) | Object searching method and device | |
CN110046560A (en) | A kind of dangerous driving behavior detection method and camera | |
CN110222616B (en) | Pedestrian abnormal behavior detection method, image processing device and storage device | |
US9323989B2 (en) | Tracking device | |
CN112633096A (en) | Passenger flow monitoring method and device, electronic equipment and storage medium | |
JP2010060451A (en) | Robotic apparatus and method for estimating position and attitude of object | |
KR101337423B1 (en) | Method of moving object detection and tracking using 3d depth and motion information | |
CN112926464A (en) | Face living body detection method and device | |
EP3035242A1 (en) | Method and electronic device for object tracking in a light-field capture | |
CN113424522A (en) | Three-dimensional tracking using hemispherical or spherical visible depth images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||