Summary of the invention
To overcome problems present in the related art, the present disclosure provides an object detection method and apparatus, a storage medium, and an electronic device.
To achieve the above objectives, according to a first aspect of the embodiments of the present disclosure, an object detection method is provided. The method includes:
extracting, from RGB image data of a target scene, target pixel points that meet a preset condition;
clustering the target pixel points according to point cloud data of the target pixel points to obtain at least one target pixel point class, where each target pixel point class corresponds to one object; and
determining a geometric feature of the at least one target pixel point class, and identifying the object corresponding to the target pixel point class according to the determined geometric feature of the target pixel point class and geometric features of known target objects.
Optionally, before the step of extracting, from the RGB image data of the target scene, the target pixel points that meet the preset condition, the method further includes:
obtaining depth image data of the target scene; and
aligning the depth image data with the RGB image data to obtain the point cloud data of the target scene.
Optionally, the method further includes:
obtaining depth image data of the target pixel points; and
aligning the depth image data of the target pixel points with the RGB image data of the target pixel points to obtain the point cloud data of the target pixel points.
Optionally, extracting, from the RGB image data of the target scene, the target pixel points that meet the preset condition includes:
transforming the RGB image data into HSV space; and
determining pixel points that fall within an HSV interval corresponding to a preset target color as the target pixel points.
Optionally, identifying the object corresponding to the target pixel point class according to the geometric feature of the target pixel point class and the geometric features of the known objects includes:
obtaining a first geometric feature vector corresponding to the target pixel point class, where the first geometric feature vector is obtained by arranging the geometric features of the target pixel point class in a preset arrangement order;
obtaining a second geometric feature vector corresponding to each target object, where each second geometric feature vector is obtained by arranging the geometric features of the corresponding target object in the same preset arrangement order; and
if there exists a second geometric feature vector similar to the first geometric feature vector corresponding to the target pixel point class, identifying the object corresponding to the target pixel point class as the target object corresponding to the similar second geometric feature vector.
Optionally, if there exists a second geometric feature vector whose cosine distance to the first geometric feature vector corresponding to the target pixel point class is greater than or equal to a preset distance, the second geometric feature vector with the largest cosine distance to the first geometric feature vector is determined as the second geometric feature vector similar to the first geometric feature vector.
Optionally, the method further includes:
calculating a distance from a class center of each target pixel point class to an origin;
and determining the geometric feature of the at least one target pixel point class, and identifying the object corresponding to the target pixel point class according to the determined geometric feature of the target pixel point class and the geometric features of the known target objects, includes:
if not all of the target objects have been identified and there remain target pixel point classes that have not yet been identified, determining, among the target pixel point classes not yet identified, the target pixel point class whose class center is nearest to the origin as a current target pixel point class;
determining the geometric feature of the current target pixel point class; and
identifying the object corresponding to the current target pixel point class according to the determined geometric feature of the current target pixel point class and the geometric features of the target objects.
Optionally, the method further includes: after the object corresponding to the target pixel point class is identified, determining coordinates of the object corresponding to the target pixel point class.
Optionally, the geometric feature includes one or more of the following: length, a width-to-length ratio, a height-to-width ratio, a height-to-length ratio, height divided by the sum of the length, width, and height, and indication information characterizing whether the class is a straight line.
According to a second aspect of the embodiments of the present disclosure, an object detection apparatus is provided. The apparatus includes:
an extraction module configured to extract, from RGB image data of a target scene, target pixel points that meet a preset condition;
a clustering module configured to cluster the target pixel points according to point cloud data of the target pixel points to obtain at least one target pixel point class, where each target pixel point class corresponds to one object; and
an identification module configured to determine a geometric feature of the at least one target pixel point class, and to identify the object corresponding to the target pixel point class according to the determined geometric feature of the target pixel point class and geometric features of known target objects.
Optionally, the apparatus further includes:
a first obtaining module configured to obtain depth image data of the target scene before the extraction module extracts, from the RGB image data of the target scene, the target pixel points that meet the preset condition; and
a first alignment module configured to align the depth image data with the RGB image data to obtain the point cloud data of the target scene.
Optionally, the apparatus further includes:
a second obtaining module configured to obtain depth image data of the target pixel points; and
a second alignment module configured to align the depth image data of the target pixel points with the RGB image data of the target pixel points to obtain the point cloud data of the target pixel points.
Optionally, the extraction module includes:
a transform submodule configured to transform the RGB image data into HSV space; and
a determination submodule configured to determine pixel points that fall within an HSV interval corresponding to a preset target color as the target pixel points.
Optionally, the identification module is configured to:
obtain a first geometric feature vector corresponding to the target pixel point class, where the first geometric feature vector is obtained by arranging the geometric features of the target pixel point class in a preset arrangement order;
obtain a second geometric feature vector corresponding to each target object, where each second geometric feature vector is obtained by arranging the geometric features of the corresponding target object in the same preset arrangement order; and
if there exists a second geometric feature vector similar to the first geometric feature vector corresponding to the target pixel point class, identify the object corresponding to the target pixel point class as the target object corresponding to the similar second geometric feature vector.
Optionally, if there exists a second geometric feature vector whose cosine distance to the first geometric feature vector corresponding to the target pixel point class is greater than or equal to a preset distance, the second geometric feature vector with the largest cosine distance to the first geometric feature vector is determined as the second geometric feature vector similar to the first geometric feature vector.
Optionally, the apparatus further includes:
a calculation module configured to calculate a distance from a class center of each target pixel point class to an origin;
and the identification module is configured to:
if not all of the target objects have been identified and there remain target pixel point classes that have not yet been identified, determine, among the target pixel point classes not yet identified, the target pixel point class whose class center is nearest to the origin as a current target pixel point class;
determine the geometric feature of the current target pixel point class; and
identify the object corresponding to the current target pixel point class according to the determined geometric feature of the current target pixel point class and the geometric features of the target objects.
Optionally, the apparatus further includes: a determining module configured to determine, after the object corresponding to the target pixel point class is identified, coordinates of the object corresponding to the target pixel point class.
Optionally, the geometric feature includes one or more of the following: length, a width-to-length ratio, a height-to-width ratio, height divided by the sum of the length, width, and height, and indication information characterizing whether the class is a straight line.
According to a third aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, where the program, when executed by a processor, implements the steps of the object detection method provided in the first aspect of the present disclosure.
According to a fourth aspect of the embodiments of the present disclosure, an electronic device is provided, including:
a memory on which a computer program is stored; and
a processor configured to execute the computer program in the memory to implement the steps of the object detection method provided in the first aspect of the present disclosure.
In the present disclosure, target pixel points that meet a preset condition are first extracted from RGB image data of a target scene; the target pixel points are then clustered according to their point cloud data to obtain at least one target pixel point class, where each target pixel point class corresponds to one object; finally, a geometric feature of the at least one target pixel point class is determined, and the object corresponding to the target pixel point class is identified according to the determined geometric feature of the target pixel point class and geometric features of known target objects. The scheme of the present disclosure combines the geometric features of objects with object detection techniques based on two-dimensional images. In this way, interference caused to a robot by a still image that contains a target object can be excluded, and the accuracy of object detection is enhanced, so that the robot is assisted in performing grasping tasks more flexibly and accurately. Moreover, the method reduces the algorithm's dependence on image detection results: it only requires that the object to be detected is contained in the target scene provided by image detection, which significantly improves the robustness of the detection method. In addition, the object detection method provided by the present disclosure does not require modifying the appearance of objects; for a different object, detection can be completed simply by setting its geometric properties, without building a complex model. The detection method provided by the present disclosure is therefore simple to implement, greatly improves detection efficiency, and is more versatile.
Other features and advantages of the present disclosure will be described in detail in the following detailed description section.
Detailed description of the embodiments
The specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are only intended to describe and explain the present disclosure, not to limit it.
Figure 1 is a flowchart of an object detection method according to an exemplary embodiment of the present disclosure. The method can be applied to a robot, in particular a robot for performing grasping tasks. Alternatively, the method can be applied in the cloud, for example, to a server that controls robot operation.
As shown in Figure 1, the method may include the following steps.
In S11, target pixel points that meet a preset condition are extracted from RGB image data of a target scene.
The target scene is the scene in which the object to be identified is located. For example, in one example, if a robot is to grasp a pen on a desk, the target scene can be the scene containing the pen and the desktop. The RGB image data of the target scene can be acquired by an image acquisition device (e.g., a camera).
After the RGB image data of the target scene is obtained, it is processed to extract the target pixel points that meet the preset condition. The purpose of this step is to sift out the pixel points that are likely to correspond to the object to be identified.
In S12, the target pixel points are clustered according to their point cloud data to obtain at least one target pixel point class, where each target pixel point class corresponds to one object.
In one embodiment, the target pixel points may be clustered according to the distances between them; for example, two points are grouped into one class when the distance between them is less than a distance threshold. In another embodiment, the target pixel points may be clustered according to both distance and normal vectors; for example, two points are grouped into one class when the distance between them is less than a distance threshold and the angle between their normal vectors is less than an angle threshold. In this embodiment, whether two points belong to the same class is judged from both the distance and the normal-vector angle, which makes the clustering result more accurate. It is worth noting that the way of clustering the target pixel points is not limited to the above two embodiments; other clustering methods are also applicable to the present disclosure. After the target pixel points are clustered, each resulting target pixel point class corresponds to one object. For example, if two target pixel point classes are obtained, it can be determined that there are two candidate objects.
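The first (distance-only) clustering embodiment can be sketched as a single-linkage grouping with union-find. This is a minimal illustration, not the disclosure's implementation; the threshold value and function names are assumptions, and a practical version would also compare normal-vector angles as described above.

```python
import numpy as np

def cluster_points(points, dist_thresh=0.05):
    """Group 3-D points into classes: two points join the same class
    when the distance between them is below dist_thresh (union-find)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < dist_thresh:
                union(i, j)

    classes = {}
    for i in range(n):
        classes.setdefault(find(i), []).append(i)
    return list(classes.values())

# Two well-separated blobs -> two target pixel point classes
pts = np.array([[0, 0, 0], [0.01, 0, 0], [1, 1, 1], [1.01, 1, 1]])
print(len(cluster_points(pts)))  # 2
```

The O(n²) pairwise loop is only for clarity; a k-d tree neighbor query would be used on real point clouds.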
In S13, a geometric feature of the at least one target pixel point class is determined, and the object corresponding to the target pixel point class is identified according to the determined geometric feature of the target pixel point class and the geometric features of known target objects.
For example, the geometric feature of each target pixel point class can be determined from the point cloud data of the points in that class. The geometric feature may include one or more of the following: length, a width-to-length ratio, a height-to-width ratio, a height-to-length ratio, height divided by the sum of the length, width, and height, and indication information characterizing whether the class is a straight line. Illustratively, the length, width, and height can be obtained by performing principal component analysis on the target pixel point class, where the length corresponds to the largest eigenvalue, the width to the second-largest eigenvalue, and the height to the smallest eigenvalue. Alternatively, the length, width, and height can be computed directly from the point cloud data of the class, for example by separately computing the maximum extent of the points along the X, Y, and Z directions and taking those extents as the length, width, and height. Once the length, width, and height are obtained, the width-to-length ratio, height-to-width ratio, height-to-length ratio, and height divided by the sum of the length, width, and height can be calculated.
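The PCA-based variant can be sketched as follows; the function name, the use of axis extents (rather than eigenvalues directly) as length/width/height, and the returned feature names are assumptions made for illustration.

```python
import itertools
import numpy as np

def geometric_features(points):
    """PCA over a target pixel point class: the extents along the
    principal axes give length >= width >= height, from which the
    ratio features described above follow."""
    centered = points - points.mean(axis=0)
    # Eigenvectors of the covariance matrix are the principal axes
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    proj = centered @ vecs                       # coordinates in the PCA frame
    extents = proj.max(axis=0) - proj.min(axis=0)
    length, width, height = sorted(extents, reverse=True)
    return {
        "length": length,
        "width_to_length": width / length,
        "height_to_width": height / width,
        "height_to_length": height / length,
        "height_over_lwh": height / (length + width + height),
    }

# Corners of a 2 x 1 x 0.5 box
box = np.array(list(itertools.product([0, 2], [0, 1], [0, 0.5])), dtype=float)
feats = geometric_features(box)
print(round(feats["length"], 3))  # 2.0
```

For a degenerate (line-like) class, width and height approach zero, which is one way the straight-line indication mentioned above could be derived.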
The known target object mentioned above can be a preset object, for example, the object that the robot is to grasp in the current grasping task. The geometric features of that object can be input in advance, so that when identifying which object a target pixel point class corresponds to, the judgment can be made by matching the geometric feature of the target pixel point class against the geometric features of the known target object. If they match, the object corresponding to the target pixel point class is determined to be that target object; if they do not match, the object corresponding to the target pixel point class is determined not to be that target object, at which point identification can end, or the geometric feature of the target pixel point class can continue to be matched against the geometric features of another target object so as to continue identifying the object corresponding to the target pixel point class.
Since the geometric feature of a target pixel point class characterizes the geometric feature of its corresponding object, identifying the object corresponding to the class according to the determined geometric feature can exclude interference caused to the robot by a still image that contains a target object, which enhances the accuracy of object detection and thereby assists the robot in performing grasping tasks more flexibly and accurately. Moreover, the method reduces the algorithm's dependence on image detection results: it only requires that the object to be detected is contained in the target scene provided by image detection, which significantly improves the robustness of the detection method. In addition, the object detection method provided by the present disclosure does not require modifying the appearance of objects; for a different object, detection can be completed simply by setting its geometric properties, without building a complex model, so that the detection method provided by the present disclosure is simple to implement, greatly improves detection efficiency, and is more versatile.
Figure 2 is a flowchart of an object detection method according to another exemplary embodiment of the present disclosure. In addition to S11–S13 described above, the method may further include S201 and S202.
In S201, depth image data of the target scene is obtained.
In S202, the depth image data is aligned with the RGB image data of the target scene to obtain the point cloud data of the target scene.
In one embodiment, the depth image data and the RGB image data of the target scene at the same moment can be acquired by a depth camera; alternatively, only the depth image data of the target scene is acquired by the depth camera. After the depth image data of the target scene is obtained, it is aligned with the RGB image data, where the alignment can be performed by mapping the depth image into the RGB image coordinate system, or by mapping the RGB image into the depth image coordinate system. It is worth explaining that acquiring synchronized depth image data and RGB image data of the target scene is not limited to using a depth camera; a three-dimensional laser scanner, a binocular camera, or the like can also be used.
Through the method of this embodiment, the depth image data and RGB image data of the target scene can be obtained, and the point cloud data of the target scene is then obtained by aligning the depth data with the RGB image data, thereby combining the geometric features of objects with image-based object detection and improving the accuracy of object detection.
In another embodiment, the target pixel points that meet the preset condition can first be extracted from the RGB image data of the target scene, and then the depth image data of those target pixel points can be obtained (for example, by a depth camera), after which the depth image data corresponding to the target pixel points is aligned with the RGB image data so as to directly acquire the point cloud data of the target pixel points. In this way, the amount of data involved in aligning the depth image data with the RGB image data can be reduced and the alignment speed improved, making object detection quicker and more efficient.
Figure 3 is a flowchart of a target pixel point determination method according to an exemplary embodiment of the present disclosure. The determination method may include the following steps.
In S301, the RGB image data of the target scene is transformed into HSV space.
In S302, pixel points that fall within the HSV interval corresponding to a preset target color are determined as the target pixel points.
The target color can be obtained in advance according to the target object, i.e., the color corresponding to the target object. Pixel points falling within the HSV interval corresponding to this target color are likely to belong to the target object, and these pixel points are therefore the target pixel points. Alternatively, in another embodiment, the target color is any color other than the colors corresponding to the background portion of the target scene; the present disclosure does not specifically limit this.
In one embodiment, the target pixel points can be determined by first transforming the RGB image data into HSV space and then determining the pixel points falling within the HSV interval corresponding to the preset target color as the target pixel points. In another embodiment, the target pixel points can be determined from the bounding boxes output by deep learning, for example the bounding boxes output by a deep-learning-based object detection model such as Faster R-CNN, SSD (Single Shot MultiBox Detector), or YOLO (You Only Look Once).
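The HSV-interval embodiment (S301/S302) can be sketched with the standard-library RGB-to-HSV conversion. The interval bounds and the saturation/value floors are assumptions for illustration; in practice OpenCV's `cvtColor`/`inRange` would do the same vectorized.

```python
import colorsys
import numpy as np

def target_pixels(rgb, h_range, s_min=0.3, v_min=0.2):
    """Transform RGB pixels to HSV and keep those whose hue falls in the
    preset interval h_range = (h_lo, h_hi), with hue normalized to [0, 1)."""
    mask = np.zeros(rgb.shape[:2], dtype=bool)
    for i in range(rgb.shape[0]):
        for j in range(rgb.shape[1]):
            h, s, v = colorsys.rgb_to_hsv(*(rgb[i, j] / 255.0))
            mask[i, j] = h_range[0] <= h <= h_range[1] and s >= s_min and v >= v_min
    return mask

# A red pixel (hue ~ 0) and a blue pixel (hue ~ 0.67)
img = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)
print(target_pixels(img, h_range=(0.0, 0.05)))  # [[ True False]]
```

The saturation and value floors suppress gray/dark background pixels whose hue is unreliable.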
Figure 4 is a flowchart of an object identification method according to an exemplary embodiment of the present disclosure. The identification method may include the following steps.
In S401, the first geometric feature vector corresponding to the target pixel point class is obtained, where the first geometric feature vector is obtained by arranging the geometric features of the target pixel point class in a preset arrangement order.
As described above, the geometric feature may include one or more of the following: length, a width-to-length ratio, a height-to-width ratio, a height-to-length ratio, height divided by the sum of the length, width, and height, and indication information characterizing whether the class is a straight line. After the geometric features of the target pixel point class are obtained, the corresponding first geometric feature vector can be generated from them. For example, suppose the obtained geometric features of the target pixel point class include the length, the width-to-length ratio, the height-to-width ratio, and the height-to-length ratio, and the preset arrangement order is length, height-to-width ratio, width-to-length ratio, height-to-length ratio. The corresponding first geometric feature vector is then [a1, b1, c1, d1], where a1 is the length, b1 the height-to-width ratio, c1 the width-to-length ratio, and d1 the height-to-length ratio in the geometric features of the target pixel point class.
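The arrangement step itself is trivial; a minimal sketch (feature names and values are hypothetical, chosen to mirror the [a1, b1, c1, d1] example above):

```python
# Arrange a class's geometric features into the first geometric feature
# vector using an assumed preset arrangement order.
features = {"length": 2.0, "height_to_width": 0.5,
            "width_to_length": 0.5, "height_to_length": 0.25}
preset_order = ["length", "height_to_width", "width_to_length", "height_to_length"]
first_vec = [features[name] for name in preset_order]
print(first_vec)  # [2.0, 0.5, 0.5, 0.25]
```

The same `preset_order` must be used when building the second geometric feature vectors, so that corresponding components are compared.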
In S402, the second geometric feature vector corresponding to each target object is obtained, where each second geometric feature vector is obtained by arranging the geometric features of the corresponding target object in the same preset arrangement order.
As set forth above, the geometric features of the target objects can be input in advance, after which the second geometric feature vector corresponding to each target object can be generated from those geometric features. Illustratively, the feature items included in the geometric feature of the target pixel point class are identical to the feature items included in the geometric feature of the target object.
For example, assuming the first geometric feature vector is [a1, b1, c1, d1], the second geometric feature vector corresponding to a target object is [a2, b2, c2, d2], where a2 is the length, b2 the height-to-width ratio, c2 the width-to-length ratio, and d2 the height-to-length ratio in the geometric features of the target object.
In S403, if there exists a second geometric feature vector similar to the first geometric feature vector corresponding to the target pixel point class, the object corresponding to the target pixel point class is identified as the target object corresponding to the similar second geometric feature vector.
After the first geometric feature vector corresponding to the target pixel point class and the second geometric feature vector corresponding to each target object are obtained, the first geometric feature vector can be matched against each second geometric feature vector one by one to determine whether the two are similar. If there exists a second geometric feature vector similar to the first geometric feature vector corresponding to the target pixel point class, this indicates that the object corresponding to the target pixel point class is similar to the target object corresponding to that second geometric feature vector; the object corresponding to the target pixel point class is therefore identified as the target object corresponding to the similar second geometric feature vector.
Illustratively, matching the first geometric feature vector against each second geometric feature vector one by one can be performed by computing a similarity parameter between the first geometric feature vector and each second geometric feature vector. Optionally, the similarity parameter is the cosine distance: if there exists a second geometric feature vector whose cosine distance to the first geometric feature vector is greater than or equal to a first preset distance, the second geometric feature vector with the largest cosine distance to the first geometric feature vector is determined as the second geometric feature vector similar to the first geometric feature vector. Optionally, the similarity parameter is the Euclidean distance: if there exists a second geometric feature vector whose Euclidean distance to the first geometric feature vector is less than or equal to a second preset distance, the second geometric feature vector with the smallest Euclidean distance to the first geometric feature vector is determined as the second geometric feature vector similar to the first geometric feature vector. It is worth noting that the similarity parameter is not limited to the above two implementations; other parameters that can determine the similarity between vectors are equally applicable to the present disclosure.
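The cosine variant of S403 can be sketched as follows. The threshold value is an assumed placeholder for the "first preset distance", and the function name is illustrative.

```python
import numpy as np

def match_object(first_vec, second_vecs, cos_thresh=0.99):
    """Match the first geometric feature vector against each second vector
    by cosine similarity; return the index of the most similar vector that
    clears the preset threshold, or None if no vector is similar enough."""
    best_idx, best_sim = None, cos_thresh
    for idx, vec in enumerate(second_vecs):
        sim = np.dot(first_vec, vec) / (np.linalg.norm(first_vec) * np.linalg.norm(vec))
        if sim >= best_sim:
            best_idx, best_sim = idx, sim
    return best_idx

first = np.array([2.0, 0.5, 0.5, 0.25])       # [a1, b1, c1, d1]
seconds = [np.array([1.0, 1.0, 1.0, 1.0]),    # dissimilar target object
           np.array([2.1, 0.5, 0.5, 0.26])]   # near-identical target object
print(match_object(first, seconds))  # 1
```

Swapping the comparison for `np.linalg.norm(first_vec - vec) <= thresh` with an arg-min gives the Euclidean-distance variant described above.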
Figure 5 is a flowchart of an object detection method according to another exemplary embodiment of the present disclosure. As shown in Figure 5, the method may include the following steps.
In S11, target pixel points that meet a preset condition are extracted from RGB image data of a target scene.
In S12, the target pixel points are clustered according to their point cloud data to obtain at least one target pixel point class. The specific implementations of S11 and S12 have been described above and are not repeated here.
In S501, the distance from the class center of each target pixel point class to the origin is calculated.
In S502, if not all of the target objects have been identified and there remain target pixel point classes that have not yet been identified, the target pixel point class whose class center is nearest to the origin among the classes not yet identified is determined as the current target pixel point class.
In S503, the geometric feature of the current target pixel point class is determined.
In S504, the object corresponding to the current target pixel point class is identified according to the determined geometric feature of the current target pixel point class and the geometric features of the target objects. The specific identification process is similar to that described above in conjunction with Figure 4 and is not repeated here.
In this embodiment, the distance from the class center of each target pixel point class to the origin is calculated first; optionally, the Euclidean distance from the class center of the target pixel point class to the origin is calculated. The target pixel point classes are then sorted in order of increasing distance from the class center to the origin; optionally, the sorting method can be bubble sort, selection sort, insertion sort, quick sort, or the like. For example, suppose clustering yields three target pixel point classes, A, B, and C, and sorting them by the distance from class center to origin from nearest to farthest gives A, B, C. Suppose there are three known target objects: target object 1, target object 2, and target object 3. When S502 is executed for the first time, since the three target objects have not all been identified and none of the three target pixel point classes has been through identification, the condition in S502 is met, and class A is determined as the current target pixel point class. S503 and S504 are then executed. If after S504 the object corresponding to class A has not been identified (that is, there is no second geometric feature vector similar to the first geometric feature vector corresponding to the current target pixel point class), or the object corresponding to class A is identified as target object 2, the process returns to S502, and S502–S504 are executed in a loop until all target pixel point classes have been through identification or all target objects have been identified.
By judging the target pixel point classes in order of increasing distance from the class center to the origin, the priority of nearer classes is raised and that of farther classes is lowered, because under normal circumstances the farther classes contain background data and are themselves less likely to be target objects. In this way, the speed of object detection and the efficiency of identification can be improved.
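The nearest-first ordering of S501–S504 can be sketched as below. The helper names are assumptions; `identify` stands in for the Figure 4 matching step, and the early-exit condition (all target objects identified) is omitted for brevity.

```python
import numpy as np

def identify_nearest_first(classes, identify):
    """Process target pixel point classes in order of increasing Euclidean
    distance from class center to origin; `identify` is the per-class
    matching step (returns an object label or None)."""
    order = sorted(classes, key=lambda pts: np.linalg.norm(np.mean(pts, axis=0)))
    return [identify(pts) for pts in order]

# Class B is nearer to the origin than class A, so it is judged first
cls_a = np.array([[3.0, 0, 0], [5.0, 0, 0]])    # class center distance 4
cls_b = np.array([[1.0, 0, 0], [1.0, 2.0, 0]])  # class center distance ~1.41
labels = identify_nearest_first([cls_a, cls_b],
                                lambda pts: f"center_x={np.mean(pts, axis=0)[0]}")
print(labels)  # ['center_x=1.0', 'center_x=4.0']
```

Python's built-in `sorted` plays the role of the bubble/selection/insertion/quick sort mentioned above.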
In addition, in the object detection method provided by the present disclosure, the method may further include the following step: after the object corresponding to the target pixel point class is identified, the coordinates of the object corresponding to the target pixel point class are determined.
Wherein it is possible to determine the seat of the corresponding object of target pixel points class according to the point cloud data of target pixel points class
Mark.In this way, may further determine that out the pose of object after identifying the corresponding object of target pixel points class, thus auxiliary
Execute crawl task with helping robot more accurate and flexible.
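As one illustrative realization of this step, the coordinates of an identified object may be taken as the centroid of the point cloud of its pixel class; the disclosure does not fix the exact definition, so the centroid here is an assumption:

```python
def object_coordinate(points):
    """Return one plausible 'coordinate' for an identified object: the
    centroid of the camera-frame XYZ points of its target pixel point
    class. `points` is a non-empty sequence of (x, y, z) tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))
```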
Fig. 6 is a block diagram of an object detection apparatus according to an exemplary embodiment of the disclosure. The apparatus 100 may include:
an extraction module 601, configured to extract target pixel points meeting a preset condition from the RGB image data of a target scene;
a clustering module 602, configured to cluster the target pixel points according to the point cloud data of the target pixel points, to obtain at least one target pixel point class, wherein each target pixel point class corresponds to one object; and
an identification module 603, configured to determine a geometric feature of the at least one target pixel point class, and to identify the object corresponding to a target pixel point class according to the determined geometric feature of the target pixel point class and the geometric features of known target objects.
The scheme of the disclosure combines the geometric features of objects with a target detection technique based on two-dimensional images. In this way, the interference caused to the robot by still images that contain the target object can be excluded, enhancing the accuracy of object detection, so that the robot is assisted in performing grasping tasks more flexibly and accurately. Moreover, the method can reduce the algorithm's dependence on image detection results: it is only necessary that the object to be detected is contained in the target scene provided by image detection, thereby significantly improving the robustness of the detection method. In addition, the object detection method provided by the disclosure does not require the appearance of objects to be modified, and for different objects, object detection can be completed simply by setting their geometric properties, without building a complicated model. The detection method provided by the disclosure is therefore simple to implement, greatly improves detection efficiency, and has stronger versatility.
Optionally, the apparatus may further include:
a first obtaining module, configured to obtain the depth image data of the target scene before the extraction module extracts the target pixel points meeting the preset condition from the RGB image data of the target scene; and
a first alignment module, configured to align the depth image data with the RGB image data, to obtain the point cloud data of the target scene.
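A common way to realize this alignment step, sketched below under the assumption of a pinhole camera with known intrinsics `fx`, `fy`, `cx`, `cy` (parameters not specified by the disclosure), is to back-project each aligned depth pixel into a camera-frame XYZ point:

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an aligned depth image (a list of rows of depth
    values) into camera-frame (x, y, z) points using a pinhole model.
    Pixels with non-positive depth are treated as invalid and skipped."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```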
Optionally, the apparatus may further include:
a second obtaining module, configured to obtain the depth image data of the target pixel points; and
a second alignment module, configured to align the depth image data of the target pixel points with the RGB image data of the target pixel points, to obtain the point cloud data of the target pixel points.
Optionally, the extraction module 601 may include:
a transform submodule, configured to transform the RGB image data into HSV space; and
a determination submodule, configured to determine pixels falling within the HSV interval corresponding to a preset color of an object as the target pixel points.
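A per-pixel sketch of this extraction, using the standard library's `colorsys` conversion; the hue interval and the saturation and value floors below are illustrative thresholds rather than values fixed by the disclosure:

```python
import colorsys

def is_target_pixel(r, g, b, hue_range, s_min=0.3, v_min=0.2):
    """Transform one 8-bit RGB pixel to HSV and test whether it falls in
    the interval corresponding to the preset object color. `hue_range` is
    a (low, high) pair of hues in [0, 1); low > high denotes an interval
    that wraps around 0, as red hues do."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    lo, hi = hue_range
    in_hue = lo <= h <= hi if lo <= hi else (h >= lo or h <= hi)
    return in_hue and s >= s_min and v >= v_min
```

Applying this test to every pixel of the RGB image yields the set of target pixel points that are subsequently clustered.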
Optionally, the identification module 603 may be configured to:
obtain a first geometric feature vector corresponding to the target pixel point class, wherein the first geometric feature vector is a vector obtained by arranging the geometric features of the target pixel point class in a preset arrangement order;
obtain a second geometric feature vector corresponding to each target object, wherein each second geometric feature vector is a vector obtained by arranging the geometric features of the target object corresponding to that second geometric feature vector in the preset arrangement order; and
if there is a second geometric feature vector similar to the first geometric feature vector corresponding to the target pixel point class, identify the object corresponding to the target pixel point class as the target object corresponding to the similar second geometric feature vector.
Optionally, if there is a second geometric feature vector whose cosine distance from the first geometric feature vector corresponding to the target pixel point class is greater than or equal to a preset distance, the second geometric feature vector with the largest cosine distance from the first geometric feature vector is determined as the second geometric feature vector similar to the first geometric feature vector.
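Reading the "cosine distance" here as cosine similarity (larger meaning more similar), this matching rule can be sketched as follows; the 0.9 threshold is an illustrative stand-in for the preset distance:

```python
import math

def most_similar_target(first_vec, second_vecs, threshold=0.9):
    """Among known targets whose second geometric feature vector has a
    cosine similarity to `first_vec` of at least `threshold`, return the
    name of the one with the largest similarity, or None when no target
    is similar enough. `second_vecs` maps target names to vectors."""
    def cosine(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = math.hypot(*a) * math.hypot(*b)
        return num / den if den else 0.0
    scored = {name: cosine(first_vec, vec) for name, vec in second_vecs.items()}
    best = max(scored, key=scored.get, default=None)
    return best if best is not None and scored[best] >= threshold else None
```

Returning `None` corresponds to the case above where no similar second geometric feature vector exists and the class is left unidentified.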
Optionally, the apparatus may further include:
a computing module, configured to calculate the distance from the class center of each target pixel point class to the origin.
The identification module 603 is configured to:
if not all of the target objects have been identified and there is a target pixel point class that has not yet been examined, determine, among the target pixel point classes that have not yet been examined, the target pixel point class whose class center is nearest to the origin as the current target pixel point class;
determine the geometric feature of the current target pixel point class; and
identify the object corresponding to the current target pixel point class according to the determined geometric feature of the current target pixel point class and the geometric features of the target objects.
Optionally, the apparatus may further include:
a determining module, configured to determine, after the object corresponding to a target pixel point class is identified, the coordinates of the object corresponding to that target pixel point class.
Optionally, the geometric feature includes one or more of the following: a length, a width-to-length ratio, a height-to-width ratio, a height-to-length ratio, a sum of the length, width, and height, and indication information for characterizing whether the class is a straight line.
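Assembled in one fixed preset order, such a first geometric feature vector might look as follows; the exact ordering and the numeric encoding of the straight-line indicator are assumptions for illustration:

```python
def feature_vector(length, width, height, is_line):
    """Arrange the geometric features listed above into a vector in one
    fixed, preset order, with the straight-line indication encoded as a
    0/1 flag at the end."""
    return [
        length,
        width / length,            # width-to-length ratio
        height / width,            # height-to-width ratio
        height / length,           # height-to-length ratio
        length + width + height,   # sum of length, width, and height
        1.0 if is_line else 0.0,   # indication: straight line or not
    ]
```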
With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method, and will not be elaborated here.
Based on the same concept, the disclosure provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the steps of the object detection method provided by the disclosure are implemented.
Fig. 7 is a block diagram of an electronic device 700 according to an exemplary embodiment. As shown in Fig. 7, the electronic device 700 may include a processor 701 and a memory 702. The electronic device 700 may further include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700, so as to complete all or part of the steps of the above object detection method. The memory 702 is configured to store various types of data to support the operation of the electronic device 700; these data may include, for example, instructions of any application or method for operating on the electronic device 700, as well as data related to the applications, such as contact data, sent and received messages, pictures, audio, video, and the like. The memory 702 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. The multimedia component 703 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is configured to output and/or input audio signals. For example, the audio component may include a microphone configured to receive external audio signals. The received audio signals may be further stored in the memory 702 or sent via the communication component 705. The audio component further includes at least one speaker configured to output audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, and the other interface modules may be a keyboard, a mouse, buttons, and the like. The buttons may be virtual buttons or physical buttons. The communication component 705 is configured for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, near-field communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, other 5G technologies, or a combination of one or more of them, which is not limited here. Accordingly, the communication component 705 may include a Wi-Fi module, a Bluetooth module, an NFC module, and the like.
In an exemplary embodiment, the electronic device 700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for executing the above object detection method.
In another exemplary embodiment, a computer-readable storage medium including program instructions is further provided, and when the program instructions are executed by a processor, the steps of the above object detection method are implemented. For example, the computer-readable storage medium may be the above memory 702 including program instructions, and the program instructions may be executed by the processor 701 of the electronic device 700 to complete the above object detection method.
Fig. 8 is a block diagram of an electronic device 800 according to an exemplary embodiment. For example, the electronic device 800 may be provided as a server. Referring to Fig. 8, the electronic device 800 includes one or more processors 822 and a memory 832 for storing a computer program executable by the processor 822. The computer program stored in the memory 832 may include one or more modules, each corresponding to a set of instructions. In addition, the processor 822 may be configured to execute the computer program, so as to execute the above object detection method.
In addition, the electronic device 800 may further include a power supply component 826 and a communication component 850. The power supply component 826 may be configured to perform power management of the electronic device 800, and the communication component 850 may be configured to realize communication, for example wired or wireless communication, of the electronic device 800. In addition, the electronic device 800 may further include an input/output (I/O) interface 858. The electronic device 800 may operate based on an operating system stored in the memory 832, such as Windows Server™, Mac OS X™, Unix™, Linux™, and the like.
In another exemplary embodiment, a computer-readable storage medium including program instructions is further provided, and when the program instructions are executed by a processor, the steps of the above object detection method are implemented. For example, the computer-readable storage medium may be the above memory 832 including program instructions, and the program instructions may be executed by the processor 822 of the electronic device 800 to complete the above object detection method.
The preferred embodiments of the disclosure have been described in detail above with reference to the accompanying drawings. However, the disclosure is not limited to the specific details of the above embodiments. Within the scope of the technical concept of the disclosure, a variety of simple variants can be made to the technical solution of the disclosure, and these simple variants all belong to the protection scope of the disclosure.
It should further be noted that the specific technical features described in the above specific embodiments can, where not contradictory, be combined in any suitable manner. In order to avoid unnecessary repetition, the disclosure will not further describe the various possible combinations.
In addition, the various different embodiments of the disclosure may also be combined in any manner, and as long as such combinations do not depart from the idea of the disclosure, they should likewise be regarded as content disclosed by the disclosure.