CN107221005A - Object detection method and device - Google Patents
Object detection method and device
- Publication number
- CN107221005A (application CN201710309200.9A)
- Authority
- CN
- China
- Prior art keywords
- layer
- connected component
- feature map
- object to be detected
- convolution kernel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides an object detection method and device. The method includes: obtaining a depth image and an RGB image of an object to be detected; extracting a connected component from the depth image; determining the target feature map layer at which coordinate regression is performed for the connected component; inputting the target region of the RGB image into a neural network and processing it up to the target feature map layer, where the target region is the region of the RGB image corresponding to the connected component containing the object to be detected; and performing coordinate regression on the feature map obtained at the target feature map layer to obtain the detection result for the object in the target region, the detection result including the object's coordinates and bounding box in the RGB image. The connected component thus narrows the detection region: only the corresponding part of the RGB image is fed through the neural network, which saves a large amount of computation, and coordinate regression is performed only on the feature map of the target feature map layer, which speeds up object detection and improves detection efficiency.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an object detection method and device.
Background art
With the rapid development of artificial intelligence and big-data technology, more and more products are becoming intelligent. Image recognition is an essential part of this intelligence: with an image as input, objects in the image are detected and localized by various methods, and the class of each object is identified.
In the related art, object detection can be performed by traditional image segmentation methods, deep neural networks, and the like. Deep neural networks are more robust than traditional image segmentation methods, but they require large amounts of data and computing resources, so when computing resources are limited, detection speed and accuracy drop significantly.
Summary of the invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art.
Accordingly, a first object of the present invention is to propose an object detection method that narrows the detection region by means of a connected component, feeds only the part of the RGB image corresponding to the connected component into a neural network, and performs coordinate regression only on the feature map obtained at the target feature map layer, thereby solving the prior-art problem that detection speed and efficiency drop significantly when computing resources are insufficient.
A second object of the present invention is to propose an object detection device.
A third object of the present invention is to propose another object detection device.
A fourth object of the present invention is to propose a non-transitory computer-readable storage medium.
A fifth object of the present invention is to propose a computer program product.
To achieve the above objects, an embodiment of the first aspect of the present invention proposes an object detection method comprising the following steps: obtaining a depth image and an RGB image of an object to be detected; extracting a connected component from the depth image; determining the target feature map layer at which coordinate regression is performed for the connected component; inputting the target region of the RGB image into a neural network and processing it up to the target feature map layer, where the target region is the region of the RGB image corresponding to the connected component containing the object to be detected; and performing coordinate regression on the feature map obtained at the target feature map layer to obtain the detection result for the object in the target region, the detection result including the object's coordinates and bounding box in the RGB image.
In the object detection method of the embodiment of the present invention, a connected component is extracted from the depth image, the target feature map layer for coordinate regression of the connected component is determined, the part of the RGB image corresponding to the connected component is fed into the neural network and processed up to the target feature map layer, and coordinate regression is finally performed on the feature map obtained at that layer to obtain the detection result for the object in the target region. The connected component thus narrows the detection region: only the corresponding part of the RGB image passes through the neural network, saving a large amount of computation, and coordinate regression is performed only on the feature map of the target feature map layer, speeding up object detection and improving detection efficiency.
To achieve the above objects, an embodiment of the second aspect of the present invention proposes an object detection device, comprising: an image obtaining module for obtaining a depth image and an RGB image of an object to be detected; an extraction module for extracting a connected component from the depth image; an acquisition module for determining the target feature map layer at which coordinate regression is performed for the connected component; a processing module for inputting the target region of the RGB image into a neural network and processing it up to the target feature map layer, where the target region is the region of the RGB image corresponding to the connected component; and a detection module for performing coordinate regression on the feature map obtained at the target feature map layer to obtain the detection result for the object in the target region, the detection result including the object's coordinates and bounding box in the RGB image.
In the object detection device of the embodiment of the present invention, a connected component is extracted from the depth image, the target feature map layer for coordinate regression of the connected component is determined, the part of the RGB image corresponding to the connected component is fed into the neural network and processed up to the target feature map layer, and coordinate regression is finally performed on the feature map obtained at that layer to obtain the detection result for the object in the target region. The connected component thus narrows the detection region: only the corresponding part of the RGB image passes through the neural network, saving a large amount of computation, and coordinate regression is performed only on the feature map of the target feature map layer, speeding up object detection and improving detection efficiency.
To achieve the above objects, an embodiment of the third aspect of the present invention proposes another object detection device, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to: obtain a depth image and an RGB image of an object to be detected; extract a connected component from the depth image; determine the target feature map layer at which coordinate regression is performed for the connected component; input the target region of the RGB image into a neural network and process it up to the target feature map layer, where the target region is the region of the RGB image corresponding to the connected component containing the object; and perform coordinate regression on the feature map obtained at the target feature map layer to obtain the detection result for the object in the target region, the detection result including the object's coordinates and bounding box in the RGB image.
To achieve the above objects, an embodiment of the fourth aspect of the present invention proposes a non-transitory computer-readable storage medium. When instructions in the storage medium are executed by a processor of a server, the server is enabled to perform an object detection method comprising: obtaining a depth image and an RGB image of an object to be detected; extracting a connected component from the depth image; determining the target feature map layer at which coordinate regression is performed for the connected component; inputting the target region of the RGB image into a neural network and processing it up to the target feature map layer, where the target region is the region of the RGB image corresponding to the connected component containing the object; and performing coordinate regression on the feature map obtained at the target feature map layer to obtain the detection result for the object in the target region, the detection result including the object's coordinates and bounding box in the RGB image.
To achieve the above objects, an embodiment of the fifth aspect of the present invention proposes a computer program product. When instructions in the computer program product are executed by a processor, an object detection method is performed, the method comprising: obtaining a depth image and an RGB image of an object to be detected; extracting a connected component from the depth image; determining the target feature map layer at which coordinate regression is performed for the connected component; inputting the target region of the RGB image into a neural network and processing it up to the target feature map layer, where the target region is the region of the RGB image corresponding to the connected component containing the object; and performing coordinate regression on the feature map obtained at the target feature map layer to obtain the detection result for the object in the target region, the detection result including the object's coordinates and bounding box in the RGB image.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and in part will become apparent from the description or be learned by practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of an object detection method according to an embodiment of the present invention;
Fig. 2 is a flowchart of an object detection method according to another embodiment of the present invention;
Fig. 3 is a flowchart of an object detection method according to yet another embodiment of the present invention;
Fig. 4 is a schematic diagram of a model structure according to an embodiment of the present invention;
Fig. 5 is a structural schematic diagram of an object detection device according to an embodiment of the present invention;
Fig. 6 is a structural schematic diagram of an acquisition module according to an embodiment of the present invention;
Fig. 7 is a structural schematic diagram of a first computing unit according to an embodiment of the present invention;
Fig. 8 is a structural schematic diagram of an object detection device according to another embodiment of the present invention.
Embodiments
Embodiments of the invention are described in detail below, examples of which are shown in the accompanying drawings, where the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present invention and should not be construed as limiting it.
The object detection method and device of the embodiments of the present invention are described below with reference to the accompanying drawings.
With the continual growth of image data, applications that perform object detection on images are increasingly common; for example, in image recognition, a target image is obtained and the objects it contains are detected.
At present, as scenes grow more complex, the speed and accuracy of prior-art object detection methods drop significantly when computing resources are insufficient.
The present invention proposes an object detection method that, compared with prior-art methods, speeds up object detection and achieves higher detection accuracy.
Fig. 1 is a flowchart of an object detection method according to an embodiment of the present invention. As shown in Fig. 1, the object detection method comprises the following steps:
Step 101: obtain a depth image and an RGB image of the object to be detected.
Step 102: extract a connected component from the depth image.
In practice, the depth image and RGB image of the object to be detected can be captured by a device such as a 3D camera.
Further, depending on the needs of the application scenario, the connected component can be extracted from the depth image in different ways, as illustrated below.
In a first example, the depth of each pixel in the depth image is obtained from a two-dimensional depth distribution function; two adjacent pixels are determined to belong to the same connected component when the difference between their depths is less than or equal to a preset depth threshold, and a connected component is then built from all contiguous pixels of the depth image that belong to the same component.
As one possible implementation, the two-dimensional depth distribution function is obtained as D1 = D(x, y) (0 <= x <= W1, 0 <= y <= H1), where W1 is the width of the RGB image and H1 is its height.
Assuming the preset depth threshold is dd, the depth distribution function can be partitioned into several connected components according to this threshold: if the depths of two adjacent pixels are D(x1, y1) and D(x2, y2), the two pixels belong to the same connected component when |D(x1, y1) - D(x2, y2)| <= dd. The extent of connected component j can be recorded as: wminj <= x <= wmaxj, hminj <= y <= hmaxj.
In a second example, the depth image is processed with software such as OpenCV or MATLAB, and the connected components are extracted from it.
It should be noted that the above are only examples of extracting a connected component from a depth image; other ways may be selected or configured according to the needs of the application.
It should also be noted that the number of connected components extracted from the depth image in the above ways may be N, where N denotes the number of connected-component intervals contained in the depth image.
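The first example above can be sketched as a flood fill over the depth map that merges 4-adjacent pixels whose depths differ by at most the threshold dd. This is an illustrative sketch, not the patent's implementation; the threshold value and the tiny test array are assumptions:

```python
from collections import deque

def depth_connected_components(depth, dd):
    """Group pixels into connected components: 4-adjacent pixels whose
    depths differ by at most dd share a component. Returns per-pixel
    labels and one bounding box (xmin, xmax, ymin, ymax) per component."""
    h, w = len(depth), len(depth[0])
    labels = [[-1] * w for _ in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            label = len(boxes)
            labels[sy][sx] = label
            xmin = xmax = sx
            ymin = ymax = sy
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and labels[ny][nx] == -1
                            and abs(depth[ny][nx] - depth[y][x]) <= dd):
                        labels[ny][nx] = label
                        xmin, xmax = min(xmin, nx), max(xmax, nx)
                        ymin, ymax = min(ymin, ny), max(ymax, ny)
                        queue.append((ny, nx))
            boxes.append((xmin, xmax, ymin, ymax))
    return labels, boxes

# Two depth plateaus separated by a jump larger than the threshold dd
depth = [[1.0, 1.1, 5.0],
         [1.0, 1.2, 5.1]]
labels, boxes = depth_connected_components(depth, dd=0.5)
print(boxes)  # [(0, 1, 0, 1), (2, 2, 0, 1)]
```

The returned boxes correspond to the intervals wminj <= x <= wmaxj, hminj <= y <= hmaxj described above.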
Step 103: determine the target feature map layer at which coordinate regression is performed for the connected component.
Specifically, a deep convolutional neural network can be designed and used to solve the coordinate-regression problem for the connected component, and the target feature map layer at which coordinate regression is performed is then determined.
As one implementation, a first area, namely the area of the object to be detected on the RGB image, is computed first; then, for each feature map layer, a second area, namely the area on the RGB image covered by the convolution kernel used at that layer, is computed. The difference between the first area and each layer's second area is obtained, and the layer whose second area yields the smallest difference is taken as the target feature map layer, denoted OLj in this embodiment.
Step 104: input the target region of the RGB image into the neural network and process it up to the target feature map layer, where the target region is the region of the RGB image corresponding to the connected component containing the object to be detected.
In this embodiment, the target feature map layer is the last layer at which the neural network performs operations such as feature extraction and downsampling on the target region of the RGB image.
Step 105: perform coordinate regression on the feature map obtained at the target feature map layer to obtain the detection result for the object in the target region, the detection result including the object's coordinates and bounding box in the RGB image.
Specifically, after the target feature map layer is determined, the region of the RGB image corresponding to the connected component containing the object can be input into the neural network as the target region and processed up to the target feature map layer.
It should be noted that, to further improve the accuracy of the target feature map layer, after the target region is input into the neural network, processing such as feature extraction, downsampling, and dimensionality reduction is performed; for the j-th connected component, the neural network processes the corresponding target region of the RGB image up to the feature map of layer OLj.
It should be noted that the neural network in this embodiment refers to a preset, pre-trained neural network model, which may use multiple types of layers; for example, convolution layers and pooling layers can be used for sampling, shortening the width and height of the feature map.
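As a rough illustration of how such sampling layers shrink the feature map, the sketch below assumes stride-2 pooling stages; the pooling factor and stage count are illustrative, not taken from the patent:

```python
def map_side_after(side, pool=2, stages=3):
    """Each conv + pool stage here halves the feature-map side length
    (integer division, as with a stride-2 pooling layer)."""
    for _ in range(stages):
        side //= pool
    return side

print(map_side_after(300))  # 37  (300 -> 150 -> 75 -> 37)
```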
It will be understood that the feature map of each feature map layer can be obtained after neural-network processing; coordinate regression need only be performed on the feature map obtained at the target feature map layer to obtain the detection result for the object in the target region, the detection result including the object's coordinates and bounding box in the RGB image.
More specifically, feature vectors are extracted from the feature map using the convolution kernel of the target feature map layer, and coordinate regression is computed on these feature vectors to obtain one or more candidate results for the object in the RGB image; the actual coordinates and bounding box of the object are finally determined from the candidate results by a non-maximum suppression algorithm or a clustering algorithm.
It will be understood that each candidate result includes coordinates and a bounding box of the object in the RGB image.
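The final selection step can be sketched with a standard non-maximum suppression routine; the (x1, y1, x2, y2) box format, the scores, and the overlap threshold are illustrative assumptions, not details from the patent:

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); intersection-over-union of a and b.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring candidate, drop candidates that overlap
    it too much, and repeat on the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

cands = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(cands, scores))  # [0, 2]
```

The first two candidates overlap heavily, so only the higher-scoring one survives; the third is disjoint and is kept.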
In the object detection method of the embodiment of the present invention, a connected component is extracted from the depth image, the target feature map layer for coordinate regression of the connected component is determined, the part of the RGB image corresponding to the connected component is fed into the neural network and processed up to the target feature map layer, and coordinate regression is finally performed on the feature map obtained at that layer to obtain the detection result for the object in the target region. The connected component thus narrows the detection region: only the corresponding part of the RGB image passes through the neural network, saving a large amount of computation, and coordinate regression is performed only on the feature map of the target feature map layer, speeding up object detection and improving detection efficiency.
Based on the above embodiment, to describe more clearly how the target feature map layer for coordinate regression of the connected component is determined, the embodiment shown in Fig. 2 is described as follows.
Fig. 2 is a flowchart of an object detection method according to another embodiment of the present invention.
In this embodiment, the first area of the object to be detected on the RGB image is computed first; then, for each feature map layer, the second area on the RGB image covered by the convolution kernel used at that layer is computed; the difference between the first area and each layer's second area is obtained, and the layer whose second area yields the smallest difference is taken as the target feature map layer.
As shown in Fig. 2, step S103 of the above embodiment includes steps S201-S204.
Step 201: compute the first area of the object to be detected on the RGB image.
Specifically, the average distance of the connected component from the camera is obtained first. As one example, the depths of all pixels in the connected component are summed and the sum is divided by the area of the connected component, giving the average distance, i.e. dj = (sum of pixel depths) / (component area). Denote this average distance dj (1 <= j <= N), where N is the number of connected-component intervals; the average distance of each connected component from the camera is computed in this way.
Further, the actual width and actual height of the object to be detected are obtained. It will be understood that, to ensure the accuracy of these values, the widths and heights of multiple objects to be detected can be averaged, and the resulting mean width and mean height used as the actual width and actual height of the object.
Further, the focal length of the camera is multiplied by the actual width, and the product is divided by the average distance to obtain the image width of the object. As one example, the image width can be computed by the formula Owj = f*Wr/dj, where f is the focal length of the camera, Wr is the actual width, and dj is the average distance of the connected component from the camera.
Further, the focal length of the camera is multiplied by the actual height, and the product is divided by the average distance to obtain the image height of the object. As one example, the image height can be computed by the formula Ohj = f*Hr/dj, where f is the focal length, Hr is the actual height, and dj is the average distance of the connected component from the camera.
Further, the first area is obtained from the image width and image height. As one example, it can be computed by the formula Osj = Owj*Ohj, where Owj is the image width and Ohj is the image height.
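The three formulas above (Owj = f*Wr/dj, Ohj = f*Hr/dj, Osj = Owj*Ohj) amount to a pinhole-camera projection. A minimal sketch, with made-up focal length, object size, and distance:

```python
def projected_size(f, real_w, real_h, dist):
    """Pinhole projection: an object of real size real_w x real_h at
    distance dist images to (f*real_w/dist) x (f*real_h/dist) pixels;
    the first area is the product of the two."""
    ow = f * real_w / dist   # Ow_j = f * Wr / d_j
    oh = f * real_h / dist   # Oh_j = f * Hr / d_j
    return ow, oh, ow * oh   # Os_j = Ow_j * Oh_j

# Illustrative values: 600 px focal length, 0.2 m x 0.15 m object, 1.5 m away
ow, oh, area = projected_size(f=600.0, real_w=0.2, real_h=0.15, dist=1.5)
print(ow, oh, area)  # 80.0 60.0 4800.0
```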
Step 202: compute, for each feature map layer, the second area on the RGB image covered by the convolution kernel used at that layer.
Specifically, the map width Wi and map height Hi of the feature map produced at the i-th feature map layer by sampling the target region are obtained first, where 1 <= i <= L and L is the number of feature map layers.
Further, the kernel width Swi and kernel height Shi of the convolution kernel used at that feature map layer are obtained.
Further, the kernel width is multiplied by the first-layer map width and the product is divided by the i-th layer's map width, giving the width on the RGB image covered by the kernel of the i-th feature map layer. As one example, it can be computed by the formula Bwi = Swi*W1/Wi, where Swi is the kernel width, Wi is the map width of layer i, and W1 is the first-layer map width, i.e. the width of the RGB image.
Further, the kernel height is multiplied by the first-layer map height and the product is divided by the i-th layer's map height, giving the height on the RGB image covered by the kernel of the i-th feature map layer. As one example, it can be computed by the formula Bhi = Shi*H1/Hi, where Shi is the kernel height, Hi is the map height of layer i, and H1 is the first-layer map height, i.e. the height of the RGB image.
Further, the second area is obtained from the kernel's width and height on the image. As one example, it can be computed by the formula Ssi = Bwi*Bhi, where Bwi and Bhi are, respectively, the width and height covered on the RGB image by the kernel of layer i.
Step 203: compute the difference between the first area and the second area of each feature map layer.
Step 204: determine the layer whose second area yields the smallest difference as the target feature map layer.
Specifically, the area differences are computed, the minimum difference is found, and the layer whose second area produces the minimum difference is the target feature map layer. As one example, computing the area differences and determining the minimum can be expressed as: Osminj = min(|Osj - Ss1|, ..., |Osj - Ssi|, ..., |Osj - SsL|) = |Osj - Sst|, where L is the number of feature maps obtained after neural-network processing and min() takes the minimum value. If the computation shows that the second area of layer t is closest to the first area, the target feature map layer of connected component j is OLj = t (1 <= j <= N). That is, the target feature map layer is layer t: the neural network processes the target region up to layer t and no further, and the feature map generated at layer t is used for coordinate regression.
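Steps 202-204 can be sketched as follows, projecting each layer's kernel back onto the image as Bwi = Swi*W1/Wi by Bhi = Shi*H1/Hi and picking the layer with the smallest area difference; the kernel and map sizes below are illustrative (loosely SSD-like), not taken from the patent:

```python
def target_layer(obj_area, kernels, maps, image_wh):
    """Pick the feature-map layer whose convolution kernel, projected
    back onto the RGB image, covers the area closest to the object's
    projected area (smallest |Os_j - Ss_i|)."""
    w1, h1 = image_wh
    best_layer, best_diff = None, float("inf")
    for i, ((sw, sh), (wi, hi)) in enumerate(zip(kernels, maps), start=1):
        bw = sw * w1 / wi          # Bw_i: kernel width on the image
        bh = sh * h1 / hi          # Bh_i: kernel height on the image
        diff = abs(obj_area - bw * bh)
        if diff < best_diff:
            best_layer, best_diff = i, diff
    return best_layer

# 300x300 image; three layers with 3x3 kernels on progressively smaller maps
kernels = [(3, 3), (3, 3), (3, 3)]
maps = [(300, 300), (75, 75), (19, 19)]
print(target_layer(150.0, kernels, maps, (300, 300)))  # 2
```

A small object area selects an early layer whose kernel covers few image pixels; a larger object area selects a deeper layer whose kernel covers a larger image region.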
Thus, the depth connected component narrows the range over which feature maps must be computed; the target feature map layer is determined from the distance information, and coordinate regression is performed only at that layer, further improving detection efficiency.
Based on the description of the above embodiments, and to help those skilled in the art better understand the above process, gesture detection is taken as an example below with reference to Figs. 3 and 4.
Fig. 3 is a flowchart of an object detection method according to yet another embodiment of the present invention. As shown in Fig. 3, the object detection method comprises the following steps:
Step 301: compute the mean actual size of the objects to be detected.
Specifically, taking gesture detection as an example, the maximum length and width in space of various gestures made by several people are collected and averaged, giving the mean actual size (length and width) of the gestures to be detected.
Step 302: train an object detection deep neural network model.
Specifically, an SSD model, which contains multiple feature sampling layers and coordinate regression layers, is trained on gesture data as the detection model.
Step 303: capture a depth image and an RGB image with a depth camera.
Step 304: extract the connected components of the depth image, and compute the index of the low-level feature map layer corresponding to each connected component.
Specifically, the depth image is collected, and the depth connected components and their average distances are computed. Then, from the distance value and the kernel sizes of the coordinate regression layers, the index of the low-level feature map layer, i.e. the target feature map layer, obtained when the RGB image region of the connected component is fed into the SSD model is computed.
Step 305: sample the RGB region corresponding to the connected domain down to the corresponding low-level feature map layer to obtain the low-level feature map.
Step 306: perform coordinate regression on the low-level feature map, and obtain the detection result of the object to be detected in the target region.
Specifically, the RGB image region is fed into the model until the low-level feature map layer is reached, and the corresponding low-level feature map is obtained; here, the low-level feature map is the feature map collected at the target feature map layer. Coordinate regression is then performed by convolution, and the actual coordinates and bounding box of the gesture to be detected are filtered out of multiple candidate detection results by a non-maximum suppression algorithm.
More specifically, with the model structure shown in Fig. 4, two connected domains are retrieved from the depth picture; the corresponding RGB images are fed into the model and sampled down to the corresponding coordinate regression layers, where the regression calculation is performed.
Thus, the depth connected domain reduces the region over which object detection is performed: only the RGB picture corresponding to the connected domain is fed into the neural network, saving a large amount of computation. The distance information determines the bounding box size of the object to be detected and the layer number of the feature map used for coordinate regression, so coordinate regression on the connected domain's feature map is carried out only at that layer, improving both detection efficiency and recall.
Fig. 5 is a schematic structural diagram of an object detection apparatus according to an embodiment of the present invention. As shown in Fig. 5, the object detection apparatus includes: a picture acquisition module 11, an extraction module 12, an acquisition module 13, a processing module 14, and a detection module 15.
The picture acquisition module 11 is configured to obtain a depth picture and an RGB picture of the object to be detected.
The extraction module 12 is configured to extract a connected domain from the depth picture.
The acquisition module 13 is configured to obtain the target feature map layer at which coordinate regression is performed on the connected domain.
The processing module 14 is configured to input the target region of the RGB picture into the neural network for processing up to the target feature map layer, where the target region is the region of the RGB picture that corresponds to and contains the connected domain.
The detection module 15 is configured to perform coordinate regression on the feature map obtained at the target feature map layer to obtain the detection result of the object to be detected in the target region, where the detection result includes the coordinates and bounding box of the object in the RGB picture.
Further, the extraction module 12 is specifically configured to: obtain the depth of each pixel in the depth picture according to a two-dimensional depth distribution function; if the difference between the depths of two adjacent pixels is less than or equal to a preset depth threshold, determine that the two pixels belong to the same connected domain; and build the connected domain for the depth picture from all contiguous pixels that belong to the same connected domain.
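The adjacent-pixel threshold rule above can be sketched as a breadth-first flood fill over the depth picture. This is a minimal illustration, not the patented implementation; the function name, arguments, and plain-list depth format are hypothetical:

```python
from collections import deque

def extract_connected_domains(depth, threshold):
    """Group contiguous pixels whose neighboring depths differ by at most
    `threshold` into connected domains; returns a list of pixel lists."""
    h, w = len(depth), len(depth[0])
    labels = [[-1] * w for _ in range(h)]
    domains = []
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            # Breadth-first flood fill from an unlabeled seed pixel.
            domain = [(sy, sx)]
            labels[sy][sx] = len(domains)
            queue = deque(domain)
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and labels[ny][nx] == -1
                            and abs(depth[ny][nx] - depth[y][x]) <= threshold):
                        labels[ny][nx] = len(domains)
                        domain.append((ny, nx))
                        queue.append((ny, nx))
            domains.append(domain)
    return domains
```

For example, a depth picture with a near plateau at 1.0 m and a far plateau at 5.0 m splits into two domains under a 0.5 m threshold.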
Fig. 6 is a schematic structural diagram of the acquisition module according to an embodiment of the present invention. The acquisition module 13 includes: a first computing unit 131, a second computing unit 132, and a determining unit 133.
The first computing unit 131 is configured to calculate the first area of the object to be detected on the RGB picture.
The second computing unit 132 is configured to calculate the second area, on the RGB picture, of the convolution kernel used by each feature map layer.
The determining unit 133 is configured to calculate the difference between the first area and the second area corresponding to each feature map layer, and to determine the layer whose second area yields the minimal difference among all the differences as the target feature map layer.
Fig. 7 is a schematic structural diagram of the first computing unit according to an embodiment of the present invention. The first computing unit 131 includes: a first obtaining subunit 1311, a second obtaining subunit 1312, a third obtaining subunit 1313, and a fourth obtaining subunit 1314.
The first obtaining subunit 1311 is configured to obtain the average distance of the connected domain from the camera.
The second obtaining subunit 1312 is configured to obtain the actual length and actual height of the object to be detected.
The third obtaining subunit 1313 is configured to multiply the focal length of the camera by the actual length and divide the product by the average distance to obtain the picture length of the object to be detected, and to multiply the focal length of the camera by the actual height and divide the product by the average distance to obtain the picture height of the object to be detected.
The fourth obtaining subunit 1314 is configured to obtain the first area from the picture length and the picture height.
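The pinhole-projection arithmetic performed by subunits 1311 to 1314 can be sketched as follows (a sketch under assumed units: focal length in pixels, real size in meters, distance in meters; the numbers are hypothetical):

```python
def first_area(focal_px, real_length_m, real_height_m, avg_distance_m):
    """Projected size of the object on the RGB picture: focal length times
    real size, divided by the average distance (pinhole camera model)."""
    picture_length = focal_px * real_length_m / avg_distance_m
    picture_height = focal_px * real_height_m / avg_distance_m
    return picture_length * picture_height

# A 0.2 m x 0.2 m gesture, 1.0 m from a camera with a 500-pixel focal
# length, projects to a 100 x 100 pixel box: a first area of 10000 px^2.
area = first_area(500, 0.2, 0.2, 1.0)
```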
Further, the second computing unit 132 is specifically configured to: obtain the map length and map height of the feature map obtained by sampling the target region at the i-th feature map layer, where 1 ≤ i ≤ N; obtain the kernel length and kernel height of the convolution kernel used by that feature map layer; multiply the kernel length by the map length of the first layer and divide the product by the map length of the i-th layer to obtain the kernel picture length, on the RGB picture, of the convolution kernel used by the i-th feature map layer; multiply the kernel height by the map height of the first layer and divide the product by the map height of the i-th layer to obtain the kernel picture height, on the RGB picture, of the convolution kernel used by the i-th feature map layer; and obtain the second area from the kernel picture length and the kernel picture height.
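Under one reading of this back-projection (each cell of the i-th feature map covers first-layer-length / i-th-layer-length picture pixels), the second area and the minimal-difference layer selection can be sketched as follows. The layer sizes are hypothetical, loosely modeled on an SSD-style network with a 300 x 300 first layer:

```python
def second_area(kernel_len, kernel_h, map_len_i, map_h_i, map_len_1, map_h_1):
    """Back-project the i-th layer's convolution kernel onto the RGB picture
    by scaling with the first-layer / i-th-layer map size ratio."""
    picture_len = kernel_len * map_len_1 / map_len_i
    picture_h = kernel_h * map_h_1 / map_h_i
    return picture_len * picture_h

def target_layer(first_area, layers):
    """Pick the layer whose kernel picture area is closest to the object's
    projected area (the determining unit's minimal-difference rule)."""
    diffs = [abs(first_area - second_area(*layer)) for layer in layers]
    return diffs.index(min(diffs))

# Hypothetical layers: (kernel_len, kernel_h, map_len_i, map_h_i,
# map_len_1, map_h_1). Deeper layers have smaller maps, so their kernels
# cover a larger picture area.
layers = [
    (3, 3, 300, 300, 300, 300),
    (3, 3, 38, 38, 300, 300),
    (3, 3, 19, 19, 300, 300),
    (3, 3, 10, 10, 300, 300),
    (3, 3, 5, 5, 300, 300),
]
```

With a first area of 10000 px^2 (the pinhole example above a 0.2 m object at 1.0 m), the fourth layer's 90 x 90 = 8100 px^2 kernel box gives the minimal difference.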
Further, the first obtaining subunit 1311 is specifically configured to sum the depths of all pixels in the connected domain and divide the sum by the area of the connected domain, obtaining the average distance of the connected domain.
Further, the detection module 15 is specifically configured to: perform feature vector extraction on the feature map using the convolution kernel of the target feature map layer; perform coordinate regression calculation with the extracted feature vectors to obtain candidate detection results of at least one object to be detected in the RGB image; and determine the detection result of the object to be detected from the candidate detection results based on a non-maximum suppression algorithm or a clustering algorithm.
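The first of the two named filtering algorithms, non-maximum suppression, can be sketched as follows (an illustrative sketch, not the patented implementation; the `(x1, y1, x2, y2)` box format and function names are assumptions):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(candidates, iou_threshold=0.5):
    """Keep the highest-scoring (box, score) candidates, discarding any box
    that overlaps an already-kept box by more than `iou_threshold`."""
    kept = []
    for box, score in sorted(candidates, key=lambda c: -c[1]):
        if all(iou(box, k) <= iou_threshold for k, _ in kept):
            kept.append((box, score))
    return kept
```

For example, of two heavily overlapping candidates for the same gesture, only the higher-scoring one survives, while a distant candidate is kept.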
In the object detection apparatus of this embodiment of the present invention, a connected domain is extracted from the depth picture, the target feature map layer at which coordinate regression is performed on the connected domain is obtained, the RGB picture corresponding to the connected domain is then input into the neural network for processing up to the target feature map layer, and finally coordinate regression is performed on the feature map obtained at the target feature map layer to obtain the detection result of the object to be detected in the target region. The connected domain thus narrows the object detection region: only the RGB picture corresponding to the connected domain is processed by the neural network, saving a large amount of computation, and coordinate regression is applied only to the feature map obtained at the target feature map layer, accelerating detection and improving object detection efficiency.
Fig. 8 is a schematic structural diagram of an object detection apparatus according to another embodiment of the present invention. The object detection apparatus includes:
a memory 21, a processor 22, and a computer program stored on the memory 21 and executable on the processor 22, wherein the processor 22, when executing the program, implements the object detection method provided in the above embodiments.
Further, the object detection apparatus also includes:
a communication interface 23 for communication between the memory 21 and the processor 22.
The memory 21 is configured to store the computer program executable on the processor 22.
The memory 21 may include high-speed RAM, and may also include non-volatile memory, for example at least one disk memory.
The processor 22 is configured to implement the object detection method described in the above embodiments when executing the program.
If the memory 21, the processor 22, and the communication interface 23 are implemented independently, the communication interface 23, the memory 21, and the processor 22 may be connected to one another by a bus and communicate with one another over it. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, the bus is drawn as a single thick line in Fig. 8, which does not mean that there is only one bus or only one type of bus.
Optionally, in a specific implementation, if the memory 21, the processor 22, and the communication interface 23 are integrated on a single chip, they may communicate with one another through internal interfaces.
The processor 22 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", "some examples", and the like means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine different embodiments or examples, and features of different embodiments or examples, described in this specification.
In addition, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature qualified by "first" or "second" may expressly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, for example two or three, unless specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a specific logical function or process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention pertain.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered list of executable instructions considered to implement logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Furthermore, the computer-readable medium could even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of the following techniques known in the art, or a combination of them, may be used: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art will appreciate that all or part of the steps carried by the methods of the above embodiments can be completed by instructing the relevant hardware through a program; the program may be stored in a computer-readable storage medium, and the program, when executed, includes one of the steps of the method embodiment or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing module, or the units may exist physically separately, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.
Claims (14)
1. An object detection method, characterized by comprising the following steps:
obtaining a depth picture and an RGB picture of an object to be detected;
extracting a connected domain from the depth picture;
obtaining a target feature map layer at which coordinate regression is performed on the connected domain;
inputting a target region of the RGB picture into a neural network for processing up to the target feature map layer, wherein the target region is the region of the RGB picture that corresponds to the connected domain and contains the object to be detected;
performing coordinate regression on the feature map obtained at the target feature map layer to obtain a detection result of the object to be detected in the target region, wherein the detection result comprises the coordinates and bounding box of the object to be detected in the RGB picture.
2. The object detection method according to claim 1, characterized in that extracting the connected domain from the depth picture comprises:
obtaining the depth of each pixel in the depth picture according to a two-dimensional depth distribution function;
if the difference between the depths of two adjacent pixels is less than or equal to a preset depth threshold, determining that the two pixels belong to the same connected domain;
building the connected domain for the depth picture from all contiguous pixels that belong to the same connected domain.
3. The object detection method according to claim 1, characterized in that obtaining the target feature map layer at which coordinate regression is performed on the connected domain comprises:
calculating a first area of the object to be detected on the RGB picture;
calculating a second area, on the RGB picture, of the convolution kernel used by each feature map layer;
calculating the difference between the first area and the second area corresponding to each feature map layer;
determining the layer whose second area yields the minimal difference among all the differences as the target feature map layer.
4. The object detection method according to claim 3, characterized in that calculating the first area of the object to be detected on the RGB picture comprises:
obtaining the average distance of the connected domain from the camera;
obtaining the actual length and actual height of the object to be detected;
multiplying the focal length of the camera by the actual length and dividing the product by the average distance to obtain the picture length of the object to be detected;
multiplying the focal length of the camera by the actual height and dividing the product by the average distance to obtain the picture height of the object to be detected;
obtaining the first area from the picture length and the picture height.
5. The object detection method according to claim 3, characterized in that calculating the second area, on the RGB picture, of the convolution kernel used by each feature map layer comprises:
obtaining the map length and map height of the feature map obtained by sampling the target region at the i-th feature map layer, wherein 1 ≤ i ≤ N;
obtaining the kernel length and kernel height of the convolution kernel used by that feature map layer;
multiplying the kernel length by the map length of the first layer and dividing the product by the map length of the i-th layer to obtain the kernel picture length, on the RGB picture, of the convolution kernel used by the i-th feature map layer;
multiplying the kernel height by the map height of the first layer and dividing the product by the map height of the i-th layer to obtain the kernel picture height, on the RGB picture, of the convolution kernel used by the i-th feature map layer;
obtaining the second area from the kernel picture length and the kernel picture height.
6. The object detection method according to claim 4, characterized in that obtaining the average distance of the connected domain from the camera comprises:
summing the depths of all pixels in the connected domain;
dividing the sum by the area of the connected domain to obtain the average distance of the connected domain.
7. The object detection method according to any one of claims 1-6, characterized in that performing coordinate regression on the feature map obtained at the target feature map layer and recognizing the object to be detected in the target region comprises:
performing feature vector extraction on the feature map using the convolution kernel of the target feature map layer;
performing coordinate regression calculation with the extracted feature vectors to obtain a candidate result of at least one object to be detected in the RGB image, wherein the candidate result comprises the coordinates and bounding box of the object to be detected in the RGB image;
determining the actual coordinates and bounding box of the object to be detected from the candidate result based on a non-maximum suppression algorithm or a clustering algorithm.
8. An object detection apparatus, characterized by comprising:
a picture acquisition module for obtaining a depth picture and an RGB picture of an object to be detected;
an extraction module for extracting a connected domain from the depth picture;
an acquisition module for obtaining a target feature map layer at which coordinate regression is performed on the connected domain;
a processing module for inputting a target region of the RGB picture into a neural network for processing up to the target feature map layer, wherein the target region is the region of the RGB picture that corresponds to and contains the connected domain;
a detection module for performing coordinate regression on the feature map obtained at the target feature map layer to obtain a detection result of the object to be detected in the target region, wherein the detection result comprises the coordinates and bounding box of the object to be detected in the RGB picture.
9. The object detection apparatus according to claim 8, characterized in that the extraction module is specifically configured to:
obtain the depth of each pixel in the depth picture according to a two-dimensional depth distribution function;
if the difference between the depths of two adjacent pixels is less than or equal to a preset depth threshold, determine that the two pixels belong to the same connected domain;
build the connected domain for the depth picture from all contiguous pixels that belong to the same connected domain.
10. The object detection apparatus according to claim 8, characterized in that the acquisition module comprises:
a first computing unit for calculating a first area of the object to be detected on the RGB picture;
a second computing unit for calculating a second area, on the RGB picture, of the convolution kernel used by each feature map layer;
a determining unit for calculating the difference between the first area and the second area corresponding to each feature map layer, and determining the layer whose second area yields the minimal difference among all the differences as the target feature map layer.
11. The object detection apparatus according to claim 10, characterized in that the first computing unit comprises:
a first obtaining subunit for obtaining the average distance of the connected domain from the camera;
a second obtaining subunit for obtaining the actual length and actual height of the object to be detected;
a third obtaining subunit for multiplying the focal length of the camera by the actual length and dividing the product by the average distance to obtain the picture length of the object to be detected, and multiplying the focal length of the camera by the actual height and dividing the product by the average distance to obtain the picture height of the object to be detected;
a fourth obtaining subunit for obtaining the first area from the picture length and the picture height.
12. The object detection apparatus according to claim 10, characterized in that the second computing unit is specifically configured to: obtain the map length and map height of the feature map obtained by sampling the target region at the i-th feature map layer, wherein 1 ≤ i ≤ N; obtain the kernel length and kernel height of the convolution kernel used by that feature map layer; multiply the kernel length by the map length of the first layer and divide the product by the map length of the i-th layer to obtain the kernel picture length, on the RGB picture, of the convolution kernel used by the i-th feature map layer; multiply the kernel height by the map height of the first layer and divide the product by the map height of the i-th layer to obtain the kernel picture height, on the RGB picture, of the convolution kernel used by the i-th feature map layer; and obtain the second area from the kernel picture length and the kernel picture height.
13. The object detection apparatus according to claim 11, characterized in that the first obtaining subunit is specifically configured to sum the depths of all pixels in the connected domain and divide the sum by the area of the connected domain to obtain the average distance of the connected domain.
14. The object detection apparatus according to any one of claims 8-13, characterized in that the detection module is specifically configured to: perform feature vector extraction on the feature map using the convolution kernel of the target feature map layer; perform coordinate regression calculation with the extracted feature vectors to obtain candidate detection results of at least one object to be detected in the RGB image; and determine the detection result of the object to be detected from the candidate detection results based on a non-maximum suppression algorithm or a clustering algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710309200.9A CN107221005B (en) | 2017-05-04 | 2017-05-04 | Object detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107221005A true CN107221005A (en) | 2017-09-29 |
CN107221005B CN107221005B (en) | 2020-05-08 |
Family
ID=59943806
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710309200.9A Active CN107221005B (en) | 2017-05-04 | 2017-05-04 | Object detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107221005B (en) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8787663B2 (en) * | 2010-03-01 | 2014-07-22 | Primesense Ltd. | Tracking body parts by combined color image and depth processing |
CN104143080B (en) * | 2014-05-21 | 2017-06-23 | 深圳市唯特视科技有限公司 | Three-dimensional face identifying device and method based on three-dimensional point cloud |
CN104751559B (en) * | 2015-03-25 | 2017-07-28 | 深圳怡化电脑股份有限公司 | Banknote tester and banknote detection method |
CN105059190B (en) * | 2015-08-17 | 2018-05-29 | 上海交通大学 | The automobile door opening collision warning device and method of view-based access control model |
CN105279484B (en) * | 2015-10-10 | 2019-08-06 | 北京旷视科技有限公司 | Method for checking object and object test equipment |
CN106355573B (en) * | 2016-08-24 | 2019-10-25 | 北京小米移动软件有限公司 | The localization method and device of object in picture |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11609414B2 (en) | 2018-06-21 | 2023-03-21 | Carl Zeiss Microscopy Gmbh | Method for calibrating a phase mask and microscope |
CN109344772A (en) * | 2018-09-30 | 2019-02-15 | 中国人民解放军战略支援部队信息工程大学 | Ultrashort wave signal specific reconnaissance method based on spectrogram and depth convolutional network |
CN109344772B (en) * | 2018-09-30 | 2021-01-26 | 中国人民解放军战略支援部队信息工程大学 | Ultrashort wave specific signal reconnaissance method based on spectrogram and deep convolutional network |
CN109448058A (en) * | 2018-11-12 | 2019-03-08 | 北京拓疆者智能科技有限公司 | " loaded " position three-dimensional coordinate acquisition methods, system and image recognition apparatus |
CN111127395A (en) * | 2019-11-19 | 2020-05-08 | 中国人民解放军陆军军医大学第一附属医院 | Blood vessel identification method based on SWI image and recurrent neural network |
CN111127395B (en) * | 2019-11-19 | 2023-04-07 | 中国人民解放军陆军军医大学第一附属医院 | Blood vessel identification method based on SWI image and recurrent neural network |
CN112991253A (en) * | 2019-12-02 | 2021-06-18 | 合肥美亚光电技术股份有限公司 | Central area determining method, foreign matter removing device and detecting equipment |
CN112991253B (en) * | 2019-12-02 | 2024-05-31 | 合肥美亚光电技术股份有限公司 | Central area determining method, foreign matter removing device and detecting equipment |
CN112991280A (en) * | 2021-03-03 | 2021-06-18 | 望知科技(深圳)有限公司 | Visual detection method and system and electronic equipment |
CN112991280B (en) * | 2021-03-03 | 2024-05-28 | 望知科技(深圳)有限公司 | Visual detection method, visual detection system and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN107221005B (en) | 2020-05-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107221005A (en) | Object detecting method and device | |
Oskal et al. | A U-net based approach to epidermal tissue segmentation in whole slide histopathological images | |
CN104268498B (en) | QR code recognition method and terminal | |
CN104867225B (en) | Banknote orientation recognition method and device | |
CN109670452A (en) | Face detection method and device, electronic equipment, and face detection model | |
Wahab et al. | Multifaceted fused-CNN based scoring of breast cancer whole-slide histopathology images | |
CN107016409A (en) | Image classification method and system based on salient image regions | |
Jiao et al. | Burn image segmentation based on Mask Regions with Convolutional Neural Network deep learning framework: more accurate and more convenient | |
CN104809452A (en) | Fingerprint identification method | |
CN104812288B (en) | Image processing apparatus and image processing method | |
CN105989330A (en) | Picture detection method and apparatus | |
TW202014984A (en) | Image processing method, electronic device, and storage medium | |
CN104794479B (en) | Chinese text detection method for natural scene images based on local stroke width transform | |
CN102693528B (en) | Noise suppression in low-light images | |
CN103905737A (en) | Backlight detection method and device | |
CN107633237A (en) | Image background segmentation method, device, equipment and medium | |
CN104951765B (en) | Remote Sensing Target dividing method based on shape priors and visual contrast | |
CN112419326B (en) | Image segmentation data processing method, device, equipment and storage medium | |
CN109886330A (en) | Text detection method and device, computer-readable storage medium and computer equipment | |
CN109766818A (en) | Pupil center localization method and system, computer equipment and readable storage medium | |
CN108182695A (en) | Target tracking model training method and device, electronic equipment and storage medium | |
CN109190639A (en) | Vehicle color recognition method, device and system | |
CN112233061A (en) | Deep learning-based method for identifying skin basal cell carcinoma and Bowen's disease | |
CN108647264A (en) | Automatic image annotation method and device based on support vector machines | |
CN108629405A (en) | Method and device for improving the computational efficiency of convolutional neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20201218
Address after: 528311 4 Global Innovation Center, Industrial Road, Beijiao Town, Shunde District, Foshan, Guangdong, China
Patentee after: GUANGDONG MEIDI WHITE HOUSEHOLD ELECTRICAL APPLIANCE TECHNOLOGY INNOVATION CENTER Co.,Ltd.
Patentee after: MIDEA GROUP Co.,Ltd.
Address before: 528311, 26-28, B District, Midea Headquarters Building, 6 Midea Road, Beijiao Town, Shunde District, Foshan, Guangdong, China
Patentee before: MIDEA GROUP Co.,Ltd.