CN107636727A - Target detection method and device

Target detection method and device

Info

Publication number
CN107636727A
CN107636727A (application CN201680016608.0A)
Authority
CN
China
Prior art keywords
image
detected
depth value
candidate region
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201680016608.0A
Other languages
Chinese (zh)
Inventor
南冰
南一冰
廉士国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Robotics Co Ltd
Original Assignee
Cloudminds Shenzhen Robotics Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shenzhen Robotics Systems Co Ltd
Publication of CN107636727A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/50 - Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A target detection method and device relate to the technical field of artificial intelligence and are applied to target detection. To address the problem of limited target detection accuracy, the method comprises the following steps: acquiring an image to be detected and the depth values of a plurality of pixels in the image to be detected (101); and performing target detection on the image to be detected in combination with the depth values of the pixels (102).

Description

Target detection method and device
Technical field
The present application relates to the field of artificial intelligence, and in particular to a target detection method and device.
Background
Existing target detection methods generally comprise three stages: first, candidate regions are selected in a given image; then feature extraction is performed on these candidate regions; finally, the candidate regions are classified with a trained classifier, thereby detecting all possible target objects. In the first stage, when selecting candidate regions (region proposals), mainstream target detection methods usually use information such as texture, edges, and color in the image to find in advance the positions where targets may appear, extracting hundreds to thousands of candidate regions. The above candidate-region-based target detection methods have the following shortcomings:
1. Subsequent feature extraction is performed on hundreds to thousands of candidate regions. Not only is the amount of computation large, but the number of candidate regions generally far exceeds the number of objects in the image, so much of the computation is unnecessary.
2. Some objects are adjacent in the two-dimensional space of the image but have different depth values; because their colors and textures are similar, they are likely to be extracted as a whole within the same candidate region, which in turn affects the accuracy of target detection.
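For orientation, the conventional three-stage pipeline described above can be summarized in the following sketch. It is illustrative only: propose_regions, extract_features, and classifier are hypothetical placeholders and do not correspond to any particular library or to the method of the present application.

    # Illustrative sketch of the conventional candidate-region pipeline described in the
    # background; propose_regions, extract_features and classifier are hypothetical callables.
    def detect_conventional(image, propose_regions, extract_features, classifier):
        """Stage 1: region proposals; stage 2: feature extraction; stage 3: classification."""
        detections = []
        for region in propose_regions(image):          # typically hundreds to thousands of regions
            features = extract_features(image, region)
            label, score = classifier(features)        # trained classifier
            if label != "background":
                detections.append((region, label, score))
        return detections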
Summary of the invention
The embodiments of the present application provide a target detection method and device, mainly to solve the problems of limited detection accuracy and heavy computation in prior-art target detection methods.
To achieve the above objective, the embodiments of the present application adopt the following technical solutions:
In a first aspect, the present application provides a target detection method, comprising: obtaining an image to be detected and the depth values of a plurality of pixels in the image to be detected; and performing target detection on the image to be detected in combination with the depth values of the pixels.
In a second aspect, the present application provides a target detection device, comprising: an acquiring unit, configured to obtain an image to be detected and the depth values of a plurality of pixels in the image to be detected; and a target detection unit, configured to perform target detection on the image to be detected in combination with the depth values of the pixels obtained by the acquiring unit.
In a third aspect, the present application provides a computer storage medium for storing computer software instructions, which include program code designed to execute the target detection method of the first aspect.
In a fourth aspect, the present application provides a computer program product that can be loaded directly into the internal memory of a computer and contains software code; after the computer program is loaded and executed by the computer, the target detection method of the first aspect can be implemented.
In a fifth aspect, the present application provides an electronic device, comprising: a memory, a communication interface, and a processor, wherein the memory is configured to store computer-executable code, the processor is configured to execute the computer-executable code to control execution of the target detection method of the first aspect, and the communication interface is used for data transfer between the electronic device and external devices.
In a sixth aspect, the present application provides a robot comprising the electronic device of the fifth aspect.
In the solution provided by the present application, when target detection is performed, an image to be detected and the depth values of a plurality of pixels in the image to be detected are first obtained, and then target detection is performed on the image to be detected in combination with the depth values of the pixels. Compared with prior-art target detection based on information such as the texture, edges, and color of the image, the present application takes depth values into account when performing target detection; therefore, target objects that are similar in color and texture but different in depth can be distinguished, and the accuracy of target detection can be improved.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is an architecture diagram of a target detection system provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of a target detection method provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of an image to be detected, a depth image, and the initial candidate regions obtained by division using the method shown in Fig. 2, provided by an embodiment of the present application;
Fig. 4 is a schematic flowchart of another target detection method provided by an embodiment of the present application;
Fig. 5 is a schematic diagram of the target candidate regions obtained by splitting the initial candidate regions shown in Fig. 3 using the method shown in Fig. 4, provided by an embodiment of the present application;
Fig. 6 is a schematic flowchart of another target detection method provided by an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a target detection apparatus provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of another target detection apparatus provided by an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed description
The system architecture and business scenarios described in the embodiments of the present application are intended to explain the technical solutions of the embodiments of the present application more clearly and do not constitute a limitation on those technical solutions. A person of ordinary skill in the art will understand that, as system architectures evolve and new business scenarios emerge, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, an illustration, or an explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present application should not be interpreted as being preferable to or more advantageous than other embodiments or designs. Rather, the words "exemplary" or "for example" are intended to present a related concept in a concrete manner.
It should be noted that, in the embodiments of the present application, the terms "of", "corresponding (relevant)", and "corresponding" may sometimes be used interchangeably; when the difference between them is not emphasized, their meanings are consistent.
An embodiment of the present application provides a target detection system. As shown in Fig. 1, the system includes an image capture device 11 and a target detection apparatus 12. The image capture device 11 is used to capture images of the region to be detected and send the captured images to the target detection apparatus 12. Exemplarily, the image capture device 11 may be one or more cameras that obtain two-dimensional image information, a binocular camera that can obtain three-dimensional information, or the like. The target detection apparatus 12 is used to analyze and process the images received from the image capture device 11 so as to perform target detection using the received images; the target detection apparatus 12 may be a device with processing capability, such as a server. For the specific implementations of the image capture device 11 and the target detection apparatus 12, reference may be made to the prior art, and details are not repeated here.
An embodiment of the present application provides a target detection method, which can be applied to the system shown in Fig. 1. When applied to the system shown in Fig. 1, the execution body of the method may be the target detection apparatus 12 shown in Fig. 1. The following description takes the target detection apparatus 12 as the execution body. As shown in Fig. 2, the method includes:
Step 101: obtain an image to be detected and the depth values of a plurality of pixels in the image to be detected.
The image to be detected is an image obtained by shooting the region to be detected; it may be the directly obtained image, or an image obtained after the directly obtained color image is subjected to processing such as grayscale conversion or denoising.
Exemplarily, an image obtained by shooting the region to be detected with a device such as an ordinary camera or a mobile phone with a camera function can be used as the image to be detected referred to in the present application.
The depth values can be obtained by obtaining a depth image corresponding to the image to be detected and determining the depth value of each pixel of the image to be detected from the depth image.
A depth image, also called a range image, is an image in which the distance (or depth) from an image collector, such as a binocular camera, to each point in the region to be detected is used as the pixel value. It can directly reflect the geometry of the visible surface of an object, that is, the contour of each object can be determined directly from it. In a depth image, each pixel represents the distance from the object at a specific (x, y) coordinate in the field of view of the image collector to the camera plane. Therefore, each pixel in the depth image corresponds to a depth value, which represents the depth of the corresponding object in the region to be detected. Common methods for obtaining a depth image include lidar depth imaging, computer stereo vision imaging, coordinate measuring machine methods, Moire fringe methods, structured light methods, and the like. For the specific implementation of depth images, reference may be made to the prior art, and details are not repeated here.
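As a concrete illustration of step 101, the following sketch loads an image to be detected together with its per-pixel depth values. It assumes the depth image is already registered (pixel-aligned) with the color image and stored as a 16-bit image whose values are in millimetres; the file paths and these storage conventions are assumptions, not part of the present application.

    # Minimal sketch of step 101, assuming an aligned 16-bit depth image in millimetres.
    import cv2
    import numpy as np

    def load_image_and_depth(color_path, depth_path):
        """Return the image to be detected and a per-pixel depth map in metres."""
        image = cv2.imread(color_path, cv2.IMREAD_COLOR)          # image to be detected
        depth_raw = cv2.imread(depth_path, cv2.IMREAD_UNCHANGED)  # depth image (uint16, mm)
        if image is None or depth_raw is None:
            raise FileNotFoundError("color or depth image could not be read")
        depth_m = depth_raw.astype(np.float32) / 1000.0           # depth value of every pixel
        return image, depth_m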
Step 102: perform target detection on the image to be detected in combination with the depth values of the pixels.
In one implementation of step 102, pixels of the image to be detected whose depth values fall within the same range may be divided into the same candidate region.
Exemplarily, as shown in Fig. 3, the present application shows an image to be detected, a depth image, and candidate region 1 and candidate region 2 obtained by the division after step 102 is performed.
After the candidate regions are determined according to the depth values of the pixels, feature extraction can be performed directly on the candidate regions, and the candidate regions are finally classified with a trained classifier, thereby realizing target detection. For the specific implementation of this step, reference may be made to the prior art, and details are not repeated here.
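The following sketch illustrates this first implementation: pixels whose depth values fall in the same range are grouped, and each spatially connected group yields one candidate region described by its bounding box. The fixed 0.5 m bin width and the minimum-area filter are illustrative assumptions rather than parameters specified by the present application.

    # Minimal sketch of dividing pixels into candidate regions by depth range.
    import numpy as np
    from scipy import ndimage

    def depth_candidate_regions(depth_m, bin_width=0.5, min_area=100):
        """Return (x, y, w, h) candidate regions from fixed-width depth bins."""
        valid = depth_m > 0                                    # ignore missing depth readings
        bins = np.floor(depth_m / bin_width).astype(np.int32)  # same range -> same bin id
        regions = []
        for b in np.unique(bins[valid]):
            labeled, _ = ndimage.label((bins == b) & valid)    # spatially connected components
            for sl in ndimage.find_objects(labeled):
                h = sl[0].stop - sl[0].start
                w = sl[1].stop - sl[1].start
                if h * w >= min_area:                          # drop tiny fragments
                    regions.append((sl[1].start, sl[0].start, w, h))
        return regions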
In another implementation of step 102, the candidate regions determined according to the depth values may be used as initial candidate regions, and the initial candidate regions are further split to obtain target candidate regions. As shown in Fig. 4, step 102 may be specifically implemented as:
Step 201: divide pixels whose depth values fall within the same range into the same initial candidate region.
Step 202: split the initial candidate regions into target candidate regions according to image features.
Step 203: determine the target in the target candidate regions.
The image features include any one or more of the following: a color feature, a texture feature, a structural feature, a face feature, or a contour feature. For the specific implementation of splitting the initial candidate regions into target candidate regions according to image features, reference may be made to the prior-art process of performing region proposal on a region.
For example, when the image feature is a contour feature, step 202 may be specifically implemented as: detecting the contours in each initial candidate region; when at least one initial candidate region contains at least two mutually independent closed contours, splitting each such initial candidate region to obtain at least two target candidate regions, so that each target candidate region contains at most one closed contour.
With reference to the multiple initial candidate regions obtained in Fig. 3, after each initial candidate region is split according to the contour feature, initial candidate region 1 can be split into two target candidate regions, yielding the target candidate regions shown in Fig. 5.
As another example, when the image feature is a color feature, step 202 may be specifically implemented as: detecting the color features in each initial candidate region; when at least one initial candidate region contains at least two colors, splitting each such initial candidate region into at least two target candidate regions, so that each target candidate region contains one color feature.
It should be noted that the above description only takes the contour feature and the color feature as examples; in practical applications, multiple features such as color and texture may be combined when dividing the target candidate regions.
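The sketch below illustrates step 202 for the contour-feature case: if an initial candidate region contains two or more mutually independent closed contours, it is split so that each target candidate region holds at most one contour. The Canny thresholds and the minimum contour area are illustrative assumptions.

    # Minimal sketch of splitting an initial candidate region by independent contours.
    import cv2
    import numpy as np

    def split_by_contours(image, region, min_contour_area=25):
        """region is (x, y, w, h); returns a list of target candidate regions."""
        x, y, w, h = region
        patch = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(patch, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        contours = [c for c in contours if cv2.contourArea(c) > min_contour_area]  # drop noise
        if len(contours) < 2:
            return [region]                            # at most one closed contour: keep as is
        targets = []
        for c in contours:                             # one target candidate region per contour
            cx, cy, cw, ch = cv2.boundingRect(c)
            targets.append((x + cx, y + cy, cw, ch))
        return targets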
In another implementation of step 102, as shown in Fig. 6, the method includes:
Step 301: divide the image to be detected into initial candidate regions according to image features.
The image features include any one or more of the following: a color feature, a texture feature, a structural feature, a face feature, or a contour feature. For the specific implementation of dividing the image to be detected into initial candidate regions according to image features, reference may be made to the prior-art process of performing region proposal on a region.
Step 302: remove the initial candidate regions in which the depth values of the pixels are not within the same range.
Step 303: determine the target in the remaining initial candidate regions.
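The following sketch illustrates steps 302 and 303 of this implementation: initial candidate regions produced by any image-feature-based proposal method are kept only if their pixel depth values stay within one range, and the remaining regions are then passed on for classification. The 0.5 m spread threshold is an assumption used for illustration.

    # Minimal sketch of removing proposals whose depth values are not within one range.
    import numpy as np

    def filter_by_depth_consistency(proposals, depth_m, max_spread=0.5):
        """Keep (x, y, w, h) proposals whose valid depth values lie in the same range."""
        kept = []
        for x, y, w, h in proposals:
            patch = depth_m[y:y + h, x:x + w]
            valid = patch[patch > 0]                   # ignore missing depth readings
            if valid.size and (valid.max() - valid.min()) <= max_spread:
                kept.append((x, y, w, h))              # depth is consistent: keep the region
        return kept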
In the solution provided by the present application, when target detection is performed, an image to be detected and the depth values of a plurality of pixels in the image to be detected are first obtained, and then target detection is performed on the image to be detected in combination with the depth values of the pixels. Compared with prior-art target detection based on information such as the texture, edges, and color of the image, the present application takes depth values into account when performing target detection; therefore, target objects that are similar in color and texture but different in depth can be distinguished, and the accuracy of target detection can be improved.
In addition, the amount of depth information contained in a region to be detected is in most cases smaller than the amount of color information it contains. For example, an object may contain multiple colors, but the whole object may correspond to only one depth value; therefore, the amount of computation required to divide candidate regions according to depth values is also greatly reduced.
Optionally, after the target is identified, the target detection apparatus 12 may also output information such as the class and contour of the target.
The method provided by the embodiments of the present application can be used in scenarios that require target recognition. For example, when applied to a mobile robot, the mobile robot can use the method of the present application to automatically identify objects in its environment for decision-making; it can also be applied to the process of assisting a user in finding a specific target object. Any process that requires detection or recognition of objects can apply the method provided by the present application.
A person skilled in the art should readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments of the present application, the target detection apparatus and the like may be divided into functional modules according to the above method examples. For example, each function may be assigned to a corresponding functional module, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division into modules in the embodiments of the present application is schematic and is only a logical functional division; there may be other division manners in actual implementations.
In the case where each functional module is divided according to each function, Fig. 7 shows a possible schematic structural diagram of the target detection apparatus involved in the above embodiments. The target detection apparatus includes: an acquiring unit 401 and a target detection unit 402. The acquiring unit 401 is configured to support the target detection apparatus in performing process 101 in Fig. 2; the target detection unit 402 is configured to support the target detection apparatus in performing process 102 in Fig. 2, processes 201, 202, and 203 in Fig. 4, and processes 301, 302, and 303 in Fig. 6. All relevant content of the steps involved in the above method embodiments may be cited in the functional descriptions of the corresponding functional modules and is not repeated here.
In the case where an integrated unit is used, Fig. 8 shows a possible schematic structural diagram of the target detection apparatus involved in the above embodiments. The target detection apparatus includes: a processing module 501 and a communication module 502. The processing module 501 is configured to control and manage the actions of the target detection apparatus; for example, the processing module 501 is configured to support the target detection apparatus in performing processes 101 and 102 in Fig. 2, processes 201, 202, and 203 in Fig. 4, and processes 301, 302, and 303 in Fig. 6, and/or other processes of the techniques described herein. The communication module 502 is configured to support communication between the target detection apparatus and other network entities, for example, communication with the functional modules or network entities shown in Fig. 1. The target detection apparatus may further include a storage module 503 for storing the program code and data of the target detection apparatus.
The processing module 501 may be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It can implement or execute the various exemplary logical blocks, modules, and circuits described in connection with the present disclosure. The processor may also be a combination that realizes computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 502 may be a transceiver, a transceiver circuit, a communication interface, or the like. The storage module 503 may be a memory.
When the processing module 501 is a processor, the communication module 502 is a communication interface, and the storage module 503 is a memory, the target detection apparatus involved in the embodiments of the present application may be the electronic device shown in Fig. 9.
As shown in Fig. 9, the electronic device includes: a processor 601, a communication interface 602, a memory 603, and a bus 604. The communication interface 602, the processor 601, and the memory 603 are connected to each other through the bus 604. The bus 604 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in Fig. 9, but this does not mean that there is only one bus or only one type of bus.
The steps of the methods or algorithms described in connection with the present disclosure may be implemented in hardware or by a processor executing software instructions. The software instructions may consist of corresponding software modules, and the software modules may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or a storage medium of any other form well known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may be located in an ASIC.
A person skilled in the art should be aware that, in the one or more examples described above, the functions described herein may be implemented by hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates the transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
The above embodiments further describe the objectives, technical solutions, and beneficial effects of the present application in detail. It should be understood that the foregoing are only embodiments of the present application and are not intended to limit the protection scope of the present application; any modification, equivalent replacement, or improvement made on the basis of the technical solutions of the present application shall fall within the protection scope of the present application.

Claims (14)

  1. A target detection method, characterized by comprising:
    obtaining an image to be detected and the depth values of a plurality of pixels in the image to be detected;
    performing target detection on the image to be detected in combination with the depth values of the pixels.
  2. The method according to claim 1, characterized in that obtaining the depth values of a plurality of pixels in the image to be detected comprises:
    obtaining a depth image corresponding to the image to be detected;
    determining the depth value of each pixel of the image to be detected from the depth image.
  3. The method according to claim 1, characterized in that performing target detection on the image to be detected in combination with the depth values of the pixels comprises:
    dividing pixels whose depth values fall within the same range into the same initial candidate region;
    splitting the initial candidate regions into target candidate regions according to image features;
    determining the target in the target candidate regions.
  4. The method according to claim 1, characterized in that performing target detection on the image to be detected in combination with the depth values of the pixels comprises:
    dividing the image to be detected into initial candidate regions according to image features;
    removing the initial candidate regions in which the depth values of the pixels are not within the same range;
    determining the target in the remaining initial candidate regions.
  5. The method according to claim 3 or 4, characterized in that the image features comprise any one or more of the following: a color feature, a texture feature, a structural feature, a face feature, or a contour feature.
  6. A target detection device, characterized by comprising:
    an acquiring unit, configured to obtain an image to be detected and the depth values of a plurality of pixels in the image to be detected;
    a target detection unit, configured to perform target detection on the image to be detected in combination with the depth values of the pixels obtained by the acquiring unit.
  7. The device according to claim 6, characterized in that the acquiring unit is configured to obtain a depth image corresponding to the image to be detected and determine the depth value of each pixel of the image to be detected from the depth image.
  8. The device according to claim 6, characterized in that
    the target detection unit is further configured to divide pixels whose depth values fall within the same range into the same initial candidate region, split the initial candidate regions into target candidate regions according to image features, and determine the target in the target candidate regions.
  9. The device according to claim 6, characterized in that
    the target detection unit is further configured to divide the image to be detected into initial candidate regions according to image features, remove the initial candidate regions in which the depth values of the pixels are not within the same range, and determine the target in the remaining initial candidate regions.
  10. The device according to claim 8 or 9, characterized in that the image features comprise any one or more of the following: a color feature, a texture feature, a structural feature, a face feature, or a contour feature.
  11. An electronic device, characterized by comprising: a memory, a communication interface, and a processor, wherein the memory is configured to store computer-executable code, the processor is configured to execute the computer-executable code to control execution of the target detection method according to any one of claims 1-5, and the communication interface is used for data transfer between the electronic device and external devices.
  12. A robot, characterized by comprising the electronic device according to claim 11.
  13. A computer storage medium, characterized in that it is used to store computer software instructions, which include program code designed to execute the target detection method according to any one of claims 1-5.
  14. A computer program product, characterized in that it can be loaded directly into the internal memory of a computer and contains software code; after the computer program is loaded and executed by the computer, the target detection method according to any one of claims 1-5 can be implemented.
CN201680016608.0A 2016-12-30 2016-12-30 Target detection method and device Pending CN107636727A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/113548 WO2018120038A1 (en) 2016-12-30 2016-12-30 Method and device for target detection

Publications (1)

Publication Number Publication Date
CN107636727A true CN107636727A (en) 2018-01-26

Family

ID=61113496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680016608.0A Pending CN107636727A (en) 2016-12-30 2016-12-30 Target detection method and device

Country Status (2)

Country Link
CN (1) CN107636727A (en)
WO (1) WO2018120038A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110853127A (en) * 2018-08-20 2020-02-28 浙江宇视科技有限公司 Image processing method, device and equipment
CN111324139A (en) * 2018-12-13 2020-06-23 顺丰科技有限公司 Unmanned aerial vehicle landing method, device, equipment and storage medium
CN111353115B (en) * 2018-12-24 2023-10-27 中移(杭州)信息技术有限公司 Method and device for generating snowplow map
CN110276742B (en) * 2019-05-07 2023-10-10 平安科技(深圳)有限公司 Train tail lamp monitoring method, device, terminal and storage medium
CN110264460A (en) * 2019-06-24 2019-09-20 科大讯飞股份有限公司 A kind of discrimination method of object detection results, device, equipment and storage medium
CN112446918A (en) * 2019-09-04 2021-03-05 三赢科技(深圳)有限公司 Method and device for positioning target object in image, computer device and storage medium
CN111199198B (en) * 2019-12-27 2023-08-04 深圳市优必选科技股份有限公司 Image target positioning method, image target positioning device and mobile robot
CN111223111B (en) * 2020-01-03 2023-04-25 歌尔光学科技有限公司 Depth image contour generation method, device, equipment and storage medium
CN111507958B (en) * 2020-04-15 2023-05-26 全球能源互联网研究院有限公司 Target detection method, training method of detection model and electronic equipment
CN113538449A (en) * 2020-04-20 2021-10-22 顺丰科技有限公司 Image correction method, device, server and storage medium
CN111783584B (en) * 2020-06-22 2023-08-08 杭州飞步科技有限公司 Image target detection method, device, electronic equipment and readable storage medium
CN111898641A (en) * 2020-07-01 2020-11-06 中国建设银行股份有限公司 Target model detection device, electronic equipment and computer readable storage medium
CN112258482A (en) * 2020-10-23 2021-01-22 广东博智林机器人有限公司 Building exterior wall mortar flow drop detection method and device
CN113420735B (en) * 2021-08-23 2021-12-21 深圳市信润富联数字科技有限公司 Contour extraction method, device, equipment and storage medium
CN114004788A (en) * 2021-09-23 2022-02-01 中大(海南)智能科技有限公司 Defect detection method, device, equipment and storage medium
CN115019157B (en) * 2022-07-06 2024-03-22 武汉市聚芯微电子有限责任公司 Object detection method, device, equipment and computer readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7003134B1 (en) * 1999-03-08 2006-02-21 Vulcan Patents Llc Three dimensional object pose estimation which employs dense depth information
CN100579174C (en) * 2007-02-02 2010-01-06 华为技术有限公司 Motion detection method and device
CN102402687B (en) * 2010-09-13 2016-06-15 三星电子株式会社 Rigid body part direction detection method and device based on depth information
CN102855459B (en) * 2011-06-30 2015-11-25 株式会社理光 For the method and system of the detection validation of particular prospect object

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101657825A (en) * 2006-05-11 2010-02-24 普莱姆传感有限公司 Modeling of humanoid forms from depth maps
CN102243759A (en) * 2010-05-10 2011-11-16 东北大学 Three-dimensional lung vessel image segmentation method based on geometric deformation model
CN101840577A (en) * 2010-06-11 2010-09-22 西安电子科技大学 Image automatic segmentation method based on graph cut
CN103093473A (en) * 2013-01-25 2013-05-08 北京理工大学 Multi-target picture segmentation based on level set
CN104217225A (en) * 2014-09-02 2014-12-17 中国科学院自动化研究所 A visual target detection and labeling method
CN105354838A (en) * 2015-10-20 2016-02-24 努比亚技术有限公司 Method and terminal for acquiring depth information of weak texture region in image
CN105872477A (en) * 2016-05-27 2016-08-17 北京旷视科技有限公司 Video monitoring method and system
CN106250812A (en) * 2016-07-15 2016-12-21 汤平 A kind of model recognizing method based on quick R CNN deep neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
吴鑫: "融合Kinect深度和颜色信息的机器人视觉系统研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *
李海坤: "基于彩色和深度的前景分割研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179332A (en) * 2018-11-09 2020-05-19 北京市商汤科技开发有限公司 Image processing method and device, electronic device and storage medium
CN111179332B (en) * 2018-11-09 2023-12-19 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN111210471A (en) * 2018-11-22 2020-05-29 北京欣奕华科技有限公司 Positioning method, device and system
CN111210471B (en) * 2018-11-22 2023-08-25 浙江欣奕华智能科技有限公司 Positioning method, device and system
CN111383238A (en) * 2018-12-28 2020-07-07 Tcl集团股份有限公司 Target detection method, target detection device and intelligent terminal
CN111950543A (en) * 2019-05-14 2020-11-17 北京京东尚科信息技术有限公司 Target detection method and device
CN111950543B (en) * 2019-05-14 2024-08-16 北京京东乾石科技有限公司 Target detection method and device
CN110348333A (en) * 2019-06-21 2019-10-18 深圳前海达闼云端智能科技有限公司 Object detecting method, device, storage medium and electronic equipment
CN110502978A (en) * 2019-07-11 2019-11-26 哈尔滨工业大学 A kind of laser radar waveform Modulation recognition method based on BP neural network model
CN111366916A (en) * 2020-02-17 2020-07-03 北京睿思奥图智能科技有限公司 Method and device for determining distance between interaction target and robot and electronic equipment
CN111366916B (en) * 2020-02-17 2021-04-06 山东睿思奥图智能科技有限公司 Method and device for determining distance between interaction target and robot and electronic equipment
WO2023109069A1 (en) * 2021-12-13 2023-06-22 深圳前海微众银行股份有限公司 Image retrieval method and apparatus

Also Published As

Publication number Publication date
WO2018120038A1 (en) 2018-07-05

Similar Documents

Publication Publication Date Title
CN107636727A (en) Target detection method and device
CN104778721B (en) The distance measurement method of conspicuousness target in a kind of binocular image
KR101854554B1 (en) Method, device and storage medium for calculating building height
EP2811423B1 (en) Method and apparatus for detecting target
CN108381549B (en) Binocular vision guide robot rapid grabbing method and device and storage medium
US20180018528A1 (en) Detecting method and device of obstacles based on disparity map and automobile driving assistance system
JP6955783B2 (en) Information processing methods, equipment, cloud processing devices and computer program products
CN106446862A (en) Face detection method and system
CN110458772B (en) Point cloud filtering method and device based on image processing and storage medium
CN108475433A (en) Method and system for determining RGBD camera postures on a large scale
CN105809651A (en) Image saliency detection method based on edge non-similarity comparison
CN110751620B (en) Method for estimating volume and weight, electronic device, and computer-readable storage medium
CN107203742B (en) Gesture recognition method and device based on significant feature point extraction
CN104915642B (en) Front vehicles distance measuring method and device
WO2020258297A1 (en) Image semantic segmentation method, movable platform, and storage medium
CN111340834B (en) Lining plate assembly system and method based on laser radar and binocular camera data fusion
CN111837158A (en) Image processing method and device, shooting device and movable platform
CN111382658B (en) Road traffic sign detection method in natural environment based on image gray gradient consistency
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN115439607A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN109784171A (en) Car damage identification method for screening images, device, readable storage medium storing program for executing and server
CN108596032B (en) Detection method, device, equipment and medium for fighting behavior in video
CN114898321B (en) Road drivable area detection method, device, equipment, medium and system
CN116342519A (en) Image processing method based on machine learning
CN116643291A (en) SLAM method for removing dynamic targets by combining vision and laser radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210201
Address after: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai
Applicant after: Dalu Robot Co.,Ltd.
Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)
Applicant before: Shenzhen Qianhaida Yunyun Intelligent Technology Co.,Ltd.

CB02 Change of applicant information

Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai
Applicant after: Dayu robot Co.,Ltd.
Address before: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai
Applicant before: Dalu Robot Co.,Ltd.