WO2022116676A1 - Object detection apparatus and method, storage medium, and electronic device - Google Patents

Object detection apparatus and method, storage medium, and electronic device

Info

Publication number: WO2022116676A1
Authority: WO - WIPO (PCT)
Prior art keywords: detection area, area, pixel point, pixel, lens
Application number: PCT/CN2021/122452
Other languages: English (en), Chinese (zh)
Inventors: 骆磊, 黄晓庆
Original assignee: 达闼机器人股份有限公司
Application filed by 达闼机器人股份有限公司
Publication of WO2022116676A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50: Constructional details
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]

Definitions

  • the present disclosure relates to the technical field of object detection, and in particular, to an object detection apparatus, method, storage medium and electronic device.
  • the robot is usually required to have functions such as navigation, obstacle avoidance, and grasping (for example, an intelligent sweeping robot), which requires the robot to be able to detect objects in the surrounding environment.
  • ultrasonic, infrared, lidar and other sensing devices are mainly used as the navigation and obstacle avoidance device of the robot, and object detection is realized through this device.
  • using these sensing devices as navigation and obstacle avoidance devices has various drawbacks.
  • ultrasonic and infrared sensing devices are relatively cheap, but they can only detect objects at close range, and in complicated situations the object detection failure rate is relatively high.
  • although lidar can detect objects with high accuracy over a long detection distance, it is expensive, its active scanning consumes considerable power, and its volume and weight are relatively large, so it is not suitable for most robots.
  • the present disclosure provides an object detection apparatus, method, storage medium and electronic device.
  • an object detection device includes a lens, an image sensor and a processor; the lens includes a plurality of microlenses with the same area and shape, or the lens includes multiple types of microlenses, where different types of microlenses differ in at least one of area and shape;
  • the image sensor is configured to acquire first pixel information corresponding to the current detection area in at least one preset detection area, and to send the first pixel information to the processor; each preset detection area is an area composed of target pixels on the image sensor, the target pixels are the pixels included in the projection areas of a plurality of the microlenses on the image sensor, and the first pixel information is the pixel information collected by the target pixels corresponding to the current detection area;
  • the processor is configured to determine the first object feature of the object to be detected according to the first pixel information.
  • the target pixel point includes a first pixel point, and/or a second pixel point
  • the first pixel point is the pixel point included in the projection area of the first microlens on the image sensor
  • the second pixel point is the pixel point included in the projection area of the second microlens on the image sensor
  • the processor is further configured to, when the first object feature satisfies the area switching condition corresponding to the current detection area, determine a target detection area other than the current detection area from the at least one preset detection area;
  • the processor is further configured to determine a second object feature of the object to be detected according to second pixel information corresponding to the target detection area, where the second pixel information is the target corresponding to the target detection area The pixel point information collected by the pixel point.
  • the preset detection area includes a large lens detection area, a small lens detection area and a full-lens detection area
  • the target pixel corresponding to the large lens detection area includes the first pixel
  • the target pixel point corresponding to the small lens detection area includes the second pixel point
  • the full-lens detection area includes the large lens detection area and the small lens detection area.
  • the shapes of the first microlenses and the second microlenses are polygons.
  • the target pixel point includes a third pixel point
  • the third pixel point is a pixel point included in the projection area of the plurality of microlenses on the image sensor, and the first pixel information is the pixel point information collected by the third pixel point.
  • an object detection method is applied to an object detection device; the object detection device includes a lens, an image sensor and a processor; the lens includes a plurality of microlenses with the same area and shape, or the lens includes multiple types of microlenses, where different types of microlenses differ in at least one of area and shape; the method includes:
  • acquiring, through the image sensor, first pixel information corresponding to the current detection area in at least one preset detection area, and sending the first pixel information to the processor; each preset detection area is an area composed of target pixels on the image sensor, the target pixels are the pixels included in the projection areas of a plurality of the microlenses on the image sensor, and the first pixel information is the pixel information collected by the target pixels corresponding to the current detection area;
  • the processor determines the first object feature of the object to be detected according to the first pixel information.
  • the target pixel point includes a first pixel point, and/or a second pixel point
  • the first pixel point is the pixel point included in the projection area of the first microlens on the image sensor
  • the second pixel point is the pixel point included in the projection area of the second microlens on the image sensor
  • the method further includes:
  • the preset detection area includes a large lens detection area, a small lens detection area and a full-lens detection area
  • the target pixel corresponding to the large lens detection area includes the first pixel
  • the target pixel point corresponding to the small lens detection area includes the second pixel point
  • the full-lens detection area includes the large lens detection area and the small lens detection area.
  • the shapes of the first microlenses and the second microlenses are polygons.
  • the target pixel point includes a third pixel point
  • the third pixel point is a pixel point included in the projection area of the plurality of microlenses on the image sensor, and the first pixel information is the pixel point information collected by the third pixel point.
  • a computer-readable storage medium has a computer program stored thereon; when the program is executed by a processor, the steps of the object detection method provided in the second aspect are implemented.
  • an electronic device comprising:
  • a memory configured to store a computer program; and
  • a processor configured to execute the computer program in the memory to implement the steps of the object detection method provided by the second aspect.
  • the object detection device in the present disclosure includes a lens, an image sensor and a processor; the lens includes a plurality of microlenses with the same area and shape, or the lens includes multiple types of microlenses, where different types of microlenses differ in at least one of area and shape.
  • the image sensor is configured to acquire first pixel information corresponding to the current detection area in at least one preset detection area, and to send the first pixel information to the processor; each preset detection area is an area composed of target pixels on the image sensor, the target pixels are the pixels included in the projection areas of a plurality of microlenses on the image sensor, and the first pixel information is the pixel information collected by the target pixels corresponding to the current detection area.
  • the processor is configured to determine the first object feature of the object to be detected according to the first pixel information.
  • the present disclosure determines the first object feature of the object to be detected by using the first pixel information corresponding to the current detection area, and can realize accurate detection of the object to be detected with low power consumption; in addition, the structure of the object detection device is simple, which reduces the cost, weight and volume of the device.
  • FIG. 1 is a block diagram of an object detection device according to an exemplary embodiment
  • FIG. 2 is a schematic diagram illustrating the distribution of a first microlens and a second microlens according to an exemplary embodiment
  • FIG. 3 is a schematic diagram of the distribution of a microlens according to an exemplary embodiment
  • FIG. 4 is a schematic diagram of the distribution of a microlens according to an exemplary embodiment
  • FIG. 5 is a flowchart of an object detection method according to an exemplary embodiment
  • FIG. 6 is a flowchart illustrating another object detection method according to an exemplary embodiment
  • FIG. 7 is a block diagram of an electronic device according to an exemplary embodiment.
  • the application scenario is to use the object detection device to detect objects to be detected in the surrounding environment.
  • the object detection device may be provided on the terminal device, and the object detection device may include a lens, an image sensor and a processor.
  • the image sensor is located on the side corresponding to the image-side surface of the lens, and has an imaging surface facing the image-side surface.
  • the imaging surface is composed of multiple pixels.
  • the lens and the image sensor can be completely fitted together, or they can be separated by a certain distance.
  • the lens can be a flat lens or a curved lens
  • the image sensor can be a CMOS (complementary metal-oxide-semiconductor) sensor, a CCD (charge-coupled device) element, or any other photosensitive sensor, which is not specifically limited in the present disclosure.
  • the terminal device may be, for example, a smart robot, a smart phone, a tablet computer, a smart watch, or a smart bracelet.
  • FIG. 1 is a block diagram of an object detection apparatus according to an exemplary embodiment.
  • the device 100 includes a lens 101, an image sensor 102 and a processor 103.
  • the lens 101 includes a plurality of microlenses with the same area and shape, or the lens includes multiple types of microlenses, where different types of microlenses differ in at least one of area and shape.
  • the image sensor 102 is configured to acquire first pixel information corresponding to the current detection area in at least one preset detection area, and send the first pixel information to the processor 103 .
  • each preset detection area is an area composed of target pixels on the image sensor
  • the target pixels are pixels included in the projection area of a plurality of microlenses on the image sensor
  • the first pixel information is the pixel information collected by the target pixels corresponding to the current detection area
  • an object detection device 100 can be constructed through the lens 101, the image sensor 102 and the processor 103 to detect objects.
  • the lens 101 can be divided into a plurality of microlenses with the same area and shape arranged in a grid, or the lens 101 can be divided into multiple types of microlenses, each type including a plurality of microlenses.
  • different types of microlenses differ in at least one of area and shape; that is, they may have the same area but different shapes, different areas but the same shape, or both different areas and different shapes.
  • the lens 101 may be divided, for example, by forming the plurality of microlenses (or the multiple types of microlenses) on the lens by photolithography (or another technique), or by forming them on the lens through a layer of nanofilm.
  • each microlens corresponds to one or more pixels on the image sensor 102, and the pixels corresponding to each microlens are the pixels included in the projection area of that microlens on the image sensor 102.
  • each type of microlens corresponds to one type of pixel on the image sensor 102, and the pixels corresponding to each type of microlens are the pixels included in the projection area of that type of microlens on the image sensor 102.
  • the imaging surface of the entire image sensor can be used as a preset detection area, that is, the target pixel corresponding to the preset detection area only includes one type of pixel.
  • alternatively, the imaging surface can be divided into a plurality of preset detection areas according to the types of pixel points, each preset detection area being an area composed of target pixels; the target pixels corresponding to different preset detection areas are different, and the numbers of target pixels are different.
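The mapping from microlens projection areas to detection areas can be sketched in Python. This is an illustration only, not part of the patent: a microlens's projection area is modelled as a set of (row, col) pixel coordinates, and a preset detection area is the union of such sets; all sizes are hypothetical example values.

```python
# Illustrative sketch: model each microlens's projection area on the
# image sensor as a set of (row, col) pixel coordinates, and build
# preset detection areas from those sets. Sizes are example values.

def microlens_pixels(origin_row, origin_col, height, width):
    """Pixel coordinates covered by one microlens's projection area."""
    return {(r, c)
            for r in range(origin_row, origin_row + height)
            for c in range(origin_col, origin_col + width)}

# One 6*6 first (large) microlens and one 3*3 second (small) microlens,
# echoing the FIG. 2 example where the smallest square is one pixel.
first_pixel_points = microlens_pixels(0, 0, 6, 6)
second_pixel_points = microlens_pixels(0, 6, 3, 3)

# Preset detection areas are areas composed of target pixels.
large_lens_area = first_pixel_points
small_lens_area = second_pixel_points
full_lens_area = first_pixel_points | second_pixel_points
```

Here the full-lens detection area is simply the union of the large and small lens detection areas, matching the statement that it includes both.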
  • the processor 103 selects a preset detection area from the at least one preset detection area as the current detection area, and the image sensor 102 acquires the first pixel information collected by the target pixels corresponding to the current detection area and sends the first pixel information to the processor 103.
  • the first pixel information may include the pixel value of each target pixel point corresponding to the current detection area.
  • the processor 103 is configured to determine the first object feature of the object to be detected according to the first pixel information.
  • the processor 103 may calculate the first object feature of the object to be detected according to the first pixel information.
  • the first object feature may include a first distance between the object to be detected and the object detection device, and a first size and a first moving speed of the object to be detected.
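A minimal sketch of the feature triple named above. The patent lists a distance, a size and a moving speed but does not specify how they are computed, so this container and its field names are assumptions made here for illustration:

```python
# Hypothetical container for the "first object feature": the text names
# a distance, a size and a moving speed, but not how they are derived
# from the pixel information, so this is only a data-shape sketch.
from dataclasses import dataclass

@dataclass
class ObjectFeature:
    distance: float      # first distance between object and device
    size: float          # first size of the object to be detected
    moving_speed: float  # first moving speed of the object to be detected

feature = ObjectFeature(distance=1.5, size=0.3, moving_speed=0.2)
```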
  • the object detection device in the present disclosure includes a lens, an image sensor and a processor; the lens includes a plurality of microlenses with the same area and shape, or the lens includes multiple types of microlenses, where different types of microlenses differ in at least one of area and shape.
  • the image sensor is configured to acquire first pixel information corresponding to the current detection area in at least one preset detection area, and to send the first pixel information to the processor; each preset detection area is an area composed of target pixels on the image sensor, the target pixels are the pixels included in the projection areas of a plurality of microlenses on the image sensor, and the first pixel information is the pixel information collected by the target pixels corresponding to the current detection area.
  • the processor is configured to determine the first object feature of the object to be detected according to the first pixel information.
  • the present disclosure determines the first object feature of the object to be detected by using the first pixel information corresponding to the current detection area, and can realize accurate detection of the object to be detected with low power consumption; the simple structure of the object detection device also reduces its cost, weight and volume.
  • the target pixel point includes the first pixel point and/or the second pixel point
  • the first pixel point is the pixel point included in the projection area of the first microlens on the image sensor
  • the second pixel point is the pixel point included in the projection area of the second microlens on the image sensor.
  • the processor 103 is further configured to determine a target detection area other than the current detection area from the at least one preset detection area when the first object feature satisfies the area switching condition corresponding to the current detection area.
  • the imaging surface of the image sensor 102 may be divided into three preset detection areas.
  • the preset detection area may include a large-lens detection area, a small-lens detection area and a full-lens detection area
  • the target pixels corresponding to the large-lens detection area include the first pixel
  • the target pixels corresponding to the small-lens detection area include the second pixel
  • the full-lens detection area includes a large-lens detection area and a small-lens detection area, that is, the target pixel point corresponding to the full-lens detection area includes both a first pixel point and a second pixel point.
  • the shapes of the first microlenses and the second microlenses may be polygons.
  • the lens 101 can be divided into a plurality of first microlenses and a plurality of second microlenses as shown in FIG. 2, where the smallest square corresponds to one pixel.
  • the 6*6 square enclosed by the dotted line in FIG. 2 is the first microlens
  • the 3*3 square enclosed by the dotted line in FIG. 2 is the second microlens.
  • since the numbers of target pixels corresponding to different preset detection areas are different, the amount of pixel information collected by the target pixels is different, which leads to different resolving power and power consumption when detecting the object to be detected through different preset detection areas. The greater the number of target pixels corresponding to a preset detection area, the higher the resolving power for detecting the object to be detected, and the higher the power consumption.
  • the resolving power can reflect the performance of detecting the object to be detected.
  • taking the case where the preset detection area includes a large lens detection area, a small lens detection area and a full-lens detection area, and the number of first microlenses on the lens is greater than the number of second microlenses, as an example, the resolving-power relationship is: full-lens detection area > small-lens detection area > large-lens detection area, and the power-consumption relationship is likewise: full-lens detection area > small-lens detection area > large-lens detection area.
  • the area switching conditions corresponding to each preset detection area can be preset in the processor 103, so that when the processor 103 detects the object to be detected through the target pixels corresponding to a preset detection area, it can determine whether the resolving power of that preset detection area for detecting the object to be detected is insufficient or excessive.
  • the area switching condition corresponding to each preset detection area may be set according to the distance range between the object to be detected and the object detection device that can be detected by the preset detection area, and the size range and moving speed range of the object to be detected.
  • when the resolving power is insufficient, the processor 103 may select a preset detection area with higher resolving power from the plurality of preset detection areas as the target detection area (that is, select a preset detection area with more target pixels) to detect the object to be detected.
  • when the resolving power is excessive, the processor 103 may select a preset detection area with lower resolving power from the plurality of preset detection areas as the target detection area, that is, a preset detection area with fewer target pixels, to detect the object to be detected, thereby reducing the power consumption of detecting the object to be detected.
  • the power consumption of object detection can be reduced while ensuring the object detection performance (detection accuracy), so as to meet the needs of different scenarios.
  • the processor 103 is further configured to determine a second object feature of the object to be detected according to second pixel information corresponding to the target detection area, where the second pixel information is pixel point information collected from a target pixel point corresponding to the target detection area.
  • the processor 103 may calculate the second object feature of the object to be detected according to the second pixel information collected from the target pixel point corresponding to the target detection area.
  • the second object characteristic may include a second distance between the object to be detected and the object detection device, and a second size and a second moving speed of the object to be detected.
  • the processor 103 may directly use the first object feature as the second object feature.
  • the first object feature includes a first distance between the object to be detected and the object detection device, and a first size and a first moving speed of the object to be detected
  • the area switching condition may include any of the following conditions:
  • the first size is smaller than the first size threshold corresponding to the current detection area, or the first size is greater than or equal to the second size threshold corresponding to the current detection area, and the second size threshold is greater than the first size threshold.
  • the first distance is greater than the first distance threshold corresponding to the current detection area, or the first distance is less than or equal to the second distance threshold corresponding to the current detection area, and the second distance threshold is smaller than the first distance threshold.
  • the first moving speed is less than the first speed threshold corresponding to the current detection area, or the first moving speed is greater than or equal to the second speed threshold corresponding to the current detection area, and the second speed threshold is greater than the first speed threshold.
  • the first size threshold is the minimum size of the object to be detected that can be detected in the current detection area
  • the first distance threshold is the maximum distance between the object to be detected and the object detection device that can be detected in the current detection area
  • the first speed threshold is the minimum speed of the object to be detected that can be detected in the current detection area.
  • the second size threshold is the maximum size of the object to be detected that can be detected in the current detection area
  • the second distance threshold is the minimum distance between the object to be detected and the object detection device that can be detected in the current detection area
  • the second speed threshold is the maximum speed of the object to be detected that can be detected in the current detection area.
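The six thresholds above combine into one disjunctive check; the following is a sketch in which the dict keys are names invented here and all threshold values are hypothetical:

```python
# Sketch of the "area switching condition": switching is triggered when
# ANY of the three per-quantity conditions holds. Key names and values
# below are illustrative assumptions, not from the patent.

def switching_condition_met(size, distance, speed, th):
    return (size < th["first_size"] or size >= th["second_size"]
            or distance > th["first_distance"] or distance <= th["second_distance"]
            or speed < th["first_speed"] or speed >= th["second_speed"])

# Example thresholds for one detection area, respecting the stated
# ordering: second_size > first_size, second_distance < first_distance,
# second_speed > first_speed.
thresholds = {"first_size": 0.05, "second_size": 2.0,
              "first_distance": 10.0, "second_distance": 0.2,
              "first_speed": 0.01, "second_speed": 5.0}
```

An object of size 0.5 at distance 3.0 moving at speed 1.0 lies within all three ranges, so no switch is triggered for it.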
  • taking the case where the preset detection area includes a large lens detection area, a small lens detection area and a full-lens detection area, and the number of first microlenses on the lens is greater than the number of second microlenses, as an example: after the object detection device 100 is activated, the user can manually select one of the large lens detection area, the small lens detection area and the full-lens detection area as the current detection area to detect the object to be detected.
  • the method of automatic selection can also be adopted.
  • the large lens detection area can be selected by default; the first object feature of the object to be detected is then calculated from the first pixel information collected by the target pixels corresponding to the large lens detection area, and it is judged whether the first object feature satisfies the area switching condition corresponding to the large lens detection area. If any one of the following conditions is satisfied: the first size is smaller than the first size threshold corresponding to the large lens detection area, the first distance is greater than the first distance threshold corresponding to the large lens detection area, or the first moving speed is smaller than the first speed threshold corresponding to the large lens detection area (at this time, the resolving power of the target pixels corresponding to the large lens detection area for detecting the object to be detected is insufficient, which manifests as a large difference between the pixel values of adjacent target pixels or pixel values that change too slowly), the detection area is switched to the small lens detection area.
  • then the first object feature of the object to be detected is recalculated, and it is judged whether the recalculated first object feature satisfies the area switching condition corresponding to the small lens detection area. If any one of the following conditions is satisfied: the recalculated first size is smaller than the first size threshold corresponding to the small lens detection area, the recalculated first distance is greater than the first distance threshold corresponding to the small lens detection area, or the recalculated first moving speed is smaller than the first speed threshold corresponding to the small lens detection area (at this time, the resolving power of the target pixels corresponding to the small lens detection area for detecting the object to be detected is insufficient, which manifests as a large difference between the pixel values of adjacent target pixels), the detection area is switched to the full-lens detection area.
  • in the full-lens detection area, the first object feature of the object to be detected is recalculated, and it is judged whether the recalculated first object feature satisfies the area switching condition corresponding to the full-lens detection area. If any one of the following conditions is satisfied: the recalculated first size is greater than or equal to the second size threshold corresponding to the current detection area, the recalculated first distance is less than or equal to the second distance threshold corresponding to the current detection area, or the recalculated first moving speed is greater than or equal to the second speed threshold corresponding to the current detection area (at this time, the resolving power of the target pixels corresponding to the full-lens detection area for detecting the object to be detected is excessive, which manifests as a relatively small difference between the pixel values of adjacent target pixels), the full-lens detection area is switched to the small-lens detection area.
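The default-selection walk-through above amounts to a small state machine. In this sketch the two boolean inputs stand in for the threshold checks evaluated from the recalculated object feature; the area names are labels chosen here:

```python
# Sketch of the automatic switching flow: large lens -> small lens ->
# full lens when resolving power is insufficient, and full lens ->
# small lens when resolving power is excessive. The two flags stand in
# for the area switching condition checks described in the text.

def next_detection_area(current, insufficient, excessive):
    if current == "large_lens" and insufficient:
        return "small_lens"
    if current == "small_lens" and insufficient:
        return "full_lens"
    if current == "full_lens" and excessive:
        return "small_lens"
    return current  # otherwise keep the current detection area
```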
  • the target pixel point includes a third pixel point
  • the third pixel point is a pixel point included in the projection area of the plurality of microlenses on the image sensor.
  • the first pixel information is the pixel point information collected by the third pixel point.
  • the current detection area may be an area composed of all the third pixels.
  • the image sensor 102 may acquire the first pixel information collected by the third pixel point, and send the first pixel information to the processor 103 .
  • the processor 103 calculates the first object feature of the object to be detected according to the first pixel information.
  • the shape of the microlenses included in the lens 101 may be a polygon.
  • the microlenses may be rectangles of equal area; as shown in FIG. 3, the smallest square in FIG. 3 corresponds to one pixel, and the 6*3 rectangular area enclosed by the dotted line in FIG. 3 is a microlens.
  • the microlenses may be hexagons of equal area. As shown in FIG. 4 , the smallest square in FIG. 4 corresponds to one pixel, and the hexagonal area surrounded by dotted lines in FIG. 4 is the microlens.
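Of the equal-area polygon tilings in FIG. 3 and FIG. 4, the rectangular case is simple enough to sketch. The function name is invented, and the 6*3 lens footprint is the example value from the text:

```python
# Sketch: tiling the imaging surface with equal-area 6*3 rectangular
# microlenses as in FIG. 3. Each returned (row, col) is the top-left
# pixel of one microlens's projection area; dimensions are examples.

def rectangle_tiling(sensor_rows, sensor_cols, lens_rows=6, lens_cols=3):
    return [(r, c)
            for r in range(0, sensor_rows, lens_rows)
            for c in range(0, sensor_cols, lens_cols)]
```

A 12*9-pixel surface, for instance, holds 2*3 = 6 such microlenses.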
  • the object detection device in the present disclosure includes a lens, an image sensor and a processor; the lens includes a plurality of microlenses with the same area and shape, or the lens includes multiple types of microlenses, where different types of microlenses differ in at least one of area and shape.
  • the image sensor is configured to acquire first pixel information corresponding to the current detection area in at least one preset detection area, and to send the first pixel information to the processor; each preset detection area is an area composed of target pixels on the image sensor, the target pixels are the pixels included in the projection areas of a plurality of microlenses on the image sensor, and the first pixel information is the pixel information collected by the target pixels corresponding to the current detection area.
  • the processor is configured to determine the first object feature of the object to be detected according to the first pixel information.
  • the present disclosure determines the first object feature of the object to be detected by using the first pixel information corresponding to the current detection area, and can realize accurate detection of the object to be detected with low power consumption; in addition, the structure of the object detection device is simple, which reduces the cost, weight and volume of the device.
  • Fig. 5 is a flow chart of an object detection method according to an exemplary embodiment.
  • the object detection device includes a lens, an image sensor and a processor; the lens includes a plurality of microlenses with the same area and shape, or the lens includes multiple types of microlenses, where different types of microlenses differ in at least one of area and shape; the method may include the following steps:
  • Step 201: Acquire, through the image sensor, first pixel information corresponding to the current detection area in at least one preset detection area, and send the first pixel information to the processor.
  • each preset detection area is an area composed of target pixels on the image sensor
  • the target pixels are pixels included in the projection area of a plurality of microlenses on the image sensor
  • the first pixel information is the pixel information collected by the target pixels corresponding to the current detection area
  • Step 202: The processor determines the first object feature of the object to be detected according to the first pixel information.
  • Fig. 6 is a flowchart showing another object detection method according to an exemplary embodiment.
  • the target pixel point includes the first pixel point and/or the second pixel point
  • the first pixel point is the pixel point included in the projection area of the first microlens on the image sensor
  • the second pixel point is the pixel point included in the projection area of the second microlens on the image sensor
  • the method may also include the following steps:
  • Step 203: The processor determines a target detection area other than the current detection area from the at least one preset detection area when the first object feature satisfies the area switching condition corresponding to the current detection area.
  • Step 204: The processor determines the second object feature of the object to be detected according to second pixel information corresponding to the target detection area, where the second pixel information is the pixel point information collected by the target pixel points corresponding to the target detection area.
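A minimal sketch of the area-switching logic in steps 203 and 204: if the first object feature satisfies the switching condition of the current detection area, another preset detection area is selected and the feature is recomputed there. The area names, pixel values and thresholds below are invented for illustration; the disclosure does not specify concrete conditions.

```python
def brightness_feature(pixels, threshold=128):
    """Toy object feature: fraction of pixels brighter than a threshold (assumption)."""
    return sum(p > threshold for p in pixels) / len(pixels)

def select_target_area(current, feature, area_names, switch_conditions):
    """Step 203 analogue: return a preset area other than `current` when triggered."""
    if not switch_conditions[current](feature):
        return current                              # condition not met: stay put
    return next(name for name in area_names if name != current)

# Pixel information per preset detection area (made-up values).
areas = {"large_lens": [10, 200, 180], "small_lens": [250, 240, 230]}
switch_conditions = {
    "large_lens": lambda f: f > 0.5,                # coarse view found something
    "small_lens": lambda f: f < 0.1,                # fine view lost the object
}

current = "large_lens"
f1 = brightness_feature(areas[current])             # first object feature
target = select_target_area(current, f1, areas, switch_conditions)
f2 = brightness_feature(areas[target])              # second object feature
```

Here the bright pixels in the coarse (large lens) area trigger a switch to the fine (small lens) area, where the feature is computed again from that area's own pixel information.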
  • the preset detection area includes a large lens detection area, a small lens detection area and a full lens detection area
  • the target pixel point corresponding to the large lens detection area includes the first pixel point
  • the target pixel point corresponding to the small lens detection area includes the second pixel point.
  • the full lens detection area includes the large lens detection area and the small lens detection area.
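The relationship between the three preset detection areas can be illustrated with plain pixel-coordinate sets; the coordinates below are made up, and only the containment relation mirrors the text.

```python
# Hypothetical pixel coordinates for the three preset detection areas.
large_lens_area = {(0, 0), (0, 1), (1, 0), (1, 1)}   # first pixel points
small_lens_area = {(2, 2), (2, 3)}                   # second pixel points

# The full lens detection area is the union of the other two areas.
full_lens_area = large_lens_area | small_lens_area
```

Because the large lens and small lens areas are subsets of the full lens area, reading out only one of them touches fewer pixel points than the full lens area, which matches the low-power detection goal stated above.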
  • the shapes of the first microlenses and the second microlenses are polygons.
  • the target pixel point includes a third pixel point
  • the third pixel point is a pixel point included in the projection area of the plurality of microlenses on the image sensor
  • the first pixel information is pixel point information collected by the third pixel point.
  • the object detection device in the present disclosure includes a lens, an image sensor and a processor; the lens includes a plurality of microlenses with the same area and shape, or the lens includes multiple types of microlenses, where different types of microlenses differ in at least one of area and shape; the image sensor is configured to acquire first pixel information corresponding to the current detection area in at least one preset detection area, and to send the first pixel information to the processor
  • each preset detection area is an area composed of target pixel points on the image sensor
  • the target pixel points are pixel points included in the projection areas of a plurality of microlenses on the image sensor
  • the first pixel information is the pixel point information collected by the target pixel points corresponding to the current detection area.
  • the processor is configured to determine the first object feature of the object to be detected according to the first pixel information.
  • the present disclosure determines the first object feature of the object to be detected by using the first pixel information corresponding to the current detection area, and can thus detect the object accurately at low power consumption.
  • in addition, the structure of the object detection device is simple, which reduces the cost, weight and volume of the device.
  • FIG. 7 is a block diagram of an electronic device 700 according to an exemplary embodiment.
  • the electronic device 700 may include: a processor 701 and a memory 702 .
  • the electronic device 700 may also include one or more of a multimedia component 703 , an input/output (I/O) interface 704 , and a communication component 705 .
  • the processor 701 is used to control the overall operation of the electronic device 700 to complete all or part of the steps in the above-mentioned object detection method.
  • The memory 702 is used to store various types of data to support operation on the electronic device 700. Such data may include, for example, instructions for any application or method operating on the electronic device 700, as well as application-related data such as contact data, messages sent and received, pictures, audio, video, and so on.
  • The memory 702 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
  • The multimedia component 703 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals.
  • the audio component may include a microphone for receiving external audio signals.
  • the received audio signal may be further stored in memory 702 or transmitted through communication component 705 .
  • the audio component also includes at least one speaker for outputting audio signals.
  • The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, a mouse or buttons. These buttons may be virtual buttons or physical buttons.
  • the communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices.
  • The wireless communication may be, for example, one or a combination of Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, and the like
  • the corresponding communication component 705 may include: Wi-Fi module, Bluetooth module, NFC module and so on.
  • In an exemplary embodiment, the electronic device 700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, so as to perform the above object detection method.
  • A computer-readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the above-mentioned object detection method.
  • the computer-readable storage medium can be the above-mentioned memory 702 including program instructions, and the above-mentioned program instructions can be executed by the processor 701 of the electronic device 700 to implement the above-mentioned object detection method.
  • An object detection device, wherein the device includes a lens, an image sensor and a processor; the lens includes a plurality of microlenses with the same area and shape, or the lens includes multiple types of microlenses, and different types of the microlenses differ in at least one of area and shape;
  • the image sensor is configured to acquire first pixel information corresponding to the current detection area in at least one preset detection area, and to send the first pixel information to the processor; each of the preset detection areas is an area composed of target pixel points on the image sensor, the target pixel points are pixel points included in the projection areas of a plurality of the microlenses on the image sensor, and the first pixel information is the pixel point information collected by the target pixel points corresponding to the current detection area;
  • the processor is configured to determine the first object feature of the object to be detected according to the first pixel information.
  • when the lens includes multiple types of the microlenses, and the multiple types of the microlenses are a first microlens and a second microlens, the target pixel point includes a first pixel point and/or a second pixel point; the first pixel point is a pixel point included in the projection area of the first microlens on the image sensor, and the second pixel point is a pixel point included in the projection area of the second microlens on the image sensor;
  • the processor is further configured to determine a target detection area other than the current detection area from the at least one preset detection area when the first object feature satisfies the area switching condition corresponding to the current detection area;
  • the processor is further configured to determine a second object feature of the object to be detected according to second pixel information corresponding to the target detection area, where the second pixel information is the pixel point information collected by the target pixel points corresponding to the target detection area.
  • the preset detection area includes a large lens detection area, a small lens detection area and a full lens detection area;
  • the target pixel point corresponding to the large lens detection area includes the first pixel point, the target pixel point corresponding to the small lens detection area includes the second pixel point, and the full lens detection area includes the large lens detection area and the small lens detection area.
  • when the lens includes a plurality of the microlenses with the same area and shape, the target pixel point includes a third pixel point; the third pixel point is a pixel point included in the projection areas of the plurality of microlenses on the image sensor, and the first pixel information is the pixel point information collected by the third pixel point.
  • An object detection method, applied to an object detection device, wherein the device includes a lens, an image sensor and a processor; the lens includes a plurality of microlenses with the same area and shape, or the lens includes multiple types of microlenses, and different types of the microlenses differ in at least one of area and shape; the method includes:
  • acquiring, by the image sensor, first pixel information corresponding to a current detection area in at least one preset detection area, and sending the first pixel information to the processor; each of the preset detection areas is an area composed of target pixel points on the image sensor, the target pixel points are pixel points included in the projection areas of a plurality of the microlenses on the image sensor, and the first pixel information is the pixel point information collected by the target pixel points corresponding to the current detection area;
  • the processor determines the first object feature of the object to be detected according to the first pixel information.
  • when the lens includes multiple types of the microlenses, and the multiple types of the microlenses are a first microlens and a second microlens, the target pixel point includes a first pixel point and/or a second pixel point; the first pixel point is a pixel point included in the projection area of the first microlens on the image sensor, and the second pixel point is a pixel point included in the projection area of the second microlens on the image sensor; the method further includes:
  • when the lens includes a plurality of the microlenses with the same area and shape, the target pixel point includes a third pixel point; the third pixel point is a pixel point included in the projection areas of the plurality of microlenses on the image sensor, and the first pixel information is the pixel point information collected by the third pixel point.
  • An electronic device comprising:
  • a memory on which a computer program is stored; and
  • a processor configured to execute the computer program in the memory, to implement the steps of the method in any one of Embodiments 6 to 8.

Abstract

The present disclosure relates to an object detection apparatus and method, a storage medium and an electronic device. The apparatus comprises a lens, an image sensor and a processor. The lens comprises multiple microlenses having the same area and shape; alternatively, the lens comprises multiple types of microlenses, and different types of microlenses differ in at least one of area and shape. The image sensor is used to acquire first pixel information corresponding to a current detection area in at least one preset detection area, and to send the first pixel information to the processor. The processor is used to determine a first feature of an object to be detected according to the first pixel information. According to the present disclosure, the first feature of the object to be detected is determined from the first pixel information corresponding to the current detection area, so that the object can be detected accurately at low power consumption. Moreover, the object detection apparatus has a simple structure, which reduces the cost, weight and volume of the apparatus.
PCT/CN2021/122452 2020-12-02 2021-09-30 Object detection apparatus and method, storage medium and electronic device WO2022116676A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011401996.9A CN112600994B (zh) 2020-12-02 2020-12-02 Object detection device and method, storage medium and electronic equipment
CN202011401996.9 2020-12-02

Publications (1)

Publication Number Publication Date
WO2022116676A1 true WO2022116676A1 (fr) 2022-06-09

Family

ID=75188577

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/122452 WO2022116676A1 (fr) 2020-12-02 2021-09-30 Object detection apparatus and method, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN112600994B (fr)
WO (1) WO2022116676A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112600994B (zh) * 2020-12-02 2023-04-07 达闼机器人股份有限公司 Object detection device and method, storage medium and electronic equipment
CN112584015B (zh) * 2020-12-02 2022-05-17 达闼机器人股份有限公司 Object detection method and device, storage medium and electronic equipment
CN117553910A (zh) * 2022-08-05 2024-02-13 上海禾赛科技有限公司 Detection module, detector and lidar

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1501507A (zh) * 2002-10-25 2004-06-02 Image sensor with large microlenses in the peripheral area
US20060227214A1 (en) * 2005-04-11 2006-10-12 Valeo Vision Method, device and camera for detecting objects from digital images
CN1937236A (zh) * 2005-09-19 2007-03-28 C.R.F.阿西安尼顾问公司 Multifunctional optical sensor comprising a photodetector matrix coupled to a microlens matrix
CN103209301A (zh) * 2012-01-13 2013-07-17 佳能株式会社 Image pickup apparatus
WO2013114999A1 (fr) * 2012-01-30 2013-08-08 オリンパス株式会社 Image capture apparatus
CN107005640A (zh) * 2014-12-04 2017-08-01 汤姆逊许可公司 Image sensor unit and imaging device
CN112584015A (zh) * 2020-12-02 2021-03-30 达闼机器人有限公司 Object detection method and device, storage medium and electronic equipment
CN112600994A (zh) * 2020-12-02 2021-04-02 达闼机器人有限公司 Object detection device and method, storage medium and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101840435A (zh) * 2010-05-14 2010-09-22 中兴通讯股份有限公司 Method for realizing video preview and retrieval, and mobile terminal
JP5947548B2 (ja) * 2012-01-13 2016-07-06 キヤノン株式会社 Imaging apparatus, control method therefor, image processing apparatus, image generation method, and program
JP6019947B2 (ja) * 2012-08-31 2016-11-02 オムロン株式会社 Gesture recognition device, control method therefor, display apparatus, and control program
CN103049760B (zh) * 2012-12-27 2016-05-18 北京师范大学 Sparse representation target recognition method based on image partitioning and position weighting
WO2020024079A1 (fr) * 2018-07-28 2020-02-06 合刃科技(深圳)有限公司 Image recognition system


Also Published As

Publication number Publication date
CN112600994B (zh) 2023-04-07
CN112600994A (zh) 2021-04-02

Similar Documents

Publication Publication Date Title
WO2022116676A1 (fr) Appareil et procédé de détection d'objet, support de stockage et dispositif électronique
US8724013B2 (en) Method and apparatus with fast camera auto focus
US8233077B2 (en) Method and apparatus with depth map generation
US9230306B2 (en) System for reducing depth of field with digital image processing
WO2022116675A1 (fr) Procédé et appareil de détection d'objet, support de stockage et dispositif électronique
US8345986B2 (en) Image processing apparatus, image processing method and computer readable-medium
US9900500B2 (en) Method and apparatus for auto-focusing of an photographing device
US10078198B2 (en) Photographing apparatus for automatically determining a focus area and a control method thereof
JP2011508268A5 (fr)
US10564390B2 (en) Focusing control device, focusing control method, focusing control program, lens device, and imaging device
US20200412967A1 (en) Imaging element and imaging apparatus
US11343422B2 (en) Focusing control device, focusing control method, focusing control program, lens device, and imaging device
US10958824B2 (en) Imaging apparatus and image processing method
US9407845B1 (en) Self powering camera
US20170094189A1 (en) Electronic apparatus, imaging method, and non-transitory computer readable recording medium
US10438372B2 (en) Arithmetic method, imaging apparatus, and storage medium
CN104215215B (zh) 一种测距方法
US20200221050A1 (en) Imaging control device, imaging apparatus, imaging control method, and imaging control program
CN114286011B (zh) 对焦方法和装置
US10939056B2 (en) Imaging apparatus, imaging method, imaging program
US10520793B2 (en) Focusing control device, focusing control method, focusing control program, lens device, and imaging device
CN114827438B (zh) 协处理芯片、电子设备以及触控响应方法
US11871106B2 (en) Imaging apparatus, imaging method, and program
US20230134771A1 (en) Image sensor, image acquisition apparatus, and electronic apparatus including the image acquisition apparatus
TW201611599A (zh) 影像擷取裝置及其控制方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21899710

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13-11-2023)