CN111246120A - Image data processing method, control system and storage medium for mobile device

Image data processing method, control system and storage medium for mobile device

Info

Publication number
CN111246120A
Authority
CN
China
Prior art keywords
depth
data
light intensity
map
depth map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010069936.5A
Other languages
Chinese (zh)
Other versions
CN111246120B (en)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ankobot Shanghai Smart Technologies Co ltd
Shankou Shenzhen Intelligent Technology Co ltd
Original Assignee
Ankobot Shanghai Smart Technologies Co ltd
Shankou Shenzhen Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ankobot Shanghai Smart Technologies Co ltd, Shankou Shenzhen Intelligent Technology Co ltd filed Critical Ankobot Shanghai Smart Technologies Co ltd
Priority to CN202010069936.5A priority Critical patent/CN111246120B/en
Publication of CN111246120A publication Critical patent/CN111246120A/en
Application granted granted Critical
Publication of CN111246120B publication Critical patent/CN111246120B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/673Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Studio Devices (AREA)

Abstract

The application provides an image data processing method, a control system and a storage medium for a mobile device. The image data processing method comprises the following steps: acquiring a first depth map and its first light intensity data, and a second depth map and its second light intensity data, captured by the depth detection device; determining the correspondence between pixel positions of the first depth map and pixel positions of the second depth map according to the positional relationship between the first position and the second position; extracting first depth data from the first depth map and second depth data from the second depth map according to the first light intensity data and/or the second light intensity data and a light intensity condition; and synthesizing the first depth data and the second depth data according to the correspondence to obtain a new depth map. The method and device can synthesize images captured by the mobile device at different exposure amounts during movement, so as to obtain a high-quality image that is neither overexposed nor underexposed and covers a long-distance field of view.

Description

Image data processing method, control system and storage medium for mobile device
Technical Field
The present application relates to the field of data processing systems and methods, and in particular, to an image data processing method, a control system, and a storage medium for a mobile device.
Background
A mobile device is a device that can perform certain work while moving, such as a mobile phone, a mobile robot, or a robot arm equipped with an image pickup device. Taking a mobile robot as an example, it can be used indoors or outdoors, in industrial, commercial, or home settings; it can replace people in tasks such as security patrol, greeting guests or diners, and floor cleaning, and can also provide family companionship, office assistance, and the like.
Since the image capturing device of a mobile device can use only one exposure amount for each image it captures, a mobile device such as a mobile robot faces a trade-off while moving: to obtain an image of a distant scene, a high exposure amount is required, but the nearby scene may then be overexposed; to obtain a clear image of a nearby scene without overexposure, a low exposure amount is required, but the distant part of the field of view may then be underexposed.
Therefore, how to obtain a high-quality image with a long viewing range without overexposure and underexposure in the moving process of the mobile device has become an urgent technical problem to be solved in the industry.
Disclosure of Invention
In view of the above-mentioned shortcomings of the related art, the present application aims to provide an image data processing method, a control system and a storage medium for a mobile device, so as to obtain, during the movement of the mobile device, a high-quality image that covers a long-distance field of view and is neither overexposed nor underexposed.
To achieve the above and other related objects, a first aspect of the present application provides an image data processing method of a mobile device having a depth detection apparatus, the image data processing method comprising: acquiring a first depth map and first light intensity data thereof, a second depth map and second light intensity data thereof shot by the depth detection device; wherein the first depth map is captured by the depth detection device at a first position at a first exposure level, and the second depth map is captured by the depth detection device at a second position at a second exposure level; determining the corresponding relation between the pixel position of the first depth map and the pixel position of the second depth map according to the position relation between the first position and the second position; extracting first depth data in the first depth map and second depth data in the second depth map respectively according to the first light intensity data and/or the second light intensity data and a light intensity condition; and synthesizing the first depth data and the second depth data according to the corresponding relation to obtain a new depth map.
In certain embodiments of the first aspect of the present application, one of the first and second exposure levels corresponds to a high exposure and the other corresponds to a low exposure.
In certain embodiments of the first aspect of the present application, when the second exposure level is stronger than the first exposure level, the extracted first depth data comprises: the depth data remaining in the first depth map after the underexposed depth data is removed; and the extracted second depth data comprises: the depth data remaining in the second depth map after the overexposed depth data is removed.
In certain embodiments of the first aspect of the present application, the depth detection device obtains a first light intensity map and a second light intensity map when capturing the first depth map and the second depth map respectively; the depth data of each pixel position in the first depth map corresponds to the light intensity data of each pixel position in the first light intensity map one by one, and the depth data of each pixel position in the second depth map corresponds to the light intensity data of each pixel position in the second light intensity map one by one.
In certain embodiments of the first aspect of the present application, the image data processing method further comprises: extracting third light intensity data corresponding to pixel positions of the first depth data and fourth light intensity data corresponding to pixel positions of the second depth data; and synthesizing the third light intensity data and the fourth light intensity data according to the corresponding relation to obtain a new light intensity map.
In certain embodiments of the first aspect of the present application, the synthesizing the first depth data and the second depth data according to the correspondence to obtain a new depth map includes at least one of: converting the second depth data into depth data corresponding to a first position according to the corresponding relation, and synthesizing the converted depth data and the first depth data to obtain a new depth map; or converting the first depth data into depth data corresponding to a second position according to the corresponding relation, and synthesizing the converted depth data and the second depth data to obtain a new depth map.
In certain embodiments of the first aspect of the present application, the depth data in the new depth map comprises: the first depth data and/or the second depth data selected based on an overlapping region of the first depth data and the second depth data.
In certain embodiments of the first aspect of the present application, the first depth map and the second depth map are captured by the mobile device at a first exposure level and a second exposure level respectively according to a preset switching frequency during the motion process.
In certain embodiments of the first aspect of the present application, the first and second depth maps are any two adjacent images of a plurality of consecutively taken images.
In certain embodiments of the first aspect of the present application, the first exposure level and the second exposure level are obtained by controlling the depth detection device to have different shutter speeds.
In certain embodiments of the first aspect of the present application, the first exposure level and the second exposure level are obtained by controlling the depth detection device to be in different illumination environments.
In certain embodiments of the first aspect of the present application, the positional relationship is derived from motion data obtained from the mobile device between the first location and the second location.
To achieve the above and other related objects, a second aspect of the present application provides a mobile device comprising: the depth detection device is used for shooting a depth map and light intensity data; a storage device for storing at least one program; and the processing device is connected with the depth detection device and the storage device and is used for calling and executing the at least one program so as to coordinate the depth detection device and the storage device to execute and realize the image data processing method according to any one of the first aspect of the application.
In certain embodiments of the second aspect of the present application, the mobile device further comprises a motion measurement means for providing motion data of the mobile device.
In certain embodiments of the second aspect of the present application, the shutter in the depth detection device switches different shutter speeds at a preset switching frequency to provide the first exposure level and the second exposure level.
In certain embodiments of the second aspect of the present application, the light emitter in the depth detection device switches different illumination levels at a preset switching frequency to provide the first exposure level and the second exposure level.
To achieve the above and other related objects, a third aspect of the present application provides a control system of a mobile device equipped with a depth detection apparatus, the control system comprising: interface means for receiving the depth map and light intensity data captured from the depth detection means; a storage device for storing at least one program; and the processing device is connected with the interface device and the storage device and used for calling and executing the at least one program so as to coordinate the interface device, the storage device and the depth detection device to execute and realize the image data processing method according to any one of the first aspect of the application.
To achieve the above and other related objects, a fourth aspect of the present application provides a computer-readable storage medium storing at least one program which, when called, executes and implements the image data processing method according to any one of the first aspects of the present application.
To sum up, the image data processing method, the control system, and the storage medium of the mobile device disclosed in the present application obtain the first depth map and its first light intensity data, and the second depth map and its second light intensity data, captured by the depth detection device, and can obtain a new, synthesized depth map according to the first light intensity data and/or the second light intensity data and the light intensity condition. From depth maps obtained at different exposure levels during the movement of the mobile device, a high-quality depth map that is free of overexposure and underexposure and covers a long-distance field of view can be obtained. A high-quality light intensity map with a long-distance field of view and without over- and under-exposure can likewise be obtained.
Drawings
The specific features of the invention to which this application relates are set forth in the appended claims. The features and advantages of the invention to which this application relates will be better understood by reference to the exemplary embodiments described in detail below and the accompanying drawings. The brief description of the drawings is as follows:
fig. 1 is a block diagram showing a hardware configuration of a control system of a mobile device according to the present application.
Fig. 2 is a flowchart illustrating an image data processing method of a mobile device according to an embodiment of the present application.
Fig. 3 is a schematic diagram illustrating a positional relationship between the first position and the second position of the present application in an embodiment.
Fig. 4 is a schematic diagram showing a hardware configuration of an embodiment in which the mobile device is an autonomous cleaning robot.
Fig. 5 is a schematic diagram showing a hardware configuration of another embodiment in which the mobile device is an autonomous cleaning robot.
Fig. 6 shows the relative position relationship between an object measurement point M and the first and second positions.
Fig. 7 shows the relative position relationship between an object measurement point N and the first and second positions.
Fig. 8 is a block diagram showing a hardware configuration of a mobile device according to the present application in an embodiment.
Detailed Description
The following description of the embodiments of the present application is provided for illustrative purposes, and other advantages and capabilities of the present application will become apparent to those skilled in the art from the present disclosure.
In the following description, reference is made to the accompanying drawings that describe several embodiments of the application. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present disclosure. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. Spatially relative terms, such as "upper," "lower," "left," "right," "below," "above," and the like, may be used herein to facilitate describing one element or feature's relationship to another element or feature as illustrated in the figures.
Although the terms first, second, etc. may be used herein to describe various elements or parameters in some instances, these elements or parameters should not be limited by these terms. These terms are only used to distinguish one element or parameter from another element or parameter. For example, the first depth data may be referred to as second depth data, and similarly, the second depth data may be referred to as first depth data, without departing from the scope of the various described embodiments. The first depth data and the second depth data are both describing one depth data, but they are not the same depth data unless the context clearly indicates otherwise. Similar situations also include first light intensity data and second light intensity data, or first depth map and second depth map.
Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition will occur only when a combination of elements, functions, steps or operations are inherently mutually exclusive in some way.
A mobile device is a device that can perform a specific task while moving; the movement may be generated by a user operation or may be autonomous. Examples of mobile devices include mobile terminal equipment, mobile robots, movable mechanical arms, and the like. Mobile terminal equipment can be moved through a physical space by a user holding and operating it. A mobile robot can move autonomously based on a preset navigation route and its own positioning information. A mechanical arm may move autonomously based on a detected positional relationship with a target or non-target object. For example, when the mobile device is a mobile robot, the mobile robot may be of any of the following types: a mobile robot used indoors or outdoors; a mobile robot used in industrial, commercial, or home settings; a mobile robot that replaces people in security patrol, greeting guests or diners, or floor cleaning; or a mobile robot used for family companionship, office assistance, and the like.
Due to the complexity of the working environment, the mobile device needs to be accurately positioned while moving, so as to measure its positional relationship with surrounding obstacles or with an initial position. The mobile device therefore includes a depth detection device for acquiring depth data between its own position and an obstacle or the initial position; the positional relationship can be estimated from the acquired depth data, and the control strategy of the mobile device can be adjusted accordingly. Taking a mobile robot as an example, the mobile robot obtains depth data between itself and an obstacle from the depth map and/or light intensity map provided by the depth detection device, and can then avoid the obstacle or adjust its working mode in a timely and effective manner. Taking a mobile phone as an example, the phone runs a built-in game APP developed with augmented reality technology, through which the user interacts with virtual objects in the game; when the phone is moved during interaction, it determines the positional relationship between its current position and its initial position from the depth data provided by the depth detection device, and then prompts the user to adjust the pose of the phone. Taking a mechanical arm as an example, the arm determines the positional relationship between itself and the operating position from the depth data provided by the depth detection device, and then adjusts its position or decides to execute the next instruction (such as a grasping instruction or a releasing instruction).
Here, the depth detection device obtains a depth map by emitting a specific light wave and measuring the time the wave takes from emission to return, from which the depth data is calculated. The depth detection device is affected by the light wave, the shutter speed, the surrounding environment, the material of obstacles, and the like, so regions that are difficult to identify may appear in the obtained depth map. The intensity of the light wave reflected back to the depth detection device per unit time can be measured by the exposure amount. The depth detection device uses only one exposure amount when capturing each depth map: if, during the movement of the mobile device, the depth map of a distant scene must be obtained with a higher exposure amount, the depth map of the nearby scene may show excessive noise, such as overexposure. If a clear image of a nearby scene must be acquired with a low exposure amount, the depth map of the distant scene may be too dark to measure an accurate depth or light intensity, i.e., underexposure may occur. With depth maps of a single exposure amount, it is therefore difficult to obtain a high-quality depth map that is neither overexposed nor underexposed and that has a long-distance field of view.
Based on the foregoing examples, and extending to other mobile devices equipped with depth detection devices, the present application provides an image data processing method, a control system, and a storage medium for a mobile device. During the movement of the mobile device, depth maps captured at different positions with different exposure amounts, together with their corresponding light intensity data, are obtained; the depth data corresponding to image regions that are neither underexposed nor overexposed is then extracted based on the light intensity data and the corresponding light intensity condition; and the extracted depth data is synthesized according to the relative positional relationship between the different positions to obtain a high-quality depth map covering both distant and nearby scenes. For example, the resulting high-quality depth map contains depth data that is neither overexposed nor underexposed.
Here, the mobile device is provided with a depth detection means. Wherein the depth detection means is detection means for providing a depth map and light intensity data at a preset pixel resolution.
The depth data acquired at each pixel position in the depth map represents the distance information between the depth detection device and the corresponding object measurement point within the captured field of view. Using the measurement field of view of the depth detection device, each pixel position in the depth map also provides the deflection angle of the measured distance information relative to a preset coordinate point in the depth map. In this way, the depth map describes the geometry of the visible surfaces of objects within the field of view of the depth detection device using pixel-level point cloud data.
The light intensity data acquired at the preset pixel positions can represent, in a light intensity map, the image of the objects within the captured field of view. The light intensity data of each pixel position comprises the light energy value reflected by the corresponding object measurement point; in other words, the light intensity data represents the brightness value corresponding to each object measurement point. Note that the same brightness value may correspond to different colors, such as red or green. The brightness value lies between 0 and 255, with values near 255 indicating high brightness and values near 0 indicating low brightness. For example, if the light intensity map is a grayscale map, the light intensity data may be represented by the grayscale data of that map. For another example, if the light intensity map is a color map, the light intensity data may be the grayscale data of the R channel, the G channel, or the B channel, i.e., the color map contains three sets of grayscale data. Each pixel position in the depth map or the light intensity map corresponds to an object measurement point in the actual physical space.
Here, the depth detection device includes, but is not limited to: an image pickup device including a light receiver, an image pickup device integrating a light receiver and a light emitter (such as a ToF sensor), and the like. The light receiver is composed of a depth measuring unit and a light sensor. Depending on the principle used to measure depth data, the depth measuring unit includes, but is not limited to: a lidar sensor, a time-of-flight based depth measuring unit, a structured-light based depth measuring unit, and the like. Depending on the light emitter used, the depth measuring unit includes, but is not limited to: an area-array depth measuring unit and a point-array depth measuring unit. An example of the area-array depth measuring unit is an array of photodiodes. Examples of the light sensor are a CCD or a CMOS sensor. An example of the light emitter is an infrared emitter (e.g., an infrared LED array).
The mobile device performs an image data processing method of the mobile device by means of a control system configured therein. Please refer to fig. 1, which is a block diagram of a hardware structure of a control system of a mobile device according to the present application. The control system 11 comprises interface means 111, storage means 113, and processing means 112. The interface device 111 includes but is not limited to: a serial interface such as an HDMI interface or a USB interface, or a parallel interface, etc.
The storage device 113 is used for storing at least one program, and the at least one program can be used for the processing device 112 to execute the image data processing method of the mobile device. For example, the storage device 113 also stores a control strategy of the exposure level. The control strategy is used for controlling the mobile equipment to switch different exposure levels according to a preset switching frequency in the motion process of the mobile equipment so as to obtain depth maps and light intensity data under different exposure levels. In particular, the control strategy is used to control the mobile device to perform a periodic variation of the exposure level. For example, the exposure level in one control period is composed of one high exposure and one low exposure. As another example, the exposure level during a control period consists of one high exposure and a plurality of consecutive low exposures. For another example, the exposure level in one control period is constituted by a plurality of consecutive high exposures and one low exposure. The period is related to the exposure levels of the high and low exposures, the time interval between the two adjacent images taken by the depth detection device 12, etc.
Here, the storage device 113 includes, but is not limited to: Read-Only Memory (ROM), Random Access Memory (RAM), and Non-Volatile Memory (NVRAM). For example, the storage device 113 includes a flash memory device or other non-volatile solid-state storage device. In certain embodiments, the storage device may also include memory remote from the one or more processing devices, e.g., network-attached memory accessed via RF circuitry or external ports and a communication network, which may be the internet, one or more intranets, Local Area Networks (LANs), Wide Area Networks (WANs), Storage Area Networks (SANs), etc., or a suitable combination thereof. The storage device 113 also includes a memory controller that may control access to the memory by components of the mobile device, such as a Central Processing Unit (CPU), the interface device, or other components.
The processing device 112 is connected to the interface device 111, the storage device 113 and the depth detection device 12, and is configured to call and execute the at least one program, so as to coordinate the interface device 111, the storage device 113 and the depth detection device 12 to execute and implement the image data processing method described in this application.
The processing device 112 includes one or more general-purpose microprocessors, one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more field-programmable gate arrays (FPGAs), or any combination thereof. The processing device is also operatively coupled to I/O ports that enable the mobile device to interact with various other electronic devices. Taking the mobile device as a mobile robot as an example, the other electronic devices include but are not limited to: a movement motor in the moving device of the mobile robot, or a slave processor, such as a microcontroller unit (MCU), dedicated to controlling the moving device and the cleaning device of the mobile robot. The processing device can perform data read and write operations with the storage device, and can perform functions such as extracting images, extracting motion data of the mobile device, and determining the relative positional relationship of the mobile device between different locations based on the images and the motion data.
Please refer to fig. 2, which is a flowchart illustrating an image data processing method of a mobile device according to an embodiment of the present application. The image data processing method may be executed by the control system 11 of the mobile device, or other computer devices that can execute the image data processing method of the present application.
In step S110, a first depth map and first light intensity data thereof, and a second depth map and second light intensity data thereof captured by the depth detection device are obtained; wherein the first depth map is captured by the depth detection device at a first position and at a first exposure level, and the second depth map is captured by the depth detection device at a second position and at a second exposure level.
And the mobile equipment with the depth detection device controls the depth detection device according to the control strategy of the exposure level in the moving process, so that the depth detection device shoots a depth map according to the corresponding exposure level in the control strategy. Further, the processing device may acquire a first depth map taken by the depth detection device at a first exposure level at a first position and a second depth map taken at a second exposure level at a second position. Wherein the first location and the second location are different locations of the mobile device during movement. The first exposure level and the second exposure level are different exposure levels corresponding to a control strategy of the exposure levels. In particular, one of the first and second exposure levels corresponds to a high exposure and the other corresponds to a low exposure. The actual exposure amounts of the high exposure and the low exposure may be set according to factors such as the operating environment of the mobile device and the focal length of the depth detection apparatus.
Taking the mobile device as a mobile robot as an example, the mobile robot obtains depth data between itself and an obstacle so that it can avoid the obstacle in time or adjust its working mode effectively; during movement, the mobile robot captures the first depth map and the second depth map according to the control strategy of the exposure level. Taking the mobile device as a mobile phone as an example, when the phone is carried by a user and the user chooses to capture a depth map, the depth detection device of the phone executes the user's instruction and captures the first depth map and the second depth map during movement according to the preset control strategy of the exposure level.
Taking the mobile device as a robot arm as an example, in order to determine the position relationship between the robot arm and the working position during the working process of the robot arm, the depth detection device of the robot arm may capture the first depth map and the second depth map according to the control strategy of the exposure level during the moving process.
The actual physical space corresponding to the first depth map and the actual physical space corresponding to the second depth map have an overlapping field-of-view region. Taking the mobile device as an autonomous cleaning robot as an example, the depth maps captured at the first position and the second position during its movement both include images of the area near a sofa. The sofa image in the first depth map captured at the first position represents the sofa at the first exposure level, and the sofa image in the second depth map captured at the second position represents the sofa at the second exposure level.
In an embodiment, the first depth map and the second depth map are obtained by shooting the mobile device at a first exposure level and a second exposure level respectively according to a preset switching frequency during the motion process. Specifically, the mobile device executes a control strategy of the exposure level, and images are shot at different exposure levels in a control period based on a switching frequency corresponding to the control strategy so as to obtain depth maps at different exposure levels. The first depth map and the second depth map may be depth maps captured by the depth detection device in the same control period, or depth maps captured by the depth detection device in different control periods, as long as both depth maps include images of the same field of view.
The switching frequency is used to describe the number of times the first exposure level is performed and the number of times the second exposure level is performed within the control period. For example, the exposure level in one control cycle is composed of one time of the first exposure level and one time of the second exposure level, and then the second exposure level is switched to when the next depth map is taken after the depth map taken at the first exposure level is completed. For another example, the exposure level in a control period is composed of a first exposure level and a plurality of consecutive second exposure levels, and after completing a depth map shot at the first exposure level, the control period is switched to the second exposure level, and a plurality of depth maps are shot at the second exposure level, where the obtained first depth map may be the depth map shot at the first exposure level in the control period, or may be the depth map shot at the first exposure level in other control periods. The acquired second depth map may be any one of the plurality of depth maps in the control period, or may be a depth map captured at the second exposure level in another control period. For another example, if the exposure level in a control period is composed of a plurality of consecutive first exposure levels and a single second exposure level, after completing a plurality of depth maps captured at the first exposure level, the control period is switched to the second exposure level, and a depth map is captured at the second exposure level, where the obtained second depth map may be the depth map captured at the second exposure level in the control period, or the depth map captured at the second exposure level in another control period. The acquired first depth map may be any one of a plurality of depth maps in the control period, or may be a depth map captured at the first exposure level in another control period.
The control period is related to the exposure time of the first exposure level and the second exposure level, the exposure times, the time interval of the depth detection device for shooting two adjacent depth maps and other factors. The switching frequency is not limited to the above example, and may be switched between a plurality of first exposure levels and a plurality of second exposure levels.
In a specific embodiment, the first depth map and the second depth map are any two adjacent images of a plurality of continuously captured images. Taking an example in which the exposure level in a control period is composed of a first exposure level at one time and a second exposure level which is continuous a plurality of times, the first depth map being a depth map captured at the first exposure level in the control period, the second depth map being a depth map corresponding to a second exposure level adjacent to the first exposure level in the control period; or taking the second depth map as the depth map corresponding to the last second exposure level in the control period, and taking the first depth map as the depth map corresponding to the first exposure level in the next control period.
The first exposure level and the second exposure level may be obtained by controlling the depth detection device to use different shutter speeds. The shutter is a device that controls the exposure amount of the depth detection device. With the depth detection device in the same illumination environment, the faster the shutter speed (i.e., the shorter the shutter time), the smaller the amount of incoming light and the lower the exposure level; conversely, the slower the shutter speed, the higher the exposure level. The first exposure level and the second exposure level may also be obtained by controlling the depth detection device to be in different illumination environments. With the same shutter speed, the darker the illumination environment of the depth detection device, the lower the exposure level; conversely, the brighter the illumination environment, the higher the exposure level. The different illumination environments may be provided by controlling the light emitter to have different output powers. Specifically, with other conditions unchanged, the higher the output power of the light emitter, the brighter the illumination environment and the higher the exposure level; conversely, the lower the output power, the lower the exposure level.
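As an illustrative sketch only (not part of the disclosure), the following Python snippet shows one way such an exposure-level control period could be realized by switching shutter times at a preset switching frequency; the shutter times, the one-high/three-low pattern, and all names are assumptions chosen for illustration.

```python
from itertools import cycle

# Assumed example values: the disclosure does not specify concrete shutter times
# or how many low exposures follow each high exposure within a control period.
SHUTTER_TIME_MS = {"high": 4.0, "low": 1.0}       # slower shutter -> larger light intake -> higher exposure level
CONTROL_PERIOD = ["high", "low", "low", "low"]    # one high exposure followed by several consecutive low exposures

def exposure_schedule(num_frames):
    """Return (frame index, exposure level, shutter time in ms) for each depth frame,
    switching exposure levels periodically at the preset switching frequency."""
    levels = cycle(CONTROL_PERIOD)
    return [(i, level, SHUTTER_TIME_MS[level]) for i, level in zip(range(num_frames), levels)]

print(exposure_schedule(6))
# [(0, 'high', 4.0), (1, 'low', 1.0), (2, 'low', 1.0), (3, 'low', 1.0), (4, 'high', 4.0), (5, 'low', 1.0)]
```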
And the depth detection device correspondingly obtains first light intensity data and second light intensity data when respectively shooting the first depth map and the second depth map. The first light intensity data may be data in a first light intensity map and the second light intensity data may be data in a second light intensity map. The depth data of each pixel position in the first depth map corresponds to the light intensity data of each pixel position in the first light intensity map one by one, and the depth data of each pixel position in the second depth map corresponds to the light intensity data of each pixel position in the second light intensity map one by one. Specifically, the pixel position of the object measurement point in the actual object space in the depth map is the same as the pixel position in the light intensity map. For example, the actual physical space corresponds to an indoor environment, the depth data corresponding to one end point of an indoor table leg contacting with the ground is at the central pixel position in the depth map, and the light intensity data corresponding to the end point is also at the central pixel position in the light intensity map. Wherein the light intensity data and the depth data are the same or similar to those described above and will not be described in detail herein.
In step S120, a corresponding relationship between the pixel position of the first depth map and the pixel position of the second depth map is determined according to the position relationship between the first position and the second position.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating the positional relationship between the first position and the second position in an embodiment of the present application. As shown, the distance between the first position A and the second position B is d, the dotted line is the moving route of the mobile device, and the arrow indicates the direction directly ahead of the mobile device at the first position A. The line connecting the second position B and the first position A has a deflection angle theta relative to the direction directly ahead at the first position A. The distance d and the deflection angle theta represent the positional relationship between the first position and the second position. It should be noted that the mobile device may also move in three-dimensional space and is not limited to movement within a single plane as shown in fig. 3.
In an embodiment, the first position A and the second position B of the mobile device are taken directly as the first position and the second position of the depth detection device.
In another embodiment, the first and second positions of the depth detection device may be determined from the first position A and the second position B of the mobile device together with the mounting position of the depth detection device on the mobile device. In particular, the distance between the first and second positions of the depth detection device, and the deflection angle of the line connecting them relative to the direction directly ahead at the first position, may be determined from the first position A and the second position B of the mobile device.
In one embodiment, the positional relationship is derived from motion data obtained by the mobile device between the first position and the second position. The motion data includes, but is not limited to: velocity, acceleration, yaw angle, displacement data, and the like. The motion data may be provided by a motion measurement device of the mobile device, which includes, but is not limited to: an odometer, an accelerometer, an inertial navigation device, and the like. For example, the motion data may be acquired by an accelerometer or another instrument that measures acceleration; integrating the acceleration over time yields actual motion data of the mobile device such as speed, deflection angle, and physical position. When the mobile device has a moving device, the motion data may further include the deflection angle and the number of rotations of the moving device, which may be obtained from the drive data of the motor that drives the moving device.
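A minimal sketch of deriving the positional relationship (the distance d and deflection angle theta of fig. 3) from already-integrated motion data, assuming planar motion and a pose format of (x, y, heading); the pose representation and the example values are assumptions for illustration only.

```python
import math

def positional_relationship(pose_a, pose_b):
    """Distance d and deflection angle theta between two planar poses (x, y, heading),
    where theta is the angle of the line A->B measured from the direction directly
    ahead of the device at position A, as in fig. 3."""
    ax, ay, heading_a = pose_a
    bx, by, _ = pose_b
    d = math.hypot(bx - ax, by - ay)
    bearing = math.atan2(by - ay, bx - ax)                             # direction of the line A->B in the odometry frame
    theta = (bearing - heading_a + math.pi) % (2 * math.pi) - math.pi  # wrap into [-pi, pi)
    return d, theta

# Assumed odometry values: the device moved about 0.3 m forward and 0.1 m to its left.
print(positional_relationship((0.0, 0.0, 0.0), (0.3, 0.1, 0.05)))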
In another embodiment, the positional relationship is derived from map coordinates of the first and second locations in a map. Wherein the map is a pre-constructed abstract description of the physical space in which it is located. The mobile device captures a first image and a second image at a first position and a second position, respectively, using a camera. According to the first image corresponding to the first position, the second image corresponding to the second position and the positioning characteristics of the map, the map coordinates of the first position and the second position in the map can be determined, and further the position relation of the first position and the second position in the actual physical space can be determined. Specifically, from matching image features included in the first image with positioning features of the map, map coordinates of the first location in the map may be determined. Similarly, the map coordinates of the second location in the map may also be obtained by locating feature matching. The positional relationship of the two positions in the actual physical space can be found based on the map coordinates of the two positions.
The correspondence between the pixel positions of the first depth map and the pixel positions of the second depth map is then determined according to this positional relationship.
In an embodiment, first point cloud data of the object measurement points can be calculated from the first depth map, and second point cloud data of the object measurement points can be calculated from the second depth map. Third point cloud data of the object measurement points can be determined from the positional relationship and the first point cloud data; the third point cloud data is then matched against the second point cloud data to determine the pixel positions, in the second depth map, of the object measurement points corresponding to the first point cloud data, thereby determining the correspondence between the pixel positions of the first depth map and the pixel positions of the second depth map. Each object measurement point corresponds to depth data in the depth map; that is, point cloud data of a plurality of object measurement points, in one-to-one correspondence with pixel positions, can be obtained from the depth map.
Referring to fig. 6, fig. 6 shows the relative positional relationship between an object measurement point M and the first and second positions. As shown, according to the examples of determining the positional relationship above, the mobile device knows that the distance between the first position C and the second position D is l, and that the line connecting the second position D and the first position C has a deflection angle α relative to the direction directly ahead at the first position C. At the first position C and the second position D, the depth detection device provides two depth maps captured with different exposure amounts, namely the first depth map and the second depth map. The first point cloud data of an object measurement point M in the first depth map is x1; by coordinate conversion of the first point cloud data according to the positional relationship, the third point cloud data of the object measurement point M as detected from the second position D, denoted x1′, can be calculated.
The third point cloud data x1′ is matched against each second point cloud data in the corresponding region of the second depth map. According to the pixel position in the second depth map corresponding to the matched second point cloud data, the correspondence between the pixel positions of the object measurement point M in the first depth map and in the second depth map is determined.
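A minimal numerical sketch of this coordinate conversion, assuming a pinhole camera model, translation-only motion between the first position C and the second position D, and the camera-frame axes noted in the comments; none of these specifics (nor the function names) are fixed by the disclosure.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map into a camera-frame point cloud (x right, y down,
    z forward) under an assumed pinhole model with intrinsics fx, fy, cx, cy."""
    v, u = np.indices(depth.shape)
    z = depth.astype(np.float64)
    return np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=-1)

def to_second_position(points_first, d, theta):
    """Convert first-position point cloud coordinates (e.g. x1 for point M) into the
    second-position frame, assuming the device translates by distance d at deflection
    angle theta without changing heading; a heading change would additionally require
    a rotation derived from the yaw data."""
    t = np.array([d * np.sin(theta), 0.0, d * np.cos(theta)])  # displacement of position D expressed in the frame of C
    return points_first - t                                    # the "third point cloud data" (x1' for point M)

def project_to_pixels(points, fx, fy, cx, cy):
    """Project camera-frame points back to pixel positions; comparing these with the
    second depth map's own point cloud gives the pixel-position correspondence."""
    z = points[..., 2]
    return np.stack([fx * points[..., 0] / z + cx,
                     fy * points[..., 1] / z + cy], axis=-1)
```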
In another embodiment, still referring to fig. 6, the mobile device matches pairs of image features between the two light intensity maps. Using the pixel correspondence between the first depth map and the first light intensity map captured at the first position C, and the pixel correspondence between the second depth map and the second light intensity map captured at the second position D, the pixel positions of the same object measurement point corresponding to a matched feature pair are obtained in the first depth map and the second depth map respectively, together with their correspondence.
It should be noted that this embodiment describes, for the case where the mobile device moves within a single plane, how to determine the correspondence between the pixel positions of the first depth map and the pixel positions of the second depth map according to the positional relationship. In other embodiments, the mobile device may also move between different planes.
In step S130, first depth data in the first depth map and second depth data in the second depth map are respectively extracted according to the first light intensity data and/or the second light intensity data and a light intensity condition.
The light intensity condition includes: an exposure threshold, and a rule, set based on the exposure threshold, for selecting or discarding depth data in the first depth map or the second depth map. The exposure threshold includes, but is not limited to, at least one of: an overexposure intensity threshold, an underexposure intensity threshold, and a segmentation threshold for separating foreground and background. The part of a light intensity map whose light intensity data is above the overexposure intensity threshold is overexposed, and an accurate depth value cannot be measured for the corresponding part of the depth map. The part of a light intensity map whose light intensity data is below the underexposure intensity threshold is underexposed, and an accurate depth value likewise cannot be measured for the corresponding part of the depth map. The foreground and background portions of a depth map or light intensity map can be determined according to different segmentation thresholds. Here, the exposure threshold may be a fixed value, or may be determined by analyzing the first light intensity map or the second light intensity map; at least two exposure thresholds may also be set according to the first exposure level and the second exposure level. The rule includes at least one of: a rule for selecting depth data, set so as to preserve the depth data of certain regions of the depth map, and a rule for discarding depth data, set so as to remove the depth data of certain regions of the depth map.
Taking as an example a light intensity condition that includes a preset overexposure intensity threshold used to select the corresponding second depth data in the second depth map based on the light intensity data of the second light intensity map, and a rule with a preset underexposure intensity threshold used to select the corresponding first depth data in the first depth map based on the light intensity data of the first light intensity map, where the exposure level of the second light intensity map is higher than that of the first light intensity map: the first depth data extracted by the mobile device from the first depth map includes the depth data at the pixel positions in the first depth map that correspond to the pixel positions of the light intensity data selected from the first light intensity map; the second depth data extracted by the mobile device from the second depth map includes the depth data at the pixel positions in the second depth map that correspond to the pixel positions of the light intensity data selected from the second light intensity map.
Taking as another example a light intensity condition that includes a preset overexposure intensity threshold and a rule of selecting, in the depth map corresponding to either light intensity map, the depth data whose light intensity data is below the overexposure intensity threshold: when the second exposure level is higher than the first exposure level, the extracted first depth data includes the depth data at the pixel positions in the first depth map that correspond to the pixel positions of the light intensity data selected from the first light intensity map, and the extracted second depth data includes the depth data at the pixel positions in the second depth map that correspond to the pixel positions of the light intensity data selected from the second light intensity map.
It should be noted that the overexposure intensity threshold and the underexposure intensity threshold in the above examples may be fixed values, or may be set according to the first light intensity map and the second light intensity map. For example, the overexposure intensity threshold or the underexposure intensity threshold may be set based on whichever of the first light intensity map and the second light intensity map is closer to, or lies within, a preset ideal exposure range.
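A minimal sketch of this extraction step for the case where the first map has the lower exposure and the second the higher; the 8-bit threshold values, the NaN marking of removed pixels, and the function name are assumptions for illustration.

```python
import numpy as np

def extract_depth_data(depth_low, intensity_low, depth_high, intensity_high,
                       under_thresh=20, over_thresh=235):
    """Apply the light intensity condition: in the lower-exposure pair, drop depth data
    whose intensity is below the underexposure threshold; in the higher-exposure pair,
    drop depth data whose intensity is above the overexposure threshold.
    Removed pixels are marked NaN so the synthesis step can ignore them."""
    first_depth_data = np.where(intensity_low >= under_thresh, depth_low, np.nan)
    second_depth_data = np.where(intensity_high <= over_thresh, depth_high, np.nan)
    return first_depth_data, second_depth_data
```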
In yet another embodiment, the mobile device may analyze the object contours or the indistinguishable edges of image regions in the first light intensity map or the second light intensity map to obtain a segmentation threshold, and set the light intensity condition using the obtained threshold. For example, the first light intensity map is divided into a plurality of image blocks, a segmentation threshold for each image block is determined using Otsu's method, the light intensity condition of the corresponding image block is set according to that threshold, and the light intensity data and its pixel positions selected from each image block of the first light intensity map are obtained according to the determined light intensity condition, thereby obtaining the foreground portion or the background portion of the first light intensity map. The pixel positions at which light intensity data is retained in the first light intensity map are used to determine the depth data retained in the corresponding first depth map. The pixel positions at which depth data is missing in the first depth map, together with the correspondence between the pixel positions of the two depth maps, are used to determine the second depth data retained in the second depth map. In this way, a foreground depth region and a background depth region of the overlapping field-of-view area can be obtained from the two depth maps respectively.
In the above example of the segmentation threshold, the method of determining the threshold is not limited to Otsu's method; the segmentation threshold may also be determined by recognizing object contours or the contours of overexposed/underexposed regions. In addition, the manner of extracting the first depth data and the second depth data is not limited to the example shown; for instance, the first depth data and the second depth data may be extracted using segmentation thresholds set separately for the two light intensity maps, with the regions formed by the extracted first and second depth data at least matching or overlapping in contour. This will not be described in further detail here.
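For the block-wise segmentation threshold described above, a sketch using a plain NumPy implementation of Otsu's method is given below; the block size, the 8-bit intensity assumption, and the convention that pixels at or above the block threshold form the retained (foreground) region are illustrative choices rather than requirements of the disclosure.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method on an 8-bit intensity block: pick the threshold that maximizes
    the between-class variance of the foreground and background pixels."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

def blockwise_foreground_mask(intensity, block=32):
    """Split the light intensity map into image blocks, threshold each block with Otsu's
    method, and return a mask marking the pixels whose depth data is to be retained."""
    mask = np.zeros_like(intensity, dtype=bool)
    h, w = intensity.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = intensity[y:y + block, x:x + block]
            mask[y:y + block, x:x + block] = tile >= otsu_threshold(tile)
    return mask
```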
Step S140 is performed according to the first depth data and the second depth data extracted in step S130 and the correspondence determined in step S120.
In step S140, the first depth data and the second depth data are synthesized according to the corresponding relationship to obtain a new depth map.
In an embodiment, according to the correspondence between the pixel positions of the first depth map and the pixel positions of the second depth map, the second depth data is converted into depth data corresponding to the first position, and the converted depth data and the first depth data are synthesized to obtain a new depth map. Specifically, taking the first depth data as the reference, the second depth data of each object measurement point is converted into depth data corresponding to the first position, and a new depth map is derived from the converted depth data and/or the first depth data. The depth data in the new depth map is determined as follows: in the overlapping region of the first depth data and the second depth data, the first depth data and/or the converted second depth data is selected; where there is no overlap, the first depth data or the second depth data is taken directly. For example, by comparing the first light intensity data and the second light intensity data of the same object measurement point and determining that the first light intensity data is the more suitable one, the first depth data is selected as the depth data of the overlapping region. Which light intensity data of an object measurement point is suitable may be determined based on the working environment of the mobile device, the type of the object corresponding to the object measurement point, the color of that object, and the like. As another example, a weighted sum of the converted depth data and the first depth data is taken as the depth data of the overlapping region. A sketch of this fusion rule is given below.
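For illustration only, the following sketch outlines this fusion rule, assuming both depth maps are already expressed at the first position as float arrays with NaN marking missing depth; the nominal "ideal" intensity level and the weight are illustrative assumptions.

```python
import numpy as np

def fuse_depth(first_depth, converted_second_depth, first_intensity, second_intensity,
               ideal=128.0, weight=0.5, use_weighted_sum=False):
    """Merge two aligned depth maps into a new depth map: in the overlapping
    region keep the depth whose light intensity is closer to a nominal
    well-exposed level (or blend them); elsewhere take whichever map has data."""
    both = ~np.isnan(first_depth) & ~np.isnan(converted_second_depth)
    only_first = ~np.isnan(first_depth) & np.isnan(converted_second_depth)
    only_second = np.isnan(first_depth) & ~np.isnan(converted_second_depth)

    fused = np.full_like(first_depth, np.nan)
    fused[only_first] = first_depth[only_first]
    fused[only_second] = converted_second_depth[only_second]

    if use_weighted_sum:
        # Weighted sum over the overlapping region.
        fused[both] = weight * first_depth[both] + (1 - weight) * converted_second_depth[both]
    else:
        # Keep the depth whose light intensity is better exposed.
        prefer_first = np.abs(first_intensity - ideal) <= np.abs(second_intensity - ideal)
        fused[both & prefer_first] = first_depth[both & prefer_first]
        fused[both & ~prefer_first] = converted_second_depth[both & ~prefer_first]
    return fused
```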
In an embodiment, referring to fig. 7 again, the object measurement point N in fig. 7 appears in both the first depth map and the second depth map, and its depth data in the second depth map is x2. The pixel position corresponding to the depth data x2 in the first depth map can be determined according to the correspondence, and the depth data x2 can be converted into depth data x2' at that pixel position. According to the light intensity data respectively corresponding to the object measurement point N in the first depth map and the second depth map, the converted depth data x2' may be selected as the depth data of the object measurement point N in the new depth map, or the first depth data of the object measurement point N may be selected as its depth data in the new depth map. Alternatively, the converted depth data x2' and the first depth data are weighted and summed to serve as the depth data of the object measurement point N in the new depth map.
In another embodiment, the object measurement point O appears only in the first depth data, and the second depth data does not contain depth data for the object measurement point O. In this case, the first depth data of the object measurement point O can be used directly as the depth data of the object measurement point O in the new depth map.
In yet another embodiment, the object measurement point P appears only in the second depth data, and the first depth data does not contain depth data for the object measurement point P. The second depth data of the object measurement point P is converted into depth data at the first position, and the converted depth data can be used directly as the depth data of the object measurement point P in the new depth map.
It should be noted that this embodiment describes, for the case where the mobile device moves within the same plane, how to convert the second depth data into depth data corresponding to the first position. In other embodiments, the mobile device may also move between different planes; as long as the relative positional relationship between the two positions can be determined from information such as the motion data of the mobile device, the second depth data can still be converted into depth data corresponding to the first position. A sketch of such a conversion is given below.
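For illustration only, a sketch of such a conversion follows, assuming a pinhole camera model with a known intrinsic matrix K and a relative pose (R, t) of the second position expressed in the first position's frame derived from the motion data; all names are hypothetical.

```python
import numpy as np

def convert_depth_to_first_position(second_depth, K, R, t):
    """Re-express depth measured at the second position as depth seen from the
    first position. second_depth is an HxW array of z-depth values with NaN
    for missing data; K is the 3x3 intrinsic matrix of the depth detection device."""
    h, w = second_depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = ~np.isnan(second_depth)

    # Back-project pixels of the second depth map to 3-D points in the second frame.
    z = second_depth[valid]
    pts2 = np.linalg.inv(K) @ np.vstack([u[valid] * z, v[valid] * z, z])

    # Transform the points into the first frame and re-project them onto the image plane.
    pts1 = R @ pts2 + t.reshape(3, 1)
    proj = K @ pts1
    u1 = np.round(proj[0] / proj[2]).astype(int)
    v1 = np.round(proj[1] / proj[2]).astype(int)

    converted = np.full((h, w), np.nan)
    inside = (u1 >= 0) & (u1 < w) & (v1 >= 0) & (v1 < h) & (pts1[2] > 0)
    converted[v1[inside], u1[inside]] = pts1[2][inside]   # depth as z in the first frame
    return converted
```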
In yet another embodiment, the first depth data may be converted into depth data corresponding to the second position according to the correspondence, and the converted depth data and the second depth data may be synthesized to obtain a new depth map. The synthesis procedure and manner are the same as or similar to those described in the previous example and are not repeated here.
Since each pixel position of the first depth map corresponds one to one to a pixel position of the first light intensity map, and each pixel position of the second depth map corresponds one to one to a pixel position of the second light intensity map, the image data processing method of the mobile device of the present application further includes step S210 and step S220. Through these two steps, a new high-quality light intensity map free of overexposure and underexposure can also be obtained from the first light intensity map and the second light intensity map.
In step S210, third light intensity data corresponding to a pixel position of the first depth data and fourth light intensity data corresponding to a pixel position of the second depth data are extracted.
Specifically, the third light intensity data corresponding one to one to the pixel positions of the first depth data is extracted from the first light intensity data, and the fourth light intensity data corresponding one to one to the pixel positions of the second depth data is extracted from the second light intensity data.
In step S220, the third light intensity data and the fourth light intensity data are synthesized according to the corresponding relationship to obtain a new light intensity map.
Specifically, according to a correspondence between a pixel position of the first depth map and a pixel position of the second depth map, a correspondence between the third light intensity data and the fourth light intensity data is determined, and then the third light intensity data corresponding to the pixel position of the first depth data and the fourth light intensity data corresponding to the pixel position of the second depth data are synthesized.
In an embodiment, matching image feature pairs in the first light intensity map and the second light intensity map are determined by means of image feature tracking/matching, the correspondence of all pixel positions of the overlapping field-of-view region in the two light intensity maps is determined using these image feature pairs, and the two light intensity maps are then registered using the determined correspondence. A sketch of such feature-based registration follows.
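For illustration only, the following sketch uses ORB features and a RANSAC homography as stand-ins for the unspecified tracking/matching method; it assumes 8-bit grayscale light intensity maps and approximates the pixel correspondence of the overlapping field of view with a single homography, which is an assumption rather than the disclosed procedure.

```python
import cv2
import numpy as np

def match_intensity_maps(first_intensity, second_intensity):
    """Detect and match image features in the two light intensity maps and
    estimate a mapping from pixel positions of the second map to the first map."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(first_intensity, None)
    kp2, des2 = orb.detectAndCompute(second_intensity, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H   # maps second-map pixel positions into the first map
```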
In yet another embodiment, knowing the pixel position of the object measurement point Q in the second depth map, the pixel position of the object measurement point Q in the second light intensity map, and the correspondence of pixel positions between the first depth map and the second depth map, the pixel position of the object measurement point Q in the first depth map can be determined; further, based on the one-to-one correspondence between the pixel positions of a light intensity map and its depth map, the pixel position of the object measurement point Q in the first light intensity map can be determined.
The third light intensity data and the fourth light intensity data are synthesized according to the correspondence between the pixel positions of the first light intensity map and the pixel positions of the second light intensity map to obtain a new light intensity map. The light intensity data in the new light intensity map is determined as follows: in the overlapping region of the third light intensity data and the fourth light intensity data, the third light intensity data and/or the fourth light intensity data is selected; where there is no overlap, the third light intensity data or the fourth light intensity data is taken directly. For example, by comparing the third light intensity data and the fourth light intensity data of the same object measurement point, the more suitable light intensity data is determined. Which light intensity data of an object measurement point is suitable may be determined based on the working environment of the mobile device, the type of the object corresponding to the object measurement point, the color of that object, and the like.
In one embodiment, referring again to fig. 7, the object measurement point N in fig. 7 appears in both the first depth map and the second depth map, and therefore also in both the first light intensity map and the second light intensity map. When synthesizing the new light intensity map, the fourth light intensity data may be selected as the light intensity data of the object measurement point N in the new light intensity map, or the third light intensity data of the object measurement point N may be selected as its light intensity data in the new light intensity map; alternatively, the fourth light intensity data and the third light intensity data are weighted and summed to serve as the light intensity data of the object measurement point N in the new light intensity map.
In another embodiment, the object measurement point O appears only in the third light intensity data, and the fourth light intensity data does not contain light intensity data for the object measurement point O. The third light intensity data of the object measurement point O can be used directly as the light intensity data of the object measurement point O in the new light intensity map.
In yet another embodiment, the object measurement point P appears only in the fourth light intensity data, and the third light intensity data does not include light intensity data for the object measurement point P. The fourth light intensity data of the object measurement point P is taken as the light intensity data of the object measurement point P in the new light intensity map.
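For illustration only, the light intensity synthesis can be sketched analogously to the depth synthesis above, again assuming float arrays with NaN marking positions without retained data and a hypothetical nominal well-exposed level.

```python
import numpy as np

def fuse_intensity(third, fourth, ideal=128.0):
    """Merge the retained light intensity data into a new light intensity map:
    take whichever map has data, and in the overlap keep the value closer to a
    nominal well-exposed level (averaging would be an alternative)."""
    fused = np.where(np.isnan(third), fourth, third)          # start from whichever is present
    both = ~np.isnan(third) & ~np.isnan(fourth)
    prefer_fourth = np.abs(fourth - ideal) < np.abs(third - ideal)
    fused[both & prefer_fourth] = fourth[both & prefer_fourth]
    return fused
```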
Referring to fig. 8, fig. 8 is a block diagram illustrating a hardware structure of a mobile device 20 according to an embodiment of the present application. The mobile device includes a depth detection device 201, a storage device 203, and a processing device 202. The mobile device 20 of the present application can synthesize the depth maps and light intensity maps captured with different exposure amounts during the movement of the mobile device 20, thereby obtaining a depth map and a light intensity map that are free of overexposure and underexposure and cover a larger field-of-view range.
The mobile device 20 is a device that can perform specific work while moving. Examples of the mobile device 20 include: mobile phones, mobile robots, robotic arms equipped with depth detection devices, and the like. For example, when the mobile device is a mobile robot, it can be used indoors or outdoors, in industrial, commercial, or household settings; it can replace personnel for security patrol, reception, or order taking, clean floors in place of people, and serve in family companionship, office assistance, and the like.
The mobile device 20 is provided with a depth detection device 201. The depth detection device 201 is used to capture depth maps and light intensity data, and provides the first depth map and its first light intensity data as well as the second depth map and its second light intensity data required by the image data processing method of the present application. The first depth map is captured by the depth detection device at the first position at the first exposure level, and the second depth map is captured by the depth detection device at the second position at the second exposure level. The first position, the first exposure level, the second position, and the second exposure level are the same as or similar to those described above and are not repeated here. The depth data of each pixel in a depth map corresponds to light intensity data, and the pixel positions of the light intensity data in the light intensity map correspond one to one to the pixel positions of the depth data in the depth map.
Here, the depth detection device 201 includes, but is not limited to: an image pickup device including a light receiver, an image pickup device integrating a light receiver and a light emitter (such as a ToF sensor), and the like. The light receiver is exemplified by a depth measuring unit and a light sensor. The depth measuring unit includes an area-array depth measuring unit and a dot-array depth measuring unit, and includes, but is not limited to: a lidar sensor, a time-of-flight based depth measuring unit, a structured-light based depth measuring unit, and the like. An example of the area-array depth measuring unit is an array of photodiodes. Examples of the light sensor are a CCD or a CMOS. An example of the light emitter is an infrared emitter (e.g., an infrared LED array).
The depth map represents the object image within the captured field-of-view range using depth data acquired at preset pixel positions. The depth data of each pixel position in each depth map includes the distance information between the mobile device and the object measurement point, and the angle information between the mobile device and the object measurement point can be determined based on the position of the pixel in the depth map. The depth map directly reflects the geometry of the visible surfaces of the objects within the field of view captured by the depth detection device, and three-dimensional space point cloud data of the objects in the field of view can be obtained from the depth map, as sketched below.
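For illustration only, the following sketch shows how such point cloud data could be obtained from a depth map under a pinhole camera model; the intrinsic matrix K is an assumed input and is not specified in the disclosure.

```python
import numpy as np

def depth_map_to_point_cloud(depth_map, K):
    """Back-project a depth map into three-dimensional space point cloud data.
    depth_map holds z-depth per pixel (NaN or <=0 where invalid); K is the 3x3
    intrinsic matrix of the depth detection device."""
    h, w = depth_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = ~np.isnan(depth_map) & (depth_map > 0)
    z = depth_map[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    return np.column_stack([x, y, z])   # N x 3 points in the camera frame
```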
The light intensity data acquired at the preset pixel positions may represent the object image within the captured field of view as a light intensity map. The light intensity data of each pixel position includes the light energy value reflected by the corresponding object measurement point, i.e., the light intensity data represents the brightness value corresponding to each object measurement point. Note that the same brightness value may correspond to different colors, such as red or green. The brightness value lies between 0 and 255; values near 255 indicate high brightness and values near 0 indicate low brightness. For example, if the light intensity map is a grayscale map, the light intensity data may be represented by the grayscale data of the grayscale map. For another example, if the light intensity map is a color map, the light intensity data may be the grayscale data of the R channel, the G channel, or the B channel, i.e., the color map includes three types of grayscale data. Each pixel position in the depth map or the light intensity map corresponds to an object measurement point in the actual physical space.
Wherein one of the first and second exposure levels corresponds to a high exposure and the other corresponds to a low exposure. Specifically, the second exposure level is a low exposure if the first exposure level is a high exposure; the second exposure level is a high exposure if the first exposure level is a low exposure. The actual exposure amounts of the high exposure and the low exposure may be set according to factors such as the operating environment of the mobile device and the focal length of the depth detection apparatus.
In one embodiment, the shutter in the depth detection device 201 switches between different shutter speeds at a preset switching frequency to provide the first exposure level and the second exposure level. With the depth detection device 201 in the same illumination environment, the faster the shutter speed (i.e., the shorter the shutter time), the less light enters and the lower the exposure level; conversely, the slower the shutter speed, the higher the exposure level.
In another embodiment, the light emitter in the depth detection device 201 switches between different illumination levels at a preset switching frequency to provide the first exposure level and the second exposure level. Different exposure levels of the depth detection device 201 can be achieved by controlling the light emitter to output different powers while keeping the same shutter speed. Specifically, with other conditions unchanged, the higher the output power of the light emitter, the brighter the illumination environment in which the depth detection device 201 is located and the higher the exposure level; conversely, the lower the output power, the darker the illumination environment and the lower the exposure level.
The storage device 203 is used for storing at least one program, and the at least one program can be used for the processing device 202 to execute the image data processing method of the mobile device. For example, the storage device 203 also stores a control strategy of the exposure level. The control strategy is used for controlling the mobile device to switch different exposure levels according to a preset frequency in the motion process of the mobile device so as to obtain depth maps at different exposure levels. In particular, the control strategy is used to control the mobile device to perform a periodic variation of the exposure level. For example, the exposure level in one period is composed of one high exposure and one low exposure. As another example, the exposure level during a cycle consists of one high exposure and a plurality of consecutive low exposures. As another example, the exposure level in one cycle is composed of a plurality of consecutive high exposures and one low exposure. The period is related to the exposure levels of the high and low exposures, the time interval between two adjacent images taken by the depth detection device 201, etc.
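For illustration only, the periodic control strategy can be sketched as a simple exposure schedule; the period composition, the generator name, and the commented-out camera calls are hypothetical.

```python
from itertools import cycle

def exposure_schedule(period=("high", "low", "low")):
    """Yield exposure levels frame by frame according to a periodic control
    strategy, e.g. one high exposure followed by several consecutive low exposures."""
    yield from cycle(period)

# Hypothetical usage:
# levels = exposure_schedule()
# for _ in range(6):
#     level = next(levels)
#     # adjust the shutter speed or the light emitter's output power for this level,
#     # e.g. camera.set_shutter(...) or emitter.set_power(...)  (hypothetical APIs)
```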
Here, the storage device 203 includes, but is not limited to: Read-Only Memory (ROM), Random Access Memory (RAM), and Non-Volatile RAM (NVRAM). For example, the storage device 203 includes a flash memory device or other non-volatile solid-state storage devices. In certain embodiments, the storage device 203 may also include memory remote from the one or more processing devices, such as network-attached storage accessed via RF circuitry or external ports and a communication network, which may be the Internet, one or more intranets, Local Area Networks (LANs), Wide Area Networks (WANs), Storage Area Networks (SANs), etc., or a suitable combination thereof. The storage device 203 may also include a memory controller that controls access to the memory by components of the mobile device such as the Central Processing Unit (CPU), interface devices, or other components.
The processing device 202 is connected to the depth detection device 201 and the storage device 203, and is configured to call and execute the at least one program so as to coordinate the depth detection device 201 and the storage device 203 in executing and implementing the image data processing method of the mobile device described above. The processing device includes one or more general-purpose microprocessors, one or more Application-Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), one or more Field Programmable Gate Arrays (FPGAs), or any combination thereof. The processing device is also operatively coupled with I/O ports and input structures that enable the mobile device to interact with various other electronic devices; in a mobile robot, these include, but are not limited to, the travel motors of the movement mechanism, or a slave processor such as a Micro Controller Unit (MCU) dedicated to controlling the movement mechanism and the cleaning device. The processing device 202 can perform data read and write operations with the storage device 203, and performs functions such as extracting images, extracting motion data of the mobile device, and determining the relative positional relationship of the mobile device between different positions based on the motion data.
The processing device 202 may acquire the first and second depth maps from a camera device integrating a light receiver and a light emitter. Please refer to fig. 4, which is a schematic diagram of a hardware structure of an embodiment in which the mobile device is an autonomous cleaning robot. The body side of the mobile device is equipped with a camera device 22 integrating a light receiver and a light emitter. The camera device 22 may also be disposed at other positions where the mobile robot can obtain a depth map of the traveling direction, for example, at the edge of the top surface of the mobile robot. The processing device is connected to the camera device 22 through an interface device (not shown) to obtain the depth maps. The processing device of the mobile device may also obtain the first and second depth maps from a camera device containing only a light receiver. Please refer to fig. 5, which is a schematic diagram of a hardware structure of another embodiment in which the mobile device is an autonomous cleaning robot. The body side of the mobile device is equipped with a camera device 422 integrating a light receiver, and the body side is also provided with a light emitter 421 connected with the camera device 422. The installation position of the light emitter 421 and its parameters such as wavelength need to be matched with the light receiver in the camera device 422, so that the light receiver can receive the signal emitted by the light emitter and generate a corresponding depth map.
In an embodiment, the mobile device 20 further comprises a motion measurement apparatus (not shown) for providing motion data of the mobile device 20. The motion measurement devices include, but are not limited to: odometers, accelerometers, inertial navigation devices, and the like. The motion data includes, but is not limited to: velocity, acceleration, yaw angle, displacement data, etc. From the motion data, the positional relationship of the first position and the second position described in the foregoing can be determined.
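For illustration only, the following sketch derives the positional relationship between the first and second positions from odometry-style motion data, assuming planar motion; it returns the rotation and translation of the second position expressed in the first position's frame, the form used by the depth conversion sketch above. The function name and argument layout are assumptions.

```python
import math
import numpy as np

def relative_pose_from_odometry(x1, y1, yaw1, x2, y2, yaw2):
    """Compute the pose of position 2 relative to position 1 from planar
    odometry data (positions and yaw angles in a common world frame)."""
    # World-frame displacement, rotated into position 1's frame.
    dx, dy = x2 - x1, y2 - y1
    c, s = math.cos(yaw1), math.sin(yaw1)
    t = np.array([c * dx + s * dy, -s * dx + c * dy, 0.0])

    dyaw = yaw2 - yaw1
    R = np.array([[math.cos(dyaw), -math.sin(dyaw), 0.0],
                  [math.sin(dyaw),  math.cos(dyaw), 0.0],
                  [0.0,             0.0,            1.0]])
    return R, t   # a point p2 in frame 2 maps to R @ p2 + t in frame 1
```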
Those of ordinary skill in the art will appreciate that the various illustrative algorithmic steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Additionally, the flowcharts and system block diagrams in the figures described above illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the present application also discloses a computer-readable storage medium that stores a program; the computer programs related to the foregoing embodiments, such as a computer program implementing the image data processing method of the mobile device described in the present application, may be stored in this computer-readable storage medium. The computer-readable storage medium may include Read-Only Memory (ROM), Random Access Memory (RAM), EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, a USB flash drive, a removable hard disk, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are intended to be non-transitory, tangible storage media. Disk and disc, as used in this application, include Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
The image data processing method of the mobile device described in the present application may be implemented by hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The steps of the methods disclosed herein may be embodied in processor-executable software modules, which may be located on a tangible, non-transitory computer-readable and writable storage medium. Tangible, non-transitory computer readable and writable storage media may be any available media that can be accessed by a computer.
In summary, the image data processing method of the mobile device of the present application synthesizes a high-quality depth map and a high-quality light intensity map that are free of overexposure and underexposure. In the scheme of the present application, a first depth map and its first light intensity data and a second depth map and its second light intensity data captured by the depth detection device are obtained; the correspondence between the pixel positions of the first depth map and the pixel positions of the second depth map is determined according to the positional relationship between the first position and the second position; first depth data in the first depth map and second depth data in the second depth map are extracted respectively according to the first light intensity data and/or the second light intensity data and a light intensity condition; and the first depth data and the second depth data are synthesized according to the correspondence to obtain a new depth map. The scheme of the present application can synthesize images captured with different exposure amounts during the movement of the mobile device, thereby obtaining images that are neither overexposed nor underexposed and cover a larger field-of-view range.
The above embodiments are merely illustrative of the principles and utilities of the present application and are not intended to limit the application. Any person skilled in the art can modify or change the above-described embodiments without departing from the spirit and scope of the present application. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical concepts disclosed in the present application shall be covered by the claims of the present application.

Claims (18)

1. An image data processing method of a mobile device, the mobile device having a depth detection apparatus, the image data processing method comprising:
acquiring a first depth map and first light intensity data thereof, a second depth map and second light intensity data thereof shot by the depth detection device; wherein the first depth map is captured by the depth detection device at a first position at a first exposure level, and the second depth map is captured by the depth detection device at a second position at a second exposure level;
determining the corresponding relation between the pixel position of the first depth map and the pixel position of the second depth map according to the position relation between the first position and the second position;
extracting first depth data in the first depth map and second depth data in the second depth map respectively according to the first light intensity data and/or the second light intensity data and a light intensity condition;
and synthesizing the first depth data and the second depth data according to the corresponding relation to obtain a new depth map.
2. The method of claim 1, wherein one of the first exposure level and the second exposure level corresponds to a high exposure and the other corresponds to a low exposure.
3. The image data processing method of a mobile device according to claim 1, wherein when the second exposure level is stronger than the first exposure level, the extracted first depth data comprises: the depth data remaining in the first depth map after underexposed depth data is removed; and the extracted second depth data comprises: the depth data remaining in the second depth map after overexposed depth data is removed.
4. The image data processing method of the mobile device according to claim 1 or 3, wherein the depth detection device obtains a first light intensity map and a second light intensity map when respectively capturing the first depth map and the second depth map; the depth data of each pixel position in the first depth map corresponds to the light intensity data of each pixel position in the first light intensity map one by one, and the depth data of each pixel position in the second depth map corresponds to the light intensity data of each pixel position in the second light intensity map one by one.
5. The image data processing method of the mobile device according to claim 4, further comprising:
extracting third light intensity data corresponding to pixel positions of the first depth data and fourth light intensity data corresponding to pixel positions of the second depth data;
and synthesizing the third light intensity data and the fourth light intensity data according to the corresponding relation to obtain a new light intensity map.
6. The method according to claim 3, wherein the synthesizing the first depth data and the second depth data according to the correspondence to obtain a new depth map comprises at least one of:
converting the second depth data into depth data corresponding to a first position according to the corresponding relation, and synthesizing the converted depth data and the first depth data to obtain a new depth map; or,
and converting the first depth data into depth data corresponding to a second position according to the corresponding relation, and synthesizing the converted depth data and the second depth data to obtain a new depth map.
7. The method of image data processing for a mobile device of claim 1, wherein the depth data in the new depth map comprises: selecting the first depth data and/or the second depth data based on an overlapping region of the first depth data and the second depth data.
8. The method according to claim 1, wherein the first depth map and the second depth map are captured at a first exposure level and a second exposure level, respectively, during the motion of the mobile device according to a preset switching frequency.
9. The image data processing method of a mobile device according to claim 1 or 8, wherein the first depth map and the second depth map are any two adjacent images of a plurality of continuously captured images.
10. The image data processing method of a mobile device according to claim 1 or 8, wherein the first exposure level and the second exposure level are obtained by controlling the depth detection means to have different shutter speeds.
11. The method according to claim 1 or 8, wherein the first exposure level and the second exposure level are obtained by controlling the depth detection device to be in different illumination environments.
12. The method according to claim 1, wherein the positional relationship is obtained from motion data obtained from the mobile device between the first position and the second position.
13. A mobile device, comprising:
the depth detection device is used for shooting a depth map and light intensity data;
a storage device for storing at least one program;
processing means, connected to the depth detection means and the storage means, for invoking and executing the at least one program to coordinate the depth detection means and the storage means to execute and implement the image data processing method according to any one of claims 1 to 12.
14. The mobile device of claim 13, further comprising a motion measurement apparatus for providing motion data of the mobile device.
15. The mobile device of claim 13, wherein the shutter in the depth detection device switches between different shutter speeds at a preset switching frequency to provide the first exposure level and the second exposure level.
16. The mobile device of claim 13, wherein the light emitter in the depth detection device switches between different illumination levels at a preset switching frequency to provide the first exposure level and the second exposure level.
17. A control system for a mobile device, the mobile device being equipped with a depth detection apparatus, the control system comprising:
interface means for receiving the depth map and light intensity data captured from the depth detection means;
a storage device for storing at least one program;
processing means, connected to said interface means and storage means, for invoking and executing said at least one program to coordinate said interface means, storage means and depth detection means to execute and implement the image data processing method according to any one of claims 1-12.
18. A computer-readable storage medium characterized by storing at least one program which, when called, executes and implements the image data processing method according to any one of claims 1 to 12.
CN202010069936.5A 2020-01-20 2020-01-20 Image data processing method, control system and storage medium for mobile device Expired - Fee Related CN111246120B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010069936.5A CN111246120B (en) 2020-01-20 2020-01-20 Image data processing method, control system and storage medium for mobile device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010069936.5A CN111246120B (en) 2020-01-20 2020-01-20 Image data processing method, control system and storage medium for mobile device

Publications (2)

Publication Number Publication Date
CN111246120A true CN111246120A (en) 2020-06-05
CN111246120B CN111246120B (en) 2021-11-23

Family

ID=70871894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010069936.5A Expired - Fee Related CN111246120B (en) 2020-01-20 2020-01-20 Image data processing method, control system and storage medium for mobile device

Country Status (1)

Country Link
CN (1) CN111246120B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820756A (en) * 2022-04-22 2022-07-29 北京有竹居网络技术有限公司 Depth map processing method and device, electronic device and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104539934A (en) * 2015-01-05 2015-04-22 京东方科技集团股份有限公司 Image collecting device and image processing method and system
CN107229903A (en) * 2017-04-17 2017-10-03 深圳奥比中光科技有限公司 Method, device and the storage device of robot obstacle-avoiding
US20170289515A1 (en) * 2016-04-01 2017-10-05 Intel Corporation High dynamic range depth generation for 3d imaging systems
CN109300155A (en) * 2018-12-27 2019-02-01 常州节卡智能装备有限公司 A kind of obstacle-avoiding route planning method, device, equipment and medium
CN109819173A (en) * 2017-11-22 2019-05-28 浙江舜宇智能光学技术有限公司 Depth integration method and TOF camera based on TOF imaging system
CN110378945A (en) * 2019-07-11 2019-10-25 Oppo广东移动通信有限公司 Depth map processing method, device and electronic equipment

Also Published As

Publication number Publication date
CN111246120B (en) 2021-11-23

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20211123