CN112700482A - Camera depth resolution determination method and device, storage medium and intelligent device - Google Patents


Info

Publication number
CN112700482A
CN112700482A (application number CN201911011010.4A)
Authority
CN
China
Prior art keywords
camera
depth resolution
depth
resolution
horizontal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911011010.4A
Other languages
Chinese (zh)
Other versions
CN112700482B (en)
Inventor
方巍 (Fang Wei)
熊友军 (Xiong Youjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN201911011010.4A
Publication of CN112700482A
Application granted
Publication of CN112700482B
Active legal status (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application is applicable to the technical field of image processing, and provides a camera depth resolution determination method and device, a storage medium, and an intelligent device. The camera depth resolution determination method comprises the following steps: acquiring an initial depth resolution of a camera; adjusting the depth resolution of the camera step by step according to the initial depth resolution and a preset increment; acquiring, at each of the different depth resolutions of the camera, the duration for which the corresponding depth data is transmitted from a sending node to a receiving node; and determining the optimal depth resolution according to the durations of the camera at the different depth resolutions. The method and device can select the most suitable depth resolution without degrading system performance, thereby improving system efficiency.

Description

Camera depth resolution determination method and device, storage medium and intelligent device
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a camera depth resolution determination method and device, a storage medium and an intelligent device.
Background
RGBD cameras are mainly used for obstacle-avoidance navigation in current robot navigation applications. In the blind pursuit of high resolution, an RGBD camera with a large depth resolution is usually selected; however, a camera with a large depth resolution generates an excessive volume of data in use, which burdens system performance and leads to low system efficiency.
Disclosure of Invention
The embodiment of the application provides a camera depth resolution determination method and device, a storage medium, and an intelligent device, which can solve the problem that a camera with a large depth resolution generates an excessive data volume in use, thereby affecting system performance and lowering system efficiency.
In a first aspect, an embodiment of the present application provides a method for determining a depth resolution of a camera, including:
acquiring an initial depth resolution of a camera;
adjusting the depth resolution of the camera step by step according to the initial depth resolution and a preset increment;
acquiring the time length of the corresponding depth data transmitted from the sending node to the receiving node under different depth resolutions of the camera;
and determining the optimal depth resolution according to the duration of the camera under different depth resolutions.
In a possible implementation manner of the first aspect, the step of determining an optimal depth resolution according to the durations of the camera at different depth resolutions includes:
respectively determining, for each of the different depth resolutions of the camera, the average of the durations for which a specified number of pieces of depth data are transmitted from the sending node to the receiving node, to obtain the average durations at the different depth resolutions;
and determining the maximum depth resolution corresponding to the average duration meeting the preset duration as the optimal depth resolution of the camera.
In a possible implementation manner of the first aspect, the step of acquiring an initial depth resolution of the camera includes:
acquiring a view range angle of the camera;
acquiring the farthest recognition distance required by the camera and the horizontal length and the vertical length of the smallest object to be recognized by the camera;
determining an initial depth resolution of the camera based on the field of view range angle, the farthest recognition distance, and a horizontal length and a vertical length of the smallest object.
In a possible implementation manner of the first aspect, the determining the initial depth resolution of the camera according to the field-of-view angle, the farthest recognition distance, and the horizontal length and the vertical length of the smallest object includes:
calculating a horizontal depth data value according to the horizontal angle and the farthest recognition distance;
calculating a vertical depth data value according to the vertical angle and the farthest recognition distance;
determining a horizontal initial depth resolution in the horizontal direction according to the horizontal depth data value and the horizontal length of the minimum object;
determining a vertical initial depth resolution in the vertical direction from the vertical depth data value and a vertical length of the smallest object.
In a possible implementation manner of the first aspect, the initial depth resolution is a product of the horizontal initial depth resolution and the vertical initial depth resolution.
In a possible implementation manner of the first aspect, the step of determining a horizontal initial depth resolution in the horizontal direction according to the horizontal depth data value and the horizontal length of the minimum object includes:
determining the horizontal initial depth resolution M_min according to the following formula:
M_min = 2x·tan(0.5θ_h)/J,
wherein x represents the farthest recognition distance, J represents the horizontal length of the smallest object, and θ_h represents the horizontal angle;
the step of determining a vertical initial depth resolution in the vertical direction from the vertical depth data value and the vertical length of the smallest object comprises:
determining the vertical initial depth resolution N_min according to the following formula:
N_min = 2x·tan(0.5θ_v)/K,
wherein x represents the farthest recognition distance, K represents the vertical length of the smallest object, and θ_v represents the vertical angle.
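For concreteness, the two formulas above can be sketched in Python as follows; the field-of-view angles, range, and object size in the example call are illustrative assumptions, not values taken from the application:

```python
import math

def initial_depth_resolution(x, theta_h_deg, theta_v_deg, J, K):
    """Compute M_min and N_min per the formulas above.

    x                        -- farthest recognition distance (meters)
    theta_h_deg, theta_v_deg -- horizontal/vertical field-of-view angles (degrees)
    J, K                     -- horizontal/vertical length of the smallest object (meters)
    """
    theta_h = math.radians(theta_h_deg)
    theta_v = math.radians(theta_v_deg)
    m_min = 2 * x * math.tan(0.5 * theta_h) / J  # M_min = 2x*tan(0.5*theta_h)/J
    n_min = 2 * x * math.tan(0.5 * theta_v) / K  # N_min = 2x*tan(0.5*theta_v)/K
    return m_min, n_min

# Illustrative call: 4 m range, 60 x 45 degree field of view, 5 cm smallest object.
m_min, n_min = initial_depth_resolution(4.0, 60.0, 45.0, 0.05, 0.05)
initial_resolution = m_min * n_min  # the product of the two, per the first aspect
```

With these assumed numbers, M_min is roughly 92 and N_min roughly 66, i.e. the camera needs on the order of 92 × 66 depth samples to resolve a 5 cm object at 4 m.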
In a second aspect, an embodiment of the present application provides an apparatus for determining depth resolution of a camera, including:
an initial resolution acquisition unit for acquiring an initial depth resolution of the camera;
an adjusted resolution obtaining unit, configured to adjust the depth resolution of the camera step by step by a predetermined increment according to the initial depth resolution;
the data transmission information acquisition unit is used for acquiring the time length of the corresponding depth data transmitted from the sending node to the receiving node under different depth resolutions of the camera;
and the resolution determining unit is used for determining the optimal depth resolution according to the duration of the camera under different depth resolutions.
In a third aspect, an embodiment of the present application provides an intelligent device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, implements the camera depth resolution determination method according to the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the camera depth resolution determination method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on a smart device, causes the smart device to perform the camera depth resolution determination method according to the first aspect.
The method comprises: obtaining an initial depth resolution of a camera; adjusting the depth resolution of the camera step by step by a preset increment according to the initial depth resolution; then obtaining the duration for which the corresponding depth data is transmitted from a sending node to a receiving node at each of the different depth resolutions of the camera; and determining the optimal depth resolution according to the durations of the camera at the different depth resolutions. This avoids the impact on system performance efficiency caused by the excessive data generated when the maximum depth resolution is blindly selected, and enables the optimal depth resolution to be selected without degrading system performance, thereby improving system efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art based on these drawings without inventive effort.
Fig. 1 is a flowchart of an implementation of a camera depth resolution determination method according to an embodiment of the present invention;
fig. 2 is a flowchart of a specific implementation of the camera depth resolution determining method S101 according to the embodiment of the present invention;
FIG. 3 is a flowchart illustrating an embodiment of the present invention for determining an initial depth resolution of a camera;
FIG. 4 is a schematic diagram of an RGBD depth camera provided by an embodiment of the present invention at a ranging plane ABCD;
fig. 5 is a flowchart of a specific implementation of the camera depth resolution determining method S104 according to the embodiment of the present invention;
fig. 6 is a block diagram of a camera depth resolution determination apparatus according to an embodiment of the present invention;
fig. 7 is a schematic diagram of an intelligent device provided in an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The camera in the embodiment of the application is applied to the intelligent robot, and is particularly applied to the navigation obstacle avoidance of the intelligent robot.
Fig. 1 shows an implementation flow of a camera depth resolution determination method provided in an embodiment of the present application, where the method flow includes steps S101 to S104. The specific realization principle of each step is as follows:
s101: an initial depth resolution of the camera is acquired.
In an embodiment of the application, the camera is an RGBD depth camera. The initial depth resolution of the camera varies with the model of the camera and the intelligent robot to which the camera is applied. The initial depth resolution is the depth resolution determined by the user according to the actual application. In an embodiment of the present application, the initial depth resolution of the camera may be the minimum depth resolution of the camera.
As an embodiment of the present application, fig. 2 shows a specific implementation flow of step S101 of the camera depth resolution determining method provided in the embodiment of the present application, which is detailed as follows:
a1: a field of view range angle of the camera is acquired. Specifically, the view range angle is a parameter of the camera, and refers to a visible range interval of the camera, and the view range angle includes a horizontal angle and a vertical angle.
A2: and acquiring the farthest identification distance required by the camera and the horizontal length and the vertical length of the minimum object to be identified by the camera. Specifically, the farthest identification distance required by the camera refers to a distance of the farthest obstacle that the user-defined intelligent robot needs to identify in practical application. The minimum object to be recognized by the camera is a minimum obstacle which needs to be recognized by the intelligent robot defined by a user in practical application, the area of the minimum object which needs to be recognized by the intelligent robot at the farthest recognition distance is limited in the navigation obstacle avoidance process of the intelligent robot, and the area of the minimum object is the product of the horizontal length and the vertical length of the minimum object. In the embodiment of the application, the horizontal length and the vertical length of the minimum object to be recognized by the camera may be user-defined or determined by statistical calculation according to big data. The maximum recognition distance and the horizontal length and the vertical length of the minimum object can be user-defined required information.
A3: determining an initial depth resolution of the camera based on the field of view range angle, the farthest recognition distance, and a horizontal length and a vertical length of the smallest object.
In the embodiment of the application, the initial depth resolution of the camera is determined according to the view range angle of the camera, the required farthest recognition distance and the horizontal length and the vertical length of the minimum object to be recognized, and the initial depth resolution of the camera is determined by combining the parameters of the camera and the requirements of a user, so that the determination of the depth resolution is more suitable for the requirements of the user on the basis of considering the performance of the camera.
Optionally, as an embodiment of the present application, the initial depth resolution includes a horizontal initial depth resolution in a horizontal direction and a vertical initial depth resolution in a vertical direction, and the initial depth resolution is a product of the horizontal initial depth resolution and the vertical initial depth resolution. The view range angle includes a horizontal angle and a vertical angle, and as shown in fig. 3, the step of determining the initial depth resolution of the camera according to the view range angle, the farthest recognition distance, and the horizontal length and the vertical length of the minimum object specifically includes:
b1: and calculating a horizontal depth data value according to the horizontal angle and the farthest identification distance. The depth data value refers to a distance value to each obstacle within a visual field range of the camera. The horizontal depth data value refers to a depth data value in the horizontal direction.
B2: and calculating a vertical depth data value according to the vertical angle and the farthest identification distance. The vertical depth data value refers to a depth data value in a vertical direction.
B3: according to the horizontal depth data value andthe horizontal length of the smallest object determines the horizontal initial depth resolution in the horizontal direction. Specifically, the horizontal initial depth resolution M is determined according to the following equation (1)min
Mmin=2xtan(0.5θh)/J(1),
Wherein x represents the farthest recognition distance, J represents the horizontal length of the smallest object, and θhIndicating the horizontal angle.
B4: determining a vertical initial depth resolution in the vertical direction from the vertical depth data value and a vertical length of the smallest object. Specifically, the vertical initial depth resolution N is determined according to the following equation (2)min
Nmin=2xtan(0.5θv)/K(2),
Wherein x represents the farthest recognition distance, K represents the vertical length of the smallest object, and θvIndicating the vertical angle.
In the embodiment of the present application, the units of the farthest recognition distance, the horizontal length of the smallest object, and the vertical length of the smallest object are uniform and may be meters. In the present embodiment, the initial depth resolution of the camera is the product of the horizontal initial depth resolution and the vertical initial depth resolution, that is, (2x·tan(0.5θ_h)/J) × (2x·tan(0.5θ_v)/K).
Illustratively, fig. 4 is a schematic diagram of an RGBD depth camera in a ranging plane ABCD, where x is the farthest distance to be recognized by the camera, AB is the horizontal depth data value, and AB = 2x·tan(0.5θ_h). When the horizontal length of the smallest object recognized by the camera at a distance of x meters is J, the minimum depth resolution of the RGBD depth camera in the horizontal direction is M_min = AB/J. Likewise, CD is the vertical depth data value, and CD = 2x·tan(0.5θ_v). When the vertical length of the smallest object recognized by the camera at a distance of x meters is K, the minimum depth resolution of the RGBD depth camera in the vertical direction is N_min = CD/K.
S102: and gradually adjusting the depth resolution of the camera according to the initial depth resolution and a preset amplification.
In the embodiment of the application, the robot system may run on different platforms, for example x86 or ARM, and the main frequency of the system may be fast or slow, so a large amount of real-time depth data is a test of system performance. The initial depth resolution of the camera, that is, the minimum depth resolution of the camera, is not necessarily the depth resolution at which system performance is optimal. Therefore, to obtain the optimal depth resolution of the camera, after the initial depth resolution is determined, the depth resolution of the camera is adjusted step by step by a predetermined increment on the basis of the initial depth resolution; that is, the depth resolution is increased starting from the initial depth resolution, and the depth resolution that makes system performance optimal is then selected from the resulting resolutions.
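The step-by-step adjustment can be illustrated with a short generator; the multiplicative step factor, the number of steps, and the 320 x 240 starting value are hypothetical choices, since the application does not fix the form of the preset increment:

```python
def stepped_resolutions(m0, n0, factor=1.25, steps=4):
    """Yield (horizontal, vertical) depth resolutions, starting from the
    initial resolution (m0, n0) and increasing both dimensions step by
    step by `factor` (a hypothetical choice of preset increment)."""
    m, n = float(m0), float(n0)
    for _ in range(steps + 1):
        yield round(m), round(n)
        m *= factor
        n *= factor

# Candidate resolutions to time, starting from an assumed 320 x 240 initial value.
candidates = list(stepped_resolutions(320, 240))
```

Each candidate would then be timed as described in S103 before the selection in S104.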
S103: and acquiring the time length of the corresponding depth data transmitted from the sending node to the receiving node under different depth resolutions of the camera.
Specifically, the sending node is the node of the camera that publishes the acquired depth data, and the receiving node is the node in the intelligent robot that subscribes to the depth data, for example, the main control chip of the robot. In the intelligent robot, the duration for which the depth data corresponding to the initial depth resolution is transmitted from the sending node of the camera to the receiving node subscribing to the depth data (such as the main control chip of the robot) is obtained, and the duration for which the depth data corresponding to each depth resolution adjusted step by step by the preset increment is transmitted from the sending node to the receiving node is likewise obtained.
In the embodiment of the application, a sending node and a receiving node are arranged, and the performance of the current system is evaluated according to the time difference between the moment the sending node publishes the data and the moment the receiving node receives it; this time difference is the duration for which the depth data is transmitted from the sending node to the receiving node. It should be noted that, to ensure the reliability of the performance evaluation, the data values of the depth data transmitted at the different depth resolutions are the same.
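A minimal sketch of this measurement is shown below. An in-process queue stands in for the inter-node transport (the actual system would publish over ROS topics, an assumption here), and `time.perf_counter` stamps the frame at publish time:

```python
import time
from queue import Queue

def publish(queue, depth_frame):
    # Stamp the frame with the moment the sending node publishes it.
    queue.put((time.perf_counter(), depth_frame))

def receive(queue):
    # Transmission duration = receive moment minus the carried publish stamp.
    sent_at, frame = queue.get()
    return frame, time.perf_counter() - sent_at

q = Queue()
publish(q, depth_frame=b"\x00" * (640 * 480 * 2))  # fake 16-bit depth frame
frame, duration = receive(q)  # duration in seconds
```

Per the text, the same depth data values should be sent at every resolution so the measured durations stay comparable.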
S104: and determining the optimal depth resolution according to the duration of the camera under different depth resolutions.
Specifically, transmitting one piece of depth data typically takes several tens to several hundreds of microseconds. The quality of system performance is judged by the duration for which the depth data of the camera is transmitted: the longer the duration of transmission from the sending node to the receiving node, the worse the system performance; the shorter the duration, the better the system performance. In the embodiment of the application, the durations for which depth data is transmitted by the camera at the different depth resolutions are compared one by one, and the optimal resolution is determined by jointly considering the depth resolution and the transmission duration. For example, the maximum depth resolution corresponding to the shortest duration of transmission from the sending node to the receiving node is determined as the optimal resolution, which prevents the camera from generating excessive data that degrades performance when recognizing obstacles and improves the performance efficiency of the system.
Optionally, count the durations for which a specified number of pieces of depth data are transmitted from the sending node to the receiving node at the initial depth resolution and average them to obtain the average duration at the initial depth resolution; likewise, count and average the durations at each depth resolution adjusted step by step by the preset increment to obtain the average duration at each adjusted depth resolution. The average duration at the initial depth resolution is compared with the average durations at the adjusted depth resolutions one by one, and the depth resolution corresponding to the shortest average duration is determined as the optimal resolution. That is, if the average duration corresponding to the initial depth resolution is the shortest, the initial depth resolution is determined as the optimal depth resolution of the camera; if the average duration corresponding to an adjusted depth resolution is the shortest, that adjusted depth resolution is determined as the optimal depth resolution of the camera. By comparing the average durations of transmitting multiple pieces of depth data at different depth resolutions, the determination of the optimal depth resolution is more accurate and reliable.
As an embodiment of the present application, fig. 5 shows a specific implementation flow of step S104 of the camera depth resolution determining method provided in the embodiment of the present application, which is detailed as follows:
c1: and respectively determining the average value of the time lengths of the specified number of depth data transmitted from the sending node to the receiving node under different depth resolutions of the camera to obtain the average time lengths under different depth resolutions.
C2: and determining the maximum depth resolution corresponding to the average duration meeting the preset duration as the optimal depth resolution of the camera. Specifically, the time difference between the transmission and reception of topic under the Ros system is generally 1ms, and this time does not affect the operation of the system, so the preset duration may be set to 1 ms. Since the larger the resolution is, the more the image information is, the more advantageous the robot for navigation is, when the average time length satisfying the preset time length corresponds to more than one depth resolution, the maximum depth resolution is determined as the optimal resolution.
In the embodiment of the application, in order to improve the accuracy of the performance evaluation, the durations for which multiple pieces of depth data are transmitted from the sending node to the receiving node at the same depth resolution are obtained, their average is calculated, and this average is taken as the average duration of depth data transmission at that depth resolution. For example, at the same depth resolution, 10000 pieces of depth data are sent in sequence, the duration for which each piece is transmitted from the sending node to the receiving node is obtained, the average duration over the 10000 pieces is calculated, and the maximum depth resolution whose average duration meets 1 ms is determined as the optimal depth resolution of the camera.
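Steps C1 and C2 can be sketched as follows. The resolution tuples and average durations are made-up illustrative numbers; the 0.001 s threshold reflects the 1 ms preset duration discussed above:

```python
def optimal_depth_resolution(avg_duration_by_res, preset=0.001):
    """Return the largest resolution (by pixel count) whose average
    transmission duration meets the preset duration, or None."""
    ok = [res for res, avg in avg_duration_by_res.items() if avg <= preset]
    return max(ok, key=lambda r: r[0] * r[1]) if ok else None

avg_durations = {           # made-up averages over many frames, in seconds
    (320, 240): 0.0004,
    (640, 480): 0.0009,
    (1280, 720): 0.0025,    # exceeds 1 ms on this hypothetical platform
}
best = optimal_depth_resolution(avg_durations)  # largest resolution within 1 ms
```

With these numbers the function picks (640, 480): the largest resolution whose average duration still meets the preset duration.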
In the embodiment of the application, the initial depth resolution of the camera is obtained, the depth resolution of the camera is adjusted step by step by a preset increment according to the initial depth resolution, the duration for which the corresponding depth data is transmitted from the sending node to the receiving node at each of the different depth resolutions of the camera is then obtained, and the optimal depth resolution is determined according to the durations of the camera at the different depth resolutions. This avoids the impact on system performance efficiency caused by the excessive data generated when the maximum depth resolution is selected, and enables the optimal depth resolution to be selected without degrading system performance, thereby improving system efficiency.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 6 shows a block diagram of a camera depth resolution determining apparatus provided in an embodiment of the present application, corresponding to the camera depth resolution determining method described in the above embodiment, and only shows portions related to the embodiment of the present application for convenience of explanation.
Referring to fig. 6, the camera depth resolution determination apparatus includes: an initial resolution acquisition unit 61, an adjusted resolution acquisition unit 62, a data transmission information acquisition unit 63, a resolution determination unit 64, wherein:
an initial resolution acquisition unit 61 for acquiring an initial depth resolution of the camera;
an adjustment resolution obtaining unit 62, configured to gradually adjust the depth resolution of the camera according to the initial depth resolution by a predetermined increment;
a data transmission information obtaining unit 63, configured to obtain a duration that the corresponding depth data is transmitted from the sending node to the receiving node when the camera is at different depth resolutions;
a resolution determination unit 64, configured to determine an optimal depth resolution according to the durations of the camera at different depth resolutions.
Optionally, the initial resolution obtaining unit 61 includes:
a first information acquisition module, configured to acquire the field-of-view angle of the camera;
a second information acquisition module, configured to acquire the farthest recognition distance required by the camera and the horizontal length and the vertical length of the smallest object to be recognized by the camera;
an initial resolution determination module, configured to determine the initial depth resolution of the camera based on the field-of-view angle, the farthest recognition distance, and the horizontal length and the vertical length of the smallest object.
Optionally, the initial depth resolution includes a horizontal initial depth resolution in a horizontal direction and a vertical initial depth resolution in a vertical direction, the initial depth resolution is a product of the horizontal initial depth resolution and the vertical initial depth resolution, the field-of-view angle includes a horizontal angle and a vertical angle, and the initial resolution determination module specifically includes:
the horizontal depth data value determining submodule is used for calculating a horizontal depth data value according to the horizontal angle and the farthest identification distance;
the vertical depth data determination submodule is used for calculating a vertical depth data value according to the vertical angle and the farthest identification distance;
a horizontal initial resolution determination submodule for determining a horizontal initial depth resolution in the horizontal direction based on the horizontal depth data value and the horizontal length of the minimum object;
and the vertical initial resolution determination submodule is used for determining the vertical initial depth resolution in the vertical direction according to the vertical depth data value and the vertical length of the minimum object.
Optionally, the horizontal initial resolution determination submodule is further specifically configured to:
determine the horizontal initial depth resolution M_min according to the following formula:
M_min = 2x·tan(0.5θ_h)/J,
where x represents the farthest recognition distance, J represents the horizontal length of the smallest object, and θ_h represents the horizontal angle.
Optionally, the vertical initial resolution determination submodule is further specifically configured to:
determine the vertical initial depth resolution N_min according to the following formula:
N_min = 2x·tan(0.5θ_v)/K,
where x represents the farthest recognition distance, K represents the vertical length of the smallest object, and θ_v represents the vertical angle.
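The two formulas above can be combined into a small computational sketch. This is an illustrative example only, not part of the patent; the function name and the rounding up (so that the smallest object still spans at least one depth pixel) are assumptions:

```python
import math

def initial_depth_resolution(x, theta_h_deg, theta_v_deg, j, k):
    """Initial depth resolution from the field-of-view angles (degrees),
    the farthest recognition distance x, and the smallest object's
    horizontal/vertical lengths j and k (same length unit as x)."""
    theta_h = math.radians(theta_h_deg)
    theta_v = math.radians(theta_v_deg)
    # Width and height of the field of view at the farthest distance x.
    horizontal_span = 2 * x * math.tan(0.5 * theta_h)
    vertical_span = 2 * x * math.tan(0.5 * theta_v)
    # M_min = 2x*tan(0.5*theta_h)/J and N_min = 2x*tan(0.5*theta_v)/K,
    # rounded up so the smallest object still covers at least one pixel.
    m_min = math.ceil(horizontal_span / j)
    n_min = math.ceil(vertical_span / k)
    return m_min, n_min
```

For example, a 90° × 60° field of view, a farthest recognition distance of 4 m, and a smallest object of 0.1 m × 0.1 m give an initial depth resolution of 80 × 47.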
Optionally, the resolution determination unit 64 includes:
an average duration determination module, configured to determine, for each of the different depth resolutions of the camera, the average of the durations for which a specified number of depth data are transmitted from the sending node to the receiving node, to obtain the average duration at each depth resolution;
and an optimal resolution determination module, configured to determine the maximum depth resolution whose average duration meets the preset duration as the optimal depth resolution of the camera.
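The selection rule described by these two modules can be sketched as follows. This is an illustrative example rather than the patent's implementation; the dictionary-based interface and the interpretation of "maximum depth resolution" as the largest horizontal × vertical pixel count are assumptions:

```python
def optimal_depth_resolution(timings, preset_duration):
    """timings maps a (horizontal, vertical) depth resolution to the list
    of measured durations (seconds) for transmitting depth data from the
    sending node to the receiving node at that resolution.  Returns the
    largest qualifying resolution by pixel count, or None."""
    qualifying = []
    for resolution, durations in timings.items():
        average = sum(durations) / len(durations)  # average duration at this resolution
        if average <= preset_duration:             # meets the preset duration
            qualifying.append(resolution)
    if not qualifying:
        return None
    # "Maximum depth resolution" taken as the largest horizontal * vertical product.
    return max(qualifying, key=lambda r: r[0] * r[1])
```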
In the embodiments of the present application, the initial depth resolution of the camera is acquired, and the depth resolution of the camera is adjusted step by step from the initial depth resolution by a preset increment. The duration for which the corresponding depth data is transmitted from the sending node to the receiving node is then acquired at each depth resolution, and the optimal depth resolution is determined from the durations of the camera at the different depth resolutions. This avoids the performance loss caused by the excessive data generated when the maximum depth resolution is selected, so that the optimal depth resolution can be selected without affecting system performance, thereby improving the efficiency of the system.
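The overall flow summarized above can be sketched end to end. This is an illustrative example under assumptions not stated in the patent: the measurement callback, the per-axis increment, and the camera's maximum supported resolution as the stopping bound are all hypothetical:

```python
def find_optimal_resolution(measure_avg_duration, m0, n0, step, max_res, preset_duration):
    """Step the depth resolution up from the initial (m0, n0) by a preset
    increment, measure the average transmission duration at each setting,
    and keep the largest resolution whose duration is still acceptable."""
    best = None
    m, n = m0, n0
    while m <= max_res[0] and n <= max_res[1]:
        # Average duration for transmitting depth data at resolution (m, n);
        # measure_avg_duration is a hypothetical measurement callback.
        if measure_avg_duration(m, n) <= preset_duration:
            best = (m, n)
        m += step[0]
        n += step[1]
    return best
```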
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
The embodiments of the present application further provide a computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the steps of the above method embodiments.
The embodiments of the present application further provide a computer-readable instruction product which, when run on a mobile terminal, enables the mobile terminal to implement the steps of the above method embodiments.
Embodiments of the present application further provide a computer-readable storage medium, which stores computer-readable instructions, and when executed by a processor, the computer-readable instructions implement the steps of any one of the camera depth resolution determination methods shown in fig. 1 to 5.
An embodiment of the present application further provides an intelligent device, which includes a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, where the processor executes the computer readable instructions to implement the steps of any one of the camera depth resolution determination methods shown in fig. 1 to 5.
Embodiments of the present application further provide a computer program product which, when run on a server, causes the server to execute the steps of any one of the camera depth resolution determination methods shown in fig. 1 to 5.
Fig. 7 is a schematic diagram of an intelligent device provided in an embodiment of the present application. As shown in fig. 7, the smart device 7 of this embodiment includes: a processor 70, a memory 71, and computer readable instructions 72 stored in the memory 71 and executable on the processor 70. The processor 70, when executing the computer readable instructions 72, implements the steps in the various camera depth resolution determination method embodiments described above, such as steps S101-S104 shown in fig. 1. Alternatively, the processor 70, when executing the computer readable instructions 72, implements the functionality of the modules/units in the device embodiments described above, such as the functionality of the units 61 to 64 shown in fig. 6.
Illustratively, the computer readable instructions 72 may be partitioned into one or more modules/units that are stored in the memory 71 and executed by the processor 70 to accomplish the present application. The one or more modules/units may be a series of computer-readable instruction segments capable of performing specific functions, which are used to describe the execution process of the computer-readable instructions 72 in the smart device 7.
The smart device 7 may be a smart robot, such as a ROS robot. The smart device 7 may include, but is not limited to, the processor 70 and the memory 71. It will be appreciated by those skilled in the art that fig. 7 is merely an example of the smart device 7 and does not constitute a limitation thereof; the smart device 7 may include more or fewer components than those shown, may combine certain components, or may have different components. For example, the smart device 7 may also include input-output devices, network access devices, buses, etc.
The Processor 70 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 71 may be an internal storage unit of the smart device 7, such as a hard disk or a memory of the smart device 7. The memory 71 may also be an external storage device of the Smart device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the Smart device 7. Further, the memory 71 may also include both an internal storage unit and an external storage device of the smart device 7. The memory 71 is used to store the computer readable instructions and other programs and data required by the smart device. The memory 71 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal apparatus, a recording medium, a computer Memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for determining camera depth resolution, comprising:
acquiring an initial depth resolution of a camera;
adjusting the depth resolution of the camera step by step from the initial depth resolution by a preset increment;
acquiring the time length of the corresponding depth data transmitted from the sending node to the receiving node under different depth resolutions of the camera;
and determining the optimal depth resolution according to the duration of the camera under different depth resolutions.
2. The camera depth resolution determination method of claim 1, wherein the step of determining the optimal depth resolution according to the durations of the camera at different depth resolutions comprises:
for each of the different depth resolutions of the camera, determining the average of the durations for which a specified number of depth data are transmitted from the sending node to the receiving node, to obtain the average duration at each depth resolution;
and determining the maximum depth resolution corresponding to the average duration meeting the preset duration as the optimal depth resolution of the camera.
3. The camera depth resolution determination method of claim 1, wherein the step of obtaining an initial depth resolution of the camera comprises:
acquiring the field-of-view angle of the camera;
acquiring the farthest recognition distance required by the camera and the horizontal length and the vertical length of the smallest object to be recognized by the camera;
determining the initial depth resolution of the camera based on the field-of-view angle, the farthest recognition distance, and the horizontal length and the vertical length of the smallest object.
4. The camera depth resolution determination method of claim 3, wherein the initial depth resolution includes a horizontal initial depth resolution in a horizontal direction and a vertical initial depth resolution in a vertical direction, the field-of-view angle includes a horizontal angle and a vertical angle, and the step of determining the initial depth resolution of the camera based on the field-of-view angle, the farthest recognition distance, and the horizontal length and the vertical length of the smallest object comprises:
calculating a horizontal depth data value according to the horizontal angle and the farthest recognition distance;
calculating a vertical depth data value according to the vertical angle and the farthest recognition distance;
determining a horizontal initial depth resolution in the horizontal direction according to the horizontal depth data value and the horizontal length of the smallest object;
determining a vertical initial depth resolution in the vertical direction according to the vertical depth data value and the vertical length of the smallest object.
5. The camera depth resolution determination method of claim 4, wherein the initial depth resolution is a product of the horizontal initial depth resolution and the vertical initial depth resolution.
6. The camera depth resolution determination method of claim 4, wherein the step of determining the horizontal initial depth resolution in the horizontal direction based on the horizontal depth data value and the horizontal length of the smallest object comprises:
determining the horizontal initial depth resolution M_min according to the following formula:
M_min = 2x·tan(0.5θ_h)/J,
where x represents the farthest recognition distance, J represents the horizontal length of the smallest object, and θ_h represents the horizontal angle;
the step of determining a vertical initial depth resolution in the vertical direction from the vertical depth data value and the vertical length of the smallest object comprises:
determining the vertical initial depth resolution N_min according to the following formula:
N_min = 2x·tan(0.5θ_v)/K,
where x represents the farthest recognition distance, K represents the vertical length of the smallest object, and θ_v represents the vertical angle.
7. A camera depth resolution determination apparatus, comprising:
an initial resolution acquisition unit for acquiring an initial depth resolution of the camera;
an adjusted resolution acquisition unit, configured to adjust the depth resolution of the camera step by step from the initial depth resolution by a preset increment;
the data transmission information acquisition unit is used for acquiring the time length of the corresponding depth data transmitted from the sending node to the receiving node under different depth resolutions of the camera;
and the resolution determining unit is used for determining the optimal depth resolution according to the duration of the camera under different depth resolutions.
8. The camera depth resolution determination apparatus of claim 7, wherein the initial resolution acquisition unit includes:
a first information acquisition module, configured to acquire the field-of-view angle of the camera;
a second information acquisition module, configured to acquire the farthest recognition distance required by the camera and the horizontal length and the vertical length of the smallest object to be recognized by the camera;
and an initial resolution determination module, configured to determine the initial depth resolution of the camera based on the field-of-view angle, the farthest recognition distance, and the horizontal length and the vertical length of the smallest object.
9. A smart device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the camera depth resolution determination method of any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, implements the camera depth resolution determination method according to any one of claims 1 to 6.
CN201911011010.4A 2019-10-23 2019-10-23 Camera depth resolution determination method and device, storage medium and intelligent equipment Active CN112700482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911011010.4A CN112700482B (en) 2019-10-23 2019-10-23 Camera depth resolution determination method and device, storage medium and intelligent equipment


Publications (2)

Publication Number Publication Date
CN112700482A true CN112700482A (en) 2021-04-23
CN112700482B CN112700482B (en) 2023-12-29

Family

ID=75504964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911011010.4A Active CN112700482B (en) 2019-10-23 2019-10-23 Camera depth resolution determination method and device, storage medium and intelligent equipment

Country Status (1)

Country Link
CN (1) CN112700482B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160073094A1 (en) * 2014-09-05 2016-03-10 Microsoft Corporation Depth map enhancement
CN106688012A (en) * 2014-09-05 2017-05-17 微软技术许可有限责任公司 Depth map enhancement
CN109993694A (en) * 2017-12-29 2019-07-09 Tcl集团股份有限公司 A kind of method and device generating super-resolution image
CN108573477A (en) * 2018-03-14 2018-09-25 深圳怡化电脑股份有限公司 Eliminate method, system and the terminal device of image moire fringes
CN108898549A (en) * 2018-05-29 2018-11-27 Oppo广东移动通信有限公司 Image processing method, picture processing unit and terminal device
CN109274864A (en) * 2018-09-05 2019-01-25 深圳奥比中光科技有限公司 Depth camera, depth calculation System and method for



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant