CN115018899A - Display device and depth image acquisition method

Display device and depth image acquisition method

Info

Publication number
CN115018899A
CN115018899A (application CN202210474625.6A)
Authority
CN
China
Prior art keywords
target
image
area
depth image
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210474625.6A
Other languages
Chinese (zh)
Inventor
孙晓芹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN202210474625.6A
Publication of CN115018899A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/593 — Depth or shape recovery from multiple images; from stereo images
    • G06T 7/13 — Segmentation; Edge detection
    • G06T 7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10048 — Image acquisition modality: infrared image
    • G06T 2207/10052 — Image acquisition modality: images from lightfield camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Studio Devices (AREA)

Abstract

The disclosure relates to a display device and a method for acquiring a depth image, in the technical field of display devices. The display device comprises: a camera configured to capture images; and a controller configured to: determine an exposure state of a first depth image; determine a target exposure value according to the exposure state and a default exposure value; control the camera to acquire a first speckle image based on the default exposure value and a second speckle image based on the target exposure value; and determine a target depth image according to the first speckle image and the second speckle image. The embodiments of the disclosure address the loss of depth information in depth images caused by complex shooting scenes.

Description

Display device and depth image acquisition method
Technical Field
The present disclosure relates to the field of display device technologies, and in particular, to a display device and a method for acquiring a depth image.
Background
With the rapid development of society, science, and technology, three-dimensional (3D) cameras are used ever more widely. The complexity of the shooting scene causes depth information to be lost in the depth image obtained by a 3D camera; for example, when the distance between the photographed object and the 3D camera is short, the center of the obtained depth image is overexposed and the depth information at the center of the depth image is lost. The related art mainly improves the brightness uniformity of the depth image by adding a diffractive optical element (DOE) at the hardware level to obtain a complete depth image, but this increases hardware cost and offers poor operability.
Disclosure of Invention
To solve, or at least partially solve, this technical problem, the present disclosure provides a display device and a method for acquiring a depth image that can adapt to various complex shooting scenes and improve the completeness of the depth information in the depth image.
In order to achieve the above object, the embodiments of the present disclosure provide the following technical solutions:
in a first aspect, the present disclosure provides a display device comprising:
a camera configured to capture an image;
a controller configured to: determining an exposure state of the first depth image;
determining a target exposure value according to the exposure state and the default exposure value;
controlling the camera to acquire a first speckle image based on a default exposure value and a second speckle image based on a target exposure value;
and determining a target depth image according to the first speckle image and the second speckle image.
As an optional implementation manner of the embodiment of the present disclosure, the controller is specifically configured to: determine target pixel points among the pixel points of the first depth image, a target pixel point being one whose depth value differs from that of an adjacent pixel point by at least a preset difference; determine a target area in the first depth image based on the target pixel points; and determine the exposure state of the first depth image according to the position of the target area in the first depth image.
As an optional implementation manner of the embodiment of the present disclosure, the controller is further configured to: determining the area of a target region;
a controller specifically configured to: and under the condition that the area of the target region is larger than or equal to the preset area, determining the exposure state of the first depth image according to the position of the target region in the first depth image.
As an optional implementation manner of the embodiment of the present disclosure, the controller is specifically configured to: and under the condition that the target area is located in a preset central area of the first depth image according to the position of the target area in the first depth image, determining that the exposure state of the first depth image is an overexposure state.
As an optional implementation manner of the embodiment of the present disclosure, the controller is specifically configured to: and under the condition that the target area is determined to be in a preset peripheral area of the first depth image according to the position of the target area in the first depth image, determining that the exposure state of the first depth image is an underexposure state.
As an optional implementation manner of the embodiment of the present disclosure, the exposure state is an overexposure state, the target exposure value is smaller than a default exposure value, and the controller is specifically configured to: determining a target central area in the second speckle image and determining a target peripheral area in the first speckle image; the target central area corresponds to a first area, and the first area is an area determined according to the target area and a preset central area in the first depth image; the target peripheral area corresponds to a second area, and the second area is an area outside the first area in the first depth image; and determining a target depth image according to the target central area and the target peripheral area.
As an optional implementation manner of the embodiment of the present disclosure, the exposure state is an underexposure state, the target exposure value is greater than the default exposure value, and the controller is specifically configured to: determining a target peripheral area in the second speckle image and determining a target central area in the first speckle image; the target peripheral area corresponds to a third area, and the third area is an area determined according to the target area and a preset peripheral area in the first depth image; the target central area corresponds to a fourth area, and the fourth area is an area outside the third area in the first depth image; and determining a target depth image according to the target peripheral area and the target central area.
As an optional implementation manner of the embodiment of the present disclosure, the controller is specifically configured to: controlling a camera to acquire a first infrared image and a first infrared speckle image based on a default exposure value, and acquire a second infrared image and a second infrared speckle image based on a target exposure value; a first speckle image is determined from the first infrared image and the first infrared speckle image, and a second speckle image is determined from the second infrared image and the second infrared speckle image.
As an optional implementation manner of the embodiment of the present disclosure, the controller is further configured to: after the target depth image is determined according to the first speckle image and the second speckle image, three-dimensional coordinates of a photographed object in the target depth image are determined based on position information and depth values of each pixel point in the target depth image.
In a second aspect, a method for obtaining a depth image is provided, including:
collecting an image;
determining an exposure state of the first depth image;
determining a target exposure value according to the exposure state and the default exposure value;
acquiring a first speckle image based on a default exposure value and acquiring a second speckle image based on a target exposure value;
and determining a target depth image according to the first speckle image and the second speckle image.
As an optional implementation manner of the embodiment of the present disclosure, determining the exposure state of the first depth image includes: determining target pixel points among the pixel points of the first depth image, a target pixel point being one whose depth value differs from that of an adjacent pixel point by at least a preset difference; determining a target area in the first depth image based on the target pixel points; and determining the exposure state of the first depth image according to the position of the target area in the first depth image.
As an optional implementation manner of the embodiment of the present disclosure, after determining the target region in the first depth image based on the target pixel points, the method further includes: determining the area of the target region. Determining the exposure state of the first depth image according to the position of the target region then includes: when the area of the target region is greater than or equal to a preset area, determining the exposure state of the first depth image according to the position of the target region in the first depth image.
As an optional implementation manner of the embodiment of the present disclosure, determining an exposure state of the first depth image according to a position of the target area in the first depth image includes: and under the condition that the target area is determined to be in a preset central area of the first depth image according to the position of the target area in the first depth image, determining that the exposure state of the first depth image is an overexposure state.
As an optional implementation manner of the embodiment of the present disclosure, determining the exposure state of the first depth image according to the position of the target region in the first depth image includes: and under the condition that the target area is determined to be in a preset peripheral area of the first depth image according to the position of the target area in the first depth image, determining that the exposure state of the first depth image is an underexposure state.
As an optional implementation manner of the embodiment of the present disclosure, the determining the target depth image according to the first speckle image and the second speckle image includes: determining a target central area in the second speckle image and determining a target peripheral area in the first speckle image; the target central area corresponds to a first area, and the first area is an area determined according to the target area and a preset central area in the first depth image; the target peripheral area corresponds to a second area, and the second area is an area outside the first area in the first depth image; and determining a target depth image according to the target central area and the target peripheral area.
As an optional implementation manner of the embodiment of the present disclosure, the determining the target depth image according to the first speckle image and the second speckle image includes: determining a target peripheral area in the second speckle image and determining a target central area in the first speckle image; the target peripheral area corresponds to a third area, and the third area is an area determined according to the target area and a preset peripheral area in the first depth image; the target central area corresponds to a fourth area, and the fourth area is an area outside the third area in the first depth image; and determining a target depth image according to the target peripheral area and the target central area.
As an optional implementation manner of the embodiment of the present disclosure, acquiring the first speckle image based on the default exposure value and acquiring the second speckle image based on the target exposure value includes: controlling a camera to acquire a first infrared image and a first infrared speckle image based on a default exposure value, and acquire a second infrared image and a second infrared speckle image based on a target exposure value; a first speckle image is determined from the first infrared image and the first infrared speckle image, and a second speckle image is determined from the second infrared image and the second infrared speckle image.
As an optional implementation manner of the embodiment of the present disclosure, after determining the target depth image according to the first speckle image and the second speckle image, the method further includes: determining the three-dimensional coordinates of the photographed object in the target depth image based on the position information and depth value of each pixel point in the target depth image.
In a third aspect, a computer-readable storage medium is provided, comprising: the computer-readable storage medium stores thereon a computer program which, when executed by a processor, implements the method for acquiring a depth image according to the second aspect or any one of its alternative embodiments.
In a fourth aspect, a computer program product is provided, comprising: when the computer program product runs on a computer, the computer is caused to implement the method for acquiring a depth image according to the second aspect or any one of its alternative embodiments.
Compared with the related art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
the utility model provides a display device, gather the image through the camera of configuration, confirm the exposure state of the first depth image of gathering, then confirm target exposure value according to exposure state and default exposure value, again based on acquiescence exposure value obtain first speckle image, and based on target exposure value obtain the second speckle image, confirm target depth image according to first speckle image and second speckle image, the problem of depth information disappearance in the depth image because of shooting scene complexity has been solved, the scene adaptability that the depth image acquireed has been promoted, maneuverability is strong.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
To illustrate the embodiments of the present disclosure or the technical solutions in the related art more clearly, the drawings used in the description of the embodiments or the related art are briefly described below; other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic view of an operation scenario between a display device and a control apparatus in an embodiment of the present disclosure;
fig. 2 is a block diagram of a hardware configuration of a display device according to an embodiment of the present disclosure;
fig. 3A is a first schematic view of an application scenario of a method for acquiring a depth image according to an embodiment of the present disclosure;
fig. 3B is a second schematic view of an application scenario of a method for acquiring a depth image according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a method for obtaining a depth image according to an embodiment of the present disclosure;
FIG. 5A is a first schematic diagram of a target area provided in an embodiment of the present disclosure;
FIG. 5B is a second schematic diagram of a target area provided in an embodiment of the present disclosure;
fig. 6A is a third schematic diagram of a target region provided in an embodiment of the present disclosure;
fig. 6B is a fourth schematic diagram of a target region provided in an embodiment of the present disclosure;
FIG. 7A is a first schematic diagram illustrating a determination of a target speckle image according to an embodiment of the disclosure;
FIG. 7B is a second schematic diagram illustrating the determination of a target speckle image according to an embodiment of the present disclosure;
FIG. 8A is a third schematic diagram illustrating the determination of a target speckle image in an embodiment of the disclosure;
fig. 8B is a fourth schematic diagram illustrating the determination of a target speckle image according to the embodiment of the disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present disclosure; however, the present disclosure may be practiced in ways other than those described herein. It is to be understood that the embodiments disclosed in the specification are only some, not all, of the embodiments of the present disclosure.
The terms "first," "second," "third," and the like in the description and claims of this application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
As will be understood by those skilled in the art, due to the configuration of the lens in a 3D camera, when the lens receives laser light and forms an image, the image brightness of the middle region is generally higher than that of the edge region. In high-brightness environments such as strong light or direct sunlight, the brightness of the speckle pattern collected by the 3D camera is affected: the received speckle points may be submerged in the background, or the overall brightness of the speckle pattern may be so high that parts of it are overexposed and adjacent speckle points merge and cannot be distinguished, resulting in poor depth image quality and incomplete depth information. In addition, when the photographed object is far from the 3D camera, the light energy it reflects is weak, which also affects the brightness of the speckle pattern collected under structured light; for example, the speckle pattern is underexposed and too dark to be accurately extracted, resulting in poor depth image quality. When the photographed object is close to the 3D camera, the reflected light energy is too strong, which likewise affects the brightness of the speckle pattern collected by the structured light camera; for example, the overall brightness of the speckle pattern is high, parts of it are overexposed, and adjacent speckle points merge and cannot be distinguished, resulting in poor depth image quality and incomplete depth information.
In the related art, to improve depth image quality, a diffractive optical element (DOE) is mainly added at the hardware level, so that speckle points are formed by the diffraction central region of one DOE together with the diffraction edge region of another DOE, improving the brightness uniformity of the depth image and thereby its quality. However, such a solution increases hardware cost, offers limited operability, and is not suited to switching between complex shooting scenes.
To solve this technical problem, embodiments of the present disclosure provide a display device and a method for acquiring a depth image. The display device first captures images with its configured camera and determines the exposure state of the captured first depth image; it then determines a target exposure value according to the exposure state and the default exposure value, acquires a first speckle image based on the default exposure value and a second speckle image based on the target exposure value, and determines a target depth image from the first and second speckle images. This solves the loss of depth information in depth images caused by complex shooting scenes and improves the scene adaptability and operability of depth image acquisition.
Fig. 1 is a schematic view of an operation scenario between a display device and a control apparatus in an embodiment of the present disclosure.
In some embodiments, the user may operate the display apparatus 200 through the smart device 300 or the control device 100, and the display apparatus 200 performs data communication with the server 400.
As shown in fig. 1, an application scenario of the display device provided by the present disclosure is described taking the control apparatus 100 operating the display device 200 as an example. In one application scenario, a user operates the display device 200 through the control apparatus 100 to perform entertainment activities such as motion sensing games or fitness games. The control apparatus 100 sends an instruction to the display device 200 according to the user's operation; when the display device 200 determines from the instruction that the user wants to play a motion sensing game, it captures an image with the camera configured in the display device 200, determines the exposure state of the captured first depth image, determines a target exposure value according to the exposure state and the default exposure value, acquires a first speckle image based on the default exposure value and a second speckle image based on the target exposure value, and determines a target depth image from the two speckle images. The user's motion changes during the game can then be determined from the target depth image, improving the interactivity of motion sensing games and meeting diverse user needs.
In some embodiments, the control device 100 may be a remote controller; communication between the remote controller and the terminal device includes infrared protocol communication, Bluetooth protocol communication, and other short-distance communication methods, and the display device 200 is controlled wirelessly or by wire. The user may input user instructions through keys on the remote controller, voice input, control panel input, etc., to control the display apparatus 200.
In some embodiments, the smart device 300 (e.g., mobile terminal, tablet, computer, laptop, etc.) may also be used to control the display device 200. For example, the display device 200 is controlled using an application program running on the smart device.
In some embodiments, the display device 200 may also receive the user's control through touch or gesture, etc., instead of using the smart device or control device described above to receive instructions.
In some embodiments, the display device 200 may also be controlled in manners other than through the control apparatus 100 and the smart device 300; for example, the user's voice commands may be received directly by a module configured inside the display device 200, or by a voice control device provided outside the display device 200.
In some embodiments, the display device 200 may be communicatively connected through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), and other networks. The server 400 may provide various contents and interactions to the display device 200. The server 400 may be one cluster or a plurality of clusters, may include one or more types of servers, or may be a cloud server. The above is only an example and is not limited in this embodiment.
Fig. 2 is a block diagram of a hardware configuration of a display device according to an embodiment of the present disclosure. The display apparatus shown in fig. 2 includes at least one of a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, and a user interface 280. The controller includes a central processing unit, a video processor, an audio processor, a graphics processor, a Random Access Memory (RAM), a Read-Only Memory (ROM), and first to nth interfaces for input/output. The display 260 may be at least one of a liquid crystal display, an OLED display, a touch display, and a projection display, and may also be a projection device and projection screen. The tuner demodulator 210 receives broadcast television signals in a wired or wireless manner and demodulates audio/video signals, as well as Electronic Program Guide (EPG) data signals, from a plurality of wireless or wired broadcast television signals. The detector 230 collects signals from the external environment or signals of interaction with the outside. The controller 250 and the tuner demodulator 210 may be located in separate devices; that is, the tuner demodulator 210 may also be in a device external to the main device containing the controller 250, such as an external set-top box.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored in memory, controlling the overall operation of the display apparatus 200. The user may enter user commands through a Graphical User Interface (GUI) displayed on the display 260, and the user input interface receives the user input commands through the GUI. Alternatively, the user may input a user command via a specific sound or gesture, and the user input interface receives the command by recognizing the sound or gesture through the sensor.
In some embodiments, a "user interface" is a media interface for interaction and information exchange between an application or operating system and a user that enables conversion between an internal form of information and a form that is acceptable to the user. A common presentation form of a user interface is a graphical user interface, which refers to a user interface displayed in a graphical manner and related to computer operations. The interface element may be an icon, a window, a control, and the like, displayed in a display screen of the electronic device, where the control may include at least one of an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget (Web Widget), and the like.
In some embodiments, the controller includes at least one of a Central Processing Unit (CPU), a video processor, an audio processor, a Graphics Processing Unit (GPU), a Random Access Memory (RAM), a Read-Only Memory (ROM), first to nth interfaces for input/output, a Digital Signal Processor (DSP), a communication bus, and the like.
The CPU processor executes the operating system and application program instructions stored in memory, and runs various applications, data, and content according to interactive instructions received from external input, so as to ultimately display and play various audio-video content. The CPU processor may include a plurality of processors, e.g., a main processor and one or more sub-processors.
The present disclosure provides a display apparatus 200, including:
a camera 201 configured to capture an image;
a controller 202 configured to: determining an exposure state of the first depth image;
determining a target exposure value according to the exposure state and the default exposure value;
controlling the camera to acquire a first speckle image based on a default exposure value and a second speckle image based on a target exposure value;
and determining a target depth image according to the first speckle image and the second speckle image.
It should be noted that the camera 201 functions the same as or similar to the image collector in the detector 230. The camera 201 may be a structured light camera with various structures such as monocular and binocular. The present disclosure does not specifically limit this.
The above display device captures images with its configured camera, determines the exposure state of the captured first depth image, determines a target exposure value according to the exposure state and the default exposure value, acquires the first speckle image based on the default exposure value and the second speckle image based on the target exposure value, and determines the target depth image from the first and second speckle images. This solves the loss of depth information in depth images caused by complex shooting scenes, improves the scene adaptability of depth image acquisition, and offers strong operability.
As shown in fig. 3A, a schematic view of an application scenario of a method for acquiring a depth image according to an embodiment of the present disclosure, fig. 3A includes a display device 200 configured with a 3D camera 201. In fig. 3A, the user acquires a depth image of himself or herself through the display device 200 for face recognition; in this process, the distance between the user and the 3D camera 201 is a short distance D. In general, due to the shooting principle of the 3D camera and the small distance D between the user and the 3D camera 201, the acquired first depth map is in an overexposure state: the center of the image is bright and the periphery is dark. In this scenario, the 3D camera 201 of the display device 200 captures a first depth image of the user. The controller of the display device 200 then determines from this first depth image that the exposure state is the overexposure state, determines from the overexposure state and the default exposure value a target exposure value smaller than the default exposure value, controls the 3D camera 201 to acquire the first speckle image based on the default exposure value and the second speckle image based on the target exposure value, and finally determines the target depth image from the first and second speckle images. This solves the overexposure and loss of depth information caused by the user being close to the 3D camera, yields a face depth image with complete depth information, and makes face recognition more accurate.
As shown in fig. 3B, a schematic view of an application scenario of a method for acquiring a depth image according to an embodiment of the present disclosure, fig. 3B includes a display device 200 configured with a 3D camera 201. In fig. 3B, the user performs entertainment activities such as motion sensing games and motion sensing exercise through the display device 200; in this process, the distance between the user and the 3D camera 201 is a long distance D. In general, due to the shooting principle of the 3D camera and the large distance D between the user and the 3D camera 201, the acquired first depth map is in an underexposure state: the center of the image is dark and the periphery is bright. In this scenario, the 3D camera 201 of the display device 200 captures a first depth image of the user. The controller of the display device 200 then determines from this first depth image that the exposure state is the underexposure state, determines from the underexposure state and the default exposure value a target exposure value larger than the default exposure value, controls the 3D camera 201 to acquire the first speckle image based on the default exposure value and the second speckle image based on the target exposure value, and finally determines the target depth image from the first and second speckle images. This solves the underexposure and loss of depth information caused by the user being far from the 3D camera, yields a depth image with complete depth information, and improves the user experience.
The method for acquiring a depth image provided in the embodiments of the present disclosure may be implemented by a computer device, including but not limited to a server, personal computer, notebook computer, tablet computer, smart television, vehicle-mounted device, and the like. Computer equipment includes user equipment and network equipment. User equipment includes but is not limited to computers, smart phones, tablet computers, etc.; network equipment includes but is not limited to a single network server, a server group consisting of multiple network servers, or a cloud of many computers or network servers for cloud computing, where cloud computing is a kind of distributed computing in which a super virtual computer is composed of a group of loosely coupled computers. The computer device may operate alone to realize the disclosure, or may access a network and realize the disclosure through interaction with other computer devices in the network. The network in which the computer device is located includes but is not limited to the internet, a wide area network, a metropolitan area network, a local area network, a Virtual Private Network (VPN), and the like.
It should be noted that the protection scope of the method for acquiring a depth image according to the embodiments of the present disclosure is not limited to the execution order of the steps listed in this embodiment; any solution implemented by adding, removing, or replacing steps in accordance with the related art under the principle of the present disclosure falls within the protection scope of the present disclosure.
As shown in fig. 4, fig. 4 is a schematic flowchart of a method for acquiring a depth image according to an embodiment of the present disclosure, where the method includes:
s401, collecting an image.
In some embodiments, the display device provided by the present disclosure is configured with a camera, which may be a monocular, binocular, or other structured light camera, and the like, and the present disclosure does not specifically limit this. The camera includes, but is not limited to, an infrared light emitter and an infrared light camera.
Images include, but are not limited to, infrared images and infrared speckle images. Taking a structured light camera as an example, the infrared image is acquired by the infrared camera while the infrared light emitter of the structured light camera emits infrared light, and the infrared speckle image is acquired by the infrared camera while the infrared light emitter of the 3D camera projects an infrared speckle pattern. It can be understood that the infrared speckle image contains both infrared and speckle information.
In some embodiments, the camera acquires a first infrared speckle image, and a parallax (disparity) image is calculated against a preset reference infrared speckle image to obtain the first depth image.
The disparity image is converted to the depth image as follows:

depth = (b × f) / d        (1)

In formula (1), b is the baseline of the 3D camera, f is the focal length of the 3D camera (generally, the focal length of the infrared camera included in the 3D camera), and d is the disparity value of the speckles.
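As a minimal illustration of formula (1), the sketch below converts a disparity map to a depth map; the function name, array types, and the zero-disparity guard are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def disparity_to_depth(disparity: np.ndarray, baseline: float, focal_px: float) -> np.ndarray:
    """Formula (1): depth = b * f / d, applied element-wise to a disparity map."""
    depth = np.zeros_like(disparity, dtype=np.float32)
    valid = disparity > 0                  # guard against division by zero
    depth[valid] = baseline * focal_px / disparity[valid]
    return depth
```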
S402, determining the exposure state of the first depth image.
Wherein, the exposure state includes: overexposure state, underexposure state and normal state.
In some embodiments, the exposure state of the first depth image is determined from the positions of pixel points with missing depth information in the first depth image. A pixel point with missing depth information may be found by computing the depth difference with adjacent pixel points; it can be understood that the depth value changes abruptly at the neighbors of such a pixel point, with the change exceeding a preset difference. A pixel point with missing depth information may also be one whose depth value is smaller than a preset depth value, for example, a pixel point with a depth value of 0.
In some embodiments, the present disclosure determines target pixel points in the first depth image by an edge detection method from image processing, where a target pixel point is one whose depth value differs from that of an adjacent pixel point by at least the preset difference. Edge detection methods include, but are not limited to, methods based on differential operators, the Laplacian of Gaussian, and the Canny operator. The embodiment of the present disclosure provides an implementation in which the target pixel points are determined by a differential edge detection method. The differential edge detection method sets a 9 × 9 neighborhood [neighborhood template figure not reproduced] and then traverses all the pixel points in the first depth image with this 9 × 9 neighborhood.
For each pixel point in the first depth image, the depth difference values of adjacent pixel points are calculated through formula (2) and formula (3):

G(m) = f(m + 1, n) − f(m, n)        (2)
G(n) = f(m, n + 1) − f(m, n)        (3)

where f(m, n) represents the depth value of the pixel point with coordinates (m, n), and the depth value difference of the pixel point (m, n) is G(m, n) ≈ |G(m)| + |G(n)|.
Whether the depth difference value is greater than or equal to the preset difference value is then judged, and the pixel point is determined to be a target pixel point when its depth difference value is greater than or equal to the preset difference value.
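A sketch of this target-pixel test, under the assumption that simple forward differences realize formulas (2) and (3); the patent's 9 × 9 neighborhood traversal and the function name are simplified and assumed here.

```python
import numpy as np

def find_target_pixels(depth: np.ndarray, preset_diff: float) -> np.ndarray:
    """Mark pixels whose depth difference G(m, n) = |G(m)| + |G(n)| meets the threshold."""
    g_m = np.abs(np.diff(depth, axis=0, append=depth[-1:, :]))  # row-direction difference, formula (2)
    g_n = np.abs(np.diff(depth, axis=1, append=depth[:, -1:]))  # column-direction difference, formula (3)
    return (g_m + g_n) >= preset_diff      # boolean mask of target pixel points
```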
A target area is then determined in the first depth image from the target pixel points; the target area is the area with missing depth information. It can be understood that the target area is the internal area enclosed by the connecting lines of the target pixel points. As shown in fig. 5A, a first schematic diagram of the target area provided in the embodiment of the present disclosure, fig. 5A includes a first depth image 501; the target area 502 in the first depth image 501 is the internal area enclosed by the connecting lines of the target pixel points 503 (only some target pixel points are shown in fig. 5A). The target area may also be a polygonal area determined from the minimum abscissa, maximum abscissa, minimum ordinate, and maximum ordinate of the target pixel points in the pixel coordinate system of the first depth image, as shown in fig. 5B, a second schematic diagram of the target area provided in the embodiment of the present disclosure. Fig. 5B includes a first depth image 501, the minimum abscissa XA, maximum abscissa XB, minimum ordinate YC, and maximum ordinate YD of the target pixel points 502, and the target area is the polygonal area 503 formed by the points A, B, C, D.
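A sketch of the fig. 5B variant, computing the bounding polygon A-B-C-D from the extreme coordinates of the target pixel mask; the helper name and the None return for an empty mask are illustrative assumptions.

```python
import numpy as np

def target_region_bbox(target_mask: np.ndarray):
    """Return (x_min, x_max, y_min, y_max) of the target pixels, or None if there are none."""
    ys, xs = np.nonzero(target_mask)       # coordinates of all target pixel points
    if xs.size == 0:
        return None                        # no missing-depth region detected
    return int(xs.min()), int(xs.max()), int(ys.min()), int(ys.max())
```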
In practical applications, even though a region with missing depth information, i.e., a target region, exists in the first depth image, a target region of very small area does not affect the three-dimensional coordinates of the subject obtained from the depth image and can therefore be ignored. Conversely, once the target region reaches a certain area, it affects the acquisition of the three-dimensional coordinates of the photographed object, so whether to perform filling must be decided according to the area of the target region. In some embodiments of the present disclosure, the area of the target region is determined, and it is then judged whether this area is greater than or equal to a preset area. When the area of the target region is greater than or equal to the preset area, the exposure state of the first depth image is determined according to the position of the target region in the first depth image. When the area of the target region is smaller than the preset area, the first depth image is in a normal state, and the region with missing depth information does not affect the acquisition of the three-dimensional coordinates of the photographed object.
In the above embodiment, after the target region is determined, it is compared with a preset region to determine the exposure state of the first depth image. The preset region corresponds to a preset reference infrared speckle image and can be a preset central region or a preset peripheral region. The shape, number, regularity, and other information of the speckle points in the preset reference infrared speckle image are fixed, and the preset reference infrared speckle image and the first depth image are obtained by the same camera, so the pixel points in the two images are aligned: the preset central region in the first depth image can be determined from the preset central region in the preset reference infrared speckle image, and likewise the preset peripheral region in the first depth image can be determined from the preset peripheral region in the preset reference infrared speckle image.
In some embodiments, whether the target region is in the preset central region or the preset peripheral region of the first depth image is determined according to the position of the target region in the first depth image. The preset regions are determined from the preset reference infrared speckle image; it can be understood that the images corresponding to the preset central region and the preset peripheral region together form the complete reference infrared speckle image. First, it is determined whether the pixel points included in the target region are located in the preset central region; when a certain proportion of the pixel points included in the target region are located in the preset central region, the target region is determined to be in the preset central region of the first depth image, indicating that the first depth image is mainly missing depth information in the central region. For example, if 80% of the pixel points included in the target region are located in the preset central region, the target region is determined to be in the preset central region of the first depth image.
For example, as shown in fig. 6A, a third schematic diagram of the target region provided in the embodiment of the present disclosure, fig. 6A includes a first depth image 601, a target region 602, and a preset central region 603. It is calculated that more than 80% of the pixel points included in the target region 602 are located in the preset central region 603, so the target region 602 is determined to be in the preset central region 603 of the first depth image.
When a certain proportion of the pixel points included in the target region are in the preset peripheral region, the target region is determined to be in the preset peripheral region of the first depth image, indicating that the first depth image is mainly missing depth information in the peripheral region. For example, if 80% of the pixel points included in the target region are in the preset peripheral region, the target region is determined to be in the preset peripheral region of the first depth image.
For example, as shown in fig. 6B, fig. 6B is a fourth schematic diagram of the target region provided in the embodiment of the present disclosure, where fig. 6B includes a first depth image 601, a target region 604, and a preset peripheral region 605, and it is calculated that more than 80% of pixel points included in the target region 604 are located in the preset peripheral region 605, so that it is determined that the target region 604 is located in the preset peripheral region 605 of the first depth image.
Then, under the condition that the target area is determined to be in a preset central area of the first depth image, determining that the exposure state of the first depth image is an overexposure state; and under the condition that the target area is determined to be in the preset peripheral area of the first depth image, determining that the exposure state of the first depth image is an underexposure state.
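Putting the area check and the position check together, the sketch below illustrates the exposure-state decision; the 80% proportion comes from the examples above, while the mask-based interface, the function name, and the fallback to the underexposure state when the target region is not central are assumptions for illustration.

```python
import numpy as np

def classify_exposure(target_mask: np.ndarray, center_mask: np.ndarray,
                      preset_area: int, proportion: float = 0.8) -> str:
    """Classify the first depth image from the target region and the preset central region."""
    area = int(target_mask.sum())
    if area < preset_area:
        return "normal"                                # missing region too small to matter
    in_center = int(np.logical_and(target_mask, center_mask).sum())
    if in_center / area >= proportion:
        return "overexposed"                           # target region in the central region
    return "underexposed"                              # target region in the peripheral region
```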
In some embodiments, all the pixel points in the first depth image are traversed, and pixel points whose pixel values are smaller than a preset pixel value are determined as target pixel points. If the ratio of the number of target pixel points to the total number of pixel points in the first depth image is smaller than a preset threshold, the depth information in the first depth image is complete and the image is in a normal state; this case requires no further processing.
If the ratio of the number of target pixel points in the first depth image to the total number of all pixel points in the first depth image is greater than or equal to a preset threshold value, it indicates that the depth information in the first depth image is incomplete and may be in an overexposure state or an underexposure state.
Whether the exposure state of the first depth image is an overexposure state or an underexposure state is then determined according to the positions of the target pixel points. First, the target region is determined from the positions of the target pixel points. If the proportion of target pixel points located in the preset central region of the first depth image is greater than or equal to a preset proportion, the target region is determined to be in the preset central region of the first depth image, indicating that the depth information of the preset central region is incomplete, and the exposure state of the first depth image is determined to be an overexposure state. Otherwise, if that proportion is smaller than the preset proportion, the depth information of the preset central region is complete; since target pixel points nonetheless exist in the first depth image, they lie in the region outside the preset central region, i.e., the target region is in the preset peripheral region of the first depth image, and the exposure state of the first depth image can be determined to be an underexposure state.
In some embodiments, the first depth image is output when its exposure state is determined to be the normal state, indicating that the depth information of the first depth image is complete or that the missing depth information does not affect the acquisition of the three-dimensional coordinates of the photographed object. The three-dimensional coordinates of the photographed object are then obtained from the first depth image, which suits diverse application scenarios and meets diverse user needs.
In the above embodiments, according to the depth value of each pixel point in the first depth image, pixel points whose depth change relative to adjacent pixel points exceeds the preset difference, or whose depth values are smaller than a preset depth value, are determined as target pixel points, and the target region, i.e., the region with missing depth information, is determined from them so as to determine the exposure state of the first depth image. By determining the exposure state, an appropriate implementation can be accurately selected to obtain a target depth image with complete depth information: a normally exposed first depth image is output directly, while an overexposed or underexposed first depth image is screened out for subsequent processing.
And S403, determining a target exposure value according to the exposure state and the default exposure value.
In some embodiments, when the exposure state is the overexposure state, the target exposure value is determined, from the default exposure value, to be an exposure value smaller than the default, so that the camera can recover at the target (low) exposure value the depth information missing at the default exposure value.
In some embodiments, when the exposure state is the underexposure state, the target exposure value is determined, from the default exposure value, to be an exposure value greater than the default, so that the camera can recover at the target (high) exposure value the depth information missing at the default exposure value.
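A sketch of S403; the patent only requires that the target exposure value be lower than the default in the overexposure case and higher in the underexposure case, so the fixed adjustment step here is purely an illustrative assumption.

```python
def target_exposure(default_ev: float, state: str, step: float = 0.5) -> float:
    """Pick the target exposure value from the exposure state (step size assumed)."""
    if state == "overexposed":
        return default_ev - step       # low exposure recovers the overexposed center
    if state == "underexposed":
        return default_ev + step       # high exposure recovers the dark periphery
    return default_ev                  # normal state: keep the default
```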
S404, acquiring a first speckle image based on the default exposure value, and acquiring a second speckle image based on the target exposure value.
In some embodiments, the camera collects the first infrared image and the first infrared speckle image based on the default exposure value, and a difference calculation between them removes unnecessary interference information and yields the first speckle image at the default exposure value. Likewise, the camera collects the second infrared image and the second infrared speckle image based on the target exposure value, and the second speckle image is obtained by difference calculation.
When the first depth image is in an overexposure state, the first infrared image and first infrared speckle image are acquired at the default exposure value, i.e., the exposure value corresponding to the first depth image, yielding the first speckle image; the target exposure value is smaller than the default exposure value, so it can be understood that the second infrared image and second infrared speckle image are collected at a low exposure value, yielding the second speckle image at that low exposure value.
When the first depth image is in an underexposure state, the first infrared image and first infrared speckle image are likewise acquired at the default exposure value, yielding the first speckle image; the target exposure value is greater than the default exposure value, the second infrared image and second infrared speckle image are collected at this high exposure value, and the second speckle image at the high exposure value is obtained through difference calculation.
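A sketch of the difference calculation in S404, assuming 8-bit infrared frames; the patent does not specify the exact operator, so the signed subtraction and clipping below are illustrative.

```python
import numpy as np

def speckle_image(ir_image: np.ndarray, ir_speckle_image: np.ndarray) -> np.ndarray:
    """Isolate the speckle pattern by differencing the two frames taken at one exposure."""
    diff = ir_speckle_image.astype(np.int16) - ir_image.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)   # remove ambient-IR interference
```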
And S405, determining a target depth image according to the first speckle image and the second speckle image.
The following description is made with respect to a process of determining a target depth image from two cases, an overexposed state and an underexposed state, respectively, in accordance with an exposure state of a first depth image:
(1) overexposure status
In some embodiments, the target center region in the second speckle image is determined with the first depth image in an overexposed state. First, a first area is determined in a first depth image according to a target area and a preset central area, and a target central area corresponding to the first area is determined in a second speckle image according to position information of the first area. And determining a target peripheral area corresponding to a second area in the second speckle image according to the position information of the second area, wherein the second area is an area except the first area in the first depth image.
For example, as shown in fig. 7A, a schematic diagram of determining the target speckle image in an embodiment of the disclosure: (a) in fig. 7A includes the first depth image 701, where the target region 702 lies entirely within the preset central region 703 but does not occupy it entirely. The size of the preset central region 703 can then be taken as the size of the target central region 704: as shown in (b) of fig. 7A, the region with the same position information is determined from the second speckle image 701b, according to the position information of the preset central region 703, as the target central region 704. As shown in (c) of fig. 7A, the region at the same position is determined from the first speckle image 701a, according to the position information of the region other than the preset central region 703, namely the preset peripheral region 705, as the target peripheral region 706. The target speckle image 701c shown in (d) of fig. 7A is then determined from the target central region 704 and the target peripheral region 706.
As another example, as shown in fig. 7B, a schematic diagram of determining the target central region in an embodiment of the present disclosure: (a) in fig. 7B includes the first depth image 701, where the target region 702 is partially located in the preset central region 703 and the remainder exceeds it. The first region in the first depth image is determined according to the position information of the preset central region 703 and of the exceeding portion of the target region 702; as shown in (b) of fig. 7B, the region with the same position information as the first region is determined from the second speckle image 701b as the target central region 714. As shown in (c) of fig. 7B, the region other than the first region is determined as a second region based on the position information of the first region in the first depth image, and the region with the same position information as the second region is determined from the first speckle image 701a as the target peripheral region 715. The target speckle image 701c shown in (d) of fig. 7B is then determined from the target central region 714 and the target peripheral region 715.
Through the above embodiments, when the first depth image is in the overexposure state and the depth information of the target region is therefore missing, the target central region of the low-exposure second speckle image is used to fill in the missing depth information of the target region of the first depth image. In other words, the target central region of the second speckle image and the target peripheral region of the first speckle image are fused and spliced into a target speckle image with complete depth information, and the target depth image is determined from that target speckle image.
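Purely as an illustration, a minimal NumPy sketch of this fuse-and-splice step follows. The function names and the axis-aligned box mask are assumptions; the disclosure only specifies fusing the two regions by their position information:

```python
import numpy as np

def fuse_speckle_images(first: np.ndarray, second: np.ndarray,
                        mask: np.ndarray) -> np.ndarray:
    """Splice a target speckle image: pixels where mask is True are taken
    from the second (re-exposed) speckle image, the rest from the first
    (default-exposure) one."""
    fused = first.copy()
    fused[mask] = second[mask]
    return fused

def central_mask(shape, top, left, bottom, right):
    """Boolean mask marking the target central region (overexposed case),
    assumed here to be an axis-aligned box from the region's position info."""
    m = np.zeros(shape, dtype=bool)
    m[top:bottom, left:right] = True
    return m
```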
(2) Underexposure state
In some embodiments, with the first depth image in the underexposure state, the target peripheral region is determined in the second speckle image. First, a third region is determined in the first depth image according to the target region and the preset peripheral region, and the target peripheral region corresponding to the third region is determined in the second speckle image according to the position information of the third region. Then, a fourth region, the region of the first depth image other than the third region, is determined, and the target central region corresponding to the fourth region is determined in the first speckle image according to the position information of the fourth region.
For example, as shown in fig. 8A, a schematic diagram of determining the target speckle image in an embodiment of the present disclosure: (a) in fig. 8A includes the first depth image 801, where the target region 802 lies entirely within the preset peripheral region 803 but does not occupy it entirely. The size of the preset peripheral region 803 can then be taken as the size of the target peripheral region 804: as shown in (b) of fig. 8A, the region with the same position information is determined from the second speckle image 801b, according to the position information of the preset peripheral region 803, as the target peripheral region 804. As shown in (c) of fig. 8A, the region at the same position is determined from the first speckle image 801a, according to the position information of the region other than the preset peripheral region 803, namely the preset central region, as the target central region 805. The target speckle image 801c shown in (d) of fig. 8A is then determined from the target peripheral region 804 and the target central region 805.
As another example, as shown in fig. 8B, a fourth schematic diagram of determining the target speckle image in an embodiment of the present disclosure: (a) in fig. 8B includes the first depth image 801, where the target region 802 is partially located in the preset peripheral region 803 and the remainder exceeds it. A third region in the first depth image is determined according to the position information of the preset peripheral region 803 and of the exceeding portion of the target region 802. As shown in (b) of fig. 8B, the region with the same position information as the third region is determined from the second speckle image 801b as the target peripheral region 811. The region other than the third region is determined as a fourth region based on the position information of the third region in the first depth image, and, as shown in (c) of fig. 8B, the region with the same position information as the fourth region is determined from the first speckle image 801a as the target central region 812. The target speckle image 801c shown in (d) of fig. 8B is then determined from the target peripheral region 811 and the target central region 812.
Through the above embodiments, when the first depth image is in the underexposure state and the depth information of the target region is therefore missing, the target peripheral region of the high-exposure second speckle image is used to fill in the missing depth information of the target region of the first depth image. In other words, the target peripheral region of the second speckle image and the target central region of the first speckle image are fused and spliced into a target speckle image with complete depth information, and the target depth image is determined from that target speckle image.
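The underexposure case reuses the sketch above with the mask inverted; the box coordinates and the variables first_speckle and second_speckle are illustrative assumptions:

```python
# Underexposed case: the mask marks the target peripheral region instead, so
# the high-exposure capture fills the periphery while the default-exposure
# capture keeps the centre. The box coordinates are illustrative only.
top, left, bottom, right = 120, 160, 360, 480   # assumed central box
peripheral_mask = ~central_mask(first_speckle.shape, top, left, bottom, right)
target_speckle = fuse_speckle_images(first_speckle, second_speckle, peripheral_mask)
```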
In some embodiments, since the speckles in the target speckle image are complete, the depth information in the target depth image obtained from it is also complete. In determining the target depth image, a target disparity map is first computed from the target speckle image and a preset reference speckle image; the target depth image is then obtained from the target disparity map through the conversion formula between a disparity image and a depth image.
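The disclosure does not spell the conversion formula out here; for a structured-light module the standard triangulation relation Z = f·B/d (focal length f in pixels, projector–camera baseline B, disparity d) is the usual choice, and the sketch below assumes it:

```python
import numpy as np

def depth_from_disparity(disparity: np.ndarray, focal_px: float,
                         baseline_m: float) -> np.ndarray:
    """Triangulate depth as Z = f * B / d; zero-disparity pixels are kept
    at zero depth to mark them as invalid."""
    depth = np.zeros_like(disparity, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```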
After the target depth image is obtained, in order to determine the three-dimensional coordinates of the object to be shot in the target depth image, the position information and the depth value of each pixel point in the target depth image are determined first, based on the pinhole model:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \quad (4)$$

Recording the intrinsic matrix in formula (4) as $K$ and the extrinsic matrix $[R\ \ t]$ as $T$, the formula can be expressed as:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,T \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \quad (5)$$

In formula (5), $(X_w, Y_w, Z_w)$ is the point $P$ in the world coordinate system corresponding to a pixel point in the target depth image, the origin of the world coordinate system being established at the optical center of the camera; $(u, v)$ is the position of the pixel point corresponding to $P$ in the image coordinate system; $Z_c$ is the depth value of the pixel point corresponding to $P$ in the camera coordinate system; $K$ is the internal parameter matrix of the camera, obtained by calibrating the camera parameters; $T$ is the external parameter matrix of the camera and, with the world origin at the optical center, is an identity transform. Solving formula (5) then gives the three-dimensional coordinates of point $P$:

$$X_w = \frac{(u - c_x)\,Z_c}{f_x}, \qquad Y_w = \frac{(v - c_y)\,Z_c}{f_y}, \qquad Z_w = Z_c$$
The point P is any pixel point in the target depth image, so the three-dimensional coordinates of every pixel point can be obtained through the above formulas, and the three-dimensional coordinates of the shot object in three-dimensional space can be determined from the coordinates of its pixel points. This meets diversified user requirements, for example short-range shooting scenes for face recognition, or long-range shooting scenes for motion-sensing games or fitness.
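As a small illustration of this back-projection, the sketch below implements the closed-form solution above; the intrinsic matrix K is assumed to come from the camera calibration mentioned in the text:

```python
import numpy as np

def back_project(u: float, v: float, z: float, K: np.ndarray) -> np.ndarray:
    """Recover the 3-D point for pixel (u, v) with depth value z, given the
    intrinsic matrix K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] and an
    identity extrinsic matrix (world origin at the camera's optical centre)."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
```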
In summary, the present disclosure provides a method for obtaining a depth image: an image is collected through the configured camera, the exposure state of the collected first depth image is determined, a target exposure value is determined according to the exposure state and the default exposure value, a first speckle image is obtained based on the default exposure value and a second speckle image based on the target exposure value, and the target depth image is determined from the two speckle images. This solves the problem of depth information loss in the depth image due to the complexity of the shooting scene, improves the scene adaptability of depth image acquisition, and is highly operable.
The embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements each process of the method for obtaining a depth image in the foregoing method embodiments and achieves the same technical effect, which is not repeated here.
The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The embodiments of the present disclosure provide a computer program product storing a computer program which, when executed by a processor, implements each process of the method for obtaining a depth image in the foregoing method embodiments and achieves the same technical effect; to avoid repetition, details are not repeated here.
As will be appreciated by one of skill in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium.
In the present disclosure, the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In the present disclosure, the memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
In the present disclosure, computer-readable media include permanent and non-permanent, removable and non-removable storage media. Storage media may implement information storage by any method or technology, and the information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but possibly also other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A display device, comprising:
a camera configured to capture an image;
a controller configured to: determining an exposure state of the first depth image;
determining a target exposure value according to the exposure state and a default exposure value;
controlling the camera to acquire a first speckle image based on the default exposure value and a second speckle image based on the target exposure value;
and determining a target depth image according to the first speckle image and the second speckle image.
2. The display device of claim 1, wherein the controller is specifically configured to:
determining a target pixel point from all pixel points of the first depth image, wherein the target pixel point is a pixel point whose depth value difference from an adjacent pixel point is greater than or equal to a preset difference value;
determining a target area in the first depth image based on the target pixel point;
and determining the exposure state of the first depth image according to the position of the target area in the first depth image.
3. The display device of claim 2, wherein the controller is further configured to:
determining the area of the target region;
the controller is specifically configured to:
and determining the exposure state of the first depth image according to the position of the target region in the first depth image under the condition that the area of the target region is larger than or equal to a preset area.
4. The display device of claim 3, wherein the controller is specifically configured to:
and determining that the exposure state of the first depth image is an overexposure state under the condition that the target area is determined to be in a preset central area of the first depth image according to the position of the target area in the first depth image.
5. The display device of claim 3, wherein the controller is specifically configured to:
and under the condition that the target area is determined to be in a preset peripheral area of the first depth image according to the position of the target area in the first depth image, determining that the exposure state of the first depth image is an underexposure state.
6. The display device according to claim 4, wherein the exposure state is an overexposure state, the target exposure value is less than the default exposure value, and the controller is specifically configured to:
determining a target center region in the second speckle image,
determining a target peripheral region in the first speckle image;
the target central area corresponds to a first area, and the first area is an area determined according to the target area and the preset central area in the first depth image; the target peripheral region corresponds to a second region, and the second region is a region of the first depth image other than the first region;
and determining the target depth image according to the target central area and the target peripheral area.
7. The display device of claim 5, wherein the exposure state is an underexposure state, the target exposure value is greater than the default exposure value, and the controller is specifically configured to:
determining a target peripheral area in the second speckle image,
determining a target central region in the first speckle image;
the target peripheral area corresponds to a third area, and the third area is an area determined according to the target area and the preset peripheral area in the first depth image; the target central region corresponds to a fourth region, and the fourth region is a region outside the third region in the first depth image;
and determining the target depth image according to the target peripheral area and the target central area.
8. The display device according to claim 1, wherein the controller is specifically configured to:
controlling the camera to acquire a first infrared image and a first infrared speckle image based on the default exposure value, and acquire a second infrared image and a second infrared speckle image based on the target exposure value;
determining the first speckle image from the first infrared image and the first infrared speckle image, and determining the second speckle image from the second infrared image and the second infrared speckle image.
9. The display device according to claim 1, wherein the controller is further configured to:
after determining a target depth image according to the first speckle image and the second speckle image, determining three-dimensional coordinates of a shot object in the target depth image based on position information and depth values of all pixel points in the target depth image.
10. A method for acquiring a depth image is characterized by comprising the following steps:
collecting an image;
determining an exposure state of the first depth image;
determining a target exposure value according to the exposure state and a default exposure value;
acquiring a first speckle image based on the default exposure value and acquiring a second speckle image based on the target exposure value;
and determining a target depth image according to the first speckle image and the second speckle image.
CN202210474625.6A 2022-04-29 2022-04-29 Display device and depth image acquisition method Pending CN115018899A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210474625.6A CN115018899A (en) 2022-04-29 2022-04-29 Display device and depth image acquisition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210474625.6A CN115018899A (en) 2022-04-29 2022-04-29 Display device and depth image acquisition method

Publications (1)

Publication Number Publication Date
CN115018899A true CN115018899A (en) 2022-09-06

Family

ID=83066995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210474625.6A Pending CN115018899A (en) 2022-04-29 2022-04-29 Display device and depth image acquisition method

Country Status (1)

Country Link
CN (1) CN115018899A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294187A (en) * 2022-10-08 2022-11-04 合肥的卢深视科技有限公司 Image processing method of depth camera, electronic device and storage medium

Similar Documents

Publication Publication Date Title
EP3579544B1 (en) Electronic device for providing quality-customized image and method of controlling the same
JP6214236B2 (en) Image processing apparatus, imaging apparatus, image processing method, and program
US10762653B2 (en) Generation apparatus of virtual viewpoint image, generation method, and storage medium
US10437545B2 (en) Apparatus, system, and method for controlling display, and recording medium
EP3687161B1 (en) Method for image shooting, apparatus, and storage medium
JP2014197824A5 (en)
CN106416222A (en) Real-time capture exposure adjust gestures
US20210099669A1 (en) Image capturing apparatus, communication system, data distribution method, and non-transitory recording medium
US10951873B2 (en) Information processing apparatus, information processing method, and storage medium
CN113613072B (en) Multi-channel screen-throwing display method and display equipment
CN112073798B (en) Data transmission method and equipment
US11917329B2 (en) Display device and video communication data processing method
TW201428685A (en) Image processor and display method for fisheye image thereof
CN114296949A (en) Virtual reality equipment and high-definition screen capturing method
US11818492B2 (en) Communication management apparatus, image communication system, communication management method, and recording medium
CN108665510B (en) Rendering method and device of continuous shooting image, storage medium and terminal
CN113645494A (en) Screen fusion method, display device, terminal device and server
CN116711316A (en) Electronic device and operation method thereof
US10216381B2 (en) Image capture
CN115018899A (en) Display device and depth image acquisition method
CN108769538B (en) Automatic focusing method and device, storage medium and terminal
CN110928509A (en) Display control method, display control device, storage medium, and communication terminal
US11062422B2 (en) Image processing apparatus, image communication system, image processing method, and recording medium
WO2023174009A1 (en) Photographic processing method and apparatus based on virtual reality, and electronic device
CN107105158B (en) Photographing method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination