CN113329165A - Imaging display device, wearable device and imaging display system - Google Patents

Imaging display device, wearable device and imaging display system

Info

Publication number
CN113329165A
Authority
CN
China
Prior art keywords
imaging
image information
unit
display
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010130040.3A
Other languages
Chinese (zh)
Inventor
西出洋祐
市川武史
沖田彰
坪井俊紀
中辻七朗
吉岡宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to CN202010130040.3A
Publication of CN113329165A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Abstract

The present disclosure relates to an imaging display device, a wearable device, and an imaging display system. The imaging display device includes an imaging unit having a plurality of photoelectric conversion elements, a processing unit, and a display unit. The processing unit processes the signal transmitted from the imaging unit. The display unit displays an image based on the signal transmitted from the processing unit. The imaging unit acquires first image information at a first time. The processing unit generates first predicted image information for a second time later than the first time based on the first image information. The display unit then displays an image based on the first predicted image information.

Description

Imaging display device, wearable device and imaging display system
Technical Field
The present disclosure relates to an imaging display device, a wearable device, and an imaging display system.
Background
Wearable devices with imaging display devices, known as head-mounted displays or smart glasses, are known. In one such wearable device system, the scenery in front of the user is captured as an image by the imaging display device, and the captured image is displayed on the display device. With this system, even though the external scenery is viewed via the display device, the user can feel as if directly viewing the external scenery.
In order to miniaturize the above display device, Japanese Patent Application Laid-Open No. 2002-.
Disclosure of Invention
According to an aspect of the present disclosure, an imaging display apparatus includes: an imaging unit including a plurality of photoelectric conversion elements; a processing unit configured to process a signal transmitted from the imaging unit; and a display unit configured to display an image based on the signal transmitted from the processing unit, wherein the imaging unit acquires first image information at a first time, wherein the processing unit generates first predicted image information for a second time later than the first time based on the first image information, and wherein the display unit displays an image based on the first predicted image information.
Other features of the present disclosure will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Drawings
Fig. 1A is a schematic diagram illustrating an imaging display apparatus according to a first exemplary embodiment. Fig. 1B is a schematic diagram illustrating a modification of the imaging display apparatus according to the first exemplary embodiment. Fig. 1C is a schematic diagram illustrating another modification of the imaging display apparatus according to the first exemplary embodiment. Fig. 1D is a schematic diagram illustrating still another example of the imaging display device according to the first exemplary embodiment.
Fig. 2A and 2B are diagrams illustrating an operation of the imaging display apparatus according to the first exemplary embodiment.
Fig. 3 is a diagram illustrating a comparative example of the operation of the imaging display apparatus according to the first exemplary embodiment.
Fig. 4 is a schematic diagram illustrating an imaging display apparatus according to a second exemplary embodiment.
Fig. 5 is a diagram illustrating an operation of an imaging display apparatus according to a second exemplary embodiment.
Fig. 6A and 6B are diagrams illustrating an operation of the imaging display apparatus according to the third exemplary embodiment.
Fig. 7 is a schematic diagram illustrating an imaging display apparatus according to a fourth exemplary embodiment.
Fig. 8 is a diagram illustrating an operation of an imaging display apparatus according to a fifth exemplary embodiment.
Fig. 9 is a diagram illustrating an operation of an imaging display apparatus according to a sixth exemplary embodiment.
Fig. 10A and 10B are schematic diagrams illustrating a wearable device. Fig. 10C is a sectional view schematically illustrating a positional relationship between the imaging unit and the display unit. Fig. 10D and 10E are plan views schematically illustrating a positional relationship between the imaging unit and the display unit.
Fig. 11 is a schematic diagram illustrating an imaging display system.
Fig. 12 is a schematic diagram illustrating an imaging display apparatus according to a tenth exemplary embodiment.
Fig. 13 is a schematic diagram illustrating an operation of an imaging display apparatus according to an eleventh exemplary embodiment.
Detailed Description
Hereinafter, exemplary embodiments will be described with reference to the accompanying drawings. In the exemplary embodiments described below, description of a configuration similar to that already described in another exemplary embodiment will be omitted. In addition, the exemplary embodiments may be appropriately changed or combined.
The first exemplary embodiment will be described below. The present exemplary embodiment will be described with reference to fig. 1A to 1D and fig. 2A and 2B. Fig. 1A is a schematic diagram illustrating an imaging display apparatus 100 according to the present exemplary embodiment. The imaging display apparatus 100 includes an imaging unit 101, a processing unit 102, and a display unit 103.
The imaging unit 101 includes a plurality of light receiving elements. For example, a photoelectric conversion element is used as the light receiving element. The light receiving elements perform an imaging operation that acquires image information by converting light (external information) entering from the outside into an electric signal. Based on the image information from the imaging unit 101, the processing unit 102 generates information about an image expected to be captured by the imaging unit 101 (hereinafter referred to as "predicted image information"). The display unit 103 includes a plurality of light emitting elements, each of which converts an electric signal into light. The display unit 103 displays (outputs) an image based on the predicted image information generated by the processing unit 102. A plurality of pixels are arranged in arrays in the imaging unit 101 and the display unit 103. Each pixel of the imaging unit 101 includes at least one light receiving element, and each pixel of the display unit 103 includes at least one light emitting element. The processing unit 102 receives image information from the imaging unit 101 and outputs predicted image information to the display unit 103. In addition, the processing unit 102 may output control signals for the imaging operation and the display operation to the imaging unit 101 and the display unit 103, respectively.
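To make the data flow just described concrete, the following Python sketch models the capture, predict, display loop for one frame. It is an illustrative outline only; the class and method names (ImagingDisplayDevice, capture, predict, show) are hypothetical and not part of this disclosure.

    # Minimal sketch of the capture -> predict -> display loop described above.
    # All names here are hypothetical illustrations, not from the patent.

    class ImagingDisplayDevice:
        def __init__(self, imaging_unit, processing_unit, display_unit):
            self.imaging_unit = imaging_unit        # plural photoelectric conversion elements
            self.processing_unit = processing_unit  # generates predicted image information
            self.display_unit = display_unit        # plural light emitting elements

        def run_one_frame(self):
            # The imaging unit converts external light into an electric signal.
            image_info = self.imaging_unit.capture()
            # The processing unit generates predicted image information for a later time.
            predicted_info = self.processing_unit.predict(image_info)
            # The display unit outputs an image based on the predicted information.
            self.display_unit.show(predicted_info)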
Here, a modification of the imaging display apparatus 100 according to the present exemplary embodiment shown in Fig. 1A will be described. Fig. 1B is a schematic diagram illustrating such a modification. In the imaging display apparatus 120, the processing unit 102 shown in Fig. 1A includes an artificial intelligence (AI) unit 104. In the present exemplary embodiment, the AI unit 104 has a deep learning (deep structured learning) function. In Fig. 1B, the image information captured by the imaging unit 101 is converted into predicted image information by the processing unit 102 including the AI unit 104.
Fig. 1C is a schematic diagram illustrating another modification of the imaging display apparatus 100 shown in Fig. 1A. The processing unit 102 of the imaging display apparatus 130 communicates with a processing device 105; the two are connected via a network. The processing device 105 is disposed outside the imaging display apparatus 130, for example on a cloud server. In the imaging display apparatus 130, the AI unit 104 is included not in the processing unit 102 but in the processing device 105. The processing unit 102 and the processing device 105 exchange information, thereby generating predicted image information based on the image information. In Fig. 1C, the image information captured by the imaging unit 101 is converted into predicted image information by the processing unit 102, which has acquired information from the processing device 105. In this way, the imaging display apparatus 130 can generate predicted image information using information stored in an external device.
Fig. 1D is a schematic diagram illustrating still another modification of the imaging display apparatus 100 shown in Fig. 1A. The processing unit 102 of the imaging display apparatus 140 includes the AI unit 104. The processing unit 102 communicates with a processing device 106, which in turn communicates with another processing device 105. The processing device 106 is disposed on the cloud and stores data. The processing device 105, which also includes an AI unit 104, is disposed separately from the imaging display apparatus 140 and the processing device 106. The processing unit 102 and the processing device 106, and the processing device 106 and the processing device 105, are each connected via a network. In Fig. 1D, the processing unit 102 receives setting information stored in the processing device 106 and generates predicted image information based on it. The setting information includes basic information about the environment or the target object and various values used for generating predicted image information. In addition, the processing unit 102 transmits several pieces of information, including the image information from the imaging unit 101, to the processing device 106; these are forwarded to the processing device 105. From the received information, the processing device 105 generates new values for generating predicted image information and transmits them to the processing device 106. The processing device 106 updates the basic information and values stored in it and retains them as the latest information. As described above, the imaging display apparatus 140 can generate predicted image information using information stored in external devices.
In fig. 1A, the processing unit 102 predicts image information to be captured by the imaging unit 101 based on the image information acquired by the imaging unit 101, and transmits the image information to the display unit 103 as predicted image information. In addition, the processing unit 102 may process other types of information such as temperature/humidity information, acceleration information, and pressure information together with the image information. The same can be said for the processing unit 102 in fig. 1B, the processing unit 102 and the processing device 105 in fig. 1C, and the processing unit 102 and the processing devices 105 and 106 in fig. 1D described in the modification.
Next, the operation of the imaging display apparatus 100 of the present exemplary embodiment will be described with reference to Figs. 2A and 2B, which illustrate the operation and the frame-by-frame relationship between image information and predicted image information. In Figs. 2A and 2B, the image information acquired at time Tn is denoted "An", and the future image information (predicted image information) generated by the processing unit 102 is denoted "Bn".
The operation of the imaging display apparatus 100 according to the present exemplary embodiment will be described with reference to Fig. 2A. In this operation, the imaging unit 101 performs imaging operations to acquire image information A-2 at time T-2, A-1 at time T-1, A0 at time T0, and A+1 at time T+1. Next, the processing unit 102 generates predicted image information B0, B+1, and B+2 based on the input image information A-1, A0, and A+1, respectively, and outputs it to the display unit 103. The display unit 103 then displays an image based on the predicted image information B0 at time T0, an image based on B+1 at time T+1, and an image based on B+2 at time T+2.
In other words, the imaging unit 101 performs an imaging operation to acquire image information A-1 at a certain time T-1, and performs another imaging operation to acquire image information A0, different from A-1, at a time T0 later than T-1. At time T0, the display unit 103 performs a display operation to display an image based on the predicted image information B0 generated from the image information A-1. Further, at a time T+1 later than T0, the imaging unit 101 performs an imaging operation to acquire image information A+1, different from A0. The display unit 103 then performs a display operation to display an image based on the predicted image information B+1 generated from the image information A0.
Here, a comparative example will be described with reference to Fig. 3. In an imaging display apparatus that does not have the processing unit 102 described in the present exemplary embodiment, image information A-1 captured by the imaging unit 101 at time T-1 is displayed by the display unit 103 at time T0.
The difference between the configuration of the present exemplary embodiment in Fig. 2A, in which predicted image information is displayed, and the configuration in Fig. 3, in which the image information captured by the imaging unit 101 is displayed as it is, will now be described. If the current time is T0, the image information A0 corresponds to the real phenomenon (i.e., the actual image) at that time. In the present exemplary embodiment, the information serving as the source of the image displayed by the display unit 103 is the predicted image information B0. If the image information captured by the imaging unit 101 were used as it is, the source of the displayed image would be the image information A-1. Here, the amount of change satisfies (image information A-1 − image information A0) ≥ (predicted image information B0 − image information A0). Thus, with the configuration described in the present exemplary embodiment, an imaging display apparatus that reduces the difference between the real phenomenon and the displayed image can be obtained.
The timing for displaying the predicted image information of the present exemplary embodiment will now be described. The processing unit 102 generates predicted image information so as to reduce the delay between the image information captured by the imaging unit 101 at a certain time and the image displayed by the display unit 103. It is desirable to set the display timing of the predicted image information as follows.
First, assume that the imaging unit 101 captures an image at an arbitrary time Tn. The processing unit 102 generates predicted image information based on the image information acquired at time Tn. The time at which the display unit 103 displays the image based on the predicted image information generated with respect to time Tn is denoted Tm. The difference ΔT between the imaging timing and the display timing is expressed by Equation 1:

ΔT = Tn − Tm    (Equation 1)
The display frame rate DFR (fps: frames per second) is the number of images displayed by the display unit 103 per second. The imaging display apparatus is controlled such that the difference ΔT satisfies Equation 2, and more preferably Equation 3:

−2/DFR ≤ ΔT ≤ 2/DFR    (Equation 2)
−1/DFR ≤ ΔT ≤ 1/DFR    (Equation 3)
For example, when the display frame rate is 240 fps, the time taken to display one image (one frame) after capturing it is approximately 4 × 10⁻³ seconds. Thus, the difference ΔT is expressed as follows:

−4 × 10⁻³ ≤ ΔT ≤ 4 × 10⁻³    (Equation 4)
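As a concrete illustration of Equations 1 to 4, the following sketch checks whether a given imaging/display timing pair satisfies the condition; the function name and the example values are assumptions for illustration only.

    def delta_t_within_limit(t_n, t_m, dfr, factor=1.0):
        """Check |Tn - Tm| against factor/DFR.

        factor=2.0 corresponds to Equation 2; factor=1.0 to the stricter Equation 3.
        """
        delta_t = t_n - t_m  # Equation 1
        return abs(delta_t) <= factor / dfr

    # Example: at DFR = 240 fps the bound is 1/240, i.e. about 4 x 10^-3 s (Equation 4).
    print(delta_t_within_limit(t_n=0.100, t_m=0.103, dfr=240))  # True: 3 ms is within ~4.2 ms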
By displaying the image based on the predicted image information at the above timing, a moving image can be displayed with only a small delay between the actual image and the displayed image. This moving image display may be referred to as "real-time display". Thus, in the present exemplary embodiment, real-time display (or, strictly speaking, pseudo real-time display) can be performed. The present disclosure applies effectively to moving images as well as still images.
In addition to setting the display timing as described above, the timing difference can be used in generating the predicted image information. Let An denote the image information captured by the imaging unit 101 at an arbitrary time, and Dn denote the image information displayed by the display unit 103 at the same time. The difference between them (i.e., the time difference between the image information An and the image information Dn) can be expressed as ΔA = Dn − An. In the exemplary embodiment of Fig. 2A, the image information Dn is equal to the image information Bn (Dn = Bn). In other words, the time difference between the image information captured by the imaging unit 101 at a certain time (i.e., the real phenomenon or actual image at that time) and the image information displayed by the display unit 103 may be ±4 × 10⁻³ seconds. When the time difference is ±4 × 10⁻³ seconds, the image displayed by the display unit 103 is either an image delayed by 4 × 10⁻³ seconds with respect to the actual image at that time, or a future image 4 × 10⁻³ seconds ahead of the actual image. It is desirable to generate the predicted image information under these conditions. Further, the image information An and Dn may be compared, for example, by using their raw data; when the root mean square of the difference is calculated, the image information Dn should fall within the ±4 × 10⁻³ second range. The processing unit 102 uses this difference information to set the parameters for generating the next piece of predicted image information.
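The root-mean-square comparison of the raw data of An and Dn mentioned above might be sketched as follows. This is a minimal illustration assuming the raw frames are NumPy arrays; the function name is hypothetical.

    import numpy as np

    def rms_difference(a_n, d_n):
        """Root mean square of the pixelwise difference between raw frames An and Dn."""
        diff = d_n.astype(np.float64) - a_n.astype(np.float64)
        return float(np.sqrt(np.mean(diff ** 2)))

    # The processing unit could feed this value back when setting the parameters
    # used to generate the next piece of predicted image information.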
As shown in Fig. 3, when the image information captured by the imaging unit 101 is displayed on the display unit 103 as it is, a delay occurs. In particular, when additional image processing is performed, the delay can reach 100 × 10⁻³ seconds. By generating the predicted image information of the present exemplary embodiment, however, an image having no time difference from the actual image can be displayed.
Examples of the additional image processing include dark-field image processing for increasing the brightness of a dark image, image enlargement processing for magnifying and displaying a small object, and thermal display processing for displaying a thermal image. With the processing according to the present exemplary embodiment, real-time display can be performed even when such image processing requires additional time.
Next, the operation shown in Fig. 2B will be described. In this operation, the imaging unit 101 performs imaging operations to acquire image information A-2 at time T-2, A-1 at time T-1, A0 at time T0, and A+1 at time T+1. The processing unit 102 generates predicted image information B+1, B+2, and B+3 based on the input image information A-1, A0, and A+1, respectively, and outputs it to the display unit 103. The display unit 103 displays an image based on B+1 at time T0, an image based on B+2 at time T+1, and an image based on B+3 at time T+2. In other words, the processing unit 102 predicts image information that will be captured after time T0, and the display unit 103 displays the image based on this predicted image information at time T0. In this way, information about a future time later than the imaging time can be displayed at the imaging time. By continuously repeating this operation, images of future times later than the actual image are displayed one after another, that is, as a video image.
The image information serving as the source of the predicted image information will now be described. For example, in Fig. 2A, the predicted image information B0 is generated based on the image information A-1. In Fig. 2B, the predicted image information B+1 is generated based on the image information A-1. In other words, one piece of predicted image information is generated from one piece of image information. However, one piece of predicted image information may also be generated from two or more pieces of image information. For example, in Fig. 2A, B0 may be generated based on A-2 and A-1, and in Fig. 2B, B+1 may be generated based on A-2 and A-1. Thus, predicted image information can be generated using at least one piece of image information.
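As a minimal illustration of generating one piece of predicted image information from two pieces of image information, per-pixel linear extrapolation could serve as a crude stand-in for the AI-based prediction described in this disclosure; the approach and names below are assumptions for illustration, not the patent's method.

    import numpy as np

    def extrapolate_next_frame(a_prev, a_curr):
        """Predict frame B(n+1) from frames A(n-1) and A(n) by linear extrapolation.

        B(n+1) = A(n) + (A(n) - A(n-1)); a crude stand-in for AI-based prediction.
        """
        prediction = 2.0 * a_curr.astype(np.float64) - a_prev.astype(np.float64)
        return np.clip(prediction, 0, 255).astype(np.uint8)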
The frame rates of the present exemplary embodiment will be described. First, the number of pieces of image information acquired by the imaging unit 101 per second is defined as the imaging frame rate SFR (fps). The number of pieces of image information displayed by the display unit 103 per second is defined, as above, as the display frame rate DFR (fps). In the present exemplary embodiment, the relationship between the frame rates in Figs. 2A and 2B is SFR = DFR. However, the imaging frame rate and the display frame rate may differ from each other. Specifically, it is desirable that the frame rates satisfy SFR ≥ DFR, because predicted image information can then be generated from a plurality of pieces of captured image information.
Next, the configuration of the imaging display apparatus 100 will be described. Examples of the photoelectric conversion element included in the imaging unit 101 include a photodiode, a photogate, and a photoelectric conversion film. For example, silicon, germanium, indium, gallium, and arsenic may be used as materials for photodiodes and photogates. As examples of the photodiode, a positive-negative (P-N) junction type photodiode, a positive-intrinsic-negative (PIN) type photodiode, and an avalanche type photodiode can be given.
For example, a Complementary Metal Oxide Semiconductor (CMOS) image sensor may be used as the imaging unit 101, and the CMOS image sensor may be a front-side illumination type or a back-side illumination type. In addition, the CMOS image sensor may have a structure in which a semiconductor substrate on which a photodiode is arranged and a semiconductor substrate on which a scanning circuit and a control circuit are arranged are stacked on each other.
The material of the photoelectric conversion film may be organic or inorganic. An organic photoelectric conversion film has at least one organic layer that performs photoelectric conversion disposed between a pair of electrodes; it may also have a plurality of organic layers stacked between the pair of electrodes. An organic layer may be made of a single material or of several materials mixed together, and may be formed by a vacuum vapor deposition process or a coating process. As an inorganic photoelectric conversion film, for example, a quantum dot type photoelectric conversion film using a quantum dot thin film layer containing fine semiconductor crystals in place of an organic layer, or a perovskite type photoelectric conversion film including a photoelectric conversion layer composed of a transition metal oxide having a perovskite structure may be used.
The display unit 103 includes a plurality of light emitting elements. The light emitting element may be a liquid crystal display (LCD) element, an inorganic light emitting diode (LED), an organic LED (OLED), or a quantum dot LED (QLED). For inorganic LEDs, materials such as aluminum, gallium, arsenic, phosphorus, indium, nitrogen, selenium, zinc, diamond, zinc oxide, and/or perovskite semiconductors are used. An inorganic LED having a P-N junction structure formed of these materials emits light with an energy (wavelength) corresponding to the band gap of the materials. An organic LED includes a light-emitting layer containing at least one organic light-emitting material disposed between a pair of electrodes. It may include a plurality of light emitting layers and may have a structure in which a plurality of organic layers are stacked on one another. The light emitting layer may be made of a single material or of several materials mixed together. The emitted light may be fluorescence or phosphorescence, and may be monochromatic (e.g., blue, green, or red) or white. The organic layers may be formed by a vacuum vapor deposition process or a coating process.
In addition, the imaging display device may have a structure in which at least three chips of the imaging unit 101, the processing unit 102, and the display unit 103 are stacked and electrically connected to each other through semiconductor processing.
In the case where the imaging display device 100 of the present exemplary embodiment is used as a wearable device, it is desirable that the amount of data to be processed by the processing unit 102 be as small as possible. This is because the wearable device needs to be reduced in size, weight, and thickness as much as possible, and if the data processing load is small, the chip size of the processing unit 102 can be further reduced. To reduce the data processing load, the AI process may be performed by another device (e.g., a device provided on the cloud) as shown in fig. 1C or 1D. In addition, a method for reducing the resolution of a portion other than the line-of-sight region, a method for processing a portion other than the line-of-sight region as a still image, or a method for processing a portion other than the line-of-sight region in monochrome instead of color may be used as a method for reducing the amount of processing.
If it takes a long time to capture and display an image of a real phenomenon such as a landscape, a difference arises between the real phenomenon and the displayed image. If such a difference exists, the user cannot, for example, perform an operation for capturing a moving object. According to the present exemplary embodiment, however, an imaging display apparatus that reduces the time difference between the real phenomenon and the displayed image can be provided, so that an operation for capturing a moving object can be performed.
Hereinafter, a second exemplary embodiment will be described. The present exemplary embodiment will be described with reference to fig. 4 and 5. Fig. 4 is a schematic diagram illustrating the imaging display apparatus 400 according to the present exemplary embodiment. Similar to the imaging display apparatus 130 shown in fig. 1C, the imaging display apparatus 400 includes an imaging unit 101, a processing unit 102, a display unit 103, and a processing device 105. The imaging display device 400 further comprises a detection unit 107 for detecting environmental information. In the imaging display apparatus 400, each of the processing unit 102 and the processing device 105 includes the AI unit 104.
The detection unit 107 includes at least one sensor. The sensor may detect at least one piece of environmental information. As examples of the environmental information, information on atmospheric temperature, water temperature, humidity, atmospheric pressure, water pressure, and brightness is given. In addition, the detection unit 107 may also acquire physical information such as the acceleration of the object captured by the imaging unit 101. In the present exemplary embodiment, although the detection unit 107 is built in the imaging display apparatus 400, the detection unit 107 may be arranged externally.
For example, the AI unit 104 having the deep learning function generates predicted image information based on the image information acquired from the imaging unit 101 and the environment information acquired from the detection unit 107. At this time, the predicted image information is generated for a time that takes into account the time the processing unit 102 needs to perform its processing. In other words, when an image is captured at a certain time, predicted image information is generated for a future time obtained by adding to that time not only the time taken to capture and display the image but also the time required for processing. The details of the operation are similar to those described in the first exemplary embodiment.
Subsequently, the processing performed by the AI unit 104 will be described using a scene in which people are playing baseball as an example. Fig. 5 is a diagram illustrating the operation of the imaging display apparatus of the present exemplary embodiment, showing examples of four types of images at four different times: T-3, T-2, T-1, and T0.
The row "actual image (under light)" in Fig. 5 illustrates the actual images, that is, the images captured by the imaging unit 101 at times T-3, T-2, T-1, and T0 while a person plays baseball in a bright environment (under light). Because the pitcher and the ball can be clearly identified, the batter can hit and return the ball at time T0.
The row "actual image (in the dark)" illustrates the images captured by the imaging unit 101 at times T-3, T-2, T-1, and T0 while a person plays baseball in a dark environment (in the dark). Since the ball cannot be seen with the naked eye in such a dark environment, the batter cannot hit and return the ball at time T0. Thus, although the movements and positions of the pitcher and the ball at times T-3, T-2, and T-1 are the same in the actual images under light and in the dark, different results are obtained at time T0.
The images of the comparative example illustrate the case where the imaging display apparatus of the comparative example is used when a person plays baseball in the dark. These images are obtained by additionally performing image processing on the actual images captured in the dark and displaying them on the imaging display apparatus. Even in the state shown in the actual images captured in the dark, in which the object cannot be seen with the naked eye, the same image as the actual image captured under light can be displayed by performing additional processing on the captured image information. The additional image processing here increases the brightness of the images captured in the dark, so the batter can see the ball. However, due to the delay time, the position of the ball at time T-1 differs from its position in the actual image. Thus, the batter cannot hit and return the ball at time T0.
The images of the present exemplary embodiment illustrate the case where the imaging display apparatus of the present exemplary embodiment is used when a person plays baseball in the dark. These images are displayed on the imaging display apparatus 100 after additional image processing is performed on the actual images captured in the dark. As in the comparative example, the same image as an actual image captured under light can be displayed by performing additional processing on the captured image information, even though the object cannot be seen with the naked eye. In addition, as described for the imaging display apparatus 100 of the present exemplary embodiment, an image based on predicted image information, with no delay time, can be displayed. Thus, the batter can hit and return the ball at time T0 as if playing baseball under light. At times T-3, T-2, T-1, and T0, the movements and positions of the pitcher and the ball in the images of the present exemplary embodiment are substantially the same as those in the actual images captured under light.
As in the present exemplary embodiment, an image can be displayed in real time by the imaging display apparatus using the predicted image information. Therefore, the position of the moving object such as the ball can be accurately identified. In addition, by using the predicted image information, an image can be displayed in real time even if additional processing is performed on the captured image information. The imaging display device according to the present exemplary embodiment is ideal for capturing and displaying the movement of a moving object in sports such as baseball.
In addition, in the example shown in fig. 5, the detection unit 107 detects information on wind direction and wind speed, and outputs the detected information to the processing unit 102. The processing unit 102 predicts the speed or route of the ball based on the above information, and generates predicted image information about the ball. In addition, the detection unit 107 can determine whether or not additional image processing is to be performed by the processing unit 102 by detecting luminance information.
In the imaging display apparatus 400 of the present exemplary embodiment, it is desirable that the imaging frame rate SFR be larger than the display frame rate DFR. For example, the imaging frame rate SFR and the display frame rate DFR may be set to 500 fps and 60 fps, respectively. Since SFR > DFR, one piece of predicted image information can be generated based on a plurality of pieces of image information. This increases the rate of coincidence between the displayed image and the actual image, so that the movement of a moving object can be displayed accurately.
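With the example rates above (SFR = 500 fps, DFR = 60 fps), roughly eight captured frames are available per displayed frame. The following sketch shows one possible way to batch captured frames for each prediction; the buffering scheme is an illustrative assumption, not the patent's method.

    SFR, DFR = 500, 60                      # assumed imaging and display frame rates (fps)
    FRAMES_PER_PREDICTION = SFR // DFR      # about 8 captured frames per displayed frame

    def group_frames(captured_frames):
        """Yield one batch of captured frames per predicted/displayed frame."""
        batch = []
        for frame in captured_frames:
            batch.append(frame)
            if len(batch) == FRAMES_PER_PREDICTION:
                yield batch                 # one prediction is generated from this batch
                batch = []

    # Example: list(group_frames(range(500))) yields 62 batches of 8 frames each.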
The various types of information transmitted to the external processing device 105 in Fig. 1D include the various types of information acquired by the detection unit 107 of the present exemplary embodiment shown in Fig. 4. For example, the batter's hitting data, data on the batter's batting style, weather information, and user (batter) information are the basic information stored in the processing device 106.
Hereinafter, a third exemplary embodiment will be described. In the present exemplary embodiment, another operation of the imaging display apparatus 400 described in the second exemplary embodiment will be described. Fig. 6A and 6B are diagrams illustrating the operation of the imaging display apparatus of the present exemplary embodiment. Fig. 6A illustrates an actual image, and fig. 6B illustrates an image based on predicted image information.
The actual image shown in Fig. 6A corresponds to image information 600 acquired by the imaging unit 101 at a certain time. The processing unit 102 detects the moving object 602 by using at least two pieces of image information: image information captured before that time and the image information 600. A stationary object 601 characteristic of the scenery may be detected at the same time; such a characteristic stationary object can be identified by the line-of-sight detection (i.e., eye tracking) described below. In Fig. 6A, for example, two trees are characteristic stationary objects 601, while a train is a moving object 602. The image processing methods used thereafter will now be described.
One image processing method is to have the AI unit 104 generate predicted image information only for the portion of the image information determined to be the moving object 602. Since the actual image of the stationary object 601 is unlikely to change, no predicted image information is generated from the image information on the stationary object 601. The predicted image information 610 generated by this method is shown in Fig. 6B (for illustrative purposes, the moving object 602 of Fig. 6A is also shown there). In the predicted image information 610, the position of the moving object 612 differs from that of the moving object 602, while the position of the stationary object 611 is unchanged from that of the stationary object 601 (not shown). This processing reduces the load on the processing unit 102 while the image is displayed in real time.
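A minimal way to separate the moving portion from the stationary portions using two frames is absolute frame differencing with a threshold, as sketched below. The threshold value and the NumPy-based implementation are assumptions; the disclosure itself leaves the detection method open.

    import numpy as np

    def moving_object_mask(frame_prev, frame_curr, threshold=25):
        """Mark pixels that changed between two frames (True = moving portion).

        Prediction is then run only where the mask is True; stationary portions
        reuse the existing image information unchanged.
        """
        diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
        return diff > threshold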
Another image processing method is to perform additional image processing only on the portion determined to be the moving object 602: the resolution of that portion is increased, while the resolution of the portions determined to be the stationary object 601 is reduced. The high-resolution moving object 612 and the low-resolution stationary object 611 are then displayed on the display unit 103. Since the processing differs from portion to portion, the load on the processing unit 102 can be reduced. In addition, because the moving object 612 is displayed at high resolution and the stationary object 611 at low resolution, a natural image close to what the human eye sees can be provided.
Yet another method is as follows: if time remains after the predicted image information for the moving object 602 has been generated and before the signal is output to the display unit 103, predicted image information is also generated for the stationary portions around the moving object 602. With this method, a more accurate image can be displayed in real time.
As described above, the processing unit 102 performs moving object detection on the image data received from the imaging unit 101, and changes the processing method depending on the part. With this processing method, an image can be displayed with high quality while reducing the load on the processing unit 102.
Methods other than using two or more pieces of image information are also available for detecting stationary and moving objects. For example, a moving object detection unit including a ranging sensor may be provided as the detection unit 107. The numbers of stationary objects 601 and moving objects 602 that can be detected are not limited.
Hereinafter, a fourth exemplary embodiment will be described. The imaging display device of the present exemplary embodiment will be described with reference to fig. 7 and 8. Fig. 7 is a schematic diagram illustrating the imaging display apparatus 700 of the present exemplary embodiment. The imaging display apparatus 700 includes a line-of-sight detection unit 108 instead of the detection unit 107 included in the imaging display apparatus 400 shown in fig. 4. In fig. 7, the line of sight detection unit 108 is built into the imaging display device 700. Alternatively, the line of sight detection unit 108 may be arranged externally.
In Fig. 7, the image information captured by the imaging unit 101 is output to the processing unit 102, and the line-of-sight information acquired by the line-of-sight detection unit 108 is also output to the processing unit 102. The processing unit 102 generates predicted image information by means of the AI unit 104. In addition, the processing unit 102 generates predicted line-of-sight information by predicting the future movement and position of the line of sight based on the line-of-sight information. Then, based on the predicted line-of-sight information, the processing unit 102 performs additional image processing that increases the resolution of the portion where the line of sight lies and reduces the resolution of the other portions, and generates the final predicted image information. While performing real-time display, the display unit 103 can thus display the portion where the predicted line of sight lies at high resolution and the remaining portions at low resolution.
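The gaze-dependent resolution control described above can be sketched as follows: full resolution is kept inside the predicted line-of-sight region, and the rest of the frame is coarsened. The circular region, radius, and downsampling factor are illustrative assumptions, not values from this disclosure.

    import numpy as np

    def foveate(frame, gaze_xy, radius=64, factor=4):
        """Keep full resolution near the predicted gaze point (x, y); coarsen elsewhere."""
        h, w = frame.shape[:2]
        # Low-resolution version of the whole frame (block downsample, then repeat).
        low = frame[::factor, ::factor].repeat(factor, axis=0).repeat(factor, axis=1)
        out = low[:h, :w].copy()
        ys, xs = np.ogrid[:h, :w]
        gaze_region = (ys - gaze_xy[1]) ** 2 + (xs - gaze_xy[0]) ** 2 <= radius ** 2
        out[gaze_region] = frame[gaze_region]  # restore full resolution in the gaze region
        return out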
This operation will be described in detail with reference to Fig. 8, which illustrates the operation of the imaging display apparatus of the present exemplary embodiment. Similar to Fig. 5, Fig. 8 illustrates a scene in which a person is playing baseball, showing examples of three types of images at four different times: T-3, T-2, T-1, and T0.
The actual images are the images captured at times T-3, T-2, T-1, and T0; the batter hits and returns the ball at time T0. The descriptive images schematically illustrate the line of sight (line-of-sight region) on the actual images. The line of sight is detected by the line-of-sight detection unit 108. In the descriptive images, the line of sight is fixed on the ball and moves as the ball moves. The images of the present exemplary embodiment are obtained using the predicted line-of-sight information: they are based on predicted image information that has additionally undergone image processing to increase the resolution of the portion where the line of sight lies and to reduce the resolution of the other portions. The portion containing the ball is displayed at high resolution, and the portion containing the pitcher at low resolution. By this processing, a high-quality image can be displayed in real time while the load on the processing unit 102 is reduced.
Although the imaging display apparatus 700 includes the line-of-sight detection unit 108 in place of the detection unit 107 shown in Fig. 4, it may include both; the configuration may be changed as appropriate. The line-of-sight detection unit 108 may also employ alternative methods, such as detecting the position of the iris of the eye or using corneal reflection of emitted infrared light.
Hereinafter, a fifth exemplary embodiment will be described. In the present exemplary embodiment, a processing method executed when the line-of-sight detection unit 108 of the fourth exemplary embodiment detects a plurality of line-of-sight regions will be described with reference to Fig. 9. Similar to Fig. 8, Fig. 9 illustrates a scene in which a person is playing baseball, showing examples of three types of images at three different times: T-3, T-2, and T-1. In the present exemplary embodiment, predicted image information is generated in which a plurality of line-of-sight regions are weighted and rendered at resolutions based on that weighting. The display unit 103 displays the portions corresponding to the line-of-sight regions at resolutions based on the weighting, and displays the portions outside the line-of-sight regions at low resolution.
This operation will be described in detail with reference to Fig. 9. The actual images are the images captured at times T-3, T-2, and T-1. Each actual image includes the pitcher and a runner on first base, and the batter is about to hit and return the ball at time T-1.
The descriptive images of Fig. 9 schematically illustrate the line-of-sight regions on the actual images, with a weighting value (%) indicated for each region. At time T-3, the batter's line of sight is directed to both the ball thrown by the pitcher and the runner; the line-of-sight region of the ball is weighted 60%, and that of the runner 40%. At time T-2, the batter's line of sight is directed to the ball, and the line-of-sight regions of the ball, the runner, and the pitcher are weighted 90%, 8%, and 2%, respectively. At time T-1, the batter's line of sight is directed primarily to the ball, and the line-of-sight regions of the ball, the runner, and the pitcher are weighted 98%, 1%, and 1%, respectively. This weighting is performed by the processing unit 102, and the weighting values may be determined based on the movement of the line of sight detected by the line-of-sight detection unit 108. Alternatively, the determination may be performed by another AI unit 104.
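One simple way to turn such weights into resolution ratios is to scale each region by the largest weight, with a floor applied everywhere else. The mapping rule below is an assumption for illustration, using the Fig. 9 weights at time T-2.

    def region_resolutions(weights, min_ratio=0.05):
        """Map line-of-sight region weights to resolution ratios (1.0 = maximum).

        The most heavily weighted region is shown at full resolution; the others
        are scaled proportionally, never below min_ratio (used outside all regions).
        """
        top = max(weights.values())
        return {name: max(w / top, min_ratio) for name, w in weights.items()}

    # Fig. 9, time T-2: ball 90%, runner 8%, pitcher 2%.
    print(region_resolutions({"ball": 0.90, "runner": 0.08, "pitcher": 0.02}))
    # {'ball': 1.0, 'runner': ~0.089, 'pitcher': 0.05}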
The images of the present exemplary embodiment shown in Fig. 9 use the predicted line-of-sight information according to the present exemplary embodiment. The processing unit 102 adjusts the resolution of each portion based on the weighting of the line-of-sight regions shown in the descriptive images and generates the predicted image information. Each value shown in the images of the present exemplary embodiment represents a resolution ratio, where 100% is the maximum resolution. The portions outside the line-of-sight regions may be displayed at the minimum resolution. With this configuration, an image close to what the human eye sees can be displayed, and the processing load on the processing unit 102 can be reduced.
Hereinafter, a sixth exemplary embodiment will be described. The imaging display apparatus of the present exemplary embodiment can display an image using light other than visible light (such as near-infrared light and ultraviolet light). For example, in the imaging display apparatus 400 shown in Fig. 4, the imaging unit 101 includes photoelectric conversion elements that detect light in the visible region and photoelectric conversion elements that detect light in wavelength bands outside the visible region. For example, the imaging unit 101 may include at least two area sensors, one carrying photoelectric conversion elements for visible light and the other carrying photoelectric conversion elements for invisible light. Alternatively, the imaging unit 101 may include a single area sensor carrying at least one photoelectric conversion element for visible light and one for invisible light.
With the above imaging unit 101, image signals of the invisible light region, including the near-infrared region, can be acquired in addition to image information on the visible light region. Based on this image information, the processing unit 102 generates predicted image information for the visible light region. With this configuration, an image with enhanced sensitivity can be displayed even when sensitivity in the visible light region is low. In other words, the imaging display apparatus of the present exemplary embodiment can also display, in real time, images invisible to the human eye. It is applicable to, for example, night vision devices, surveillance devices, binoculars, telescopes, and medical inspection devices.
Hereinafter, a seventh exemplary embodiment will be described. In the exemplary embodiments above, the predicted image information is generated by AI processing using a deep learning function. In the present exemplary embodiment, a trained model established by machine learning is used. Data is collected from experts, a trained model is built from the collected data, and the imaging display apparatus, in which predicted image information is generated based on that data, is then used by non-experts. For example, data is acquired from professional athletes, a trained model is built from it, and the AI unit uses this model to generate predicted image information. By using the imaging display apparatus of the present exemplary embodiment, a non-professional athlete can virtually experience the vision and attention of a professional athlete, and can thus improve his or her athletic ability in a shorter period of time. The present exemplary embodiment is also applicable to fields where the expertise of experts should be passed on, such as pilots, doctors, traditional craftsmen, and security services.
Hereinafter, an eighth exemplary embodiment will be described. An application example of the imaging display device according to each of the above exemplary embodiments when applied to a wearable device will be described with reference to fig. 10A to 10E. The imaging display device may be applied to wearable devices such as smart glasses, Head Mounted Displays (HMDs), or smart contact lenses.
Fig. 10A is a schematic diagram illustrating smart glasses 1000, also called a glasses-type imaging display apparatus. The smart glasses 1000 include a glasses frame and the imaging display apparatus according to the above exemplary embodiments. Specifically, the smart glasses 1000 include at least two imaging units 1001, a processing unit 1002, and a display unit 1003. The two imaging units 1001 are arranged on the sides of the glasses frame, and the processing unit 1002 is housed in a temple of the glasses. The display unit 1003 is arranged at a position chosen according to the display form, and may be included in the lens 1011; in any case, the display unit 1003 displays an image on the lens 1011. The processing unit 1002 may include an AI unit, and the smart glasses 1000 may include an external interface so that the processing unit 1002 can exchange data with an external AI unit.
The smart glasses 1000 in Fig. 10A may include two imaging display apparatuses, one for each of the right and left eyes. In this case, the timings for capturing and displaying images may be set independently in the apparatuses for the right and left eyes. In particular, images may be captured at the same time and displayed at different times, or captured at different times and displayed at the same time.
Fig. 10B is a schematic diagram illustrating a smart contact lens 1020, also called a contact-lens-type imaging display apparatus. The smart contact lens 1020 includes an imaging display device 1021 and a control device 1022. The control device 1022 functions as a power supply unit that supplies power to the imaging display device 1021; it also includes an AI unit and supports the processing unit of the imaging display device 1021. The AI unit may instead be disposed on a terminal separate from the smart contact lens 1020. It is desirable to arrange, on the smart contact lens 1020, an optical system for collecting light onto the imaging display device 1021. The power supply unit includes an interface for connecting to an external power supply and may be charged via a wired or wireless connection.
A transparent material is used as the base material of the lens 1011 in Fig. 10A and of the smart contact lens 1020 in Fig. 10B, and the display unit of the imaging display apparatus projects the display image onto the transparent lens portion. At this time, an image based on predicted image information for a future time later than the display time, as shown in Fig. 2B, can be displayed so that the user sees both the actual image and the predicted future image. Since image information for a time later than the current time can be displayed in real time, a user playing in the field in a baseball game, for example, can take up a defensive position in advance by moving in the direction of the ball hit by the batter. Because the user sees both the actual image and the predicted image, the user can achieve a level of athletic performance higher than the user's true level. In addition, the imaging display apparatus can freely adjust the timing for displaying the acquired image information, so an operation suited to the user can be selected.
As shown in Fig. 10A, the imaging unit 1001 and the display unit 1003 may be arranged at different positions, or may be stacked on the line of sight. Fig. 10C is a schematic sectional view of the imaging unit 1001 and the display unit 1003. Fig. 10D is a schematic plan view seen from the imaging unit 1001 side, showing the centroid of the imaging region 1031 in which the pixels of the imaging unit 1001 are arranged. Fig. 10E is a schematic plan view seen from the display unit 1003 side, showing the centroid of the display region 1033 in which the pixels of the display unit 1003 are arranged. As shown in Fig. 10C, it is desirable to arrange the imaging unit 1001 and the display unit 1003 so that line segment A passes through both centroids. This configuration reduces display deviation in the wearable device caused by a positional difference between the captured image information and the displayed image.
Hereinafter, a ninth exemplary embodiment will be described. In the present exemplary embodiment, an imaging display system will be described. Fig. 11 is a schematic diagram illustrating an imaging display system 1100. The imaging display system 1100 of the present exemplary embodiment includes a plurality of imaging display devices 1101 and at least one control device 1102. The imaging display apparatus 1101 may be the imaging display apparatus described in any one of the exemplary embodiments, for example, the imaging display apparatus 100 shown in fig. 1A.
The plurality of imaging display devices 1101 can receive signals from the control device 1102 and transmit signals to it. Each of the imaging display devices 1101 and the control device 1102 includes an external interface unit for wired or wireless communication. The control device 1102 receives signals from the plurality of imaging display devices 1101 and outputs signals for controlling them. The control device 1102 may also take over a part of the functions of the processing unit 102 of each imaging display device 1101. The imaging display system 1100 may further include a data storage unit, a control unit, and a processing unit. For example, the imaging display system 1100 may include the processing device 105 or 106 shown in fig. 1D; in this case, the control device 1102 may communicate with the processing device 105 or 106.
For example, the imaging display system 1100 of the present exemplary embodiment can display an image on a single display unit 103 by using pieces of image information acquired from the respective imaging units 101 of a plurality of imaging display apparatuses 100. Specifically, when a plurality of users use respective imaging display apparatuses 100, the image information acquired from those apparatuses can be displayed in real time on another imaging display apparatus 100 used by a different user. For example, a spectator can see, in real time, images from the lines of sight of professional players competing on the same field.
In addition, images can be displayed on a plurality of display units 103 using image information acquired from a single imaging unit 101. Specifically, image information acquired by one imaging display apparatus 100 may be displayed in real time on a plurality of imaging display apparatuses 100 other than the one that acquired it. For example, the line-of-sight image of a professional athlete may be viewed simultaneously, in real time, by multiple viewers, as sketched below.
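The following toy sketch illustrates this one-to-many routing (and, by registering several sources, the many-to-one case of the preceding paragraph); the ControlDevice class and its methods are illustrative, not an API defined in this disclosure.

```python
from collections import defaultdict
from typing import Callable

class ControlDevice:
    """Toy stand-in for the control device 1102 routing image information."""

    def __init__(self) -> None:
        # source id -> display callbacks of subscribing imaging display devices
        self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, source_id: str, show: Callable[[str], None]) -> None:
        self._subscribers[source_id].append(show)

    def publish(self, source_id: str, image_info: str) -> None:
        # Fan-out: one imaging unit feeds many display units in real time.
        for show in self._subscribers[source_id]:
            show(image_info)

control = ControlDevice()
control.subscribe("player_camera", lambda img: print("viewer A sees", img))
control.subscribe("player_camera", lambda img: print("viewer B sees", img))
control.publish("player_camera", "frame_0001")
```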
As described above, by virtually experiencing the line of sight of professional players, spectators can view the images with a realistic sensation, as if they were in the stadium themselves.
In addition, this system allows the respective imaging display devices to perform imaging or display operations at different intervals. The system also makes it possible to share image information and other data acquired by a plurality of imaging display devices and to use them for generating predicted image information on those devices.
Hereinafter, a tenth exemplary embodiment will be described with reference to fig. 12. Fig. 12 is a schematic diagram illustrating the imaging display apparatus 800 of the present exemplary embodiment. The imaging display apparatus 800 shown in fig. 12 is similar to the imaging display apparatus 130 in fig. 1C; the same reference numerals are applied to the same elements, and their description is omitted. The imaging display apparatus 800 includes an imaging unit 101, a processing unit 102, a display unit 103, and a processing device 105, and further includes a recording unit 109 for recording image information. In the imaging display apparatus 800, each of the processing unit 102 and the processing device 105 includes the AI unit 104. Although the recording unit 109 is shown as a separate block in fig. 12, it may instead be included in the processing unit 102; its position may be set as appropriate. The information input to the recording unit 109 is, for example, the image information from the imaging unit 101, that is, the information before conversion into predicted image information. By recording this image information, images other than the predicted images can also be obtained from the recording unit 109.
The operation of the imaging display apparatus 800 according to the present exemplary embodiment will be described. Image information from the imaging unit 101 is input both to the recording unit 109 and to the processing unit 102, which generates the predicted image information. The operation after the image information is input to the processing unit 102 is similar to that described in the first exemplary embodiment. The image information input to the recording unit 109 is not converted into a predicted image but is recorded directly as the image information acquired by the imaging unit 101. With this configuration, the time difference between the real phenomenon and the displayed image is reduced by using the predicted image information generated by the processing unit 102, while the image information acquired by the imaging unit 101 is recorded as it is, as sketched below.
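A minimal sketch of this dual path (the function names are illustrative, and the predictor is assumed to exist rather than defined here):

```python
recorded_frames = []  # storage behind the recording unit 109 (stand-in)

def on_frame(frame, predictor, display):
    """Fork each captured frame into the raw path and the prediction path."""
    # Raw path: the image information is recorded as acquired, unconverted.
    recorded_frames.append(frame)
    # Prediction path: the processing unit converts the frame into predicted
    # image information, which the display unit shows with reduced time lag.
    display(predictor(frame))
```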
Hereinafter, an eleventh exemplary embodiment will be described. In the present exemplary embodiment, a method in which the processing unit 102 and the AI unit 104 of the first exemplary embodiment further perform enlargement processing will be described. Fig. 13 is a diagram illustrating the operation of the imaging display apparatus of the present exemplary embodiment. As in fig. 6A, the external information in fig. 13 represents an actual image, that is, the image information acquired by the imaging unit 101.
Example 1 is a comparative example. The enlarged image of example 1 illustrates partial image information obtained by enlarging a specific region of the image information acquired by the imaging unit 101; the display unit 103 displays an image based on this partial image information. As example 1 shows, when a partial region is simply enlarged, its resolution is reduced.
Example 2 is an example according to the present exemplary embodiment. The enlarged image of example 2 likewise illustrates partial image information obtained by enlarging a specific region of the image information acquired by the imaging unit 101, and the display unit 103 displays an image based on it. In example 2, however, the processing unit 102 performs the enlargement processing and then further performs resolution enhancement processing on the enlarged portion. The resolution enhancement processing may be performed, for example, by inter-pixel compensation (interpolation) using a plurality of pieces of image information, or by compensation processing that estimates shapes by detecting outlines in the image information.
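As a minimal sketch of enlargement followed by resolution enhancement, assuming several already-aligned captures of the same region (the simple averaging below merely stands in for the inter-pixel compensation processing named above):

```python
import numpy as np

def enlarge(region: np.ndarray, scale: int) -> np.ndarray:
    """Nearest-neighbour enlargement; by itself this lowers effective resolution."""
    return np.kron(region, np.ones((scale, scale), dtype=region.dtype))

def enhance(frames: list) -> np.ndarray:
    """Combine several aligned observations to suppress noise and recover detail."""
    return np.mean(np.stack(frames), axis=0)

crop = np.random.rand(8, 8)  # a specific region of the captured image (stand-in)
# Several noisy observations of the same enlarged region, as in example 2.
views = [enlarge(crop + 0.01 * np.random.randn(8, 8), scale=4) for _ in range(5)]
sharper = enhance(views)
```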
In the imaging display device of the present exemplary embodiment, the enlargement processing of a part of an image based on the image information can be performed by the processing unit 102 and the AI unit 104, and an image based on the enlarged partial image information can be displayed on the display unit 103. The imaging display apparatus can also perform resolution enhancement processing on the partial image information. Moreover, the display of enlarged partial image information described in the present exemplary embodiment may be combined with the function for reducing the time difference between the real phenomenon and the displayed image described in the other exemplary embodiments.
Although the exemplary embodiments have been described by taking baseball as an example, the present disclosure is not limited thereto. The imaging display device of the present disclosure can reduce the time difference between a real phenomenon and a displayed image so that a user can use the device without feeling discomfort. As described above, according to an aspect of the present disclosure, an imaging display apparatus that reduces the difference between a real phenomenon and a displayed image can be obtained.
The embodiment(s) of the present disclosure may also be realized by a computer of a system or apparatus that reads out and executes computer-executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a "non-transitory computer-readable storage medium") to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., an application-specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer-executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may include one or more processors (e.g., a central processing unit (CPU), a micro processing unit (MPU)) and may include a separate computer or a network of separate processors to read out and execute the computer-executable instructions. The computer-executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read-only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present disclosure has been described with reference to exemplary embodiments, it is to be understood that the disclosure is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (20)

1. An imaging display device, comprising:
an imaging unit including a plurality of photoelectric conversion elements;
a processing unit configured to process a signal transmitted from the imaging unit; and
a display unit configured to display an image based on the signal transmitted from the processing unit,
wherein the imaging unit acquires first image information at a first time,
wherein the processing unit generates first prediction image information for a second time later than the first time based on the first image information, and
wherein the display unit displays an image based on the first prediction image information.
2. The imaging display device of claim 1,
wherein the imaging unit acquires second image information at the second time, and
wherein, at a third time later than the second time, the imaging unit acquires third image information and the display unit displays an image based on second prediction image information generated from the second image information and the third image information.
3. The imaging display device of claim 1,
wherein the imaging unit performs an imaging operation for acquiring fourth image information at a fourth time between the first time and the second time, and
wherein the first prediction image information is generated from at least the first image information and the fourth image information.
4. The imaging display device according to claim 1, wherein when the number of pieces of image information acquired per second by the imaging unit is defined as an imaging frame rate, and the number of images displayed per second by the display unit is defined as a display frame rate, the imaging frame rate is equal to or greater than the display frame rate.
5. The imaging display device of claim 1,
wherein the processing unit detects a moving object and a stationary object based on the first image information and identifies a first part detected as a moving object and a second part detected as a stationary object, and
wherein the processing unit performs resolution reduction processing on the second portion when the first prediction image information is generated.
6. The imaging display device according to claim 1, further comprising a line-of-sight detection unit configured to detect a line of sight,
wherein the processing unit performs processing for reducing the resolution of a region other than the line-of-sight region detected by the line-of-sight detecting unit when the first prediction image information is generated.
7. The imaging display device according to claim 6, wherein the processing unit predicts the line-of-sight region with respect to the second time when the first prediction image information is generated.
8. The imaging display device according to claim 1, wherein, when the number of images displayed by the display unit per second is expressed as a Display Frame Rate (DFR), a time difference ΔA between image information acquired by the imaging unit at a certain time and predicted image information displayed by the display unit at that time satisfies the condition −2/DFR ≤ ΔA ≤ 2/DFR.
9. The imaging display device of claim 8, wherein the difference ΔA satisfies the condition −4×10⁻³ ≤ ΔA ≤ 4×10⁻³.
10. The imaging display device of claim 1, wherein the processing unit comprises an Artificial Intelligence (AI) unit.
11. The imaging display device of claim 10, wherein the AI unit has a deep learning function.
12. The imaging display device of claim 10, wherein the AI unit includes a trained model, and the trained model is built based on data obtained from an expert.
13. The imaging display device of claim 1,
wherein each of the plurality of photoelectric conversion elements is capable of detecting light in a visible light region and a near-infrared light region, and
wherein the processing unit performs processing for converting image information regarding the near-infrared region into image information regarding the visible light region.
14. The imaging display device of claim 1,
wherein the imaging unit includes an imaging region in which imaging is performed,
wherein the display unit includes a display area in which display is performed, and
wherein the centroid of the imaging region and the centroid of the display area coincide with each other.
15. The imaging display device of claim 1, wherein the imaging unit comprises a back-illuminated Complementary Metal Oxide Semiconductor (CMOS) image sensor.
16. The imaging display apparatus according to claim 1, wherein in the imaging unit, a substrate on which the plurality of photoelectric conversion elements are arranged and a substrate on which a circuit for processing signals from the plurality of photoelectric conversion elements is arranged are stacked.
17. The imaging display device of claim 1,
wherein at least three chips including a first chip on which an imaging unit is arranged, a second chip on which a display unit is arranged, and a third chip on which a processing unit is arranged are stacked, and
wherein the third chip is arranged between the first chip and the second chip.
18. The imaging display device according to any one of claims 1 to 17, wherein the display unit includes an organic Light Emitting Diode (LED) or an inorganic LED as a light emitting element.
19. A wearable device, comprising:
an imaging display device including an imaging unit having a plurality of photoelectric conversion elements, a processing unit configured to process a signal transmitted from the imaging unit, and a display unit configured to display an image based on the signal transmitted from the processing unit;
a power supply unit configured to supply power to the imaging display apparatus; and
an interface configured to perform an external connection wirelessly,
wherein the imaging unit acquires first image information at a first time, the processing unit generates first prediction image information regarding a second time later than the first time based on the first image information, and the display unit displays an image based on the first prediction image information at the second time.
20. An imaging display system, comprising:
a plurality of imaging display devices; and
a control device configured to control the plurality of imaging display apparatuses,
wherein each of the imaging display devices includes an imaging unit having a plurality of photoelectric conversion elements, a processing unit configured to process a signal transmitted from the imaging unit, and a display unit configured to display an image based on the signal transmitted from the processing unit, and
wherein the imaging unit acquires first image information at a first time, the processing unit generates first prediction image information regarding a second time later than the first time based on the first image information, and the display unit displays an image based on the first prediction image information at the second time.
CN202010130040.3A 2020-02-28 2020-02-28 Imaging display device, wearable device and imaging display system Pending CN113329165A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010130040.3A CN113329165A (en) 2020-02-28 2020-02-28 Imaging display device, wearable device and imaging display system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010130040.3A CN113329165A (en) 2020-02-28 2020-02-28 Imaging display device, wearable device and imaging display system

Publications (1)

Publication Number Publication Date
CN113329165A true CN113329165A (en) 2021-08-31

Family

ID=77413363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010130040.3A Pending CN113329165A (en) 2020-02-28 2020-02-28 Imaging display device, wearable device and imaging display system

Country Status (1)

Country Link
CN (1) CN113329165A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050111551A1 (en) * 2003-11-04 2005-05-26 Kazushi Sato Data processing apparatus and method and encoding device of same
US20140176591A1 (en) * 2012-12-26 2014-06-26 Georg Klein Low-latency fusing of color image data
CN108292489A (en) * 2015-11-20 2018-07-17 索尼互动娱乐股份有限公司 Information processing unit and image generating method
CN108885799A (en) * 2016-03-23 2018-11-23 索尼互动娱乐股份有限公司 Information processing equipment, information processing system and information processing method
WO2019027258A1 (en) * 2017-08-01 2019-02-07 Samsung Electronics Co., Ltd. Electronic device and method for controlling the electronic device

Similar Documents

Publication Publication Date Title
US11126014B2 (en) Eyewear, eyewear systems and associated methods for enhancing vision
EP2720464B1 (en) Generating image information
CN108351523A (en) Stereocamera and the depth map of structure light are used using head-mounted display
US20200049946A1 (en) Display apparatus and method of displaying using gaze prediction and image steering
CN216625895U (en) Electronic device
US10908427B2 (en) System for virtual reality or augmented reality having an eye sensor and method thereof
JP7418075B2 (en) Imaging display devices, wearable devices, and imaging display systems
US20210400185A1 (en) Motion-based operation of imaging devices
JP2011223284A (en) Pseudo-stereoscopic image generation device and camera
CN103716531A (en) Photographing system, photographing method, light emitting apparatus and photographing apparatus
US20240086137A1 (en) Near eye display apparatus
US11797259B2 (en) Imaging display device, wearable device, and imaging display system
CN113329165A (en) Imaging display device, wearable device and imaging display system
JP2011232371A (en) Imaging apparatus
JP2000221953A (en) Image display device, image processing method, and image display system by applying them
JP2021010158A (en) Imaging display device and wearable device
US8681263B2 (en) Imager capturing an image with a rolling shutter using flicker detection
US11579688B2 (en) Imaging display device and wearable device
US20230247290A1 (en) Image capture apparatus and control method thereof
JP2009100371A (en) Image display apparatus, and recording and reproducing apparatus
US20230379594A1 (en) Image blending
US20210256726A1 (en) Shooting control apparatus, image capturing apparatus, shooting control method, and storage medium
US20230254470A1 (en) Stereoscopic image pickup apparatus, control method, storage medium, and control apparatus
US11889196B2 (en) Systems and methods for determining image capture settings
US20230126836A1 (en) Image pickup apparatus used as action camera, control method therefor, and storage medium storing control program therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination