CN110493523B - Image display method, device, terminal and storage medium - Google Patents

Image display method, device, terminal and storage medium

Info

Publication number: CN110493523B
Application number: CN201910798038.0A
Authority: CN (China)
Prior art keywords: screen, camera, under, cameras, target
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN110493523A
Inventor: 黄凯
Current assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd (2019-08-27)
Priority to CN201910798038.0A
Publication of CN110493523A: 2019-11-22
Application granted; publication of CN110493523B: 2021-03-16

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/62: Control of parameters via user interfaces

Abstract

The embodiments of the present application provide an image display method, an image display device, a terminal and a storage medium. The method is applied to a terminal that includes m under-screen cameras, where m is an integer greater than 1, and comprises the following steps: acquiring face images through n of the m under-screen cameras, where n is a positive integer; detecting whether a target under-screen camera exists among the n under-screen cameras, where the target under-screen camera is an under-screen camera whose collected face image corresponds to a sight-line direction that matches the direct-view direction; and if the target under-screen camera exists among the n under-screen cameras, displaying the face image acquired by the target under-screen camera. The embodiments of the present application improve the quality of the finally displayed face image.

Description

Image display method, device, terminal and storage medium
Technical Field
The embodiment of the application relates to the technical field of terminals, in particular to an image display method, an image display device, a terminal and a storage medium.
Background
Terminals such as mobile phones are generally equipped with a camera to implement an image capturing function.
In the related art, taking a mobile phone as an example, a front camera is disposed at the top of the front panel of the mobile phone and can be used for self-portraits and video calls. When the user takes a selfie with the front camera, the user needs to look directly at the camera, so that the resulting self-portrait shows the user looking straight ahead.
However, if the user watches the screen while photographing or video calling with the front camera, the user's sight-line direction in the picture captured by the camera is not straight ahead, which results in poor quality of the image finally displayed by the terminal.
Disclosure of Invention
The embodiment of the application provides an image display method, an image display device, a terminal and a storage medium. The technical scheme is as follows:
In one aspect, an embodiment of the present application provides an image display method, where the method is applied to a terminal, the terminal includes m under-screen cameras, and m is an integer greater than 1, and the method includes:
acquiring face images through n of the m under-screen cameras, wherein n is a positive integer;
detecting whether a target under-screen camera exists among the n under-screen cameras, wherein the target under-screen camera is an under-screen camera whose collected face image corresponds to a sight-line direction that matches the direct-view direction;
and if the target under-screen camera exists among the n under-screen cameras, displaying the face image acquired by the target under-screen camera.
On the other hand, an embodiment of the present application provides an image display device, where the device is applied to a terminal, the terminal includes m under-screen cameras, m is an integer greater than 1, and the device includes:
an image acquisition module, used for acquiring face images through n of the m under-screen cameras, wherein n is a positive integer;
a camera detection module, used for detecting whether a target under-screen camera exists among the n under-screen cameras, wherein the target under-screen camera is an under-screen camera whose collected face image corresponds to a sight-line direction that matches the direct-view direction;
and an image display module, used for displaying the face image acquired by the target under-screen camera if the target under-screen camera exists among the n under-screen cameras.
In yet another aspect, an embodiment of the present application provides a terminal, which includes a processor and a memory, where the memory stores a computer program, and the computer program is loaded by the processor and executed to implement the method according to the above aspect.
In yet another aspect, the present application provides a computer-readable storage medium, in which a computer program is stored, the computer program being loaded and executed by a processor to implement the method according to the above aspect.
The technical scheme provided by the embodiment of the application can bring the following beneficial effects:
by outputting and displaying the image collected by the under-screen camera whose corresponding sight-line direction matches the direct-view direction, the eyes in the face image finally displayed by the terminal look straight ahead, which improves the quality of the image finally displayed by the terminal.
Drawings
Fig. 1 is a schematic diagram illustrating an operating principle of an under-screen camera according to an embodiment of the present application;
FIG. 2 is a schematic diagram of OLED screen lighting provided by one embodiment of the present application;
FIG. 3 is a flow chart of an image display method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a face image provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a face image provided in another embodiment of the present application;
fig. 6 is a schematic diagram of a terminal provided by an embodiment of the present application;
FIG. 7 is a schematic view of an under-screen camera provided by an embodiment of the present application;
FIG. 8 is a schematic view of an under-screen camera provided in accordance with another embodiment of the present application;
FIG. 9 is a schematic view of an under-screen camera provided in accordance with yet another embodiment of the present application;
fig. 10 is a block diagram of an image display apparatus provided in an embodiment of the present application;
fig. 11 is a block diagram of an image display apparatus according to another embodiment of the present application;
fig. 12 is a block diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
In the method provided by the embodiments of the application, each step may be executed by a terminal. The terminal may be an electronic device such as a mobile phone, a tablet computer, an e-book reader, a multimedia playing device, a wearable device, a laptop portable computer, and the like.
The terminal includes m under-screen cameras, where m is an integer greater than 1. An under-screen camera is a camera positioned below the screen. As shown in fig. 1, the working principle of the under-screen camera is as follows: the screen area 100 is divided into a transparent display area 110 (which may also be referred to as a light-transmitting area) and a normal display area 120 (which may also be referred to as a non-light-transmitting area). The under-screen camera 130 is disposed under the transparent display area 110, and the screen corresponding to the transparent display area 110 has low reflection and high light transmittance. When the under-screen camera 130 is started, the transparent display area 110 above it allows external light to pass through fully; when the under-screen camera 130 is not started, the transparent display area 110 above it displays content normally, just like the normal display area 120.
In this embodiment, the screen of the terminal may be an OLED (Organic Light Emitting Diode) screen. Each pixel in the screen is very small and is divided into red, green and blue sub-pixel groups, which combine to form different colors. As shown in fig. 2, the light emitting principle of the OLED screen is as follows. The OLED screen includes a metal cathode layer 21, an electron transport layer 22, a self-light-emitting layer 23, a hole transport layer 24, and an ITO (Indium Tin Oxide) anode layer 25. In this hierarchical structure, the ITO anode layer 25 is the uppermost layer; the hole transport layer 24 lies under the ITO anode layer 25, the self-light-emitting layer 23 under the hole transport layer 24, the electron transport layer 22 under the self-light-emitting layer 23, and the metal cathode layer 21 under the electron transport layer 22. The metal cathode layer 21 and the ITO anode layer 25 are connected to a power supply. When the OLED screen is powered on, the metal cathode layer 21 generates electrons and the ITO anode layer 25 generates holes. Under the electric field force, the electrons pass through the electron transport layer 22 and the holes pass through the hole transport layer 24, and both reach the self-light-emitting layer 23. Since electrons carry negative charge and holes carry positive charge, they attract each other and are bound together by the Coulomb force to form excitons. The excitons excite the light-emitting molecules into an excited state, from which they release light energy, and the OLED screen emits light through the transparent hole transport layer 24 and ITO anode layer 25.
To realize the transparent display area 110, one implementation is to reduce the pixel density of the display area, so that the area not occupied by display pixels increases and the light transmittance rises. Another implementation is to reduce the area occupied by the driving circuit, for example by reducing the number of TFTs (Thin Film Transistors) of an AMOLED (Active-Matrix Organic Light Emitting Diode), or by using a PMOLED (Passive-Matrix Organic Light Emitting Diode) instead. Yet another implementation is to use a transparent conductive electrode material, such as ITO, instead of an opaque conductive metal material, to increase the light transmittance.
Next, examples of the present application will be described.
Referring to fig. 3, a flowchart of an image display method according to an embodiment of the present application is shown. In this embodiment, the method is described, by way of example, as applied to a terminal that includes m under-screen cameras, where m is an integer greater than 1. The method may comprise the following steps:
step 301, acquiring a face image through n of the m under-screen cameras, where n is a positive integer.
In the embodiment of the application, the terminal can acquire face images through all of the under-screen cameras, or through only some of them. For example, the terminal can determine which under-screen camera or cameras acquire face images according to the user's sight line, the current time period, or the user's selection.
Step 302, detecting whether a target under-screen camera exists among the n under-screen cameras, where the target under-screen camera is an under-screen camera whose collected face image corresponds to a sight-line direction that matches the direct-view direction.
In this embodiment of the application, the target under-screen camera may be any one of the n under-screen cameras. The sight-line direction corresponding to the collected face image matching the direct-view direction means that the two directions coincide, or that the included angle between them is within an allowable error angle. When they match, the user's two eyes in the collected face image are looking directly at the under-screen camera.
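A minimal sketch of this matching test (the coordinate convention, the function name and the 5-degree tolerance are illustrative assumptions, not values from the patent): the check reduces to comparing the included angle between the estimated sight-line vector and the camera's optical axis.

    import numpy as np

    def matches_direct_view(gaze_dir, error_angle_deg=5.0):
        # Direct-view direction: the optical axis of the under-screen camera,
        # taken here as the unit vector pointing straight out of the screen.
        direct_view = np.array([0.0, 0.0, 1.0])
        gaze = np.asarray(gaze_dir, dtype=float)
        gaze = gaze / np.linalg.norm(gaze)
        # Included angle between the two directions, in degrees.
        cos_angle = np.clip(np.dot(gaze, direct_view), -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_angle))
        # Match if the directions coincide or differ by no more than the
        # allowable error angle.
        return angle <= error_angle_deg

For example, matches_direct_view([0.02, -0.03, 1.0]) returns True, since that sight-line vector deviates from the optical axis by roughly 2 degrees.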
Step 303, if the target under-screen camera exists among the n under-screen cameras, displaying the face image acquired by the target under-screen camera.
When the target under-screen camera exists among the n under-screen cameras, displaying the face image it collected ensures that the user in the image finally displayed by the terminal appears to be looking straight ahead. This improves image quality in self-portrait scenes, video call scenes and other scenes.
Suppose that face images are collected by two under-screen cameras (a first under-screen camera and a second under-screen camera). The face image collected by the first under-screen camera is shown in part (a) of fig. 4, and the face image collected by the second under-screen camera is shown in part (b) of fig. 4. Because the sight-line direction corresponding to the face image in part (b) of fig. 4 matches the direct-view direction, the under-screen camera corresponding to that face image, i.e. the second under-screen camera, is determined as the target under-screen camera, and the face image collected by the second under-screen camera is output.
To sum up, in the technical scheme provided by the embodiment of the application, by outputting and displaying the image collected by the under-screen camera whose corresponding sight-line direction matches the direct-view direction, the eyes in the face image finally displayed by the terminal look straight ahead, which improves the quality of the image finally displayed by the terminal.
In addition, because the under-screen cameras are arranged below the screen, the user can watch the screen picture and adjust his or her state in real time, which solves the problem that a user cannot watch the screen picture and look directly at the camera at the same time.
Optionally, before detecting whether the target under-screen camera exists among the n under-screen cameras, the terminal needs to determine the sight-line direction corresponding to each face image. Illustratively, the sight-line direction corresponding to a face image can be determined as follows:
1. For the ith under-screen camera among the n under-screen cameras, identify the contour of the eyes and the contour of the eyeballs in the face image acquired by the ith under-screen camera, where i is a positive integer less than or equal to n.
Illustratively, the sight-line direction corresponding to the face image can be determined by a processor in the terminal. After the under-screen camera collects the face image, the terminal stores it in a corresponding storage space; the processor retrieves the face image from that storage space and identifies the contour of the eyes and the contour of the eyeballs: the processor first locates the human eye area in the face image and determines the contour of the eyes, and then determines the contour of the eyeballs based on the contour of the eyes. The processor analyzes the gray information of the face image; because the gray level of the eyeball region differs markedly from that of the rest of the eye, the contour of the eyeballs can be determined.
2. Determining the position of the eyeball in the eye according to the contour of the eye and the contour of the eyeball;
when the eyes look in different directions, the positions and directions of the centers of the eyeballs are different.
3. And determining the sight-line direction corresponding to the face image acquired by the ith under-screen camera based on the position of the eyeballs in the eyes.
Optionally, the corresponding gaze direction of the face image is determined based on the position of the center of the eyeball.
In summary, in the technical solution provided in the embodiment of the present application, the position of the eyeball in the eye is determined according to the contour of the eye and the contour of the eyeball, and the sight-line direction corresponding to the face image is then determined based on that position, so the sight-line direction is determined more accurately.
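The following sketch illustrates steps 1 to 3 under stated assumptions: OpenCV and NumPy are available, the input is an already-cropped grayscale eye region, and the dark-iris threshold of 60 is illustrative. The eyeball is isolated by its gray level, and its centroid offset from the center of the eye contour stands in for the sight-line direction.

    import cv2
    import numpy as np

    def eyeball_offset(eye_gray):
        # The eyeball (iris/pupil) differs markedly in gray level from the
        # rest of the eye, so a low-gray threshold isolates its contour.
        _, mask = cv2.threshold(eye_gray, 60, 255, cv2.THRESH_BINARY_INV)
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return None            # no dark region found: eye may be closed
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        h, w = eye_gray.shape
        # Centroid offset from the eye-contour center, normalised to [-1, 1].
        # (0, 0) means the eyeball sits at the center of the eye, i.e. the
        # user is looking directly at this under-screen camera.
        return (2 * cx / w - 1, 2 * cy / h - 1)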
Illustratively, when the target under-screen camera is a first under-screen camera, the terminal displays the face image acquired by the first under-screen camera; the first under-screen camera is one of the n under-screen cameras. In practice, a user may rotate the head or move the body while using the terminal, causing the sight-line direction to shift. The image display method may therefore further include the following step:
if the target under-screen camera is switched from the first under-screen camera to a second under-screen camera, displaying the face image acquired by the second under-screen camera, where the second under-screen camera is another under-screen camera among the n under-screen cameras other than the first under-screen camera.
Because the under-screen cameras collect face images in real time, the sight-line direction corresponding to each collected face image is also determined in real time. When the terminal detects that the target under-screen camera has switched from the first under-screen camera to the second under-screen camera, the user's sight-line direction has shifted; the terminal can then display the face image collected by the second under-screen camera, ensuring that the finally output image reflects the actual scene.
In a possible implementation, when the target under-screen camera has switched from the first under-screen camera to the second under-screen camera and has remained the second under-screen camera for a preset duration, the face image acquired by the second under-screen camera is displayed.
In practice, the user's sight-line direction may shift only briefly, and in many cases the user soon returns to looking directly at the first under-screen camera, so the face image acquired by the second under-screen camera does not need to be output. Therefore, only when the target under-screen camera has remained the second under-screen camera for the preset duration, indicating that the shift in the user's sight-line direction is not transient, does the terminal output the face image collected by the second under-screen camera.
To sum up, in the technical scheme provided by the embodiment of the application, when the terminal detects that the target under-screen camera has switched, it displays the face image collected by the camera switched to, which gives high real-time performance.
When the terminal instead displays the face image collected by the new camera only after the switch has lasted for the preset duration, the accuracy is high, and transient shifts of the user's sight-line direction no longer cause spurious switches.
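A minimal sketch of this debounced switch (the class name and the 0.8-second hold are assumptions, not values from the patent):

    import time

    class CameraSwitcher:
        def __init__(self, initial_camera, hold_seconds=0.8):
            self.hold = hold_seconds
            self.active = initial_camera   # camera whose feed is displayed
            self.candidate = None          # newly detected target camera
            self.since = 0.0

        def update(self, target):
            # Called each time the target under-screen camera is re-detected.
            now = time.monotonic()
            if target == self.active:
                self.candidate = None      # shift was transient; stay put
            elif target != self.candidate:
                self.candidate, self.since = target, now
            elif now - self.since >= self.hold:
                # The new target has persisted for the preset duration:
                # switch the displayed face image to its feed.
                self.active, self.candidate = target, None
            return self.active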
In one example, after the terminal detects that no target under-screen camera exists among the n under-screen cameras, the following steps are executed:
1. Correct the positions of the eyeballs in the face image acquired by any one of the n under-screen cameras, so that the sight-line direction corresponding to the corrected face image matches the direct-view direction.
Suppose the terminal includes a first under-screen camera and a second under-screen camera. The terminal can correct the positions of the eyeballs in the face image collected by the first under-screen camera, so that the sight-line direction corresponding to the corrected face image matches the direct-view direction. For example, if the eyeballs in the face image collected by the first under-screen camera sit in the lower part of the eye contour, the terminal can move them to the center of the eye contour, so that the corrected sight-line direction matches the direct-view direction.
2. Display the corrected face image acquired by that under-screen camera.
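A crude sketch of the correction step, assuming OpenCV is available and that offset is the normalised eyeball offset produced by the earlier sketch: the eye crop is translated so that the eyeball lands at the center of the eye contour. A production implementation would repaint the iris region rather than shift the whole crop.

    import cv2
    import numpy as np

    def recenter_eyeball(eye_img, offset):
        # Translate the eye crop so the corrected sight-line direction
        # matches the direct-view direction (eyeball at the eye center).
        h, w = eye_img.shape[:2]
        dx, dy = -offset[0] * w / 2, -offset[1] * h / 2
        shift = np.float32([[1, 0, dx], [0, 1, dy]])
        # Replicate the border so the shifted crop shows no black seams.
        return cv2.warpAffine(eye_img, shift, (w, h),
                              borderMode=cv2.BORDER_REPLICATE)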
In another example, after the terminal detects that no target under-screen camera exists among the n under-screen cameras, the following steps are executed:
1. Respectively crop the image areas corresponding to the face images acquired by the n under-screen cameras.
Illustratively, the image areas corresponding to the face images acquired by the n under-screen cameras correspond to the positions of the n under-screen cameras within the screen area. Suppose the terminal acquires face images through two under-screen cameras (a first under-screen camera and a second under-screen camera), with the first under-screen camera located in the upper part of the screen area and the second in the lower part. The terminal can then crop the upper area of the face image acquired by the first under-screen camera and the lower area of the face image acquired by the second under-screen camera, as shown in fig. 5: the upper half of the face image in fig. 5 is acquired by the first under-screen camera, the lower half of fig. 5 by the second under-screen camera, and the rectangular frames in fig. 5 mark the image areas. The sizes of the image areas corresponding to the face images acquired by the n under-screen cameras may or may not be identical, as long as they can finally be spliced into a complete face image.
2. Splice the image areas corresponding to the face images acquired by the n under-screen cameras to obtain a spliced image.
3. Correct the positions of the eyeballs in the spliced image, so that the sight-line direction corresponding to the corrected spliced image matches the direct-view direction.
If the sight-line direction corresponding to the spliced image does not match the direct-view direction, the positions of the eyeballs in the spliced image are corrected so that the corrected sight-line direction matches the direct-view direction. The corrected spliced image is shown in the right part of fig. 5.
4. Display the corrected spliced image.
To sum up, in the technical scheme provided by the embodiment of the application, when no target under-screen camera exists among the n under-screen cameras, the positions of the eyeballs in the face image collected by any under-screen camera are corrected so that the sight-line direction corresponding to the corrected face image matches the direct-view direction. This ensures that the eyes in the finally displayed face image look straight ahead, thereby guaranteeing the quality of the finally displayed image.
Alternatively, the image areas corresponding to the face images collected by the n under-screen cameras are respectively cropped and spliced into a spliced image, and the positions of the eyeballs in the spliced image are corrected, so that the eyes in the finally displayed spliced image look straight ahead and the final result is good.
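A minimal sketch of steps 1 and 2 for the two-camera layout of fig. 5 (equal halves and NumPy arrays are assumptions; the text allows unequal image areas as long as they splice into a complete face, and a real implementation would also register the two views):

    import numpy as np

    def stitch_halves(img_upper_cam, img_lower_cam):
        # Keep the upper image area from the camera in the upper part of
        # the screen and the lower image area from the camera in the lower
        # part, then splice them into one face image.
        h = min(img_upper_cam.shape[0], img_lower_cam.shape[0])
        top = img_upper_cam[: h // 2]
        bottom = img_lower_cam[h // 2 : h]
        return np.vstack([top, bottom])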
Illustratively, before the terminal acquires face images through n of the m under-screen cameras, the n under-screen cameras need to be started.
In one example, the terminal further includes an eye tracking component. Illustratively, as shown in FIG. 6, an eye tracking assembly 20 is disposed in the top region of the terminal 10. The eye tracking assembly 20 may include an infrared LED (Light Emitting Diode) and an infrared camera. The infrared LED emits infrared light, which enters the eye through the pupil; outside the pupil area, the light does not enter the eye but is reflected back to the infrared camera. The infrared camera therefore sees the pupil area as a dark region while the rest of the eye appears brighter; this is "dark pupil" eye tracking. If the infrared source is close to the optical axis, the light reflects off the back of the eye and returns out through the pupil, in which case the pupil appears bright; this is "bright pupil" eye tracking. Whether the pupil is bright or dark, the key is that the pupil is distinguishable from the rest of the eye.
The terminal can start the under-screen cameras in the following way:
1. Acquire the user's sight-line direction through the eye tracking assembly.
The eye tracking assembly acquires a human eye image, determines the contour of the pupil from it, and locates the pupil by analysis. The gazing direction of a single eye can be derived from the position of the pupil inside the eye, and the user's sight-line direction can be determined from the converging directions of the two eyes' sight lines.
2. Determine, based on the user's sight-line direction, the target screen area corresponding to it.
Based on the user's sight-line direction, the target screen area on which the user's gaze is focused is determined, e.g. whether it is located in the upper half, the middle or the lower half of the screen area.
3. Start the n under-screen cameras corresponding to the target screen area.
For example, if the target screen area is located in the upper half of the screen, the under-screen camera corresponding to the upper half is started.
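A sketch with assumed names (regions is a hypothetical table mapping screen areas to the under-screen cameras mounted beneath them; the pixel values are illustrative):

    def cameras_for_gaze(gaze_y, regions):
        # regions: list of (y_min, y_max, camera_ids) covering the screen.
        for y_min, y_max, camera_ids in regions:
            if y_min <= gaze_y < y_max:
                return camera_ids     # start only the cameras for this area
        return []

    # Fig. 7 layout as an example: upper and lower display areas, one
    # under-screen camera each, on an assumed 2400-pixel-tall screen.
    regions = [(0, 1200, ["camera_upper"]), (1200, 2400, ["camera_lower"])]
    print(cameras_for_gaze(300, regions))   # -> ['camera_upper']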
In another example, the m under-screen cameras have different working time periods, and the terminal starts the n under-screen cameras whose working time period contains the current time. For example, suppose the terminal includes three under-screen cameras (a first, a second and a third under-screen camera), the working time period of the first under-screen camera is 5:00-12:00, that of the second is 12:01-21:00, and that of the third is 21:01-4:59. If the current time falls within 8:00-11:00, the terminal starts the first under-screen camera.
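A sketch of this schedule using the three working periods from the example (the function and table names are assumptions); note that the third period crosses midnight and needs a wrap-around check:

    from datetime import datetime, time

    SCHEDULE = [
        ("first",  time(5, 0),  time(12, 0)),
        ("second", time(12, 1), time(21, 0)),
        ("third",  time(21, 1), time(4, 59)),
    ]

    def camera_for_now(now=None):
        t = (now or datetime.now()).time()
        for camera, start, end in SCHEDULE:
            if start <= end:
                in_period = start <= t <= end
            else:                      # working period crosses midnight
                in_period = t >= start or t <= end
            if in_period:
                return camera

    print(camera_for_now(datetime(2019, 8, 27, 8, 30)))   # -> first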
In yet another example, the terminal receives a camera selection instruction and starts the n under-screen cameras according to it. The camera selection instruction indicates the selected under-screen cameras: the user can choose which under-screen camera or cameras to start, and the terminal receives the camera selection instruction and starts the corresponding under-screen cameras.
To sum up, in the technical scheme provided by the embodiment of the application, starting only some of the under-screen cameras reduces the terminal's power consumption. The terminal can start the under-screen camera corresponding to the user's sight-line direction, which makes the choice of camera more accurate; it can start the corresponding under-screen camera according to the working time period, which divides the starting times more reasonably and prolongs the service life of the under-screen cameras; or it can start the under-screen camera selected by the user, which is more flexible. These three different modes make the starting of the under-screen cameras flexible.
In one example, the screen area 100 of the terminal includes a first display area 101 and a second display area 102, each of which includes a transparent display area 110 with an under-screen camera 130 disposed below it, as shown in fig. 7. In another example, the screen area 100 includes a first display area 101, a second display area 102 and a third display area 103, each of which includes a transparent display area 110 with an under-screen camera 130 disposed below it, as shown in fig. 8. In yet another example, the screen area 100 includes a first display area 101 and a second display area 102, each of which includes at least one transparent display area 110 with an under-screen camera 130 disposed below it, as shown in fig. 9; the layout of fig. 9 may suit a larger-sized terminal. The embodiment of the application does not limit the number or positions of the under-screen cameras 130. Increasing the number of transparent display areas 110 and corresponding under-screen cameras improves the coverage of the eyes' direct-view range across the screen.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 10, a block diagram of an image display device according to an embodiment of the present application is shown. The device has the function of implementing the above method examples; the function can be implemented by hardware, or by hardware executing corresponding software. The device 1000 can be applied to a terminal, where the terminal comprises m under-screen cameras and m is an integer greater than 1. The apparatus 1000 may include: an image acquisition module 1010, a camera detection module 1020, and an image display module 1030.
The image acquisition module 1010 is configured to acquire face images through n of the m under-screen cameras, where n is a positive integer.
The camera detection module 1020 is configured to detect whether a target under-screen camera exists among the n under-screen cameras, where the target under-screen camera is an under-screen camera whose collected face image corresponds to a sight-line direction that matches the direct-view direction.
The image display module 1030 is configured to display the face image acquired by the target under-screen camera if the target under-screen camera exists among the n under-screen cameras.
To sum up, in the technical scheme provided by the embodiment of the application, by outputting and displaying the image collected by the under-screen camera whose corresponding sight-line direction matches the direct-view direction, the eyes in the face image finally displayed by the terminal look straight ahead, which improves the quality of the image finally displayed by the terminal.
Optionally, as shown in fig. 11, the apparatus 1000 further includes: a contour recognition module 1040, an eye position determination module 1050, and a line of sight determination module 1060.
The contour identification module 1040 is configured to identify, for the ith under-screen camera among the n under-screen cameras, the contour of the eyes and the contour of the eyeballs in the face image acquired by the ith under-screen camera, where i is a positive integer less than or equal to n.
The eyeball position determination module 1050 is configured to determine the position of the eyeball in the eye according to the contour of the eye and the contour of the eyeball.
The sight line determining module 1060 is configured to determine, based on the position of the eyeball in the eye, the sight-line direction corresponding to the face image acquired by the ith under-screen camera.
Optionally, the image display module 1030 is further configured to display the face image acquired by a second under-screen camera if the target under-screen camera is switched from a first under-screen camera to the second under-screen camera;
the first under-screen camera is one of the n under-screen cameras, and the second under-screen camera is another under-screen camera among the n under-screen cameras other than the first under-screen camera.
Optionally, the image display module 1030 is further configured to execute the step of displaying the face image acquired by the second under-screen camera when the target under-screen camera has switched from the first under-screen camera to the second under-screen camera and has remained so for a preset duration.
Optionally, the apparatus 1000 further includes: an eye correction module 1070.
The eyeball correction module 1070 is configured to, if no target under-screen camera exists among the n under-screen cameras, correct the positions of the eyeballs in the face image acquired by any one of the n under-screen cameras, so that the sight-line direction corresponding to the corrected face image matches the direct-view direction.
The image display module 1030 is further configured to display the corrected face image acquired by that under-screen camera.
Optionally, the apparatus 1000 further includes: an image capture module 1080 and an image stitching module 1090.
The image capture module 1080 is configured to, if no target under-screen camera exists among the n under-screen cameras, respectively crop the image areas corresponding to the face images acquired by the n under-screen cameras.
The image stitching module 1090 is configured to splice the image areas corresponding to the face images acquired by the n under-screen cameras to obtain a spliced image.
The eyeball correction module 1070 is further configured to correct the positions of the eyeballs in the spliced image, so that the sight-line direction corresponding to the corrected spliced image matches the direct-view direction.
The image display module 1030 is further configured to display the corrected stitched image.
Optionally, the terminal further comprises an eye tracking component;
the apparatus 1000, further comprising: a zone determination module 1091 and a camera activation module 1092.
The sight line determining module 1060 is further configured to obtain the user's sight-line direction through the eye tracking component.
The area determining module 1091 is configured to determine, based on the user's sight-line direction, the target screen area corresponding to it.
The camera starting module 1092 is configured to start the n under-screen cameras corresponding to the target screen area.
Optionally, the working time periods of the m under-screen cameras are different;
the camera starting module 1092 is further configured to start the n under-screen cameras whose working time period contains the current time.
Optionally, the apparatus 1000 further comprises: an instruction receiving module 1093.
The instruction receiving module 1093 is configured to receive a camera selection instruction.
The camera starting module 1092 is further configured to start the n under-screen cameras according to the camera selection instruction.
It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
Referring to fig. 12, a block diagram of a terminal according to an embodiment of the present application is shown. The terminal may be a mobile phone, a tablet computer, an electronic book reading device, a multimedia playing device, a wearable device, or other portable electronic devices.
The terminal in the embodiment of the present application may include one or more of the following components: a processor 1210 and a memory 1220.
Processor 1210 may include one or more processing cores. The processor 1210 connects various parts of the terminal using various interfaces and lines, and performs the terminal's functions and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 1220 and calling the data stored in the memory 1220. Optionally, the processor 1210 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA) and Programmable Logic Array (PLA). The processor 1210 may integrate one or a combination of a Central Processing Unit (CPU), a modem and the like, where the CPU mainly handles the operating system, application programs and so on, and the modem handles wireless communication. The modem may also not be integrated into the processor 1210 and instead be implemented by a separate chip.
Optionally, the processor 1210, when executing the program instructions in the memory 1220, implements the methods provided by the various method embodiments described above.
The memory 1220 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1220 includes a non-transitory computer-readable medium. The memory 1220 may be used to store instructions, programs, code, code sets or instruction sets. The memory 1220 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for at least one function, instructions for implementing the various method embodiments described above, and the like, and the data storage area may store data created according to the use of the terminal, and the like.
The structure of the terminal described above is only illustrative; in an actual implementation, the terminal may include more or fewer components, such as a display screen or a Bluetooth module, which is not limited in this embodiment.
Those skilled in the art will appreciate that the configuration shown in fig. 12 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a computer-readable storage medium is also provided, in which a computer program is stored, which is loaded and executed by a processor to implement the above-mentioned method.
In an exemplary embodiment, a computer program product is also provided, which, when executed, implements the above method.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. An image display method, wherein the method is applied to a terminal, the terminal comprises m under-screen cameras, m is an integer greater than 1, and the method comprises the following steps:
acquiring face images through n of the m under-screen cameras, wherein n is a positive integer;
detecting whether a target under-screen camera exists among the n under-screen cameras, wherein the target under-screen camera is an under-screen camera whose collected face image corresponds to a sight-line direction that matches the direct-view direction;
and if the target under-screen camera exists among the n under-screen cameras, displaying the face image acquired by the target under-screen camera.
2. The method of claim 1, wherein before the detecting whether a target under-screen camera exists among the n under-screen cameras, the method further comprises:
for an ith under-screen camera in the n under-screen cameras, identifying the outline of eyes and the outline of eyeballs in a face image acquired by the ith under-screen camera, wherein i is a positive integer less than or equal to n;
determining the position of the eyeball in the eye according to the contour of the eye and the contour of the eyeball;
and determining the sight line direction corresponding to the face image acquired by the ith under-screen camera based on the positions of the eyeballs in the eyes.
3. The method according to claim 1, wherein after displaying the face image captured by the target under-screen camera, the method further comprises:
if the target under-screen camera is switched from a first under-screen camera to a second under-screen camera, displaying a face image acquired by the second under-screen camera;
the first under-screen camera is one of the n under-screen cameras, and the second under-screen camera is another under-screen camera among the n under-screen cameras other than the first under-screen camera.
4. The method of claim 3, wherein after displaying the face image captured by the target under-screen camera, the method further comprises:
when the target under-screen camera is switched from the first under-screen camera to the second under-screen camera and has remained the second under-screen camera for a preset duration, executing the step of displaying the face image collected by the second under-screen camera.
5. The method of claim 1, wherein after detecting whether a target under-screen camera exists among the n under-screen cameras, the method further comprises:
if the target under-screen camera does not exist in the n under-screen cameras, correcting the positions of eyeballs in the face image acquired by any one of the n under-screen cameras so as to enable the sight line direction corresponding to the corrected face image to be matched with the direct view direction;
and displaying the corrected human face image acquired by any one of the under-screen cameras.
6. The method of claim 1, wherein after detecting whether a target under-screen camera exists among the n under-screen cameras, the method further comprises:
if the target under-screen camera does not exist among the n under-screen cameras, respectively cropping image areas corresponding to the face images acquired by the n under-screen cameras;
splicing the image areas corresponding to the face images acquired by the n under-screen cameras to obtain a spliced image;
correcting the positions of eyeballs in the spliced image so that the sight-line direction corresponding to the corrected spliced image matches the direct-view direction;
and displaying the corrected spliced image.
7. The method of claim 1, wherein the terminal further comprises an eye tracking component;
before the acquiring face images through n of the m under-screen cameras, the method further comprises:
acquiring the sight direction of a user through the eyeball tracking assembly;
determining a target screen area corresponding to the gaze direction of the user based on the gaze direction of the user;
and starting the n under-screen cameras corresponding to the target screen area.
8. The method of claim 1, wherein the m under-screen cameras have different working time periods;
before the acquiring face images through n of the m under-screen cameras, the method further comprises:
starting the n under-screen cameras whose working time period contains the current time.
9. The method of claim 1, wherein before the acquiring face images through n of the m under-screen cameras, the method further comprises:
receiving a camera selection instruction;
and starting the n under-screen cameras according to the camera selection instruction.
10. An image display device, wherein the device is applied to a terminal, the terminal comprises m under-screen cameras, m is an integer greater than 1, and the device comprises:
the image acquisition module is used for acquiring face images through n of the m under-screen cameras, wherein n is a positive integer;
the camera detection module is used for detecting whether a target under-screen camera exists among the n under-screen cameras, wherein the target under-screen camera is an under-screen camera whose collected face image corresponds to a sight-line direction that matches the direct-view direction;
and the image display module is used for displaying the face image acquired by the target under-screen camera if the target under-screen camera exists among the n under-screen cameras.
11. A terminal, characterized in that the terminal comprises a processor and a memory, the memory storing a computer program that is loaded and executed by the processor to implement the method according to any of claims 1 to 9.
12. A computer-readable storage medium, in which a computer program is stored which is loaded and executed by a processor to implement the method according to any one of claims 1 to 9.
CN201910798038.0A (filed 2019-08-27, priority 2019-08-27): Image display method, device, terminal and storage medium. Granted as CN110493523B (en). Status: Active.

Priority Applications (1)

CN201910798038.0A (priority and filing date 2019-08-27): Image display method, device, terminal and storage medium

Publications (2)

CN110493523A (en): published 2019-11-22
CN110493523B (en): published 2021-03-16

Family

Family ID: 68554659

Family Applications (1)

CN201910798038.0A (Active): Image display method, device, terminal and storage medium

Country Status (1)

CN: CN110493523B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11974032B2 (en) 2020-07-13 2024-04-30 Wuhan China Star Optoelectronics Semiconductor Display Technology Co., Ltd. Display screen and display device

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110809115B (en) * 2019-10-31 2021-04-13 维沃移动通信有限公司 Shooting method and electronic equipment
CN110941473B (en) * 2019-11-27 2023-10-24 维沃移动通信有限公司 Preloading method, preloading device, electronic equipment and medium
CN110971805A (en) * 2019-12-20 2020-04-07 维沃移动通信有限公司 Electronic equipment and photographing method thereof
CN111432155B (en) * 2020-03-30 2021-06-04 维沃移动通信有限公司 Video call method, electronic device and computer-readable storage medium
CN113497887A (en) * 2020-04-03 2021-10-12 中兴通讯股份有限公司 Photographing method, electronic device and storage medium
CN111885286B (en) * 2020-07-13 2021-09-24 武汉华星光电半导体显示技术有限公司 Display screen and display device
CN113965690A (en) * 2020-07-20 2022-01-21 珠海格力电器股份有限公司 Method, device and equipment for opening off-screen camera and storage medium
CN113965743A (en) * 2020-07-21 2022-01-21 珠海格力电器股份有限公司 Image shooting method and device and electronic equipment
CN114071002B (en) * 2020-08-04 2023-01-31 珠海格力电器股份有限公司 Photographing method and device, storage medium and terminal equipment
CN113888979B (en) * 2021-10-21 2023-09-26 武汉华星光电半导体显示技术有限公司 Flexible display module and mobile terminal

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101853048A (en) * 2009-03-31 2010-10-06 联想(北京)有限公司 Adjustable image display and acquisition device
CN103246044A (en) * 2012-02-09 2013-08-14 联想(北京)有限公司 Automatic focusing method, automatic focusing system, and camera and camcorder provided with automatic focusing system
CN103984097A (en) * 2013-02-12 2014-08-13 精工爱普生株式会社 Head mounted display, control method for head mounted display, and image display system
CN106469038A (en) * 2016-09-26 2017-03-01 南京酷派软件技术有限公司 Display screen changing method based on multi-screen terminal and device
CN106973237A (en) * 2017-05-25 2017-07-21 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN108012110A (en) * 2016-11-01 2018-05-08 法乐第(北京)网络科技有限公司 A kind of car outer video acquiring method, device and electronic equipment
CN108833753A (en) * 2018-06-29 2018-11-16 维沃移动通信有限公司 A kind of image obtains and application method, terminal and computer readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103698884A (en) * 2013-12-12 2014-04-02 京东方科技集团股份有限公司 Opening type head-mounted display device and display method thereof
CN106056092B (en) * 2016-06-08 2019-08-20 华南理工大学 The gaze estimation method for headset equipment based on iris and pupil

Also Published As

CN110493523A (en): published 2019-11-22

Legal Events

Code: Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant