CN109218648B - Display control method and terminal equipment - Google Patents

Display control method and terminal equipment

Info

Publication number
CN109218648B
Authority
CN
China
Prior art keywords
screen
video image
terminal device
information
user
Prior art date
Legal status
Active
Application number
CN201811110132.4A
Other languages
Chinese (zh)
Other versions
CN109218648A (en)
Inventor
徐桃
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201811110132.4A
Publication of CN109218648A
Application granted
Publication of CN109218648B
Legal status: Active

Classifications

    • H04N 7/141 — Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/142 — Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04M 1/0214 — Foldable telephones, i.e. with body parts pivoting to an open position around an axis parallel to the plane they define in closed position
    • H04M 1/0266 — Details of the structure or mounting of specific components for a display module assembly
    • H04M 1/72403 — User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality

Abstract

Embodiments of the present invention provide a display control method and a terminal device, applied to the technical field of communications, for solving the problem of a poor video image display effect when a terminal device is in a video call. Specifically, the scheme is applied to a terminal device having a first screen and a second screen, and includes: in a state in which a first user and a second user are in a video call, acquiring a first video image of the first user and a second video image of the second user; and displaying the first video image on the first screen and the second video image on the second screen. The scheme is particularly applicable to displaying the video images of multiple parties on different screens during a video call of the terminal device.

Description

Display control method and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a display control method and terminal equipment.
Background
With the development of communication technology, the degree of intelligence of terminal devices such as mobile phones and tablet computers keeps improving to meet various user requirements. For example, users increasingly expect video calls on terminal devices to be interesting and engaging.
In the prior art, during a video call between one user and another user, the terminal device of one user generally displays the video image of one party as primary information on the entire display screen, and displays the video image of the other party as secondary information in an upper-right corner area of the display screen.
This causes the following problem: important content in the video image displayed on the entire display screen may be blocked by the other video image superimposed on it, so that this video image is displayed poorly. Moreover, since the display area of the other party's video image is generally small, its display effect is also poor. In other words, the video image display effect is poor when the terminal device conducts a video call.
Disclosure of Invention
The embodiment of the invention provides a display control method and terminal equipment, and aims to solve the problem that the video image display effect is poor when the terminal equipment is in video call.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a display control method, applied to a terminal device having a first screen and a second screen. The display control method includes: acquiring a first video image of a first user and a second video image of a second user in a state in which the first user and the second user are in a video call; and displaying the first video image on the first screen and the second video image on the second screen.
In a second aspect, an embodiment of the present invention further provides a terminal device, where the terminal device has a first screen and a second screen, and the terminal device includes an acquisition module and a display module. The acquisition module is configured to acquire a first video image of a first user and a second video image of a second user in a state in which the first user and the second user are in a video call; the display module is configured to display the first video image on the first screen and the second video image on the second screen.
In a third aspect, an embodiment of the present invention provides a terminal device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the display control method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the display control method according to the first aspect.
In an embodiment of the present invention, a terminal device has a first screen and a second screen. In a state in which a first user and a second user are in a video call, a first video image of the first user and a second video image of the second user can be acquired; the first video image is displayed on the first screen and the second video image is displayed on the second screen. Based on this scheme, the terminal device displays the first video image and the second video image on separate screens. Since the display area of a single screen (such as the first screen or the second screen) of the terminal device is generally large, each of the two video images occupies a large display area, and the two images do not overlap each other. Therefore, the display effect of the video images during a video call of the terminal device can be improved.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a display control method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of content displayed by a terminal device according to an embodiment of the present invention;
fig. 4 is a second flowchart of a display control method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a possible terminal device according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that "/" herein means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. "Plurality" means two or more.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" in an embodiment of the present invention is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first user and the second user, etc. are for distinguishing different users, not for describing a specific order of the users.
The display control method and the terminal device provided by the embodiments of the present invention are applied to a terminal device having at least two screens, for example a terminal device having a first screen and a second screen. Specifically, during a multi-party video call between two or more users, the terminal device may display the video images of the multiple parties on different screens among the at least two screens. Since the area of one screen is usually large and one video image can occupy a whole screen, the display effect of the multi-party video images is good. This alleviates the prior-art problem that the video image display effect is poor when a terminal device conducts a video call.
It should be noted that, in the display control method provided in the embodiments of the present invention, the execution subject may be a terminal device, a Central Processing Unit (CPU) of the terminal device, or a control module in the terminal device for executing the display control method. In the embodiments of the present invention, the display control method is described by taking a terminal device executing the method as an example.
The terminal device in the embodiment of the present invention may be a terminal device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
The following describes a software environment to which the display control method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework for applications; in compliance with the development principles of the framework, a developer may develop applications based on the application framework layer, for example a system settings application, a system chat application, and a system camera application, as well as third-party applications such as a third-party settings application, a third-party camera application, and a third-party chat application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the display control method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the display control method may operate based on the android operating system shown in fig. 1. Namely, the processor or the terminal device can implement the display control method provided by the embodiment of the invention by running the software program in the android operating system.
The following describes the display control method provided in the embodiment of the present invention in detail with reference to the flowchart of the display control method shown in fig. 2. Specifically, the display control method can be specifically applied to a terminal device having at least two screens. Wherein, although the logical order of the display control methods provided by embodiments of the present invention is shown in method flow diagrams, in some cases, the steps shown or described may be performed in an order different than here. For example, the display control method shown in fig. 2 may include steps 201 and 202:
step 201, in a state that a first user and a second user perform a video call, a terminal device acquires a first video image of the first user and a second video image of the second user.
The first video image of the first user is a video image corresponding to the first user in the video call, and the second video image of the second user is a video image corresponding to the second user in the video call.
Optionally, the terminal device provided in the embodiment of the present invention may be a terminal device capable of acquiring a video image, for example, the terminal device may acquire a video image through a camera (e.g., a front camera) therein, such as a first video image of a first user.
It is understood that the first video image may be a video image captured by a terminal device (denoted as terminal device 1) used by a first user, and the second video image may be a video image captured by a terminal device (denoted as terminal device 2) used by a second user.
It should be emphasized that the terminal device performing the display control method in the embodiment of the present invention, for example, the terminal device performing step 201 and step 202, may be a terminal device used by the first user, that is, the terminal device 1. At this time, the first user is an owner user, the terminal device 1 is a local device, and the second user is a non-owner user (i.e., a chat target).
The terminal devices described below in the embodiments of the present invention all refer to the terminal device 1 if not emphasized.
Optionally, the video call provided in the embodiment of the present invention may be a video call service provided by a system communication application in the terminal device, or a video call service provided by a third party communication application in the terminal device, which is not limited in the embodiment of the present invention.
Two or more video images can be included in a video call, that is, a video call can be a video call between two or more terminal devices.
Exemplarily, in the embodiment of the present invention, an example in which one video call includes two video images, that is, the first video image and the second video image, is taken as an example, to describe the display control method provided in the embodiment of the present invention. At this time, the video call is a video call between the terminal device 1 and the terminal device 2.
It should be noted that, in the embodiment of the present invention, the first video image and the second video image are both dynamic images, and at this time, the terminal device may perform processing on image frames in the first video image and the second video image respectively. Or the first video image is a frame image at a certain moment in a video call corresponding to the first user, and the second video image is a frame image at the certain moment in the video call corresponding to the second user.
Step 202, the terminal device displays the first video image on the first screen and displays the second video image on the second screen.
The first screen and the second screen are different screens of at least two screens of the terminal equipment.
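The routing described in steps 201 and 202 can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: `Screen`, `display`, and `route_video_call` are hypothetical stand-ins for whatever platform APIs a real dual-screen device would expose.

```python
from dataclasses import dataclass

@dataclass
class Screen:
    """Hypothetical stand-in for one physical display of the terminal device."""
    name: str
    shown: object = None

    def display(self, frame):
        # a real device would render the video frame on this physical screen
        self.shown = frame

def route_video_call(first_frame, second_frame, first_screen, second_screen):
    """Steps 201-202: given both parties' video images, show the first
    user's image on the first screen and the second user's on the second,
    so the two images never overlap."""
    first_screen.display(first_frame)
    second_screen.display(second_frame)
    return first_screen, second_screen
```

Because each image gets a whole screen, no picture-in-picture overlay is needed, which is the core of the scheme.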
It should be emphasized that the screens of the terminal device mentioned in the embodiments of the present invention, such as the first screen and the second screen, are display screens with a display function in the terminal device.
Furthermore, in the embodiment of the present invention, a screen in the terminal device may also have a touch function. At this time, the screen in the terminal device may be a combination of a display screen and a touch screen.
Alternatively, the number of the at least two screens in the terminal device may be two or more.
The at least two screens in the terminal equipment can comprise a main screen and other screens besides the main screen. The panel where the main screen of the terminal device is located is usually provided with a camera, that is, when a user faces the main screen, the terminal device can acquire a video image corresponding to the user through the camera. At this time, the terminal device usually displays the video image corresponding to the owner user on its main screen, and displays the video image corresponding to the non-owner user on other screens.
For example, in the embodiment of the present invention, the first screen in the terminal device may be a home screen of the terminal device, and the second screen may be another screen in the terminal device.
Optionally, the terminal device having at least two screens (e.g., the first screen and the second screen) provided in the embodiment of the present invention may be a folding screen type terminal device or a non-folding screen type terminal device. At least two screens in the folding screen type terminal equipment can be folded, and the folding angle between two adjacent screens in the at least two screens can be an angle between 0 degree and 360 degrees. At least two screens in the non-folding screen type terminal device may be arranged on different surfaces in the terminal device, for example, when the non-folding screen type terminal device is a mobile phone, the at least two screens (such as a first screen and a second screen) may be arranged on the front surface and the back surface of the mobile phone, respectively.
Exemplarily, as shown in fig. 3, a schematic diagram of content displayed by a terminal device according to an embodiment of the present invention is provided. Among them, the terminal device shown in fig. 3 includes a screen 31 and a screen 32, where the video image 1 corresponding to the user a is displayed on the screen 31, and the video image 2 corresponding to the user B is displayed on the screen 32. The screen 31 shown in fig. 3 may be a first screen in the terminal device, the user a may be the first user, and the video image 1 may be the first video image; the screen 32 may be a second screen in the terminal device, the user B may be the second user, and the video image 2 may be the second video image.
As shown in fig. 3, the video image 1 is displayed on the entire screen 31 and the video image 2 is displayed on the entire screen 32. In this case, the video image 1 and the video image 2 do not overlap each other, and each occupies a large display area.
Similarly, in the display control method provided by the present invention, for the description that the terminal device displays the multi-party video images in one video call on multiple screens respectively, reference may be made to the description that the terminal device displays the first video image on the first screen and displays the second video image on the second screen in the embodiment of the present invention, and details of the embodiment of the present invention are not repeated here.
It should be noted that the display control method provided in the embodiment of the present invention may be applied to a terminal device having a first screen and a second screen, and includes: in a state in which a first user and a second user are in a video call, acquiring a first video image of the first user and a second video image of the second user; and displaying the first video image on the first screen and the second video image on the second screen. Based on this scheme, the terminal device displays the two video images on separate screens. Since the display area of a single screen (such as the first screen or the second screen) of the terminal device is generally large, each video image occupies a large display area, and the two images do not overlap each other. Therefore, the display effect of the video images during a video call of the terminal device can be improved.
In a possible implementation manner, in the embodiment of the present invention, the terminal device is a folding screen type terminal device. Exemplarily, fig. 4 shows a second flowchart of the display control method provided in the embodiment of the present invention. In fig. 4, after step 201 and before step 202 shown in fig. 2, steps 203 to 205 may be further included:
step 203, the terminal device obtains target information, wherein the target information comprises at least one of target character posture information and folding angle.
The target information comprises at least one of target character posture information and a folding angle, the target character posture information is character posture information in the first video image, and the folding angle is a folding angle between the first screen and the second screen.
In a first application scenario, the target information of the embodiment of the present invention includes the target person pose information. The person pose information (e.g., the target person pose information) may include at least one of person facial pose information, person hand posture information, and person limb motion information, and the first AR object may be a foreground special effect image. The first video image is a real image, and the first AR object may be a virtual image.
For example, the person facial pose information may indicate a facial motion of the user, for example one that reflects the user's expression, such as a crying expression, a smiling expression, or a kissing action. The person hand posture information may indicate a hand posture such as two hands pressed together, five fingers open, or a scissors (V-sign) gesture. The person limb pose information may indicate a motion such as the user running or waving a hand.
Illustratively, when the person facial pose information indicates a crying expression, the first AR object may be a virtual tear image; when it indicates a smiling expression, the first AR object may be a virtual smiling face image; when it indicates a kissing action, the first AR object may be a virtual red lip image, a love heart image, or the like. When the person hand posture information indicates that the two hands are pressed together, the first AR object may be a virtual Buddha light image or the like; when it indicates that the five fingers are open, the first AR object may be a virtual pet image or the like. When the person limb pose information indicates a running motion of the user, the first AR object may be a virtual smoke image or the like; when it indicates a hand-waving motion of the user, the first AR object may be a virtual ribbon image or the like.
Of course, the person pose information may be a combination of multiple pieces of information among the person facial pose information, the person hand posture information, and the person limb motion information. For example, the person pose information may indicate a kissing motion of the user; in this case it may include the person facial pose information, the person hand posture information, and the person limb motion information at the same time.
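The example correspondences above can be sketched as a simple lookup from detected pose information to a foreground AR effect. The keys and effect names below are hypothetical labels chosen to mirror the description, not identifiers from the patent.

```python
# Illustrative pose -> foreground AR effect table, following the examples
# in the description (tears for crying, Buddha light for pressed hands, etc.).
POSE_TO_AR_OBJECT = {
    "crying": "virtual_tears",
    "smiling": "virtual_smiley_face",
    "kissing": "virtual_red_lips",
    "hands_together": "virtual_buddha_light",
    "five_fingers_open": "virtual_pet",
    "running": "virtual_smoke",
    "waving": "virtual_ribbon",
}

def first_ar_object(pose: str):
    """Return the first AR object for a detected pose, or None when no
    effect is configured for that pose."""
    return POSE_TO_AR_OBJECT.get(pose)
```

A real implementation would feed the output of a pose/expression recognizer into such a table rather than a plain string.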
It should be noted that, in the embodiment of the present invention, the second AR object corresponding to the first AR object may be an AR object that interacts with the first AR object in an interesting manner.
For example, when the target person pose information indicates a kissing action of the user, the first AR object may be a virtual kiss expression and the second AR object may be a virtual shy expression.
The person limb motion information provided in the embodiment of the present invention may specifically indicate an arm motion, a leg motion, a foot motion, or the like of the person, which is not specifically limited in the embodiment of the present invention.
Alternatively, the terminal device may set priorities among the person facial pose information, the person hand posture information, and the person limb motion information; for example, the priorities of the person facial pose information, the person hand posture information, and the person limb motion information decrease in that order.
For example, the terminal device may preferentially process the person pose information with the higher priority, that is, preferentially acquire the AR object corresponding to the preset person pose information with the higher priority as the first AR object.
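The priority rule can be sketched as follows, assuming a face > hand > limb ordering as in the example above; the kind names and the shape of the `detected` input are hypothetical.

```python
# Descending priority of pose-information kinds, per the example ordering.
PRIORITY = ["face", "hand", "limb"]

def select_pose_info(detected: dict):
    """Pick the highest-priority kind of pose information that was detected.

    `detected` maps a kind ("face"/"hand"/"limb") to its pose info, if any.
    Returns a (kind, info) pair, or None when nothing was detected.
    """
    for kind in PRIORITY:
        if detected.get(kind) is not None:
            return kind, detected[kind]
    return None
```

So if both a facial expression and a hand gesture are recognized in the same frame, the facial expression determines the first AR object.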
It can be understood that, in the embodiment of the present invention, in the first application scenario, the first AR object and the second AR object obtained by the terminal device may be referred to as AR expressions.
It should be noted that, in the embodiment of the present invention, the system sets a permission for adding AR expressions, and all communication applications capable of performing a video call are brought within the scope in which AR expressions can be enabled. When using a communication application, the user may choose to enable this feature for it. It should be noted that both users of the video call (e.g., the first user and the second user) need to enable the permission.
In addition, after the terminal device 2 receives the target character posture information and the first AR object sent by the terminal device 1, the second AR object, i.e. the corresponding AR expression, such as the virtual shy image, can be found from the second image library.
In a second application scenario, the target information may include a folding angle, and the first AR object may be a background special effect image.
Illustratively, in the embodiment of the present invention, the at least one pre-stored folding angle range may include [0, 3] degrees, (3, 170] degrees, and (170, 180] degrees. When the folding angle between the first screen and the second screen of the terminal device is within [0, 3] degrees, the folding screen of the terminal device is in the single-screen state, denoted as folding range 1. When the folding angle is within (3, 170] degrees, the folding screen is in the folded state, denoted as folding range 2. When the folding angle is within (170, 180] degrees, the folding screen is in the unfolded state, denoted as folding range 3.
Illustratively, in the embodiment of the present invention, each preset folding angle range in the at least one folding angle range corresponds to one first AR object, that is, different screen states (i.e., states of the folding screen) correspond to different chat backgrounds, which includes but is not limited to the following correspondence relationships:
the first AR object corresponding to the single-screen state (namely, the folding range 1) is an indoor background image and the like; the first AR object corresponding to the folding state (namely the folding range 2) is an opened book image, a black-and-white key image and the like, scene subdivision can be carried out according to different folding angles, and different folding angles correspond to different scenes; the first AR object corresponding to the unfolded state (i.e., the folded range 3) is a beach landscape image, a grassland landscape image, or the like.
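The fold-angle ranges and the state-to-background correspondence above can be sketched as follows. The range boundaries come from the description; the background names are hypothetical labels for the example images (indoor scene, open book, beach).

```python
# Map a hinge angle in degrees to the screen state, using the ranges
# [0, 3], (3, 170], (170, 180] from the description.
def screen_state(angle_deg: float) -> str:
    if not 0 <= angle_deg <= 180:
        raise ValueError("fold angle must be within [0, 180] degrees")
    if angle_deg <= 3:
        return "single-screen"   # folding range 1
    if angle_deg <= 170:
        return "folded"          # folding range 2
    return "unfolded"            # folding range 3

# Each screen state corresponds to one background special effect image
# (the first AR object in the second application scenario).
STATE_TO_BACKGROUND = {
    "single-screen": "indoor_background",
    "folded": "open_book_background",
    "unfolded": "beach_background",
}
```

Reading the hinge angle in real time and re-running this lookup is what lets the user change the chat background simply by folding or unfolding the device.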
It is to be understood that in the second application scenario, the second AR object corresponding to the first AR object may be the same as the first AR object.
Specifically, the terminal device may obtain the current screen state read by the system in real time, and automatically switch to the corresponding chat background, and the user may freely change the current chat background by changing the screen state. That is, the user manually changes the folding angle between the first screen and the second screen of the terminal device to change the first AR object and the second AR object.
It should be emphasized that the display control method provided in the embodiment of the present invention may be applied to the first application scenario and the second application scenario at the same time. In this case, the terminal device may respectively obtain the first AR object corresponding to the target character posture information and the first AR object corresponding to the folding angle, and display the two first AR objects on the first video image simultaneously. That is, the foreground special effect image and the background special effect image are displayed superimposed on the first video image; for example, the AR expression and the chat background are displayed superimposed on the first video image.
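As a minimal sketch of this superimposed display, the layers can be composed bottom-to-top: the background special effect, then the video image, then any foreground special effects. The function and layer names are illustrative assumptions; a real implementation would blend pixel buffers rather than strings:

```python
def compose_frame(video_image, foreground_objects=(), background=None):
    """Return the draw order for one frame: background special effect
    at the bottom, the video image above it, and foreground special
    effects (e.g. AR expressions) on top."""
    layers = []
    if background is not None:
        layers.append(background)       # e.g. the chat background
    layers.append(video_image)          # the first video image
    layers.extend(foreground_objects)   # e.g. AR expressions
    return layers                       # first element = bottom layer
```

For example, `compose_frame("video_1", ["ar_kiss"], background="open_book")` yields the draw order background, video image, AR expression.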
Similarly, two or more first AR objects may also be displayed superimposed on the second video image displayed by the terminal device, that is, the foreground special effect image and the background special effect image are displayed superimposed on the second video image.
It should be noted that, in the display control method provided in the embodiment of the present invention, the terminal device may acquire the first AR object in multiple forms, for example, may acquire a foreground special effect image and a background special effect image for a person in the first video image. Therefore, two or more first AR objects can be displayed on the first video image in an overlapping mode, and interestingness of displaying the video image in the video call process of the terminal device can be further improved.
Further, in fig. 4, step 202 in fig. 2 may be replaced by step 202 a:
step 202a, the terminal device displays the first video image and the first AR object on the first screen, and displays the second video image and the second AR object on the second screen.
The terminal device 1, i.e. the terminal device executing the step 202a, displays the virtual first AR object superimposed on the actual first video image, and displays the virtual second AR object superimposed on the actual second video image.
Specifically, after the terminal device 1 obtains the first AR object, it may interact with the terminal device 2 to obtain the second AR object, and enable the terminal device 2 to obtain the first AR object and the second AR object.
It can be understood that in the method provided by the embodiment of the present invention, the terminal device 2 may display the second video image and the second AR object on its main screen, and display the first video image and the first AR object on other screens than its main screen.
In addition, in the embodiment of the present invention, the first video image and the second video image may be respectively displayed in different areas on the same screen in the terminal device (for example, the terminal device 1), and the first AR object and the second AR object may be respectively displayed in corresponding areas.
It can be understood that Augmented Reality (AR) technology calculates the position and angle of the camera image in real time and adds corresponding images, videos and 3D models; its goal is to fit the virtual world onto the real world shown on the screen and allow interaction with it.
Specifically, based on the AR technology, the terminal device may analyze the face and the positions of the facial features of a person (e.g., the first user) in the first video image, and present the AR expression represented by the first AR object on the displayed face of the first user; for example, a kiss expression is presented as a red-lip effect on the cheek of the first user. Similarly, it may analyze the face and the positions of the facial features of a person (e.g., the second user) in the second video image, and present the AR expression represented by the second AR object on the displayed face of the second user; for example, a shy expression is presented as two blushes on the cheeks of the second user. Therefore, the first user and the second user can see more interesting expressions through the chat interfaces on the screens of their respective terminal devices, which improves the interestingness of video image display during a video call of the terminal device.
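How such an effect might be anchored on the face can be sketched from facial landmarks. The landmark keys and the halfway heuristic below are assumptions for illustration only, not the patent's actual face-analysis method:

```python
def cheek_positions(landmarks):
    """Estimate left/right cheek anchor points (e.g. for a red-lip or
    blush effect) from eye and mouth landmarks. Each landmark is an
    illustrative (x, y) pixel tuple."""
    lx, ly = landmarks["left_eye"]
    rx, ry = landmarks["right_eye"]
    mx, my = landmarks["mouth"]
    cheek_y = (ly + my) / 2  # roughly halfway between eyes and mouth
    return (lx, cheek_y), (rx, cheek_y)
```

An AR expression image would then be rendered centered on the returned points, following the face as the landmarks update each frame.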
It should be noted that, in the display control method provided in the embodiment of the present invention, the terminal device may obtain the target information in real time, and determine the first AR object corresponding to the target information and the second AR object corresponding to the first AR object. Therefore, the terminal device can adopt the AR technology to display the first video image and the first AR object on the first screen, and the second video image and the second AR object on the second screen, which can improve the interestingness of displaying video images on the terminal device.
In a possible implementation manner, at least one preset message may be stored in the terminal device, and each preset message may include at least one of the preset character posture information and the preset folding angle range.
It is understood that the terminal device (i.e. the terminal device 1) may determine whether the current first video image is a human image; in the case where the first video image is determined to be a person image, the terminal device may determine the target person posture information indicated by the first video image.
It can be understood that, in the embodiment of the present invention, at least one piece of preset person posture information may be stored in advance, and after the terminal device determines that the first video image acquired by the terminal device is a person image and detects the person posture information indicated by the first video image, the person posture information may be compared with the previously stored person posture information to determine the target person posture information.
For example, the preset information provided by the embodiment of the present invention may include at least one preset character posture information stored in advance.
Optionally, the step 203 may include steps 203a and 203 b:
step 203a, the terminal device acquires at least one piece of character posture information in the first video image.
Optionally, the first video image may include a plurality of persons, and the persons may have priorities among themselves. For example, among the plurality of persons, a person located closer to the middle of the first video image has a higher priority, and a person farther from the middle has a lower priority; or, a person occupying more pixels in the first video image has a higher priority, and a person occupying fewer pixels has a lower priority.
The at least one piece of character posture information may be character posture information of one character in the first video image, or character posture information of a plurality of characters in the first video image. The embodiment of the present invention takes the case of character posture information of one character in the first video image as an example.
For example, the terminal device may preferentially perform the above operation of acquiring at least one piece of character posture information on the character with the highest priority in the first video image. Or the terminal device may perform the above operation of acquiring at least one piece of character posture information on the characters with the priorities from high to low in the first video image in sequence.
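The two priority rules mentioned above (distance to the image centre, and pixel count) can be sketched as a single sort key; the person representation here is a hypothetical dict, not a structure defined by the patent:

```python
def order_by_priority(persons, frame_width):
    """Order detected persons so that the highest-priority person
    comes first: a smaller distance to the horizontal centre of the
    frame wins, with a larger pixel area as the tiebreaker."""
    centre = frame_width / 2
    return sorted(
        persons,
        key=lambda p: (abs(p["center_x"] - centre), -p["pixel_area"]),
    )
```

The terminal device would then run posture detection on the first element, or walk the sorted list from high priority to low.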
And 203b, the terminal device takes the character posture information matched with the preset character posture information in the at least one piece of character posture information as the target character posture information.
In an embodiment of the present invention, matching one piece of character pose information with another piece of character pose information means that the two pieces of character pose information are the same or have a similarity greater than a certain threshold.
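This matching rule can be sketched as a threshold test. The similarity function is an assumption (here a trivial exact-match fallback), since the patent does not specify one:

```python
def match_target_pose(detected, presets, threshold=0.8, similarity=None):
    """Return the first preset pose whose similarity to the detected
    pose reaches the threshold, or None if nothing matches."""
    if similarity is None:
        # trivial fallback: identical poses score 1.0, all others 0.0
        similarity = lambda a, b: 1.0 if a == b else 0.0
    for preset in presets:
        if similarity(detected, preset) >= threshold:
            return preset
    return None
```

A real implementation would pass a similarity function over landmark or joint coordinates instead of relying on exact equality.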
In the embodiment of the invention, the terminal equipment can acquire at least one piece of character posture information in the first video image in real time, and determine the character posture information matched with the preset character posture information as the target character posture information. Therefore, the terminal equipment can successfully acquire the first AR object according to the target character posture information.
In a second application scenario, the target information of the embodiment of the present invention may include a folding angle.
It is understood that at least one folding angle range may be stored in advance in the embodiment of the present invention. Specifically, the terminal device (i.e., the terminal device 1) may obtain the folding angle between the first screen and the second screen of the terminal device in real time; subsequently, the terminal device may determine which of the at least one folding angle range the current folding angle is in.
In the embodiment of the invention, the terminal equipment can acquire the folding angle between the first screen and the second screen of the terminal equipment in real time, so that the terminal equipment can successfully acquire the first AR object according to the folding angle in real time.
And step 204, the terminal equipment acquires a first AR object corresponding to the target information according to the target information.
It can be understood that, in the embodiment of the present invention, one preset information may correspond to one preset AR object.
Optionally, step 204 in this embodiment of the present invention may be implemented by step 204 a:
step 204a, the terminal device obtains a first AR object corresponding to the target information from the first image library according to the target information and a first corresponding relationship, where the first corresponding relationship is a corresponding relationship between the target information and the first AR object.
The first image library is an image library stored in the terminal device or a server interacting with the terminal device in advance, and the first image library is an image library corresponding to the target information. Each image in the first image library (such as the first AR object) may correspond to one piece of information (such as the target information).
Optionally, the first corresponding relationship between the target information and the first AR object may be that the target information includes an identifier, and the attribute information of the first AR object also includes the identifier.
For example, in a first application scenario, each preset person posture information in the at least one preset person posture information in the embodiment of the present invention may correspond to one AR object.
For example, in the second application scenario, all folding angles within the same preset folding angle range in the embodiment of the present invention may correspond to the same AR object.
Step 205, the terminal device obtains a second AR object corresponding to the first AR object according to the first AR object.
It is understood that there is a corresponding relationship between the first AR object and the second AR object.
Optionally, the step 205 may be implemented by the step 205 a:
step 205a, the terminal device obtains a second AR object corresponding to the first AR object from the second image library according to the first AR object and a second corresponding relationship, where the second corresponding relationship is a corresponding relationship between the first AR object and the second AR object.
Illustratively, the second image library provided in the embodiment of the present invention may be the same image library as the first image library.
Optionally, the second corresponding relationship between the first AR object and the second AR object may be that the attribute information of the first AR object includes an identifier, and the attribute information of the second AR object also includes the identifier.
Further, the identifier included in the target information may be the same as the identifier included in the second AR object, i.e. the target information, the first AR object and the second AR object all include the same identifier.
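Because the target information, the first AR object and the second AR object all carry the same identifier, each lookup reduces to a dictionary search on that identifier. The library contents below are placeholders for illustration:

```python
# identifier -> first AR object (first image library), illustrative
FIRST_LIBRARY = {"kiss": "virtual_kiss.png"}
# identifier -> second AR object (second image library), illustrative
SECOND_LIBRARY = {"kiss": "virtual_blush.png"}

def lookup_ar_objects(target_id):
    """Resolve the first and second AR objects that share one
    identifier; returns (None, None) when no match exists."""
    first = FIRST_LIBRARY.get(target_id)
    second = SECOND_LIBRARY.get(target_id) if first is not None else None
    return first, second
```

This is why, as noted above, the user never needs to search manually: the shared identifier makes both retrievals direct key lookups.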
In addition, the first AR object and the second AR object may be a dynamic image or a static image, which is not limited in this embodiment of the present invention.
It should be noted that, in the display control method provided in the embodiment of the present invention, since the first image library and the second image library are stored in advance, when the terminal device is triggered by the user to acquire an interesting AR image, the first AR object and the second AR object that are mutually objects can be acquired more conveniently, and the user does not need to manually search for a needed AR object. Therefore, the method and the device are beneficial to improving the rapidity and convenience of displaying the AR object while displaying the video image by the terminal device.
In a possible implementation manner, in the display control method provided in an embodiment of the present invention, the second AR object includes at least one sub AR object. Specifically, step 202a in the above embodiment may be replaced by step 202b or step 202 c:
step 202b, the terminal device displays the first video image and the first AR object on the first screen, and displays the second video image and the first sub-AR object on the second screen, where the first sub-AR object is one of the at least one sub-AR object.
Illustratively, when the first AR object is a virtual kiss image, the at least one sub-AR object included in the corresponding second AR object may include a virtual blush image and a virtual hand-covering image, both of which are virtual shy images. For example, the first sub-AR object may be the virtual blush image.
In conjunction with the contents displayed by the terminal device shown in fig. 3, the terminal device displays a virtual kiss image superimposed on the video image 1 displayed on the screen 31; the terminal device displays a virtual blush image superimposed on the video image 2 displayed on the screen 32.
Step 202c, the terminal device displays the first video image and the first AR object on the first screen, and displays the second video image and at least one sub AR object on the second screen according to the folding angle.
In conjunction with the contents displayed by the terminal device shown in fig. 3, the terminal device displays a virtual kiss image superimposed on the video image 1 displayed on the screen 31; the terminal device displays a virtual blush image and a virtual hand-covering image superimposed on the video image 2 displayed on the screen 32.
It should be noted that, in the display control method provided in the embodiment of the present invention, when the terminal device superimposes and displays the first video image and the first AR object on the first screen of the terminal device in combination with the folding angle of the folding screen of the terminal device, such as the folding angle between the first screen and the second screen, the terminal device superimposes and displays the second video image and the sub AR object in the one or more second AR objects on the second screen. Therefore, the interestingness of video image display in the video call process of the terminal equipment can be further improved.
Optionally, in the case that the second AR object includes at least one sub-AR object, the step 202a may also be implemented by the step 202d-1 and the step 202 d-2.
It is understood that the above steps 202d-1 and 202d-2 can also be a specific extension of 202c in the above embodiment.
Step 202d-1, under the condition that the folding angle is in the first angle range, the terminal device displays the first video image and the first AR object on the first screen, and displays the second video image and all the sub AR objects on the second screen.
For example, the first angle range may be a folding angle range 3, that is, the state of the folding screen of the terminal device is an unfolded state.
That is, the terminal device may simultaneously display the plurality of sub-AR objects superimposed on the second video image on the second screen, for example, simultaneously displaying the virtual blush image and the virtual hand-covering image. In addition, when one piece of target information remains unchanged, the terminal device may continuously display the plurality of sub-AR objects on the second video image on the second screen for a certain duration. The duration may be a preset duration, denoted as duration 1.
Step 202d-2, under the condition that the folding angle is in the second angle range, the terminal device displays the first video image and the first AR object on the first screen, and sequentially displays each sub-AR object on the second screen according to a preset time interval.
For example, the second angle range may be folding angle range 2, i.e., the state of the folding screen of the terminal device is the folded state.
That is, the terminal device may sequentially display each sub-AR object superimposed on the second video image on the second screen, for example, sequentially displaying the virtual blush image and then the virtual hand-covering image. The preset time interval may be a preset duration, denoted as duration 2, and duration 2 may be less than or equal to duration 1.
Optionally, when sequentially displaying each sub-AR object on the second screen, the terminal device may display the sub-AR objects of the at least one sub-AR object in a random order.
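Steps 202d-1 and 202d-2, together with the optional random order, can be sketched as a small scheduler. The 170-degree boundary follows the earlier folding ranges, and the return convention (object list plus per-object delay) is an illustrative assumption:

```python
import random

def sub_ar_schedule(folding_angle, sub_objects, interval, shuffle=False):
    """Return (objects, delay): delay 0 means display all sub-AR
    objects at once (unfolded state, step 202d-1); a positive delay
    means display them one by one at the preset time interval
    (folded state, step 202d-2)."""
    objs = list(sub_objects)
    if folding_angle > 170:          # first angle range: unfolded state
        return objs, 0
    if shuffle:                      # optional random display order
        random.shuffle(objs)
    return objs, interval            # second angle range: folded state
```

For example, at 175 degrees the blush and hand-covering images are shown together, while at 90 degrees they appear one after another, separated by the preset interval.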
It should be noted that, in the display control method provided in the embodiment of the present invention, when the terminal device superimposes and displays the first video image and the first AR object on the first screen of the terminal device in combination with the folding angle of the folding screen of the terminal device, such as the folding angle between the first screen and the second screen, the second video image and the sub AR object in the one or more second AR objects are superimposed and displayed on the second screen according to a certain time sequence. Therefore, the interestingness of video image display in the video call process of the terminal equipment can be further improved.
Exemplarily, as shown in fig. 5, a schematic diagram of a possible structure of a terminal device provided in an embodiment of the present invention is shown. Fig. 5 shows a terminal device 50 having a first screen and a second screen; terminal device 50, comprising: an acquisition module 501 and a display module 502; an obtaining module 501, configured to obtain a first video image of a first user and a second video image of a second user when the first user and the second user perform a video call; the display module 502 is configured to display the first video image acquired by the acquisition module 501 on a first screen, and display the second video image acquired by the acquisition module 501 on a second screen.
Optionally, the terminal device 50 is a folding screen type terminal device; the obtaining module 501 is further configured to obtain target information after obtaining the first video image and the second video image, where the target information includes at least one of target person posture information and a folding angle; acquiring a first AR object corresponding to the target information according to the target information; acquiring a second AR object corresponding to the first AR object according to the first AR object; a display module 502, further configured to display a first video image and a first AR object on a first screen, and display a second video image and a second AR object on a second screen; wherein the target character posture information is character posture information in the first video image; the character posture information comprises at least one of character face posture information, character gesture information and character limb action information; the folding angle is a folding angle between the first screen and the second screen.
Optionally, the second AR object comprises at least one sub-AR object; a display module 502, specifically configured to display a second video image and a first sub-AR object on a second screen, where the first sub-AR object is one of at least one sub-AR object; or, displaying the second video image and the at least one sub AR object on the second screen according to the folding angle.
Optionally, the second AR object comprises at least one sub-AR object; a display module 502, specifically configured to display the second video image and all the sub AR objects on the second screen when the folding angle is in the first angle range; and under the condition that the folding angle is in the second angle range, sequentially displaying each sub-AR object on the second screen according to a preset time interval.
Optionally, the obtaining module 501 is specifically configured to obtain a first AR object corresponding to the target information from the first image library according to the target information and a first corresponding relationship, where the first corresponding relationship is a corresponding relationship between the target information and the first AR object.
Optionally, the obtaining module 501 is specifically configured to obtain a second AR object corresponding to the first AR object from the second image library according to the first AR object and a second corresponding relationship, where the second corresponding relationship is a corresponding relationship between the first AR object and the second AR object.
Optionally, the target information includes target person posture information; an obtaining module 501, configured to obtain at least one piece of character pose information in a first video image; and taking the character posture information matched with the preset character posture information in the at least one piece of character posture information as the target character posture information.
The terminal device 50 provided in the embodiment of the present invention can implement each process implemented by the terminal device in the foregoing method embodiments, and for avoiding repetition, details are not described here again.
It should be noted that the terminal device provided in the embodiment of the present invention has a first screen and a second screen. Under the condition that a first user and a second user carry out video call, a first video image of the first user and a second video image of the second user can be obtained; the first video image is displayed on the first screen and the second video image is displayed on the second screen. Based on the scheme, the terminal device can respectively display the first video image on the first screen and the second video image on the second screen, and the area of the display area of one screen (such as the first screen or the second screen) of the terminal device is generally larger, so that the area of the display area occupied by the first video image and the second video image displayed by the terminal device is larger, and the first video image and the second video image are not overlapped. Therefore, the display effect of the video image during the video call of the terminal equipment can be improved.
Fig. 6 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention, where the terminal device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 6 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
The processor 110 is configured to obtain a first video image of a first user and a second video image of a second user when the first user and the second user perform a video call; and a display unit 106 for displaying the first video image acquired by the processor 110 on a first screen and displaying the second video image acquired by the processor 110 on a second screen.
It should be noted that the terminal device provided in the embodiment of the present invention has a first screen and a second screen. Under the condition that a first user and a second user carry out video call, a first video image of the first user and a second video image of the second user can be obtained; the first video image is displayed on the first screen and the second video image is displayed on the second screen. Based on the scheme, the terminal device can respectively display the first video image on the first screen and the second video image on the second screen, and the area of the display area of one screen (such as the first screen or the second screen) of the terminal device is generally larger, so that the area of the display area occupied by the first video image and the second video image displayed by the terminal device is larger, and the first video image and the second video image are not overlapped. Therefore, the display effect of the video image during the video call of the terminal equipment can be improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphic processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and may be capable of processing such sound into audio data. In the case of a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 and output.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 6, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal device 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal device 100, or may be used to transmit data between the terminal device 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Preferably, an embodiment of the present invention further provides a terminal device, which includes a processor 110, a memory 109, and a computer program stored in the memory 109 and capable of running on the processor 110, where the computer program, when executed by the processor 110, implements each process of the above display control method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the display control method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to those embodiments, which are illustrative rather than restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A display control method applied to a terminal device having a first screen and a second screen, comprising:
acquiring a first video image of a first user and a second video image of a second user in a state in which the first user and the second user are in a video call;
displaying the first video image on a first screen and the second video image on a second screen;
the displaying the first video image on a first screen and the second video image on a second screen includes:
displaying the first video image and a first AR object on a first screen, and displaying the second video image and a second AR object on a second screen, the first AR object being different from the second AR object, the second AR object being for interacting with the first AR object, the first AR object and the second AR object both including a foreground special effect image and a background special effect image;
the terminal equipment is folding screen type terminal equipment;
after the obtaining the first video image of the first user and the second video image of the second user, the display control method further includes:
acquiring target information, wherein the target information comprises a folding angle, and the folding angle is the folding angle between the first screen and the second screen;
acquiring a first AR object corresponding to the target information according to the target information;
acquiring a second AR object corresponding to the first AR object according to the first AR object;
the second AR object comprises at least one sub-AR object;
the displaying the second video image and the second AR object on a second screen includes:
displaying the second video image and the at least one sub-AR object on the second screen according to the folding angle.
2. The display control method according to claim 1,
the target information further comprises target character posture information;
wherein the target character posture information is character posture information in the first video image; and the character posture information includes at least one of character facial posture information, character gesture information, and character limb motion information.
3. The display control method according to claim 1, wherein the second AR object includes at least one sub AR object;
the displaying the second video image and the second AR object on the second screen includes:
displaying the second video image and all of the sub-AR objects on the second screen under the condition that the folding angle is in a first angle range;
and under the condition that the folding angle is in a second angle range, sequentially displaying each sub-AR object on the second screen according to a preset time interval.
4. The display control method according to claim 2, wherein the target information includes target character posture information;
the acquiring of the target information includes:
acquiring at least one piece of character posture information in the first video image;
and taking, as the target character posture information, the character posture information that matches preset character posture information among the at least one piece of character posture information.
5. A terminal device, characterized in that the terminal device has a first screen and a second screen, the terminal device comprising: the device comprises an acquisition module and a display module;
the acquisition module is used for acquiring a first video image of a first user and a second video image of a second user in a video call state between the first user and the second user;
the display module is used for displaying the first video image on a first screen and displaying the second video image on a second screen;
the display module is specifically configured to display the first video image and a first AR object on a first screen, and display the second video image and a second AR object on a second screen, where the first AR object is different from the second AR object, the second AR object is used to interact with the first AR object, and both the first AR object and the second AR object include a foreground special effect image and a background special effect image;
the terminal equipment is folding screen type terminal equipment;
the acquisition module is further configured to obtain target information after obtaining a first video image of the first user and a second video image of the second user, where the target information includes a folding angle, and the folding angle is a folding angle between the first screen and the second screen; acquire a first AR object corresponding to the target information according to the target information; and acquire a second AR object corresponding to the first AR object according to the first AR object;
the second AR object comprises at least one sub-AR object;
the display module is specifically configured to display the second video image and the at least one sub AR object on the second screen according to the folding angle.
6. The terminal device of claim 5,
the target information further comprises target character posture information;
wherein the target character posture information is character posture information in the first video image; and the character posture information includes at least one of character facial posture information, character gesture information, and character limb motion information.
7. The terminal device of claim 5, wherein the second AR object comprises at least one sub-AR object;
the display module is specifically configured to display the second video image and all the sub AR objects on the second screen when the folding angle is within a first angle range; and under the condition that the folding angle is in a second angle range, sequentially displaying each sub-AR object on the second screen according to a preset time interval.
8. The terminal device according to claim 6, wherein the target information includes target character posture information;
the acquisition module is specifically used for acquiring at least one piece of character posture information in the first video image; and taking the character posture information matched with the preset character posture information in the at least one piece of character posture information as the target character posture information.
9. A terminal device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the display control method according to any one of claims 1 to 4.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the display control method according to any one of claims 1 to 4.
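The angle-dependent display rule of claims 1 and 3 can be sketched, purely for illustration, as a scheduling function: within a first angle range all sub-AR objects are displayed at once; within a second angle range each sub-AR object is displayed in turn at a preset time interval. This is not the patented implementation; the concrete angle ranges and the interval value are assumptions, since the claims leave them unspecified.

```python
# Illustrative sketch of the folding-angle display rule in claims 1 and 3.
# The angle ranges (here 120-180 and 60-120 degrees) and the 500 ms preset
# interval are hypothetical values chosen only to make the sketch runnable.
def plan_sub_ar_display(fold_angle_deg: float,
                        sub_ar_objects: list[str],
                        first_range: tuple[float, float] = (120.0, 180.0),
                        second_range: tuple[float, float] = (60.0, 120.0),
                        interval_ms: int = 500) -> list[tuple[int, str]]:
    """Return (display_time_ms, sub_ar_object) pairs for the second screen."""
    lo1, hi1 = first_range
    lo2, hi2 = second_range
    if lo1 <= fold_angle_deg <= hi1:
        # First angle range: display all sub-AR objects immediately.
        return [(0, obj) for obj in sub_ar_objects]
    if lo2 <= fold_angle_deg < hi2:
        # Second angle range: display each sub-AR object in sequence,
        # one every `interval_ms` milliseconds.
        return [(i * interval_ms, obj) for i, obj in enumerate(sub_ar_objects)]
    return []  # outside both ranges: no sub-AR objects are displayed
```

For instance, at a 150-degree fold every sub-AR object is scheduled at time 0, while at 90 degrees the objects are staggered 500 ms apart, matching the "preset time interval" wording of claim 3.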
CN201811110132.4A 2018-09-21 2018-09-21 Display control method and terminal equipment Active CN109218648B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811110132.4A CN109218648B (en) 2018-09-21 2018-09-21 Display control method and terminal equipment

Publications (2)

Publication Number Publication Date
CN109218648A CN109218648A (en) 2019-01-15
CN109218648B true CN109218648B (en) 2021-01-22

Family

ID=64985465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811110132.4A Active CN109218648B (en) 2018-09-21 2018-09-21 Display control method and terminal equipment

Country Status (1)

Country Link
CN (1) CN109218648B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109714485B (en) * 2019-01-10 2021-07-23 维沃移动通信有限公司 Display method and mobile terminal
CN110087013B (en) * 2019-04-18 2022-06-17 徐静思 Video chat method, mobile terminal and computer readable storage medium
CN110996034A (en) * 2019-04-25 2020-04-10 华为技术有限公司 Application control method and electronic device
CN110191301A (en) * 2019-05-06 2019-08-30 珠海格力电器股份有限公司 A kind of video calling control method, device, terminal and storage medium
CN110207643B (en) * 2019-05-31 2021-02-19 闻泰通讯股份有限公司 Folding angle detection method and device, terminal and storage medium
CN110418095B (en) * 2019-06-28 2021-09-14 广东虚拟现实科技有限公司 Virtual scene processing method and device, electronic equipment and storage medium
CN110308793B (en) * 2019-07-04 2023-03-14 北京百度网讯科技有限公司 Augmented reality AR expression generation method and device and storage medium
CN110381282B (en) * 2019-07-30 2021-06-29 华为技术有限公司 Video call display method applied to electronic equipment and related device
CN110532054B (en) * 2019-08-30 2023-04-07 维沃移动通信有限公司 Background setting method and mobile terminal

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108111796A (en) * 2018-01-31 2018-06-01 努比亚技术有限公司 Protrusion display methods, terminal and storage medium based on flexible screen

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1328908C (en) * 2004-11-15 2007-07-25 北京中星微电子有限公司 A video communication method
KR101889838B1 (en) * 2011-02-10 2018-08-20 삼성전자주식회사 Portable device having touch screen display and method for controlling thereof
EP2615564A1 (en) * 2012-01-11 2013-07-17 LG Electronics Computing device for performing at least one function and method for controlling the same
CN103295510B (en) * 2012-03-05 2016-08-17 联想(北京)有限公司 A kind of method adjusting resolution and electronic equipment
CN103297742A (en) * 2012-02-27 2013-09-11 联想(北京)有限公司 Data processing method, microprocessor, communication terminal and server
CN103369288B (en) * 2012-03-29 2015-12-16 深圳市腾讯计算机系统有限公司 The instant communication method of video Network Based and system
CN103368816A (en) * 2012-03-29 2013-10-23 深圳市腾讯计算机系统有限公司 Instant communication method based on virtual character and system
CN103916621A (en) * 2013-01-06 2014-07-09 腾讯科技(深圳)有限公司 Method and device for video communication
CN103744477B (en) * 2013-12-25 2017-02-22 三星半导体(中国)研究开发有限公司 Separating type double screen terminal and control system thereof
CN106507021A (en) * 2015-09-07 2017-03-15 腾讯科技(深圳)有限公司 Method for processing video frequency and terminal device
CN106791571B (en) * 2017-01-09 2020-05-19 宇龙计算机通信科技(深圳)有限公司 Video display method and device for double-screen display terminal
CN106778706A (en) * 2017-02-08 2017-05-31 康梅 A kind of real-time mask video display method based on Expression Recognition




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant