CN110944139B - Display control method and electronic equipment


Info

Publication number: CN110944139B (also published as CN110944139A)
Application number: CN201911202444.2A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 李家裕, 杨其豪
Assignee (original and current): Vivo Mobile Communication Co Ltd
Legal status: Active (application granted)
Prior art keywords: electronic device, electronic equipment, user, target object, information

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/14: Systems for two-way working
    • H04N 7/141: Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/142: Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N 7/144: Camera and display on the same optical axis, e.g. optically multiplexing the camera and display for eye-to-eye contact

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention provide a display control method and an electronic device, relating to the field of communication technology, and aim to solve the problem that instructing another user to perform an operation through an existing electronic device is time-consuming and inefficient. The method includes: displaying a first video picture, where the first video picture is a video picture captured and sent by a second electronic device during a video call between the first electronic device and the second electronic device; and sending first information to the second electronic device, where the first information instructs the second electronic device to perform a target operation on a target object in the first video picture it displays. The target operation includes at least one of: displaying the target object in an enlarged manner, and displaying hand indication information of the first electronic device user superimposed on the target object, where the hand indication information indicates the real-time position and motion of the first electronic device user's hand. The method is applied to scenarios of remote guidance through an electronic device.

Description

Display control method and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a display control method and electronic equipment.
Background
With the rapid development of communication technology, users' demands on electronic devices keep growing. To meet these growing demands, electronic devices have become increasingly intelligent.
Currently, a user may instruct another user to perform some operation through an electronic device. Take the example of a user instructing another user to purchase goods: if user 1 wants user 2 to buy goods for him, the electronic device of user 1 (hereinafter the electronic device 1) can establish a call connection with the electronic device of user 2 (hereinafter the electronic device 2), and user 1 can verbally describe to user 2 the goods he wants to buy, so that user 2 can buy the goods for user 1 according to that verbal description. Alternatively, user 1 may trigger the electronic device 1 to send a shopping list to the electronic device 2 through chat software, and after the electronic device 2 receives the shopping list, user 2 purchases the goods with reference to user 1's shopping list.
However, when the goods information verbally described by user 1 or listed in the shopping list is not accurate and clear, user 2 may need to call or message user 1 again to clarify, which makes the purchasing process take a long time. Therefore, the process of a user instructing other users to perform operations through an electronic device is time-consuming and inefficient.
Disclosure of Invention
The embodiment of the invention provides a display control method and an electronic device, aiming to solve the problem that instructing other users to perform operations through an existing electronic device is time-consuming and inefficient.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a display control method applied to a first electronic device. The method includes: displaying a first video picture; and sending first information to a second electronic device. The first video picture is a video picture captured and sent by the second electronic device during a video call between the first electronic device and the second electronic device, and the first information instructs the second electronic device to perform a target operation on a target object in the first video picture displayed by the second electronic device. The target operation includes at least one of: displaying the target object in an enlarged manner, and displaying hand indication information of the first electronic device user superimposed on the target object, where the hand indication information indicates the real-time position and motion of the first electronic device user's hand.
In a second aspect, an embodiment of the present invention provides a first electronic device that includes a display module and an execution module. The display module is configured to display a first video picture; the execution module is configured to send first information to a second electronic device after the display module displays the first video picture. The first video picture is a video picture captured and sent by the second electronic device during a video call between the first electronic device and the second electronic device, and the first information instructs the second electronic device to perform a target operation on a target object in the first video picture displayed by the second electronic device. The target operation includes at least one of: displaying the target object in an enlarged manner, and displaying hand indication information of the first electronic device user superimposed on the target object, where the hand indication information indicates the real-time position and motion of the first electronic device user's hand.
In a third aspect, an embodiment of the present invention provides a first electronic device, where the first electronic device includes a processor, a memory, and a computer program stored in the memory and being executable on the processor, and the computer program, when executed by the processor, implements the steps of the display control method in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the display control method as in the first aspect described above.
In the embodiment of the present invention, the first electronic device may display a first video picture (a video picture captured and sent by the second electronic device during a video call between the first electronic device and the second electronic device), and send first information to the second electronic device (instructing it to perform a target operation on a target object in the first video picture it displays). The target operation includes at least one of: displaying the target object in an enlarged manner, and displaying hand indication information of the first electronic device user (indicating the real-time position and motion of that user's hand) superimposed on the target object. With this scheme, in a scenario of remote guidance through the first electronic device, enlarging the target object lets the second electronic device user (e.g., the guided party) view it clearly, and superimposing the hand indication information of the first electronic device user (e.g., the guiding party) on the target object points the target object out clearly. During remote guidance, the first electronic device can hold a video call with the second electronic device and, by sending the first information, instruct the second electronic device to perform the target operation on the target object. The target object can thus be clearly indicated to the second electronic device user, who can then understand, in real time and intuitively, what the first electronic device user is indicating. Therefore, the display control method provided by the embodiment of the invention can improve the efficiency of remote guidance through the first electronic device.
Drawings
Fig. 1 is a schematic architecture diagram of an Android operating system according to an embodiment of the present invention;
Fig. 2 is a first schematic diagram of a display control method according to an embodiment of the present invention;
Fig. 3 is a second schematic diagram of the display control method according to an embodiment of the present invention;
Fig. 4 is a first schematic interface diagram of an application of the display control method according to an embodiment of the present invention;
Fig. 5 is a third schematic diagram of the display control method according to an embodiment of the present invention;
Fig. 6 is a second schematic interface diagram of an application of the display control method according to an embodiment of the present invention;
Fig. 7 is a fourth schematic diagram of the display control method according to an embodiment of the present invention;
Fig. 8 is a third schematic interface diagram of an application of the display control method according to an embodiment of the present invention;
Fig. 9 is a fourth schematic interface diagram of an application of the display control method according to an embodiment of the present invention;
Fig. 10 is a fifth schematic diagram of the display control method according to an embodiment of the present invention;
Fig. 11 is a sixth schematic diagram of the display control method according to an embodiment of the present invention;
Fig. 12 is a seventh schematic diagram of the display control method according to an embodiment of the present invention;
Fig. 13 is a fifth schematic interface diagram of an application of the display control method according to an embodiment of the present invention;
Fig. 14 is a schematic diagram of acquiring a hand image of a user according to an embodiment of the present invention;
Fig. 15 is a sixth schematic interface diagram of an application of the display control method according to an embodiment of the present invention;
Fig. 16 is a first schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 17 is a second schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 18 is a third schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 19 is a hardware schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The term "and/or" herein is an association relationship describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. The symbol "/" herein denotes a relationship in which the associated object is or, for example, a/B denotes a or B.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first electronic device and the second electronic device, etc. are for distinguishing different electronic devices, and are not for describing a specific order of the electronic devices.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as preferred or more advantageous than other embodiments or designs. Rather, the words "exemplary" or "for example" are intended to present a concept in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of processing units means two or more processing units, and the like.
Embodiments of the present invention provide a display control method and an electronic device. A first electronic device may display a first video picture (a video picture captured and sent by a second electronic device during a video call between the first and second electronic devices), and send first information to the second electronic device (instructing it to perform a target operation on a target object in the first video picture it displays). The target operation includes at least one of: displaying the target object in an enlarged manner, and displaying hand indication information of the first electronic device user (indicating the real-time position and motion of that user's hand) superimposed on the target object. With this scheme, in a scenario of remote guidance through the first electronic device, enlarging the target object lets the second electronic device user (e.g., the guided party) view it clearly, and superimposing the hand indication information of the first electronic device user (e.g., the guiding party) on the target object points it out clearly. During remote guidance, the first electronic device can hold a video call with the second electronic device and, by sending the first information, instruct the second electronic device to perform the target operation on the target object, so the target object can be clearly indicated to the second electronic device user, who can then understand in real time and intuitively what the first electronic device user is indicating. Therefore, the display control method provided by the embodiments of the present invention can improve the efficiency of remote guidance through the first electronic device.
The electronic devices in the embodiments of the present invention (for example, the first electronic device and the second electronic device) may be electronic devices having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present invention.
The following describes a software environment to which the display control method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the display control method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the display control method may operate based on the android operating system shown in fig. 1. That is, the processor or the electronic device may implement the display control method provided by the embodiment of the present invention by running the software program in the android operating system.
The electronic equipment in the embodiment of the invention can be a mobile terminal or a non-mobile terminal. For example, the mobile terminal may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile terminal may be a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiment of the present invention is not particularly limited.
The execution body of the display control method provided in the embodiment of the present invention may be the first electronic device, or a functional module and/or functional entity in the first electronic device capable of implementing the display control method; this may be determined according to actual use requirements, and the embodiment of the present invention is not limited thereto. The following exemplarily describes the display control method provided by the embodiment of the present invention by taking the first electronic device as the execution body.
In the embodiment of the present invention, when the first electronic device user remotely guides the second electronic device user through the first electronic device, the first electronic device can establish a video call with the second electronic device. During the video call, the first electronic device may display on its screen a video picture captured and sent by the second electronic device (e.g., the first video picture in the embodiment of the present invention). By sending a piece of information (e.g., the first information in the embodiment of the present invention) to the second electronic device, the first electronic device may indicate or emphasize a certain object (e.g., the target object) in the first video picture to the second electronic device user; that is, it may instruct the second electronic device to perform a specific operation (e.g., the target operation) on that object in the first video picture that the second electronic device captures and displays. In this way, during the video call, the first electronic device can control the second electronic device to operate on an object in the video picture the second electronic device displays, so the object can be clearly indicated or emphasized to the second electronic device user, who can then understand in real time and intuitively what the first electronic device user is indicating. Therefore, the display control method provided by the embodiment of the invention can improve the efficiency of remote guidance through the first electronic device.
It can be understood that, in the embodiment of the present invention, the first electronic device user may be the guiding party (the instructor), and the second electronic device user may be the guided party.
The following describes an exemplary display control method according to an embodiment of the present invention with reference to the drawings.
As shown in fig. 2, an embodiment of the invention provides a display control method. The method is applied to a first electronic device and comprises the following steps 201 to 203.
Step 201, the first electronic device displays a first video image.
The first video picture can be a video picture acquired and sent by the second electronic device in the video call process of the first electronic device and the second electronic device.
In the embodiment of the present invention, when a user of a first electronic device wants to remotely guide a user of a second electronic device through the first electronic device, the user of the first electronic device may trigger the first electronic device to perform a video call with the second electronic device, and then the second electronic device may collect a video image (i.e., the first video image) and send the first video image to the first electronic device. In this way, after the first electronic device receives the first video frame, the first electronic device may display the first video frame.
Optionally, in the embodiment of the present invention, the first video picture may be a video picture acquired by a rear camera of the second electronic device, and may also be a video picture acquired by a front camera of the second electronic device. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
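Purely for illustration, the following Kotlin sketch models step 201: the first electronic device receives frames captured by the second electronic device during the video call and renders them as the first video picture. All names (VideoFrame, FrameRenderer, VideoCallSession) are hypothetical assumptions and are not taken from the patent or from any particular SDK.

```kotlin
// Hypothetical sketch: the first electronic device consumes frames captured by
// the second electronic device and displays them as the "first video picture".

class VideoFrame(val width: Int, val height: Int, val pixels: ByteArray)

// Renders a frame on screen; a real implementation would wrap a platform view.
fun interface FrameRenderer {
    fun render(frame: VideoFrame)
}

class VideoCallSession(private val remoteRenderer: FrameRenderer) {
    // Called whenever a frame captured by the second electronic device arrives.
    fun onRemoteFrameReceived(frame: VideoFrame) {
        remoteRenderer.render(frame) // step 201: display the first video picture
    }
}

fun main() {
    val session = VideoCallSession { frame ->
        println("displaying remote frame ${frame.width}x${frame.height}")
    }
    session.onRemoteFrameReceived(VideoFrame(1280, 720, ByteArray(0)))
}
```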
In the embodiment of the present invention, in the process of video call between the first electronic device and the second electronic device, the first electronic device may further display a video picture (hereinafter, may be referred to as a second video picture) collected by the first electronic device.
In the embodiment of the present invention, the video call interface of the first electronic device may include two areas, which are a first area and a second area, respectively. The first area may be a main picture area of the video call interface, and the second area may be an auxiliary picture area of the video call interface.
Optionally, in this embodiment of the present invention, the first electronic device may display a first video image in the first area, and display a second video image in the second area; the second video screen may be displayed in the first area and the first video screen may be displayed in the second area. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
Optionally, in the embodiment of the present invention, in a video call process between the first electronic device and the second electronic device, a user of the first electronic device may trigger the first electronic device to switch an area displaying the first video image and the second video image through an input to the first electronic device.
For example, in a case where the first electronic device displays the first video screen in the first area and displays the second video screen in the second area, the user of the first electronic device may trigger the first electronic device to display the second video screen in the first area and display the first video screen in the second area through an input to the first electronic device (e.g., a click input to the second area, etc.).
It should be noted that the embodiments of the present invention are described by taking as an example the case where the first electronic device displays the first video picture in the first area and the second video picture in the second area. The implementation in which the first electronic device displays the second video picture in the first area and the first video picture in the second area is similar, and is not repeated here to avoid redundancy.
In addition, the first video picture may be a video picture acquired by the second electronic device in real time, and the second video picture may be a video picture acquired by the first electronic device in real time.
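As a hedged illustration of the area switching described above, the following Kotlin sketch swaps which video picture occupies the main (first) area and the auxiliary (second) area when the user taps the auxiliary area; the class and method names are assumptions, not part of the patent.

```kotlin
// Hypothetical sketch of swapping the main and auxiliary picture areas on input.

enum class Picture { FIRST_VIDEO, SECOND_VIDEO }

class CallLayout {
    var mainArea: Picture = Picture.FIRST_VIDEO       // first area (main picture)
        private set
    var auxiliaryArea: Picture = Picture.SECOND_VIDEO // second area (auxiliary picture)
        private set

    // Triggered by the user's input on the auxiliary area (e.g. a click).
    fun onAuxiliaryAreaClicked() {
        val previousMain = mainArea
        mainArea = auxiliaryArea
        auxiliaryArea = previousMain
    }
}

fun main() {
    val layout = CallLayout()
    layout.onAuxiliaryAreaClicked()
    println("main=${layout.mainArea}, auxiliary=${layout.auxiliaryArea}")
    // prints: main=SECOND_VIDEO, auxiliary=FIRST_VIDEO
}
```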
Step 202, the first electronic device sends the first information to the second electronic device.
And step 203, the second electronic device executes target operation on the target object in the first video picture displayed by the second electronic device.
The first information may be used to instruct the second electronic device to perform a target operation on a target object in the first video picture displayed by the second electronic device, where the target operation may include at least one of: displaying the target object in an enlarged manner, and displaying hand indication information of the first electronic device user superimposed on the target object.
In the embodiment of the present invention, after the first electronic device displays the first video picture, the first electronic device may send the first information to the second electronic device, so that after the second electronic device receives the first information, the second electronic device may perform the target operation on the first video picture displayed by the second electronic device, so as to clearly indicate the target object to a user of the second electronic device, and further, improve the remote guidance efficiency of the first electronic device.
In an embodiment of the present invention, the target object may be an image in the first video frame.
Optionally, in a case that the target operation includes displaying hand indication information of the first electronic device user on the target object in an overlapping manner, the target object may be determined according to a current position of the hand of the first electronic device user. Specifically, the first electronic device may map a hand of the first electronic device user into a first video frame displayed by the first electronic device, and an image corresponding to a position to which the hand of the first electronic device user is mapped may be the target object.
Optionally, in this embodiment of the present invention, the hand instruction information of the first electronic device user may be any possible information, such as a hand image of the first electronic device user acquired by the first electronic device in real time, or a hand image of the first electronic device user stored in the first electronic device. The method and the device can be determined according to actual use requirements, and the embodiment of the invention is not limited.
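The mapping described above can be pictured with the following Kotlin sketch, which is only an assumed illustration: the normalized position of the first electronic device user's hand is projected onto the displayed first video picture, and the displayed object whose bounding box contains that point is taken as the target object. None of these types come from the patent.

```kotlin
// Hypothetical sketch: pick the target object as the displayed object whose
// bounding box contains the mapped hand position (all coordinates in 0.0..1.0).

data class NormalizedPoint(val x: Float, val y: Float)

data class BoundingBox(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    fun contains(p: NormalizedPoint) = p.x in left..right && p.y in top..bottom
}

data class DisplayedObject(val label: String, val box: BoundingBox)

fun findTargetObject(handPosition: NormalizedPoint, objects: List<DisplayedObject>): DisplayedObject? =
    objects.firstOrNull { it.box.contains(handPosition) }

fun main() {
    val objects = listOf(
        DisplayedObject("milk", BoundingBox(0.1f, 0.2f, 0.4f, 0.6f)),
        DisplayedObject("bread", BoundingBox(0.5f, 0.2f, 0.8f, 0.6f)),
    )
    println(findTargetObject(NormalizedPoint(0.25f, 0.4f), objects)?.label) // milk
}
```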
The embodiment of the invention provides a display control method that can be applied to a scenario of remote guidance through the first electronic device. Because the target object is displayed in an enlarged manner, the second electronic device user (e.g., the guided party) can view it clearly; and because the hand indication information of the first electronic device user (e.g., the guiding party) is displayed superimposed on the target object, the target object can be clearly pointed out to the second electronic device user. During remote guidance, the first electronic device can hold a video call with the second electronic device and, by sending the first information, instruct the second electronic device to perform the target operation on the target object. The target object can thus be clearly indicated or emphasized to the second electronic device user (i.e., the guided party), who can then understand in real time and intuitively what the first electronic device user is indicating. Therefore, the display control method provided by the embodiment of the invention can improve the efficiency of remote guidance through the first electronic device.
Optionally, in this embodiment of the present invention, after the first electronic device displays the first video picture, the first electronic device may first determine whether the first electronic device meets a preset condition, and if the first electronic device meets the preset condition, the first electronic device may perform the following step 202 a; if the first electronic device does not meet the preset condition, the first electronic device can keep displaying the first video picture.
Illustratively, in conjunction with fig. 2, as shown in fig. 3, the step 202 may be specifically implemented by the step 202a described below.
Step 202a, under the condition that a preset condition is met, the first electronic device sends first information to the second electronic device.
Wherein the preset condition may include at least one of the following: the first electronic equipment receives a first input of a user aiming at a target object, the first electronic equipment acquires hand indication information of the user of the first electronic equipment, and the first electronic equipment acquires voice information associated with the target object.
It should be noted that, in actual implementation, the preset condition may also be any other possible condition, which may be determined according to actual usage requirements, and the embodiment of the present invention is not limited.
Optionally, in this embodiment of the present invention, the first input may be a two-finger spread input of the user on the target object, or any other possible input such as a single-click or double-click input of the user on the target object. This may be determined according to actual use requirements, and the embodiment of the present invention is not limited thereto.
In an embodiment of the present invention, in a case that the preset condition is that the first electronic device receives the first input, the target object may be determined according to an input position of the first input. It is understood that the portion of the first video frame corresponding to the first input is the target object.
Optionally, in this embodiment of the present invention, the first information may include at least one of the following: the input information of the first input, the real-time position and the hand motion of the hand of the first electronic equipment user and the voice information related to the real scene object corresponding to the target object.
The real-world object corresponding to the target object may be a real object indicated by the target object. For example, the target object may be a "milk" image in the first video frame, and the real-world object corresponding to the target object may be real-world milk corresponding to the "milk" image.
Optionally, in this embodiment of the present invention, the input information of the first input may include any possible information, such as an input position of the first input and an input type of the first input. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
It is to be understood that, in the embodiment of the present invention, the second electronic device may determine the target object according to the input position of the first input, and determine an operation (for example, enlarge and display the target object) performed on the target object according to the input type of the first input.
Optionally, in this embodiment of the present invention, when the target operation includes displaying hand indication information of the first electronic device user in an overlapping manner on the target object, the second electronic device may determine the target object according to a real-time position of the hand of the first electronic device user, and determine the hand indication information of the first electronic device user according to a hand motion of the first electronic device user. Specifically, the hand indication information of the first electronic device user displayed by the second electronic device may be real-time hand movements of the first electronic device user collected by the first electronic device. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
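A minimal Kotlin sketch of step 202a and of the possible contents of the first information follows. All field names, the presetConditionMet() helper, and the send callback are illustrative assumptions rather than the patent's actual message format; the point is simply that the first information is sent only when at least one preset condition holds.

```kotlin
// Hypothetical sketch of the "first information" payload and the preset-condition
// check of step 202a.

data class FirstInformation(
    val firstInputPosition: Pair<Float, Float>? = null, // input information of the first input
    val firstInputType: String? = null,                 // e.g. "click", "two-finger spread"
    val handPosition: Pair<Float, Float>? = null,       // real-time position of the user's hand
    val handMotion: String? = null,                     // real-time hand motion
    val voiceInfo: String? = null                       // voice info tied to the target object
)

// Preset condition: a first input was received, hand indication information was
// collected, or voice information associated with the target object was collected.
fun presetConditionMet(info: FirstInformation): Boolean =
    info.firstInputPosition != null || info.handPosition != null || info.voiceInfo != null

fun maybeSendFirstInformation(info: FirstInformation, send: (FirstInformation) -> Unit) {
    if (presetConditionMet(info)) send(info) // step 202a: send to the second electronic device
}

fun main() {
    val info = FirstInformation(handPosition = 0.25f to 0.4f, handMotion = "pointing")
    maybeSendFirstInformation(info) { println("sending first information: $it") }
}
```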
The display control method provided by the embodiment of the invention can be applied to a scenario of remote guidance through the first electronic device. The input information of the first input, the real-time position and motion of the first electronic device user's hand, and the voice information associated with the real-scene object corresponding to the target object all indicate that the first electronic device user wants to trigger, through the first electronic device, the target operation performed by the second electronic device on the target object. Therefore, when the preset condition is met, the first electronic device sends the first information to the second electronic device to instruct it to perform the target operation on the target object, so the first electronic device user can actively guide the second electronic device user through the first electronic device, which further improves the remote guidance capability of the first electronic device.
In the embodiment of the present invention, the video call interface of the second electronic device may include two areas, area A and area B. Area A may be the main picture area of the video call interface, and area B may be the auxiliary picture area.
Optionally, the second electronic device may display the first video picture in area A and the second video picture (the video picture captured by the first electronic device and sent to the second electronic device) in area B; or it may display the second video picture in area A and the first video picture in area B. This may be determined according to actual use requirements, and the embodiment of the present invention is not limited thereto.
It can be understood that, when the first electronic device displays the first video picture in the first area and the second video picture in the second area, if the second electronic device displays the first video picture in area A and the second video picture in area B, the form in which the second electronic device displays the video pictures may be the same as the form in which the first electronic device displays them.
It should be noted that the embodiments of the present invention are described by taking as an example the case where the second electronic device displays the first video picture in area A and the second video picture in area B. The implementation in which the second electronic device displays the second video picture in area A and the first video picture in area B is similar, and is not repeated here to avoid redundancy.
The steps 201, 202a and 203 are described below with reference to fig. 4.
For example, consider a scenario in which the first electronic device user makes a video call with the second electronic device user through the first electronic device to have the second electronic device user purchase goods. Assume the first video picture includes images of multiple goods, the target object is the image of "milk", the target operation is displaying the hand indication information of the first electronic device user superimposed on the target object, and the preset condition is that the first electronic device collects the hand indication information of the first electronic device user. If the first electronic device user wants the second electronic device user to buy milk, the first electronic device user may point at the image of "milk" with a hand. As shown in (a) of Fig. 4, when the first electronic device collects the hand indication information 31 of the first electronic device user, the first electronic device may send first information to the second electronic device, where the first information may include the real-time position and motion of the first electronic device user's hand. After the second electronic device receives the first information, as shown in (b) of Fig. 4, the second electronic device may display the hand indication information of the first electronic device user superimposed on the image of "milk" (i.e., the target object) in the first video picture displayed in area A 32 of its video call interface. In this way, through the hand indication information, the first electronic device can clearly indicate to the second electronic device user that the first electronic device user wants "milk" to be purchased, which reduces the time spent on giving the purchasing instruction through the first electronic device and improves the efficiency of remote guidance through the first electronic device.
The display control method provided by the embodiment of the invention can be applied to a scenario of remote guidance through the first electronic device. Since the first input made by the first electronic device user on the image of the target object, the collected hand indication information of the first electronic device user, and the collected voice information associated with the target object are all active behaviors of the first electronic device user, the first electronic device can determine, when the preset condition is met, that the first electronic device user wants to indicate or emphasize the target object to the second electronic device user, and can then send the first information to the second electronic device to instruct it to perform the corresponding operation on the target object. The first electronic device user can thus guide the second electronic device user accurately and in real time through the first electronic device, which improves the efficiency of remote guidance through the first electronic device.
Optionally, in this embodiment of the present invention, after the first electronic device displays the first video image, the first electronic device may further perform the target operation on the first video image displayed by the first electronic device. Therefore, the first video picture displayed by the first electronic device and the first video picture displayed by the second electronic device can be synchronized, that is, the contents displayed by the first electronic device and the second electronic device are synchronized in real time in the video communication process of the first electronic device and the second electronic device.
For example, referring to fig. 2, as shown in fig. 5, after step 201, the display control method according to the embodiment of the present invention may further include step 204.
It should be noted that the execution order of steps 202-203 and step 204 is not limited in the embodiment of the present invention. Steps 202-203 may be performed first and then step 204; or step 204 may be performed first and then steps 202-203; or steps 202-203 and step 204 may be performed simultaneously. The embodiment of the present invention is described by taking as an example performing steps 202-203 first and then step 204.
And step 204, the first electronic equipment executes target operation on the target object in the first video picture displayed by the first electronic equipment.
In the embodiment of the present invention, after the first electronic device displays the first video picture, the first electronic device may perform the above-mentioned target operation on the target object in the first video picture displayed by the first electronic device, so that the content displayed by the first electronic device and the content displayed by the second electronic device are synchronized.
It should be noted that, in the embodiment of the present invention, the target operation performed on the target object by the first electronic device may be the same as the target operation performed on the target object by the second electronic device.
Optionally, in this embodiment of the present invention, the first electronic device may perform the target operation on a target object in the first video image displayed by the first electronic device, when the preset condition is satisfied.
It should be noted that, in the embodiment of the present invention, for the relevant descriptions of the preset condition, the target object, and the target operation, reference may be specifically made to the detailed descriptions of the preset condition, the target object, and the target operation in the foregoing embodiment, and in order to avoid repetition, details are not described here again.
For example, it is assumed that the preset condition is that the first electronic device receives the first input, the first input is a click input of a user of the first electronic device to a target object in the first video frame, and the target operation is to enlarge and display the target object in the first video frame. Then, as shown in fig. 6 (a), when the first electronic device user clicks an image of "milk" (i.e., a target object) in the first video screen, the first electronic device may display the image of "milk" in an enlarged manner as shown in fig. 6 (b) in response to the input; and the first electronic device may transmit the first information (including the input information of the first input) to the second electronic device, and in the case where the second electronic device receives the first information, as shown in (c) of fig. 6, the second electronic device may also enlarge and display the image of "milk", so that the content displayed by the first electronic device may be synchronized with the content displayed by the second electronic device.
It should be noted that, in the embodiment of the present invention, after the first electronic device displays the target object in an enlarged manner (i.e., performs the target operation on the target object), the user may trigger the first electronic device to display the target object in a reduced manner through another input (hereinafter referred to as a third input), so that the first electronic device may display each object in the first video screen in a standard size, that is, the first electronic device displays the first video screen in the standard size. Accordingly, after the first electronic device receives the third input, the first electronic device may send an indication message to the second electronic device to instruct the second electronic device to zoom out the display target object, so that the second electronic device may display the first video picture in a standard size. As such, the content displayed by the first electronic device may be further synchronized with the content displayed by the second electronic device.
Optionally, in this embodiment of the present invention, the third input may be an input of a double-finger pinch of the first electronic device user with respect to the target object, or may also be any possible input such as a single-click input or a double-click input of the first electronic device user with respect to the target object. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
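The synchronization idea of step 204 (and the third-input zoom-out mentioned above) might be sketched as follows in Kotlin; the TargetOperation variants and the controller are assumptions made for illustration, not the patent's actual design: every operation is applied to the locally displayed first video picture and mirrored to the second electronic device.

```kotlin
// Hypothetical sketch: apply each target operation locally and mirror it to the
// second electronic device so both displays stay synchronized.

sealed interface TargetOperation {
    data class Enlarge(val targetLabel: String) : TargetOperation
    data class OverlayHand(val targetLabel: String, val handMotion: String) : TargetOperation
    data class RestoreStandardSize(val targetLabel: String) : TargetOperation // after the third input
}

class FirstDeviceController(
    private val applyLocally: (TargetOperation) -> Unit,
    private val sendToSecondDevice: (TargetOperation) -> Unit
) {
    fun perform(op: TargetOperation) {
        applyLocally(op)        // step 204: operate on the locally displayed first video picture
        sendToSecondDevice(op)  // steps 202-203: instruct the second device to do the same
    }
}

fun main() {
    val controller = FirstDeviceController(
        applyLocally = { println("local display: $it") },
        sendToSecondDevice = { println("send to peer: $it") }
    )
    controller.perform(TargetOperation.Enlarge("milk"))             // triggered by the first input
    controller.perform(TargetOperation.RestoreStandardSize("milk")) // triggered by the third input
}
```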
The display control method provided by the embodiment of the invention can be applied to a scenario of remote guidance through the first electronic device. Because the first electronic device performs the target operation on the target object in the first video picture it displays after displaying that picture, the same operation is performed on the first video picture displayed by the first electronic device and on the first video picture displayed by the second electronic device. The content displayed by the two devices is therefore the same, so their displays remain synchronized during remote guidance through the first electronic device.
Optionally, in this embodiment of the present invention, when the target operation is displaying the image of the target object in the first video picture of the second electronic device in an enlarged manner, after the first electronic device sends the first information, the first electronic device user may want to further emphasize the target object to the second electronic device user. In that case, the first electronic device user may, through an input on the image of the target object (e.g., the second input), trigger the first electronic device to send information (e.g., the second information in the embodiment of the present invention) to the second electronic device to instruct it to display an identifier. The target object can thus be indicated to the second electronic device user intuitively and clearly, which further improves the efficiency of remote guidance through the first electronic device.
Illustratively, in conjunction with fig. 2, as shown in fig. 7, after step 203, the display control method provided in the embodiment of the present invention may further include steps 205 to 207, which are described below. Specifically, the step 203 may be implemented by the step 203b described below.
And step 203b, the second electronic device displays the target object in the first video picture displayed by the second electronic device in an enlarged mode.
Step 205, the first electronic device receives a second input of the first electronic device user to the target object.
Step 206, the first electronic device sends second information to the second electronic device in response to the second input.
The second information may be used to instruct the second electronic device to display a first identifier, where the first identifier may be used to select the target object; the second information may include the input information of the second input and a display mode of the first identifier.
Of course, in actual implementation, the second information may further include any other possible information, which may be determined according to actual usage requirements, and the embodiment of the present invention is not limited.
And step 207, displaying the first identifier by the second electronic equipment.
In this embodiment of the present invention, after the first electronic device sends the first information to the second electronic device, the first electronic device user may trigger the first electronic device to send the second information to the second electronic device through the second input, so as to instruct the second electronic device to display the first identifier, and further emphasize the target object to the second electronic device user through the first identifier.
Optionally, in this embodiment of the present invention, the second input may be any possible input, such as a single-click input, a double-click input, a long-press input, or a sliding input of the first electronic device user on the target object, which may be determined specifically according to an actual use requirement, and this embodiment of the present invention is not limited.
The display control method provided by the embodiment of the invention can be applied to a scenario of remote guidance through the first electronic device. Since the second input indicates that the first electronic device user intends to emphasize the target object to the second electronic device user, after receiving the second input the first electronic device can send the second information to the second electronic device, instructing it to select the target object by displaying the first identifier. The target object can thus be indicated to the second electronic device user clearly and intuitively, which further improves the efficiency of remote guidance through the first electronic device.
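As a hedged sketch of step 206, the following Kotlin code builds a hypothetical second-information message from the second input, carrying the input information together with a display mode for the first identifier; the types, field names, and the click-versus-trajectory rule are assumptions for illustration only.

```kotlin
// Hypothetical sketch: turn the second input into "second information" that carries
// the input information plus the display mode of the first identifier.

enum class IdentifierDisplayMode { PRESET_IDENTIFIER, INPUT_TRAJECTORY }

data class SecondInput(val type: String, val trajectory: List<Pair<Float, Float>>)

data class SecondInformation(
    val inputType: String,
    val trajectory: List<Pair<Float, Float>>,
    val displayMode: IdentifierDisplayMode
)

fun buildSecondInformation(input: SecondInput): SecondInformation {
    // A single click is answered with a preset identifier; a drawn trajectory
    // (e.g. circling the target object) is replayed as the identifier itself.
    val mode = if (input.trajectory.size > 1) IdentifierDisplayMode.INPUT_TRAJECTORY
               else IdentifierDisplayMode.PRESET_IDENTIFIER
    return SecondInformation(input.type, input.trajectory, mode)
}

fun main() {
    val click = SecondInput("click", listOf(0.3f to 0.4f))
    println(buildSecondInformation(click))
}
```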
Optionally, in this embodiment of the present invention, the display mode of the first identifier may include multiple display modes. Wherein, the display form of the first mark displayed in different display modes can be different.
Optionally, in this embodiment of the present invention, the step 207 may be specifically implemented by a step 207a or a step 207b described below.
And step 207a, the second electronic device displays the first identifier in a first mode.
The first mode may be a mode for displaying a preset identifier, that is, the first identifier may be the preset identifier.
In this embodiment of the present invention, after the second electronic device receives the second information sent by the first electronic device, the second electronic device may display the preset identifier (i.e., the first identifier) according to the second information, so that the target object may be emphasized by the preset identifier.
Optionally, in the embodiment of the present invention, the preset identifier may be a dot identifier, an arrow identifier, a check-mark identifier, or any other identifier that can indicate the image of the target object. This may be determined according to actual use requirements, and the embodiment of the present invention is not limited thereto.
Optionally, in this embodiment of the present invention, the preset identifier may be displayed on the target object, or may be displayed around the target object. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
For example, it is assumed that the target object is an image of "milk" as shown in fig. 8, the second input is a single-click input of the user of the first electronic device on the target object, and the preset identifier is a dot identifier and is displayed on the target object. Then, as shown in fig. 8 (a), when the user of the first electronic device clicks the image 41 of "milk" (i.e., the target object), the first electronic device may transmit second information to the second electronic device in response to the input, and the second information may include an input position of the second input on the target object, an input type of the second input as a click input (i.e., input information of the second input), and a display mode in which a preset identifier (i.e., the first identifier) is displayed. As such, after the second electronic device receives the second information, the second electronic device may display the dot identifier 42 as shown in (b) of fig. 8, so that the target object may be emphasized to the second electronic device user through the dot identifier 42.
And step 207b, the second electronic device displays the first identifier in a second mode.
The second mode may be a mode in which a mark is displayed according to an input trajectory.
In this embodiment of the present invention, after the second electronic device receives the second information, the second electronic device may display an identifier (i.e., the first identifier) according to the second information and the second input track in the second information, so that the target object may be emphasized by the identifier.
In the embodiment of the present invention, the identifier displayed according to the input track of the second input may be any possible identifier, such as a line following the input track of the second input. This may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
For example, it is assumed that the target object is the image of "milk" shown in fig. 9, the second input is an input in which the user of the first electronic device draws a circle around the target object, and the first identifier is a line following the input track of the second input. Then, as shown in (a) of fig. 9, when the user of the first electronic device circles the image 41 of "milk" (i.e., the target object), the first electronic device may transmit second information to the second electronic device in response to the input. The second information may include the input position of the second input (around the target object) and the input type of the second input (circling around the target object), i.e., the input information of the second input, and an indication that the identifier is displayed according to the input track of the second input (i.e., the display mode of the first identifier). As such, after the second electronic device receives the second information, the second electronic device may display a circle mark 43 as shown in (b) of fig. 9 to circle the target object, so that the target object may be emphasized to the second electronic device user through the circle mark 43.
The black dot mark shown in fig. 9 (b) may be a start position of the second input.
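Continuing the illustrative sketch above (again with assumed names and fields), the second mode would rebuild the identifier from the input track of the second input carried in the second information:

    # Hypothetical sketch: drawing the identifier along the input track of the second
    # input (second mode). The first track point plays the role of the start position
    # (the black dot in fig. 9 (b)).
    def render_trajectory_identifier(track_points, overlay):
        """Append a polyline identifier following the circling gesture of the user."""
        if len(track_points) >= 2:
            overlay.append({"shape": "polyline",
                            "points": list(track_points),
                            "start_marker": track_points[0]})

    overlay = []
    circle_track = [(400, 280), (460, 300), (470, 360), (410, 380), (380, 330), (400, 280)]
    render_trajectory_identifier(circle_track, overlay)
    print(overlay[0]["start_marker"])  # (400, 280), drawn as the black start dot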
It should be noted that, in the embodiment of the present invention, the display mode of the first identifier is only exemplified by the first mode and the second mode, which does not limit the present application in any way. In practical implementation, the display mode of the first identifier may also be any other possible display mode, which may be determined according to practical use requirements, and the embodiment of the present invention is not limited.
The display control method provided by the embodiment of the invention can be applied to a scene of remote guidance of the first electronic device, and because the forms of the first identifier displayed in different modes (the first mode or the second mode) are different, the form of the first identifier displayed in the second electronic device can be more flexible, so that the experience of a user can be improved.
Optionally, in this embodiment of the present invention, in a case that the second electronic device displays the first identifier in one mode (for example, the first target mode in this embodiment of the present invention), if the user of the first electronic device wants to trigger the second electronic device to switch the mode in which the first identifier is displayed, the user of the first electronic device may trigger the first electronic device, through an input (for example, the fourth input in this embodiment of the present invention), to send a message (for example, the third information in this embodiment of the present invention) to the second electronic device to instruct the second electronic device to switch the mode in which the first identifier is displayed. Therefore, the first electronic device user can control, through the first electronic device and according to the use requirement of the first electronic device user, the mode in which the second electronic device displays the first identifier, so that the flexibility of remote guidance of the first electronic device can be improved.
Illustratively, in conjunction with fig. 7, as shown in fig. 10, after step 207, the display control method provided by the embodiment of the present invention may further include steps 208 to 210 described below. Specifically, step 207 may be implemented by step 207c described below.
And step 207c, the second electronic device displays the first identifier in the first target mode.
Step 208, the first electronic device receives a fourth input from the user of the first electronic device.
Step 209, the first electronic device sends third information to the second electronic device in response to the fourth input.
The third information is used for indicating the second electronic device to display the first identifier in a second target mode.
Step 210, the second electronic device switches the display mode of the first identifier from the first target mode to the second target mode.
Wherein the first target mode may be the first mode, and the second target mode may be the second mode; alternatively, the first target mode may be a second mode, and the second target mode may be the first mode.
In this embodiment of the present invention, if the user of the first electronic device wants to trigger the second electronic device to switch the display mode of the first identifier through the first electronic device when the second electronic device displays the first identifier in the first target mode, the user of the first electronic device may trigger the first electronic device to send the third information to the second electronic device through the fourth input. In this way, after the second electronic device receives the third information, the second electronic device may switch the display mode of the first identifier from the first target mode to the second target mode, so that the user of the first electronic device may remotely control the display mode of the second electronic device to display the first identifier through the first electronic device, thereby further improving flexibility of remote guidance of the first electronic device.
Optionally, in the embodiment of the present invention, the fourth input may be any possible input, such as a double-click input, a triple-click input, a long-press input, or a double-press input of the user on the target object, which may be determined specifically according to an actual use requirement, and the embodiment of the present invention is not limited.
In this embodiment of the present invention, since the first electronic device user may trigger the first electronic device to send the third information to the second electronic device through the fourth input, so as to instruct the second electronic device to switch the display mode of the first identifier, for example, to switch from the first mode to the second mode, or to switch from the second mode to the first mode, the first electronic device may control the display mode of the second electronic device for displaying the first identifier by sending the third information to the second electronic device, that is, the first electronic device user may control the display mode of the second electronic device for displaying the first identifier through the first electronic device according to his/her will, so as to improve flexibility of remote guidance of the first electronic device.
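As an illustrative sketch only (the class and method names are assumptions, not part of the patent), the switching between the first mode and the second mode on receipt of the third information could be reduced to a simple state change on the second electronic device:

    # Hypothetical sketch: the second electronic device keeps the current display mode of
    # the first identifier and switches it when third information arrives (e.g. after a
    # double-click fourth input on the first electronic device).
    class IdentifierRenderer:
        MODES = ("first", "second")  # preset identifier vs. identifier along the input track

        def __init__(self, mode: str = "first"):
            self.mode = mode

        def on_third_information(self, requested_mode: str) -> None:
            # Switch from the first target mode to the second target mode (or back).
            if requested_mode in self.MODES and requested_mode != self.mode:
                self.mode = requested_mode

    renderer = IdentifierRenderer(mode="first")
    renderer.on_third_information("second")
    print(renderer.mode)  # "second": the identifier is now drawn along the input track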
Optionally, in this embodiment of the present invention, after the first electronic device receives the second input, the first electronic device may further display the first identifier in response to the second input. Therefore, the first electronic device can synchronize the content of the video call interface of the second electronic device, and therefore the user of the first electronic device can acquire the content in the video call between the first electronic device and the second electronic device in real time.
Illustratively, in conjunction with fig. 7, as shown in fig. 11, after step 207, the display control method provided in the embodiment of the present invention may further include step 211.
It should be noted that the execution sequence between step 206 to step 207 and step 211 may not be limited by the embodiment of the present invention. Step 206-step 207 may be performed first, followed by step 211; step 211 can be executed first, and then step 206-step 207 can be executed; step 206-step 207 and step 211 may also be performed simultaneously. The embodiment of the present invention is exemplarily illustrated by first performing step 206 to step 207 and then performing step 211.
Step 211, the first electronic device responds to the second input and displays the first identifier.
In the embodiment of the present invention, after the first electronic device receives the second input, the first electronic device may also display the first identifier.
It should be noted that, in the embodiment of the present invention, the display mode of the first identifier displayed by the first electronic device may be the same as the display mode of the first identifier displayed by the second electronic device; the display position of the first identifier displayed by the first electronic device may also be the same as the display position of the first identifier displayed by the second electronic device. This may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
In the embodiment of the present invention, for the description related to displaying the first identifier by the first electronic device, reference may be specifically made to the detailed description of displaying the first identifier by the second electronic device in the foregoing embodiment, and details are not repeated here to avoid repetition.
The display control method provided by the embodiment of the invention can be applied to a scene remotely guided by first electronic equipment, and after the first electronic equipment receives the second input, the first electronic equipment displays the first identifier, so that the content displayed by the first electronic equipment is the same as the content displayed by the second electronic equipment, namely the content displayed by the first electronic equipment and the content displayed by the second electronic equipment are synchronous in real time, therefore, in the process of remotely guiding by the first electronic equipment, a user of the first electronic equipment can see the same content as a user of the second electronic equipment, and the user of the first electronic equipment can conveniently remotely guide the second electronic equipment by the user of the first electronic equipment. Thus, the remote guidance efficiency of the first electronic device can be further improved.
Optionally, in this embodiment of the present invention, after the second electronic device and/or the first electronic device displays the first identifier, when the user of the first electronic device wants to trigger the second electronic device to cancel displaying the first identifier through the first electronic device, the user of the first electronic device may trigger the first electronic device to send a message to the second electronic device through an input (for example, a fifth input in this embodiment of the present invention), so as to instruct the second electronic device to cancel displaying the first identifier.
Illustratively, in conjunction with fig. 7, as shown in fig. 12, after step 207, the display control method provided by the embodiment of the present invention may further include steps 212 to 214 described below.
Step 212, the first electronic device receives a fifth input from the user.
Step 213, the first electronic device sends fourth information to the second electronic device in response to the fifth input.
The fourth information may be used to instruct the second electronic device to cancel displaying the first identifier.
And step 214, the second electronic device cancels the display of the first identifier.
In this embodiment of the present invention, after the second electronic device displays the first identifier, the user of the first electronic device may trigger the first electronic device to send the fourth information to the second electronic device through the fifth input, so that after the second electronic device receives the fourth information, the second electronic device may cancel displaying the first identifier, and thus the first electronic device may control the second electronic device to cancel displaying the first identifier through the fourth information.
Optionally, in this embodiment of the present invention, after the second electronic device cancels the display of the first identifier, the second electronic device may continue to maintain the target operation on the target object in the first video picture, or may directly display the first video picture (i.e., cancel the target operation on the target object in the first video picture). This may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
Optionally, in an embodiment of the present invention, the fourth information may include input information of the fifth input, for example, information of an input type and an input position of the fifth input.
Of course, in actual implementation, the fourth information may further include any other possible information, which may be determined according to actual usage requirements, and the embodiment of the present invention is not limited.
Optionally, in this embodiment of the present invention, the fifth input may be any possible input, such as a downward sliding input, an upward sliding input, a leftward sliding input, or a rightward sliding input of the user of the first electronic device on the screen of the first electronic device, which may be determined specifically according to actual usage requirements, and this embodiment of the present invention is not limited.
It should be noted that the sliding up, sliding down, sliding left, sliding right, and the like in the embodiment of the present invention are exemplarily described by taking an input of the first electronic device user on the screen of the first electronic device as an example; that is, the sliding directions are defined relative to the first electronic device or the screen of the first electronic device.
Specifically, for example, when the screen of the first electronic device faces the first electronic device user, the upward sliding input refers to an input that the first electronic device user slides towards the top of the screen of the first electronic device, the downward sliding input refers to an input that the first electronic device user slides towards the bottom of the screen of the first electronic device, the leftward sliding input refers to an input that the first electronic device user slides towards the left side of the screen of the first electronic device, and the rightward sliding input refers to an input that the first electronic device user slides towards the right side of the screen of the first electronic device.
For example, assuming that the fifth input is a slide-down input of the first electronic device user on the screen of the first electronic device, as shown in (a) of fig. 13, in the case where the second electronic device displays the first identifier 43, as shown in (b) of fig. 13, when the first electronic device user slides down on the screen of the first electronic device (i.e., the fifth input), the first electronic device may transmit fourth information (for instructing the second electronic device to cancel displaying the first identifier) to the second electronic device in response to the input. After the second electronic device receives the fourth information, the second electronic device may cancel displaying the first identifier, as shown in (c) of fig. 13, and after canceling the display of the first identifier, the second electronic device may continue displaying the target object.
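A minimal sketch of this cancel flow is shown below, assuming (purely for illustration) that the fifth input is recognized as a downward swipe and that the fourth information is a small dictionary message; none of these names come from the patent itself.

    # Hypothetical sketch: a downward swipe (fifth input) on the first electronic device
    # produces fourth information, and the second electronic device clears the first
    # identifier from its overlay when that information is received.
    def classify_swipe(start, end, min_distance=80):
        """Return 'down', 'up', 'left' or 'right' for a simple straight swipe, else None."""
        dx, dy = end[0] - start[0], end[1] - start[1]
        if abs(dy) >= abs(dx) and abs(dy) >= min_distance:
            return "down" if dy > 0 else "up"
        if abs(dx) >= min_distance:
            return "right" if dx > 0 else "left"
        return None

    def on_fifth_input(start, end, send):
        if classify_swipe(start, end) == "down":
            send({"type": "fourth_information", "action": "cancel_first_identifier"})

    def on_fourth_information(message, overlay):
        if message.get("action") == "cancel_first_identifier":
            overlay.clear()  # the first video picture keeps showing; only the identifier goes

    overlay = [{"shape": "polyline"}]
    outbox = []
    on_fifth_input((500, 200), (505, 420), outbox.append)
    on_fourth_information(outbox[0], overlay)
    print(outbox, overlay)  # fourth information sent; overlay is now empty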
The display control method provided by the embodiment of the invention can be applied to a scene remotely guided by first electronic equipment, and because the first identifier can be used for selecting the target object, in the process of remotely guiding, if a user of the first electronic equipment does not want to emphasize the target object any more, the first electronic equipment can be triggered to send fourth information to the second electronic equipment through the fifth input, so that the second electronic equipment can be instructed to cancel displaying the first identifier, and thus in the process of remotely guiding by using the first electronic equipment, the electronic equipment of a taught person (namely, the second electronic equipment) can display different contents according to the will of the user of the first electronic equipment, and the efficiency of remotely guiding by the first electronic equipment can be further improved.
Optionally, in this embodiment of the present invention, in a case that the first electronic device displays the first identifier, after the first electronic device receives the fifth input, in response to the fifth input, the first electronic device may further cancel displaying the first identifier displayed by the first electronic device, so that the content in the video call interface displayed by the first electronic device remains consistent with the content in the video call interface displayed by the second electronic device, thereby facilitating the user (instructor) of the first electronic device in remotely guiding the user of the second electronic device through the first electronic device.
Optionally, in this embodiment of the present invention, when the target operation is to display the hand instruction information of the first electronic device user on the target object in the first video image in an overlapping manner, the step 203 may be specifically implemented by the following step 203 c.
And step 203c, the second electronic device displays the hand indication information of the first electronic device user on the target object in an overlapping mode according to the target proportion.
The target proportion may be determined according to a first distance and a second distance, the first distance may be a distance between a hand of a user of the first electronic device and a camera of the first electronic device, and the second distance may be a distance between a real-scene object corresponding to the target object and a camera of the second electronic device.
Optionally, in this embodiment of the present invention, the first electronic device may obtain the hand indication information (for example, a hand image of the first electronic device user) of the first electronic device user by using a video segmentation method. Specifically, the first electronic device may perform video segmentation on the second video image (the video image captured by the first electronic device) in the background to obtain a hand image of the first electronic device user in the second video image.
Optionally, in the embodiment of the present invention, the video segmentation method may be any possible method such as a deep learning method, and may specifically be determined according to an actual use requirement, and the embodiment of the present invention is not limited.
For example, as shown in fig. 14, when the first electronic device acquires the hand indication information of the first electronic device user in the second video frame, the first electronic device may perform image segmentation on the second video frame by a deep learning method to obtain the hand indication information 51 of the first electronic device user.
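The image segmentation itself is not specified in detail here; as a rough, assumption-laden stand-in for the deep-learning method mentioned above, the sketch below uses a naive colour threshold only so that the data flow (second video picture in, hand mask out) is concrete. A real device would run a trained segmentation network instead.

    # Hypothetical sketch: extracting a hand mask from the second video picture.
    # The skin-tone threshold is a crude placeholder for a deep-learning segmenter.
    import numpy as np

    def segment_hand(frame_rgb: np.ndarray) -> np.ndarray:
        """Return a boolean mask marking pixels assumed to belong to the user's hand."""
        r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
        return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

    frame = np.zeros((4, 4, 3), dtype=np.uint8)
    frame[1:3, 1:3] = (200, 120, 90)  # a small patch standing in for the hand
    mask = segment_hand(frame)
    print(mask.sum())  # 4 pixels classified as hand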
The above step 203c will be described with reference to fig. 15.
For example, as shown in (a) of fig. 15, when the first electronic device captures a hand image 61 of the first electronic device user, the first electronic device may add information of the hand image of the first electronic device user to the first information and send the first information to the second electronic device. After the second electronic device receives the first information, the second electronic device may display a hand image 62 of the first electronic device user, as shown in (b) of fig. 15, in a superimposed manner on the first video picture displayed by the second electronic device.
Optionally, in an embodiment of the present invention, the target ratio may be a ratio between the second distance and the first distance.
Of course, in actual implementation, the target proportion may also be determined according to any other possible manner, and may specifically be determined according to actual use requirements, and the embodiment of the present invention is not limited.
Optionally, in this embodiment of the present invention, the first distance may be measured by the first electronic device and then sent to the second electronic device, and the second distance may be measured by the second electronic device.
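Putting the two distances together, the scaling in step 203c can be sketched as below; the function names, and the assumption that the target proportion equals the ratio of the second distance to the first distance, follow the optional choice described above and are illustrative only.

    # Hypothetical sketch: computing the target proportion and the size at which the
    # hand image is superimposed on the first video picture.
    def target_proportion(first_distance_m: float, second_distance_m: float) -> float:
        # first distance: hand of the first device user -> camera of the first device
        # second distance: real-scene object -> camera of the second device
        return second_distance_m / first_distance_m

    def scaled_overlay_size(hand_w: int, hand_h: int, scale: float) -> tuple:
        return max(1, round(hand_w * scale)), max(1, round(hand_h * scale))

    scale = target_proportion(first_distance_m=0.4, second_distance_m=1.2)  # ~= 3.0
    print(scaled_overlay_size(120, 160, scale))  # (360, 480)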
The display control method provided by the embodiment of the invention can be applied to a scene remotely guided by first electronic equipment, and because the first video picture is a video picture collected and sent by second electronic equipment, according to the target proportion, the hand indication information of the first electronic equipment user is superposed on the first video picture, and the scene in which the hand of the first electronic equipment user is superposed on a live-action image corresponding to a target object in reality can be restored, so that the target object can be clearly indicated to the second electronic equipment user (namely a taught person) in real time, and the remote guidance efficiency of the first electronic equipment can be further improved.
In the embodiment of the present invention, the display control methods shown in the above drawings are all exemplarily described with reference to one of the drawings in the embodiment of the present invention. In specific implementation, the display control method shown in each of the above drawings may also be implemented in combination with any other combinable drawings illustrated in the above embodiments, and details are not described herein again.
As shown in fig. 16, an embodiment of the present invention provides a first electronic device 700, and the first electronic device 700 may include a display module 701 and a sending module 702. The display module 701 may be configured to display a first video picture, where the first video picture is a video picture acquired and sent by a second electronic device in a video call process between the first electronic device and the second electronic device; the sending module 702 may be configured to send first information to the second electronic device after the display module 701 displays the first video picture, where the first information is used to instruct to perform a target operation on a target object in the first video picture displayed by the second electronic device. Wherein the target operation comprises at least one of: magnifying and displaying the target object, and overlaying and displaying hand indication information of a first electronic device user on the target object; the hand indication information of the first electronic device user is used for indicating the real-time position and the hand action of the hand of the first electronic device user.
Optionally, the sending module 702 is specifically configured to send the first information to the second electronic device when a preset condition is met; the preset condition includes at least one of the following: receiving a first input of the first electronic device user for the target object, collecting hand indication information of the first electronic device user, and collecting voice information associated with the target object.
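A one-line sketch of this preset-condition check (with assumed parameter names) is given below for clarity; it simply reflects the "at least one of" logic stated above.

    # Hypothetical sketch: the sending module forwards the first information when at
    # least one of the preset conditions holds.
    def should_send_first_information(received_first_input: bool,
                                      hand_indication_collected: bool,
                                      voice_about_target_collected: bool) -> bool:
        return (received_first_input
                or hand_indication_collected
                or voice_about_target_collected)

    print(should_send_first_information(False, True, False))  # True -> send first information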
The embodiment of the present invention provides a first electronic device, which can be applied to a scene remotely guided by the first electronic device, and since the first input of the first electronic device user for the image of the target object is received, the collected hand instruction information of the first electronic device user and the collected voice information associated with the target object are active behavior information of the first electronic device user, when the preset condition is satisfied, the first electronic device can determine that the first electronic device user wants to indicate or emphasize the target object to the second electronic device user, and then the first electronic device can send the first information to the second electronic device to indicate the second electronic device to perform a corresponding operation on the target object, so that the first electronic device user can accurately guide the second electronic device in real time through the first electronic device, thereby, the remote guidance efficiency of the first electronic device can be improved.
Optionally, the first information may include at least one of: the first input information, the real-time position and the hand action of the hand of the first electronic equipment user and the voice information related to the real scene object corresponding to the target object in the first video picture.
The embodiment of the present invention provides a first electronic device, which can be applied to a scene remotely guided by the first electronic device, and since first input information, a real-time position and a hand motion of a hand of a user of the first electronic device, and voice information associated with a real-scene object corresponding to a target object can all represent that the user of the first electronic device wants to trigger a target operation performed on the target object by a second electronic device through the first electronic device, when the preset condition is met, the first electronic device sends the first information to the second electronic device to instruct the second electronic device to perform the target operation on the target object, so that the user of the first electronic device can actively guide the second electronic device through the first electronic device, and thus the performance of remotely guiding by the first electronic device can be further improved.
Optionally, with reference to fig. 16, as shown in fig. 17, the first electronic device 700 may further include an execution module 703. The execution module 703 may be configured to perform the target operation on a target object in the first video picture displayed by the first electronic device after the display module 701 displays the first video picture.
The embodiment of the invention provides a first electronic device, which can be applied to a scene remotely guided by the first electronic device, and after the first electronic device displays a first video picture, the first electronic device executes a target operation on a target object in the first video picture displayed by the first electronic device, so that the first electronic device can execute the same operation on the first video picture displayed by the first electronic device as the first video picture displayed by the second electronic device, and thus the content displayed by the first electronic device can be the same as the content displayed by the second electronic device, and further, in the process of remotely guiding by the first electronic device, the content displayed by the first electronic device and the content displayed by the second electronic device are synchronous.
Optionally, the target operation is to enlarge and display a target object in the first video picture; in conjunction with fig. 16 described above, as shown in fig. 18, the first electronic device 700 may further include a receiving module 704. A receiving module 704, configured to receive a second input to the target object from the user of the first electronic device after the sending module 702 sends the first information to the second electronic device; the sending module 702 may be further configured to send, in response to the second input received by the receiving module 704, second information to the second electronic device, where the second information is used to instruct the second electronic device to display the first identifier, and the first identifier is used to select the target object. Wherein the second information includes input information of the second input and a display mode of the first identifier.
The embodiment of the present invention provides a first electronic device, which can be applied to a scene remotely guided by the first electronic device, wherein the second input can indicate that a user of the first electronic device has an intention to emphasize the target object to a user of a second electronic device, so that after the first electronic device receives the second input, the first electronic device can send the second information to the second electronic device, and can instruct the second electronic device to select the target object by displaying the first identifier, so that the target object can be clearly and intuitively indicated to the user of the second electronic device, and further the remote guidance efficiency of the first electronic device can be further improved.
Optionally, the display module 701 may be further configured to display the first identifier in response to the second input received by the receiving module 704.
The embodiment of the invention provides a first electronic device, which can be applied to a scene remotely guided by the first electronic device, and after the first electronic device receives the second input, the first electronic device displays a first identifier, so that the content displayed by the first electronic device is the same as the content displayed by the second electronic device, that is, the content displayed by the first electronic device is synchronous with the content displayed by the second electronic device in real time, and therefore, in the process of remotely guiding by the first electronic device, a user of the first electronic device can see the same content as a user of the second electronic device, and the user of the first electronic device can conveniently remotely guide the second electronic device by the user of the first electronic device. Thus, the remote guidance efficiency of the first electronic device can be further improved.
Optionally, the display module 701 may be specifically configured to display the first identifier in a first mode, where the first mode is a mode for displaying the preset identifier; or, the display module 701 may be specifically configured to display the first identifier in a second mode, where the second mode is a mode for displaying the identifier according to the input track.
The embodiment of the invention provides a first electronic device, which can be applied to a scene of remote guidance of the first electronic device, and because the forms of first identifiers displayed in different modes (a first mode or a second mode) are different, the forms of the first identifiers displayed in the second electronic device are more flexible, so that the experience of a user can be improved.
Optionally, the target operation is to display hand indication information of the first electronic device user in an overlapping manner on the image of the target object in the first video picture; the first information may be specifically used to instruct the second electronic device to display, according to the target proportion, the hand indication information of the first electronic device user superimposed on the target object in the first video picture displayed by the second electronic device. The target proportion is determined according to a first distance and a second distance, the first distance is the distance between the hand of the user of the first electronic device and the camera of the first electronic device, and the second distance is the distance between the real scene object corresponding to the target object and the camera of the second electronic device.
The embodiment of the invention provides a first electronic device, which can be applied to a scene remotely guided by the first electronic device, and because the first video picture is a video picture collected and sent by a second electronic device, according to the target proportion, hand indication information of a first electronic device user is superposed on the first video picture, and a scene in which a hand of the first electronic device user is superposed on a live-action image corresponding to a target object in reality can be restored, so that the target object can be clearly indicated to the second electronic device user (namely a taught person) in real time, and the remote guidance efficiency of the first electronic device can be improved.
Optionally, the first distance is sent to the second electronic device after being measured by the first electronic device, and the second distance is measured by the second electronic device.
The electronic device provided by the embodiment of the invention can realize each process executed by the electronic device in the display control method embodiment, and can achieve the same technical effect, and the details are not repeated here in order to avoid repetition.
The embodiment of the present invention provides a first electronic device. In a scene remotely guided by the first electronic device, since the target object is displayed in an enlarged manner, a second electronic device user (e.g., the taught party) can clearly view the target object; and since the hand indication information of the first electronic device user (e.g., the instructor) is superimposed and displayed on the target object, the target object can be clearly and explicitly indicated to the second electronic device user. Therefore, in the process that the first electronic device user remotely guides the second electronic device user through the electronic device, the first electronic device can carry out a video call with the second electronic device and instruct the second electronic device to execute the target operation on the target object by sending the first information to the second electronic device, so that the target object can be clearly indicated or emphasized to the second electronic device user (i.e., the taught party), and the second electronic device user can know, in real time and intuitively, the content indicated by the first electronic device user. Therefore, the display control method provided by the embodiment of the present invention can improve the remote guidance efficiency of the first electronic device.
Fig. 19 is a hardware schematic diagram of an electronic device (which may be specifically a first electronic device) for implementing various embodiments of the present invention. As shown in fig. 19, electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 19 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The display unit 106 is configured to display a first video picture; and the radio frequency unit 101 is configured to send first information to the second electronic device after the display unit displays the first video picture. The first video picture is a video picture collected and sent by the second electronic device in the video call process of the first electronic device and the second electronic device, the first information is used for indicating to execute a target operation on a target object in the first video picture displayed by the second electronic device, and the target operation comprises at least one of the following: magnifying and displaying the target object, and overlaying and displaying hand indication information of the first electronic device user on the target object, wherein the hand indication information of the first electronic device user is used for indicating the real-time position and hand movement of the hand of the first electronic device user.
It can be understood that, in the embodiment of the present invention, the display module 701 in the structural schematic diagram of the electronic device (for example, fig. 16 and the like) may be implemented by the display unit 106; the sending module 702 in the structural schematic diagram of the electronic device may be implemented by the radio frequency unit 101; the execution module 703 in the structural schematic diagram of the electronic device (e.g., fig. 17) may be implemented by the processor 110; the receiving module 704 in the structural schematic diagram of the electronic device (e.g., fig. 18) may be implemented by the user input unit 107.
The embodiment of the present invention provides a first electronic device. In a scene remotely guided by the first electronic device, since the target object is displayed in an enlarged manner, a second electronic device user (e.g., the taught party) can clearly view the target object; and since the hand indication information of the first electronic device user (e.g., the instructor) is superimposed and displayed on the target object, the target object can be clearly and explicitly indicated to the second electronic device user. Therefore, in the process that the first electronic device user remotely guides the second electronic device user through the electronic device, the first electronic device can carry out a video call with the second electronic device and instruct the second electronic device to execute the target operation on the target object by sending the first information to the second electronic device, so that the target object can be clearly indicated or emphasized to the second electronic device user (i.e., the taught party), and the second electronic device user can know, in real time and intuitively, the content indicated by the first electronic device user. Therefore, the display control method provided by the embodiment of the present invention can improve the remote guidance efficiency of the first electronic device.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the electronic apparatus 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include an image capturing device (e.g., a camera) 1040, a Graphics Processing Unit (GPU) 1041, and a microphone 1042. The image capturing device 1040 (e.g., a camera) captures image data of a still picture or video. The graphics processor 1041 processes image data of still pictures or video obtained by the image capturing device in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and may be capable of processing such sound into audio data. In the case of a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101.
The electronic device 100 also includes at least one sensor 105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 19 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 108 is an interface for connecting an external device to the electronic apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 100 or may be used to transmit data between the electronic apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the electronic device. Processor 110 may include one or more processing units; alternatively, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The electronic device 100 may further include a power supply 111 (e.g., a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 100 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides an electronic device, which includes the processor 110 shown in fig. 19, the memory 109, and a computer program stored in the memory 109 and capable of being executed on the processor 110, where the computer program, when executed by the processor 110, implements each process of the foregoing display control method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor shown in fig. 19, the computer program implements each process of the display control method embodiment, and can achieve the same technical effect, and is not described herein again to avoid repetition. The computer-readable storage medium may include a read-only memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. A display control method is applied to first electronic equipment, and is characterized by comprising the following steps:
displaying a first video picture, wherein the first video picture is a video picture collected and sent by second electronic equipment in the video call process of the first electronic equipment and the second electronic equipment;
sending first information to the second electronic equipment, wherein the first information is used for indicating that target operation is executed on a target object in the first video picture displayed by the second electronic equipment;
wherein the target operation comprises the step of displaying hand indication information of a first electronic equipment user on the target object in an overlapping mode, wherein the hand indication information is used for indicating the real-time position and hand action of the hand of the first electronic equipment user;
wherein, in a case that the target operation comprises displaying the hand indication information of the first electronic equipment user on the target object in an overlapping mode,
the executing a target operation on a target object in the first video picture displayed by the second electronic device comprises:
according to the target proportion, hand indication information of a first electronic equipment user is displayed in an overlapping mode on the target object;
the target proportion is determined according to a first distance and a second distance, the first distance is a distance between a hand of a user of the first electronic equipment and a camera of the first electronic equipment, and the second distance is a distance between a real scene object corresponding to the target object and a camera of the second electronic equipment.
2. The method of claim 1, wherein sending the first information to the second electronic device comprises:
sending the first information to the second electronic equipment under the condition that a preset condition is met;
the preset condition comprises at least one of the following conditions: receiving a first input of the first electronic device user aiming at the target object, collecting hand indication information of the first electronic device user, and collecting voice information associated with the target object.
3. The method of claim 2, wherein the first information comprises at least one of: the input information of the first input, the real-time position and hand action of the hand of the first electronic equipment user, and the voice information associated with the real scene object corresponding to the target object.
4. The method of claim 1, wherein after the displaying the first video picture, the method further comprises:
and executing the target operation on a target object in the first video picture displayed by the first electronic equipment.
5. The method of claim 1, wherein the target operation further comprises displaying the target object in an enlarged manner;
after the sending the first information to the second electronic device, the method further includes:
receiving a second input of the first electronic equipment user to the target object;
responding to the second input, and sending second information to the second electronic equipment, wherein the second information is used for instructing the second electronic equipment to display a first identifier, and the first identifier is used for selecting the target object;
wherein the second information includes input information of the second input and a display mode of the first identifier.
6. The method of claim 5, further comprising:
in response to the second input, displaying the first identifier.
7. The method of claim 5 or 6, wherein displaying the first indicator comprises:
displaying the first identifier in a first mode, wherein the first mode is a mode for displaying a preset identifier;
or,
and displaying the first identifier in a second mode, wherein the second mode is a mode for displaying the identifier according to the input track.
8. The method of claim 1, wherein the first distance is measured by the first electronic device and sent to the second electronic device, and wherein the second distance is measured by the second electronic device.
9. A first electronic device, comprising a display module and an execution module;
the display module is used for displaying a first video picture, wherein the first video picture is a video picture acquired and sent by second electronic equipment in the video call process of the first electronic equipment and the second electronic equipment;
the execution module is configured to send first information to the second electronic device after the display module displays the first video picture, where the first information is used to instruct to execute a target operation on a target object in the first video picture displayed by the second electronic device;
wherein the target operation comprises the step of displaying hand indication information of a first electronic equipment user on the target object in an overlapping mode, wherein the hand indication information is used for indicating the real-time position and hand action of the hand of the first electronic equipment user;
wherein, in a case that the target operation comprises displaying the hand indication information of the first electronic equipment user on the target object in an overlapping mode,
the first information is specifically used for instructing the second electronic device to display, according to a target proportion, the hand indication information of the first electronic equipment user in an overlapping mode on the target object in the first video picture displayed by the second electronic device;
the target proportion is determined according to a first distance and a second distance, the first distance is a distance between a hand of a user of the first electronic equipment and a camera of the first electronic equipment, and the second distance is a distance between a real scene object corresponding to the target object and a camera of the second electronic equipment.
10. A first electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the display control method according to any one of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the display control method according to any one of claims 1 to 8.
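The second information of claim 5 bundles the user's raw input with the way the first identifier should be drawn on the peer device. The payload below is a minimal sketch of that idea; the field names, the "tap"/"trace" vocabulary, and the JSON transport are illustrative assumptions, not taken from the disclosure.

    import json
    from dataclasses import dataclass, asdict
    from typing import List, Tuple

    @dataclass
    class SecondInformation:
        """Hypothetical payload for the 'second information' of claim 5."""
        input_type: str                           # assumed vocabulary, e.g. "tap" or "trace"
        touch_points: List[Tuple[float, float]]   # normalized (x, y) points of the second input
        identifier_mode: str                      # "preset" (first mode) or "track" (second mode)

        def to_message(self) -> bytes:
            # Serialize for transmission from the first to the second electronic device.
            return json.dumps(asdict(self)).encode("utf-8")

    # A single tap asking the peer to draw a preset identifier around the target object.
    message = SecondInformation("tap", [(0.42, 0.57)], "preset").to_message()
    print(message)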
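Claim 7 distinguishes a first mode, which draws a preset identifier, from a second mode, which draws the identifier along the input track. The branch below sketches how a receiving device might act on the mode carried in the second information; draw_preset_identifier and draw_track_identifier are placeholder rendering calls, not APIs from the disclosure.

    from typing import List, Tuple

    Point = Tuple[float, float]

    def draw_preset_identifier(anchor: Point) -> None:
        # Placeholder for drawing a predefined marker (e.g. a circle) at the target object.
        print(f"first mode: preset identifier anchored at {anchor}")

    def draw_track_identifier(track: List[Point]) -> None:
        # Placeholder for drawing the identifier along the received input track.
        print(f"second mode: identifier following a track of {len(track)} points")

    def display_first_identifier(mode: str, points: List[Point]) -> None:
        """Branch on the display mode carried in the second information."""
        if mode == "preset":
            draw_preset_identifier(points[0])
        elif mode == "track":
            draw_track_identifier(points)
        else:
            raise ValueError(f"unknown identifier display mode: {mode}")

    display_first_identifier("preset", [(0.42, 0.57)])
    display_first_identifier("track", [(0.10, 0.10), (0.20, 0.15), (0.30, 0.10)])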
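Claims 8 and 9 state only that the target proportion is determined according to the first and second distances. One plausible reading, assuming comparable camera focal lengths and a pinhole-style model in which apparent size falls off with distance, is that the hand image captured at the first distance is rescaled so it appears as if it sat at the real object's distance. The ratio below is that assumption, not the claimed formula.

    from typing import Tuple

    def target_proportion(first_distance_m: float, second_distance_m: float) -> float:
        """Illustrative mapping from the two distances of claim 9 to a scale factor.

        Assumes apparent size scales with 1/distance, so the hand image captured at
        the first distance is scaled by first_distance / second_distance. The claims
        only say the proportion is "determined according to" the two distances.
        """
        if first_distance_m <= 0 or second_distance_m <= 0:
            raise ValueError("distances must be positive")
        return first_distance_m / second_distance_m

    def scaled_overlay_size(width_px: int, height_px: int, ratio: float) -> Tuple[int, int]:
        # Pixel size at which the hand indication information would be superimposed.
        return max(1, round(width_px * ratio)), max(1, round(height_px * ratio))

    # First distance: hand to the first device's camera, 0.3 m (measured by the first
    # device and sent to the second device per claim 8). Second distance: real object
    # to the second device's camera, 1.2 m (measured by the second device).
    ratio = target_proportion(0.3, 1.2)
    print(ratio, scaled_overlay_size(400, 300, ratio))   # 0.25 (100, 75)

Because the first distance travels with the first information (claim 8), the second electronic device holds both measurements and can apply the ratio locally before compositing the hand overlay onto the first video picture.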
CN201911202444.2A 2019-11-29 2019-11-29 Display control method and electronic equipment Active CN110944139B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911202444.2A CN110944139B (en) 2019-11-29 2019-11-29 Display control method and electronic equipment

Publications (2)

Publication Number Publication Date
CN110944139A (en) 2020-03-31
CN110944139B (en) 2022-04-22

Family

ID=69909390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911202444.2A Active CN110944139B (en) 2019-11-29 2019-11-29 Display control method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110944139B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111601066B (en) * 2020-05-26 2022-03-25 维沃移动通信有限公司 Information acquisition method and device, electronic equipment and storage medium
CN112188260A (en) * 2020-10-26 2021-01-05 咪咕文化科技有限公司 Video sharing method, electronic device and readable storage medium
CN112565912B (en) * 2020-11-26 2023-03-24 维沃移动通信有限公司 Video call method and device, electronic equipment and readable storage medium
CN113784207A (en) * 2021-07-30 2021-12-10 北京达佳互联信息技术有限公司 Video picture display method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103888423A (en) * 2012-12-20 2014-06-25 联想(北京)有限公司 Information processing method and information processing device
KR20160002132A (en) * 2014-06-30 2016-01-07 삼성전자주식회사 Electronic device and method for providing sound effects
CN109982024A (en) * 2019-04-03 2019-07-05 阿依瓦(北京)技术有限公司 Video pictures share labeling system and shared mask method in a kind of remote assistance

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101673161B (en) * 2009-10-15 2011-12-07 复旦大学 Visual, operable and non-solid touch screen system
CN204440363U (en) * 2015-03-20 2015-07-01 宁波萨瑞通讯有限公司 The contactless cursor control system of panel computer or pad
CN105867626A (en) * 2016-04-12 2016-08-17 京东方科技集团股份有限公司 Head-mounted virtual reality equipment, control method thereof and virtual reality system
CN206712945U (en) * 2017-04-26 2017-12-05 联想新视界(天津)科技有限公司 Video communications system
CN109936773A (en) * 2017-12-19 2019-06-25 展讯通信(上海)有限公司 Implementation method, device and the user terminal of video shopping
CN108769517B (en) * 2018-05-29 2021-04-16 亮风台(上海)信息科技有限公司 Method and equipment for remote assistance based on augmented reality
CN110177250A (en) * 2019-04-30 2019-08-27 上海掌门科技有限公司 A kind of method and apparatus for the offer procurement information in video call process

Also Published As

Publication number Publication date
CN110944139A (en) 2020-03-31

Similar Documents

Publication Publication Date Title
CN108958615B (en) Display control method, terminal and computer readable storage medium
CN108495029B (en) Photographing method and mobile terminal
CN110944139B (en) Display control method and electronic equipment
CN109743498B (en) Shooting parameter adjusting method and terminal equipment
CN109032445B (en) Screen display control method and terminal equipment
CN111142991A (en) Application function page display method and electronic equipment
CN109032486B (en) Display control method and terminal equipment
CN110908558A (en) Image display method and electronic equipment
CN111142723B (en) Icon moving method and electronic equipment
CN109857289B (en) Display control method and terminal equipment
CN109828731B (en) Searching method and terminal equipment
CN108536509B (en) Application body-splitting method and mobile terminal
CN110752981B (en) Information control method and electronic equipment
CN110196668B (en) Information processing method and terminal equipment
CN109257505B (en) Screen control method and mobile terminal
CN109495616B (en) Photographing method and terminal equipment
CN110908750B (en) Screen capturing method and electronic equipment
CN111124231B (en) Picture generation method and electronic equipment
CN111190517B (en) Split screen display method and electronic equipment
CN110703972B (en) File control method and electronic equipment
CN110944113B (en) Object display method and electronic equipment
CN111274842A (en) Method for identifying coded image and electronic equipment
CN111061446A (en) Display method and electronic equipment
CN111246105B (en) Photographing method, electronic device, and computer-readable storage medium
CN110647506B (en) Picture deleting method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant