CN111901518A - Display method and device and electronic equipment - Google Patents


Info

Publication number
CN111901518A
Authority
CN
China
Prior art keywords
image
preset model
user
model
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010582016.3A
Other languages
Chinese (zh)
Other versions
CN111901518B (en)
Inventor
王卫国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010582016.3A priority Critical patent/CN111901518B/en
Publication of CN111901518A publication Critical patent/CN111901518A/en
Application granted granted Critical
Publication of CN111901518B publication Critical patent/CN111901518B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone

Abstract

The application discloses a display method, a display device and an electronic device, belonging to the field of communication technology, which can address the problem that operation demonstrations performed through an electronic device have a poor demonstration effect. The method comprises the following steps: receiving a first input on a first control in a shooting preview interface, where the first control is used to trigger an auxiliary display function; and, in response to the first input, displaying a first preset model in the shooting preview interface, where the first preset model is an auxiliary model of the auxiliary display function, the first preset model is generated from at least one first image, and the at least one first image is a captured user image. The method and device are suitable for scenarios in which a user performs an operation demonstration through an electronic device.

Description

Display method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a display method, a display device and electronic equipment.
Background
With the development of communication technology, electronic devices offer increasingly rich functions. For example, a user can perform operation demonstrations through an electronic device, such as recording a demonstration video, taking a demonstration picture, or live-streaming an operation demonstration (including demonstrating during a video session).
Illustratively, take a user live-streaming an operation demonstration through an electronic device as an example. The user can perform the demonstration within the capture range of the camera of the electronic device, so that the camera captures images of the user's demonstration in real time. The captured images are transmitted over a network to the electronic device (hereinafter referred to as electronic device A) of the video session object (i.e., the user in the video session with the demonstrating user), and the video session object can watch the demonstration in real time through electronic device A.
However, with the above method, if the user holds the electronic device with one hand, the user can only demonstrate with the hand not holding the device, i.e., perform a one-handed demonstration, which may result in a poor demonstration effect.
Disclosure of Invention
The embodiments of the present application aim to provide a display method, a display device and an electronic device that can solve the problem of the poor demonstration effect of operation demonstrations performed through an electronic device.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a display method, the method comprising: receiving a first input on a first control in a shooting preview interface, where the first control is used to trigger an auxiliary display function; and, in response to the first input, displaying a first preset model in the shooting preview interface, where the first preset model is an auxiliary model of the auxiliary display function, the first preset model is generated from at least one first image, and the at least one first image is a captured user image.
In a second aspect, embodiments of the present application provide a display device, which may include a receiving module and a display module. The receiving module is configured to receive a first input on a first control in a shooting preview interface, where the first control is used to trigger an auxiliary display function. The display module is configured to, in response to the first input received by the receiving module, display a first preset model in the shooting preview interface, where the first preset model is an auxiliary model of the auxiliary display function, the first preset model is generated from at least one first image, and the at least one first image is a captured user image.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the present application, the display device can receive a first input on a first control (used to trigger an auxiliary display function) in a shooting preview interface, and, in response to the first input, display a first preset model in the shooting preview interface, where the first preset model may be an auxiliary model of the auxiliary display function, generated from at least one first image, the at least one first image being a captured user image. With this scheme, when a user needs to perform an operation demonstration through the electronic device, the user can, by an input on the first control, trigger the display device to display in the shooting preview interface an auxiliary model generated from a captured user image (for example, a model generated from an image of the user's hand). The user can then operate the auxiliary model with the hand holding the electronic device, so that the auxiliary model cooperates with the user in the demonstration, for example presenting a two-handed demonstration in the shooting preview interface. This improves both the demonstration effect and the flexibility of performing operation demonstrations.
Drawings
FIG. 1 is a schematic diagram of a display method according to an embodiment of the present application;
FIG. 2 is a first schematic interface diagram of an application of the display method according to an embodiment of the present application;
FIG. 3 is a second schematic interface diagram of an application of the display method according to an embodiment of the present application;
FIG. 4 is a third schematic interface diagram of an application of the display method according to an embodiment of the present application;
FIG. 5 is a fourth schematic interface diagram of an application of the display method according to an embodiment of the present application;
FIG. 6 is a fifth schematic interface diagram of an application of the display method according to an embodiment of the present application;
FIG. 7 is a sixth schematic interface diagram of an application of the display method according to an embodiment of the present application;
FIG. 8 is a seventh schematic interface diagram of an application of the display method according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a display device according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 11 is a hardware schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar objects and not necessarily to describe a particular order or sequence. It is to be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in sequences other than those illustrated or described herein. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The display method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The display method provided by the embodiment of the application can be applied to a scene that a user performs operation demonstration through electronic equipment.
For example, consider a user performing an operation demonstration during a video session through an electronic device, such as demonstrating how to operate a device or demonstrating the positions of hand acupoints. Because the camera of the electronic device can capture images of objects within its capture range but not outside it, the user must demonstrate within that range; otherwise, the video session object cannot view the demonstration. If the user holds the electronic device with a single hand, the related art offers two ways to demonstrate: 1. The user can fix the electronic device in a certain position and then adjust the distance between the user (and the object to be demonstrated) and the electronic device, to ensure that both are within the capture range of the camera. 2. The user can hold the electronic device with one hand and demonstrate with the other; of course, the demonstration object (such as a device or the user's hand) still needs to be within the capture range of the camera. It can be seen that, in way 1, if there is no suitable tool near the user for fixing the electronic device, the user cannot demonstrate smoothly; in way 2, the one-handed demonstration may result in a poor demonstration effect.
In view of the above problem, in the technical solution provided by the embodiments of the present application, when the display device displays a shooting preview interface (specifically, a video session interface), the video session interface may include a control 1 for triggering an auxiliary display function. The user may input on control 1, for example click it; that is, the display device receives a first input on a first control in the video session interface. In response to the first input, the display device may display an auxiliary model in the video session interface, where the auxiliary model belongs to the auxiliary display function, may be used to assist the user in performing a demonstration action (for example, the user may operate the auxiliary model with the hand holding the electronic device), and is generated from a user image collected by the display device. In this way, the user can operate the auxiliary model with the hand holding the electronic device, so that the auxiliary model cooperates with the user in the demonstration. For example, if the auxiliary model is generated from an image of the user's hand, a two-handed demonstration can be presented in the shooting preview interface, improving both the demonstration effect and the flexibility of performing the demonstration.
It should be noted that the foregoing describes the display method provided in the embodiments of the present application by taking as an example a user performing a demonstration in a video session through the display device. In actual implementation, the method may also be applied to any other possible camera scenario, such as recording a demonstration video or taking a demonstration picture through the display device. The descriptions of those scenarios can refer to the above description of demonstrating in a video session, and are omitted here to avoid repetition.
As shown in fig. 1, an embodiment of the present application provides a display method, which may include the following steps 101 and 102:
step 101, a display device receives a first input of a first control in a shooting preview interface.
The first control can be used for triggering an auxiliary display function of the display device.
In the embodiment of the application, the auxiliary display function is a function of displaying an auxiliary model for assisting a user in performing operation demonstration on a shooting preview interface, and the auxiliary model is generated according to an acquired user image. The description of the user image will be described in detail in the following embodiments, and is not repeated here to avoid redundancy.
In the embodiments of the present application, because the first preset model is generated from a captured user image, the first preset model is highly realistic, just as if the user's hand were actually within the capture range of the camera.
Optionally, in this embodiment of the application, the shooting preview interface may be any interface for displaying a preview image acquired by a camera.
For example, the capture preview interface may be an interface in a camera application that displays capture preview images, or may be a video session interface in a communication application. The method can be determined according to actual use requirements, and the embodiment of the application is not limited.
Optionally, in this embodiment of the application, the first input may be a touch input of the user to the first control.
For example, the first input may be a click input of the first control by the user.
In the embodiment of the present application, the display device may be an electronic device, may also be a device in the electronic device, and may also be a device independent from the electronic device (for example, a device externally attached to the electronic device). The method can be determined according to actual use requirements, and the embodiment of the application is not limited.
Optionally, in this embodiment, when the shooting preview interface is a video session interface, the video session interface may include a "more" control; after the user inputs on the "more" control, the display device may display a menu bar, where the menu bar may include the first control.
And 102, the display device responds to the first input and displays a first preset model in a shooting preview interface.
The first preset model may be an auxiliary model of the auxiliary display function of the display device, and the first preset model is generated from at least one first image, which may be a user image captured by the display device. It can be understood that, in the embodiments of the present application, "user image" and "image of the user" have the same meaning and are interchangeable.
Optionally, in this embodiment of the application, the user image may be a hand image of the user (that is, the first preset model is a hand model of the user), a face image (that is, the first preset model is a face model of the user), an eye image (that is, the first preset model is an eye model of the user), a head image (that is, the first preset model is a head model of the user), an upper body image (that is, the first preset model is an upper body model of the user), a lower body image (that is, the first preset model is a lower body model of the user), and the like.
Optionally, in this embodiment of the application, in order to achieve a better display effect, the user image may be an image of a symmetric part of the user, where the symmetric part of the user may be a hand, a face, a leg, and the like of the user.
Optionally, in this embodiment of the application, when the at least one first image is an image (hereinafter, referred to as a first hand image) of a first hand (which may be a left hand or a right hand) of the user, the first preset model may be a first hand model.
Optionally, in the embodiments of the present application, the term "first hand" is used in a broad sense; that is, the first hand may specifically be at least one of the following: the first hand; the first hand and the wrist corresponding to the first hand; or the first hand, the wrist corresponding to the first hand, and the arm corresponding to the first hand. This can be determined according to actual use requirements, and the embodiments of the present application are not limited thereto.
Optionally, in the embodiments of the present application, the first hand model may be any one of the following: a model of a second hand; a model of the second hand and the wrist corresponding to the second hand; or a model of the second hand, the wrist corresponding to the second hand, and the arm corresponding to the second hand. This can be determined according to actual use requirements, and the embodiments of the present application are not limited thereto.
Optionally, in the embodiments of the present application, when the first preset model is a first hand model, the first hand model may represent the hand with which the user holds the electronic device, or the hand not holding the electronic device; this may be determined according to actual use requirements, and the embodiments of the present application are not limited thereto.
Optionally, in the embodiments of the present application, the first preset model may be a two-dimensional (2D) model or a three-dimensional (3D) model, which may be determined according to actual use requirements; the embodiments of the present application are not limited thereto.
Optionally, in the embodiments of the present application, the display device displaying the first preset model in the shooting preview interface may specifically mean displaying the first preset model superimposed on the preview image captured by the camera. It can be understood that the first preset model may cover the image area where it is located in the preview image captured by the camera.
For example, as shown in fig. 2 (a), the shooting preview interface 20 includes a preview image 21 and an "auxiliary display" control 22, and after the user clicks on the "auxiliary display" control 22, as shown in fig. 2 (b), the display device may display a first hand model 23 (i.e., a first preset model) in the shooting preview interface 20 in an overlapping manner, and the first hand model 23 may cover a partial image area in the preview image 21.
Optionally, in actual implementation of the embodiments of the present application, the first preset model covering the image area where it is located in the preview image may specifically mean that the display device replaces the RGB values of the area of the preview image where the first preset model is displayed with the RGB values of the first preset model.
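As a rough illustration of this replacement step, the sketch below composites a model patch onto a preview frame by overwriting RGB values wherever a model mask is set. The function name, the mask-based approach, and the toy arrays are assumptions for illustration only, not code described in the patent:

```python
import numpy as np

def overlay_model(preview: np.ndarray, model_rgb: np.ndarray,
                  model_mask: np.ndarray, top: int, left: int) -> np.ndarray:
    """Hard replacement: wherever the model's mask is set, the preview's
    RGB values are overwritten by the model's RGB values (no blending)."""
    out = preview.copy()
    h, w = model_mask.shape
    region = out[top:top + h, left:left + w]   # a view into `out`
    region[model_mask] = model_rgb[model_mask]
    return out

# toy example: a black 4x4 preview, a white 2x2 model patch placed at (1, 1)
preview = np.zeros((4, 4, 3), dtype=np.uint8)
model = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.array([[True, False],
                 [False, True]])
result = overlay_model(preview, model, mask, top=1, left=1)
```

Only the masked pixels of the preview are replaced; the rest of the frame, and the original preview array, are left untouched.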
Optionally, in the embodiments of the present application, each of the at least one first image includes an image of the user. For example, "the image of the first hand of the user" is hereinafter simply referred to as the "first hand image"; the two terms have the same meaning and are interchangeable.
In the display method provided by the embodiments of the present application, when a user needs to perform an operation demonstration through the electronic device, the user can input on the first control for triggering the auxiliary display function in the shooting preview interface, triggering the display device to display in the shooting preview interface an auxiliary model generated from a captured user image. The user can then operate the auxiliary model with the hand holding the electronic device, so that the auxiliary model cooperates with the user in the demonstration.
Optionally, in this embodiment of the application, after the display device displays the first preset model in the shooting preview interface, the user may trigger the first preset model to move and/or rotate in the shooting preview interface.
Optionally, in this embodiment of the application, when the user triggers the first preset model to rotate in the shooting preview interface, the first preset model may specifically rotate around the first rotation axis and/or the second rotation axis. The first rotating shaft is perpendicular to a screen of the electronic device, and the second rotating shaft is parallel to the screen of the electronic device.
For example, fig. 3 is a schematic diagram of the display device displaying a first hand model 31 (i.e., a first preset model) in the shooting preview interface 30, and as shown in (a) of fig. 3, the user can trigger the first hand model 31 to rotate around a first rotation axis 32. As shown in fig. 3 (b), when the first hand model 31 is a three-dimensional model, the user may also trigger the first hand model 31 to rotate around the second rotation axis 33.
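The two rotations can be described with standard rotation matrices. Taking the screen plane as the xy-plane, the first rotation axis (perpendicular to the screen) is the z-axis, and a second rotation axis parallel to the screen can, for instance, be taken as the y-axis. A minimal sketch under those assumptions (not code from the patent):

```python
import numpy as np

def rot_z(theta):
    """Rotation about the first axis: perpendicular to the screen (in-plane spin)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rot_y(theta):
    """Rotation about a second axis parallel to the screen (turns the 3D model over)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0,   c]])

p = np.array([1.0, 0.0, 0.0])      # a point on the model
p_spun = rot_z(np.pi / 2) @ p      # quarter turn within the screen plane
p_turned = rot_y(np.pi) @ p        # half turn, showing the model's other side
```

A two-dimensional model supports only the in-plane `rot_z` spin, which matches the text: rotation about the second axis is only meaningful for a three-dimensional model.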
Optionally, in this embodiment of the application, in order to facilitate user operation, when the display device displays the first preset model in the shooting preview interface and the first preset model is a three-dimensional model, the display device may further display a selection frame in the shooting preview interface, where the selection frame includes a first option and a second option, and the first option and the second option cannot be in a selected state at the same time.
When the first option is in the selected state, the user can trigger the first preset model to rotate about the first rotation axis; when the second option is in the selected state, the user can trigger the first preset model to rotate about the second rotation axis.
In the embodiment of the application, the user can control the first preset model to move and rotate, so that the flexibility and convenience of auxiliary display can be further improved.
Optionally, in this embodiment of the application, the first preset model may be pre-stored in the electronic device, or may be generated by the display device according to at least one first image after receiving the first input of the user, and may specifically be determined according to an actual use requirement, which is not limited in this embodiment of the application.
Optionally, in this embodiment of the application, when the first preset model is generated after receiving the first input of the user, after the display device receives the first input, the display device may collect at least one first image first, and then generate the first preset model according to the at least one first image.
For example, in the embodiment of the present application, before the step 102, the display method provided in the embodiment of the present application may further include the following steps 103 to 105. The step 102 can be specifically realized by the step 102a described below.
Step 103, the display device responds to the first input and acquires at least one first image.
In this embodiment, the display device may specifically acquire the at least one first image through a camera of the electronic device.
Optionally, in this embodiment of the application, the number of the at least one first image may be one or multiple, and may be specifically determined according to actual use requirements, and this embodiment of the application is not limited.
In this embodiment of the application, when the number of the first images is multiple, each of the first images includes an image of a user, and different ones of the first images include images of different sides of the user.
For example, assuming that the user image is a first hand image and the number of the at least one first image is 2, the first image of the at least one first image may include a palm image of the first hand, as shown in (a) of fig. 4; the second image of the at least one first image may include a back-of-hand image of the first hand, as shown in (b) of fig. 4.
Optionally, in the embodiments of the present application, take the first preset model being the first hand model, that is, the at least one first image being first hand images, as an example. To avoid the first hand model being too large, when capturing the at least one first image, the user's head may also be placed within the capture range of the camera, while ensuring that the image of the head and the image of the first hand do not overlap. In this way, the first hand image in the captured at least one first image can be prevented from being oversized.
For example, as shown in fig. 5, 50 is an electronic device, 51 is a second hand, 52 is a user's head, and the coordinate ranges of the second hand and the user's head on the x-axis are different so as to avoid the user's head image from overlapping with the first hand image.
It should be noted that, in actual implementation, the size of the first preset model can be made to meet the user's actual use requirements by adjusting the image size of the first image; of course, the first preset model can also be scaled directly. This can be determined according to actual use requirements, and the embodiments of the present application are not limited thereto.
Optionally, in the embodiments of the present application, to ensure that the display device accurately captures the at least one first image, the display device may, after receiving the first input, output prompt information prompting the user to pose. For example, taking voice prompts as the prompt information, the display device may sequentially output "please direct the palm of the hand toward the screen", "please direct the back of the hand toward the screen", "please direct the thumb side toward the screen", and "please direct the little-finger side toward the screen". After the user poses according to each prompt in turn, the display device can capture 4 first images.
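The prompt-driven capture sequence could be sketched as follows. `capture_frame` and `announce` are hypothetical stand-ins for the device's camera and voice-prompt APIs; they are assumptions for illustration, not calls described in the patent:

```python
# Hypothetical sketch of the prompt-driven capture sequence.
POSE_PROMPTS = [
    "please direct the palm of the hand toward the screen",
    "please direct the back of the hand toward the screen",
    "please direct the thumb side toward the screen",
    "please direct the little-finger side toward the screen",
]

def collect_first_images(capture_frame, announce):
    """Announce each pose prompt in turn and capture one frame per pose."""
    images = []
    for prompt in POSE_PROMPTS:
        announce(prompt)          # e.g. text-to-speech output
        images.append(capture_frame())
    return images

# toy stand-ins: the "camera" yields frame ids 0..3, the prompt is discarded
frames = iter(range(4))
captured = collect_first_images(lambda: next(frames), lambda msg: None)
```

One frame per prompt yields the four first images covering the palm, back, thumb side, and little-finger side of the hand.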
Step 104, the display device performs mirror image processing on the at least one first image to obtain at least one second image.
In the embodiment of the application, for each of the at least one first image, the display device performs mirror image processing on that first image to obtain a second image; different first images yield different second images after mirroring. In this way, at least one second image may be obtained.
In this embodiment, the display device may perform mirroring on the first image through an image mirroring algorithm. For example, the image mirroring algorithm may be a horizontal mirroring algorithm.
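As an illustration only (not the application's actual algorithm), horizontal mirroring of a raster image amounts to reversing each pixel row:

```python
def horizontal_mirror(image):
    """Horizontally mirror an image represented as rows of pixel values."""
    return [list(reversed(row)) for row in image]

# A tiny hypothetical "first image"; each number stands for a pixel value.
first_image = [
    [0, 1, 2],
    [3, 4, 5],
]
second_image = horizontal_mirror(first_image)
print(second_image)  # [[2, 1, 0], [5, 4, 3]]
```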
Illustratively, (a) in fig. 6 is an image of the left hand of the user, (b) in fig. 6 is an image of the right hand of the user (i.e., a first image), and (c) in fig. 6 is the mirror image (i.e., a second image) obtained by performing mirror processing on the image of the right hand of the user. It can be seen that the mirror image of the user's right-hand image is substantially identical to the image of the user's left hand (in practice the texture information may differ).
Step 105, the display device generates a first preset model according to the at least one second image.
Step 102a, the display device displays the first preset model in the shooting preview interface.
In the embodiment of the present application, for first preset models of different dimensionalities (for example, two-dimensional or three-dimensional), the method by which the display device generates the first preset model according to the at least one first image may also be different.
The method for generating the first preset model by the display device is exemplarily described below with reference to the first possible implementation manner and the second possible implementation manner, respectively.
A first possible implementation: the first preset model is a two-dimensional model
In the embodiment of the present application, in a first possible implementation manner, it is assumed that the number of first images is 1, and correspondingly, the number of second images is also 1; the display device may obtain a mirrored user image (hereinafter referred to as a third image) from the second image, that is, discard a background image area in the second image, and use the third image as the first preset model.
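As a rough, hypothetical sketch of discarding the background image area (a real implementation would use proper segmentation rather than a single known background value), the mirrored user image can be cropped to the bounding box of non-background pixels:

```python
def extract_user_image(image, background_value=0):
    """Crop an image to the bounding box of its non-background pixels,
    discarding the surrounding background area."""
    rows = [r for r, row in enumerate(image)
            if any(p != background_value for p in row)]
    cols = [c for c in range(len(image[0]))
            if any(row[c] != background_value for row in image)]
    if not rows or not cols:
        return []
    return [row[min(cols):max(cols) + 1]
            for row in image[min(rows):max(rows) + 1]]

# Hypothetical mirrored image (second image); 0 marks background pixels.
second_image = [
    [0, 0, 0, 0],
    [0, 7, 8, 0],
    [0, 9, 6, 0],
    [0, 0, 0, 0],
]
third_image = extract_user_image(second_image)  # used directly as the 2-D model
print(third_image)  # [[7, 8], [9, 6]]
```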
It can be understood that the foregoing is described by taking as an example that the display device directly mirrors each first image and then obtains the mirrored user image from the mirrored first image. In actual implementation, the display device may instead first obtain the user image from each first image and then mirror the obtained user image to obtain the second image. This may be determined according to actual use requirements, and is not limited in this embodiment of the application.
A second possible implementation: the first predetermined model is a three-dimensional model
Optionally, in this embodiment of the application, in a second possible implementation manner, the number of the first images is N, where N is an integer greater than or equal to 2.
Optionally, in this embodiment of the application, before the display device generates the first preset model according to the N second images, initialization processing may be performed on the N second images, and heights of user images (for example, the first hand image) in the N initialized second images are the same.
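The initialization step above (making the user-image heights in the N second images equal) could, under the simplifying assumption of nearest-neighbour resampling, look like the following hypothetical sketch:

```python
def scale_height(image, target_height):
    """Nearest-neighbour vertical rescale so every user image
    shares the same height before model generation."""
    src_height = len(image)
    return [list(image[int(r * src_height / target_height)])
            for r in range(target_height)]

# A 2-row hypothetical user image stretched to a common height of 4 rows.
normalized = scale_height([[1], [2]], 4)
print(normalized)  # [[1], [1], [2], [2]]
```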
Alternatively, in this embodiment of the application, the display device may generate the first preset model according to the N second images after the initialization processing by the following two methods (i.e., method 1 and method 2 below).
Method 1:
Optionally, in this embodiment of the application, in method 1, the display device may obtain depth information corresponding to the user image from each second image to obtain N pieces of depth information, and then generate the first preset model according to the N second images and the N pieces of depth information.
For example, the first preset model may be generated according to the N second images and the N pieces of depth information through a depth map algorithm or a gray-scale correlation and feature matching algorithm.
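For illustration, the core of a depth-map approach is back-projecting each pixel that has a depth value into a 3-D camera-space point via the pinhole camera model. The helper below is a hypothetical sketch (the camera intrinsics fx, fy, cx, cy are assumptions, not values from this application):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth value into a 3-D point using the
    pinhole model: x = (u - cx) * d / fx, y = (v - cy) * d / fy, z = d."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

def depth_map_to_points(depth_map, fx=1.0, fy=1.0, cx=0.0, cy=0.0):
    """Convert a per-pixel depth map into a 3-D point cloud,
    skipping pixels without depth (None)."""
    points = []
    for v, row in enumerate(depth_map):
        for u, d in enumerate(row):
            if d is not None:
                points.append(backproject(u, v, d, fx, fy, cx, cy))
    return points

# Hypothetical 2x2 depth map; None marks pixels with no depth estimate.
depth_map = [
    [None, 2.0],
    [4.0, None],
]
print(depth_map_to_points(depth_map))  # [(2.0, 0.0, 2.0), (0.0, 4.0, 4.0)]
```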
Method 2:
In this embodiment of the application, in method 2, the display device may obtain N sets of first feature point information from the N second images, and obtain, through the N sets of first feature point information, K two-dimensional contour lines (for example, two-dimensional contour lines of the user's hand, such as contour lines of cross sections of the hand), where K is an integer greater than 0. Then, the first preset model is generated based on the K two-dimensional contour lines and the N second images. Each set of first feature point information may indicate information of at least one feature point in one second image.
Optionally, in this embodiment of the application, the display device may obtain N sets of first hand feature point information through a grabcut image segmentation algorithm.
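As a crude, hypothetical stand-in for deriving a two-dimensional cross-section contour line from a segmented hand mask (the real method would use the GrabCut feature points mentioned above), one scanline's foreground extent can be taken as a cross-section:

```python
def cross_section_contour(mask, row):
    """Return the (left, right) x-extent of the foreground on one scanline of a
    binary mask -- a crude stand-in for a cross-section contour of the hand."""
    xs = [x for x, v in enumerate(mask[row]) if v]
    return (min(xs), max(xs)) if xs else None

# Hypothetical binary hand mask (1 = hand pixel, 0 = background).
mask = [
    [0, 1, 1, 0],
    [1, 1, 1, 1],
]
print(cross_section_contour(mask, 0))  # (1, 2)
print(cross_section_contour(mask, 1))  # (0, 3)
```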
It should be noted that, in the embodiment of the present application, method 1 and method 2 are only exemplary, and in actual implementation the first preset model may also be generated according to any other possible method. For example, when the first preset model is a hand model, the first hand model may be generated by stretching the palm image (or the back-of-hand image) and then stitching the stretched image with the back-of-hand image (or the palm image). For another example, N groups of second feature point information may be extracted from the N second images, and the N groups of second feature point information may then be matched with preset feature point information in a preset model to obtain the first preset model; each group of second feature point information may indicate information of at least one feature point in one second image.
Optionally, in this embodiment of the application, when the first preset model is a three-dimensional model, the display device displays the first preset model on the shooting preview interface, and specifically may map the first preset model to the shooting preview interface.
It should be noted that, in the foregoing embodiment, the example is given by performing mirror image processing on at least one first image and generating the first preset model by using the mirrored image, and in an actual implementation, the preset model may be generated directly by using at least one first image.
In this embodiment of the application, when the first preset model is a right-hand model, since the left hand and the right hand of the user are mirror images of each other, a mirror image (i.e., at least one second image) of a left-hand image (i.e., at least one first image) of the user may be used as the right-hand image of the user, and thus a hand model generated according to the at least one second image may be used as the right-hand model. Therefore, when the user holds the electronic device with the right hand, the image (namely, at least one first image) of the left hand of the user can be collected, and the right-hand model capable of representing the right hand of the user is generated according to the image of the left hand of the user, so that when the display device displays the right-hand model, the effect that both hands are in the picture can be presented.
In this embodiment of the application, since the display device may generate the first preset model by mirroring the at least one first image and then generating the at least one second image, after the display device displays the first preset model in the shooting preview interface, an effect that the first object (for example, a right hand) and the second object (for example, a left hand) are both within a camera capture range may be presented, where the second object and the first object are mirror images of each other. Thus, the flexibility of the auxiliary display can be further improved.
Further, in the second possible implementation manner, since the display device mirrors the reference images (i.e., the at least one first image) used for generating the preset model before generating the preset model, rather than mirroring the generated model itself, the amount of mirroring calculation can be reduced, and the calculation cost can thus be saved.
Optionally, in this embodiment of the application, a third image is displayed in the shooting preview interface, and the third image may include an image of the target object. The third image may be an image area in a preview image currently captured by the camera, and the target object may be an object currently within a capture range of the camera.
Optionally, in this embodiment of the application, the target object may be at least one of a human body and an object.
Optionally, in the embodiment of the present application, the human body may specifically include arms, hands, a head, a body, legs, and the like of the human body; the object may specifically include living objects (e.g., cats, fish, birds, etc.) and inanimate objects (e.g., tables, chairs, packaging boxes, remote controls, displays, etc.). The method can be determined according to actual use requirements, and the embodiment of the application is not limited.
Optionally, in this embodiment of the application, since the user may trigger the first preset model to move in the shooting preview interface through input to the first preset model, when the third image is displayed in the shooting preview interface, the user may, through input to the first preset model, trigger the display device to display the first preset model in an area of the third image that meets the user's actual use requirements.
Illustratively, in the embodiment of the present application, after the step 102, the display method provided in the embodiment of the present application may further include the following steps 106 and 107.
Step 106, the display device receives a second input to the first preset model.
Step 107, in response to the second input, the display device displays the first preset model in the target area of the third image.
The target area may be an area where at least one feature point of the target object is located in the third image. It is to be understood that, in the embodiment of the present application, the region where the at least one feature point is located in the third image refers to the region where the image of the at least one feature point is located in the third image.
Optionally, in this embodiment of the application, the second input may be used to determine a display area of the first preset model in the shooting preview interface.
Optionally, in this embodiment of the application, the target object may be any identifiable object.
Optionally, in this embodiment of the application, the second input may be a touch input to the first preset model (manner 1), or a voice input to the first preset model (manner 2).
Optionally, in this embodiment of the application, in mode 1, the second input may specifically be a dragging input of the first preset model by the user. In this case, the first preset model moves along the input trajectory of the second input; that is, the display device may use the area containing the dragging end position of the dragging input as the target area, where the image in the target area is the image of at least one feature point of the target object.
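A hypothetical sketch of mode 1 (the names, regions, and coordinates are invented for illustration): the feature-point region containing the drag end position is taken as the target area:

```python
def target_area_from_drag(drag_trajectory, feature_regions):
    """Given a drag input's trajectory, pick the feature-point region
    (x0, y0, x1, y1) that contains the drag end position."""
    end_x, end_y = drag_trajectory[-1]
    for name, (x0, y0, x1, y1) in feature_regions.items():
        if x0 <= end_x <= x1 and y0 <= end_y <= y1:
            return name
    return None

# Hypothetical feature-point regions recognized in the third image.
feature_regions = {"thumb": (0, 0, 50, 50), "wrist": (0, 200, 80, 300)}
drag = [(200, 250), (120, 150), (30, 20)]  # drag ends inside the thumb region
print(target_area_from_drag(drag, feature_regions))  # thumb
```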
Illustratively, as shown in (b) of fig. 7, the shooting preview interface 70 includes a right-hand image 71 of the user (i.e., an image of the target object) currently captured by the camera and a left-hand model 72 of the user (i.e., the first preset model). In the above-described mode 1, as shown in (b) of fig. 7, the user may drag the left-hand model 72 in the direction indicated by the dotted arrow; that is, the display device receives the second input of the user. Then, as shown in (c) of fig. 7, the display device may acquire the drag trajectory of the drag input and control the left-hand model 72 to move along the drag trajectory to the target area 73. At this time, the at least one feature point of the target object is the thumb of the user's right hand.
Optionally, in this embodiment of the application, in the above mode 2, the display device may recognize the image content of the third image, and obtain the target content information, where the target content information may include the feature point of the target object and the position information of the feature point of the target object in the third image.
Optionally, in this embodiment of the present application, the feature points of an object may include at least one of the following: shape, color, pattern, text description.
In the embodiment of the application, after the display device obtains the target content information, when a user inputs a voice message, the display device can perform semantic analysis on the voice message and judge whether a keyword indicating at least one feature point of a target object exists in the voice message; if there is a keyword indicating at least one feature point of the target object, the display apparatus may control the first preset model to move to an area where the at least one feature point is located in the third image.
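A hypothetical sketch of mode 2: keywords recognized in the voice message are matched against the target content information, yielding the position to which the first preset model should move (the keyword-matching helper and the data layout are assumptions for illustration, not the application's actual semantic analysis):

```python
def handle_voice_input(voice_text, target_content_info):
    """Match keywords from recognized speech against the feature points found
    in the third image; return the position the model should move to."""
    for keyword, position in target_content_info.items():
        if keyword in voice_text:
            return position
    return None

# Hypothetical recognition result: feature point -> position in the third image.
target_content_info = {"apple": (120, 340), "barcode": (40, 80)}
print(handle_voice_input("move to the apple", target_content_info))  # (120, 340)
```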
Illustratively, as shown in (a) of fig. 8, the shooting preview interface 80 displays a left-hand model 81 of the user (i.e., the first preset model) and a third image (a preview image captured by the camera), where the third image includes an image 82 of a packing box (i.e., the target object) on which an apple pattern is printed. When the user says "apple", the display device receives a second input to the left-hand model 81. Then, in response to the second input, as shown in (b) of fig. 8, the display device may control the left-hand model 81 to move downwards to an area 83 where the apple pattern is located; that is, the display device displays the first preset model in the target area of the third image in response to the second input.
In the embodiment of the application, the user can trigger the display device to update the display position of the first preset model through different inputs, so that the flexibility and convenience for operating the first preset model can be improved, and the man-machine interaction performance can be improved.
It can be understood that, since the third image is a preview image captured by the camera and the preview image is updated in real time, the display device can periodically recognize the content of the third image to ensure that the target content information corresponds to the most recently captured preview image.
The display method provided by the embodiment of the present application is exemplarily described below with reference to specific examples.
Example 1: suppose that the user needs to demonstrate hand acupoints to a live broadcast audience through the display device while holding the electronic device with the left hand. As shown in (a) of fig. 7, the live interface 70 includes an image of the right hand of the user; the right hand of the user is within the acquisition range of the (front) camera, and the left hand of the user, which holds the electronic device, is outside the acquisition range of the camera. At this time, the user can click the "single-hand auxiliary" control 74 (i.e., the first control) in the live interface with the left hand. Then, as shown in (b) of fig. 7, the display device may display a "simulated gesture" 72 (i.e., the first hand model) in the live interface; the "simulated gesture" 72 is generated from the image of the right hand of the user captured by the display device after receiving the click input, and can be used to represent a left-hand image of the user. The display device can then continue to collect the image 71 of the right hand of the user through the camera and display it in the live interface. In this case, the user may input to the "simulated gesture" 72 with the left hand to trigger the display device to update the display position of the "simulated gesture" 72, for example, dragging the "simulated gesture" from the wrist image of the right hand to the thumb image of the right hand.
It can be seen that, when the user holds the electronic device with one hand and the display device captures images through the front camera, the user can, through touch input to the "simulated gesture", use the "simulated gesture" in cooperation with the user's right hand to complete a hand-acupoint demonstration that would otherwise require both hands, presenting to the live broadcast audience the effect of the user demonstrating hand acupoints with both hands. Thus, operation interactivity and convenience can be improved.
In the embodiment of the application, the user can trigger the display device to update the display area of the first preset model through inputting the first preset model, so that the display area of the first preset model can meet the actual use requirement of the user.
Optionally, in this embodiment of the application, when the user performs a live demonstration operation through the electronic device, after displaying the first preset model in the shooting preview interface, the display device may further obtain the image displayed in the shooting preview interface (for example, a fourth image described below). The fourth image may include the preview image most recently acquired by the camera and the first preset model displayed on the preview image. The display device may then send the obtained fourth image to an electronic device of the live broadcast audience (for example, a target device described below), so that the audience may watch the user's demonstration in real time through the target device.
Illustratively, in the embodiment of the present application, after the step 102, the display method provided in the embodiment of the present application may further include the following step 108 and step 109.
Step 108, the display device acquires the image displayed in the shooting preview interface to obtain a fourth image.
Step 109, the display device sends the fourth image to the target device.
Optionally, in this embodiment of the application, the fourth image includes a preview image acquired by the camera in real time and an image of the first preset model.
In this embodiment, the display apparatus may transmit the fourth image to the target device through a network.
In the embodiment of the application, the display device can transmit the first preset model and the preview image collected by the camera to the target device in real time, so that the live broadcast audience is ensured to be able to watch the user's operation process; in this way, the interactivity of the demonstration operation can be improved, and the audience can be ensured a better viewing experience.
Optionally, in this embodiment of the application, assuming that the first preset model is an upper-body model of the user, before displaying the first preset model in the shooting preview interface, the display device may first adjust the size (for example, the display size) of the first preset model and of the upper-body image acquired by the camera in real time. Specifically, after this adjustment, when the display device simultaneously displays the first preset model and the upper-body image in the shooting preview interface, the overlapping area between the first preset model and the upper-body image is smaller than a preset threshold. In other words, the first preset model and the upper-body image can be displayed in proportion in the shooting preview interface, so as to present a display effect in which the whole body of the user is within the acquisition range of the camera.
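The overlap condition above can be illustrated with a simple rectangle-intersection test (a hypothetical sketch; the rectangles and threshold are invented for illustration):

```python
def overlap_area(rect_a, rect_b):
    """Area of intersection of two (x0, y0, x1, y1) rectangles."""
    w = min(rect_a[2], rect_b[2]) - max(rect_a[0], rect_b[0])
    h = min(rect_a[3], rect_b[3]) - max(rect_a[1], rect_b[1])
    return max(w, 0) * max(h, 0)

def sizes_acceptable(model_rect, body_rect, threshold):
    """True if the scaled first preset model and the live upper-body image
    overlap by less than the preset threshold."""
    return overlap_area(model_rect, body_rect) < threshold

# Hypothetical display rectangles after scaling; overlap area is 2000 < 2500.
print(sizes_acceptable((0, 0, 100, 200), (90, 0, 200, 200), 2500))  # True
```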
It should be noted that, for the display method provided in the embodiment of the present application, the execution body may be a display device, or a control module in the display device for executing the display method. In the embodiment of the present application, a display device executing the display method is taken as an example to describe the display device provided in the embodiment of the present application.
As shown in fig. 9, an embodiment of the present application provides a display device 90, where the display device 90 may include: a receiving module 91 and a display module 92. The receiving module 91 may be configured to receive a first input to a first control in the shooting preview interface, where the first control may be used to trigger an auxiliary display function; the display module 92 may be configured to display a first preset model in the shooting preview interface in response to the first input received by the receiving module 91, where the first preset model may be an auxiliary model in an auxiliary display function, and the first preset model is generated according to at least one first image, and the at least one first image may be a user image captured by a display device.
In the display device provided by the embodiment of the application, when a user needs to perform an operation demonstration through an electronic device, the user can, by inputting to a first control used for triggering the auxiliary display function in a shooting preview interface, trigger the display device to display in the shooting preview interface an auxiliary model generated according to an acquired user image; the user can then input to the auxiliary model through the hand holding the electronic device, so that the auxiliary model cooperates with the user to perform the operation demonstration.
Optionally, in this embodiment of the application, the display device may further include an acquisition module, a generation module, and a mirror module. The acquisition module may be configured to acquire at least one first image before the display module 92 displays the first preset model in the shooting preview interface; the mirror image module can be used for carrying out mirror image processing on at least one first image acquired by the acquisition module to obtain at least one second image; the generating module may be configured to generate the first preset model according to at least one second image obtained by mirroring the mirroring module.
In this embodiment of the application, since the display device may generate the first preset model by mirroring the at least one first image and then generating the at least one second image, after the display device displays the first preset model in the shooting preview interface, an effect that the first object (for example, a right hand) and the second object (for example, a left hand) are both within a camera capture range may be presented, where the second object and the first object are mirror images of each other. Thus, the flexibility of the auxiliary display can be further improved.
Optionally, in this embodiment of the application, a third image is displayed in the shooting preview interface, and the third image may include an image of the target object; the receiving module 91 may be further configured to receive a second input to the first preset model after the first preset model is displayed in the shooting preview interface; the display module 92 may be further configured to display the first preset model in a target area of the third image in response to the second input received by the receiving module 91, where the target area may be an area in the third image where at least one feature point of the target object is located.
In the embodiment of the application, the user can trigger the display device to update the display area of the first preset model through inputting the first preset model, so that the display area of the first preset model can meet the actual use requirement of the user.
Optionally, in this embodiment of the application, the second input may be a touch input or a voice input to the first preset model.
In the embodiment of the application, the user can trigger the display device to update the display position of the first preset model through different inputs, so that the flexibility and convenience for operating the first preset model can be improved, and the man-machine interaction performance can be improved.
Optionally, in this embodiment of the present application, the display device may further include an obtaining module and a sending module; the obtaining module may be configured to obtain an image displayed on the shooting preview interface after the display module 92 displays the first preset model in the shooting preview interface, so as to obtain a fourth image; the sending module may be configured to send the fourth image acquired by the acquiring module to the target device.
In the embodiment of the application, the display device can transmit the first preset model and the preview image collected by the camera to the target equipment in real time, so that the operation process that a live object can watch a user can be guaranteed, the operation interactivity of the demonstration operation can be improved, and the live object can be guaranteed to have better watching experience.
The display device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, and the like; the embodiments of the present application are not specifically limited thereto.
The display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited thereto.
The display device provided in the embodiment of the present application can implement each process implemented by the display device in the method embodiments of fig. 1 to 8, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 10, an electronic device 200 is further provided in this embodiment of the present application, and includes a processor 202, a memory 201, and a program or an instruction stored in the memory 201 and executable on the processor 202, where the program or the instruction is executed by the processor 202 to implement each process of the display method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, or combine some components, or arrange the components differently; details are not repeated here.
The user input unit 1007 is configured to receive a first input to a first control in the shooting preview interface, where the first control is used to trigger an auxiliary display function; a display unit 1006, configured to display a first preset model in the shooting preview interface in response to a first input received by the user input unit 1007, where the first preset model is an auxiliary model in the auxiliary display function, and the first preset model is generated according to at least one first image, and the at least one first image is a captured user image.
In the electronic device provided by the embodiment of the application, when a user needs to perform an operation demonstration through the electronic device, the user can, by inputting to a first control used for triggering the auxiliary display function in a shooting preview interface, trigger the electronic device to display in the shooting preview interface an auxiliary model generated according to an acquired user image (for example, a display model generated based on a hand image of the user); the user can then input to the auxiliary model through the hand holding the electronic device, so that the auxiliary model cooperates with the user to perform the operation demonstration, for example, presenting a picture of a two-handed demonstration in the shooting preview interface. This improves the demonstration effect and the flexibility of performing the operation demonstration.
Optionally, in this embodiment of the application, the processor 1010 may further acquire at least one first image through a camera of the electronic device before the display unit 1006 displays the first preset model in the shooting preview interface; mirror the at least one first image to obtain at least one second image; and generate the first preset model according to the at least one second image obtained through mirroring.
In the embodiment of the application, since the electronic device may generate the first preset model by mirroring the at least one first image to obtain at least one second image and then generating the model from the at least one second image, after the electronic device displays the first preset model in the shooting preview interface, an effect that a first object (for example, a right hand) and a second object (for example, a left hand) are both within the camera capture range may be presented, where the second object and the first object are mirror images of each other. Thus, the flexibility of the auxiliary display can be further improved.
Optionally, in this embodiment of the application, a third image is displayed in the shooting preview interface, and the third image may include an image of the target object; the user input unit 1007 may be further configured to receive a second input to the first preset model after the display unit 1006 displays the first preset model in the shooting preview interface; the display unit 1006 may be further configured to display the first preset model in a target area of the third image in response to a second input received by the user input unit 1007, where the target area may be an area in the third image where at least one feature point of the target object is located.
In the embodiment of the application, the user can trigger the electronic device to update the display area of the first preset model through an input on the first preset model, so that the display area of the first preset model meets the actual use requirements of the user.
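The target area, described above as the region where at least one feature point of the target object is located, can be illustrated as the bounding box of those feature points. The helper below is a hypothetical sketch (the patent does not specify how the area is computed); `feature_points` is assumed to be a list of (x, y) pixel coordinates:

```python
def target_area(feature_points):
    """Return the axis-aligned bounding box (x0, y0, x1, y1) of the
    target object's feature points in the third image.

    A minimal sketch: the 'area where the feature points are located'
    is taken to be their bounding rectangle.
    """
    xs = [p[0] for p in feature_points]
    ys = [p[1] for p in feature_points]
    return (min(xs), min(ys), max(xs), max(ys))
```

The first preset model could then be rendered within this rectangle in response to the second input.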
Optionally, in this embodiment of the application, the second input may be a touch input or a voice input to the first preset model.
In the embodiment of the application, the user can trigger the electronic device to update the display position of the first preset model through different types of input, which improves the flexibility and convenience of operating the first preset model and enhances human-computer interaction.
Optionally, in this embodiment of the application, the input unit 1004 may be configured to obtain an image displayed on the shooting preview interface after the display unit 1006 displays the first preset model in the shooting preview interface, so as to obtain a fourth image; the radio frequency unit 1001 may be configured to transmit the fourth image acquired by the input unit 1004 to the target device.
In the embodiment of the application, the electronic device can transmit the first preset model and the preview image captured by the camera to the target device in real time, ensuring that a live-stream audience can watch the user's operation process, which improves the interactivity of the demonstration and gives the audience a better viewing experience.
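Transmitting the fourth image to the target device can be sketched as sending one encoded frame over a TCP connection. This is only an assumption for illustration; a real implementation would more likely use a streaming protocol such as RTP or RTMP, and all names here are hypothetical:

```python
import socket


def frame_packet(frame_bytes: bytes) -> bytes:
    """Prefix a frame with its 4-byte big-endian length so the
    receiver can delimit frames on the byte stream."""
    return len(frame_bytes).to_bytes(4, "big") + frame_bytes


def send_preview_frame(frame_bytes: bytes, host: str, port: int) -> None:
    """Send one encoded preview frame (the 'fourth image', e.g. JPEG
    bytes) to the target device over TCP.

    Sketch only: real-time transmission as described in the patent
    would typically use a dedicated streaming protocol.
    """
    with socket.create_connection((host, port)) as sock:
        sock.sendall(frame_packet(frame_bytes))
```

Length-prefix framing is used here because TCP is a byte stream with no message boundaries of its own.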
It should be understood that, in the embodiment of the present application, the input unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042; the Graphics Processing Unit 10041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts: a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1010 may integrate an application processor, which mainly handles the operating system, user interface, and applications, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements the processes of the display method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the display method embodiment, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A display method, the method comprising:
receiving a first input of a first control in a shooting preview interface, wherein the first control is used for triggering an auxiliary display function;
in response to the first input, displaying a first preset model in the shooting preview interface, wherein the first preset model is an auxiliary model in the auxiliary display function, the first preset model is generated according to at least one first image, and the at least one first image is an acquired user image.
2. The method of claim 1, wherein before displaying the first preset model in the capture preview interface, the method further comprises:
acquiring the at least one first image;
carrying out mirror image processing on the at least one first image to obtain at least one second image;
and generating the first preset model according to the at least one second image.
3. The method according to claim 1 or 2, wherein a third image is displayed in the shooting preview interface, and the third image comprises an image of a target object;
after the first preset model is displayed in the shooting preview interface, the method further includes:
receiving a second input to the first preset model;
in response to the second input, displaying the first preset model in a target area of the third image, wherein the target area is an area in the third image where at least one feature point of the target object is located.
4. The method of claim 3, wherein the second input is a voice input or a touch input.
5. The method according to claim 1 or 2, wherein after displaying the first preset model in the photographing preview interface, the method further comprises:
acquiring an image displayed on the shooting preview interface to obtain a fourth image;
and sending the fourth image to a target device.
6. A display device, characterized in that the device comprises: a receiving module and a display module;
the receiving module is used for receiving a first input of a first control in a shooting preview interface, and the first control is used for triggering an auxiliary display function;
the display module is configured to display a first preset model in the shooting preview interface in response to the first input received by the receiving module, where the first preset model is an auxiliary model in the auxiliary display function, the first preset model is generated according to at least one first image, and the at least one first image is an acquired user image.
7. The apparatus of claim 6, further comprising an acquisition module, a mirroring module, and a generation module;
the acquisition module is used for acquiring the at least one first image before the display module displays the first preset model in the shooting preview interface;
the mirror image module is used for carrying out mirror image processing on the at least one first image acquired by the acquisition module to obtain at least one second image;
the generating module is configured to generate the first preset model according to the at least one second image obtained by mirroring the mirroring module.
8. The apparatus according to claim 6, wherein a third image is displayed in the shooting preview interface, and the third image comprises an image of a target object;
the receiving module is further configured to receive a second input to the first preset model after the display module displays the first preset model in the shooting preview interface;
the display module is further configured to display the first preset model in a target area of the third image in response to the second input received by the receiving module, wherein the target area is an area in the third image where at least one feature point of the target object is located.
9. The apparatus of claim 8, wherein the second input is a touch input or a voice input to the first predetermined model.
10. The apparatus according to claim 6 or 7, wherein the apparatus further comprises an obtaining module and a sending module;
the acquisition module is used for acquiring an image displayed on the shooting preview interface after the display module displays a first preset model in the shooting preview interface, so as to obtain a fourth image;
the sending module is configured to send the fourth image acquired by the acquiring module to a target device.
CN202010582016.3A 2020-06-23 2020-06-23 Display method and device and electronic equipment Active CN111901518B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010582016.3A CN111901518B (en) 2020-06-23 2020-06-23 Display method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111901518A true CN111901518A (en) 2020-11-06
CN111901518B CN111901518B (en) 2022-05-17

Family

ID=73206467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010582016.3A Active CN111901518B (en) 2020-06-23 2020-06-23 Display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111901518B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112473121A (en) * 2020-11-13 2021-03-12 海信视像科技股份有限公司 Display device and method for displaying dodging ball based on limb recognition
WO2024017236A1 (en) * 2022-07-22 2024-01-25 维沃移动通信有限公司 Shooting interface display method and apparatus, electronic device and readable storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020181803A1 (en) * 2001-05-10 2002-12-05 Kenichi Kawakami System, method and program for perspective projection image creation, and recording medium storing the same program
US20040021664A1 (en) * 2002-07-31 2004-02-05 Canon Kabushiki Kaisha Information processing device and method
US20060239539A1 (en) * 2004-06-18 2006-10-26 Topcon Corporation Model forming apparatus, model forming method, photographing apparatus and photographing method
US20110007086A1 (en) * 2009-07-13 2011-01-13 Samsung Electronics Co., Ltd. Method and apparatus for virtual object based image processing
CN108089715A (en) * 2018-01-19 2018-05-29 赵然 A kind of demonstration auxiliary system based on depth camera
FR3062229A1 (en) * 2017-01-26 2018-07-27 Parrot Air Support METHOD FOR DISPLAYING ON A SCREEN AT LEAST ONE REPRESENTATION OF AN OBJECT, COMPUTER PROGRAM, ELECTRONIC DISPLAY DEVICE AND APPARATUS THEREOF
CN108495032A (en) * 2018-03-26 2018-09-04 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108961424A (en) * 2018-07-04 2018-12-07 百度在线网络技术(北京)有限公司 Virtual information processing method, equipment and storage medium
CN109859307A (en) * 2018-12-25 2019-06-07 维沃移动通信有限公司 A kind of image processing method and terminal device
CN109992107A (en) * 2019-02-28 2019-07-09 济南大学 Virtual control device and its control method
CN111111194A (en) * 2019-11-28 2020-05-08 腾讯科技(深圳)有限公司 Virtual object control method, device, storage medium and electronic device

Also Published As

Publication number Publication date
CN111901518B (en) 2022-05-17

Similar Documents

Publication Publication Date Title
US10394334B2 (en) Gesture-based control system
CN108255304B (en) Video data processing method and device based on augmented reality and storage medium
CN106716302B (en) Method, apparatus, and computer-readable medium for displaying image
CN103793060B (en) A kind of user interactive system and method
Shen et al. Vision-based hand interaction in augmented reality environment
CN108646997A (en) A method of virtual and augmented reality equipment is interacted with other wireless devices
US20120293544A1 (en) Image display apparatus and method of selecting image region using the same
JP6165485B2 (en) AR gesture user interface system for mobile terminals
CN111580661A (en) Interaction method and augmented reality device
CN109144252B (en) Object determination method, device, equipment and storage medium
CN107479712B (en) Information processing method and device based on head-mounted display equipment
CN111901518B (en) Display method and device and electronic equipment
CN109582122A (en) Augmented reality information providing method, device and electronic equipment
Shim et al. Gesture-based interactive augmented reality content authoring system using HMD
KR101488662B1 (en) Device and method for providing interface interacting with a user using natural user interface device
CN106502401B (en) Image control method and device
CN113961107B (en) Screen-oriented augmented reality interaction method, device and storage medium
US11042215B2 (en) Image processing method and apparatus, storage medium, and electronic device
Lee et al. Tunnelslice: Freehand subspace acquisition using an egocentric tunnel for wearable augmented reality
Ahmed et al. Interaction techniques in mobile Augmented Reality: State-of-the-art
CN112702533B (en) Sight line correction method and sight line correction device
JP6699406B2 (en) Information processing device, program, position information creation method, information processing system
JP2015052895A (en) Information processor and method of processing information
Nurai et al. A research protocol of an observational study on efficacy of microsoft kinect azure in evaluation of static posture in normal healthy population
TW201925989A (en) Interactive system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant