CN113849142A - Image display method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number: CN113849142A
Application number: CN202111131215.3A
Authority: CN (China)
Prior art keywords: image, display, position parameter, static image, current
Legal status: Pending (the status listed is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 李禹�, 张聪, 胡震宇
Current Assignee: Shenzhen Huole Science and Technology Development Co Ltd
Original Assignee: Shenzhen Huole Science and Technology Development Co Ltd
Application filed by Shenzhen Huole Science and Technology Development Co Ltd
Priority to: CN202111131215.3A
Publication of: CN113849142A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1415: Digital output to display device; Cooperation and interconnection of the display device with other functional units with means for detecting differences between the image stored in the host and the images displayed on the displays
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013: Eye tracking input arrangements

Abstract

The present disclosure provides an image display method and apparatus, an electronic device, and a computer-readable storage medium. The method first detects a first position parameter of a target user's focus of attention in a display plane in which static images are displayed, then determines a current target static image according to the first position parameter and a second position parameter of each static image in the display plane, and finally processes the current target static image dynamically so that it becomes a current dynamic image or video, which is then played and displayed. By detecting the target user's focus of attention and dynamically displaying the display objects within the focus range, the disclosure diversifies image display and makes the user experience more engaging.

Description

Image display method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of image display technologies, and in particular, to an image display method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of computer and multimedia technology, the multimedia resources people encounter are increasingly abundant. People are now accustomed to taking pictures and videos with terminal devices such as mobile phones and tablet computers, or to creating or downloading pictures and videos, and storing them in electronic albums for later viewing. However, current electronic albums are displayed in a single, static manner and cannot dynamically display the pictures or videos within the user's current line of sight according to the user's focus of attention.
Therefore, it is desirable to provide an image display method that alleviates the monotony of current image display methods.
Disclosure of Invention
The present disclosure provides an image display method, an image display apparatus, an electronic device, and a computer-readable storage medium, which alleviate the technical problem that current image display modes are relatively monotonous.
In order to solve the technical problem, the present disclosure provides the following technical solutions:
the present disclosure provides an image display method, including:
detecting a first position parameter of a focus of attention of a target user in a display plane; a static image is displayed in the display plane;
determining a current target static image according to the first position parameter and a second position parameter of each static image in the display plane;
and dynamically processing the current target static image so that it becomes a current dynamic image or video, and playing and displaying the current dynamic image or video.
Meanwhile, the present disclosure provides an image display device, including:
the first position parameter detection module is used for detecting a first position parameter of the attention focus of the target user in the display plane; a static image is displayed in the display plane;
the target determining module is used for determining a current target static image according to the first position parameter and the second position parameter of each static image in the display plane;
and the dynamic display module is used for dynamically processing the current target static image, so that the current target static image is changed into a current dynamic image or video and is displayed in a playing mode.
Optionally, the first position parameter detecting module includes:
the information acquisition module is used for acquiring binocular position information and head posture information of a target user;
the sight line generation module is used for inputting the binocular position information and the head posture information into a sight line evaluation model to obtain binocular sight lines of the target user;
the focus determining module is used for determining the attention focus of the target user according to the binocular vision and the display plane;
and the position parameter determining module is used for determining a first position parameter of the attention focus in the display plane according to a display plane coordinate system and the attention focus.
Optionally, the image display apparatus further comprises:
a prediction module for predicting a third location parameter based on a location prediction model and the first location parameter;
a future target determining module, configured to determine a future target static image according to the third position parameter and the second position parameter of each static image in the display plane;
and the dynamic preprocessing module is used for dynamically preprocessing the future target static image, changing the future target static image into a future dynamic image or video, and playing and displaying the future dynamic image or video after the current dynamic image or video is played and displayed.
Optionally, the goal determination module comprises:
the second position parameter determining module is used for determining second position parameters of the static images on the display plane according to the relative positions of the static images on the display plane;
the user attention area generating module is used for generating a user attention area according to the first position parameter;
the association parameter determining module is used for determining an association parameter between the second position parameter of each static image on the display plane and the user attention area;
and the object determining module is used for determining the current target static image from all the static images according to the association parameters.
Optionally, the dynamic display module comprises:
the attribute information acquisition module is used for acquiring the attribute information of the current target static image;
the associated image determining module is used for determining a second image associated with the current target static image according to the attribute information;
and the first video generation module is used for generating a dynamic video and playing and displaying the dynamic video based on the current target static image and the second image.
Optionally, the dynamic display module comprises:
the similar image acquisition module is used for acquiring a third image similar to the current target static image;
and the second video generation module is used for generating a dynamic image and playing and displaying the dynamic image based on the current target static image and the third image.
Optionally, the dynamic display module comprises:
the characteristic point acquisition module is used for acquiring target characteristic points of the current target static image;
and the dynamic image generation module is used for dynamically processing the current target static image based on the target characteristic point, generating a dynamic image and playing and displaying the dynamic image.
Furthermore, the present disclosure provides an electronic device comprising a processor and a memory, the memory being configured to store a computer program, and the processor being configured to execute the computer program in the memory to perform the steps of the image display method.
Furthermore, the present disclosure provides a computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the above image display method.
Advantageous effects: the present disclosure provides an image display method and apparatus, an electronic device, and a computer-readable storage medium. The method first detects a first position parameter of a target user's focus of attention in a display plane in which static images are displayed, then determines a current target static image according to the first position parameter and a second position parameter of each static image in the display plane, and finally processes the current target static image dynamically so that it becomes a current dynamic image or video, which is played and displayed. In short, the target static image is determined from the first position parameter of the target user's focus of attention in the display plane and the second position parameter of each static image in the display plane, and is then dynamically processed for display, so that the static image the user is looking at is converted into a dynamic presentation. This enriches the image display mode and makes the user experience more engaging.
Drawings
The technical solutions and other advantages of the present disclosure will become apparent from the following detailed description of specific embodiments of the present disclosure, which is to be read in connection with the accompanying drawings.
Fig. 1 is a schematic system architecture diagram of an image display system according to an embodiment of the present disclosure.
Fig. 2 is a schematic flowchart of an image displaying method according to an embodiment of the present disclosure.
Fig. 3 is a schematic area diagram of an album display template provided in the embodiment of the present disclosure.
Fig. 4 is a schematic diagram of a first display screen provided by the embodiment of the disclosure.
Fig. 5 is a schematic diagram of a user attention area provided by an embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of an image display apparatus provided in the embodiment of the present disclosure.
Fig. 7 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Description of reference numerals:
101-a cloud server; 102-a display device; 103-control terminal.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings. It is to be understood that the described embodiments are merely some, not all, of the embodiments of the disclosure. All other embodiments that can be derived by a person skilled in the art from the embodiments disclosed herein without creative effort shall fall within the protection scope of the present disclosure.
The terms "first," "second," and the like in the description and in the claims, and the above-described drawings of embodiments of the present disclosure, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules expressly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus, such that the division of modules presented in the disclosed embodiments is merely a logical division and may be implemented in a practical application in a different manner, such that multiple modules may be combined or integrated into another system or some features may be omitted or not implemented, and such that couplings or direct couplings or communicative connections between modules shown or discussed may be through interfaces, indirect couplings or communicative connections between modules may be electrical or the like, the embodiments of the present disclosure are not limited. Moreover, the modules or sub-modules described as separate components may or may not be physically separated, may or may not be physical modules, or may be distributed in a plurality of circuit modules, and some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiments of the present disclosure.
In the disclosed embodiments, a still image includes both a picture itself and a frame extracted from a video.
In the embodiments of the present disclosure, the target user refers to a user in the current projection display environment as detected by a sensor built into or external to the display device (a camera, a TOF (Time of Flight) sensor, an infrared sensor, a UWB (Ultra Wide Band) wireless ranging sensor, a millimeter-wave sensor, and the like).
In the embodiment of the present disclosure, the focus of attention refers to a point where the attention of the user falls in the display plane, where the attention may be determined according to the line of sight of the user, or may be determined according to the gesture, face orientation, body orientation, and the like of the user.
In the embodiment of the present disclosure, the display plane may be a picture projected by the display device, or may be a picture displayed by the display screen.
The disclosure provides an image display method, an image display device, an electronic device and a computer-readable storage medium.
Referring to fig. 1, which is a schematic diagram of the system architecture of the image display system provided by the present disclosure. As shown in fig. 1, the image display system includes at least a cloud server 101, a display device 102, and a control terminal 103, wherein:
communication links are arranged among the cloud server 101, the display device 102 and the control terminal 103 so as to realize information interaction. The type of communication link may include a wired, wireless communication link, or fiber optic cable, etc., and the disclosure is not limited thereto.
The cloud server 101 may be an independent server, or a server network or server cluster composed of multiple servers. For example, the servers described in the present disclosure include, but are not limited to, computers, network hosts, database servers, storage servers, and cloud servers consisting of one or more application servers, where a cloud server is composed of a large number of computers or network servers based on cloud computing.
The display device 102 is a device capable of projecting or displaying an image or video on a display plane. It may be connected to a computer, mobile phone, game machine, DV (Digital Video camera), etc. through different interfaces or networks to play the corresponding video or image signals. The display device 102 may be a device with a projection function, such as a projector or micro-projector, or a display such as an electronic display or a liquid crystal display.
The control terminal 103 may be a smart phone, a tablet computer, a notebook computer, a wearable device, a remote controller, or other devices capable of sending signals.
The present disclosure provides an image display system that includes a cloud server, a display device, and a control terminal. Specifically, the display device 102 obtains each display object from the cloud server 101 or the control terminal 103 and preprocesses each display object (picture/video) to obtain the still images, for example by extracting a suitable frame of a video as a static cover. A first position parameter of the target user's focus of attention in the display plane is then detected through a sensor built into or external to the display device 102, and the current target static image is determined from the static images according to the first position parameter and the second position parameter of each static image in the display plane. The display device 102 then dynamically processes the current target static image so that it becomes a current dynamic image or video, and the current picture is shown in the display plane under the display control of the control terminal 103.
A sensor built into or external to the display device 102 detects whether a user is present in the current projection display environment, and the user's focus of attention is then determined. According to the position parameter of the focus of attention and the position parameter of each static image in the display plane, the static image that lies within, or is most relevant to, the user's attention range is selected as the current target static image, which is then dynamically processed; the dynamic processing includes playing a preprocessed video, displaying a dynamic special effect of a preprocessed image, and the like.
In one embodiment, the display device 102 is connected to and communicates with the control terminal 103 to implement the solution of the embodiment of the present disclosure.
It should be noted that the system architecture shown in fig. 1 is only an example. The servers, terminals, devices, and scenarios described in the embodiments of the present disclosure are intended to illustrate the technical solutions more clearly and do not limit them; as a person of ordinary skill in the art will appreciate, the technical solutions provided are equally applicable to similar technical problems as systems evolve and new service scenarios emerge. The details are set out below; the order in which the embodiments are described is not intended to indicate any preference among them.
With reference to the system architecture above, the image display method of the present disclosure is described in detail below. Referring to fig. 2, which is a schematic flow chart of the image display method provided by the present disclosure, the method includes at least the following steps:
step 201: detecting a first position parameter of a focus of attention of a target user in a display plane; a still image is shown in the display plane.
The static images shown in the display plane are obtained either by a user uploading videos/pictures through an application on the display device or control terminal to the cloud server for storage, with the cloud server then preprocessing them, or by the display device or control terminal directly processing the stored videos/pictures.
Specifically, since a picture is already static, it needs no further static processing; its contrast, brightness, color temperature, and the like can simply be adjusted according to the user's preference or the current environment, and static stickers can be added on top of the original picture. For a video, which consists of a sequence of frames, a suitable frame can be extracted as a static cover to be shown in the display plane according to the user's preference; the same simple adjustments of contrast, brightness, color temperature, and the like can be applied, and static stickers added on top of the original frame.
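As an illustration of this preprocessing step, the following sketch extracts one frame of a video as a static cover and applies a simple contrast/brightness adjustment. It assumes OpenCV is available; the frame index and adjustment values are illustrative choices, not values from this disclosure.
```python
import cv2  # assumed available; OpenCV is one common choice for this step

def make_static_cover(video_path, frame_index=0, alpha=1.1, beta=10):
    """Extract one frame as a static cover and tweak contrast/brightness.

    alpha scales contrast, beta shifts brightness; both are hypothetical
    defaults standing in for user-preference or environment-derived values.
    """
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)  # jump to the chosen frame
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise ValueError(f"could not read frame {frame_index} of {video_path}")
    # new_pixel = alpha * pixel + beta, clipped to [0, 255]
    return cv2.convertScaleAbs(frame, alpha=alpha, beta=beta)
```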
In one embodiment, the static images may be presented in the form of an electronic album. Specifically, a preset album display template acquired from the cloud server is fused with the static images to obtain a first display picture, which is shown on the display plane. The preset album display template is a template provided by the cloud server for presenting display objects (pictures or videos); it may contain several display frames, each of which holds one display object. Fig. 3 is a schematic region view of an album display template according to an embodiment of the present disclosure; the template shown in fig. 3 contains 6 display frames, and different album display templates have different backgrounds (i.e., the regions outside the display frames); in fig. 3 the background is pure gray. The preset album display template may be a basic template preset by the system or a template customized by the user according to personal preference. Either way, it can be uploaded to the cloud server for storage and retrieved by the display device when needed. For example, filling each static image into a display frame of the album display template of fig. 3 yields the first display picture shown in fig. 4, which consists of display object A, display object B, display object C, display object D, display object E, display object F, and the template background, each display frame showing one display object; the final first display picture is projected onto the display plane.
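A minimal sketch of this fusion step, assuming Pillow is available; the template image and the (x, y, w, h) display-frame rectangles are hypothetical stand-ins for the preset album display template:
```python
from PIL import Image  # assumed available

def fuse_first_display_picture(template, stills, frames):
    """Paste each still into its (x, y, w, h) display frame on the template.

    template: PIL.Image of the album display template background;
    stills: list of PIL.Image still images; frames: list of rectangles.
    """
    canvas = template.copy()
    for still, (x, y, w, h) in zip(stills, frames):
        canvas.paste(still.resize((w, h)), (x, y))  # fit still to its frame
    return canvas
```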
The display device can sense the current display environment through a built-in or external sensor (camera, TOF sensor, infrared sensor, UWB wireless ranging sensor, millimeter-wave sensor, and the like) and judge whether a user is present. If no user is in the current display environment, all images in the current display plane remain static; when a user appears, the sensor is used to determine the user's focus of attention, and the first position parameter of that focus in the display plane is then obtained. Methods of determining the focus of attention include estimating the angle of the user's body posture or face orientation, estimating the line of sight, estimating the pointing direction of the user's finger, and the like.
In one embodiment, estimating the user's focus of attention from the line of sight specifically includes: acquiring binocular position information and head posture information of the target user; inputting the binocular position information and head posture information into a sight evaluation model to obtain the target user's binocular sight lines; determining the target user's focus of attention from the binocular sight lines and the display plane; and determining the first position parameter of the focus of attention in the display plane according to the display plane coordinate system and the focus of attention. Here, the binocular position information is a picture of the user's eyes taken by the camera, and the head posture information is a picture of the user's head taken by the camera; a binocular sight line is a ray emitted from the eyes along the gaze direction.
Specifically, a picture of the user's eyes and head can be captured by the display device's camera and input into the sight evaluation model for gaze estimation, yielding the target user's binocular sight lines. Gaze estimation methods generally fall into geometry-based and appearance-based approaches. Geometry-based methods read features of the eyes from the pictures (e.g., key points such as the eye corners and pupil positions) and compute the binocular gaze by combining these eye features with the head pose, since the gaze direction depends not only on the state of the eyes (eyeball position, degree of eye opening, etc.) but also on the head pose. Appearance-based methods directly learn a model that maps the appearance of the eyes and head to sight lines, from which the binocular sight lines are obtained directly. The point where the two sight lines intersect the display plane is the target user's focus of attention; finally, the first position parameter of the focus of attention in the display plane coordinate system can be determined from its relative position in the display plane, as described below.
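The geometric core of this step, intersecting a sight-line ray with the display plane, can be sketched as follows; it assumes the gaze model already yields a ray origin and direction in the same 3-D coordinate frame as the plane, and all names are illustrative:
```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the 3-D point where a sight-line ray meets the display plane.

    Returns None when the ray is parallel to the plane or points away from it.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    plane_point = np.asarray(plane_point, dtype=float)
    plane_normal = np.asarray(plane_normal, dtype=float)
    denom = direction.dot(plane_normal)
    if abs(denom) < 1e-9:  # ray parallel to the plane
        return None
    t = (plane_point - origin).dot(plane_normal) / denom
    return origin + t * direction if t >= 0 else None
```
The focus of attention can then be taken, for example, as the midpoint of the two eye rays' intersection points, and converted into the first position parameter by expressing it in the display plane coordinate system.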
Optionally, when the user points clearly with a finger, the focus of attention can be estimated from the pointing direction of the user's finger. The specific steps include: acquiring gesture information of the target user; recognizing the gesture information with a gesture recognition model to obtain a first attention guide line; determining the target user's focus of attention from the first attention guide line and the display plane; and determining the first position parameter of the focus of attention in the display plane according to the display plane coordinate system and the focus of attention. The gesture information may be a picture of the user's hand taken by the camera; the first attention guide line is a ray that starts from the pointing finger and extends in the direction the finger points.
Specifically, a picture of the user's hand can be taken by the display device's camera, and the gesture information is then fed to the gesture recognition model; the first attention guide line is obtained through gesture detection, gesture segmentation, gesture analysis, and static or dynamic gesture recognition. The most common gesture segmentation methods today are based on monocular vision or on stereoscopic vision. Gesture analysis, one of the key technologies of a gesture recognition system, yields the shape features or motion trajectory of a gesture; it is mainly performed by edge contour extraction, multi-feature combination methods (such as centroid plus fingertips), finger-joint tracking, and the like. Gesture recognition classifies trajectories (or points) in a model parameter space into subsets of that space; it includes static and dynamic gesture recognition, where dynamic gesture recognition can ultimately be reduced to static recognition, and is generally performed by template matching, neural networks, or hidden Markov models. Gesture recognition yields the user's finger pointing direction; the ray starting at the user's finger and extending along that direction is the first attention guide line, and the point where it intersects the display plane is the target user's focus of attention. Finally, the first position parameter of the focus of attention in the display plane coordinate system can be determined from its relative position in the display plane, as described below.
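The finger-pointing case reduces to the same ray-plane intersection as above. The sketch below builds the first attention guide line, assuming a hand-keypoint detector already provides 3-D positions of the index finger base and tip (hypothetical inputs):
```python
import numpy as np

def pointing_ray(index_base, index_tip):
    """Build the first attention guide line from two hand keypoints.

    index_base / index_tip: hypothetical 3-D keypoints from a hand detector.
    The ray starts at the fingertip and extends along base -> tip.
    """
    base = np.asarray(index_base, dtype=float)
    tip = np.asarray(index_tip, dtype=float)
    direction = tip - base
    direction /= np.linalg.norm(direction)
    return tip, direction  # feed into ray_plane_intersection() above
```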
Alternatively, if a picture of the user's eyes cannot be taken and the user is not clearly pointing, the focus of attention can be estimated from the angle of the user's body posture or the orientation of the user's face. The specific steps include: acquiring orientation information of the target user; determining a second attention guide line according to an attention discrimination model and the orientation information; determining the target user's focus of attention from the second attention guide line and the display plane; and determining the first position parameter of the focus of attention in the display plane according to the display plane coordinate system and the focus of attention. The orientation information includes the angle of the user's body posture and the orientation of the user's face.
Specifically, the user's body or face can be photographed by the display device's camera, and the orientation of the body or face is then obtained from the image through posture-angle recognition or face-orientation recognition. Treating the center of the user's body or face as a point, the ray that starts at this point and extends along the body or face orientation is the second attention guide line, and the point where it intersects the display plane is the target user's focus of attention. Finally, the first position parameter of the focus of attention in the display plane coordinate system can be determined from its relative position in the display plane, as described below.
In one embodiment, besides determining the user's focus of attention from the sensor as above, the method may further predict from the current first position parameter, using a position prediction model, where the user's focus of attention will be at a future time. The specific steps include: predicting a third position parameter according to a position prediction model and the first position parameter; determining a future target static image according to the third position parameter and the second position parameter of each static image in the display plane; and dynamically preprocessing the future target static image so that it becomes a future dynamic image or video, which is played and displayed, or cached, after the current dynamic image or video finishes playing. This improves response speed and thus further improves the user experience. The third position parameter is the position parameter of the focus of attention in the display plane at a future time.
Because the focus of attention is derived from the user's line of sight, body posture/face orientation, or finger pointing, and fetching the dynamic display content of the target display object from the cloud may be limited by network speed and similar factors, image display may suffer some delay or sluggishness. If the focus of attention can be predicted in advance, the static images within the predicted focus range can be preprocessed ahead of time, alleviating these problems. Specifically, the position prediction model is trained on a training set composed of many historical attention foci; it predicts the focus of attention at a future time from the focus at the current time, i.e., where the user is likely to look next. Suppose the relevant content is scanned once per second to acquire the focus of attention (an acquisition period of 1 s), the current time is 9:00, and the preset period is 10. Then 10 attention foci are obtained within the preset period, at times 8:51, 8:52, 8:53, 8:54, 8:55, 8:56, 8:57, 8:58, 8:59, and 9:00, and from these 10 foci the focus of attention at the future time 9:01 can be predicted. Finally, the position parameter of the predicted focus of attention in the display plane coordinate system is determined from its predicted relative position in the display plane, as described below.
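The disclosure leaves the position prediction model open; one minimal reading is a linear extrapolation over the sampled history, sketched below with illustrative names:
```python
import numpy as np

def predict_next_focus(history, period=10):
    """Predict the third position parameter from recent attention foci.

    history: list of (x, y) foci sampled once per second; the last `period`
    samples are fitted with a degree-1 polynomial per axis and extrapolated
    one step ahead (e.g., from 8:51..9:00 to 9:01 in the example above).
    """
    pts = np.asarray(history[-period:], dtype=float)
    t = np.arange(len(pts))
    x_next = np.polyval(np.polyfit(t, pts[:, 0], 1), len(pts))
    y_next = np.polyval(np.polyfit(t, pts[:, 1], 1), len(pts))
    return float(x_next), float(y_next)
```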
After the user's focus of attention is acquired, its position in the display plane must be determined. To quantify this position, the present disclosure introduces the concept of a position parameter, which is obtained as follows: model the display plane to obtain a display plane coordinate system; then determine the first position parameter of the focus of attention in that coordinate system from the relative position of the focus of attention in the display plane.
Specifically, as shown in fig. 5, a display plane coordinate system is established with any two sides of the display plane as coordinate axes; the position coordinates of the focus of attention, i.e., its position parameter, can then be read off from its relative position in the display plane coordinate system.
Step 202: determining the current target static image according to the first position parameter and the second position parameter of each static image in the display plane.
In an embodiment, after acquiring the first position parameter of the focus of attention in the display plane and the second position parameter of each still image in the display plane, the still image within, or most relevant to, the user's attention range can be selected from the still images currently displayed according to the positional relationship between the first and second position parameters. The specific steps include: determining the second position parameter of each static image on the display plane according to its relative position on the display plane; generating a user attention area according to the first position parameter; determining the association parameter between the second position parameter of each static image on the display plane and the user attention area; and determining the current target static image from all the static images according to the association parameters.
Taking static images shown as an electronic album as an example: since the album is shown in the current display plane, the coordinate information of each static image in the album can be determined in the display plane coordinate system. Because each static image occupies a relatively large area of the display plane, it cannot be treated as a single point; as shown in fig. 5, the coordinates of the four corners of each display frame and of its center can be used as the second position parameter of the display object. The display device draws a virtual circular area centered on the first position parameter of the current focus of attention with a preset attention radius R; this area is the user attention area. After the user attention area is determined, the association parameter between each display object's second position parameter and the user attention area is computed, namely the distance from the four corner coordinates and center coordinates of each display frame to the attention area. If any of these coordinates falls within the user attention area, the display object they represent is a target still image; otherwise it is a non-target still image. As shown in fig. 5, the user's focus of attention is the point S, the user attention area has radius R, and the second position parameters of display object A, display object C, and display object D fall within the attention area; display objects A, C, and D are therefore target still images, while display objects B, E, and F are non-target still images.
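A minimal sketch of this association test: a display object counts as a target still image if any of its four corner coordinates or its center falls within the circular attention area of radius R around the focus S (names are illustrative):
```python
import math

def is_target_still_image(corners, center, focus, radius):
    """corners: four (x, y) tuples; center, focus: (x, y); radius: R."""
    return any(math.dist(p, focus) <= radius for p in [*corners, center])
```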
Step 203: dynamically processing the current target static image to change it into a current dynamic image or video, and playing and displaying the current dynamic image or video.
After the target static images (within or most relevant to the user's attention range) and the non-target static images (outside it) are identified, the target static images are dynamically processed on the basis of the first display picture so that they become dynamic images or videos and are played, while the non-target static images remain statically displayed; this yields a second display picture, which is shown in the display plane. As shown in fig. 5, the current display plane shows dynamic images or videos for display objects A, C, and D, and still images for display objects B, E, and F.
In one embodiment, the target still image may be dynamically processed by finding other images associated with it and making a video. The specific steps include: acquiring attribute information of the current target static image; determining a second image associated with the current target static image according to the attribute information; and generating a dynamic video based on the current target static image and the second image, then playing and displaying it. The attribute information includes the location, person, and time at which the image was captured or acquired, and the like; "associated" means that at least one attribute is the same or similar.
Specifically, the location information of the current target still image may be obtained; for example, if the location is city A, at least one second image whose location is city A can be downloaded from the cloud server according to the location information, and the target still image and all downloaded second images are then made into a video for playing. As another example, if the person in the current target still image is XX, the time is May 6, 2021, and the location is city B, at least one second image shot by XX in city B on May 6, 2021 can be found locally or retrieved from the cloud server, and the target still image and all such second images are made into a video for playing.
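One simple reading of this attribute-based association, matching on equality of location, person, or date over hypothetical metadata dicts, can be sketched as:
```python
def find_associated_images(target_meta, candidates,
                           keys=("location", "person", "date")):
    """Return candidates sharing at least one attribute with the target.

    target_meta: metadata dict of the target still image;
    candidates: iterable of (image, metadata_dict) pairs.
    """
    return [img for img, meta in candidates
            if any(k in meta and meta.get(k) == target_meta.get(k)
                   for k in keys)]
```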
In one embodiment, an image similar to the target still image can be found and used to make a dynamic image. The specific steps include: acquiring a third image similar to the current target static image; and generating a dynamic image based on the current target static image and the third image, then playing and displaying it. Specifically, a third image whose image characteristics are similar to those of the target still image is obtained, and the dynamic image is produced through dynamic processing; the dynamic processing may, for example, jointly process the target still image and the third image to achieve an image shake effect.
In an embodiment, a dynamic image may be obtained by dynamically transforming target feature points of the target static image. The specific steps include: acquiring the target feature points of the current target static image; and dynamically processing the current target static image based on the target feature points to generate a dynamic image, then playing and displaying it. The target feature points, such as the facial features (eyebrows, eyes, ears, nose, mouth), are processed to animate the face, yielding a dynamic image corresponding to the target static image.
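As one illustrative (and deliberately simple) feature-point-driven effect, the sketch below generates frames in which a small patch around a given landmark bobs vertically; a real system would use proper face-animation models, and the landmark input is assumed to come from a separate detector:
```python
import numpy as np

def animate_feature_point(image, point, size=20, n_frames=8, amplitude=3):
    """Yield frames in which the patch around `point` bobs up and down.

    image: H x W x C array; point: (x, y) landmark not too close to the edge.
    """
    x, y = point
    patch = image[y - size:y + size, x - size:x + size].copy()
    for i in range(n_frames):
        dy = int(round(amplitude * np.sin(2 * np.pi * i / n_frames)))
        frame = image.copy()
        frame[y - size + dy:y + size + dy, x - size:x + size] = patch
        yield frame
```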
In the above dynamic processing of a picture, corresponding dynamic transformation parameters can be obtained according to the current environment or the user's preference, and the picture is then dynamically transformed based on them (for example, adding a dynamic special effect or rendering dynamically based on dynamic rendering parameters) to finally obtain a picture with a dynamic effect. For instance, the content of the picture can be recognized and transformed in a targeted way (adding a drifting effect to static clouds, a ripple effect to a static water surface, and so on). For a picture whose content is hard to recognize, a layer of dynamic frame can be superimposed around it, and dynamic special effects can also be added on top of the picture itself. The dynamic transformation parameters may include dynamic special effects (explosion, smoke, liquid, lighting, distortion, color change, blurring, shading, shaking, added color noise, and the like), a 90-degree clockwise rotation, a 10-pixel shift to the right, vertical flipping, and so on. The dynamic transformation parameters may also be dynamic rendering parameters obtained by algorithms such as NeX (Real-time View Synthesis with Neural Basis Expansion, a novel view synthesis method based on enhanced multiplane images), LLFF (Local Light Field Fusion), NeRF (Neural Radiance Fields), and SRN (Scene Representation Networks).
The above examples all concern a target still image that is a picture itself. When the target still image is a frame of a video, the video corresponding to that frame can be pulled directly from the cloud server and then played, in whole or in part, in the current display plane. In addition, to make video playback more engaging, dynamic special effects can be added to the video according to the current environment or the user's preference (for example, inserting special effects/layers between frames to make the video look smoother, or processing the brightness/color of each frame to make the video look more uniform), yielding a richer video picture.
Based on the content of the above embodiments, the embodiments of the present disclosure provide an image display apparatus. The apparatus is configured to execute the image display method provided in the foregoing method embodiments. Referring to fig. 6, the apparatus includes:
a first position parameter detection module 601, configured to detect a first position parameter of an attention focus of a target user in a display plane; a static image is displayed in the display plane;
a target determining module 602, configured to determine a current target static image according to the first position parameter and a second position parameter of each static image in the display plane;
and a dynamic display module 603, configured to perform dynamic processing on the current target static image, so that the current target static image changes into a current dynamic image or video and is displayed in a playing manner.
In one embodiment, the first position parameter detection module 601 includes:
the information acquisition module is used for acquiring binocular position information and head posture information of a target user;
the sight line generation module is used for inputting the binocular position information and the head posture information into a sight line evaluation model to obtain binocular sight lines of the target user;
the focus determining module is used for determining the attention focus of the target user according to the binocular vision and the display plane;
and the position parameter determining module is used for determining a first position parameter of the attention focus in the display plane according to a display plane coordinate system and the attention focus.
In one embodiment, the image presentation device further comprises:
a prediction module for predicting a third location parameter based on a location prediction model and the first location parameter;
a future target determining module, configured to determine a future target static image according to the third position parameter and the second position parameter of each static image in the display plane;
and the dynamic preprocessing module is used for dynamically preprocessing the future target static image, changing the future target static image into a future dynamic image or video, and playing and displaying the future dynamic image or video after the current dynamic image or video is played and displayed.
In one embodiment, the goal determination module 602 includes:
the second position parameter determining module is used for determining second position parameters of the static images on the display plane according to the relative positions of the static images on the display plane;
the user attention area generating module is used for generating a user attention area according to the first position parameter;
the association parameter determining module is used for determining an association parameter between the second position parameter of each static image on the display plane and the user attention area;
and the object determining module is used for determining the current target static image from all the static images according to the association parameters.
In one embodiment, the dynamic presentation module 603 comprises:
the attribute information acquisition module is used for acquiring the attribute information of the current target static image;
the associated image determining module is used for determining a second image associated with the current target static image according to the attribute information;
and the first video generation module is used for generating a dynamic video and playing and displaying the dynamic video based on the current target static image and the second image.
In one embodiment, the dynamic presentation module 603 comprises:
the similar image acquisition module is used for acquiring a third image similar to the current target static image;
and the second video generation module is used for generating a dynamic image and playing and displaying the dynamic image based on the current target static image and the third image.
In one embodiment, the dynamic presentation module 603 comprises:
the characteristic point acquisition module is used for acquiring target characteristic points of the current target static image;
and the dynamic image generation module is used for dynamically processing the current target static image based on the target characteristic point, generating a dynamic image and playing and displaying the dynamic image.
The image display apparatus of the embodiment of the present disclosure may be configured to implement the technical solutions of the foregoing method embodiments, and the implementation principles and technical effects thereof are similar, and are not described herein again.
Unlike the prior art, the image display apparatus provided by the present disclosure is provided with a target determination module and a dynamic display module. The target determination module determines, from the static images, the target static image within the user's attention range, and the dynamic display module then dynamically processes the target static image so that it becomes a dynamic image or video and is played and displayed. In this way image display is diversified and the user experience becomes more engaging.
Correspondingly, the embodiment of the disclosure also provides an electronic device. As shown in fig. 7, the electronic device may include a processor 701 having one or more processing cores, a Wireless Fidelity (WiFi) module 702, a memory 703 having one or more computer-readable storage media, an audio circuit 704, a display unit 705, an input unit 706, a sensor 707, a power supply 708, and a Radio Frequency (RF) circuit 709. Those skilled in the art will appreciate that the configuration of the electronic device shown in fig. 7 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 701 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 703 and calling data stored in the memory 703, thereby performing overall monitoring of the electronic device. In one embodiment, processor 701 may include one or more processing cores; preferably, the processor 701 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 701.
WiFi is a short-range wireless transmission technology. Through the wireless module 702, the electronic device can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing wireless broadband Internet access. Although fig. 7 shows the wireless module 702, it is not an essential component of the terminal and may be omitted as needed without changing the essence of the invention.
The memory 703 may be used to store software programs and modules, and the processor 701 executes various functional applications and performs data processing by running the computer programs and modules stored in the memory 703. The memory 703 may mainly include a program storage area and a data storage area: the program storage area may store the operating system, application programs required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the terminal (such as audio data or a phonebook), and the like. Further, the memory 703 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 703 may also include a memory controller to provide the processor 701 and the input unit 706 with access to the memory 703.
The audio circuit 704 includes a speaker and can provide an audio interface between the user and the electronic device. On the one hand, the audio circuit 704 can transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output. On the other hand, a microphone converts collected sound signals into electrical signals, which the audio circuit 704 receives and converts into audio data; the audio data is processed by the processor 701 and then either transmitted to another device via the RF circuit 709 or output to the memory 703 for further processing. The audio circuit 704 may also include an earbud jack for communication between a peripheral headset and the electronic device.
The display unit 705 may be used to display information input by or provided to a user and various graphical user interfaces of the terminal, which may be made up of graphics, text, icons, video, and any combination thereof.
The input unit 706 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In one embodiment, the input unit 706 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or touch pad, may collect touch operations by the user on or near it (e.g., operations performed with a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a preset program. In one embodiment, the touch-sensitive surface comprises two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 701, and it can also receive and execute commands sent by the processor 701. Touch-sensitive surfaces may be implemented as resistive, capacitive, infrared, or surface acoustic wave types. Besides the touch-sensitive surface, the input unit 706 may include other input devices, including but not limited to one or more of a physical keyboard, function keys (such as volume control keys or switch keys), a trackball, a mouse, a joystick, and the like.
The electronic device may also include at least one sensor 707, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display area according to the brightness of ambient light; the motion sensor may generate corresponding instructions based on gestures or other actions of the user. As for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which may be further configured to the electronic device, detailed descriptions thereof are omitted.
The electronic device also includes a power source 708 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 701 via a power management system to manage charging, discharging, and power consumption via the power management system. The power source 708 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The rf circuit 709 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receives downlink information of a base station and then sends the received downlink information to one or more processors 701 for processing; in addition, data relating to uplink is transmitted to the base station. In general, rf circuitry 709 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the radio frequency circuit 709 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
Although not shown, the electronic device may further include a camera, a bluetooth module, and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 701 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 703 according to the following instructions, and the processor 701 runs the application program stored in the memory 703, so as to implement the following functions:
detecting a first position parameter of a focus of attention of a target user in a display plane; a static image is displayed in the display plane;
determining a current target static image according to the first position parameter and a second position parameter of each static image in the display plane;
and dynamically processing the current target static image to change the current target static image into a current dynamic image or video, and playing and displaying the current dynamic image or video.
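For illustration only, these three functions can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions, not the disclosed implementation: the StaticImage structure, the animate callable, and the rectangle hit test are hypothetical stand-ins for the modules described in the embodiments.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class StaticImage:
    image_id: str
    rect: Tuple[float, float, float, float]  # second position parameter: (x, y, width, height) in the display plane

def select_target(focus_xy: Tuple[float, float],
                  images: List[StaticImage]) -> Optional[StaticImage]:
    """Compare the first position parameter against each image's second position parameter."""
    fx, fy = focus_xy
    for img in images:
        x, y, w, h = img.rect
        if x <= fx <= x + w and y <= fy <= y + h:
            return img
    return None

def display_step(focus_xy: Tuple[float, float],
                 images: List[StaticImage],
                 animate: Callable[[StaticImage], object]):
    """One pass of the method: detect focus -> determine current target -> dynamic processing."""
    target = select_target(focus_xy, images)
    if target is None:
        return None          # the focus is on no image; the display stays static
    return animate(target)   # dynamic processing; the caller plays and displays the result
```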
In the above embodiments, the description of each embodiment has its own emphasis; for parts that are not described in detail in a certain embodiment, reference may be made to the detailed description above, which is not repeated here.
It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be completed by instructions, or by instructions controlling related hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, the embodiments of the present disclosure provide a computer-readable storage medium storing a plurality of instructions that can be loaded by a processor to implement the following functions:
detecting a first position parameter of a focus of attention of a target user in a display plane; a static image is displayed in the display plane;
determining a current target static image according to the first position parameter and a second position parameter of each static image in the display plane;
and dynamically processing the current target static image to change the current target static image into a current dynamic image or video, and playing and displaying the current dynamic image or video.
The specific implementation of the above operations may be found in the foregoing embodiments and is not described in detail here.
Wherein the computer-readable storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps of any method provided by the embodiments of the present disclosure, they can achieve the beneficial effects of any method provided by the embodiments of the present disclosure; see the foregoing embodiments for details, which are not repeated here.
Meanwhile, the embodiments of the present disclosure provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method provided in the various alternative implementations described above. For example, the following functions are implemented:
detecting a first position parameter of a focus of attention of a target user in a display plane; a static image is displayed in the display plane;
determining a current target static image according to the first position parameter and a second position parameter of each static image in the display plane;
and dynamically processing the current target static image to change the current target static image into a current dynamic image or video, and playing and displaying the current dynamic image or video.
The image display method and apparatus, the electronic device, and the computer-readable storage medium provided by the embodiments of the present disclosure have been described in detail above. Specific examples have been used herein to illustrate the principles and implementations of the present disclosure, and the description of the embodiments is only intended to help understand the method and core concept of the present disclosure. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the present disclosure. In summary, the content of this specification should not be construed as limiting the present disclosure.

Claims (10)

1. An image display method, comprising:
detecting a first position parameter of a focus of attention of a target user in a display plane; a static image is displayed in the display plane;
determining a current target static image according to the first position parameter and a second position parameter of each static image in the display plane;
and dynamically processing the current target static image to change the current target static image into a current dynamic image or video, and playing and displaying the current dynamic image or video.
2. The image display method according to claim 1, wherein the step of detecting a first position parameter of the attention focus of the target user in the display plane comprises:
acquiring binocular position information and head posture information of the target user;
inputting the binocular position information and the head posture information into a sight evaluation model to obtain the binocular sight of the target user;
determining the attention focus of the target user according to the binocular sight and the display plane;
and determining the first position parameter of the attention focus in the display plane according to a display plane coordinate system and the attention focus.
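As a worked illustration of the geometry behind claim 2 (an assumption of this sketch, not language from the claim): once the sight evaluation model yields a 3D eye position and gaze direction, the attention focus is the intersection of the gaze ray with the display plane, and the first position parameter is that point expressed in the plane's own coordinate axes. NumPy is assumed; all names are hypothetical.

```python
import numpy as np

def focus_on_plane(eye_pos, gaze_dir, plane_origin, plane_normal, plane_u, plane_v):
    """Intersect a gaze ray with the display plane; return (x, y) in plane coordinates or None.

    eye_pos, gaze_dir: 3D eye position and gaze direction from the sight evaluation model.
    plane_origin, plane_normal: a point on the display plane and its unit normal.
    plane_u, plane_v: orthonormal in-plane axes of the display plane coordinate system.
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = gaze_dir @ plane_normal
    if abs(denom) < 1e-9:              # gaze is parallel to the plane: no focus point
        return None
    t = ((plane_origin - eye_pos) @ plane_normal) / denom
    if t < 0:                          # the plane lies behind the user
        return None
    point = eye_pos + t * gaze_dir     # 3D attention focus
    rel = point - plane_origin
    return float(rel @ plane_u), float(rel @ plane_v)  # first position parameter
```

For two eyes, one simple convention (again an assumption) is to average the left-eye and right-eye intersection points.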
3. The image display method according to claim 1, further comprising:
predicting a third position parameter according to a position prediction model and the first position parameter;
determining a future target static image according to the third position parameter and the second position parameter of each static image in the display plane;
and dynamically preprocessing the future target static image to change the future target static image into a future dynamic image or video, and playing and displaying the future dynamic image or video after the playing and displaying of the current dynamic image or video is finished.
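The claim does not specify the position prediction model; the simplest hypothetical stand-in is constant-velocity extrapolation over recent focus samples, sketched below. The sample format and the prediction horizon are assumptions.

```python
def predict_next_focus(history, horizon=0.5):
    """Predict the third position parameter by constant-velocity extrapolation.

    history: chronological list of (t, x, y) focus samples; horizon: seconds to look ahead.
    """
    if len(history) < 2:
        return history[-1][1:] if history else None
    (t0, x0, y0), (t1, x1, y1) = history[-2], history[-1]
    dt = t1 - t0
    if dt <= 0:
        return (x1, y1)
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (x1 + vx * horizon, y1 + vy * horizon)   # predicted (x, y)
```

The predicted point is then matched against the second position parameters exactly as in claim 1, so the future dynamic image or video can be prepared before it is needed.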
4. The image display method according to claim 1, wherein the step of determining the current target static image according to the first position parameter and the second position parameter of each static image in the display plane comprises:
determining a second position parameter of each static image on the display plane according to the relative position of each static image on the display plane;
generating a user attention area according to the first position parameter;
determining an association parameter between the second position parameter of each static image on the display plane and the user attention area;
and determining the current target static image from all the static images according to the association parameters.
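One plausible reading of the association parameter (assumed here; the claim leaves it open) is the fraction of each image's rectangle covered by a square attention window centered on the focus point. The sketch reuses the StaticImage structure from the earlier example; the window radius and threshold are arbitrary.

```python
def attention_overlap(focus_xy, radius, rect):
    """Association parameter: share of the image rectangle inside the attention window."""
    fx, fy = focus_xy
    ax0, ay0, ax1, ay1 = fx - radius, fy - radius, fx + radius, fy + radius
    rx0, ry0, rw, rh = rect
    ix = max(0.0, min(ax1, rx0 + rw) - max(ax0, rx0))   # horizontal overlap
    iy = max(0.0, min(ay1, ry0 + rh) - max(ay0, ry0))   # vertical overlap
    return (ix * iy) / (rw * rh) if rw * rh > 0 else 0.0

def pick_target(focus_xy, images, radius=50.0, threshold=0.2):
    """Choose the static image with the largest association parameter above a threshold."""
    scored = [(attention_overlap(focus_xy, radius, img.rect), img) for img in images]
    if not scored:
        return None
    score, best = max(scored, key=lambda pair: pair[0])
    return best if score >= threshold else None
```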
5. The image display method according to claim 1, wherein the step of dynamically processing the current target static image to change the current target static image into a current dynamic image or video, and playing and displaying the current dynamic image or video comprises:
acquiring attribute information of the current target static image;
determining a second image associated with the current target static image according to the attribute information;
and generating a dynamic video based on the current target static image and the second image, and playing and displaying the dynamic video.
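As one possible form of the generated dynamic video (the claim does not prescribe a technique), the target image can be cross-faded into the associated second image, yielding a short frame sequence. NumPy is assumed, and both images must share a shape.

```python
import numpy as np

def crossfade_frames(img_a, img_b, n_frames=30):
    """Blend the current target static image (img_a) into the second image (img_b).

    img_a, img_b: HxWx3 uint8 arrays of identical shape; yields n_frames blended frames.
    """
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)
    for i in range(n_frames):
        alpha = i / max(n_frames - 1, 1)                       # ramps from 0.0 to 1.0
        yield ((1.0 - alpha) * a + alpha * b).astype(np.uint8)
```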
6. The image display method according to claim 1, wherein the step of dynamically processing the current target static image to change the current target static image into a current dynamic image or video, and playing and displaying the current dynamic image or video comprises:
acquiring a third image similar to the current target static image;
and generating a dynamic image based on the current target static image and the third image, and playing and displaying the dynamic image.
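Claim 6 leaves the similarity measure open. A histogram-intersection score over RGB colors is one hypothetical choice for retrieving the third image; the bin count and threshold below are assumptions.

```python
import numpy as np

def histogram_similarity(img_a, img_b, bins=32):
    """Similarity in [0, 1]: intersection of normalized RGB color histograms."""
    def hist(img):
        h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3, range=((0, 256),) * 3)
        return h / h.sum()
    return float(np.minimum(hist(img_a), hist(img_b)).sum())

def find_similar(target_img, candidates, threshold=0.5):
    """Return the most similar candidate image above the threshold, else None."""
    if not candidates:
        return None
    scored = [(histogram_similarity(target_img, c), c) for c in candidates]
    score, best = max(scored, key=lambda pair: pair[0])
    return best if score >= threshold else None
```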
7. The image display method according to claim 1, wherein the step of dynamically processing the current target static image to change the current target static image into a current dynamic image or video, and playing and displaying the current dynamic image or video comprises:
acquiring target feature points of the current target static image;
and dynamically processing the current target static image based on the target feature points to generate a dynamic image, and playing and displaying the dynamic image.
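The claim also does not fix a particular feature-based animation. One example is a slow zoom toward a detected feature point (a "Ken Burns" effect); OpenCV is assumed only for cropping and resizing, and the feature point could come from, e.g., cv2.goodFeaturesToTrack.

```python
import cv2
import numpy as np

def zoom_to_point(img, point, n_frames=30, max_zoom=1.3):
    """Yield frames that progressively zoom toward a target feature point.

    img: HxW(x3) array; point: (x, y) feature point in image coordinates.
    """
    h, w = img.shape[:2]
    px, py = point
    for i in range(n_frames):
        zoom = 1.0 + (max_zoom - 1.0) * i / max(n_frames - 1, 1)
        cw, ch = int(w / zoom), int(h / zoom)        # crop shrinks as zoom grows
        x0 = int(np.clip(px - cw / 2, 0, w - cw))
        y0 = int(np.clip(py - ch / 2, 0, h - ch))
        crop = img[y0:y0 + ch, x0:x0 + cw]
        yield cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)
```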
8. An image display apparatus, comprising:
the first position parameter detection module is used for detecting a first position parameter of the attention focus of the target user in the display plane; a static image is displayed in the display plane;
the target determining module is used for determining a current target static image according to the first position parameter and the second position parameter of each static image in the display plane;
and the dynamic display module is used for dynamically processing the current target static image, so that the current target static image is changed into a current dynamic image or video and is played and displayed.
9. An electronic device, comprising a processor and a memory, wherein the memory is used for storing a computer program, and the processor is used for running the computer program in the memory to perform the steps of the image display method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the image display method according to any one of claims 1 to 7.
CN202111131215.3A 2021-09-26 2021-09-26 Image display method and device, electronic equipment and computer readable storage medium Pending CN113849142A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111131215.3A CN113849142A (en) 2021-09-26 2021-09-26 Image display method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113849142A true CN113849142A (en) 2021-12-28

Family

ID=78980206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111131215.3A Pending CN113849142A (en) 2021-09-26 2021-09-26 Image display method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113849142A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110074716A1 (en) * 2009-09-29 2011-03-31 Fujifilm Corporation Image displaying device, image displaying method, and program for displaying images
CN106851114A (en) * 2017-03-31 2017-06-13 努比亚技术有限公司 A kind of photo shows, photo generating means and method, terminal
CN107436879A (en) * 2016-05-25 2017-12-05 广州市动景计算机科技有限公司 The loading method and loading system of a kind of dynamic picture
CN108271021A (en) * 2016-12-30 2018-07-10 安讯士有限公司 It is controlled based on the block grade renewal rate for watching sensing attentively
CN110022445A (en) * 2019-02-26 2019-07-16 维沃软件技术有限公司 A kind of content outputting method and terminal device
US20190236125A1 (en) * 2018-01-31 2019-08-01 Nureva, Inc. Method, apparatus and computer-readable media for converting static objects into dynamic intelligent objects on a display device
CN110245250A (en) * 2019-06-11 2019-09-17 Oppo广东移动通信有限公司 Image processing method and relevant apparatus
CN110853073A (en) * 2018-07-25 2020-02-28 北京三星通信技术研究有限公司 Method, device, equipment and system for determining attention point and information processing method
CN111046744A (en) * 2019-11-21 2020-04-21 深圳云天励飞技术有限公司 Method and device for detecting attention area, readable storage medium and terminal equipment
CN111309146A (en) * 2020-02-10 2020-06-19 Oppo广东移动通信有限公司 Image display method and related product
CN111432278A (en) * 2020-02-27 2020-07-17 北京达佳互联信息技术有限公司 Video control method, device, terminal and storage medium
CN111768352A (en) * 2020-06-30 2020-10-13 Oppo广东移动通信有限公司 Image processing method and device
CN111970566A (en) * 2020-08-26 2020-11-20 北京达佳互联信息技术有限公司 Video playing method and device, electronic equipment and storage medium
CN113313072A (en) * 2021-06-28 2021-08-27 中国平安人寿保险股份有限公司 Method, device and equipment for constructing intelligent dynamic page and storage medium

Similar Documents

Publication Publication Date Title
WO2020177582A1 (en) Video synthesis method, model training method, device and storage medium
US10891799B2 (en) Augmented reality processing method, object recognition method, and related device
CN111652121B (en) Training method of expression migration model, and method and device for expression migration
WO2020216054A1 (en) Sight line tracking model training method, and sight line tracking method and device
US20230393721A1 (en) Method and Apparatus for Dynamically Displaying Icon Based on Background Image
US10055879B2 (en) 3D human face reconstruction method, apparatus and server
WO2019184889A1 (en) Method and apparatus for adjusting augmented reality model, storage medium, and electronic device
CN108712603B (en) Image processing method and mobile terminal
CN108985220B (en) Face image processing method and device and storage medium
US11366528B2 (en) Gesture movement recognition method, apparatus, and device
CN109495616B (en) Photographing method and terminal equipment
WO2020108041A1 (en) Detection method and device for key points of ear region and storage medium
CN113426117B (en) Shooting parameter acquisition method and device for virtual camera, electronic equipment and storage medium
WO2021190387A1 (en) Detection result output method, electronic device, and medium
US20210152751A1 (en) Model training method, media information synthesis method, and related apparatuses
CN111031253B (en) Shooting method and electronic equipment
CN110290426B (en) Method, device and equipment for displaying resources and storage medium
CN108156374A (en) A kind of image processing method, terminal and readable storage medium storing program for executing
CN109272473B (en) Image processing method and mobile terminal
CN109639981B (en) Image shooting method and mobile terminal
WO2021185142A1 (en) Image processing method, electronic device and storage medium
CN112818733B (en) Information processing method, device, storage medium and terminal
CN107913519B (en) Rendering method of 2D game and mobile terminal
CN113014960B (en) Method, device and storage medium for online video production
CN109104573B (en) Method for determining focusing point and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination