CN108416832B - Media information display method, device and storage medium - Google Patents

Media information display method, device and storage medium

Info

Publication number
CN108416832B
CN108416832B
Authority
CN
China
Prior art keywords
target
shooting
real
image
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810091225.0A
Other languages
Chinese (zh)
Other versions
CN108416832A (en)
Inventor
汪倩怡
王志斌
高雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810091225.0A priority Critical patent/CN108416832B/en
Publication of CN108416832A publication Critical patent/CN108416832A/en
Application granted granted Critical
Publication of CN108416832B publication Critical patent/CN108416832B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method, an apparatus, and a storage medium for displaying media information. The method comprises the following steps: photographing a target object with first image capturing equipment of a terminal to obtain first media information, where the first media information is used to identify the target object; photographing a target scene with second image capturing equipment of the terminal; and, in the process of photographing the target scene, displaying the first media information in the shooting picture obtained by photographing the target scene, where the ratio between the size of the target display area in which the first media information is displayed in the shooting picture and the size of the shooting picture is adjusted by a first target operation instruction, and/or the target display area in which the first media information is displayed in the shooting picture is adjusted by a second target operation instruction. The invention solves the technical problem in the related art that media information indicating a target object cannot be flexibly displayed in the picture of a target scene.

Description

Media information display method, device and storage medium
Technical Field
The present invention relates to the field of media, and in particular, to a method, an apparatus, and a storage medium for displaying media information.
Background
Currently, when a user shoots with a terminal and no one is available to help, the user can only shoot in the terminal's self-timer (selfie) mode. In an image shot in this mode, the limited shooting angle means that the real scene occupies only a small proportion of the whole image while the user's own image occupies a large proportion, and neither proportion can be adjusted, so the self-shot image cannot be flexibly displayed in the picture of the real scene.
Alternatively, the user can place the terminal in a fixed position and shoot himself using a countdown timer. However, the user then cannot watch the shooting picture in real time, cannot know how he appears in the picture of the real scene, and cannot pose properly; there is also a risk that the terminal is stolen.
For the above-described problem that media information for indicating a target object cannot be flexibly displayed in a picture of a target scene, no effective solution has been proposed yet.
Disclosure of Invention
The embodiments of the present invention provide a method, an apparatus, and a storage medium for displaying media information, so as to at least solve the technical problem in the related art that media information indicating a target object cannot be flexibly displayed in the picture of a target scene.
According to an aspect of an embodiment of the present invention, there is provided a method for displaying media information. The method comprises the following steps: shooting a target object through first camera equipment of the terminal to obtain first media information, wherein the first media information is used for identifying the target object; shooting a target scene through second camera equipment of the terminal; in the process of shooting a target scene, first media information is displayed in a shooting picture obtained by shooting the target scene, wherein the proportion between the size of a target display area of the first media information displayed in the shooting picture and the size of the shooting picture is adjusted through a first target operation instruction, and/or the target display area of the first media information displayed in the shooting picture is adjusted through a second target operation instruction.
According to another aspect of the embodiments of the present invention, there is also provided an apparatus for displaying media information. The apparatus comprises: a first shooting unit, used for shooting the target object through first shooting equipment of the terminal to obtain first media information, wherein the first media information is used for identifying the target object; a second shooting unit, used for shooting a target scene through second shooting equipment of the terminal; and a display unit, used for displaying the first media information in a shooting picture obtained by shooting the target scene in the process of shooting the target scene, wherein the proportion between the size of a target display area of the first media information displayed in the shooting picture and the size of the shooting picture is adjusted through a first target operation instruction, and/or the target display area of the first media information displayed in the shooting picture is adjusted through a second target operation instruction.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium. The storage medium has stored therein a computer program, wherein the computer program is arranged to execute the method of displaying media information of an embodiment of the invention when run.
According to another aspect of the embodiment of the invention, an electronic device is also provided. The electronic device comprises a memory in which a computer program is stored, and a processor arranged to execute the method of displaying media information according to an embodiment of the invention by means of the computer program.
In the embodiments of the present invention, a target object is photographed by first image capturing equipment of a terminal to obtain first media information, where the first media information is used to identify the target object; a target scene is photographed by second image capturing equipment of the terminal; and, in the process of photographing the target scene, the first media information is displayed in the shooting picture obtained by photographing the target scene, where the ratio between the size of the target display area in which the first media information is displayed in the shooting picture and the size of the shooting picture is adjusted by a first target operation instruction, and/or the target display area in which the first media information is displayed in the shooting picture is adjusted by a second target operation instruction. The first media information obtained by photographing the target object with the first image capturing equipment of the terminal is displayed in the shooting picture obtained by photographing the target scene with the second image capturing equipment of the terminal; that is, the first media information indicating the target object is flexibly displayed in a picture of the real scene. This avoids the limitation of front-camera self-shooting, in which the real scene occupies a small proportion of the picture while the media information indicating the target object occupies a relatively large proportion, achieves the effect of flexibly displaying the media information indicating the target object in the picture of the real scene, and thereby solves the technical problem in the related art that media information indicating a target object cannot be flexibly displayed in the picture of a target scene.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic diagram of a hardware environment of a method of displaying media information according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of displaying media information according to an embodiment of the invention;
FIG. 3 is a schematic illustration of a scene of a media information display according to an embodiment of the invention;
FIG. 4 is a schematic view of another scene of media information display according to an embodiment of the invention;
FIG. 5 is a schematic illustration of a scene of a media information display for motion recognition according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a scene of another media information display according to an embodiment of the invention;
FIG. 7 is a schematic view of a scene of an image display of another motion recognition according to an embodiment of the invention;
FIG. 8 is a schematic diagram of a media information display that does not support simultaneous opening of front and rear cameras in accordance with an embodiment of the present invention;
FIG. 9A is a schematic diagram of a scene of another media information display according to an embodiment of the invention;
FIG. 9B is a schematic diagram of a scene of another media information display according to an embodiment of the invention;
FIG. 9C is a schematic diagram of a scene of another media information display according to an embodiment of the invention;
fig. 10 is a schematic view of a scene of media information display of a terminal according to an embodiment of the present invention;
FIG. 11 is a technical framework diagram of a media information display according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of dividing a face image into grids according to one embodiment of the present invention;
FIG. 13 is a schematic diagram of a planar recognition according to an embodiment of the present invention;
FIG. 14 is a schematic diagram of determining the pose of a camera in the real world according to an embodiment of the invention;
fig. 15 is a schematic view of an image display apparatus according to an embodiment of the present invention; and
Fig. 16 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of an embodiment of the present invention, an embodiment of a method for displaying media information is provided.
Alternatively, in the present embodiment, the above-described method for displaying media information may be applied to a hardware environment constituted by the server 102 and the terminal 104 as shown in fig. 1. Fig. 1 is a schematic diagram of a hardware environment of a method of displaying media information according to an embodiment of the present invention. As shown in fig. 1, the server 102 is connected to the terminal 104 via a network, which includes, but is not limited to, a wide area network, a metropolitan area network, or a local area network; the terminal 104 may be, but is not limited to, a PC, a mobile phone, a tablet computer, and the like. The method for displaying media information according to the embodiments of the present invention may be performed by the server 102, by the terminal 104, or by both the server 102 and the terminal 104 together. When performed by the terminal 104, the method may be performed by a client installed on the terminal 104.
Fig. 2 is a flowchart of a method of displaying media information according to an embodiment of the present invention. As shown in fig. 2, the method may include the steps of:
In step S202, a target object is photographed by a first photographing device of the terminal, so as to obtain first media information, where the first media information is used to identify the target object.
In the technical solution provided in the above step S202 of the present application, the target object is photographed by the first image capturing device of the terminal, so as to obtain the first media information, where the first media information is used to identify the target object.
In this embodiment, the first image capturing apparatus may be, but is not limited to, front image capturing equipment, for example a front camera, mounted on a terminal. The terminal may be a device such as a smart phone, a tablet computer, a palmtop computer, or a mobile internet device, without any limitation. The target object may be a self-timer object; for example, when the target object is a user, a self-portrait of the user in the real scene can be captured by the first image capturing apparatus, and the user can see himself in the shooting picture captured by the front camera, where the real scene refers to a real-world scene.
It should be noted that the use of front image capturing equipment as the first image capturing apparatus in this embodiment is merely an example and does not limit the first image capturing apparatus of the embodiments of the present invention. Any image capturing apparatus capable of photographing the target object to obtain first media information identifying the target object falls within the scope of the embodiments of the present invention; examples are not enumerated here.
The target object in the real scene is photographed by the first image capturing apparatus to obtain the first media information. The first media information may be a self-timer image of the target object, used to identify the target object, or a video obtained by photographing the target object with the first image capturing apparatus. Optionally, the first media information includes an image of a representative part of the target object, for example a face image of the target object or an image of another designated part.
Step S204, shooting the target scene by the second image pickup apparatus of the terminal.
In the technical solution provided in the above step S204 of the present application, after the first image capturing device of the terminal captures the target object to obtain the first media information, the second image capturing device of the terminal captures the target scene.
The second image capturing apparatus of this embodiment may be, but is not limited to, rear image capturing equipment, for example a rear camera, mounted on the same terminal as the first image capturing apparatus and used to photograph the target scene, which may be a real scene. Optionally, the first and second image capturing apparatuses mounted on the same terminal may be turned on simultaneously, or only one of them may be turned on.
It should be noted that the use of rear image capturing equipment as the second image capturing apparatus in this embodiment is merely an example and does not limit the second image capturing apparatus of the embodiments of the present invention. Any image capturing apparatus capable of photographing the target scene falls within the scope of the embodiments of the present invention; examples are not enumerated here.
In step S206, in the process of shooting the target scene, the first media information is displayed in a shooting picture obtained by shooting the target scene.
In the technical solution provided in the step S206, during the process of shooting the target scene, the first media information is displayed in the shooting picture obtained by shooting the target scene, wherein the ratio between the size of the target display area where the first media information is displayed in the shooting picture and the size of the shooting picture is adjusted by the first target operation instruction, and/or the target display area where the first media information is displayed in the shooting picture is adjusted by the second target operation instruction.
The shot screen obtained by shooting the target scene in this embodiment may be any screen of the target scene on which the first media information is desired to be displayed, for example, a landscape screen, a building screen, a person screen, etc., and is not limited in any way.
The first media information may be displayed in the shooting picture obtained by photographing the target scene by means of augmented reality (Augmented Reality, AR for short) technology, for example using an augmented reality developer platform such as ARKit or a software platform for building augmented reality applications such as ARCore.
Optionally, an image of the target object is extracted from the first media information, and a material model may be added to the image of the target object to make it more interesting when displayed in the picture of the target scene. Alternatively, the image of the target object extracted from the first media information is displayed on a horizontal plane recognized from the shooting picture of the target scene, so that, viewed from different perspectives, the image of the target object appears as if it were actually present on the horizontal plane, enhancing the realism of the display. In addition, the display position of the first media information in the target scene can be flexibly adjusted, and its display size and position in the picture of the target scene can be adjusted according to the ratio between the first media information and the picture of the target scene. This avoids the limitation of front-camera self-shooting, in which the picture of the target scene occupies a small proportion while the image of the target object identified by the media information is relatively large, thereby achieving the effect of flexibly displaying the media information in the picture of the target scene.
In this embodiment, the first media information can be flexibly displayed in the shooting picture obtained by photographing the target scene. When the first media information is an image of the target object, the target display area in which the first media information is displayed in the shooting picture may be the area occupied by the image of the target object, and the size of the target display area may be the size of the image itself. The ratio between the size of the target display area and the size of the shooting picture is adjusted by the first target operation instruction. For example, if the image of the target object appears so large in the shooting picture that the user can hardly see anything else, the first target operation instruction can reduce the ratio between the size of the image of the target object and the size of the shooting picture, so that the image becomes smaller and the user can see the rest of the picture; this instruction may be triggered by a two-finger pinch gesture on the touch screen of the terminal, without limitation here. Conversely, if the image of the target object appears so small in the shooting picture that the user can hardly see it, the first target operation instruction can enlarge the ratio between the size of the image and the size of the shooting picture, so that the image becomes larger and the user can see it clearly; this instruction may be triggered by a two-finger spread gesture on the touch screen of the terminal, again without limitation.
Optionally, in this embodiment, during photographing of the target scene, the ratio between the size of the target display area in which the first media information is displayed in the shooting picture and the size of the shooting picture may be automatically adjusted to a target ratio, according to the size of the image of the target object identified by the first media information and the size of the shooting picture of the target scene. The target ratio may be a ratio comfortable to the human eye, obtained experimentally from a large number of ratios between target display area sizes and shooting picture sizes.
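The automatic adjustment described above can be sketched as a uniform rescaling of the overlay so that it occupies a fixed fraction of the shooting picture. This is an illustrative sketch, not the patent's implementation; the function name and the 0.15 default fraction are assumptions for the example.

```python
# Illustrative sketch (assumed names and values): scale the target display
# area so that its area equals a target fraction of the shooting picture's
# area, preserving the overlay's aspect ratio.

def auto_fit(overlay_w, overlay_h, frame_w, frame_h, target_fraction=0.15):
    """Return (w, h) whose area is target_fraction of the frame area."""
    frame_area = frame_w * frame_h
    overlay_area = overlay_w * overlay_h
    k = (target_fraction * frame_area / overlay_area) ** 0.5  # uniform scale
    return overlay_w * k, overlay_h * k
```

In practice the target fraction would be the experimentally determined target ratio the text mentions.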
Optionally, the target display area in which the first media information is displayed in the shooting picture is adjusted by the second target operation instruction. For example, when the first media information is an image of the target object, the second target operation instruction adjusts the target display area in which that image is displayed, such as the position at which the image is displayed in the shooting picture. The second target operation instruction may be triggered by the user tapping a position on the touch screen, and the image of the target object is then displayed at the position in the shooting picture corresponding to the tapped position, so that the user can flexibly adjust the target display area of the first media information in the shooting picture.
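The two operation instructions described above can be sketched as simple geometry on the overlay rectangle. This is an illustrative sketch under assumed names (the patent does not specify an implementation): a pinch scales the ratio between the target display area and the shooting picture, and a tap repositions the target display area, both clamped so the overlay stays visible and in-frame.

```python
# Illustrative sketch: map the "first target operation instruction" (pinch)
# and "second target operation instruction" (tap) onto overlay geometry.

def apply_pinch(overlay_w, overlay_h, frame_w, frame_h, scale):
    """Scale the target display area; clamp its width ratio to [0.05, 1.0]."""
    new_w = overlay_w * scale
    ratio = max(0.05, min(1.0, new_w / frame_w))  # keep overlay visible
    new_w = frame_w * ratio
    new_h = overlay_h * (new_w / overlay_w)       # preserve aspect ratio
    return new_w, min(new_h, frame_h)

def apply_tap(tap_x, tap_y, overlay_w, overlay_h, frame_w, frame_h):
    """Centre the target display area on the tapped point, kept in-frame."""
    x = min(max(tap_x - overlay_w / 2, 0), frame_w - overlay_w)
    y = min(max(tap_y - overlay_h / 2, 0), frame_h - overlay_h)
    return x, y
```

A two-finger spread corresponds to `scale > 1`, a pinch to `scale < 1`.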
Alternatively, this embodiment by default sets the target display area in which the first media information is displayed on a horizontal plane in the shooting picture obtained by photographing the target scene. For example, the image of the target object is displayed on a horizontal plane recognized from the shooting picture, so that, viewed from different perspectives, the image appears as if it were actually present on the horizontal plane, enhancing the realism of the display.
Optionally, the first media information may include a real-time image of the target object and/or a real-time three-dimensional image obtained by synthesizing the real-time image of the target object with a predetermined virtual image. The ratio between the size of the target display area in which this real-time image or synthesized real-time three-dimensional image is displayed in the shooting picture and the size of the shooting picture is adjusted by the first target operation instruction, and/or the target display area in which it is displayed is adjusted by the second target operation instruction.
Through the above steps S202 to S206, the target object is photographed by the first image capturing equipment of the terminal to obtain first media information used to identify the target object; the target scene is photographed by the second image capturing equipment of the terminal; and, during photographing of the target scene, the first media information is displayed in the shooting picture obtained by photographing the target scene, where the ratio between the size of the target display area in which the first media information is displayed and the size of the shooting picture is adjusted by a first target operation instruction, and/or the target display area is adjusted by a second target operation instruction. The first media information obtained by photographing the target object with the first image capturing equipment is displayed in the picture of the real scene captured by the second image capturing equipment; that is, the first media information indicating the target object is flexibly displayed in the picture of the real scene. This avoids the limitation of front-camera self-shooting, in which the real scene occupies a small proportion of the picture while the media information indicating the target object occupies a relatively large proportion, achieves the effect of flexibly displaying the media information indicating the target object in the picture of the real scene, and solves the technical problem in the related art that such media information cannot be flexibly displayed in the picture of the target scene.
As an optional implementation manner, step S204, capturing, by the second image capturing apparatus of the terminal, the target scene includes: the target scene is photographed by the second image pickup apparatus while the target object is photographed by the first image pickup apparatus.
In this embodiment, the front camera and the rear camera mounted on the terminal may be turned on simultaneously; that is, while the turned-on front camera photographs the target object, the turned-on rear camera photographs the target scene. The first media information obtained by the front camera can thus be displayed in real time in the shooting picture obtained by the rear camera. For example, the face region in the portrait video captured by the front camera is recognized by face recognition technology, a material model is added, and the result is displayed in real time in the shooting picture of the target scene, which improves the flexibility of displaying media information.
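The per-frame flow described above can be sketched as: take a front-camera frame, extract the target object's region, and composite it into the rear-camera frame. This is an illustrative sketch with assumed names; synthetic numpy arrays stand in for the two camera streams, and a fixed crop box stands in for a real face detector.

```python
import numpy as np

# Illustrative sketch of the dual-camera per-frame pipeline (assumed names).

def extract_target(front_frame, box):
    """Crop the region identified as the target object; box = (x, y, w, h)."""
    x, y, w, h = box
    return front_frame[y:y + h, x:x + w]

def composite(rear_frame, target_img, pos):
    """Overlay the target image onto the rear frame at position (x, y)."""
    out = rear_frame.copy()
    x, y = pos
    h, w = target_img.shape[:2]
    out[y:y + h, x:x + w] = target_img
    return out

# Synthetic frames standing in for the front and rear camera streams.
front = np.zeros((480, 640, 3), np.uint8)
front[100:200, 200:300] = 255                  # pretend this is the face
rear = np.full((480, 640, 3), 128, np.uint8)   # pretend scene frame

face = extract_target(front, (200, 100, 100, 100))
frame = composite(rear, face, (50, 50))        # displayed shooting picture
```

In a real implementation this loop would run once per video frame, with the crop box supplied by face recognition.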
As an optional implementation manner, step S202, capturing, by a first image capturing device of a terminal, a target object, to obtain first media information includes: acquiring a real-time shooting picture obtained by shooting a target object through first shooting equipment; extracting a real-time image of a target object from a real-time shooting picture, wherein the first media information comprises the real-time image of the target object; in the process of shooting the target scene, step S206, displaying the first media information in a shooting picture obtained by shooting the target scene includes: in the process of shooting a target scene, a real-time image of a target object is displayed in a shooting picture obtained by shooting the target scene.
In this embodiment, a real-time shooting picture obtained by photographing the target object with the first image capturing apparatus is acquired; for example, a video frame of the real-time video obtained by photographing the target object can be acquired. After the real-time shooting picture is obtained, a real-time image of the target object, for example a face image of the target object, is extracted from it by performing face recognition on the real-time shooting picture.
Optionally, when performing face recognition in the real-time shooting picture, this embodiment may perform image analysis on the captured real-time shooting picture to find the position of the face image and the size of the face region. A complete scan of the image can find all face images in the real-time shooting picture. Preferably, however, this embodiment does not need to find every face image: the real-time shooting picture can be scanned from large to small and from coarse to fine, and the first face image found is returned as the result. In this way, the largest face image in the real-time shooting picture can be found as quickly as possible. Further feature analysis is then performed on the found face image to locate feature points such as the outline, eyes, nose, and mouth, thereby identifying the face image from the target image.
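The large-to-small scan described above can be sketched as trying candidate windows in decreasing size and returning the first accepted one, so the largest face is found first. This is an illustrative sketch with assumed names; the toy predicate merely stands in for a real face classifier.

```python
# Illustrative sketch: scan windows from large to small and return the
# first window the classifier accepts (i.e. the largest face).

def scan_for_face(width, height, looks_like_face, min_size=40, step=20):
    size = min(width, height)
    while size >= min_size:                       # large -> small
        for y in range(0, height - size + 1, step):
            for x in range(0, width - size + 1, step):
                if looks_like_face(x, y, size):
                    return (x, y, size)           # first hit = largest face
        size -= step
    return None

# Toy classifier: accepts windows that contain a known face region and are
# not more than twice its size. A real detector would replace this.
face_region = (100, 100, 80)                      # x, y, size

def toy_classifier(x, y, size):
    fx, fy, fs = face_region
    return (size <= 2 * fs and x <= fx and y <= fy
            and x + size >= fx + fs and y + size >= fy + fs)

hit = scan_for_face(640, 480, toy_classifier)
```

Because the outer loop decrements the window size, no smaller candidate is ever examined once a larger window has matched.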
After the real-time image of the target object is extracted from the real-time shooting picture, it is displayed in the shooting picture obtained by shooting the target scene while the target scene is being shot; for example, the face image of the target object is displayed in real time in the shooting picture obtained by shooting the target scene.
As an optional implementation, displaying the real-time image of the target object in the shooting picture obtained by shooting the target scene while the target scene is being shot includes: displaying a synthesized real-time image in the shooting picture obtained by shooting the target scene, where the synthesized real-time image is obtained by combining the real-time image of the target object with a predetermined virtual image.
In this embodiment, the predetermined virtual image may be a three-dimensional (3D) material model prepared in advance for an augmented reality role, for example a Santa Claus material model, a chicken material model, or an album material model, without limitation. After the first media information is obtained by shooting the target object through the first image capturing device, a predetermined virtual image can be selected from a plurality of preset material models, and the real-time image of the target object is combined with it to obtain a synthesized real-time image; for example, the real-time image of the target object can be attached to a designated part of the predetermined virtual image according to the model's specification. The synthesized real-time image is then displayed in the shooting picture obtained by shooting the target scene while the target scene is being shot.
In this embodiment, when the real-time image of the target object is combined with the predetermined virtual image to obtain the synthesized real-time image, a target region corresponding to the real-time image of the target object may first be determined in the predetermined virtual image. The predetermined virtual image is typically composed of bones and skin, where the skin is a texture picture. The target features of the real-time image of the target object correspond one-to-one with the features of the image of the target region; for example, if the real-time image of the target object is a face image identified in the real-time shooting picture, its target features include the eyes, nose and mouth, and the image of the target region has corresponding eye, nose and mouth features. The real-time image of the target object may be divided into grids, and the divided grids correspond one-to-one with grids in the texture picture of the predetermined virtual image. After the target region is determined, the image in the target region of the predetermined virtual image is replaced with the real-time image of the target object to obtain the synthesized real-time image; that is, the real-time image of the target object is used as skin texture to replace the texture of the corresponding region of the predetermined virtual image.
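The skin-texture replacement step can be sketched as below: the live face image is resampled onto the target region of the model's texture picture, grid cell by grid cell. This is a minimal sketch under simplifying assumptions (nearest-neighbour resampling, rectangular region); the region coordinates are illustrative, not taken from the patent.

```python
import numpy as np

def replace_texture_region(texture, face, region):
    """Paste `face` into `region` = (x, y, w, h) of the model texture."""
    x, y, w, h = region
    fh, fw = face.shape[:2]
    # Map each destination grid cell back to the corresponding source cell
    # (one-to-one grid correspondence, nearest-neighbour for brevity).
    rows = np.arange(h) * fh // h
    cols = np.arange(w) * fw // w
    texture = texture.copy()
    texture[y:y + h, x:x + w] = face[rows[:, None], cols[None, :]]
    return texture

tex = np.zeros((8, 8), dtype=np.uint8)          # stand-in skin texture
face = np.full((4, 4), 255, dtype=np.uint8)     # stand-in face image
out = replace_texture_region(tex, face, (2, 2, 4, 4))
```

A real implementation would additionally align the eye, nose and mouth feature points before the paste, as the next paragraph emphasizes.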
When the image in the target region of the predetermined virtual image is replaced with the real-time image of the target object, care must be taken to align the target features of the real-time image with the features of the image of the target region, so that the replacement is accurate and the real-time image of the target object is precisely combined with the predetermined virtual image.
For example, when the real-time image of the target object is a face image and the predetermined virtual image is a Santa Claus material model, fusing the face image into the material model produces a small Santa Claus figure, a three-dimensional cartoon figure; displaying this figure in the picture of the target scene makes it look as if the figure really appeared in the target scene. When the real-time image of the target object is a video, shot in real time, that includes the image of the target object, and the predetermined virtual image is an album material model, fusing the real-time image into the album material model yields a synthesized real-time image; displaying it in the picture of the target scene makes the target object look as if it were inside an album appearing in the scene, which increases the interest of the image display.
As an optional implementation, displaying the real-time image of the target object in the shooting picture obtained by shooting the target scene while the target scene is being shot includes: when a predetermined action is detected in the real-time image of the target object, displaying second media information corresponding to the predetermined action together with the real-time image of the target object in the shooting picture obtained by shooting the target scene.
In this embodiment, when the front camera and the rear camera of the terminal are turned on at the same time, the images shot by the front camera are displayed in real time in the picture of the target scene shot by the rear camera, so action-recognition play can be added to the image display.
While the first image capturing device continuously shoots the target object in the target scene, the second image capturing device shoots the target scene to obtain the shooting picture of the target scene. When a predetermined action appears in the real-time image of the target object, second media information corresponding to that action is displayed together with the real-time image of the target object in the shooting picture. The second media information indicates that the target object has performed the predetermined action and can be displayed at a predetermined position of the shooting picture. For example, if the target object nods, the real-time image displayed in the shooting picture is the real-time image of the target object nodding; once the nodding action is recognized, the second media information is displayed, which may be a group of heart images shown around the real-time image of the target object. Similarly, if the target object pouts, the real-time image displayed is the real-time image of the target object pouting, and once the pouting action is recognized, the second media information is displayed, which may be an image of the target object being kissed by a puppy.
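The mapping from a recognized action to the second media information shown with it can be sketched as a simple lookup. The action names and media descriptions below are illustrative stand-ins, not values from the patent.

```python
# Each predetermined action maps to the second media information displayed
# alongside the live image, plus where it should appear in the frame.
SECOND_MEDIA = {
    "nod":  {"media": "hearts_animation", "position": "around_figure"},
    "pout": {"media": "puppy_kiss_animation", "position": "beside_figure"},
}

def media_for_action(action):
    """Return the second media information for a detected action, if any."""
    return SECOND_MEDIA.get(action)

overlay = media_for_action("nod")
```

Unrecognized actions simply yield no overlay, so the live image keeps displaying unchanged.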
For example, when the predetermined virtual image is a Santa Claus material model, the user's face image captured by the first image capturing device is fused with the material model to form a small Santa figure displayed in the picture of the target scene captured by the second image capturing device. When the user nods, the Santa AR figure in the picture of the target scene nods at the same time, and once the nodding action is recognized, a group of hearts appears around the Santa AR figure.
For another example, when the predetermined virtual image is a video album material model, the real-time picture taken by the first image capturing device is displayed inside an AR album in the picture of the target scene taken by the second image capturing device, as if the user had turned into an album appearing in the picture shot by the rear camera. If the user then pouts, the pouting action is recognized, a puppy runs out from beside the AR album and kisses the album, giving the image display a dynamic effect and increasing its interest.
As an alternative embodiment, the second media information includes at least one of: a predetermined animation, a predetermined dynamic image, a predetermined static image.
The second media information of this embodiment may be a predetermined animation, for example an animation of a puppy running over to give a kiss; a predetermined dynamic image, for example a flickering heart image; or a predetermined static image, for example a still heart image, without limitation.
As an optional implementation, step S202, in which a target object is shot through a first image capturing device of a terminal to obtain first media information, includes: acquiring a target video obtained by shooting the target object through the first image capturing device, where the target scene is not being shot through the second image capturing device while the target object is being shot; and extracting, from the target video, third media information containing an image of the target object, where the first media information includes the third media information. Step S206, in which the first media information is displayed in a shooting picture obtained by shooting the target scene while the target scene is being shot, then includes: displaying the third media information in the shooting picture obtained by shooting the target scene while the target scene is being shot.
In this embodiment, only one of the front camera and the rear camera mounted on the terminal can be turned on at a time. Thus, a target video of the target object is captured by the first image capturing device while the second image capturing device remains off. After the target video is acquired, third media information including an image of the target object, for example a face image, is extracted from it. The first image capturing device is then switched to the second image capturing device, and the third media information is displayed in the shooting picture obtained by shooting the target scene while the target scene is being shot through the second image capturing device.
Alternatively, a target photograph of the target object is captured by the first image capturing device while the second image capturing device remains off. After the target photograph is acquired, an image of the target object, for example a face image, is extracted from it. The first image capturing device is then switched to the second image capturing device, and the image of the target object is displayed in the shooting picture obtained by shooting the target scene while the target scene is being shot through the second image capturing device.
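The capture-extract-switch-display sequence for single-camera terminals can be sketched as below. The camera, extractor and overlay objects are hypothetical stand-ins for platform APIs, not names from the patent.

```python
# Fallback flow for terminals that cannot run both cameras at once:
# 1) record frames with the front camera, 2) extract the third media
# information (the face images), 3) switch cameras, 4) overlay each
# extracted image onto the rear-camera scene frame.

def single_camera_flow(front_frames, extract_face, rear_frame, overlay):
    """Capture first, extract third media info, composite after switching."""
    third_media = [extract_face(f) for f in front_frames]   # steps 1-2
    # (camera switch happens here on a real device)          # step 3
    return [overlay(rear_frame, face) for face in third_media]  # step 4

result = single_camera_flow(
    ["frame_a", "frame_b"],
    extract_face=lambda f: f + "_face",
    rear_frame="scene",
    overlay=lambda scene, face: (scene, face),
)
```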
Although this embodiment cannot turn on the first and second image capturing devices simultaneously, and therefore cannot display the first media information captured by the first image capturing device in real time in the picture of the target scene captured by the second image capturing device, it still ensures that the first media information captured by the first image capturing device is flexibly displayed in the picture of the target scene captured by the second image capturing device, improving the interest of the image display.
As an optional implementation, displaying the third media information in the shooting picture obtained by shooting the target scene while the target scene is being shot includes: displaying synthesized fourth media information in the shooting picture obtained by shooting the target scene, where the fourth media information is obtained by combining the third media information with a predetermined virtual image.
In this embodiment, the predetermined virtual image may be a Santa Claus material model, a chicken material model, or an album material model, without limitation. After the third media information containing the image of the target object is extracted from the target video, a predetermined virtual image can be selected from a plurality of preset material models and combined with the third media information to obtain the fourth media information; for example, the third media information can be attached to a designated part of the predetermined virtual image according to the model's specification. The fourth media information is then displayed in the shooting picture obtained by shooting the target scene while the target scene is being shot.
For example, on a terminal that cannot open the first and second image capturing devices simultaneously, the user records a video segment through the first image capturing device in advance, then selects a material model; according to the model's specification, the face image or the image at another designated position is cut out of the video and attached to the designated part of the material model, forming a video of an AR figure. The front camera is then switched to the rear camera, and the video of the AR figure appears in the picture of the target scene shot by the rear camera, improving the flexibility and interest of the image display.
As an alternative embodiment, the third media information includes at least one of: a still image of the target object, a moving image of the target object, a video of the target object.
The third media information of this embodiment may be a still image of the target object, for example a photograph of the target object; a dynamic image of the target object, for example a dynamic image prepared in advance; or a video of the target object, for example a pre-recorded video of the target object.
As an optional implementation, step S206, in which the first media information is displayed in a shooting picture obtained by shooting the target scene while the target scene is being shot, includes: identifying a target horizontal plane in the shooting picture obtained by shooting the target scene; and displaying the image of the target object included in the first media information on the target horizontal plane.
In this embodiment, when the first media information is displayed in the shooting picture obtained by shooting the target scene, a target horizontal plane in the shooting picture may be identified so that the image of the target object included in the first media information is displayed on that plane. Viewed from different angles, the image then appears as if it were really standing on the target horizontal plane, which enhances the realism of the display of the first media information.
Optionally, this embodiment may perform plane analysis and object tracking based on feature recognition. Plane recognition consists not only in recognizing a plane but also in distinguishing whether it is horizontal: only when a horizontal plane is identified can the image of the target object included in the first media information, or the three-dimensional model obtained by combining that image with the predetermined virtual image, be placed on the plane to simulate a real physical effect. Plane recognition and object tracking can therefore combine a feature algorithm with the gyroscope. The recognized plane is angle-corrected according to the terminal attitude fed back by the gyroscope; that is, azimuth correction and coordinate calculation are performed based on the gyroscope, and the inclination angle and perspective of the plane are estimated. The image of the target object included in the first media information, or the three-dimensional model obtained by combining it with the predetermined virtual image, is then displayed on the target horizontal plane according to the estimated inclination angle and perspective, so that it appears to really stand on the target horizontal plane, enhancing the realism of the first media information captured by the first image capturing device within the picture of the target scene captured by the second image capturing device.
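The angle-correction idea above can be sketched as measuring a detected plane's tilt against the gravity direction reported by the motion sensors: a plane whose normal opposes gravity is horizontal. This is a toy geometric illustration under assumed sensor conventions, not the patent's algorithm.

```python
import math

def plane_tilt_degrees(normal, gravity):
    """Angle between a detected plane's normal and the 'up' direction."""
    up = tuple(-g for g in gravity)                 # gravity points down
    dot = sum(n * u for n, u in zip(normal, up))
    mag = math.dist(normal, (0, 0, 0)) * math.dist(up, (0, 0, 0))
    cos_a = max(-1.0, min(1.0, dot / mag))          # clamp rounding error
    return math.degrees(math.acos(cos_a))

# A plane whose normal opposes gravity is horizontal (tilt 0 degrees).
tilt = plane_tilt_degrees(normal=(0.0, 1.0, 0.0), gravity=(0.0, -9.8, 0.0))
```

A tilt near zero would mark the plane as a candidate target horizontal plane; a tilt near 90 degrees identifies a wall.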
As an optional implementation, step S206, when the first media information is displayed in a shooting picture obtained by shooting the target scene while the target scene is being shot, further includes: in response to a first target operation instruction, enlarging the image of the target object included in the first media information to obtain an enlarged image, and displaying the enlarged image in the shooting picture obtained by shooting the target scene, where the target display area includes the display area of the enlarged image in the shooting picture; or, in response to the first target operation instruction, reducing the image of the target object included in the first media information to obtain a reduced image, and displaying the reduced image in the shooting picture obtained by shooting the target scene, where the target display area includes the display area of the reduced image in the shooting picture.
In this embodiment, when the first media information is displayed in a shooting picture obtained by shooting the target scene, the image of the target object included in the first media information may be enlarged or reduced. Optionally, the predetermined virtual image itself supports animation, and the three-dimensional model obtained in advance by combining the image of the target object included in the first media information with the predetermined virtual image may be enlarged or reduced in the picture of the target scene.
In this embodiment, the first target operation instruction may be triggered by a two-finger spread operation on the touch screen directed at the first media information. In response to the first target operation instruction, the image of the target object included in the first media information is enlarged and the enlarged image is displayed in the picture of the target scene; the three-dimensional model obtained by combining the target object included in the first media information with the predetermined virtual image may likewise be enlarged and displayed in the picture of the target scene. The target display area includes the display area of the enlarged image in the shooting picture.
Optionally, the first target operation instruction is triggered by a two-finger pinch operation on the touch screen directed at the first media information. In response, the image of the target object included in the first media information is reduced and the reduced image is displayed in the picture of the target scene; the target display area includes the display area of the reduced image in the shooting picture. Optionally, the three-dimensional model obtained by combining the target object included in the first media information with the predetermined virtual image is reduced and displayed in the picture of the target scene.
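The spread/pinch handling can be sketched as a scale update with clamping so the figure stays a sensible size. The scale limits are illustrative assumptions, not values from the patent.

```python
MIN_SCALE, MAX_SCALE = 0.25, 4.0   # assumed display limits

def apply_pinch(scale, gesture_factor):
    """gesture_factor > 1 is a spread (enlarge); < 1 is a pinch (reduce)."""
    return max(MIN_SCALE, min(MAX_SCALE, scale * gesture_factor))

s = apply_pinch(1.0, 1.5)     # spread: figure enlarged to 1.5x
s = apply_pinch(s, 10.0)      # extreme spread: clamped at MAX_SCALE
```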
It should be noted that the above operations for enlarging or reducing the image of the target object included in the first media information are only a preferred implementation of this embodiment of the present invention; the first target operation instruction is not limited to being triggered by a two-finger spread or pinch operation on the touch screen directed at that image. Any operation that enlarges or reduces the image of the target object included in the first media information falls within the scope of the embodiments of the present invention and is not enumerated here.
As an optional implementation, when the first media information is displayed in a shooting picture obtained by shooting the target scene while the target scene is being shot, the method further includes: in response to a third operation instruction, displaying the first media information at a target position, indicated by the third operation instruction, in the shooting picture obtained by shooting the target scene, where the second target operation instruction includes the third operation instruction and the target display area includes the target position.
In this embodiment, the display of the first media information in the picture of the target scene may be flexibly adjusted. The second target operation instruction includes a third operation instruction, which may be a tap operation instruction triggered by a finger tapping the touch screen; in response to the third operation instruction, the first media information is displayed at the target position, indicated by the instruction, in the picture of the target scene. Optionally, by tapping the target position in the displayed picture of the target scene, the three-dimensional model obtained by fusing the image of the target object in the first media information with the predetermined virtual image is moved to the target position, so the display position of the first media information in the shooting picture of the target scene can be adjusted flexibly according to the user's needs, improving the flexibility of displaying the first media information.
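The tap-to-move behaviour can be sketched as below: a tap supplies target coordinates, and the composited figure's display position is updated to that point. The class and the identity screen-to-scene mapping are illustrative simplifications.

```python
class ARFigure:
    """Minimal stand-in for the composited 3D model shown in the scene."""
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    def on_tap(self, tap_x, tap_y):
        """Move the figure to the tapped position in the scene picture."""
        self.x, self.y = tap_x, tap_y

figure = ARFigure()
figure.on_tap(120.0, 340.0)   # user taps a new spot; figure moves there
```

On a real device the tap coordinates would first be projected onto the identified plane before the position update.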
As an optional implementation, before the first media information is displayed in the picture of the target scene, the method further includes: processing the first media information according to the light information of the target scene, so that the processed first media information carries that light information. Displaying the first media information in the picture of the target scene then includes: displaying the processed first media information in the picture of the target scene.
In this embodiment, picture capture is performed by the second image capturing device. AR play is essentially a combination of the virtual and the real, so capturing the picture of the target scene is the basis of the entire play. This embodiment can directly use the second image capturing device installed on the terminal to collect the picture of the target scene. Besides acquiring the picture of the target scene, this embodiment can further perform illumination analysis on the image to obtain the light information of the target scene, for example by estimating the intensity and direction of the ambient light. The first media information is processed according to the light information of the target scene, and the processed first media information is displayed in the picture of the target scene; for example, the three-dimensional model obtained by combining the image of the target object in the first media information with the predetermined virtual image is lighting-processed according to the light information of the target scene and then placed into the target scene, making the combination of the virtual image and reality more realistic.
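A toy version of the illumination analysis can be sketched as below: ambient intensity is taken as the mean luminance of the scene frame, and a rough light direction as the offset of the brightness centroid from the image centre. This is an assumed estimator for illustration, not the patent's method.

```python
import numpy as np

def estimate_light(gray):
    """Return (intensity in 0-1, (dx, dy) pointing toward the bright side)."""
    g = gray.astype(float)
    intensity = g.mean() / 255.0
    total = g.sum() or 1.0                  # avoid division by zero
    ys, xs = np.indices(g.shape)
    cy, cx = (g.shape[0] - 1) / 2.0, (g.shape[1] - 1) / 2.0
    dx = (g * xs).sum() / total - cx        # centroid offset -> direction
    dy = (g * ys).sum() / total - cy
    return intensity, (dx, dy)

frame = np.zeros((4, 4), dtype=np.uint8)
frame[:, :2] = 200                          # light falls on the left half
intensity, (dx, dy) = estimate_light(frame)
```

The estimated intensity and direction would then drive the shading of the 3D model before it is placed into the scene.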
Optionally, this embodiment constructs a virtual 3D scene, places the 3D model obtained by fusing the image of the target object included in the first media information with the predetermined virtual image at a specific position in that scene, adjusts the position, viewing angle and field of view of the virtual camera, and renders the picture. Unlike ordinary 3D rendering, the construction of the 3D scene in this embodiment must be aligned with the target scene: chiefly alignment of the camera's viewing position and angular field of view, alignment of the 3D scene coordinates, and alignment of perspective distances. Using the information fed back by the gyroscope, the attitude, angle and so on of the image capturing device in the real world can be determined. In the virtual 3D scene, the virtual camera is taken as the center and adjusted to the same attitude as the terminal on which the image capturing device is installed; the position, inclination, size and distance of the plane in the virtual scene are then calculated from the tracked plane information of the target scene; finally the 3D model is placed on the plane in the virtual scene to obtain a virtual picture. Superimposing the virtual picture on the captured picture of the target scene yields a realistic effect of the 3D model combined with the real scene.
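The final superposition step can be sketched as an alpha blend of the rendered virtual picture over the captured scene frame. Small arrays stand in for the rendered and captured images; the blend itself is standard compositing, not a claim about the patent's renderer.

```python
import numpy as np

def composite(scene, virtual, alpha):
    """Blend the virtual render over the scene; alpha=1 where the model is."""
    return (virtual * alpha + scene * (1.0 - alpha)).astype(np.uint8)

scene = np.full((2, 2), 100, dtype=np.uint8)    # captured target scene
virtual = np.full((2, 2), 200, dtype=np.uint8)  # rendered virtual picture
alpha = np.array([[1.0, 0.0], [0.0, 1.0]])      # model covers the diagonal
out = composite(scene, virtual, alpha)
```

Pixels where alpha is 1 show the 3D model; elsewhere the real scene shows through, which is what makes the model appear to stand in the scene.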
This embodiment can be applied where a user takes photos without help from others: when the scenery a selfie can capture is too limited, the user can adopt the scheme of this embodiment, turn the selfie image into a 3D figure, and have the figure appear in the picture of the target scene shot by the rear camera, improving the flexibility of the image display. In addition, the user can turn their own image into an AR figure that plays in the picture of the target scene of the rear camera, and on terminals that support opening the first and second image capturing devices simultaneously, interactive play can be realized in real time, increasing the interest and entertainment of the image display.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The technical scheme of the present invention will be described with reference to preferred embodiments.
In this embodiment, the user's face generally represents the user's avatar, so the user's face image is extracted from the captured image and combined with different material models to become an AR figure. The AR figure is placed on a plane identified in the image of the real scene captured by the rear camera, using augmented reality development platforms such as ARKit and ARCore.
Optionally, this embodiment uses the plane recognition and object tracking techniques of ARKit and ARCore to display the AR figure on a plane in the image of the real scene, so that viewed from different angles it presents a display effect as if the AR figure were really standing on the plane.
Fig. 3 is a schematic view of a scene of media information display according to an embodiment of the present invention. As shown in Fig. 3, the user's face image is extracted from the captured image and combined with the Santa Claus material model to become a small Santa Claus figure, which is placed on the desk in the image of the real scene recognized by the rear camera through the plane recognition and object tracking techniques of ARKit and ARCore.
Fig. 4 is a schematic view of another scene of media information display according to an embodiment of the present invention. As shown in Fig. 4, the user's face image is extracted from the captured image and combined with the chicken material model to become an AR figure riding a chicken, which is placed on the desk in the image of the real scene recognized by the rear camera through the plane recognition and object tracking techniques of ARKit and ARCore.
The AR figure of this embodiment itself supports animated display. Optionally, the user may zoom the AR figure with two-finger operations on the touch screen: a two-finger spread enlarges the AR figure, and a two-finger pinch reduces it. Optionally, by tapping a different area in the image displayed by the rear camera, the AR figure automatically walks to the tapped position.
The following describes an image display scheme in which the terminal supports simultaneous opening of front and rear cameras.
This embodiment can identify the face image in the captured picture by face recognition technology and add a material model to it. After the material model is added to the face image, the face image with the added material model can be displayed in the picture of the real scene shot by the rear camera.
Because the front camera and the rear camera of this embodiment are opened at the same time, the captured images are real-time, so action-recognition play can be added. Two uses of action recognition are illustrated below:
First, a Santa Claus material model is added to the identified face image, as shown in FIG. 3. When the front camera captures the user making a nodding action, the Santa Claus AR character displayed in the picture of the real scene shot by the rear camera nods at the same time.
Fig. 5 is a schematic view of a scene of media information display with motion recognition according to an embodiment of the present invention. As shown in fig. 5, after the user's nodding action is recognized, a group of hearts is displayed around the Santa Claus AR character shown in the picture of the real scene to indicate that the user has nodded, thereby increasing the interest of the image display.
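A minimal sketch of how such a nodding action might be recognized from front-camera frames is given below; the nose-tip landmark, pixel threshold, and down-then-up criterion are illustrative assumptions rather than details from this embodiment:

```python
def detect_nod(nose_y_series, threshold=10.0):
    """Detect a nodding action from the vertical (y) trajectory of the
    nose-tip landmark across consecutive front-camera frames.

    A nod is approximated as the nose moving down by more than
    `threshold` pixels and then returning most of the way back up.
    """
    base = nose_y_series[0]
    went_down = False
    for y in nose_y_series[1:]:
        if not went_down and y - base > threshold:
            went_down = True          # head moved down far enough
        elif went_down and y - base < threshold / 2:
            return True               # head came back up: a nod
    return False
```

In practice the landmark trajectory would come from the per-frame facial feature positioning described later in this document.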
Second, a material model of an album is added to the recognized face image.
Fig. 6 is a schematic view of another scene of media information display according to an embodiment of the present invention. As shown in fig. 6, the real-time picture captured by the front camera is simultaneously displayed inside the AR album within the picture of the real scene captured by the rear camera, as if the user had turned into an album appearing in the real picture of the rear camera.
FIG. 7 is a schematic view of a scene of image display with another kind of motion recognition according to an embodiment of the invention. As shown in fig. 7, in the scene shown in fig. 6, if the user makes a pouting action, the pouting action is recognized, and a puppy appears beside the album, runs out, and kisses the album to indicate that the user has made the pouting action, thereby increasing the interest of the image display.
It should be noted that the above image display scenes of motion recognition are only examples of the embodiment of the present invention, and the motion recognition scenes of the embodiment of the present invention are not limited thereto.
The following describes an image display scheme in which the terminal does not support simultaneous opening of front and rear cameras.
The user can record a video through the front camera of the terminal in advance. The face, or images of other designated parts, are extracted from the video according to the specification of the selected material model and attached to the designated part of the material model to form a video of the AR character. The video of the AR character is then switched to the rear camera of the terminal and appears in the picture of the real scene captured by the rear camera, thereby improving the flexibility of image display.
Fig. 8 is a schematic diagram of media information display on a terminal that does not support simultaneous opening of front and rear cameras according to an embodiment of the present invention. As shown in fig. 8, with the rear camera turned on, the user operates button 1 on the terminal to capture a picture of the real scene, for example a picture showing a child playing with a toy. By operating button 2 on the terminal, the rear camera is switched to the front camera, and by operating button 3 a video is recorded through the front camera, for example a video of a girl holding a watermelon. Button 4 is then operated to finish recording the video, which also calls up a plurality of material models; as shown in fig. 8, material 1 is selected. A Santa Claus model is then chosen from the plurality of material models: as shown in fig. 8, material 2 is selected, the Santa Claus model is operated via button 5, and button 6 is operated to confirm the selection. In this embodiment, the face image of the girl in the video is extracted and attached to the selected Santa Claus material model to form a Santa Claus AR character; button 7 is then operated to switch the front camera of the terminal back to the rear camera, and the actual on-screen effect is that the Santa Claus AR character appears at position 8 in the picture of the child playing with the toy.
Optionally, in this embodiment, a two-finger spreading operation on the touch screen enlarges the Santa Claus AR character, and a two-finger pinching operation on the touch screen shrinks it. Optionally, by clicking different areas in the picture of the real scene, the Santa Claus AR character automatically walks to the position of the clicked image area, which improves the flexibility of displaying the image of the target object.
Optionally, the scheme of this embodiment may have the following application: the user arrives at a place where no one else can help take a photo, but a selfie shows too little of the scenery while the person occupies most of the frame. The image display scheme of this embodiment may then be selected. The user turns into a 3D Santa Claus AR character which, at a suitable scale, appears at a suitable position in the picture of the real scene shot by the rear camera and becomes part of the picture: for example, in front of an amusement park in the real scene, as shown in fig. 9A, where fig. 9A is a schematic view of another scene of media information display according to an embodiment of the present invention; for example, on a road in the real scene, as shown in fig. 9B, where fig. 9B is a schematic view of another scene of media information display according to an embodiment of the present invention; or, for example, on a table with toys in the real scene, as shown in fig. 9C, where fig. 9C is a schematic view of another scene of media information display according to an embodiment of the present invention, thereby improving the flexibility of image display.
In addition, this embodiment enables the user to become an AR character and play within the picture of the real scene captured by the rear camera. On terminals supporting the simultaneous opening of both cameras, some real-time interactions can be performed, which improves entertainment and the flexibility of image display.
Fig. 10 is a schematic view of a scene of media information display on a terminal according to an embodiment of the present invention. As shown in fig. 10, the terminal of this embodiment may be a mobile terminal, and the target object is a user. The user starts the front camera 1 of the mobile terminal to take a self-portrait, for example recording a video for 10 seconds. However, the image of the user then appears large and cannot be flexibly displayed in the picture of the real scene. The front camera 1 of the terminal is therefore switched to the rear camera 3 via the front-rear switching button 2, and the scenery around the user is shot by the rear camera 3 to obtain a picture of the scenery. If the user wants to flexibly display his or her own image in the picture obtained by shooting the scenery through the rear camera 3, the user can select, through the terminal, the video recorded through the front camera 1, extract the face image of the girl in the video, and attach it to the selected material model 4, for example a Santa Claus material model, to form a Santa Claus AR character, which is then displayed in the picture obtained by shooting the scenery through the rear camera 3, so that the user's self-shot image can be flexibly displayed in the picture of the real scene.
Optionally, the user can enlarge the Santa Claus AR character by performing a two-finger spreading operation on the touch screen, and shrink it by performing a two-finger pinching operation on the touch screen. Optionally, by clicking a different area in the picture of the real scene, the Santa Claus AR character automatically walks to the clicked position, allowing the user's image to be flexibly displayed in the picture of the real scene.
After recognizing the user's nodding action, this embodiment can also display a group of hearts around the Santa Claus AR character to indicate that the user has nodded, thereby increasing the interest of the image display; and when the user makes a pouting action, a puppy runs out beside the Santa Claus AR character to indicate that the user has made the pouting action, likewise increasing the interest of the image display. It should be noted that the hearts and the puppy of this embodiment are merely illustrative; any animation, dynamic image, static image, etc. that can be used to indicate the user's action falls within the scope of the embodiments of the present invention, and examples are not enumerated here.
The following describes a technical scheme of image display according to an embodiment of the present invention.
This embodiment realizes an AR shooting gameplay based on AR technology. Optionally, a complete AR technical solution contains several modules: a camera-based picture capturing and distance estimating module; a gyroscope-based azimuth correction and coordinate calculation module; a feature-based plane recognition and object tracking module; and a picture rendering module based on a graphics processor (Graphics Processing Unit, GPU for short).
In addition to the basic functions of the AR technology, this embodiment combines the face gameplay: the recognized face is fused onto a 3D material model to obtain a 3D model, which is finally placed into the AR scene.
Fig. 11 is a technical framework diagram of media information display according to an embodiment of the present invention. As shown in fig. 11, the scenes include an AR character, an AR photo frame, and the like; the services comprise a dynamic-effect rendering framework for rendering dynamic effects of the picture; the main technical framework comprises a face fusion module and an ARKit module, where the face fusion module is used to strip the face picture out of the photo or video selected by the user and attach it as skin to a material model, and the ARKit module is used for scene analysis, model placement, picture rendering, and the like. Algorithms used in this embodiment may include image acquisition/analysis algorithms, plane recognition/tracking algorithms, 3D rendering algorithms, 3D model algorithms, facial feature positioning algorithms, etc.; the hardware used may include a camera, a gyroscope, a graphics processor (GPU), etc.
The following describes a face fusion module according to an embodiment of the present invention.
Face recognition and facial feature positioning: this embodiment can perform image analysis on a picture or video based on face recognition technology and find the position of the face and the size of the face area in the picture or video. A complete image scan can find all face images in the picture or video; optionally, in line with the actual requirements of the product application, this embodiment only needs to find one face image. Optionally, the image is scanned coarse-to-fine, from large windows to small, and the first face image scanned is returned as the result, so that the largest face image in the picture can be found fastest. After the face image is found, further feature analysis is performed to locate feature points such as the contour, eyes, nose, and mouth of the face image; once the facial features are positioned, their coordinates can be obtained.
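The coarse-to-fine scanning order described above (large windows first, returning the first hit) can be sketched as follows; `is_face_at` stands in for a real face classifier, and the scale steps are illustrative assumptions:

```python
def find_largest_face(image_w, image_h, is_face_at,
                      scales=(0.9, 0.6, 0.4, 0.2), step_frac=0.25):
    """Scan the image with windows from large to small and return the
    first window the classifier accepts, which is therefore the largest
    detectable face.

    `is_face_at(x, y, size)` is a hypothetical stand-in for a real face
    classifier; the early-exit scan order is the point being shown.
    """
    for s in scales:  # large windows first => biggest face found first
        size = int(min(image_w, image_h) * s)
        step = max(1, int(size * step_frac))
        for y in range(0, image_h - size + 1, step):
            for x in range(0, image_w - size + 1, step):
                if is_face_at(x, y, size):
                    return (x, y, size)  # early exit: fastest path to largest face
    return None
```

Because the scan stops at the first accepted window, only as many windows are evaluated as needed to find the largest face, matching the speed goal described above.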
Face matting and model fusion: this embodiment uses the face contour information obtained in the previous step to cut the face image out of the photo or video. A 3D model is typically composed of bones and skin, and the skin is essentially a texture picture. This embodiment uses the matted face image as the skin texture to replace the texture of the corresponding area in the 3D model. When replacing the texture, attention is paid to aligning feature areas such as the eyes, nose, and mouth; optionally, the face image is divided into grids using the facial feature coordinates obtained in the previous step, as shown in fig. 12. Fig. 12 is a schematic diagram of dividing a face image into grids according to an embodiment of the present invention; the grid cells of the face image are replaced in one-to-one correspondence with the grid cells of the texture picture in the 3D model prepared in advance, realizing accurate replacement between the face image and the texture of the 3D model.
The ARKit module of an embodiment of the present invention is described below.
Picture capturing based on the camera. AR gameplay combines the virtual and the real, so capturing pictures of the real scene in this embodiment is a precondition of the whole gameplay. Mobile devices are equipped with cameras, and the camera of the mobile device can be used directly to collect pictures of the real scene. Besides acquiring the picture, this embodiment can further perform illumination analysis on the image to estimate the intensity and direction of the ambient light; when the 3D model is placed, it is then lit to the same extent, so that the combination of the virtual and the real appears more realistic.
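A minimal sketch of such an illumination analysis is given below; the mean-brightness and left/right-gradient heuristic is an illustrative stand-in for the light estimation provided by frameworks such as ARKit:

```python
import numpy as np

def estimate_lighting(gray):
    """Estimate ambient light intensity and a coarse horizontal light
    direction from a grayscale camera frame, so the same lighting can
    later be applied to the 3D model."""
    intensity = float(gray.mean()) / 255.0           # 0 = dark, 1 = bright
    h, w = gray.shape
    left = gray[:, : w // 2].mean()
    right = gray[:, w // 2 :].mean()
    direction = "left" if left > right else "right"  # brighter side faces the light
    return intensity, direction
```

A real implementation would estimate a full 3D light direction and color temperature, but even this coarse estimate lets the rendered model avoid looking pasted-in.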
Plane analysis and object tracking based on feature recognition. Fig. 13 is a schematic diagram of plane recognition according to an embodiment of the present invention. As shown in fig. 13, the plane recognition of this embodiment not only recognizes a plane in the real scene but also distinguishes whether the plane is horizontal; for example, a desk is a plane and is horizontal. Only when a plane is identified and is horizontal can a 3D model be placed on it to simulate a real physical effect. Thus, plane recognition and object tracking may combine a feature algorithm with the gyroscope: the angle of the identified plane is corrected according to the device attitude fed back by the gyroscope, further estimating the plane's inclination angle and perspective effect.
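The horizontality check can be sketched as the angle between the detected plane's normal and the gravity direction fed back by the motion sensors; the vector convention and the 10-degree tolerance below are illustrative assumptions:

```python
import math

def plane_tilt_deg(plane_normal, gravity=(0.0, -1.0, 0.0)):
    """Angle in degrees between a detected plane's normal and the
    gravity direction reported by the gyroscope/accelerometer.
    A small angle means the plane is (near-)horizontal and can
    receive the 3D model."""
    nx, ny, nz = plane_normal
    gx, gy, gz = gravity
    dot = nx * gx + ny * gy + nz * gz
    nn = math.sqrt(nx * nx + ny * ny + nz * nz)
    gn = math.sqrt(gx * gx + gy * gy + gz * gz)
    # abs() makes the test independent of which way the normal points.
    cosang = max(-1.0, min(1.0, abs(dot) / (nn * gn)))
    return math.degrees(math.acos(cosang))

def is_horizontal(plane_normal, tol_deg=10.0):
    return plane_tilt_deg(plane_normal) <= tol_deg
```

A wall, whose normal is perpendicular to gravity, yields a tilt near 90 degrees and is rejected, while a desk top passes the check.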
Model placement and rendering. This embodiment constructs a virtual 3D scene, places the 3D model at a specific position in the virtual 3D scene, adjusts the position, viewing angle, and field of view of the camera, and renders the picture. It should be noted that the virtual 3D scene of this embodiment needs to be aligned with the real scene. Alignment here mainly means aligning the viewing position, angle, and field of view of the camera, aligning the virtual 3D scene coordinates, and aligning the perspective distances. Again using the information fed back by the gyroscope, the pose of the camera in the real world can be determined.
Fig. 14 is a schematic diagram of determining the pose of a camera in the real world according to an embodiment of the invention. As shown in fig. 14, in the virtual 3D scene, the virtual camera, taken as the center, is adjusted to the same pose as the mobile device, adjusted along its X-axis, Y-axis, and Z-axis respectively. Then, according to the tracked plane information, the position, inclination angle, size, and distance of the plane in the virtual 3D scene are calculated. Finally, the 3D model is placed on the plane in the virtual 3D scene to obtain a virtual picture, and the virtual picture is overlaid on the captured real picture to obtain the realistic effect of combining the 3D model with the real scene.
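The final overlay of the virtual picture on the real picture can be sketched as a per-pixel alpha blend; the function below is an illustrative simplification of what a GPU compositor performs:

```python
import numpy as np

def composite(real_frame, virtual_frame, alpha_mask):
    """Overlay the rendered virtual picture on the captured real picture:
    where the mask is 1 the virtual 3D model is shown, where it is 0 the
    real scene shows through. Fractional mask values blend the two,
    softening the model's edges."""
    a = alpha_mask[..., None].astype(float)        # broadcast over RGB channels
    out = a * virtual_frame + (1.0 - a) * real_frame
    return out.astype(real_frame.dtype)
```

The alpha mask would come from the renderer, marking the pixels covered by the 3D model.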
Through the media information display scheme of this embodiment, the image of the user can be displayed in the real environment captured by the rear camera, and AR technology makes the user feel truly present at the designated position in the rear camera's picture. In addition, this embodiment supports opening both cameras simultaneously, with the picture acquired by the front camera displayed in real time within the picture acquired by the rear camera, thereby improving the flexibility of image display.
According to still another aspect of the embodiments of the present invention, there is also provided an image display apparatus for implementing the above-described method of displaying media information. Fig. 15 is a schematic view of an image display apparatus according to an embodiment of the present invention. As shown in fig. 15, the apparatus may include: a first photographing unit 10, a second photographing unit 20, and a display unit 30.
A first shooting unit 10, configured to shoot a target object through a first image capturing device of the terminal, so as to obtain first media information, where the first media information is used to identify the target object.
And a second photographing unit 20 for photographing the target scene through a second image pickup apparatus of the terminal.
And a display unit 30, configured to display first media information in a shot picture obtained by shooting the target scene during shooting the target scene, where a ratio between a size of a target display area where the first media information is displayed in the shot picture and a size of the shot picture is adjusted by the first target operation instruction, and/or a target display area where the first media information is displayed in the shot picture is adjusted by the second target operation instruction.
It should be noted that, the first photographing unit 10 in this embodiment may be used to perform step S202 in the embodiment of the present application, the second photographing unit 20 in this embodiment may be used to perform step S204 in the embodiment of the present application, and the display unit 30 in this embodiment may be used to perform step S206 in the embodiment of the present application.
The first photographing unit 10 of this embodiment photographs a target object through a first image pickup apparatus of a terminal to obtain first media information, where the first media information is used to identify the target object; the second photographing unit 20 photographs a target scene through a second image pickup apparatus of the terminal; and the display unit 30 displays the first media information in a shooting picture obtained by shooting the target scene during shooting of the target scene, where the ratio between the size of the target display area in which the first media information is displayed in the shooting picture and the size of the shooting picture is adjusted through a first target operation instruction, and/or the target display area in which the first media information is displayed in the shooting picture is adjusted through a second target operation instruction. The first media information, obtained by shooting the target object through the first image pickup apparatus of the terminal, is displayed in the shooting picture obtained by shooting the target scene through the second image pickup apparatus of the terminal; that is, the first media information indicating the target object is flexibly displayed in the picture of the real scene. This avoids the limitation that, when taking a selfie through the front camera, the real scene occupies a small proportion of the picture while the media information indicating the target object occupies a relatively large proportion; it achieves the effect of flexibly displaying the media information indicating the target object in the picture of the real scene, and solves the technical problem in the related art that media information indicating a target object cannot be flexibly displayed in the picture of a target scene.
It should be noted here that the above units implement the same examples and application scenarios as the corresponding steps, but are not limited to what is disclosed in the above embodiments. It should also be noted that the above modules may be implemented in software or in hardware as part of the apparatus shown in fig. 1, where the hardware environment includes a network environment.
According to still another aspect of the embodiment of the present invention, there is also provided an electronic device for implementing the method for displaying media information described above.
Fig. 16 is a block diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 16, the electronic device may include a memory 161 and a processor 163, the memory 161 having stored therein a computer program, the processor 163 being arranged to perform the steps of any of the method embodiments described above by means of the computer program. Optionally, as shown in fig. 16, the electronic device may further comprise a transmission device 165 and an input-output device 167.
Alternatively, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of the computer network.
Alternatively, in the present embodiment, the above-described processor 163 may be configured to execute the following steps by a computer program:
shooting a target object through first camera equipment of the terminal to obtain first media information, wherein the first media information is used for identifying the target object;
shooting a target scene through second camera equipment of the terminal;
In the process of shooting a target scene, first media information is displayed in a shooting picture obtained by shooting the target scene, wherein the proportion between the size of a target display area of the first media information displayed in the shooting picture and the size of the shooting picture is adjusted through a first target operation instruction, and/or the target display area of the first media information displayed in the shooting picture is adjusted through a second target operation instruction.
The processor 163 is also configured to perform the steps of: shooting the target scene by the second camera device of the terminal comprises: the target scene is photographed by the second image pickup apparatus while the target object is photographed by the first image pickup apparatus.
The processor 163 is also configured to perform the steps of: acquiring a real-time shooting picture obtained by shooting a target object through first shooting equipment; extracting a real-time image of a target object from a real-time shooting picture, wherein the first media information comprises the real-time image of the target object; in the process of shooting a target scene, a real-time image of a target object is displayed in a shooting picture obtained by shooting the target scene.
The processor 163 is also configured to perform the steps of: in the process of shooting a target scene, displaying a synthesized real-time image in a shooting picture obtained by shooting the target scene, wherein the synthesized real-time image is a real-time image obtained by synthesizing a real-time image of a target object with a preset virtual image.
The processor 163 is also configured to perform the steps of: when a predetermined action is detected in the real-time image of the target object, second media information corresponding to the predetermined action is displayed when the real-time image of the target object is displayed in a photographing screen obtained by photographing the target scene.
The processor 163 is also configured to perform the steps of: acquiring a target video obtained by shooting a target object through a first shooting device, wherein when the target object is shot through the first shooting device, a target scene is not shot through a second shooting device; extracting third media information containing an image of a target object from the target video, wherein the first media information comprises the third media information; in the process of shooting the target scene, third media information is displayed in a shooting picture obtained by shooting the target scene.
The processor 163 is also configured to perform the steps of: and displaying fourth media information obtained by synthesis in a shooting picture obtained by shooting the target scene in the process of shooting the target scene, wherein the fourth media information is the media information obtained by synthesizing the third media information with a preset virtual image.
The processor 163 is also configured to perform the steps of: in the process of shooting a target scene, identifying a target horizontal plane in a shooting picture obtained by shooting the target scene; an image of a target object included in the first media information is displayed on a target level.
The processor 163 is also configured to perform the steps of: when first media information is displayed in a shooting picture obtained by shooting a target scene in the process of shooting the target scene, responding to a first target operation instruction, amplifying an image of a target object included in the first media information to obtain an amplified image, wherein a target display area comprises a display area of the amplified image in the shooting picture, and displaying the amplified image in the shooting picture obtained by shooting the target scene; or responding to the first target operation instruction, carrying out reduction processing on the image of the target object included in the first media information to obtain a reduced image, and displaying the reduced image in a shooting picture obtained by shooting the target scene, wherein the target display area comprises a display area of the reduced image in the shooting picture.
The processor 163 is also configured to perform the steps of: and when the first media information is displayed in a shooting picture obtained by shooting the target scene in the process of shooting the target scene, responding to a third operation instruction, and displaying the first media information in a target position in the shooting picture obtained by shooting the target scene indicated by the third operation instruction, wherein the second target operation instruction comprises the third operation instruction, and the target display area comprises the target position.
Alternatively, it will be understood by those skilled in the art that the structure shown in fig. 16 is only illustrative, and the electronic device may be a smart phone (such as an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a mobile Internet device (Mobile Internet Devices, MID), a PAD, etc. Fig. 16 does not limit the structure of the electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 16, or have a different configuration from that shown in fig. 16.
The memory 161 may be used to store software programs and modules, such as program instructions/modules corresponding to the display method and apparatus of media information in the embodiment of the present invention, and the processor 163 executes the software programs and modules stored in the memory 161, thereby performing various functional applications and data processing, that is, implementing the display method of media information described above. Memory 161 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, memory 161 may further include memory located remotely from processor 163, which may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 165 is used to receive or transmit data via a network. Specific examples of the network described above may include wired and wireless networks. In one example, the transmission device 165 includes a network adapter (Network Interface Controller, NIC) that may be connected to other network devices and routers via a network cable to communicate with the Internet or a local area network. In one example, the transmission device 165 is a radio frequency (RF) module for communicating with the Internet wirelessly.
Among them, the memory 161 is specifically used for storing application programs.
By adopting the embodiment of the invention, an image display scheme is provided: a target object is photographed through a first camera device of the terminal to obtain first media information, where the first media information is used to identify the target object; a target scene is photographed through a second camera device of the terminal; and in the process of shooting the target scene, the first media information is displayed in the shooting picture obtained by shooting the target scene, where the ratio between the size of the target display area in which the first media information is displayed in the shooting picture and the size of the shooting picture is adjusted through a first target operation instruction, and/or the target display area in which the first media information is displayed in the shooting picture is adjusted through a second target operation instruction. The first media information, obtained by shooting the target object through the first camera device of the terminal, is displayed in the shooting picture obtained by shooting the target scene through the second camera device of the terminal; that is, the first media information indicating the target object is flexibly displayed in the picture of the real scene. This avoids the limitation that, when taking a selfie through the front camera, the real scene occupies a small proportion of the picture while the media information indicating the target object occupies a relatively large proportion, achieves the effect of flexibly displaying the media information indicating the target object in the picture of the real scene, and solves the technical problem in the related art that media information indicating a target object cannot be flexibly displayed in the picture of a target scene.
An embodiment of the invention also provides a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described storage medium may be configured to store a computer program for performing the steps of:
shooting a target object through first camera equipment of the terminal to obtain first media information, wherein the first media information is used for identifying the target object;
shooting a target scene through second camera equipment of the terminal;
In the process of shooting a target scene, first media information is displayed in a shooting picture obtained by shooting the target scene, wherein the proportion between the size of a target display area of the first media information displayed in the shooting picture and the size of the shooting picture is adjusted through a first target operation instruction, and/or the target display area of the first media information displayed in the shooting picture is adjusted through a second target operation instruction.
Optionally, the storage medium is further arranged to store program code for performing the steps of: the target scene is photographed by the second image pickup apparatus while the target object is photographed by the first image pickup apparatus.
Optionally, the storage medium is further arranged to store program code for performing the steps of: acquiring a real-time photographed picture obtained by photographing the target object through the first image pickup apparatus; extracting a real-time image of the target object from the real-time photographed picture, where the first media information includes the real-time image of the target object; and, during photographing of the target scene, displaying the real-time image of the target object in the photographed picture obtained by photographing the target scene.
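A minimal sketch of the extraction step above, using plain nested lists as stand-in frames; a real implementation would obtain the mask from a segmentation model, and all names here are illustrative, not from the patent:

```python
def extract_object(frame, mask):
    """Keep only the pixels the mask marks as belonging to the target
    object; all other pixels become None, i.e. transparent."""
    return [[px if keep else None for px, keep in zip(row, mrow)]
            for row, mrow in zip(frame, mask)]

frame = [[1, 2], [3, 4]]
mask = [[True, False], [False, True]]
print(extract_object(frame, mask))  # -> [[1, None], [None, 4]]
```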
Optionally, the storage medium is further arranged to store program code for performing the steps of: during photographing of the target scene, displaying a synthesized real-time image in the photographed picture obtained by photographing the target scene, where the synthesized real-time image is a real-time image obtained by synthesizing the real-time image of the target object with a predetermined virtual image.
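The synthesis step above can be sketched as a per-pixel "over" composite, a toy model in which None stands for a transparent pixel; this is an illustrative simplification, not the patent's actual rendering pipeline:

```python
def composite(realtime_img, virtual_img):
    """Where the real-time image of the target object is opaque it is
    shown; elsewhere the predetermined virtual image shows through."""
    return [[r if r is not None else v
             for r, v in zip(rrow, vrow)]
            for rrow, vrow in zip(realtime_img, virtual_img)]

print(composite([[1, None]], [[9, 9]]))  # -> [[1, 9]]
```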
Optionally, the storage medium is further arranged to store program code for performing the steps of: when a predetermined action is detected in the real-time image of the target object, displaying second media information corresponding to the predetermined action while the real-time image of the target object is displayed in the photographed picture obtained by photographing the target scene.
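Selecting the second media information for a detected predetermined action can be sketched as a simple lookup; the action names and effect identifiers below are invented for illustration and are not part of the patent disclosure:

```python
# Hypothetical mapping: predetermined action -> second media information
ACTION_EFFECTS = {
    "wave": "sparkle_animation",
    "nod": "heart_overlay",
}

def second_media_for(detected_action):
    """Return the media information to display alongside the real-time
    image, or None if the action has no associated effect."""
    return ACTION_EFFECTS.get(detected_action)

print(second_media_for("wave"))  # -> sparkle_animation
```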
Optionally, the storage medium is further arranged to store program code for performing the steps of: acquiring a target video obtained by photographing the target object through the first image pickup apparatus, where, when the target object is photographed through the first image pickup apparatus, the target scene is not photographed through the second image pickup apparatus; extracting, from the target video, third media information containing an image of the target object, where the first media information includes the third media information; and, during photographing of the target scene, displaying the third media information in the photographed picture obtained by photographing the target scene.
Optionally, the storage medium is further arranged to store program code for performing the steps of: during photographing of the target scene, displaying synthesized fourth media information in the photographed picture obtained by photographing the target scene, where the fourth media information is media information obtained by synthesizing the third media information with a predetermined virtual image.
Optionally, the storage medium is further arranged to store program code for performing the steps of: during photographing of the target scene, identifying a target horizontal plane in the photographed picture obtained by photographing the target scene; and displaying an image of the target object included in the first media information on the target horizontal plane.
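Once a target horizontal plane has been identified, placing the image so that it appears to rest on the plane reduces, in this simplified 2-D screen-space sketch (illustrative only; real plane detection works in 3-D), to aligning the image's bottom edge with the plane's screen row:

```python
def anchor_on_plane(plane_y, obj_h):
    """Top y-coordinate for drawing the object so that its base sits
    on the detected horizontal plane at screen row `plane_y`."""
    return plane_y - obj_h

print(anchor_on_plane(1500, 480))  # -> 1020
```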
Optionally, the storage medium is further arranged to store program code for performing the steps of: while the first media information is displayed in the photographed picture obtained by photographing the target scene, enlarging, in response to the first target operation instruction, an image of the target object included in the first media information to obtain an enlarged image, and displaying the enlarged image in the photographed picture obtained by photographing the target scene, where the target display area includes a display area of the enlarged image in the photographed picture; or reducing, in response to the first target operation instruction, the image of the target object included in the first media information to obtain a reduced image, and displaying the reduced image in the photographed picture obtained by photographing the target scene, where the target display area includes a display area of the reduced image in the photographed picture.
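The enlargement/reduction in response to the first target operation instruction (e.g. a pinch gesture) can be sketched as scaling the display proportion with clamping, so that the target display area neither vanishes nor exceeds the photographed picture; the bounds and function name are illustrative assumptions:

```python
def apply_scale(current_ratio, factor, lo=0.1, hi=1.0):
    """Scale the display proportion by `factor`, clamped to [lo, hi]."""
    return max(lo, min(hi, current_ratio * factor))

print(apply_scale(0.25, 2.0))   # -> 0.5
print(apply_scale(0.25, 10.0))  # -> 1.0 (clamped to the frame)
```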
Optionally, the storage medium is further arranged to store program code for performing the steps of: while the first media information is displayed in the photographed picture obtained by photographing the target scene, displaying, in response to a third operation instruction, the first media information at a target position, indicated by the third operation instruction, in the photographed picture obtained by photographing the target scene, where the second target operation instruction includes the third operation instruction and the target display area includes the target position.
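The repositioning in response to the third operation instruction can be sketched as centering the target display area on the indicated point, clamped so that it stays inside the photographed picture; the coordinates and names are illustrative:

```python
def move_to_tap(tap_x, tap_y, area_w, area_h, frame_w, frame_h):
    """Top-left corner that centers the target display area on the
    tapped point, clamped to the bounds of the photographed picture."""
    x = min(max(tap_x - area_w // 2, 0), frame_w - area_w)
    y = min(max(tap_y - area_h // 2, 0), frame_h - area_h)
    return x, y

print(move_to_tap(540, 960, 270, 480, 1080, 1920))  # -> (405, 720)
```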
Optionally, the storage medium is further configured to store a computer program for executing the steps included in the methods of the above embodiments; details are not repeated in this embodiment.
Alternatively, in this embodiment, those skilled in the art will understand that all or part of the steps in the methods of the above embodiments may be performed by a program instructing a terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The integrated units in the above embodiments may be stored in the above computer-readable storage medium if implemented in the form of software functional units and sold or used as independent products. Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
In the foregoing embodiments of the present invention, the description of each embodiment has its own emphasis; for a part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division of the units is merely a logical function division, and there may be another division manner in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that several improvements and modifications may be made by those skilled in the art without departing from the principles of the present invention, and such improvements and modifications shall also be deemed to fall within the protection scope of the present invention.

Claims (8)

1. A method for displaying media information, comprising:
Shooting a target object through first camera equipment of a terminal to obtain a real-time three-dimensional virtual object corresponding to the target object, wherein the real-time three-dimensional virtual object is obtained by fusing a real-time face image of the target object to a three-dimensional virtual face area of a preset three-dimensional virtual object material model;
shooting a target scene through a second image pickup device of the terminal while shooting the target object through the first image pickup device;
In the process of shooting the target scene, simultaneously displaying the real-time three-dimensional virtual object in a shooting picture obtained by shooting the target scene according to a target display proportion, wherein the target display proportion is used for indicating a proportion between a size corresponding to a target display area and a size corresponding to the shooting picture, the target display area is an area corresponding to the real-time three-dimensional virtual object in the shooting picture, the target display proportion is obtained in advance based on a reference three-dimensional virtual object and a reference shooting picture, the reference three-dimensional virtual object is composed of a face image of the reference object and the predetermined three-dimensional virtual object material model, and the reference shooting picture is obtained by shooting the reference scene; displaying the real-time three-dimensional virtual object which is synchronously executing the preset action in the shooting picture and displaying virtual special effects corresponding to the preset action around the real-time three-dimensional virtual object under the condition that the target object is executing the preset action;
responding to a first target operation instruction, adjusting the display proportion of the real-time three-dimensional virtual object in the shooting picture, and/or
displaying, in response to a second target operation instruction indicating that a target display position in the shooting picture is clicked, a process of the real-time three-dimensional virtual object moving to the target display position in the shooting picture.
2. The method of claim 1, wherein shooting the target object through the first camera equipment of the terminal to obtain the real-time three-dimensional virtual object corresponding to the target object comprises:
acquiring a real-time shooting picture obtained by shooting the target object through the first camera equipment; and extracting a real-time face image of the target object from the real-time shooting picture, and synthesizing the real-time face image with the predetermined three-dimensional virtual object material model to obtain the real-time three-dimensional virtual object.
3. The method of claim 1, wherein the virtual special effects comprise at least one of: a predetermined animation, a predetermined dynamic image, a predetermined static image.
4. A method according to any one of claims 1 to 3, wherein simultaneously displaying the real-time three-dimensional virtual object in a photographed picture obtained by photographing the target scene during photographing the target scene comprises:
in the process of shooting the target scene, identifying a target horizontal plane in a shooting picture obtained by shooting the target scene;
displaying the real-time three-dimensional virtual object on the target horizontal plane.
5. The method according to any one of claims 1 to 3, wherein adjusting, in response to the first target operation instruction, the display proportion corresponding to the target display area in which the real-time three-dimensional virtual object is displayed in the shooting picture comprises:
Responding to the first target operation instruction, amplifying the real-time three-dimensional virtual object to obtain the amplified real-time three-dimensional virtual object, and displaying the amplified real-time three-dimensional virtual object in a shooting picture obtained by shooting the target scene, wherein the target display area comprises a display area of the amplified real-time three-dimensional virtual object in the shooting picture; or alternatively
Responding to the first target operation instruction, carrying out reduction processing on the real-time three-dimensional virtual object to obtain the real-time three-dimensional virtual object after reduction processing, and displaying the real-time three-dimensional virtual object after reduction processing in a shooting picture obtained by shooting the target scene, wherein the target display area comprises a display area of the real-time three-dimensional virtual object after reduction processing in the shooting picture.
6. A display device for media information, comprising:
A first shooting unit, configured to shoot a target object through a first shooting device of a terminal, so as to obtain a real-time three-dimensional virtual object corresponding to the target object, where the real-time three-dimensional virtual object is obtained by fusing a real-time face image of the target object to a three-dimensional virtual face area of a predetermined three-dimensional virtual object material model;
A second photographing unit for photographing a target scene through a second photographing apparatus of the terminal while photographing the target object through the first photographing apparatus;
The display unit is used for simultaneously displaying the real-time three-dimensional virtual object in a shooting picture obtained by shooting the target scene according to a target display proportion, wherein the target display proportion is used for indicating the proportion between the size corresponding to a target display area and the size corresponding to the shooting picture, the target display area is an area corresponding to the real-time three-dimensional virtual object in the shooting picture, the target display proportion is obtained in advance based on a reference three-dimensional virtual object and a reference shooting picture, the reference three-dimensional virtual object is composed of a face image of the reference object and the predetermined three-dimensional virtual object material model, and the reference shooting picture is obtained by shooting the reference scene; displaying the real-time three-dimensional virtual object which is synchronously executing the preset action in the shooting picture and displaying media information corresponding to the preset action around the real-time three-dimensional virtual object under the condition that the target object is executing the preset action;
The device is further configured to adjust, in response to a first target operation instruction, the display proportion of the real-time three-dimensional virtual object in the shooting picture, and to display, in response to a second target operation instruction indicating that a target display position in the shooting picture is clicked, a process of the real-time three-dimensional virtual object moving to the target display position in the shooting picture.
7. A storage medium, characterized in that the storage medium has stored therein a computer program, wherein the computer program is arranged to execute the method of displaying media information according to any of the claims 1 to 5 when run.
8. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of displaying media information according to any of the claims 1 to 5 by means of the computer program.
CN201810091225.0A 2018-01-30 2018-01-30 Media information display method, device and storage medium Active CN108416832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810091225.0A CN108416832B (en) 2018-01-30 2018-01-30 Media information display method, device and storage medium


Publications (2)

Publication Number Publication Date
CN108416832A CN108416832A (en) 2018-08-17
CN108416832B true CN108416832B (en) 2024-05-14

Family

ID=63126659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810091225.0A Active CN108416832B (en) 2018-01-30 2018-01-30 Media information display method, device and storage medium

Country Status (1)

Country Link
CN (1) CN108416832B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353842B (en) * 2018-12-24 2024-06-28 阿里巴巴集团控股有限公司 Push information processing method and system
CN111414225B (en) * 2020-04-10 2021-08-13 北京城市网邻信息技术有限公司 Three-dimensional model remote display method, first terminal, electronic device and storage medium
CN111242107B (en) * 2020-04-26 2021-03-09 北京外号信息技术有限公司 Method and electronic device for setting virtual object in space
KR20210148074A (en) * 2020-05-26 2021-12-07 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 AR scenario content creation method, display method, device and storage medium
CN113747113A (en) * 2020-05-29 2021-12-03 北京小米移动软件有限公司 Image display method and device, electronic equipment and computer readable storage medium
CN111640197A (en) * 2020-06-09 2020-09-08 上海商汤智能科技有限公司 Augmented reality AR special effect control method, device and equipment
CN113362434A (en) * 2021-05-31 2021-09-07 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN106027900A (en) * 2016-06-22 2016-10-12 维沃移动通信有限公司 Photographing method and mobile terminal
CN106341720A (en) * 2016-08-18 2017-01-18 北京奇虎科技有限公司 Method for adding face effects in live video and device thereof
CN107231524A (en) * 2017-05-31 2017-10-03 珠海市魅族科技有限公司 Image pickup method and device, computer installation and computer-readable recording medium
CN107343211A (en) * 2016-08-19 2017-11-10 北京市商汤科技开发有限公司 Method of video image processing, device and terminal device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP5145444B2 (en) * 2011-06-27 2013-02-20 株式会社コナミデジタルエンタテインメント Image processing apparatus, image processing apparatus control method, and program



Similar Documents

Publication Publication Date Title
CN108416832B (en) Media information display method, device and storage medium
US10684467B2 (en) Image processing for head mounted display devices
CN107315470B (en) Graphic processing method, processor and virtual reality system
KR101295471B1 (en) A system and method for 3D space-dimension based image processing
US20180373413A1 (en) Information processing method and apparatus, and program for executing the information processing method on computer
US8956227B2 (en) Storage medium recording image processing program, image processing device, image processing system and image processing method
US20050206610A1 (en) Computer-"reflected" (avatar) mirror
US20090202114A1 (en) Live-Action Image Capture
CN106575450A (en) Augmented reality content rendering via albedo models, systems and methods
CN111862348B (en) Video display method, video generation method, device, equipment and storage medium
CN111833457A (en) Image processing method, apparatus and storage medium
JP6563580B1 (en) Communication system and program
CN107862718A (en) 4D holographic video method for catching
CN114625468B (en) Display method and device of augmented reality picture, computer equipment and storage medium
US20220405996A1 (en) Program, information processing apparatus, and information processing method
CN108932750A (en) Methods of exhibiting, device, electronic equipment and the storage medium of augmented reality
CN112738498A (en) Virtual tour system and method
JP7344084B2 (en) Content distribution system, content distribution method, and content distribution program
Zaitseva et al. The development of mobile applications for the capturing and visualization of stereo and spherical panoramas
CN114333051A (en) Image processing method, virtual image processing method, image processing system and equipment
JP2024054570A (en) Synthetic video delivery system
WO2024015220A1 (en) Method and application for animating computer generated images
JP2021002402A (en) Information processor
CN116912463A (en) 3D avatar processing method, apparatus, electronic device, and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant