CN111625100A - Method and device for presenting picture content, computer equipment and storage medium - Google Patents

Method and device for presenting picture content, computer equipment and storage medium

Info

Publication number
CN111625100A
Authority
CN
China
Prior art keywords
picture
virtual animation
information
picture content
real scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010493240.5A
Other languages
Chinese (zh)
Inventor
孙红亮
李炳泽
武明飞
刘小兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Sensetime Technology Development Co Ltd
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co Ltd filed Critical Zhejiang Shangtang Technology Development Co Ltd
Priority to CN202010493240.5A priority Critical patent/CN111625100A/en
Publication of CN111625100A publication Critical patent/CN111625100A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a method, an apparatus, a computer device, and a storage medium for presenting picture content. The method comprises: acquiring a real scene image; determining, based on the real scene image, picture content information of a physical picture displayed in the real scene; acquiring virtual animation information; and using the virtual animation information to control an AR device to present the AR effect corresponding to the virtual animation information in an associated area corresponding to the picture content of the physical picture.

Description

Method and device for presenting picture content, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a method and an apparatus for presenting picture content, a computer device, and a storage medium.
Background
At present, exhibitions of artworks such as pictures mostly adopt a traditional display mode: static pictures and other artworks are placed in an exhibition hall for people to view. Statically displayed artworks have a single form of expression, which makes it difficult for viewers to understand the works deeply, and this display mode lacks interaction with the users attending the exhibition.
Disclosure of Invention
The embodiments of the present disclosure provide at least a method and an apparatus for presenting picture content, a computer device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a method for presenting picture content, including:
acquiring a real scene image; determining, based on the real scene image, picture content information of a physical picture displayed in the real scene; acquiring virtual animation information; and using the virtual animation information to control an AR device to present the AR effect corresponding to the virtual animation information in the associated area corresponding to the picture content of the physical picture.
In the embodiments of the present disclosure, an AR effect combined with a virtual animation can be presented in the associated area of the picture content of the physical picture, breaking through the limitations of traditional two-dimensional and enclosed three-dimensional displays and giving the user the visual experience of a virtual animation "jumping off the paper". When applied to the display of pictures in an exhibition hall, the picture content is no longer confined to the paper or to a static display effect; the presentation forms of the picture content are enriched, interaction with the user is strengthened, and the user's visual experience is further improved.
In some embodiments of the disclosure, before the virtual animation information is used to control the AR device to present the AR effect corresponding to the virtual animation information in the associated area corresponding to the picture content of the physical picture, the method further comprises:
detecting, based on the real scene image, that a target trigger action occurs in the real scene.
In this embodiment, interaction with the user can be increased: the presentation of the AR effect is triggered by the user's trigger action, which further improves the user's visual experience and interactive interest and achieves a better AR presentation effect.
In some embodiments of the present disclosure, the obtaining of the virtual animation information includes:
acquiring virtual animation information matched with the target trigger action from a preset virtual animation library, wherein the virtual animation library comprises virtual animation information respectively corresponding to multiple trigger actions.
In this embodiment, a binding relationship between virtual animation information and trigger actions can be established, so that a user can trigger virtual animation information of interest in a personalized manner; a personalized AR effect can thus be presented, further improving the user experience.
In some embodiments of the present disclosure, the method further comprises: identifying an action indication direction of the target trigger action; and determining, based on the action indication direction, an associated area corresponding to the picture content of the physical picture.
In this embodiment, the associated area in which the AR effect is to be presented may also be determined according to the action indication direction of the target trigger action, so that the AR effect is presented in the area the user is interested in, further improving the user's visual viewing experience.
In some embodiments of the present disclosure, the obtaining of the virtual animation information includes:
acquiring virtual animation information matched with the picture content information of the physical picture from a preset virtual animation library, wherein the virtual animation library comprises virtual animation information respectively corresponding to at least one type of picture content information.
In this embodiment, a virtual animation matching the current picture content can be determined and presented automatically based on the binding relationship between picture content information and virtual animation information.
In some embodiments of the present disclosure, obtaining virtual animation information includes:
acquiring a plurality of pieces of virtual animation information matched with the picture content information of the physical picture; and determining, from the plurality of pieces, the virtual animation information corresponding to the target trigger action.
In this embodiment, the picture content information and the user's trigger action can be combined to determine the virtual animation information to be presented, which makes the acquisition of virtual animation information richer and improves the interest of the AR effect presentation.
In some embodiments of the present disclosure, the determining picture content information of the physical picture displayed in the real scene based on the real scene image comprises:
matching the real scene image with the reference images in a preset picture library to determine a reference image matched with the physical picture in the real scene; and determining the picture content information of the matched reference image as the picture content information of the physical picture.
In some embodiments of the present disclosure, the preset picture library includes at least one set of reference images, each set of reference images includes at least one reference image representing the same physical picture, and picture content information of each reference image is represented by a picture content identifier.
In this embodiment, the reference images in the preset picture library can be matched with the physical picture presented in the real scene, so that the picture content of the physical picture can be accurately identified.
In some embodiments of the present disclosure, the associated area corresponding to the picture content of the physical picture is a first area where the picture content is located and/or a second area having a preset relative position relationship with the picture content.
In a second aspect, embodiments of the present disclosure also provide a device for presenting picture content, including:
the first acquisition module is used for acquiring a real scene image;
the first determining module is used for determining picture content information of the physical picture displayed in the real scene based on the real scene image;
the second acquisition module is used for acquiring the virtual animation information;
and the presentation module is used for using the virtual animation information to control an AR device to present the AR effect corresponding to the virtual animation information in the associated area corresponding to the picture content of the physical picture.
In some embodiments of the present disclosure, the apparatus further comprises:
and the detection module is used for detecting that a target trigger action appears in the real scene based on the real scene image.
In some embodiments of the disclosure, the second obtaining module, when obtaining the virtual animation information, is specifically configured to:
acquiring virtual animation information matched with the target trigger action from a preset virtual animation library, wherein the virtual animation library comprises virtual animation information respectively corresponding to multiple trigger actions.
In some embodiments of the present disclosure, the apparatus further comprises:
the second determination module is used for identifying an action indication direction of the target trigger action, and determining, based on the action indication direction, an associated area corresponding to the picture content of the physical picture.
In some embodiments of the disclosure, the second obtaining module, when obtaining the virtual animation information, is specifically configured to:
acquiring virtual animation information matched with the picture content information of the physical picture from a preset virtual animation library, wherein the virtual animation library comprises virtual animation information respectively corresponding to at least one type of picture content information.
In some embodiments of the disclosure, the second obtaining module, when obtaining the virtual animation information, is specifically configured to:
acquiring a plurality of pieces of virtual animation information matched with the picture content information of the physical picture; and determining, from the plurality of pieces, the virtual animation information corresponding to the target trigger action.
In some embodiments of the disclosure, the first determining module, when determining the picture content information of the physical picture displayed in the real scene based on the real scene image, is specifically configured to:
matching the real scene image with the reference images in a preset picture library to determine a reference image matched with the physical picture in the real scene; and determining the picture content information of the matched reference image as the picture content information of the physical picture.
In some embodiments of the present disclosure, the preset picture library includes at least one set of reference images, each set of reference images includes at least one reference image representing the same physical picture, and picture content information of each reference image is represented by a picture content identifier.
In some embodiments of the present disclosure, the associated area corresponding to the picture content of the physical picture is a first area where the picture content is located and/or a second area having a preset relative position relationship with the picture content.
In a third aspect, the present disclosure also provides a computer device comprising a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the processor performs the steps in the first aspect or in any one of its possible implementations.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed, performs the steps in the first aspect or in any one of its possible implementations.
The method, apparatus, computer device, and storage medium provided by the embodiments of the present disclosure can identify, based on an image of a real scene, the picture content information of a physical picture in the real scene, acquire virtual animation information, and use the virtual animation information to present the corresponding AR effect in the associated area corresponding to the picture content of the physical picture. In this way, an AR effect combined with a virtual animation can be presented in the associated area of the picture content of the physical picture, breaking through the limitations of traditional two-dimensional and enclosed three-dimensional displays and giving the user the visual experience of a virtual animation "jumping off the paper". When applied to the display of pictures in an exhibition hall, the picture content is no longer confined to the paper or to a static display effect; the presentation forms of the picture content are enriched, interaction with the user is strengthened, and the user's visual experience is further improved.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those of ordinary skill in the art can derive other related drawings from them without inventive effort.
FIG. 1 shows a flow chart of a method for presenting picture content provided by an embodiment of the present disclosure;
FIG. 2A shows a schematic diagram of picture content presentation provided by an embodiment of the present disclosure;
FIG. 2B shows a schematic diagram of another picture content presentation provided by an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of an apparatus for presenting picture content provided by an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present disclosure, not all of them. The components of the embodiments, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of configurations. Therefore, the following detailed description of the embodiments is not intended to limit the scope of the claimed disclosure but merely represents selected embodiments of the disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort shall fall within the protection scope of the present disclosure.
Augmented Reality (AR) technology superimposes simulated virtual information (visual information, sound, touch, etc.) on the real world, so that the real environment and virtual objects are presented in the same picture or space in real time.
The embodiments of the present disclosure may be applied to any computer device supporting AR technology (such as a mobile phone, a tablet, or AR glasses), to a server, or to a combination thereof. Where the disclosure is applied to a server, the server may be connected to other computer devices that have a communication function and a camera; the connection may be wired or wireless, and the wireless connection may be, for example, a Bluetooth connection or a Wireless Fidelity (Wi-Fi) connection.
For example, presenting an AR effect in an AR device may be understood as presenting, in the AR device, a virtual object merged into the real scene. This may be done by directly rendering the presentation content of the virtual object so that it merges with the real scene — for example, presenting a set of virtual tea ware whose display effect is that of being placed on a real desktop in the real scene — or by presenting a merged display picture obtained by fusing the presentation content of the virtual object with an image of the real scene. Which presentation manner is chosen depends on the device type of the AR device and on the picture presentation technology adopted. For example, since the real scene (not an imaged real scene picture) can be seen directly through AR glasses, AR glasses can adopt the presentation manner of directly rendering the presentation picture of the virtual object; for mobile terminal devices such as mobile phones and tablet computers, what is displayed is a picture formed by imaging the real scene, so the AR effect can be displayed by fusing the real scene image with the presentation content of the virtual object.
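For the fusion manner used by mobile terminal devices, a minimal sketch of compositing one rendered animation frame onto the real scene image is given below, assuming OpenCV and NumPy; the function name, the BGRA frame format, and the region convention are illustrative assumptions rather than the disclosed implementation:

```python
import cv2  # assumed dependency; any image library with array access would do
import numpy as np

def fuse_animation_frame(scene_bgr: np.ndarray, anim_bgra: np.ndarray,
                         top_left: tuple) -> np.ndarray:
    """Alpha-blend one rendered virtual-object frame onto the real scene image.

    scene_bgr: camera frame of the real scene (H x W x 3, uint8 BGR).
    anim_bgra: rendered frame of the virtual animation (h x w x 4, with alpha).
    top_left:  (x, y) of the associated area inside the scene image; the
               area is assumed to lie fully inside the frame.
    """
    x, y = top_left
    h, w = anim_bgra.shape[:2]
    roi = scene_bgr[y:y + h, x:x + w].astype(np.float32)
    rgb = anim_bgra[:, :, :3].astype(np.float32)
    alpha = anim_bgra[:, :, 3:4].astype(np.float32) / 255.0
    # per-pixel compositing: virtual content where alpha is high, scene elsewhere
    scene_bgr[y:y + h, x:x + w] = (alpha * rgb + (1.0 - alpha) * roi).astype(np.uint8)
    return scene_bgr
```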
A method of presenting picture contents according to an embodiment of the present disclosure will be described in detail.
Referring to fig. 1, a flow chart of a method for presenting picture content according to an embodiment of the present disclosure is shown, which includes the following steps:
and S101, acquiring a real scene image.
In the embodiments of the present disclosure, the method for presenting picture content can be applied to an AR device or to a server. When the presentation method is applied to the AR device, an image acquisition component in the AR device (such as a camera) may be used to acquire a real scene image of the real scene: a single-frame real scene image may be acquired by taking a photograph, or consecutive frames of real scene images may be acquired by shooting a video. When the presentation method is applied to a server, the AR device or another computer device with an image acquisition function may send the acquired single-frame or consecutive multi-frame real scene images to the server. The present disclosure does not limit the specific manner of image acquisition or the number of acquired frames.
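A minimal sketch of this acquisition step, assuming OpenCV; whether a single frame or consecutive frames are needed, both reduce to reading from the device camera:

```python
import cv2

def capture_frames(camera_index: int = 0, max_frames: int = 100):
    """Yield consecutive real scene frames from the device camera."""
    cap = cv2.VideoCapture(camera_index)
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()   # one real scene image per read
            if not ok:
                break
            yield frame
    finally:
        cap.release()
```

A single-frame acquisition is then simply next(capture_frames(max_frames=1)).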
For example, a user in an exhibition hall may capture images of the various exhibition rooms or exhibits in real time and thereby watch the AR effect of the physical pictures in the hall after virtual objects are superimposed on them.
S102, determining picture content information of the physical picture displayed in the real scene based on the real scene image.
The real scene image in the embodiments of the present disclosure refers to an image of the real scene captured by the AR device or another computer device. The real scene image may include at least one physical object of the real scene. For example, for a real scene image taken in an exhibition hall, the physical objects included in it may be at least one exhibit in the hall, such as a physical picture or a physical sculpture.
The physical picture in the embodiments of the present disclosure refers to a picture that actually exists in the real scene represented by the real scene image — for example, a picture displayed in an exhibition hall, such as a landscape picture or a portrait. The carrier medium of the physical picture may be paper, an electronic screen, or another form; the present disclosure does not limit the carrier medium. The picture content depicted in the physical picture may be blank or actual picture content, and the embodiments of the present disclosure do not limit the specific form or style of the picture content or of the depicted objects.
In the embodiments of the present disclosure, the picture content of the physical picture can be described by picture content information. Illustratively, the picture content information may be represented by a picture content identifier, the form of which is not particularly limited: the picture content information may be described by numeric identifiers such as "1, 2, 3, 4", by letter identifiers such as "A, B, C, D", or by textual identifiers such as "stone" or "tree". Since the picture content information used to describe picture content is various, it is not exhaustively listed in the embodiments of the present disclosure.
For example, the physical picture may be marked with a corresponding physical identifier, and recognizing this identifier yields the picture identifier of the physical picture. For example, the physical pictures displayed on a wall of an exhibition hall are numbered, and each number serves as the picture identifier of the corresponding physical picture. Alternatively, an image recognition technique based on deep learning may be used to directly recognize the physical picture presented in the real scene image and obtain the corresponding picture identifier.
In some embodiments of the present disclosure, the picture content information of the physical picture displayed in the real scene may be determined in the following manners:
In the first manner, the real scene image is matched with the reference images in a preset picture library to determine a target reference image matched with the physical picture in the real scene; the preset picture content information corresponding to the target reference image is then determined as the picture content information of the physical picture.
When the presentation method is applied to the AR device, the AR device can complete the matching process locally or in the cloud: the AR device may upload the real scene image to a cloud server, the server completes the matching process, and the AR device receives the target reference image matched with the physical picture returned by the server.
When the presentation method is applied to a server, the server may match the reference images in the preset picture library with the real scene image to determine the matched target reference image.
The preset picture library may include at least one group of reference images, and each group includes at least one reference image representing the same physical picture. Reference images in different groups represent different physical pictures, while reference images in the same group represent the same physical picture. The picture content information of each reference image may be represented by a picture content identifier.
To achieve accurate matching, when the picture library is constructed, reference images acquired by photographing the same physical picture in different shooting modes can be added to the group of reference images representing that physical picture. For example, different shooting modes may be shooting under different illumination intensities and at different shooting angles.
For example, if 10 physical pictures need to be displayed in an exhibition room, 10 groups of reference images can be added to the preset picture library: each physical picture is photographed in different shooting modes, and the resulting reference images representing that physical picture are used as the group of reference images corresponding to it.
In some embodiments of the disclosure, a correspondence may be established in advance between each group of reference images in the preset picture library and the picture content information of the physical picture represented by that group, so that the physical picture represented by each group of reference images has its own corresponding picture content information.
In an exemplary matching process, the reference image most similar to the physical picture photographed in the real scene may be found from the preset picture library, for example through similarity comparison; the most similar reference image is taken as the target reference image matched with the physical picture in the real scene, and the picture content information corresponding to the target reference image is then determined as the picture content information of the physical picture. For example, if the picture content information of the group containing the target reference image is represented by "A", it can be determined that the picture content information of the physical picture in the real scene is also represented by "A".
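As one concrete form the similarity comparison could take, the following sketch matches ORB feature descriptors between the real scene image and each reference image and returns the picture content tag of the best-scoring reference. It assumes OpenCV; the library layout (a list of (tag, image) pairs) and the distance threshold are illustrative assumptions rather than the disclosed implementation:

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_picture_content(scene_img, picture_library):
    """Return the picture content tag of the most similar reference image.

    picture_library: list of (tag, reference_image) pairs; all reference
    images in one group (same physical picture) carry the same tag.
    """
    gray = cv2.cvtColor(scene_img, cv2.COLOR_BGR2GRAY)
    _, scene_des = orb.detectAndCompute(gray, None)
    if scene_des is None:
        return None
    best_tag, best_score = None, 0
    for tag, ref_img in picture_library:
        ref_gray = cv2.cvtColor(ref_img, cv2.COLOR_BGR2GRAY)
        _, ref_des = orb.detectAndCompute(ref_gray, None)
        if ref_des is None:
            continue
        matches = matcher.match(scene_des, ref_des)
        # count low-distance descriptor matches as the similarity score
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score:
            best_tag, best_score = tag, score
    return best_tag
```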
In the second manner, a pre-trained target detection model is applied to the real scene image to determine the image area where the physical picture is located, and that image area is then recognized.
The target detection model may be a neural network model. It can be trained on image samples annotated with the image area where the physical picture is located, and the trained model can accurately identify the image area of a physical picture in the real scene.
For example, recognizing the image area where the physical picture is located may consist of recognizing an identifier of the physical picture, such as its number; based on the recognition result, the number of the physical picture can be obtained as its picture content information.
For example, after the image area where the physical picture is located in the real scene image has been determined, a pre-trained picture content detection model may be used to recognize that area and obtain the picture content information of the physical picture. The picture content detection model can be trained in advance on image samples annotated with the picture content information of physical pictures, and the trained model can accurately predict the picture content information of a physical picture.
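The two-stage pipeline of this second manner (locate the picture area, then recognize its content) can be sketched as follows; the detector and content_model objects stand in for the pre-trained target detection model and picture content detection model, and their predict interfaces are hypothetical, not a real library API:

```python
def recognize_picture_content(scene_img, detector, content_model):
    """Locate the physical picture in the real scene image, then
    recognize the picture content information of that area."""
    box = detector.predict(scene_img)   # hypothetical: returns (x, y, w, h) or None
    if box is None:
        return None
    x, y, w, h = box
    picture_area = scene_img[y:y + h, x:x + w]
    # hypothetical: returns a picture content identifier such as "A" or "stone"
    return content_model.predict(picture_area)
```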
S103, acquiring virtual animation information.
In the embodiments of the present disclosure, the virtual animation information may be a virtual animation video rendered by a rendering tool, the rendering parameters required to generate such a video, or two-dimensional or three-dimensional model parameters of a virtual object in multiple poses, with which the animation effects of the virtual object in different poses can be rendered. For example, when the virtual object is a virtual character or a virtual animal, the model parameters in the virtual animation information may include facial key points, limb key points, and the like of the virtual character or animal.
The content presented by the virtual animation information is not limited in the embodiments of the present disclosure. Illustratively, animation effects of a virtual object in different poses may be presented. The virtual object may be two-dimensional or three-dimensional, and the specific thing it represents may be determined by the actual scene: it may be any one or any combination of a virtual character, a virtual animal, a virtual article, a virtual building, a virtual plant, a virtual sticker, and virtual picture content, which the present disclosure does not limit.
In the embodiments of the present disclosure, there are various ways to obtain the virtual animation information; some possible implementations follow.
In the first manner, the virtual animation information is acquired through a trigger action.
In a specific implementation, whether a target trigger action occurs in the real scene can be detected based on the real scene image; when a target trigger action occurs in the real scene, virtual animation information matched with the target trigger action is acquired from a preset virtual animation library.
The virtual animation library includes multiple kinds of virtual animation information, and the correspondence between virtual animation information and trigger actions can be established in advance, so that the library contains the virtual animation information corresponding to the various trigger actions.
For example, the trigger action may be any action by which the user triggers the presentation of the AR effect, including but not limited to any one or combination of a gesture trigger action, a body posture trigger action, and an expression trigger action. A trigger action may be recognized by performing image recognition on the real scene image — for example, performing action detection on the real scene image with an action detection model to obtain an action detection result, where the action detection model can be trained in advance on real scene sample images annotated with action types.
Illustratively, the virtual animation information is bound to the target trigger action, so that once the target trigger action is recognized, the corresponding virtual animation information can be acquired automatically and the AR effect presented based on it.
In the second manner, the virtual animation information is obtained through the picture content information.
In a specific implementation, the virtual animation information matched with the picture content information of the physical picture can be acquired from a preset virtual animation library, where the library includes virtual animation information respectively corresponding to at least one type of picture content information.
For example, a correspondence between the picture content information of physical pictures and virtual animation information may be established in advance: each kind of picture content information may correspond to at least one kind of virtual animation information, or at least one kind of picture content information may correspond to one kind of virtual animation information. For example, when the picture content information is a character, the corresponding virtual animation information may be a dynamic special effect of the character's face.
In the third manner, the virtual animation information is acquired by combining the picture content information and the trigger action.
In a specific implementation, a plurality of pieces of virtual animation information matched with the picture content information of the physical picture can be acquired first, and the virtual animation information corresponding to the target trigger action is then determined from among them.
The third manner is a combination of the first two: a plurality of matched pieces of virtual animation information are preliminarily determined through the picture content information, and the corresponding virtual animation information is then triggered by the user's action, which strengthens the interaction with the user.
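The three acquisition manners can be sketched over a toy virtual animation library as follows; the dictionary layout, keys, and animation names are illustrative assumptions, not part of the disclosure:

```python
# Bindings of animation ids to trigger actions and to picture content tags.
ANIM_BY_TRIGGER = {"wave": "anim_rain", "point": "anim_jump"}
ANIM_BY_CONTENT = {"pond": ["anim_rain", "anim_ripple"], "stones": ["anim_jump"]}

def get_virtual_animation(content_tag=None, trigger=None):
    """First manner: trigger only. Second: picture content only. Third: both."""
    if content_tag and trigger:                       # third manner
        candidates = ANIM_BY_CONTENT.get(content_tag, [])
        anim = ANIM_BY_TRIGGER.get(trigger)
        return anim if anim in candidates else None
    if trigger:                                       # first manner
        return ANIM_BY_TRIGGER.get(trigger)
    if content_tag:                                   # second manner
        candidates = ANIM_BY_CONTENT.get(content_tag)
        return candidates[0] if candidates else None
    return None
```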
In the above three manners, the binding relationships between the various kinds of virtual animation information in the virtual animation library and the trigger actions and/or picture content information can be set freely according to the presentation requirements of the AR effect in the actual application scenario, which the present disclosure does not limit.
When the presentation method is applied to the AR device, the virtual animation information may be acquired locally or from the cloud, the virtual animation information being stored locally or in the cloud accordingly. When the presentation method is applied to a server, the server may retrieve the virtual animation information directly from its stored virtual animation information or from other network devices.
S104, using the virtual animation information to control the AR device to present the AR effect corresponding to the virtual animation information in the associated area corresponding to the picture content of the physical picture.
When the presentation method is applied to the AR device, the AR device may directly render the virtual animation information with a rendering tool and then present the AR effect corresponding to the virtual animation information in the associated area corresponding to the picture content of the physical picture. Alternatively, if the virtual animation information is an already rendered virtual animation, the AR device may directly present it in the associated area corresponding to the picture content.
When the presentation method is applied to a server, the server may send the virtual animation information to the AR device, which completes the rendering and presentation; or the server may render the virtual animation information with a rendering tool and send the generated virtual animation to the AR device, which then presents it; or the server may obtain the rendered virtual animation from another network device and send it to the AR device for presentation.
In the embodiment of the disclosure, the associated area corresponding to the picture content of the physical picture may be a first area where the picture content is located and/or a second area having a preset relative position relationship with the picture content.
The first area may be the area where the picture content depicted by the physical picture is located; the virtual animation can be superimposed in the first area, that is, on the depicted picture content, to present an AR effect combining the virtual and the real. For example, as shown in fig. 2A, assuming the depicted picture content is "several stones", a virtual animation of a "jumping character" can be obtained through the matching and virtual-animation-acquisition processes described above; superimposing the virtual animation on the picture content yields the AR effect of "a character jumping along a path formed by the stones".
The second area may be an area outside the picture content depicted by the physical picture, one that has a preset relative positional relationship with the area where the physical picture is located — for example, above, below, or in front of it. Presenting the virtual animation in the second area adds a virtual-object special effect on top of the picture content presented by the original physical picture. For example, as shown in fig. 2B, if the depicted picture content is "a pond", a "raining" virtual animation can be obtained through the matching and acquisition processes described above, and combining the picture content with the virtual animation yields the AR effect of "rain falling over a pond".
In some embodiments of the present disclosure, the associated area corresponding to the picture content may further be determined in combination with a trigger action of the user, that is, based on the area indicated by the user. The associated area may, of course, be the first area, the second area, or a combination of the two, which the present disclosure does not limit.
For example, after the target trigger action is detected, the action indication direction of the target trigger action is further recognized, and the associated area corresponding to the picture content of the physical picture is then determined based on that direction. Based on the action indication direction, the relative positional relationship between the area indicated by the user and the picture content of the physical picture can be determined, and a second area conforming to that relationship can then be determined. For example, if recognition of the real scene image detects that the target trigger action points to the right of the physical picture, a set area range to the right of the physical picture can be determined as the associated area of the picture content. The virtual animation can then be displayed in that associated area and combined with the real picture content to present the AR effect.
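A sketch of deriving the second area from the recognized action indication direction; the margin value and the convention that the associated area mirrors the picture's size are illustrative assumptions:

```python
def associated_area(picture_box, direction, margin=20):
    """picture_box: (x, y, w, h) of the picture content in the scene image.
    direction: "left" | "right" | "above" | "below", recognized from the
    target trigger action. Returns (x, y, w, h) of the area in which the
    AR effect is presented; defaults to the first area (the picture itself)."""
    x, y, w, h = picture_box
    offsets = {"right": (x + w + margin, y),
               "left":  (x - w - margin, y),
               "above": (x, y - h - margin),
               "below": (x, y + h + margin)}
    if direction in offsets:
        nx, ny = offsets[direction]
        return (nx, ny, w, h)
    return picture_box
```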
The method, apparatus, computer device, and storage medium provided by the embodiments of the present disclosure can identify, based on an image of a real scene, the picture content information of a physical picture in the real scene, acquire virtual animation information, and use the virtual animation information to present the corresponding AR effect in the associated area corresponding to the picture content of the physical picture. In this way, an AR effect combined with a virtual animation can be presented in the associated area of the picture content of the physical picture, breaking through the limitations of traditional two-dimensional and enclosed three-dimensional displays and giving the user the visual experience of a virtual animation "jumping off the paper". When applied to the display of pictures in an exhibition hall, the picture content is no longer confined to the paper or to a static display effect; the presentation forms of the picture content are enriched, interaction with the user is strengthened, and the user's visual experience is further improved.
The following is an illustration of a specific application scenario of the disclosed embodiments.
First, a paper picture library (corresponding to the preset picture library above) is established.
The physical pictures displayed in the exhibition hall are scanned, the scanned images are stored in the cloud or locally as reference images, and the picture content information of each displayed picture is entered and used as the label (tag) of each scanned reference image or each group of reference images.
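A minimal sketch of this entry step, reusing the (tag, reference image) pair layout assumed in the earlier matching sketch; all names are illustrative:

```python
picture_library = []  # list of (tag, reference_image) pairs, stored in the cloud or locally

def enroll_physical_picture(tag: str, scans: list) -> None:
    """scans: reference images of one displayed picture taken under
    different illumination intensities and shooting angles; every scan
    of the same physical picture shares one picture content tag."""
    for reference_image in scans:
        picture_library.append((tag, reference_image))
```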
Then, in the application phase, the physical picture on which effects are to be superimposed may be scanned with a portable mobile device that has a camera, such as a mobile phone; the video frame data or picture data captured by the camera is sent to the cloud server. After receiving the data, the server matches it against the previously established picture library and, on a successful match, returns the tag of the target reference image to the client.
Further, after receiving the tag, the client may download from the cloud, or read locally, the virtual animation information corresponding to the tag; alternatively, the client may detect whether a target trigger action occurs in the real scene and then acquire the virtual animation information corresponding to that action. The AR effect with the virtual animation superimposed can then be presented, using the virtual animation information, in the associated area corresponding to the picture content of the physical picture.
If a picture is to present an AR effect after being scanned with a mobile phone, its picture content can first be entered into the picture library in the cloud. To improve recognition efficiency, the picture content can be collected as reference images under different illumination and different shooting angles during entry. In addition, the virtual animation information of the virtual animations to be presented in their various forms can also be collected, and a unique identifier (id) is allocated to the virtual animation information of each form. The correspondence between the identifier (id) and the picture content label (tag1), and/or between the identifier (id) and the label (tag2) of the target trigger action, is recorded in advance.
For example, when a user wants to scan a picture to present an AR effect, the picture can be scanned in the manner described above. After the server successfully matches it against the picture content in the preset picture library, the tag1 of the picture content can be identified, or tag2 can be obtained directly; the corresponding id is then found based on tag1 and/or tag2, the virtual animation information corresponding to that id is fetched from the cloud or locally, and the corresponding AR effect is presented in the associated area of the picture content.
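The pre-recorded id/tag correspondence and its lookup can be sketched as follows; the table entries, tag values, and the "most specific binding first" policy are illustrative assumptions:

```python
# (tag1, tag2) -> animation id; None means the tag is not required for the binding.
ID_BY_TAGS = {
    ("pond", "wave"): "id_002",   # picture content + trigger action binding
    ("pond", None):   "id_001",   # picture-content-only binding
    (None, "point"):  "id_003",   # trigger-action-only binding
}

def resolve_animation_id(tag1=None, tag2=None):
    """Return the id bound to tag1 and/or tag2, preferring the most
    specific binding; the id is then used to fetch the corresponding
    virtual animation information from the cloud or from local storage."""
    for key in ((tag1, tag2), (tag1, None), (None, tag2)):
        if key in ID_BY_TAGS:
            return ID_BY_TAGS[key]
    return None
```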
For example, an AR effect of "three-dimensional hair waving" may be displayed in the area where a portrait picture depicts hair, or "a character jumping along a path formed by stones" may be displayed above a picture in which several stones are drawn. This breaks through the limitations of traditional two-dimensional and enclosed three-dimensional displays and achieves an effect fused and superimposed with the real scene; the user experience is greatly improved, while interaction with the user adds interest.
It will be understood by those skilled in the art that, in the above method of the present disclosure, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same technical concept, the embodiments of the present disclosure further provide an apparatus for presenting picture content corresponding to the method above. Since the principle by which the apparatus solves the problem is similar to that of the method for presenting picture content in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are omitted.
Referring to fig. 3, there is shown a schematic diagram of an apparatus for presenting picture content provided in an embodiment of the present disclosure, the apparatus including: a first obtaining module 31, a first determining module 32, a second obtaining module 33 and a presenting module 34.
A first obtaining module 31, configured to obtain a real scene image;
a first determining module 32, configured to determine picture content information of a physical picture displayed in a real scene based on the real scene image;
a second obtaining module 33, configured to obtain virtual animation information;
and the presenting module 34 is configured to control, by using the virtual animation information, the AR device to present an AR effect corresponding to the virtual animation information in an associated area corresponding to the picture content of the physical picture.
In some embodiments of the present disclosure, the apparatus further comprises:
a detecting module 35, configured to detect that a target trigger action occurs in the real scene based on the real scene image.
In some embodiments of the present disclosure, the second obtaining module 33, when obtaining the virtual animation information, is specifically configured to:
acquiring virtual animation information matched with the target trigger action from a preset virtual animation library, wherein the virtual animation library comprises virtual animation information respectively corresponding to multiple trigger actions.
In some embodiments of the present disclosure, the apparatus further comprises:
a second determining module 36, configured to identify an action indication direction of the target trigger action, and determine, based on the action indication direction, an associated area corresponding to the picture content of the physical picture.
In some embodiments of the present disclosure, the second obtaining module 33, when obtaining the virtual animation information, is specifically configured to:
acquiring virtual animation information matched with the picture content information of the physical picture from a preset virtual animation library, wherein the virtual animation library comprises virtual animation information respectively corresponding to at least one type of picture content information.
In some embodiments of the present disclosure, the second obtaining module 33, when obtaining the virtual animation information, is specifically configured to:
acquiring a plurality of pieces of virtual animation information matched with the picture content information of the physical picture; and determining, from the plurality of pieces, the virtual animation information corresponding to the target trigger action.
In some embodiments of the present disclosure, the first determining module 32, when determining the picture content information of the physical picture displayed in the real scene based on the real scene image, is specifically configured to:
matching the real scene image with the reference images in a preset picture library to determine a reference image matched with the physical picture in the real scene; and determining the picture content information of the matched reference image as the picture content information of the physical picture.
In some embodiments of the present disclosure, the preset picture library includes at least one set of reference images, each set of reference images includes at least one reference image representing the same physical picture, and picture content information of each reference image is represented by a picture content identifier.
In some embodiments of the present disclosure, the associated area corresponding to the picture content of the physical picture is a first area where the picture content is located and/or a second area having a preset relative position relationship with the picture content.
In some embodiments, the functions of the apparatus provided in the embodiments of the present disclosure, or the modules it contains, may be used to execute the method described in the above method embodiments; for specific implementation, refer to the description of the method embodiments, which, for brevity, is not repeated here.
Based on the same technical concept, the embodiments of the disclosure also provide a computer device. Referring to fig. 4, a schematic structural diagram of a computer device provided in an embodiment of the present disclosure includes: a processor 11 and a memory 12. The memory 12 stores machine-readable instructions executable by the processor 11, and when the computer device runs, the machine-readable instructions are executed by the processor 11 to perform the following steps:
acquiring a real scene image; determining, based on the real scene image, picture content information of a physical picture displayed in the real scene; acquiring virtual animation information; and using the virtual animation information to control an AR device to present the AR effect corresponding to the virtual animation information in the associated area corresponding to the picture content of the physical picture.
For the specific execution process of the above instructions, refer to the steps of the method for presenting picture content described in the embodiments of the present disclosure, which are not repeated here.
Furthermore, the disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program, which, when executed by a processor, performs the steps of the method for rendering picture content described in the above method embodiments.
The computer program product of the method for presenting picture content provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the steps of the method for presenting picture content described in the above method embodiments, for which reference may be made to those embodiments; details are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only a logical division; there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on this understanding, the technical solution of the present disclosure may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above are only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited to them; any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed here, and these shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A method for rendering picture content, comprising:
acquiring a real scene image;
determining picture content information of a physical picture displayed in a real scene based on the real scene image;
acquiring virtual animation information;
and using the virtual animation information to control an AR device to present the AR effect corresponding to the virtual animation information in an associated area corresponding to the picture content of the physical picture.
2. The method of claim 1, wherein before using the virtual animation information to control an AR device to present the AR effect corresponding to the virtual animation information in an associated area corresponding to the picture content of the physical picture, the method further comprises:
detecting, based on the real scene image, that a target trigger action occurs in the real scene.
3. The method of claim 2, wherein the obtaining virtual animation information comprises:
acquiring virtual animation information matched with the target trigger action from a preset virtual animation library, wherein the virtual animation library comprises virtual animation information respectively corresponding to multiple trigger actions.
4. The method according to claim 2 or 3, wherein the method further comprises:
identifying an action indication direction of the target trigger action;
and determining, based on the action indication direction, an associated area corresponding to the picture content of the physical picture.
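Illustrative example (not part of the claims): claims 2 to 4 add a detected trigger action whose indication direction selects the associated area. A minimal sketch, assuming a small set of named actions and screen-space regions; the action names, offsets, and data layout are assumptions.

```python
# Hypothetical preset virtual animation library keyed by trigger action (claim 3).
ACTION_ANIMATIONS = {
    "point": "highlight.anim",
    "wave":  "sparkle.anim",
}

def associated_region(picture_bbox, direction):
    """Claim 4 sketch: derive the associated area from the action's
    indication direction relative to the picture's bounding box."""
    x, y, w, h = picture_bbox
    if direction == "up":       # second area above the picture
        return (x, y - h, w, h)
    if direction == "right":    # second area beside the picture
        return (x + w, y, w, h)
    return (x, y, w, h)         # default: the picture's own area

animation = ACTION_ANIMATIONS.get("point")             # claim 3 lookup
region = associated_region((120, 80, 320, 240), "up")  # claim 4 region
```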
5. The method of claim 1, wherein the acquiring virtual animation information comprises:
acquiring, from a preset virtual animation library, virtual animation information matched with the picture content information of the physical picture, wherein the virtual animation library comprises virtual animation information respectively corresponding to at least one type of picture content information.
6. The method of claim 2, wherein the acquiring virtual animation information comprises:
acquiring a plurality of pieces of virtual animation information matched with the picture content information of the physical picture;
and determining virtual animation information corresponding to the target trigger action from the plurality of pieces of virtual animation information.
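Illustrative example (not part of the claims): claim 6 narrows the content-matched candidates by the detected trigger action. A sketch under the assumption that each candidate carries the trigger action it answers to:

```python
def select_animation(candidates, trigger_action):
    """Claim 6 sketch: from the animations matched to the picture's
    content, pick the one matching the detected trigger action."""
    for action, animation in candidates:
        if action == trigger_action:
            return animation
    return None

candidates = [("point", "highlight.anim"), ("wave", "sparkle.anim")]
print(select_animation(candidates, "wave"))  # -> sparkle.anim
```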
7. The method according to any one of claims 1 to 6, wherein the determining picture content information of a physical picture displayed in a real scene based on the real scene image comprises:
matching the real scene image against reference images in a preset picture library, and determining a reference image that matches the physical picture in the real scene;
and determining the picture content information of the matched reference image as the picture content information of the physical picture.
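Illustrative example (not part of the claims): claims 7 and 8 leave the matching technique open. One plausible realization, among many, is local-feature matching, sketched here with OpenCV's ORB; the thresholds and the shape of the picture library are assumptions.

```python
import cv2

def match_reference(scene_gray, reference_grays, min_good=25, max_dist=50):
    """Return the index of the reference image (from the preset picture
    library) that best matches the scene, or None if nothing matches."""
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, scene_des = orb.detectAndCompute(scene_gray, None)
    if scene_des is None:
        return None
    best_idx, best_count = None, min_good
    for idx, ref_gray in enumerate(reference_grays):
        _, ref_des = orb.detectAndCompute(ref_gray, None)
        if ref_des is None:
            continue
        matches = matcher.match(scene_des, ref_des)
        good = [m for m in matches if m.distance < max_dist]
        if len(good) > best_count:
            best_idx, best_count = idx, len(good)
    return best_idx
```

The returned index would map to a picture content identifier (claim 8), which in turn keys the virtual animation lookup.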
8. The method according to claim 7, wherein the preset picture library comprises at least one set of reference images, each set of reference images comprises at least one reference image representing the same physical picture, and the picture content information of each reference image is indicated by a picture content identifier.
9. The method according to any one of claims 1 to 8, wherein the associated area corresponding to the picture content of the physical picture is a first area where the picture content is located and/or a second area having a preset relative position relationship with the picture content.
10. A device for presenting picture content, comprising:
the first acquisition module is configured to acquire a real scene image;
the determining module is configured to determine, based on the real scene image, picture content information of a physical picture displayed in a real scene;
the second acquisition module is configured to acquire virtual animation information;
and the presentation module is configured to control, by using the virtual animation information, an AR device to present the AR effect corresponding to the virtual animation information in the associated area corresponding to the picture content of the physical picture.
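Illustrative example (not part of the claims): the claimed apparatus maps naturally onto one object per module. A minimal sketch; the constructor arguments are assumed stand-ins for the concrete module implementations.

```python
class PictureContentPresenter:
    """One attribute per claimed module of the apparatus."""
    def __init__(self, camera, recognizer, animation_source, ar_device):
        self.first_acquisition = camera             # acquires real scene images
        self.determining = recognizer               # determines picture content info
        self.second_acquisition = animation_source  # acquires virtual animation info
        self.presentation = ar_device               # presents the AR effect

    def run_once(self):
        image = self.first_acquisition.capture_frame()
        content = self.determining(image)
        animation = self.second_acquisition[content.content_id]
        self.presentation.render(animation, region=content.bbox)
```

With the stand-ins from the earlier sketch, `PictureContentPresenter(ARDevice(), recognize_picture, ANIMATION_LIBRARY, ARDevice()).run_once()` exercises all four modules.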
11. A computer device, comprising: a processor and a memory storing machine-readable instructions executable by the processor; when the machine-readable instructions are executed by the processor, the processor performs the steps of the method for presenting picture content according to any one of claims 1 to 9.
12. A computer-readable storage medium, having a computer program stored thereon, wherein the computer program, when executed by a computer device, performs the steps of the method for presenting picture content according to any one of claims 1 to 9.
CN202010493240.5A 2020-06-03 2020-06-03 Method and device for presenting picture content, computer equipment and storage medium Pending CN111625100A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010493240.5A CN111625100A (en) 2020-06-03 2020-06-03 Method and device for presenting picture content, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111625100A true CN111625100A (en) 2020-09-04

Family

ID=72260332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010493240.5A Pending CN111625100A (en) 2020-06-03 2020-06-03 Method and device for presenting picture content, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111625100A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105824412A (en) * 2016-03-09 2016-08-03 北京奇虎科技有限公司 Method and device for presenting customized virtual special effects on mobile terminal
CN108550190A (en) * 2018-04-19 2018-09-18 腾讯科技(深圳)有限公司 Augmented reality data processing method, device, computer equipment and storage medium
CN110176077A (en) * 2019-05-23 2019-08-27 北京悉见科技有限公司 The method, apparatus and computer storage medium that augmented reality is taken pictures
CN110716645A (en) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 Augmented reality data presentation method and device, electronic equipment and storage medium
CN110738737A (en) * 2019-10-15 2020-01-31 北京市商汤科技开发有限公司 AR scene image processing method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI Yamei: "Stone Carving Art of Southern Song Dynasty Tombs in Southern Sichuan and Computer Image Recognition Applications" (南宋川南墓葬石刻艺术与计算机图像识别应用), 30 June 2011, Chongqing University Press, pages: 112 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112700576A (en) * 2020-12-29 2021-04-23 成都启源西普科技有限公司 Multi-modal recognition algorithm based on images and characters
CN112700576B (en) * 2020-12-29 2021-08-03 成都启源西普科技有限公司 Multi-modal recognition algorithm based on images and characters
CN113359985A (en) * 2021-06-03 2021-09-07 北京市商汤科技开发有限公司 Data display method and device, computer equipment and storage medium
US20220397411A1 (en) * 2021-06-09 2022-12-15 Quinn Brown Systems and methods for delivering content to a user based on geolocation
CN114697703A (en) * 2022-04-01 2022-07-01 北京字跳网络技术有限公司 Video data generation method and device, electronic equipment and storage medium
CN114697703B (en) * 2022-04-01 2024-03-22 北京字跳网络技术有限公司 Video data generation method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
CN111638796A (en) Virtual object display method and device, computer equipment and storage medium
CN111625100A (en) Method and device for presenting picture content, computer equipment and storage medium
US10026229B1 (en) Auxiliary device as augmented reality platform
CN111640197A (en) Augmented reality AR special effect control method, device and equipment
CN111679742A (en) Interaction control method and device based on AR, electronic equipment and storage medium
CN111638784B (en) Facial expression interaction method, interaction device and computer storage medium
WO2016005948A2 (en) Augmented reality system
CN111640202A (en) AR scene special effect generation method and device
CN111696215A (en) Image processing method, device and equipment
CN111638797A (en) Display control method and device
WO2022262521A1 (en) Data presentation method and apparatus, computer device, storage medium, computer program product, and computer program
CN111640193A (en) Word processing method, word processing device, computer equipment and storage medium
CN111640192A (en) Scene image processing method and device, AR device and storage medium
CN111667588A (en) Person image processing method, person image processing device, AR device and storage medium
CN111651049B (en) Interaction method, device, computer equipment and storage medium
CN111640165A (en) Method and device for acquiring AR group photo image, computer equipment and storage medium
CN111639613B (en) Augmented reality AR special effect generation method and device and electronic equipment
CN111640200A (en) AR scene special effect generation method and device
CN114153548A (en) Display method and device, computer equipment and storage medium
CN111652983A (en) Augmented reality AR special effect generation method, device and equipment
CN112150349A (en) Image processing method and device, computer equipment and storage medium
CN111640169A (en) Historical event presenting method and device, electronic equipment and storage medium
CN111638798A (en) AR group photo method, AR group photo device, computer equipment and storage medium
CN111652986A (en) Stage effect presentation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination