WO2022022036A1 - A display method, apparatus, device, storage medium, and computer program - Google Patents

A display method, apparatus, device, storage medium, and computer program

Info

Publication number
WO2022022036A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
viewing angle
user
virtual object
real
Prior art date
Application number
PCT/CN2021/095861
Other languages
English (en)
French (fr)
Inventor
侯欣如
栾青
Original Assignee
北京市商汤科技开发有限公司
Priority date
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司
Publication of WO2022022036A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • the present application relates to, but is not limited to, the field of computer vision technology, and in particular, relates to a display method, apparatus, device, storage medium, and computer program.
  • Augmented Reality (AR) technology is a technology that integrates virtual information with real-world information.
  • the real environment and virtual objects are displayed on the same interface in real time.
  • users can see virtual trees superimposed on the real campus playground, virtual flying birds superimposed in the sky, etc.
  • the display of the augmented reality scene has certain limitations, thereby affecting the viewing or interactive experience of the users.
  • embodiments of the present application provide a display method, apparatus, device, storage medium, and computer program.
  • An embodiment of the present application provides a display method, the method includes: determining a virtual object matching a real object in a real scene; determining a target viewing angle by recognizing an image including a current viewing user; determining a display effect of the virtual object according to the target viewing angle; and displaying, according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed through a display device.
  • determining the target viewing angle by identifying images including the current viewing user includes:
  • the gaze direction of each user's eyes in the image is determined; according to the gaze direction of each user's eyes, the viewing concentration of each user is determined; and the viewing angle of the user with the highest viewing concentration is determined as the target viewing angle; or
  • a target user matching a target face is determined by recognizing a face image in the image, and the viewing angle of the target user is determined as the target viewing angle.
  • In this way, when displaying the augmented reality effect, the target viewing angle can be determined according to the number of viewing users, their identities, their viewing concentration, or the face images of the viewing users, and the display effect of the current virtual object can then be changed accordingly.
  • Thus, when the user in front of the display device is in a different position, or when there are multiple users in front of the display device, the viewing or interaction needs of users can be better met.
  • the target viewing angle includes viewing angles in all directions on a preset plane dimension;
  • the display effect includes a display track of the virtual object; and determining the display effect of the virtual object according to the target viewing angle includes: determining the display position of the virtual object corresponding to the viewing angle in each direction, and determining the display track of the virtual object according to each display position of the virtual object;
  • displaying, according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed through a display device includes: displaying, according to the display track, the augmented reality effect in which the real scene and the virtual object are superimposed through the display device, so that the virtual object moves on the display device according to the display track.
  • Since the display track of the virtual object includes the display positions corresponding to the viewing angles in each direction of the preset plane dimension, users in each direction of the preset plane dimension have the opportunity to see the expected display effect, which can better meet users' viewing or interaction needs.
  • determining the display effect of the virtual object according to the target viewing angle further includes: determining, by recognizing the image, the display duration corresponding to each display position of the virtual object; correspondingly,
  • determining the display track of the virtual object according to the display positions of the virtual object includes: determining the display track of the virtual object according to each display position of the virtual object and the display duration corresponding to each display position.
  • the display duration of the virtual object at each display position can be determined according to the situation of the current viewing user in the image, so that the viewing or interactive experience of the user can be further improved.
  • determining the display duration corresponding to each display position of the virtual object by recognizing the image includes: determining the viewing angle of each user in the image by recognizing the image; for each display position, determining the number of users in the image whose viewing angle is consistent with the viewing angle corresponding to the display position; and, according to the number of users, querying a preset correspondence between the number of users and the display duration to determine the display duration corresponding to the display position.
  • the display duration of the virtual object at each display position can be determined according to the number of users whose viewing angles are consistent with the viewing angles corresponding to each display position, thereby further improving the user's viewing or interactive experience.
  • determining the display effect of the virtual object according to the target viewing angle includes: acquiring the position of the real object in the real scene; according to the position of the real object and the target viewing angle , and determine the display effect of the virtual object.
  • acquiring the position of the real object in the real scene includes: collecting an image including the real object through a camera of the display device, and determining the position of the real object according to the image including the real object; or,
  • acquiring the position of the real object in the real scene includes: emitting a first light ray to the real scene; receiving a second light ray reflected by a real object in the real scene back to the first light ray; and determining the position of the real object according to the emission parameter of the first light ray and the reflection parameter of the second light ray.
  • In this way, the processing is efficient and the position of the real object can be determined more accurately, so the display efficiency and display effect of the augmented reality effect can be improved, and the user's viewing or interactive experience can be further improved.
  • the display device includes a display screen that is movable on a preset slide rail and is provided with a camera; the method further includes: when the display screen moves to a target position, capturing the image including the current viewing user through the camera.
  • In this way, the position of the display screen can be automatically adjusted according to the actual situation, so that a more accurate picture of the current viewing users can be obtained, a more accurate target viewing angle can be determined, and a more suitable display effect of the virtual object can be determined, further enhancing the user's viewing or interactive experience.
  • the real scene includes at least one real object
  • the virtual object includes a virtual label and a guide line corresponding to the virtual label; correspondingly, determining a virtual object matching the real object in the real scene includes: determining attribute information of each real object in the real scene; determining a virtual label matching the real object according to the attribute information of each real object; and determining a guide line corresponding to each virtual label.
  • the attribute information of the real object can be intuitively displayed through the virtual labels and guide lines, so that the viewing or interactive experience of the user can be improved.
  • the display effect of the virtual object includes the display position of each virtual label and the display position of the guide line corresponding to each virtual label; correspondingly, displaying, according to the display effect, the augmented reality effect in which the real scene and the virtual object are superimposed through the display device includes:
  • for each virtual label, displaying the virtual label on the display device according to the display position of the virtual label; and displaying the guide line on the display device according to the display position of the guide line corresponding to the virtual label, where the guide line is used to indicate the association between the virtual label and the real object matching the virtual label.
  • In this way, the guide line can point more accurately to the real object corresponding to the virtual label, so that a better augmented reality effect can be displayed and the user's viewing or interactive experience can be improved.
  • An embodiment of the present application provides a display device, the device includes: a first determining part, configured to determine a virtual object matching a real object in a real scene; a second determining part, configured to determine a target viewing angle by recognizing an image including the current viewing user; a third determining part, configured to determine the display effect of the virtual object according to the target viewing angle; and a display part, configured to display, according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed through a display device.
  • An embodiment of the present application provides a display device, including a memory and a processor, where the memory stores a computer program that can be executed on the processor, and the processor implements the steps in the above method when the processor executes the program.
  • An embodiment of the present application provides a computer storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps in the above method are implemented.
  • An embodiment of the present application provides a computer program, including computer-readable code, and when the computer-readable code runs in an electronic device, a processor in the electronic device executes the steps for implementing the above method.
  • the virtual object matching the real object in the real scene is first determined; then the target viewing angle is determined by recognizing the image including the current viewing user; then the display effect of the virtual object is determined according to the target viewing angle; finally, according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed is displayed through a display device.
  • In this way, the target viewing angle can be determined according to the actual situation of the current viewing user, and the display effect of the current virtual object can be changed based on the target viewing angle, thereby changing the currently displayed augmented reality effect in which the real scene and the virtual object are superimposed, which can automatically meet the user's viewing or intelligent interaction needs.
  • The target viewing angle may also be determined according to the number of viewing users, their identities, their viewing concentration, or the face images of viewing users, so as to change the display effect of the current virtual object. In this way, when the user in front of the display device is in a different position, or when there are multiple users in front of the display device, the viewing or interaction needs of users can be better met.
  • FIG. 1 is a schematic flowchart of the implementation of a display method provided by an embodiment of the present application
  • FIG. 2A is a schematic flowchart of the implementation of a display method provided by an embodiment of the present application.
  • FIG. 2B is a schematic flowchart of the implementation of a method for determining the display duration corresponding to each display position of a virtual object by recognizing an image according to an embodiment of the present application;
  • FIG. 3A is a schematic flowchart of the implementation of a display method provided by an embodiment of the present application.
  • FIG. 3B is a schematic flowchart of the implementation of a method for obtaining the position of a real object in a real scene provided by an embodiment of the present application;
  • FIG. 3C is a schematic flowchart of the implementation of a method for obtaining the position of a real object in a real scene provided by an embodiment of the present application;
  • FIG. 4 is a schematic flowchart of the implementation of a display method provided by an embodiment of the present application.
  • FIG. 5A is a schematic flowchart of the implementation of a display method provided by an embodiment of the present application.
  • FIG. 5B is a schematic flowchart of the implementation of a method for displaying an augmented reality effect in which a real scene is superimposed on each virtual label and a guide line corresponding to the virtual label according to an embodiment of the present application;
  • FIG. 5C is a schematic diagram of a display effect of an augmented reality effect in which a real scene and a virtual object are superimposed according to an embodiment of the present application;
  • FIG. 6 is a schematic diagram of the composition and structure of a display device provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a hardware entity of a display device provided by an embodiment of the present application.
  • The terms "first/second/third" involved are only used to distinguish similar objects and do not represent a specific ordering of objects. It can be understood that "first/second/third" can be interchanged in a specific order or sequence where permitted, so that the embodiments of the present application described herein can be performed in an order other than that shown or described.
  • the augmented reality effect in which the real scene and the virtual object are superimposed in real time can be displayed based on an optical principle, or can be displayed based on a video synthesis technology.
  • the display device can use a transparent display screen.
  • A transparent display screen can be set between the real scene and the user; it can receive light reflected from the real scene that penetrates the display screen, and it can also display the virtual objects that need to be superimposed on the real scene, so that the user can view the real-time superimposed picture of the real scene and the virtual object through the transparent display screen.
  • Based on video synthesis technology, images or videos of the real scene can be obtained through a camera, the obtained images or videos can be synthesized with virtual objects, and the synthesized images or videos can finally be displayed on the display device, realizing an augmented reality effect in which the real scene and virtual objects are superimposed in real time.
  • the displayed content and display effect of the augmented reality scene are usually irrelevant to the user watching in front of the display device.
  • When the users in front of the display device are in different positions or there are multiple users in front of the display device, the augmented reality effect displayed by the display device cannot well meet the users' viewing or interaction needs.
  • the embodiment of the present application provides a display method, and the method can be executed by a processor, and the processor can be an integrated circuit chip with signal processing capability.
  • each step of the method can be completed by an integrated logic circuit of hardware in a processor or instructions in the form of software.
  • the processor may be a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like.
  • a general purpose processor may be a microprocessor or any conventional processor or the like.
  • FIG. 1 is a schematic flowchart of the implementation of a display method according to an embodiment of the present application. As shown in FIG. 1 , the method includes the following steps:
  • Step S101 determining a virtual object that matches the real object in the real scene
  • the real scene can be any suitable scene in the real world, such as campus playground, sky, office, museum, etc.
  • the real object can be any suitable object that actually exists in the real scene, such as the flagpole on the campus playground, the clouds in the sky, the desk in the office, the exhibits in the museum, etc. .
  • the virtual object may be information such as virtual images or texts matched with real objects.
  • For example, the virtual props used to realize a dress-up effect on the desk in a real office scene are virtual objects that match the desk in the real office scene; the virtual digital human used to explain the exhibits in a real museum scene is a virtual object that matches the exhibits in the real museum scene; and the virtual labels used to annotate each building in a real real-estate scene, together with the guide lines corresponding to the virtual labels, are virtual objects that match each building in the real real-estate scene.
  • The virtual object can be determined according to a matching relationship between a specific real object and a virtual object, or can be generated from real objects according to a specific virtual-object generation strategy, using technologies such as image, video, or three-dimensional (3D) model generation.
  • the virtual object can also interact with the watching user in real time.
  • the fighting action of the virtual character in the game can be controlled by a glove or hand stick matched with the game;
  • the movement of the virtual chess pieces is controlled by the gloves matched with the game.
  • Step S102 by identifying the image including the current viewing user, determine the target viewing angle
  • the current viewing user is the user who is currently viewing in front of the display device.
  • the image including the current viewing user may be collected in real time by an image capture device provided in the display device, or may be captured by other image capture devices outside the display device, which is not limited here.
  • the viewing angle is the viewing angle at which the user watches the display device or the real object in the real scene
  • the target viewing angle is the viewing angle at which the expected augmented reality effect can be viewed.
  • the target viewing angle of view may be determined according to the viewing situation of the current viewing user in the image by performing image recognition on the image including the current viewing user.
  • the method for determining the target viewing angle may be determined according to the actual situation, which is not limited in this embodiment of the present application.
  • the target viewing angle of view may be determined according to the area where the current viewing user is located, or the target viewing angle of view may be determined according to the direction of the current user's line of sight.
  • Step S103 determining the display effect of the virtual object according to the target viewing angle
  • the display effect of the virtual object is the effect when the virtual object is displayed on the display device, which may include but is not limited to one of the display position, display duration, display color, interaction method, display size, etc. of the virtual object on the display device. or more.
  • the display effect of the virtual object can be determined according to the corresponding relationship between the specific target viewing angle and the display effect, or can be obtained by rendering the rendering model according to the target viewing angle and combining the information of the real object matched with the virtual object.
  • those skilled in the art can select an appropriate manner according to the actual situation to determine the display effect of the virtual object, which is not limited here.
  • Step S104 displaying an augmented reality effect in which the real scene and the virtual object are superimposed on a display device according to the display effect.
  • the display device may be any suitable electronic device that supports an augmented reality display function, which may include, but is not limited to, one or more of a smart TV, a mobile phone, a tablet, a display screen, and the like.
  • the display device can also be a relatively novel display screen that can be moved on a slide rail or by other means.
  • the augmented reality effect in which the real scene and the virtual object are superimposed can be displayed through the display screen.
  • the user may also trigger related information on the augmented reality effect displayed on the display screen to obtain more detailed information or other related information.
  • the type of the display screen is not limited, and the display screen may be a touch screen or a non-touch screen.
  • the virtual object can be displayed on the display device according to the display effect of the virtual object, and then the augmented reality effect in which the real scene and the virtual object are superimposed can be displayed through the display device.
  • the augmented reality effect in which real scenes and virtual objects are superimposed can be displayed based on optical principles or based on video synthesis technology.
  • other suitable manners may also be used to display an augmented reality effect in which a real scene and a virtual object are superimposed on a display device according to the display effect of the virtual object, which is not limited in this embodiment of the present application.
  • the display of the augmented reality effect in which the real scene and the virtual object are superimposed may be performed based on the optical principle.
  • the display device is a transparent display screen, the display screen is set between the viewing user and the real scene, and the user can watch the real scene through the transparent display screen.
  • When the augmented reality effect is displayed based on the optical principle, the virtual object viewed by the user is displayed by the display device itself according to the display effect of the virtual object, while the real scene viewed by the user is the real scene itself in the real world rather than something displayed by the display device.
  • the display of the augmented reality effect in which the real scene and the virtual object are superimposed may be performed based on the video synthesis technology.
  • the image or video of the real scene can be collected by the camera, and according to the display effect of the virtual object, the virtual object and the collected image or video of the real scene can be synthesized, and finally the synthesized image or video is passed through the display device. display, so as to realize the display of augmented reality effect in which real scenes and virtual objects are superimposed.
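  • As an illustration of the video-synthesis approach described above, the following is a minimal sketch in Python, assuming OpenCV is available and the virtual object is a small image with an alpha channel; the function name, placeholder object, and output file are hypothetical, not part of the embodiment.

```python
import cv2
import numpy as np

def composite_frame(scene_bgr, object_bgra, top_left):
    """Alpha-blend a virtual object (BGRA) onto a camera frame of the
    real scene at the given display position (video-synthesis display)."""
    x, y = top_left
    h, w = object_bgra.shape[:2]
    roi = scene_bgr[y:y + h, x:x + w].astype(np.float32)
    obj = object_bgra[:, :, :3].astype(np.float32)
    alpha = object_bgra[:, :, 3:4].astype(np.float32) / 255.0
    scene_bgr[y:y + h, x:x + w] = (alpha * obj + (1 - alpha) * roi).astype(np.uint8)
    return scene_bgr

# Usage: grab one frame of the real scene and superimpose a placeholder object.
cap = cv2.VideoCapture(0)  # assumes a camera is attached
ok, frame = cap.read()
if ok:
    virtual = np.zeros((80, 240, 4), np.uint8)
    cv2.putText(virtual, "virtual label", (10, 50),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255, 255), 2)
    cv2.imwrite("ar_frame.png", composite_frame(frame, virtual, (40, 40)))
cap.release()
```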
  • step S102 may include the following step S102a:
  • Step S102a by identifying the number of users in the image, determine the image area with the largest number of users; determine the viewing position of the user in front of the display device according to the image area; determine the viewing angle of the viewing position as the target viewing angle.
  • any suitable image recognition algorithm can be used to recognize the number of users in the image, and determine the image area with the largest number of users.
  • the viewing position is the position where the user watches in front of the display device in the real scene.
  • The viewing position of the user in front of the display device corresponding to the image area can be calculated from the position of the image area in the image according to a specific mapping formula; it can also be determined according to a correspondence between positions in the image and the corresponding viewing positions in the real world.
  • the mapping formula and the corresponding relationship may be determined in advance by means of calibration or big data analysis.
  • the viewing angle of the viewing position is the viewing angle of the user viewing the display device or the real object from the viewing position.
  • The coordinates of the viewing position in three-dimensional space and the coordinates of the real object in three-dimensional space can be used to calculate the viewing angle of the viewing position as the target viewing angle; alternatively, the viewing angle of the viewing position can be obtained by querying a correspondence between viewing positions and viewing angles.
  • Those skilled in the art can select an appropriate manner to determine the viewing angle of the viewing position according to the actual situation, which is not limited here.
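  • A minimal sketch of step S102a follows, assuming face centers have already been produced by any face detector and that the image-region-to-viewing-position mapping has been calibrated in advance; the region positions and object position below are illustrative placeholders.

```python
import numpy as np

def target_viewing_angle(face_centers, image_width, num_regions=3,
                         region_positions=None,
                         object_pos=np.array([0.0, 0.0, 5.0])):
    """Pick the image region with the most users, map it to a viewing
    position in front of the display, and return that position and the
    viewing angle (yaw, degrees) toward the real object."""
    xs = np.array([c[0] for c in face_centers])
    bins = np.clip((xs * num_regions // image_width).astype(int),
                   0, num_regions - 1)
    busiest = np.bincount(bins, minlength=num_regions).argmax()
    if region_positions is None:
        # Placeholder calibration: image region -> real-world position (m).
        region_positions = [np.array([-1.0, 0.0, 2.0]),
                            np.array([0.0, 0.0, 2.0]),
                            np.array([1.0, 0.0, 2.0])]
    viewing_pos = region_positions[busiest]
    gaze = object_pos - viewing_pos  # direction from viewer to real object
    yaw = np.degrees(np.arctan2(gaze[0], gaze[2]))
    return viewing_pos, yaw

# Two faces on the left, one on the right -> left region wins.
pos, yaw = target_viewing_angle([(120, 200), (150, 210), (900, 220)],
                                image_width=1280)
print(pos, round(yaw, 1))
```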
  • step S102 may include the following step S102b:
  • Step S102b determining the identity of the current viewing user by identifying the user identity in the image; determining the viewing angle of view of the user with the target identity in the image as the target viewing angle.
  • any suitable image recognition algorithm can be used to recognize the identity of the user through the features of the user in the image.
  • the characteristics of the user may include, but are not limited to, the user's facial characteristics, clothing characteristics, body shape characteristics, and the like.
  • the target identity may be a preset identity, which may include, but is not limited to, an identity with a specific viewing priority or an identity with a specific viewing permission, and the like.
  • the viewing angle of view of the user with the target identity in the image may be determined as the target viewing angle.
  • For example, in a museum exhibition scene, an image recognition algorithm can be used to identify, according to the users' clothing characteristics and/or body shape characteristics in the image, the primary school students among the current viewing users (for example, among a group of teachers and primary school students), and the viewing angle of the one or more primary school students is determined as the target viewing angle, so as to provide the primary school students with the expected augmented reality display effect and satisfy their curiosity and thirst for knowledge about the exhibits.
  • For another example, an image recognition algorithm can be used to identify, according to the users' facial features and/or clothing features in the image, the tour group members among the current viewing users, and the viewing angle of the tour group members is determined as the target viewing angle, so as to provide the tour group members with the expected augmented reality display effect and improve their tour experience.
  • For another example, in a game interaction scene, an image recognition algorithm can be used to identify, according to the users' facial features in the image, the VIP (Very Important Person) user among the current viewing users, and the viewing angle of the VIP user is determined as the target viewing angle, so as to provide the VIP user with the expected augmented reality display effect and improve the VIP user's game interaction experience.
  • The facial features of VIP users can be preset; when performing identity recognition, the facial features of the users in the image are matched against the preset facial features of VIP users, so as to identify the VIP user among the current viewing users.
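  • The following sketch illustrates this kind of identity matching, assuming face embeddings are produced by any off-the-shelf face-embedding model; the cosine-similarity threshold is an illustrative assumption, not a value from the embodiment.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def find_vip(user_embeddings, vip_embeddings, threshold=0.7):
    """Return the index of the first current viewer whose face embedding
    matches a preset VIP embedding, or None if nobody matches."""
    for i, u in enumerate(user_embeddings):
        if any(cosine(u, v) >= threshold for v in vip_embeddings):
            return i
    return None

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
users = [rng.normal(size=128) for _ in range(3)]
vips = [users[1] + 0.01 * rng.normal(size=128)]  # near-duplicate of user 1
print(find_vip(users, vips))  # -> 1
```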
  • step S102 may include the following step S102c:
  • Step S102c by identifying the users' lines of sight in the image, determine the binocular gaze direction of each user in the image; according to the binocular gaze direction of each user, determine the viewing concentration of each user; and determine the viewing angle of the user with the highest viewing concentration as the target viewing angle.
  • The viewing concentration is the degree of concentration of a user when viewing a real object or the display device, and it can be determined by detecting the gaze direction of the user's eyes in the image. For example, when there are multiple users in front of the display device, the gaze direction of each user's eyes in the image can be detected, and each user's viewing concentration can be determined according to the binocular gaze direction. The viewing angle of the user with the highest viewing concentration can then be determined as the target viewing angle, ensuring that the most attentive user can watch the expected augmented reality effect.
  • any suitable algorithm may be used to determine the user's binocular gaze direction and the user's viewing concentration corresponding to the binocular gaze direction, which is not limited in this embodiment of the present application.
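  • One plausible concentration measure, sketched below under the assumption that a gaze estimator supplies each user's binocular gaze direction and position, scores how directly each user's gaze points at the display; the scoring rule is illustrative, not prescribed by the embodiment.

```python
import numpy as np

def most_attentive_user(gaze_dirs, user_positions, display_pos):
    """Score each user by the cosine between their gaze direction and
    the direction from the user to the display; return the best index."""
    best_i, best_score = -1, -2.0
    for i, (g, p) in enumerate(zip(gaze_dirs, user_positions)):
        to_display = display_pos - p
        score = np.dot(g, to_display) / (
            np.linalg.norm(g) * np.linalg.norm(to_display) + 1e-9)
        if score > best_score:
            best_i, best_score = i, score
    return best_i

gazes = [np.array([0.0, 0.0, 1.0]), np.array([0.3, 0.0, 1.0])]
positions = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
display = np.array([0.0, 0.0, 2.0])
print(most_attentive_user(gazes, positions, display))  # -> 0
```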
  • step S102 may include the following step S102d:
  • Step S102d a target user matching a target face is determined by recognizing the face image in the image; the viewing angle of the target user is determined as the target viewing angle.
  • the target face may be preset, and may include, but is not limited to, a face of a user with a specific viewing priority or a face of a user with a specific viewing authority, and the like.
  • any suitable face recognition algorithm may be used to recognize the face image in the image, which is not limited in this embodiment of the present application.
  • With the display method provided by the embodiment of the present application, a virtual object matching a real object in a real scene is first determined; then a target viewing angle is determined by recognizing an image including the current viewing user; then the display effect of the virtual object is determined according to the target viewing angle; and finally, according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed is displayed through a display device.
  • In this way, the target viewing angle can be determined according to the actual situation of the current viewing user, and the display effect of the current virtual object can be changed based on the target viewing angle, thereby changing the currently displayed augmented reality effect in which the real scene and the virtual object are superimposed, which can automatically meet the user's viewing or intelligent interaction needs.
  • The target viewing angle may also be determined according to the number of viewing users, their identities, their viewing concentration, or the face images of viewing users, so as to change the display effect of the current virtual object. In this way, when the user in front of the display device is in a different position, or when there are multiple users in front of the display device, the viewing or interaction needs of users can be better met.
  • An embodiment of the present application provides a display method, and the method can be executed by a processor. As shown in FIG. 2A , the method includes the following steps:
  • Step S201 determining a virtual object that matches the real object in the real scene
  • step S201 corresponds to the foregoing step S101, and reference may be made to the specific implementation manner of the foregoing step S101 during implementation.
  • Step S202 determining a target viewing angle of view by identifying the image including the current viewing user, where the target viewing angle includes the viewing angle of view in each direction on a preset plane dimension;
  • the preset plane dimension may include one or more of specific plane dimensions such as horizontal plane and vertical plane.
  • the viewing angle of view in each direction on the preset plane dimension includes the angle of view when viewing the real object or the display device in front of the display device along each direction of the preset plane dimension.
  • For example, the viewing angles of users viewing a real object or the display device from different left-to-right positions in front of the display device correspond to viewing angles in various directions in the horizontal plane dimension; the viewing angles of users of different heights at the same position in front of the display device, or of a user who stands versus crouches at the same position, correspond to viewing angles in various directions in the vertical plane dimension.
  • Step S203 determining the display position of the virtual object corresponding to the viewing angle of view in each of the directions;
  • The display position of the virtual object corresponding to each viewing angle can be determined according to a corresponding relationship between specific viewing angles and display positions, or can be calculated through a specific operation model according to each viewing angle combined with the information of the real object matching the virtual object.
  • an appropriate manner may be selected according to the actual situation to determine the display position of the virtual object, which is not limited here.
  • Step S204 determining the display track of the virtual object according to each display position of the virtual object
  • the display track includes various positions on the display device where the virtual object can be displayed.
  • the virtual objects can be switched and displayed at various positions of the display device in a random or specific order, so that they can move on the display device according to the display track.
  • Step S205 displaying an augmented reality effect in which the real scene and the virtual object are superimposed on the display device according to the display track, so that the virtual object moves on the display device according to the display track.
  • Since the virtual object moves on the display device according to the display track, users in all directions of the preset plane dimension have the opportunity to see the expected display effect. For example, when there are viewing users in the left, middle, and right directions in front of the display device, the virtual object can be displayed in turn at the positions corresponding to the viewing angles of these three directions, so that users in these three directions can observe the expected display effect within a specific time.
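  • A minimal sketch of such track-based movement follows; the screen positions, dwell time, and render_at() stand-in are illustrative assumptions.

```python
import itertools
import time

def render_at(position):
    """Stand-in for the actual display call that draws the virtual
    object at a screen position."""
    print(f"virtual object displayed at {position}")

# Screen positions corresponding to the left / middle / right viewing angles.
track = [(200, 360), (640, 360), (1080, 360)]

# Cycle the virtual object through the track so users in each direction
# get a turn at the expected display effect.
for position in itertools.islice(itertools.cycle(track), 6):
    render_at(position)
    time.sleep(0.5)  # dwell before moving to the next track position
```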
  • the above step S203 may further include: step S203a, determining the display duration corresponding to the virtual object at each display position by recognizing the image.
  • the above step S204 may include: determining the display track of the virtual object according to each display position of the virtual object and the display duration corresponding to each display position.
  • the display track includes each display position of the virtual object and the display duration of the virtual object at each display position.
  • For example, when there are viewing users in the left, middle, and right directions in front of the display device, the virtual object can be displayed in turn, for the corresponding display duration, at the positions corresponding to the viewing angles of these three directions, so that users in these three directions can watch the expected display effect within a specific time.
  • step S203a may include:
  • Step S231 determining the viewing angle of each user in the image by identifying the image
  • Step S232 for each display position, determine the number of users whose viewing angle is consistent with the viewing angle corresponding to the display position in the image;
  • Step S233 according to the number of the users, query the preset correspondence between the number of users and the display duration, and determine the display duration corresponding to the display position.
  • the preset corresponding relationship between the number of users and the display duration may be preset by the user according to actual needs, which is not limited here.
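  • The following sketch illustrates steps S231 to S233, assuming viewing angles are expressed in degrees; the count-to-duration table and angle tolerance are illustrative placeholders for the preset correspondence.

```python
# Preset correspondence: number of matching users -> display duration (s).
DURATION_BY_COUNT = {0: 0.0, 1: 2.0, 2: 3.0, 3: 5.0}

def display_durations(user_angles, position_angles, tolerance=10.0):
    """For each display position, count the users whose viewing angle is
    within `tolerance` degrees of the position's viewing angle, then map
    the count to a display duration via the preset table."""
    durations = []
    for pa in position_angles:
        n = sum(1 for ua in user_angles if abs(ua - pa) <= tolerance)
        durations.append(DURATION_BY_COUNT.get(n, max(DURATION_BY_COUNT.values())))
    return durations

# Two users looking from the left, one from the right -> [3.0, 0.0, 2.0].
print(display_durations([-30.0, -25.0, 20.0], [-30.0, 0.0, 30.0]))
```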
  • With the display method provided by the embodiment of the present application, the display track of the virtual object is determined according to the viewing angles in all directions of the preset plane dimension, and the augmented reality effect in which the real scene and the virtual object are superimposed is displayed through the display device according to the display track, so that the virtual object moves on the display device according to the display track.
  • Since the display track of the virtual object includes the display positions corresponding to the viewing angles in each direction of the preset plane dimension, users in each direction of the preset plane dimension have the opportunity to see the expected display effect.
  • the display duration of the virtual object at each display position can also be determined according to the number of users whose viewing angles are consistent with the viewing angles corresponding to the respective display positions, thereby further improving the user's viewing or interactive experience.
  • An embodiment of the present application provides a display method, and the method can be executed by a processor. As shown in FIG. 3A , the method includes the following steps:
  • Step S301 determining a virtual object that matches the real object in the real scene
  • Step S302 determining a target viewing angle of view by identifying images including the current viewing user
  • the foregoing steps S301 to S302 correspond to the foregoing steps S101 to S102 respectively, and the specific implementations of the foregoing steps S101 to S102 may be referred to during implementation.
  • Step S303 obtaining the position of the real object in the real scene
  • the position of the real object in the real scene is the position of the real object in the real world.
  • the position of the real object may be preset or obtained by detecting the real object in the real scene. Those skilled in the art can select an appropriate manner to acquire the position of the real object according to the actual situation during implementation, which is not limited here.
  • Step S304 determining the display effect of the virtual object according to the position of the real object and the target viewing angle
  • the display effect of the virtual object may be determined according to the position of the real object matched with the virtual object and the target viewing angle.
  • those skilled in the art may select an appropriate manner according to actual needs to determine the display effect of the virtual object based on optical principles, which is not limited here.
  • Step S305 display an augmented reality effect in which the real scene and the virtual object are superimposed through a display device.
  • step S305 corresponds to the aforementioned step S104, and reference may be made to the specific implementation manner of the aforementioned step S104 during implementation.
  • step S303 may include: step S331a and step S332a, wherein:
  • Step S331a collecting an image including the real object through the camera of the display device
  • the camera may include, but is not limited to, one or more of a standard camera, a telephoto camera, a wide-angle lens, a zoom camera, a digital light field camera, a digital camera, and the like.
  • the camera may be arranged at any suitable position of the display device, which may include, but is not limited to, the upper part, the lower part, the front part, the side surface, and the like of the display screen.
  • the camera may be built into the display device, or may be provided outside the display device, which is not limited here.
  • Step S332a determining the position of the real object according to the image including the real object.
  • The position of the real object in the real world can be calculated from the position of the real object in the image according to a specific mapping formula; it can also be determined according to a correspondence between positions in the image and the corresponding positions in the real world.
  • the mapping formula and the corresponding relationship may be determined in advance by means of calibration or big data analysis.
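  • As one concrete form of such a mapping, the sketch below assumes real objects lie on a known ground plane, so that a pre-calibrated 3x3 homography maps pixel coordinates to world-plane coordinates; the matrix values are placeholders standing in for an actual calibration.

```python
import numpy as np

# Placeholder homography obtained from calibration (pixel -> plane meters).
H = np.array([[0.01, 0.0, -3.2],
              [0.0, 0.012, -2.4],
              [0.0, 0.0, 1.0]])

def pixel_to_world(u, v, homography=H):
    """Map an image pixel (u, v) to (X, Y) on the calibrated world plane."""
    p = homography @ np.array([u, v, 1.0])
    return p[:2] / p[2]  # normalize homogeneous coordinates

print(pixel_to_world(640, 360))
```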
  • step S303 may include: step S331b, step S332b and step S333b, wherein:
  • Step S331b emitting a first light to the real scene
  • the first light may be emitted by the display device, or may be emitted by something other than the display device.
  • the first light can include, but is not limited to, any suitable light such as infrared light, visible light, and the like.
  • Step S332b receiving a second light ray reflected back to the first light ray by a real object in the real scene;
  • the second light may be received by any suitable photosensitive device, and the photosensitive device may include, but not limited to, an infrared sensor, an image sensor, and the like.
  • Step S333b determining the position of the real object according to the emission parameter of the first light ray and the reflection parameter of the second light ray.
  • the emission parameters of the first light rays may include, but are not limited to, one or more of emission time, light direction, light intensity, and the like.
  • the reflection parameters of the second light may include, but are not limited to, one or more of the receiving time of the second light, the direction of the light, the intensity of the light, and the like.
  • any suitable method may be used to determine the position of the real object according to the emission parameter of the first light ray and the reflection parameter of the second light ray.
  • For example, the position of the real object can be determined according to the interval between the emission time of the first light ray and the reception time of the second light ray, combined with the propagation speed of light; or the position of the real object can be jointly determined from the light direction of the first light ray and the light direction of the second light ray, combined with the position of the device that emits the first light ray and the position of the device that receives the second light ray.
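  • A minimal time-of-flight sketch of steps S331b to S333b follows, combining the emission/reception interval with the propagation speed of light and the emission direction; the timing values and emitter position are illustrative.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def object_position(t_emit, t_receive, direction, emitter_pos):
    """Estimate the real object's position from the light round-trip time
    (first light out, second light back) and the emission direction."""
    distance = C * (t_receive - t_emit) / 2.0  # round trip -> one-way distance
    d = np.asarray(direction, dtype=float)
    return np.asarray(emitter_pos, dtype=float) + distance * d / np.linalg.norm(d)

# A 20 ns round trip along the optical axis puts the object ~3 m away.
print(object_position(0.0, 20e-9, [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]))
```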
  • The display method provided by the embodiment of the present application determines the display effect of the virtual object by acquiring the position of the real object in the real scene and combining it with the target viewing angle. In this way, since both the target viewing angle and the position of the real object in the real world are taken into consideration when determining the display effect of the virtual object, a more suitable display effect can be determined, thereby further improving the user's viewing or interactive experience.
  • Moreover, the position of the real object can be determined by recognizing an image including the real object, or according to the parameters of the light emitted to the real scene and the parameters of the received reflected light. Both approaches are efficient and determine the position of the real object accurately, so the display efficiency and display effect of the augmented reality effect can be improved, and the user's viewing or interactive experience can be further improved.
  • An embodiment of the present application provides a display method, and the method can be executed by a processor. As shown in FIG. 4 , the method includes the following steps:
  • Step S401 when the display screen moves to a target position, capture the image including the current viewing user through the camera; wherein, the display screen is movable on a preset slide rail and is provided with a camera;
  • The target position is a suitable position from which the augmented reality effect in which the real scene and the virtual object are superimposed can be displayed; the specific position is not limited in this embodiment of the present application.
  • the camera may be arranged at any suitable position of the display screen, which may include but not limited to the upper part, the lower part, the front side, the side surface, and the like of the display screen.
  • the camera may be built into the display screen, or may be disposed outside the display screen, which is not limited here.
  • Step S402 determining a virtual object that matches the real object in the real scene
  • Step S403 by identifying the image including the current viewing user, determine the target viewing angle
  • Step S404 determining the display effect of the virtual object according to the target viewing angle
  • Step S405 displaying an augmented reality effect in which the real scene and the virtual object are superimposed on a display device according to the display effect.
  • the foregoing steps S402 to S405 correspond to the foregoing steps S101 to S104 respectively, and during implementation, reference may be made to the specific implementations of the foregoing steps S101 to S104.
  • the display device is a display screen that is movable on a preset slide rail and is provided with a camera, so that the position of the display screen can be automatically adjusted according to the actual situation when displaying an augmented reality effect.
  • When the display screen moves to the target position, the image including the current viewing user can be captured by the camera, so that a more accurate picture of the current viewing users can be obtained, a more accurate target viewing angle can be determined, and a more suitable display effect of the virtual object can be determined, further enhancing the user's viewing or interactive experience.
  • An embodiment of the present application provides a display method, and the method can be executed by a processor. As shown in FIG. 5A , the method includes the following steps:
  • Step S501 determining the attribute information of each real object in the real scene
  • the attribute information may include, but is not limited to, descriptive information such as the name, type, and description of the real object.
  • The attribute information of the real object may be preset and stored in local storage or a database, and the attribute information of each real object may be determined by reading the local storage or querying the database; the attribute information of the real object may also be determined by acquiring an image containing each real object and recognizing that image.
  • Those skilled in the art can select an appropriate manner to determine the attribute information of each real object in the real scene according to the actual situation, which is not limited here.
  • Step S502 according to the attribute information of each real object, determine a virtual tag matching the real object;
  • the virtual tag may be a virtual object including attribute information of the corresponding real object, and may include but not limited to text or images used to represent the attribute information.
  • Step S503 determining a guide line corresponding to each of the virtual labels
  • the guide line may be a kind of virtual object for guiding the association relationship between the virtual label and the corresponding real object.
  • the guide lines may include, but are not limited to, any one or more of straight lines, curved lines, polylines, and the like.
  • the guide lines may include solid lines, dashed lines, any combination of dashed and solid lines, and the like.
  • Step S504 determining the target viewing angle of view by identifying the image including the current viewing user
  • Step S505 determining, according to the target viewing angle, the display effect of each virtual label and the guide line corresponding to the virtual label;
  • Step S506 display an augmented reality effect in which the real scene is superimposed with each virtual tag and the guide line corresponding to the virtual tag through a display device.
  • the foregoing steps S504 to S506 correspond to the foregoing steps S102 to S104 respectively, and during implementation, reference may be made to the specific implementations of the foregoing steps S102 to S104.
  • the display effect of the virtual object includes the display position of each virtual label and the display position of the guide line corresponding to each virtual label.
  • the above step S506 may include:
  • Step S511 for each virtual label, display the virtual label on the display device according to the display position of the virtual label;
  • Step S512 Display the guide line on the display device according to the display position of the guide line corresponding to the virtual label, where the guide line is used to guide the virtual label and a real object matching the virtual label.
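  • A minimal rendering sketch of steps S511 and S512 follows, assuming OpenCV; the label text and positions are illustrative, and in practice the display positions come from the display effect determined in step S505.

```python
import cv2
import numpy as np

def draw_label_with_guide(frame, text, label_pos, object_pos):
    """Draw a virtual label at its display position and a guide line
    from the label to the matching real object's screen position."""
    cv2.putText(frame, text, label_pos, cv2.FONT_HERSHEY_SIMPLEX,
                0.8, (255, 255, 255), 2)
    cv2.line(frame, (label_pos[0], label_pos[1] + 5), object_pos,
             (0, 255, 255), 1)  # guide line: label -> real object

frame = np.zeros((720, 1280, 3), np.uint8)  # placeholder for the scene frame
draw_label_with_guide(frame, "Building A: 32 floors", (100, 80), (420, 400))
cv2.imwrite("labels.png", frame)
```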
  • FIG. 5C is a schematic diagram of the display effect of an augmented reality effect in which a real scene and a virtual object are superimposed according to an embodiment of the application.
  • in FIG. 5C, the real scene 10 and the virtual object 20 are superimposed;
  • the real scene 10 includes a real object 11
  • the virtual object 20 includes a virtual label 21 matching the real object 11 and a guide line 22 corresponding to the virtual label 21 .
  • the display method provided by the embodiment of the present application can determine a virtual tag matching the real object according to the attribute information of the real object, and guide the corresponding relationship between the virtual tag and the real object through a guide line.
  • the attribute information of real objects can be displayed intuitively through virtual labels and guide lines.
  • The display effect of the virtual label and the corresponding guide line is determined according to the target viewing angle. In this way, the guide line can point more accurately to the real object corresponding to the virtual label, so that a better augmented reality effect can be displayed and the user's viewing or interaction experience can be improved.
  • The embodiments of the present application provide a display device, which includes all the parts described below and the sub-parts included in each part; these can be implemented by a processor in a display device, and of course can also be implemented by a specific logic circuit.
  • the display device can be any suitable electronic device with information processing capability, which can have display function (such as smart display screen, smart phone, tablet computer, notebook computer, smart TV, etc.), or can not have display function.
  • the processor can be a central processing unit (Central Processing Unit, CPU), a microprocessor (Microprocessor Unit, MPU), a digital signal processor (DSP) or a field programmable gate array (Field Programmable Gate Array, FPGA), etc.
  • FIG. 6 is a schematic diagram of the composition structure of a display device according to an embodiment of the present application.
  • the display device 600 includes: a first determination part 610, a second determination part 620, a third determination part 630 and a display part 640, wherein:
  • a first determining part 610 configured to determine a virtual object matching the real object in the real scene
  • the second determination part 620 is configured to determine the target viewing angle by identifying the image including the current viewing user
  • the third determining part 630 is configured to determine the display effect of the virtual object according to the target viewing angle
  • the display part 640 is configured to display, according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed through a display device.
  • the second determining part is further configured to:
  • the gaze direction of each user's eyes in the image is determined; according to the gaze direction of each user's eyes, the viewing concentration of each user is determined; and the viewing angle of the user with the highest viewing concentration is determined as the target viewing angle; or
  • a target user matching a target face is determined by recognizing a face image in the image, and the viewing angle of the target user is determined as the target viewing angle.
  • the target viewing angle includes viewing angles in various directions on a preset plane dimension
  • the display effect includes a display track of the virtual object.
  • the third determination part includes: a first determination subsection, configured to determine the display position of the virtual object corresponding to the viewing angle in each direction; and a second determination subsection, configured to determine the display track of the virtual object according to each display position of the virtual object.
  • the display part is further configured to: display, according to the display track, an augmented reality effect in which the real scene and the virtual object are superimposed through the display device, so that the virtual object moves on the display device according to the display track.
  • In some embodiments, the third determination part further includes: a third determination sub-part, configured to determine, by recognizing the image, the display duration corresponding to the virtual object at each display position.
  • Correspondingly, the third determination sub-part is further configured to determine the display track of the virtual object according to the display positions of the virtual object and the display duration corresponding to each display position.
  • In some embodiments, the third determination sub-part is further configured to: determine the viewing angle of each user in the image by recognizing the image; for each display position, determine the number of users in the image whose viewing angle is consistent with the viewing angle corresponding to the display position; and determine the display duration corresponding to the display position by querying, according to the number of users, a preset correspondence between the number of users and the display duration.
  • In some embodiments, the third determination part further includes: an acquisition sub-part, configured to acquire the position of the real object in the real scene; and a fourth determination sub-part, configured to determine the display effect of the virtual object according to the position of the real object and the target viewing angle.
  • The acquisition sub-part is further configured to: capture an image including the real object through a camera of the display device; and determine the position of the real object according to the image including the real object;
  • or the acquisition sub-part is further configured to: emit a first light ray to the real scene; receive a second light ray reflected back from the first light ray by a real object in the real scene; and determine the position of the real object according to the emission parameters of the first light ray and the reflection parameters of the second light ray.
  • In some embodiments, the display device includes a display screen that is movable on a preset slide rail and provided with a camera; the display apparatus further includes: an acquisition part, configured to capture, through the camera, the image including the current viewing user when the display screen has moved to a target position.
  • In some embodiments, the real scene includes at least one real object,
  • and the virtual object includes virtual labels and the guide lines corresponding to the virtual labels.
  • The first determination part is further configured to: determine attribute information of each real object in the real scene; determine, according to the attribute information of each real object, a virtual label matching the real object; and determine a guide line corresponding to each virtual label.
  • In some embodiments, the display effect of the virtual object includes the display position of each virtual label and the display position of the guide line corresponding to the virtual label.
  • The display part is further configured to: for each virtual label, display the virtual label on the display device according to the display position of the virtual label; and display the guide line on the display device according to the display position of the guide line corresponding to the virtual label, where the guide line is used to link the virtual label with the real object matching the virtual label.
  • a "part" may be a part of a circuit, a part of a processor, a part of a program or software, etc., of course, a unit, a module or a non-modularity.
  • It should be noted that, in the embodiments of the present application, if the above display method is implemented in the form of software function parts and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the related art, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a display device to perform all or part of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a magnetic disk, or an optical disk.
  • In this way, the embodiments of the present application are not limited to any specific combination of hardware and software.
  • An embodiment of the present application provides a display device, including a memory and a processor, where the memory stores a computer program that can run on the processor, and the processor implements the steps of the above method when executing the program.
  • An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above method are implemented.
  • An embodiment of the present application provides a computer program including computer-readable code; when the computer-readable code runs in an electronic device (such as a display device), a processor in the electronic device executes steps for implementing the above method.
  • FIG. 7 is a schematic diagram of a hardware entity of the display device in the embodiments of the application.
  • The hardware entity of the display device 700 includes: a processor 701, a communication interface 702, and a memory 703, wherein:
  • the processor 701 generally controls the overall operation of the display device 700;
  • the communication interface 702 allows the display device to communicate with other devices over a network;
  • the memory 703 is configured to store instructions and applications executable by the processor 701, and may also cache data to be processed or already processed by the processor 701 and the parts of the display device 700 (e.g., image data, audio data, voice communication data, and video communication data), and may be implemented by flash memory (FLASH) or random access memory (Random Access Memory, RAM).
  • In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners.
  • The device embodiments described above are merely illustrative.
  • For example, the division of the units is only a logical function division, and there may be other division manners in actual implementation:
  • multiple units or components may be combined, may be integrated into another system, or some features may be ignored or not implemented.
  • The coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be electrical, mechanical, or in other forms.
  • The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of this embodiment.
  • In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit;
  • the above integrated unit may be implemented either in the form of hardware or in the form of hardware plus software functional units.
  • Those of ordinary skill in the art can understand that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium may be a volatile or non-volatile storage medium, including various media that can store program code, such as a removable storage device, a read-only memory (Read Only Memory, ROM), a magnetic disk, or an optical disk.
  • Alternatively, if the above integrated units of the present application are implemented in the form of software function modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
  • Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the related art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a display device to perform all or part of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
  • Embodiments of the present disclosure provide a display method, apparatus, device, storage medium, and computer program, wherein the method includes: determining a virtual object matching a real object in a real scene; determining a target viewing angle by recognizing an image including a current viewing user; determining a display effect of the virtual object according to the target viewing angle; and displaying, through a display device and according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed.
  • In this way, when the augmented reality effect is displayed, the user's viewing or intelligent-interaction requirements can be automatically satisfied according to the actual situation of the current viewing user.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A display method, apparatus, device, storage medium, and computer program. The method includes: determining a virtual object matching a real object in a real scene (S101); determining a target viewing angle by recognizing an image including a current viewing user (S102); determining a display effect of the virtual object according to the target viewing angle (S103); and displaying, through a display device and according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed (S104). In this way, when the augmented reality effect is displayed, the target viewing angle can be determined according to the actual situation of the current viewing user, the display effect of the current virtual object can be changed based on the target viewing angle, and the currently displayed augmented reality effect in which the real scene and the virtual object are superimposed can be changed accordingly, so that the user's viewing or intelligent-interaction requirements can be automatically satisfied.

Description

Display method, apparatus, device, storage medium, and computer program
Cross-Reference to Related Applications
The present disclosure is based on, and claims priority to, Chinese patent application No. 202010763328.4, filed on July 31, 2020 and entitled "Display method, apparatus, device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to, but is not limited to, the field of computer vision technology, and in particular to a display method, apparatus, device, storage medium, and computer program.
Background
Augmented Reality (AR) technology is a technology that fuses virtual information with real-world information. By rendering virtual objects in real-time images, it loads virtual objects into the real world for interaction, so that the real environment and the virtual objects are displayed on the same interface in real time. For example, based on augmented reality technology, users can see virtual trees superimposed on a real campus playground, virtual flying birds superimposed in the sky, and the like. However, in the related art, when there are multiple viewing users, the display of an augmented reality scene has certain limitations, which affects the users' viewing or interactive experience.
Summary
In view of this, embodiments of the present application provide a display method, apparatus, device, storage medium, and computer program.
The technical solutions of the embodiments of the present application are implemented as follows:
An embodiment of the present application provides a display method, the method including: determining a virtual object matching a real object in a real scene; determining a target viewing angle by recognizing an image including a current viewing user; determining a display effect of the virtual object according to the target viewing angle; and displaying, through a display device and according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed.
In some embodiments, determining the target viewing angle by recognizing the image including the current viewing user includes:
recognizing the number of users in the image to determine the image region with the largest number of users; determining, according to the image region, the viewing position of users in front of the display device; and determining the viewing angle of the viewing position as the target viewing angle; or
recognizing user identities in the image to determine the identity of the current viewing user; and determining the viewing angle of a user having a target identity in the image as the target viewing angle; or
recognizing user gazes in the image to determine the binocular gaze direction of each user in the image; determining the viewing concentration of each user according to the binocular gaze direction of each user; and determining the viewing angle of the user with the highest viewing concentration as the target viewing angle; or
recognizing face images in the image to determine a target user matching a target face; and determining the viewing angle of the target user as the target viewing angle.
In this way, when an augmented reality effect is displayed, the target viewing angle can be determined according to the number, identity, or viewing concentration of the viewing users, or according to the face images of the viewing users, and the display effect of the current virtual object can be changed accordingly, so that when users in front of the display device are at different positions, or when there are multiple users in front of the display device, the users' viewing or interaction needs can be better satisfied.
In some embodiments, the target viewing angle includes viewing angles in respective directions on a preset plane dimension, and the display effect includes a display track of the virtual object. Determining the display effect of the virtual object according to the target viewing angle includes: determining the display position of the virtual object corresponding to the viewing angle in each of the directions; and determining the display track of the virtual object according to the display positions of the virtual object. Displaying, through the display device and according to the display effect, the augmented reality effect in which the real scene and the virtual object are superimposed includes: displaying, through the display device and according to the display track, the augmented reality effect in which the real scene and the virtual object are superimposed, so that the virtual object moves on the display device along the display track.
In this way, since the display track of the virtual object includes the display positions corresponding to the viewing angles in the respective directions on the preset plane dimension, users viewing the augmented reality effect from any direction on the preset plane dimension have the opportunity to see the intended display effect, so that the users' viewing or interaction needs can be better satisfied.
In some embodiments, determining the display effect of the virtual object according to the target viewing angle further includes: determining, by recognizing the image, the display duration corresponding to the virtual object at each display position. Correspondingly, determining the display track of the virtual object according to the display positions of the virtual object includes: determining the display track of the virtual object according to the display positions of the virtual object and the display duration corresponding to each display position.
In this way, the display duration of the virtual object at each display position can be determined according to the situation of the current viewing users in the image, which can further improve the users' viewing or interaction experience.
In some embodiments, determining, by recognizing the image, the display duration corresponding to the virtual object at each display position includes: determining the viewing angle of each user in the image by recognizing the image; for each display position, determining the number of users in the image whose viewing angle is consistent with the viewing angle corresponding to the display position; and determining the display duration corresponding to the display position by querying, according to the number of users, a preset correspondence between the number of users and the display duration.
In this way, the display duration of the virtual object at each display position can be determined according to the number of users whose viewing angles are consistent with the viewing angle corresponding to each display position, which can further improve the users' viewing or interaction experience.
In some embodiments, determining the display effect of the virtual object according to the target viewing angle includes: acquiring the position of the real object in the real scene; and determining the display effect of the virtual object according to the position of the real object and the target viewing angle.
In this way, since both the target viewing angle and the position of the real object in the real world are considered when determining the display effect of the virtual object, a more suitable display effect of the virtual object can be determined, which can further improve the users' viewing or interaction experience.
In some embodiments, acquiring the position of the real object in the real scene includes: capturing an image including the real object through a camera of the display device; and determining the position of the real object according to the image including the real object; or,
acquiring the position of the real object in the real scene includes: emitting a first light ray to the real scene; receiving a second light ray reflected back from the first light ray by a real object in the real scene; and determining the position of the real object according to the emission parameters of the first light ray and the reflection parameters of the second light ray.
In this way, both the manner of determining the position of the real object by recognizing the image including the real object, and the manner of determining the position of the real object according to the parameters of the light emitted to the real scene and the parameters of the received reflected light, are efficient and can determine the position of the real object relatively accurately, thereby improving the display efficiency and display quality of the augmented reality effect and further improving the users' viewing or interaction experience.
In some embodiments, the display device includes a display screen that is movable on a preset slide rail and provided with a camera; and the method further includes: capturing, through the camera, the image including the current viewing user when the display screen has moved to a target position.
In this way, when the augmented reality effect is displayed, the position of the display screen can be automatically adjusted according to the actual situation, so that more accurate information about the current viewing users can be obtained, a more accurate target viewing angle can be obtained, and a more suitable display effect of the virtual object can be determined, further improving the users' viewing or interaction experience.
In some embodiments, the real scene includes at least one real object, and the virtual object includes virtual labels and guide lines corresponding to the virtual labels. Correspondingly, determining the virtual object matching the real object in the real scene includes: determining attribute information of each real object in the real scene; determining, according to the attribute information of each real object, a virtual label matching the real object; and determining a guide line corresponding to each virtual label.
In this way, the attribute information of the real objects can be displayed intuitively through the virtual labels and the guide lines, which can improve the users' viewing or interaction experience.
In some embodiments, the display effect of the virtual object includes the display position of each virtual label and the display position of the guide line corresponding to the virtual label. Correspondingly, displaying, through the display device and according to the display effect, the augmented reality effect in which the real scene and the virtual object are superimposed includes: for each virtual label, displaying the virtual label on the display device according to the display position of the virtual label; and displaying the guide line on the display device according to the display position of the guide line corresponding to the virtual label, where the guide line is used to link the virtual label with the real object matching the virtual label.
In this way, the real object corresponding to each virtual label can be indicated more accurately, so that a better augmented reality effect can be presented, thereby improving the users' viewing or interaction experience.
An embodiment of the present application provides a display apparatus, including: a first determination part, configured to determine a virtual object matching a real object in a real scene; a second determination part, configured to determine a target viewing angle by recognizing an image including a current viewing user; a third determination part, configured to determine a display effect of the virtual object according to the target viewing angle; and a display part, configured to display, through a display device and according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed.
An embodiment of the present application provides a display device, including a memory and a processor. The memory stores a computer program executable on the processor, and the processor implements the steps of the above method when executing the program.
An embodiment of the present application provides a computer storage medium on which a computer program is stored. When the computer program is executed by a processor, the steps of the above method are implemented.
An embodiment of the present application provides a computer program including computer-readable code. When the computer-readable code runs in an electronic device, a processor in the electronic device executes steps for implementing the above method.
In the embodiments of the present application, a virtual object matching a real object in a real scene is first determined; then a target viewing angle is determined by recognizing an image including a current viewing user; a display effect of the virtual object is then determined according to the target viewing angle; and finally, according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed is displayed through a display device. In this way, when the augmented reality effect is displayed, the target viewing angle can be determined according to the actual situation of the current viewing user, the display effect of the current virtual object can be changed based on the target viewing angle, and the currently displayed augmented reality effect in which the real scene and the virtual object are superimposed can be changed accordingly, so that the user's viewing or intelligent-interaction requirements can be automatically satisfied. In some embodiments, the target viewing angle can also be determined according to the number, identity, or viewing concentration of the viewing users, or according to the face images of the viewing users, and the display effect of the current virtual object can be changed accordingly, so that when users in front of the display device are at different positions, or when there are multiple users in front of the display device, the users' viewing or interaction needs can be better satisfied.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of the specification. These drawings illustrate embodiments consistent with the present application and, together with the specification, serve to explain the technical solutions of the present application.
FIG. 1 is a schematic flowchart of a display method according to an embodiment of the present application;
FIG. 2A is a schematic flowchart of a display method according to an embodiment of the present application;
FIG. 2B is a schematic flowchart of a method for determining, by recognizing an image, the display duration corresponding to a virtual object at each display position according to an embodiment of the present application;
FIG. 3A is a schematic flowchart of a display method according to an embodiment of the present application;
FIG. 3B is a schematic flowchart of a method for acquiring the position of a real object in a real scene according to an embodiment of the present application;
FIG. 3C is a schematic flowchart of a method for acquiring the position of a real object in a real scene according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of a display method according to an embodiment of the present application;
FIG. 5A is a schematic flowchart of a display method according to an embodiment of the present application;
FIG. 5B is a schematic flowchart of a method for displaying an augmented reality effect in which a real scene is superimposed with each virtual label and the guide line corresponding to the virtual label according to an embodiment of the present application;
FIG. 5C is a schematic diagram of a display result of an augmented reality effect in which a real scene and virtual objects are superimposed according to an embodiment of the present application;
FIG. 6 is a schematic diagram of the composition structure of a display apparatus according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a hardware entity of a display device according to an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the technical solutions of the present application are further elaborated below with reference to the accompanying drawings and embodiments. The described embodiments should not be regarded as limiting the present application, and all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the following description, "some embodiments" describes a subset of all possible embodiments. It should be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and they may be combined with each other where no conflict arises.
Where descriptions such as "first/second" appear in this application, the following explanation is added: in the description below, the terms "first/second/third" merely distinguish similar objects and do not represent a specific ordering of the objects. It should be understood that, where permitted, "first/second/third" may be interchanged in a specific order or sequence, so that the embodiments of the present application described here can be implemented in an order other than that illustrated or described here.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used herein are only for the purpose of describing the present application and are not intended to limit the present application.
To better understand the display method provided by the embodiments of the present application, display methods in the related art are first described.
In the related art, an augmented reality effect in which a real scene and virtual objects are superimposed in real time can be displayed based on optical principles, or based on video synthesis technology. When an augmented reality effect is displayed based on optical principles, the display device may use a transparent display screen. In the related art, the transparent display screen can be placed between the real scene and the user; it can receive light reflected from the real scene that passes through the screen, and it can also display the virtual objects to be superimposed on the real scene, so that the user can view, through the transparent display screen, a picture in which the real scene and the virtual objects are superimposed in real time. When an augmented reality effect is displayed based on video synthesis technology, images or videos of the real world can be captured by a camera, the captured images or videos can be synthesized with the virtual objects, and the synthesized images or videos can finally be displayed on the display device, realizing an augmented reality effect in which the real scene and the virtual objects are superimposed in real time.
However, in the related art, when an augmented reality scene is displayed through a display device, the display content and display effect of the augmented reality scene are usually independent of the users viewing in front of the display device. When users in front of the display device are at different positions, or when there are multiple users in front of the display device, the augmented reality effect displayed by the display device cannot well satisfy the users' viewing or interaction needs.
Taking the above optics-based display solution as an example: since a user views the real scene via light reflected from the real scene that passes through the display device, when users are at different positions in front of the display device, the positions on the display device at which the real objects in the real scene appear also differ. Consequently, the real-time superposition effect of the real scene and the virtual objects seen by users at different positions differs, i.e., the real-time superposition effect of the real scene and the virtual objects differs across viewing angles. As a result, only users at a particular viewing angle can see the intended superposition effect, while the superposition effect seen by users at other viewing angles deviates to some extent.
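To make the parallax issue concrete, the following Python sketch (illustrative only; the coordinate frame, the names, and the simple ray-plane intersection are our assumptions, not part of the patent) computes where a real object appears on a transparent screen for a given eye position, by intersecting the eye-to-object ray with the screen plane:

```python
import numpy as np

def screen_anchor(eye, obj, screen_z=0.0):
    """Intersect the ray from the viewer's eye to a real object with
    the screen plane z = screen_z; returns the (x, y) point on the
    screen where the object appears to that viewer."""
    eye, obj = np.asarray(eye, float), np.asarray(obj, float)
    t = (screen_z - eye[2]) / (obj[2] - eye[2])  # ray parameter at the plane
    hit = eye + t * (obj - eye)
    return hit[:2]

# The same object anchors at different screen points for different viewers,
# which is why an overlay placed for one viewing angle looks offset for another.
obj = [0.0, 1.0, 2.0]                          # real object behind the screen
print(screen_anchor([-0.5, 1.0, -1.5], obj))   # viewer on the left  -> x < 0
print(screen_anchor([+0.5, 1.0, -1.5], obj))   # viewer on the right -> x > 0
```

Running the two calls gives two different anchor points for the same real object, which is exactly the deviation described above.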
It can be seen that, when displaying an augmented reality scene, a more intelligent solution is needed to improve the display effect of the augmented reality scene, so as to better satisfy the users' viewing or interaction needs.
An embodiment of the present application provides a display method. The method may be executed by a processor, which may be an integrated circuit chip with signal processing capability. In implementation, each step of the method may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. Here, the processor may be a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor.
FIG. 1 is a schematic flowchart of the implementation of the display method according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
Step S101: determining a virtual object matching a real object in a real scene.
Here, the real scene may be any suitable scene in the real world, such as a campus playground, the sky, an office, or a museum. There may be one or more real objects in the real scene, and a real object may be any suitable object that actually exists in the real scene, such as a flagpole on a campus playground, clouds in the sky, a desk in an office, or an exhibit in a museum.
The virtual object may be information such as a virtual image or virtual text matching the real object. For example, a virtual prop used to decorate a desk in a real office scene is a virtual object matching the desk; a virtual digital human used to explain an exhibit in a real museum scene is a virtual object matching the exhibit; and the virtual labels used to annotate each building in a real estate scene, together with the guide line corresponding to each virtual label, are virtual objects matching each building in the scene. In implementation, the virtual object may be determined according to a specific matching relationship between real objects and virtual objects, or may be generated from the real object according to a specific virtual object generation strategy using techniques such as image, video, or three-dimensional (3D, Three-Dimensional) model generation. Those skilled in the art may choose, according to the actual situation, the virtual object matching the real object and a suitable manner of determining it, which is not limited here.
In some embodiments, the virtual object may also interact with the viewing user in real time. For example, in an augmented-reality game scene, the fighting actions of a virtual character in the game may be controlled through gloves or a hand wand accompanying the game; or, in an augmented-reality chess competition, the movement of virtual chess pieces may be controlled through gloves accompanying the competition.
Step S102: determining a target viewing angle by recognizing an image including a current viewing user.
Here, the current viewing user is the user currently viewing in front of the display device. The image including the current viewing user may be captured in real time by an image capture apparatus provided in the display device, or may be captured by another image capture apparatus outside the display device, which is not limited here.
The viewing angle is the angle at which a user in the real scene views the display device or the real object, and the target viewing angle is a viewing angle from which the intended augmented reality effect can be viewed. In implementation, the target viewing angle may be determined by performing image recognition on the image including the current viewing user, according to the viewing situation of the current viewing user in the image. The method of determining the target viewing angle may be decided according to the actual situation, which is not limited in the embodiments of the present application. For example, the target viewing angle may be determined according to the region where the current viewing user is located, or according to the gaze direction of the current user.
Step S103: determining a display effect of the virtual object according to the target viewing angle.
Here, the display effect of the virtual object is the effect of the virtual object when displayed on the display device, and may include, but is not limited to, one or more of the display position, display duration, display color, interaction mode, and display size of the virtual object on the display device.
The display effect of the virtual object may be determined according to a specific correspondence between target viewing angles and display effects, or may be obtained by rendering through a rendering model according to the target viewing angle combined with information about the real object matching the virtual object. Those skilled in the art may choose, in implementation, a suitable manner of determining the display effect of the virtual object according to the actual situation, which is not limited here.
Step S104: displaying, through a display device and according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed.
Here, the display device may be any suitable electronic device supporting an augmented reality display function, and may include, but is not limited to, one or more of a smart TV, a mobile phone, a tablet, and a display screen. In addition, the display device may also be a relatively novel display screen that can move on a slide rail or in other ways. When the display screen moves to a target position, an augmented reality effect in which the real scene and the virtual object are superimposed can be displayed through the display screen. In some embodiments, the user may also trigger relevant information on the augmented reality effect displayed on the display screen to obtain more detailed or other related information. The type of the display screen is not limited here; it may be a touch screen or a non-touch screen.
In implementation, the virtual object may be displayed on the display device according to the display effect of the virtual object, and the augmented reality effect in which the real scene and the virtual object are superimposed may then be displayed through the display device. The augmented reality effect in which the real scene and the virtual object are superimposed may be displayed based on optical principles, or based on video synthesis technology. Other suitable manners may also be used to display, through the display device and according to the display effect of the virtual object, the augmented reality effect in which the real scene and the virtual object are superimposed, which is not limited in the embodiments of the present application.
In some embodiments, the augmented reality effect in which the real scene and the virtual object are superimposed may be displayed based on optical principles. In implementation, the display device is a transparent display screen placed between the viewing user and the real scene, and the user can view the real scene through the transparent display screen. When the augmented reality effect is displayed based on optical principles, the virtual object viewed by the user is displayed by the display device itself according to the display effect of the virtual object, and the real scene viewed by the user is the real scene of the real world presented through the display device based on optical principles.
In some embodiments, the augmented reality effect in which the real scene and the virtual object are superimposed may be displayed based on video synthesis technology. In implementation, images or videos of the real scene may be captured by a camera, the virtual object may be synthesized with the captured images or videos of the real scene according to the display effect of the virtual object, and the synthesized images or videos may finally be displayed through the display device, thereby displaying the augmented reality effect in which the real scene and the virtual object are superimposed.
In some embodiments, the above step S102 may include the following step S102a:
Step S102a: recognizing the number of users in the image to determine the image region with the largest number of users; determining, according to the image region, the viewing position of users in front of the display device; and determining the viewing angle of the viewing position as the target viewing angle.
Here, any suitable image recognition algorithm may be used to recognize the number of users in the image and determine the image region with the largest number of users. The viewing position is the position in the real scene at which users view in front of the display device. For the image region with the largest number of users, the viewing position of the corresponding users in front of the display device may be calculated from the position of the image region in the image according to a specific mapping formula, or may be determined according to a specific correspondence between positions of image regions in the image and the corresponding real-world viewing positions. In implementation, the mapping formula and the correspondence may be determined in advance by calibration, big data analysis, or other means.
The viewing angle of the viewing position is the angle at which a user views the display device or the real object from the viewing position. In implementation, the viewing angle of the viewing position may be calculated using the coordinates of the viewing position in three-dimensional space in the real scene together with the coordinates of the real object in three-dimensional space, and used as the target viewing angle; alternatively, the viewing angle of the viewing position may be obtained by querying a specific correspondence between viewing positions and viewing angles according to the viewing position. Those skilled in the art may choose a suitable manner of determining the viewing angle of the viewing position according to the actual situation, which is not limited here.
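As a rough illustration of step S102a (a sketch under our own assumptions: the three-region split, the lookup table values, and all names below are hypothetical; the patent leaves the actual mapping to calibration or big-data analysis), one might count detected faces per image region and look up a pre-calibrated viewing position:

```python
from collections import Counter

# Pre-calibrated mapping from image region to a real-world viewing
# position (meters, display-centered coordinates); values are made up.
REGION_TO_POSITION = {"left": (-1.0, 1.6, -2.0),
                      "center": (0.0, 1.6, -2.0),
                      "right": (1.0, 1.6, -2.0)}

def busiest_region(face_centers, image_width):
    """Assign each detected face center to a third of the image and
    return the region containing the most users."""
    def region_of(x):
        if x < image_width / 3:
            return "left"
        return "center" if x < 2 * image_width / 3 else "right"
    counts = Counter(region_of(x) for x, _ in face_centers)
    return counts.most_common(1)[0][0]

faces = [(100, 80), (120, 90), (600, 85)]      # (x, y) face centers in pixels
region = busiest_region(faces, image_width=640)  # -> "left" (2 of 3 users)
viewing_position = REGION_TO_POSITION[region]    # feeds the viewing-angle step
```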
In some embodiments, the above step S102 may include the following step S102b:
Step S102b: recognizing user identities in the image to determine the identity of the current viewing user; and determining the viewing angle of a user having a target identity in the image as the target viewing angle.
Here, any suitable image recognition algorithm may be used to recognize a user's identity through the user's features in the image. The user's features may include, but are not limited to, the user's facial features, clothing features, and body-shape features. The target identity may be a preset identity, and may include, but is not limited to, an identity with a specific viewing priority or an identity with specific viewing permissions. The viewing angle of a user having the target identity in the image may be determined as the target viewing angle.
For example, if the current viewing scenario is a teacher leading one or more primary school students to view exhibits displayed through augmented reality technology in a museum, an image recognition algorithm may identify the teacher and the students among the current viewing users according to the users' clothing features and/or body-shape features in the image, and determine the viewing angle of the one or more students as the target viewing angle, thereby providing the students with the intended augmented reality display effect and satisfying their curiosity and desire for knowledge about the exhibits.
As another example, if the current viewing scenario is a tour guide leading a tour group to view featured buildings displayed through augmented reality technology while touring an attraction, an image recognition algorithm may identify the tour-group members among the current viewing users according to the users' facial-expression features and/or clothing features in the image, and determine the viewing angle of the tour-group members as the target viewing angle, thereby providing the tour-group members with the intended augmented reality display effect and improving their touring experience.
As yet another example, if the current viewing scenario is a multi-user interactive augmented reality game, an image recognition algorithm may identify a VIP (Very Important Person) user among the current viewing users according to the users' facial features in the image, and determine the viewing angle of the VIP user as the target viewing angle, thereby providing the VIP user with the intended augmented reality display effect and improving the VIP user's game interaction experience. In implementation, the facial features of VIP users may be preset; during identity recognition, the users' facial features in the image are matched against the preset facial features of VIP users, so as to identify the VIP users among the current viewing users.
In some embodiments, the above step S102 may include the following step S102c:
Step S102c: recognizing user gazes in the image to determine the binocular gaze direction of each user in the image; determining the viewing concentration of each user according to the binocular gaze direction of each user; and determining the viewing angle of the user with the highest viewing concentration as the target viewing angle.
Here, the viewing concentration is the degree to which a user concentrates when viewing the real object or the display device, and may be determined by detecting the gaze directions of the user's eyes in the image. For example, when there are multiple users in front of the display device, the binocular gaze direction of each user in the image may be detected, the user's viewing concentration may be determined from the binocular gaze direction, and the viewing angle of the user with the highest viewing concentration may then be determined as the target viewing angle, ensuring that the most attentive user can see the intended augmented reality effect. In implementation, any suitable algorithm may be used to determine the users' binocular gaze directions and the viewing concentration corresponding to the gaze directions, which is not limited in the embodiments of the present application.
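The patent does not fix a concentration metric; as a minimal sketch of one plausible choice (the cosine-to-screen-normal scoring and all names here are our assumptions), one could score how directly each user's averaged binocular gaze points at the screen:

```python
import numpy as np

def viewing_concentration(gaze_dirs):
    """Score each user's concentration as the cosine similarity between
    the mean of their two eyes' gaze directions and the direction toward
    the screen; higher means more focused on the display."""
    toward_screen = np.array([0.0, 0.0, 1.0])  # assumed screen normal
    scores = []
    for left, right in gaze_dirs:
        mean_gaze = (np.asarray(left, float) + np.asarray(right, float)) / 2
        mean_gaze /= np.linalg.norm(mean_gaze)
        scores.append(float(mean_gaze @ toward_screen))
    return scores

# Two users: the first looks almost straight at the screen, the second away.
gazes = [([0.05, 0.0, 1.0], [-0.05, 0.0, 1.0]),
         ([0.70, 0.0, 0.7], [0.60, 0.1, 0.7])]
scores = viewing_concentration(gazes)
target_user = int(np.argmax(scores))  # this user's viewing angle becomes the target
```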
In some embodiments, the above step S102 may include the following step S102d:
Step S102d: recognizing face images in the image to determine a target user matching a target face; and determining the viewing angle of the target user as the target viewing angle.
Here, the target face may be preset, and may include, but is not limited to, the face of a user with a specific viewing priority or the face of a user with specific viewing permissions. In implementation, any suitable face recognition algorithm may be used to recognize the face images in the image, which is not limited in the embodiments of the present application.
In the display method provided by the embodiments of the present application, a virtual object matching a real object in a real scene is first determined; then a target viewing angle is determined by recognizing an image including a current viewing user; a display effect of the virtual object is then determined according to the target viewing angle; and finally, according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed is displayed through a display device. In this way, when the augmented reality effect is displayed, the target viewing angle can be determined according to the actual situation of the current viewing user, and the display effect of the current virtual object can be changed based on the target viewing angle, thereby changing the currently displayed augmented reality effect in which the real scene and the virtual object are superimposed, so that the user's viewing or intelligent-interaction requirements can be automatically satisfied. In some embodiments, the target viewing angle can also be determined according to the number, identity, or viewing concentration of the viewing users, or according to the face images of the viewing users, and the display effect of the current virtual object can be changed accordingly, so that when users in front of the display device are at different positions, or when there are multiple users in front of the display device, the users' viewing or interaction needs can be better satisfied.
An embodiment of the present application provides a display method, which may be executed by a processor. As shown in FIG. 2A, the method includes the following steps:
Step S201: determining a virtual object matching a real object in a real scene.
Here, step S201 corresponds to the aforementioned step S101, and its implementation may refer to the specific implementation of step S101.
Step S202: determining a target viewing angle by recognizing an image including a current viewing user, where the target viewing angle includes viewing angles in respective directions on a preset plane dimension.
Here, the preset plane dimension may include one or more of plane dimensions such as a specific horizontal plane or a specific vertical plane. The viewing angles in the respective directions on the preset plane dimension include the angles at which the real object or the display device is viewed in front of the display device along the respective directions of the preset plane dimension. For example, the angles at which users at different positions from left to right in front of the display device view the real object or the display device may correspond to the viewing angles in the respective directions on the horizontal plane dimension; and the angles at which users of different heights at the same position in front of the display device view the real object or the display device, or the angles at which a user at the same position views the real object or the display device while standing or crouching, may correspond to the viewing angles in the respective directions on the vertical plane dimension.
Step S203: determining the display position of the virtual object corresponding to the viewing angle in each of the directions.
Here, the display position of the virtual object corresponding to each viewing angle may be determined according to a specific correspondence between viewing angles and display positions, or may be calculated through a specific computation model according to each viewing angle combined with information about the real object matching the virtual object. In implementation, a suitable manner of determining the display position of the virtual object may be chosen according to the actual situation, which is not limited here.
Step S204: determining a display track of the virtual object according to the display positions of the virtual object.
Here, the display track includes the positions on the display device at which the virtual object can be displayed. The virtual object can be displayed in turn at the positions on the display device in a random or specific order, so as to move on the display device along the display track.
Step S205: displaying, through the display device and according to the display track, the augmented reality effect in which the real scene and the virtual object are superimposed, so that the virtual object moves on the display device along the display track.
Here, when the virtual object moves on the display device along the display track, viewers in each direction of the preset plane dimension have the opportunity to see the intended display effect. For example, when there are viewing users in the left, center, and right directions in front of the display device, the virtual object may be displayed in turn at the positions corresponding to the viewing angles of these three directions, so that the users in these three directions can each view the intended display effect within a specific period of time.
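A toy sketch of such a display track (illustrative only; the per-angle screen positions, the dwell time, and the function names are our assumptions, not values from the patent): the virtual object cycles through the display positions computed for each direction's viewing angle:

```python
import itertools
import time

def run_display_track(positions, dwell_seconds, render):
    """Cycle the virtual object through the display positions computed
    for each direction's viewing angle, pausing at each one so viewers
    in that direction see the intended overlay for a while."""
    for pos in itertools.cycle(positions):
        render(pos)                 # draw the virtual object at this position
        time.sleep(dwell_seconds)   # hold before moving to the next position

# Screen-space positions (pixels) for left / center / right viewing angles.
track = [(160, 300), (480, 300), (800, 300)]
# run_display_track(track, dwell_seconds=2.0, render=print)  # runs forever
```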
In some embodiments, the above step S203 may further include step S203a: determining, by recognizing the image, the display duration corresponding to the virtual object at each display position. Correspondingly, the above step S204 may include: determining the display track of the virtual object according to the display positions of the virtual object and the display duration corresponding to each display position. Here, the display track includes the display positions of the virtual object and the display duration of the virtual object at each display position. When the virtual object moves on the display device along the display track, the virtual object may be displayed in turn at the positions in a random or specific order, and may be displayed at each position for the corresponding display duration. For example, when there are viewing users in the left, center, and right directions in front of the display device, virtual labels may be displayed in turn at the positions corresponding to the viewing angles of these three directions for the corresponding display durations, so that the users in these three directions can each view the intended display effect within a specific period of time.
In some embodiments, as shown in FIG. 2B, the above step S203a may include:
Step S231: determining the viewing angle of each user in the image by recognizing the image;
Step S232: for each display position, determining the number of users in the image whose viewing angle is consistent with the viewing angle corresponding to the display position;
Step S233: determining the display duration corresponding to the display position by querying, according to the number of users, a preset correspondence between the number of users and the display duration.
Here, the preset correspondence between the number of users and the display duration may be preset by the user according to actual needs, which is not limited here. In some embodiments, the number of users may be proportional to the display duration, i.e., the larger the number of users at a display position, the longer the display duration of the virtual object at that position.
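For example, the proportional variant of steps S231-S233 might look like the following sketch (the base/extra timing constants and the names are our assumptions, not values from the patent):

```python
def durations_by_user_count(user_counts, base_s=1.0, extra_per_user_s=0.5):
    """Map the number of users watching from each display position's
    viewing angle to a dwell time: more users, longer display."""
    return {pos: base_s + extra_per_user_s * n
            for pos, n in user_counts.items()}

# 3 users share the left position's viewing angle, 1 the right.
counts = {"left": 3, "center": 0, "right": 1}
print(durations_by_user_count(counts))
# {'left': 2.5, 'center': 1.0, 'right': 1.5}
```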
In the display method provided by the embodiments of the present application, when an augmented reality effect is displayed, the display track of the virtual object is determined according to the viewing angles in the respective directions on the preset plane dimension, and the augmented reality effect in which the real scene and the virtual object are superimposed is displayed through the display device according to the display track, so that the virtual object moves on the display device along the display track. In this way, since the display track of the virtual object includes the display positions corresponding to the viewing angles in the respective directions on the preset plane dimension, users viewing the augmented reality effect from any direction of the preset plane dimension have the opportunity to see the intended display effect, so that the users' viewing or interaction needs can be better satisfied. In some embodiments, the display duration of the virtual object at each display position can also be determined according to the number of users whose viewing angles are consistent with the viewing angle corresponding to each display position, which can further improve the users' viewing or interaction experience.
An embodiment of the present application provides a display method, which may be executed by a processor. As shown in FIG. 3A, the method includes the following steps:
Step S301: determining a virtual object matching a real object in a real scene;
Step S302: determining a target viewing angle by recognizing an image including a current viewing user;
Here, the above steps S301 to S302 correspond to the aforementioned steps S101 to S102, respectively, and their implementation may refer to the specific implementations of steps S101 to S102.
Step S303: acquiring the position of the real object in the real scene;
Here, the position of the real object in the real scene is the position of the real object in the real world. In implementation, the position of the real object may be preset, or may be obtained by detecting the real object in the real scene. Those skilled in the art may choose a suitable manner of acquiring the position of the real object according to the actual situation, which is not limited here.
Step S304: determining a display effect of the virtual object according to the position of the real object and the target viewing angle;
Here, the display effect of the virtual object may be determined according to the position of the real object matching the virtual object and the target viewing angle. Those skilled in the art may, based on optical principles, choose a suitable manner of determining the display effect of the virtual object according to actual needs, which is not limited here.
Step S305: displaying, through a display device and according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed.
Here, step S305 corresponds to the aforementioned step S104, and its implementation may refer to the specific implementation of step S104.
In some embodiments, as shown in FIG. 3B, the above step S303 may include step S331a and step S332a, wherein:
Step S331a: capturing an image including the real object through a camera of the display device;
Here, the camera may include, but is not limited to, one or more of a standard camera, a telephoto camera, a wide-angle lens, a zoom camera, a digital light-field camera, and a digital camera. The camera may be arranged at any suitable position of the display device, including, but not limited to, the top, bottom, front, or side of the display screen. In implementation, the camera may be built into the display device or arranged outside the display device, which is not limited here.
Step S332a: determining the position of the real object according to the image including the real object.
Here, the position of the real object in the real world may be calculated from the position of the real object in the image according to a specific mapping formula, or may be determined according to a specific correspondence between positions of real objects in the image and the corresponding positions of the real objects in the real world. In implementation, the mapping formula and the correspondence may be determined in advance by calibration, big data analysis, or other means.
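The patent only requires that some pre-calibrated mapping exists; one common choice (an assumption on our part, as are the matrix values and names below) is a planar homography from image pixels to ground-plane coordinates for objects resting on a known plane:

```python
import numpy as np

# Homography from image pixels to ground-plane coordinates (meters),
# obtained offline from calibration points; this matrix is made up.
H = np.array([[0.01,  0.0,  -3.2],
              [0.0,  -0.012, 4.1],
              [0.0,   0.0,   1.0]])

def pixel_to_ground(u, v):
    """Map the pixel where the real object touches the ground plane
    to its real-world (x, y) position via the calibrated homography."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

print(pixel_to_ground(320, 240))  # approximate real-world object position
```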
In some embodiments, as shown in FIG. 3C, the above step S303 may include step S331b, step S332b, and step S333b, wherein:
Step S331b: emitting a first light ray to the real scene;
Here, the first light ray may be emitted by the display device or by a device other than the display device. The first light ray may include, but is not limited to, any suitable light such as infrared light or visible light.
Step S332b: receiving a second light ray reflected back from the first light ray by a real object in the real scene;
Here, the second light ray may be received by any suitable photosensitive device, which may include, but is not limited to, an infrared sensor, an image sensor, or the like.
Step S333b: determining the position of the real object according to the emission parameters of the first light ray and the reflection parameters of the second light ray.
Here, the emission parameters of the first light ray may include, but are not limited to, one or more of the emission time, light direction, and light intensity. The reflection parameters of the second light ray may include, but are not limited to, one or more of the reception time, light direction, and light intensity of the second light ray. In implementation, any suitable method may be used to determine the position of the real object according to the emission parameters of the first light ray and the reflection parameters of the second light ray. For example, the position of the real object may be determined according to the interval between the emission time of the first light ray and the reception time of the second light ray, combined with the propagation speed of light; or the position of the real object may be determined jointly according to the directions of the first and second light rays, combined with the positions of the device emitting the first light ray and the device receiving the second light ray.
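The time-of-flight variant described above reduces to a one-line distance computation; the sketch below (the names and single-emitter geometry are our assumptions) recovers the object position from the emission time, reception time, and emission direction:

```python
C = 299_792_458.0  # speed of light, m/s

def object_position(emit_time_s, recv_time_s, direction, emitter_pos):
    """Time-of-flight: the ray travels to the object and back, so the
    one-way distance is half the round trip; walk that far along the
    emitted ray's (unit-length) direction from the emitter."""
    distance = C * (recv_time_s - emit_time_s) / 2.0
    dx, dy, dz = direction
    x0, y0, z0 = emitter_pos
    return (x0 + distance * dx, y0 + distance * dy, z0 + distance * dz)

# A 20 ns round trip corresponds to an object about 3 m away.
print(object_position(0.0, 20e-9, (0.0, 0.0, 1.0), (0.0, 0.0, 0.0)))
```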
In the display method provided by the embodiments of the present application, the position of the real object in the real scene is acquired, and the display effect of the virtual object is determined according to the position of the real object and the target viewing angle. In this way, since both the target viewing angle and the position of the real object in the real world are considered when determining the display effect of the virtual object, a more suitable display effect of the virtual object can be determined, which can further improve the users' viewing or interaction experience. In addition, the position of the real object can be determined by recognizing the image including the real object, or according to the parameters of the light emitted to the real scene and the parameters of the received reflected light; such processing is efficient and can determine the position of the real object relatively accurately, thereby improving the display efficiency and display quality of the augmented reality effect and further improving the users' viewing or interaction experience.
An embodiment of the present application provides a display method, which may be executed by a processor. As shown in FIG. 4, the method includes the following steps:
Step S401: capturing, through the camera, an image including the current viewing user when the display screen has moved to a target position, where the display screen is movable on a preset slide rail and provided with a camera;
Here, the target position is a suitable position at which the augmented reality effect in which the real scene and the virtual object are superimposed can be displayed. It may be a preset position, or the position of the display screen when a specific real scene or real object is detected, which is not limited in the embodiments of the present application.
The camera may be arranged at any suitable position of the display screen, including, but not limited to, the top, bottom, front, or side of the display screen. In implementation, the camera may be built into the display screen or arranged outside the display screen, which is not limited here.
Step S402: determining a virtual object matching a real object in a real scene;
Step S403: determining a target viewing angle by recognizing the image including the current viewing user;
Step S404: determining a display effect of the virtual object according to the target viewing angle;
Step S405: displaying, through a display device and according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed.
Here, the above steps S402 to S405 correspond to the aforementioned steps S101 to S104, respectively, and their implementation may refer to the specific implementations of steps S101 to S104.
In the display method provided by the embodiments of the present application, the display device is a display screen that is movable on a preset slide rail and provided with a camera, so that the position of the display screen can be automatically adjusted according to the actual situation when the augmented reality effect is displayed. Moreover, when the display screen has moved to the target position, the image including the current viewing user can be captured through the camera; in this way, more accurate information about the current viewing user can be obtained, a more accurate target viewing angle can be determined, and a more suitable display effect of the virtual object can then be determined, further improving the users' viewing or interaction experience.
An embodiment of the present application provides a display method, which may be executed by a processor. As shown in FIG. 5A, the method includes the following steps:
Step S501: determining attribute information of each real object in a real scene;
Here, the attribute information may include, but is not limited to, descriptive information such as the name, type, and description of the real object. In implementation, the attribute information of real objects may be preset and stored in a local memory or a database, and the attribute information of each real object may be determined by reading the local memory or querying the database; the attribute information of a real object may also be determined by capturing an image containing each real object and recognizing that image. Those skilled in the art may choose a suitable manner of determining the attribute information of each real object in the real scene according to the actual situation, which is not limited here.
Step S502: determining, according to the attribute information of each real object, a virtual label matching the real object;
Here, the virtual label may be a kind of virtual object that includes the attribute information of the corresponding real object, and may include, but is not limited to, text or images representing the attribute information.
Step S503: determining a guide line corresponding to each virtual label;
Here, the guide line may be a kind of virtual object used to indicate the association between a virtual label and the corresponding real object. The guide line may include, but is not limited to, any one or more of a straight line, a curve, and a polyline. In implementation, the guide line may include a solid line, a dashed line, or any combination of dashed and solid lines.
Step S504: determining a target viewing angle by recognizing an image including a current viewing user;
Step S505: determining, according to the target viewing angle, the display effect of each virtual label and the guide line corresponding to the virtual label;
Step S506: displaying, through a display device and according to the display effect, an augmented reality effect in which the real scene is superimposed with each virtual label and the guide line corresponding to the virtual label.
Here, the above steps S504 to S506 correspond to the aforementioned steps S102 to S104, respectively, and their implementation may refer to the specific implementations of steps S102 to S104.
In some embodiments, the display effect of the virtual object includes the display position of each virtual label and the display position of the guide line corresponding to each virtual label. Correspondingly, as shown in FIG. 5B, the above step S506 may include:
Step S511: for each virtual label, displaying the virtual label on the display device according to the display position of the virtual label;
Step S512: displaying the guide line on the display device according to the display position of the guide line corresponding to the virtual label, where the guide line is used to link the virtual label with the real object matching the virtual label.
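One way to picture steps S511-S512 (a sketch under our own assumptions; the dataclass, the fixed-offset anchor logic, and the numbers are not from the patent) is to compute, for each label, its on-screen box and a guide line running from the label's edge to the screen point where the matching real object appears at the target viewing angle:

```python
from dataclasses import dataclass

@dataclass
class LabelLayout:
    text: str
    label_xy: tuple    # top-left of the label box on screen (pixels)
    guide_line: tuple  # ((x1, y1), (x2, y2)) from label edge to object

def layout_label(text, object_xy, offset=(40, -60), box_h=24):
    """Place the label at a fixed offset from the object's on-screen
    anchor and draw the guide line between the two, so the label
    visibly points at the real object it annotates."""
    lx, ly = object_xy[0] + offset[0], object_xy[1] + offset[1]
    start = (lx, ly + box_h // 2)  # middle of the label's left edge
    return LabelLayout(text, (lx, ly), (start, object_xy))

print(layout_label("Building A: 32 floors", object_xy=(500, 400)))
```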
FIG. 5C is a schematic diagram of the display result of an augmented reality effect in which a real scene and virtual objects are superimposed according to an embodiment of the present application. As shown in FIG. 5C, in the augmented reality effect, the real scene 10 is superimposed with the virtual object 20; the real scene includes a real object 11, and the virtual object 20 includes a virtual label 21 matching the real object 11 and a guide line 22 corresponding to the virtual label 21, the two endpoints of the guide line 22 pointing to the virtual label 21 and the real object 11, respectively.
In the display method provided by the embodiments of the present application, a virtual label matching a real object can be determined according to the attribute information of the real object, and the correspondence between the virtual label and the real object can be indicated through a guide line. In this way, the attribute information of real objects can be displayed intuitively through virtual labels and guide lines. Moreover, the display effect of the virtual labels and the corresponding guide lines is determined according to the target viewing angle, so the real object corresponding to each virtual label can be indicated more accurately, a better augmented reality effect can be presented, and the users' viewing or interaction experience can be improved.
Based on the foregoing embodiments, an embodiment of the present application provides a display apparatus. The units included in the apparatus, and the parts included in each unit, may be implemented by a processor in a display device; of course, they may also be implemented by specific logic circuits. In implementation, the display device may be any suitable electronic device with information processing capability, which may have a display function (such as a smart display screen, smartphone, tablet computer, notebook computer, or smart TV) or may not have one (such as a server or an embedded computing device); the processor may be a central processing unit (Central Processing Unit, CPU), a microprocessor (Microprocessor Unit, MPU), a digital signal processor (Digital Signal Processor, DSP), a field programmable gate array (Field Programmable Gate Array, FPGA), or the like.
FIG. 6 is a schematic diagram of the composition structure of the display apparatus according to an embodiment of the present application. As shown in FIG. 6, the display apparatus 600 includes: a first determination part 610, a second determination part 620, a third determination part 630, and a display part 640, wherein:
the first determination part 610 is configured to determine a virtual object matching a real object in a real scene;
the second determination part 620 is configured to determine a target viewing angle by recognizing an image including a current viewing user;
the third determination part 630 is configured to determine a display effect of the virtual object according to the target viewing angle;
the display part 640 is configured to display, through a display device and according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed.
In some embodiments, the second determination part is further configured to:
recognize the number of users in the image to determine the image region with the largest number of users; determine, according to the image region, the viewing position of users in front of the display device; and determine the viewing angle of the viewing position as the target viewing angle; or
recognize user identities in the image to determine the identity of the current viewing user; and determine the viewing angle of a user having a target identity in the image as the target viewing angle; or
recognize user gazes in the image to determine the binocular gaze direction of each user in the image; determine the viewing concentration of each user according to the binocular gaze direction of each user; and determine the viewing angle of the user with the highest viewing concentration as the target viewing angle; or
recognize face images in the image to determine a target user matching a target face; and determine the viewing angle of the target user as the target viewing angle.
In some embodiments, the target viewing angle includes viewing angles in respective directions on a preset plane dimension, and the display effect includes a display track of the virtual object. The third determination part includes: a first determination sub-part, configured to determine the display position of the virtual object corresponding to the viewing angle in each of the directions; and a second determination sub-part, configured to determine the display track of the virtual object according to the display positions of the virtual object. The display part is further configured to display, through the display device and according to the display track, the augmented reality effect in which the real scene and the virtual object are superimposed, so that the virtual object moves on the display device along the display track.
In some embodiments, the third determination part further includes: a third determination sub-part, configured to determine, by recognizing the image, the display duration corresponding to the virtual object at each display position. Correspondingly, the third determination sub-part is further configured to determine the display track of the virtual object according to the display positions of the virtual object and the display duration corresponding to each display position.
In some embodiments, the third determination sub-part is further configured to: determine the viewing angle of each user in the image by recognizing the image; for each display position, determine the number of users in the image whose viewing angle is consistent with the viewing angle corresponding to the display position; and determine the display duration corresponding to the display position by querying, according to the number of users, a preset correspondence between the number of users and the display duration.
In some embodiments, the third determination part further includes: an acquisition sub-part, configured to acquire the position of the real object in the real scene; and a fourth determination sub-part, configured to determine the display effect of the virtual object according to the position of the real object and the target viewing angle.
In some embodiments, the acquisition sub-part is further configured to: capture an image including the real object through a camera of the display device; and determine the position of the real object according to the image including the real object.
In some embodiments, the acquisition sub-part is further configured to: emit a first light ray to the real scene; receive a second light ray reflected back from the first light ray by a real object in the real scene; and determine the position of the real object according to the emission parameters of the first light ray and the reflection parameters of the second light ray.
In some embodiments, the display device includes a display screen that is movable on a preset slide rail and provided with a camera; and the display apparatus further includes: an acquisition part, configured to capture, through the camera, the image including the current viewing user when the display screen has moved to a target position.
In some embodiments, the real scene includes at least one real object, and the virtual object includes virtual labels and the guide lines corresponding to the virtual labels. Correspondingly, the first determination part is further configured to: determine attribute information of each real object in the real scene; determine, according to the attribute information of each real object, a virtual label matching the real object; and determine a guide line corresponding to each virtual label.
In some embodiments, the display effect of the virtual object includes the display position of each virtual label and the display position of the guide line corresponding to the virtual label. Correspondingly, the display part is further configured to: for each virtual label, display the virtual label on the display device according to the display position of the virtual label; and display the guide line on the display device according to the display position of the guide line corresponding to the virtual label, where the guide line is used to link the virtual label with the real object matching the virtual label.
The description of the above apparatus embodiments is similar to that of the above method embodiments and has beneficial effects similar to those of the method embodiments. For technical details not disclosed in the apparatus embodiments of the present application, please refer to the description of the method embodiments of the present application.
In the embodiments of the present application and other embodiments, a "part" may be a part of a circuit, a part of a processor, a part of a program or software, and the like; of course, it may also be a unit, and it may be modular or non-modular.
It should be noted that, in the embodiments of the present application, if the above display method is implemented in the form of software function parts and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the related art, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a display device to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a magnetic disk, or an optical disk. In this way, the embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application provides a display device, including a memory and a processor. The memory stores a computer program executable on the processor, and the processor implements the steps of the above method when executing the program.
Correspondingly, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the steps of the above method are implemented.
Correspondingly, an embodiment of the present application provides a computer program including computer-readable code. When the computer-readable code runs in an electronic device (such as a display device), a processor in the electronic device executes steps for implementing the above method.
It should be pointed out here that the descriptions of the above storage medium and device embodiments are similar to that of the above method embodiments and have beneficial effects similar to those of the method embodiments. For technical details not disclosed in the storage medium and device embodiments of the present application, please refer to the description of the method embodiments of the present application.
It should be noted that FIG. 7 is a schematic diagram of a hardware entity of the display device in the embodiments of the present application. As shown in FIG. 7, the hardware entity of the display device 700 includes: a processor 701, a communication interface 702, and a memory 703, wherein:
the processor 701 generally controls the overall operation of the display device 700;
the communication interface 702 enables the display device to communicate with other devices over a network;
the memory 703 is configured to store instructions and applications executable by the processor 701, and may also cache data to be processed or already processed by the processor 701 and the parts of the display device 700 (for example, image data, audio data, voice communication data, and video communication data), and may be implemented by flash memory (FLASH) or random access memory (Random Access Memory, RAM).
It should be understood that references throughout the specification to "one embodiment" or "an embodiment" mean that a particular feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present application. Therefore, occurrences of "in one embodiment" or "in an embodiment" throughout the specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application. The sequence numbers of the above embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
It should be noted that, herein, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus including that element.
In the several embodiments provided in the present application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other division manners in actual implementation, such as: multiple units or components may be combined, may be integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of this embodiment.
In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the above integrated unit may be implemented in the form of hardware or in the form of hardware plus software functional units.
Those of ordinary skill in the art can understand that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium may be a volatile or non-volatile storage medium, including various media that can store program code, such as a removable storage device, a read-only memory (Read Only Memory, ROM), a magnetic disk, or an optical disk.
Alternatively, if the above integrated units of the present application are implemented in the form of software function modules and sold or used as independent products, they may also be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the related art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a display device to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The above is only an implementation of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope disclosed in the present application should be covered by the protection scope of the present application. Therefore, the protection scope of the present application should be subject to the protection scope of the claims.
Industrial Applicability
Embodiments of the present disclosure provide a display method, apparatus, device, storage medium, and computer program, wherein the method includes: determining a virtual object matching a real object in a real scene; determining a target viewing angle by recognizing an image including a current viewing user; determining a display effect of the virtual object according to the target viewing angle; and displaying, through a display device and according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed. According to the embodiments of the present disclosure, when an augmented reality effect is displayed, the user's viewing or intelligent-interaction requirements can be automatically satisfied according to the actual situation of the current viewing user.

Claims (15)

  1. A display method, the method comprising:
    determining a virtual object matching a real object in a real scene;
    determining a target viewing angle by recognizing an image including a current viewing user;
    determining a display effect of the virtual object according to the target viewing angle; and
    displaying, through a display device and according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed.
  2. The method according to claim 1, wherein determining the target viewing angle by recognizing the image including the current viewing user comprises:
    recognizing the number of users in the image to determine the image region with the largest number of users; determining, according to the image region, the viewing position of users in front of the display device; and determining the viewing angle of the viewing position as the target viewing angle; or
    recognizing user identities in the image to determine the identity of the current viewing user; and determining the viewing angle of a user having a target identity in the image as the target viewing angle; or
    recognizing user gazes in the image to determine the binocular gaze direction of each user in the image; determining the viewing concentration of each user according to the binocular gaze direction of each user; and determining the viewing angle of the user with the highest viewing concentration as the target viewing angle; or
    recognizing face images in the image to determine a target user matching a target face; and determining the viewing angle of the target user as the target viewing angle.
  3. The method according to claim 1, wherein the target viewing angle comprises viewing angles in respective directions on a preset plane dimension, and the display effect comprises a display track of the virtual object;
    determining the display effect of the virtual object according to the target viewing angle comprises: determining the display position of the virtual object corresponding to the viewing angle in each of the directions; and determining the display track of the virtual object according to the display positions of the virtual object; and
    displaying, through the display device and according to the display effect, the augmented reality effect in which the real scene and the virtual object are superimposed comprises: displaying, through the display device and according to the display track, the augmented reality effect in which the real scene and the virtual object are superimposed, so that the virtual object moves on the display device along the display track.
  4. The method according to claim 3, wherein determining the display effect of the virtual object according to the target viewing angle further comprises:
    determining, by recognizing the image, the display duration corresponding to the virtual object at each display position; and
    determining the display track of the virtual object according to the display positions of the virtual object comprises:
    determining the display track of the virtual object according to the display positions of the virtual object and the display duration corresponding to each display position.
  5. The method according to claim 4, wherein determining, by recognizing the image, the display duration corresponding to the virtual object at each display position comprises:
    determining the viewing angle of each user in the image by recognizing the image;
    for each display position, determining the number of users in the image whose viewing angle is consistent with the viewing angle corresponding to the display position; and
    determining the display duration corresponding to the display position by querying, according to the number of users, a preset correspondence between the number of users and the display duration.
  6. The method according to any one of claims 1 to 5, wherein determining the display effect of the virtual object according to the target viewing angle comprises:
    acquiring the position of the real object in the real scene; and
    determining the display effect of the virtual object according to the position of the real object and the target viewing angle.
  7. The method according to claim 6, wherein acquiring the position of the real object in the real scene comprises:
    capturing an image including the real object through a camera of the display device; and determining the position of the real object according to the image including the real object;
    or, emitting a first light ray to the real scene; receiving a second light ray reflected back from the first light ray by a real object in the real scene; and determining the position of the real object according to the emission parameters of the first light ray and the reflection parameters of the second light ray.
  8. The method according to any one of claims 1 to 7, wherein the display device comprises a display screen that is movable on a preset slide rail and provided with a camera; and the method further comprises:
    capturing, through the camera, the image including the current viewing user when the display screen has moved to a target position.
  9. The method according to any one of claims 1 to 8, wherein the real scene includes at least one real object, and the virtual object comprises virtual labels and the guide lines corresponding to the virtual labels;
    determining the virtual object matching the real object in the real scene comprises:
    determining attribute information of each real object in the real scene;
    determining, according to the attribute information of each real object, a virtual label matching the real object; and
    determining a guide line corresponding to each virtual label.
  10. The method according to claim 9, wherein the display effect of the virtual object comprises the display position of each virtual label and the display position of the guide line corresponding to the virtual label;
    displaying, through the display device and according to the display effect, the augmented reality effect in which the real scene and the virtual object are superimposed comprises:
    for each virtual label, displaying the virtual label on the display device according to the display position of the virtual label; and
    displaying the guide line on the display device according to the display position of the guide line corresponding to the virtual label, where the guide line is used to link the virtual label with the real object matching the virtual label.
  11. A display apparatus, comprising:
    a first determination part, configured to determine a virtual object matching a real object in a real scene;
    a second determination part, configured to determine a target viewing angle by recognizing an image including a current viewing user;
    a third determination part, configured to determine a display effect of the virtual object according to the target viewing angle; and
    a display part, configured to display, through a display device and according to the display effect, an augmented reality effect in which the real scene and the virtual object are superimposed.
  12. The apparatus according to claim 11, wherein the second determination part is further configured to:
    recognize the number of users in the image to determine the image region with the largest number of users; determine, according to the image region, the viewing position of users in front of the display device; and determine the viewing angle of the viewing position as the target viewing angle; or
    recognize user identities in the image to determine the identity of the current viewing user; and determine the viewing angle of a user having a target identity in the image as the target viewing angle; or
    recognize user gazes in the image to determine the binocular gaze direction of each user in the image; determine the viewing concentration of each user according to the binocular gaze direction of each user; and determine the viewing angle of the user with the highest viewing concentration as the target viewing angle; or
    recognize face images in the image to determine a target user matching a target face; and determine the viewing angle of the target user as the target viewing angle.
  13. A display device, comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 10 when executing the program.
  14. A computer storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 10.
  15. A computer program, comprising computer-readable code, wherein, when the computer-readable code runs in an electronic device, a processor in the electronic device executes steps for implementing the method according to any one of claims 1 to 10.
PCT/CN2021/095861 2020-07-31 2021-05-25 Display method, apparatus, device, storage medium, and computer program WO2022022036A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010763328.4A CN111881861B (zh) 2020-07-31 2020-07-31 Display method, apparatus, device, and storage medium
CN202010763328.4 2020-07-31

Publications (1)

Publication Number Publication Date
WO2022022036A1 true WO2022022036A1 (zh) 2022-02-03

Family

ID=73205335

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/095861 WO2022022036A1 (zh) 2020-07-31 2021-05-25 Display method, apparatus, device, storage medium, and computer program

Country Status (2)

Country Link
CN (1) CN111881861B (zh)
WO (1) WO2022022036A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365317A (zh) * 2020-11-12 2021-02-12 东方明珠新媒体股份有限公司 一种基于场景化虚拟餐桌的下单方法及设备
CN114911382A (zh) * 2022-05-06 2022-08-16 深圳市商汤科技有限公司 一种签名展示方法、装置及其相关设备和存储介质
CN115760269A (zh) * 2022-10-26 2023-03-07 北京城市网邻信息技术有限公司 户型特征生成方法、装置、电子设备及存储介质

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111881861B (zh) * 2020-07-31 2023-07-21 北京市商汤科技开发有限公司 一种展示方法、装置、设备及存储介质
CN114584681A (zh) * 2020-11-30 2022-06-03 北京市商汤科技开发有限公司 目标对象的运动展示方法、装置、电子设备及存储介质
CN112601067B (zh) * 2020-12-11 2023-08-15 京东方科技集团股份有限公司 增强现实显示装置及其显示方法
CN112634773B (zh) * 2020-12-25 2022-11-22 北京市商汤科技开发有限公司 增强现实的呈现方法、装置、展示设备及存储介质
CN112632349B (zh) * 2020-12-31 2023-10-20 北京市商汤科技开发有限公司 展区指示方法、装置、电子设备及存储介质
CN113625872A (zh) * 2021-07-30 2021-11-09 深圳盈天下视觉科技有限公司 一种展示方法、系统、终端及存储介质
CN113794824B (zh) * 2021-09-15 2023-10-20 深圳市智像科技有限公司 室内可视化文档智能交互式采集方法、装置、系统及介质
CN114706511B (zh) * 2021-12-29 2024-07-23 联想(北京)有限公司 交互处理方法、装置及电子设备
CN117170504B (zh) * 2023-11-01 2024-01-19 南京维赛客网络科技有限公司 在虚拟人物交互场景中带人观看的方法、系统及存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080111832A1 (en) * 2006-10-23 2008-05-15 International Business Machines Corporation System and method for generating virtual images according to position of viewers
US20110242134A1 (en) * 2010-03-30 2011-10-06 Sony Computer Entertainment Inc. Method for an augmented reality character to maintain and exhibit awareness of an observer
CN104995665A (zh) * 2012-12-21 2015-10-21 Metaio有限公司 用于在真实环境中表示虚拟信息的方法
CN107111371A (zh) * 2015-09-30 2017-08-29 华为技术有限公司 一种展示全景视觉内容的方法、装置及终端
CN110263657A (zh) * 2019-05-24 2019-09-20 亿信科技发展有限公司 一种人眼追踪方法、装置、系统、设备和存储介质
CN110321005A (zh) * 2019-06-14 2019-10-11 深圳传音控股股份有限公司 一种提高ar设备虚拟物件显示效果的方法、装置、ar设备和存储介质
US20200074743A1 (en) * 2017-11-28 2020-03-05 Tencent Technology (Shenzhen) Company Ltd Method, apparatus, device and storage medium for implementing augmented reality scene
CN111881861A (zh) * 2020-07-31 2020-11-03 北京市商汤科技开发有限公司 一种展示方法、装置、设备及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107074110B (zh) * 2014-10-29 2019-07-05 松下知识产权经营株式会社 显示控制装置以及记录了显示控制程序的记录介质
CN109829977A (zh) * 2018-12-30 2019-05-31 贝壳技术有限公司 在虚拟三维空间中看房的方法、装置、电子设备及介质
CN109978945B (zh) * 2019-02-26 2021-08-31 浙江舜宇光学有限公司 一种增强现实的信息处理方法和装置
CN110716645A (zh) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 一种增强现实数据呈现方法、装置、电子设备及存储介质

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080111832A1 (en) * 2006-10-23 2008-05-15 International Business Machines Corporation System and method for generating virtual images according to position of viewers
US20110242134A1 (en) * 2010-03-30 2011-10-06 Sony Computer Entertainment Inc. Method for an augmented reality character to maintain and exhibit awareness of an observer
CN104995665A (zh) * 2012-12-21 2015-10-21 Metaio有限公司 用于在真实环境中表示虚拟信息的方法
CN107111371A (zh) * 2015-09-30 2017-08-29 华为技术有限公司 一种展示全景视觉内容的方法、装置及终端
US20200074743A1 (en) * 2017-11-28 2020-03-05 Tencent Technology (Shenzhen) Company Ltd Method, apparatus, device and storage medium for implementing augmented reality scene
CN110263657A (zh) * 2019-05-24 2019-09-20 亿信科技发展有限公司 一种人眼追踪方法、装置、系统、设备和存储介质
CN110321005A (zh) * 2019-06-14 2019-10-11 深圳传音控股股份有限公司 一种提高ar设备虚拟物件显示效果的方法、装置、ar设备和存储介质
CN111881861A (zh) * 2020-07-31 2020-11-03 北京市商汤科技开发有限公司 一种展示方法、装置、设备及存储介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365317A (zh) * 2020-11-12 2021-02-12 东方明珠新媒体股份有限公司 一种基于场景化虚拟餐桌的下单方法及设备
CN114911382A (zh) * 2022-05-06 2022-08-16 深圳市商汤科技有限公司 一种签名展示方法、装置及其相关设备和存储介质
CN115760269A (zh) * 2022-10-26 2023-03-07 北京城市网邻信息技术有限公司 户型特征生成方法、装置、电子设备及存储介质
CN115760269B (zh) * 2022-10-26 2024-01-09 北京城市网邻信息技术有限公司 户型特征生成方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN111881861B (zh) 2023-07-21
CN111881861A (zh) 2020-11-03

Similar Documents

Publication Publication Date Title
WO2022022036A1 (zh) 一种展示方法、装置、设备、存储介质及计算机程序
US9836889B2 (en) Executable virtual objects associated with real objects
US10789699B2 (en) Capturing color information from a physical environment
US20200258144A1 (en) Curated environments for augmented reality applications
CN105027033B (zh) 用于选择扩增现实对象的方法、装置和计算机可读媒体
EP2887322B1 (en) Mixed reality holographic object development
CN104471511B (zh) 识别指点手势的装置、用户接口和方法
US9165381B2 (en) Augmented books in a mixed reality environment
US20150379770A1 (en) Digital action in response to object interaction
US20130342568A1 (en) Low light scene augmentation
CN107004279A (zh) 自然用户界面相机校准
US11126845B1 (en) Comparative information visualization in augmented reality
US11232636B2 (en) Methods, devices, and systems for producing augmented reality
CN106575354A (zh) 有形界面对象的虚拟化
TW201104494A (en) Stereoscopic image interactive system
JP6656382B2 (ja) マルチメディア情報を処理する方法及び装置
CN111833458A (zh) 图像显示方法及装置、设备、计算机可读存储介质
CN107209567A (zh) 带有视觉反馈的注视致动的用户界面
CN111918114A (zh) 图像显示方法、装置、显示设备及计算机可读存储介质
CN112684893A (zh) 信息展示方法、装置、电子设备及存储介质
JP2022515608A (ja) 大面積透明タッチインターフェースにおける視差補正のためのシステム及び/又は方法
CN111833455B (zh) 图像处理方法、装置、显示设备及计算机存储介质
US20230334792A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
US20230334791A1 (en) Interactive reality computing experience using multi-layer projections to create an illusion of depth
US20240185546A1 (en) Interactive reality computing experience using multi-layer projections to create an illusion of depth

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21851039

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21851039

Country of ref document: EP

Kind code of ref document: A1