WO2022095468A1 - Display method, apparatus, device, medium and program in an augmented reality scene - Google Patents

Display method, apparatus, device, medium and program in an augmented reality scene

Info

Publication number
WO2022095468A1
WO2022095468A1 · PCT/CN2021/102206 · CN2021102206W
Authority
WO
WIPO (PCT)
Prior art keywords
special effect
display
position data
target entity
entity object
Prior art date
Application number
PCT/CN2021/102206
Other languages
English (en)
French (fr)
Inventor
Liu Xu (刘旭)
Luan Qing (栾青)
Li Bin (李斌)
Original Assignee
Beijing SenseTime Technology Development Co., Ltd. (北京市商汤科技开发有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing SenseTime Technology Development Co., Ltd.
Priority to JP2022530223A (published as JP2023504608A)
Publication of WO2022095468A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality

Definitions

  • the present disclosure relates to the technical field of augmented reality, and in particular, to a display method, apparatus, device, medium and program in an augmented reality scene.
  • Augmented Reality (AR) technology superimposes simulated virtual information (visual information, sound, touch, etc.) onto the real world, so that the real environment and virtual objects are presented in the same screen or space in real time.
  • AR effects superimposed on physical objects can be displayed through AR devices.
  • In some cases, the physical object or the AR device may move. When the position changes, how to continue determining the display position of the AR special effect, so as to display the AR special effect coherently and provide a more realistic display effect, is a problem worth studying.
  • Embodiments of the present disclosure provide a display solution in an augmented reality scene.
  • An embodiment of the present disclosure provides a display method in an augmented reality scene, the method is executed by an electronic device, and the method includes:
  • when it is recognized that the current scene image contains a target entity object, first display position data of an AR special effect matching the target entity object is determined, and based on the first display position data, the AR device is controlled to display the AR special effect;
  • in the process of displaying the AR special effect, in response to the target entity object not being recognized in the current scene image, second display position data of the AR special effect is determined, and based on the second display position data, the AR device is controlled to continue to display the AR special effect according to the displayed progress of the AR special effect.
  • matching special effect data can be triggered to be displayed through the recognition result of the target entity object, so that the display effect can be closely associated with the target entity object, and the special effect data can be displayed in a more targeted manner.
  • different positioning methods are used to control AR devices to display AR special effects based on the recognition results of the target entity object, which can improve the coherence and stability of AR special effects in the display process, making the display of AR special effects more realistic.
  • In some embodiments, when it is recognized that the current scene image contains a target entity object, before the first display position data of the AR special effect matching the target entity object is determined, whether the target entity object is included in the current scene image may be identified in the following manner: extracting feature points from the current scene image to obtain feature information respectively corresponding to multiple feature points contained in the current scene image, where the multiple feature points are located in a target detection area in the current scene image; and determining whether the target entity object is contained in the current scene image based on a comparison between the feature information corresponding to the multiple feature points and pre-stored feature information corresponding to multiple feature points contained in the target entity object. In this way, by extracting and comparing the multiple feature points contained in the target detection area, whether the current scene image contains the target entity object can be determined quickly and accurately.
  • In some embodiments, the determining of the first display position data of the AR special effect matching the target entity object includes: acquiring position information of the target entity object in the current scene image; based on the position information, determining the position data of the target entity object under a pre-established world coordinate system; based on the current scene image, determining the position data of the AR device under the world coordinate system; and determining the first display position data based on the position data of the target entity object under the world coordinate system and the position data of the AR device under the world coordinate system.
  • In this way, the first display position data of the AR special effect relative to the AR device can be determined more accurately in the same world coordinate system, so that the augmented reality scene displayed by the AR device is more realistic.
  • In some embodiments, the determining, based on the position information, of the position data of the target entity object in the pre-established world coordinate system includes: determining the position data of the target entity object in the world coordinate system based on the position information, the conversion relationship between the image coordinate system and the camera coordinate system corresponding to the AR device, and the conversion relationship between the camera coordinate system corresponding to the AR device and the world coordinate system.
  • In some embodiments, the determining of the first display position data based on the position data of the target entity object in the world coordinate system and the position data of the AR device in the world coordinate system includes: determining the position data of the AR special effect in the world coordinate system based on the position data of the target entity object in the world coordinate system; and determining the first display position data based on the position data of the AR special effect in the world coordinate system and the position data of the AR device in the world coordinate system.
  • In this way, the position data of the target entity object and of the AR device in the world coordinate system can be accurately determined based on the current scene image, so that the first display position data of the AR special effect can be obtained accurately and quickly.
  • In some embodiments, the determining of the second display position data of the AR special effect includes: determining the relative position data between the AR device and the target entity object at the time the current scene image is captured, based on the current scene image, the historical scene image, and the relative position data between the AR device and the target entity object under the pre-established world coordinate system at the time the historical scene image was captured; and determining the second display position data of the AR special effect based on the relative position data.
  • the AR special effect includes an AR picture and audio content matched with the AR picture
  • In some embodiments, the controlling of the AR device, based on the second display position data, to continue to display the AR special effect according to the displayed progress of the AR special effect includes: in the case that the target entity object is not recognized in the current scene image and the AR picture has not been completely displayed, controlling the AR device, based on the second display position data, to continue to play the audio content matching the undisplayed part of the AR picture according to the displayed progress of the AR picture.
  • In this way, the audio content matching the AR picture can continue to be played, which increases the display continuity of the AR special effect and enables the AR special effect to be displayed more realistically.
  • In some embodiments, the method further includes: in the case that the target entity object is re-identified in the current scene image, controlling the AR device, based on the re-determined first display position data, to continue to display the AR special effect according to the displayed progress of the AR special effect.
  • In this way, the AR special effect continues to be displayed based on the high-accuracy first display position data, which can improve the coherence and stability of the AR special effect during the display process and make its display more realistic.
  • In some embodiments, before the AR device is controlled to display the AR special effect, the method further includes: acquiring the AR special effect matching the target entity object, where the AR special effect includes special effect data respectively corresponding to multiple virtual objects; the controlling of the AR device to display the AR special effect includes: controlling the AR device to display, in sequence, the special effect data respectively corresponding to the multiple virtual objects according to the display order of the special effect data. In this way, an AR special effect composed of multiple virtual objects can be displayed to the user, making the display of the AR special effect more vivid.
  • the target entity object includes a calendar
  • In some embodiments, the method further includes: acquiring an AR special effect matching the calendar, where the AR special effect contains first special effect data generated based on the cover content of the calendar; the controlling of the AR device to display the AR special effect includes: in the case that the cover content of the calendar is recognized, controlling the AR device, based on the first special effect data, to display the AR special effect matching the cover content of the calendar.
  • an AR special effect introducing the calendar can be displayed on the calendar cover, so as to enrich the display content of the calendar and improve the user's interest in viewing the calendar.
  • the target entity object includes a calendar
  • In some embodiments, the method further includes: acquiring an AR special effect matching the calendar, where the AR special effect contains second special effect data generated based on events marked, for at least one preset date in the calendar, as having occurred in the same period in history; the controlling of the AR device to display the AR special effect includes: in the case that at least one preset date in the calendar is recognized, controlling the AR device, based on the second special effect data, to display the AR special effect matching the at least one preset date in the calendar. In this way, while the calendar is displayed to the user, in the case that a preset date on the calendar is obtained, the AR special effect corresponding to the preset date can also be displayed to the user, so as to enrich the display content of the calendar.
  • Embodiments of the present disclosure provide a display device in an augmented reality scene, including:
  • an acquisition module configured to acquire the current scene image captured by the augmented reality AR device
  • the first control module is configured to, when it is recognized that the current scene image contains a target entity object, determine the first display position data of the AR special effect matching the target entity object, and control the AR device to display the AR special effect based on the first display position data;
  • the second control module is configured to, in the process of displaying the AR special effect, in response to the target entity object not being recognized in the current scene image, determine the second display position data of the AR special effect, and control the AR device, based on the second display position data, to continue to display the AR special effect according to the displayed progress of the AR special effect.
  • the acquiring module is configured to identify whether the target entity object is included in the current scene image in the following manner: extract feature points from the current scene image to obtain the current scene Feature information corresponding to multiple feature points included in the image; the multiple feature points are located in the target detection area in the current scene image; based on the feature information corresponding to the multiple feature points and the pre-stored The feature information corresponding to the multiple feature points included in the target entity object is compared to determine whether the target entity object is included in the current scene image.
  • In some embodiments, the first control module is configured to acquire position information of the target entity object in the current scene image; determine, based on the position information, the position data of the target entity object in a pre-established world coordinate system; determine, based on the current scene image, the position data of the AR device in the world coordinate system; and determine the first display position data based on the position data of the target entity object in the world coordinate system and the position data of the AR device in the world coordinate system.
  • In some embodiments, the first control module is configured to determine the position data of the target entity object in the world coordinate system based on the position information, the conversion relationship between the image coordinate system and the camera coordinate system corresponding to the AR device, and the conversion relationship between the camera coordinate system corresponding to the AR device and the world coordinate system; the first control module is further configured to determine the position data of the AR special effect in the world coordinate system based on the position data of the target entity object in the world coordinate system, and determine the first display position data based on the position data of the AR special effect in the world coordinate system and the position data of the AR device in the world coordinate system.
  • In some embodiments, the second control module is configured to determine the relative position data between the AR device and the target entity object at the time the current scene image is captured, based on the current scene image, the historical scene image, and the relative position data between the AR device and the target entity object under the pre-established world coordinate system at the time the historical scene image was captured; and determine the second display position data of the AR special effect based on the relative position data.
  • the AR special effect includes an AR picture and audio content matching the AR picture
  • In some embodiments, the second control module is configured to, in the case that the target entity object is not recognized in the current scene image and the AR picture has not been completely displayed, control the AR device, based on the second display position data, to continue to play the audio content matching the undisplayed part of the AR picture according to the displayed progress of the AR picture.
  • In some embodiments, the first control module is configured to, in the case that the target entity object is re-identified in the current scene image, control the AR device, based on the determined first display position data, to continue to display the AR special effect according to the displayed progress of the AR special effect.
  • the acquisition module is configured to acquire the AR special effect matching the target entity object;
  • the AR special effect includes special effect data corresponding to multiple virtual objects;
  • the first control module is configured to control the AR device to display the multiple virtual objects in sequence according to the display sequence of the special effect data corresponding to the multiple virtual objects.
  • the target entity object includes a calendar
  • the acquisition module is configured to acquire an AR special effect matching the calendar
  • the AR special effect includes first special effect data generated based on the cover content of the calendar
  • the first control module is configured to, in the case that the cover content of the calendar is recognized, control the AR device, based on the first special effect data, to display the AR special effect matching the cover content of the calendar.
  • the target entity object includes a calendar
  • the acquisition module is configured to acquire an AR special effect matching the calendar
  • the AR special effect includes second special effect data generated based on events marked, for at least one preset date in the calendar, as having occurred in the same period in history
  • the first control module is configured to, in the case that at least one preset date in the calendar is recognized, control the AR device, based on the second special effect data, to display the AR special effect matching the at least one preset date in the calendar.
  • Embodiments of the present disclosure further provide an electronic device, including a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate with each other through the bus, and when the machine-readable instructions are executed by the processor, the display method in the augmented reality scene according to any one of the embodiments is executed.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the display method in an augmented reality scenario described in any one of the embodiments is executed.
  • An embodiment of the present disclosure further provides a computer program, where the computer program includes computer-readable code, and when the computer-readable code is executed in an electronic device, a processor of the electronic device executes the display method in the augmented reality scene described in any one of the foregoing embodiments.
  • The embodiments of the present disclosure provide at least a display method, apparatus, device, medium and program in an augmented reality scene, which can trigger the display of matching special effect data through the recognition result of the target entity object, so that the display effect is closely associated with the target entity object and the special effect data can be displayed in a more targeted manner.
  • different positioning methods are used to control AR devices to display AR special effects based on the recognition results of the target entity object, which can improve the coherence and stability of AR special effects in the display process, making the display of AR special effects more realistic.
  • FIG. 1 shows a schematic flowchart of a display method in an augmented reality scenario provided by an embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of a system architecture to which the display method in an augmented reality scenario according to an embodiment of the present disclosure can be applied;
  • FIG. 3 shows a schematic flowchart of a method for determining whether a target entity object is included in a current scene image provided by an embodiment of the present disclosure
  • FIG. 4 shows a schematic flowchart of a method for determining display position data of AR special effects provided by an embodiment of the present disclosure
  • FIG. 5 shows a schematic flowchart of another method for determining display position data of AR special effects provided by an embodiment of the present disclosure
  • FIG. 6 shows a schematic flowchart of another method for determining display position data of AR special effects provided by an embodiment of the present disclosure
  • FIG. 7 shows a schematic diagram of a display screen of an AR special effect provided by an embodiment of the present disclosure
  • FIG. 8 shows a schematic structural diagram of a display apparatus 800 in an augmented reality scene provided by an embodiment of the present disclosure
  • FIG. 9 shows a schematic structural diagram of an electronic device 900 provided by an embodiment of the present disclosure.
  • "Multiple" or "a plurality of" in the embodiments of the present disclosure may refer to at least two.
  • AR special effects can be superimposed on physical objects, and the physical objects can be vividly introduced to users through AR special effects.
  • In some cases, the physical object or the AR device may move. When the position of the physical object changes, how to continue determining the display position of the AR special effect, so as to display the AR special effect coherently and provide a more realistic display effect, is a problem worth studying.
  • The computer device includes, for example, a terminal device, a server, or other processing device. The terminal device may be an AR device with an AR function, such as AR glasses, a tablet computer, a smart phone, or a smart wearable device, that is, any device with a display function and data processing capability, which is not limited in the embodiments of the present disclosure.
  • The display method in the augmented reality scene may be implemented by the processor calling computer-readable instructions stored in the memory.
  • The target entity object may be an object with a specific shape, such as a book, a calligraphy or painting work, a building, or another entity object.
  • the target entity object can be an entity object in the application scenario, and the entity object can be introduced through AR special effects to increase the user's understanding of the entity object.
  • the AR special effect matched with the target entity object includes at least one of an AR special effect having a preset relative positional relationship with the target entity object and an AR special effect having an associated relationship with the content of the target entity object.
  • the AR special effect having a preset relative positional relationship with the target entity object may refer to that the AR special effect and the target entity object have a preset relative positional relationship in the same coordinate system.
  • the AR special effect may include a three-dimensional AR picture and audio content, wherein the three-dimensional AR picture may be generated in a pre-built three-dimensional scene model.
  • The shape, size, position, attitude and other data of the three-dimensional AR picture can be set in advance, as well as the positional relationship and relative attitude relationship between the AR picture and the target entity object in the three-dimensional scene corresponding to the three-dimensional scene model.
  • The position data of the AR picture in the world coordinate system can be determined based on the position data of the target entity object in the world coordinate system.
  • The pose data of the AR picture under the world coordinate system can also be determined based on the pose data of the target entity object under the world coordinate system.
  • Alternatively, the pose data of the AR picture in the world coordinate system (for example, the angles between a specified direction of the AR picture and the X-axis, Y-axis and Z-axis of the world coordinate system) can be set directly; this process does not require building a three-dimensional scene model, and is more convenient and quicker during display.
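To illustrate the directly-set pose described above, here is a minimal Python sketch (the helper name and the offset-based placement are illustrative assumptions, not part of the disclosure) that places an AR picture at a preset offset from the target entity object and computes the angles between the picture's specified direction and the world X, Y and Z axes:

```python
import numpy as np

def place_ar_picture(obj_pos_world, offset, direction):
    """Place an AR picture relative to a target entity object.

    obj_pos_world: (3,) position of the entity object in the world frame.
    offset:        (3,) preset offset of the AR picture from the object.
    direction:     (3,) specified direction of the AR picture.

    Returns the picture's world position and the angles (radians) between
    the specified direction and the world X, Y and Z axes.
    """
    pos = np.asarray(obj_pos_world, float) + np.asarray(offset, float)
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    axes = np.eye(3)  # world X, Y, Z unit vectors
    angles = np.arccos(np.clip(axes @ d, -1.0, 1.0))
    return pos, angles
```

For example, with the object at the world origin, an offset of 0.1 along Z, and a specified direction along Z, the angles come out as (90°, 90°, 0°), i.e. the picture faces along the world Z axis.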
  • The first display pose data of the AR special effect can also be determined based on the position information of the target entity object in the current scene image, and image recognition technology can determine the position information of the target entity object in the current scene image more accurately. Therefore, based on the position information of the target entity object, the first display position data and the first display attitude data of the AR special effect can be obtained relatively accurately, thereby providing support for the accurate display of the AR special effect.
  • The first display attitude data of the AR special effect can also be determined.
  • The first display position data and the first display attitude data of the AR special effect matching the target entity object are determined by the first positioning method, and the AR device is controlled to display the AR special effect based on the first display position data and the first display attitude data.
  • The first positioning method is used to determine the first display position data, that is, the first display position data of the AR special effect determined based on the position information of the target entity object in the current scene image.
  • The relative position data between the AR device and the target entity object at the time each scene image is captured can be determined at the same time and saved. In this way, when the target entity object is not recognized in the current scene image, the saved relative position data between the AR device and the target entity object can be combined with simultaneous localization and mapping (SLAM) technology to determine the relative position data between the AR device and the target entity object at the time the current scene image is captured.
  • the second display position data of the AR special effect may be determined based on the relative position data and the relative positional relationship between the AR special effect and the target entity object, and the process will be described in detail later.
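A minimal sketch of this composition, assuming the SLAM-tracked pose of the target relative to the device is available as a rotation matrix and a translation vector (the function name and frame conventions are hypothetical, not taken from the disclosure):

```python
import numpy as np

def second_display_position(t_target_in_device, R_target_in_device,
                            effect_offset_in_target):
    """Second display position of the AR special effect in the device frame.

    Composes the tracked pose of the target relative to the device
    (rotation R, translation t) with the preset offset of the effect
    expressed in the target's own frame.
    """
    return (np.asarray(t_target_in_device, float)
            + np.asarray(R_target_in_device, float)
              @ np.asarray(effect_offset_in_target, float))
```

With an identity rotation the effect position is simply the target position plus the offset; a rotated target rotates the offset with it, which is what keeps the effect attached to the object as the device moves.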
  • the target entity object is a calendar
  • For example, the AR special effect includes a dynamically displayed AR picture with a total duration of 30 s. If, at the 10th second of the AR picture display, the target entity object cannot be recognized in the current scene image captured by the AR device, then according to the second display position data determined by the second positioning method, or based on the second display position data and the second display attitude data, the AR device can be controlled to continue the display according to the progress of the AR picture (that is, continue displaying from the 10th second).
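The resume-from-progress behaviour in this example can be sketched as a small progress tracker (an illustrative helper, not the disclosed implementation): the displayed progress keeps advancing regardless of which positioning method supplies the display position, so display continues from the 10th second rather than restarting.

```python
class EffectPlayback:
    """Tracks the displayed progress of an AR picture so that display
    can resume from the same point when the positioning method switches."""

    def __init__(self, total_s):
        self.total_s = total_s
        self.progress_s = 0.0

    def advance(self, dt_s):
        # Progress advances monotonically, capped at the total duration.
        self.progress_s = min(self.total_s, self.progress_s + dt_s)

    def resume_point(self):
        # The point from which display continues after a switch.
        return self.progress_s

    def finished(self):
        return self.progress_s >= self.total_s
```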
  • The second display position data may indicate that the AR device has completely left the display position range of the AR picture, for example, the relative distance between the AR device and the calendar is greater than or equal to a preset threshold, or the second display attitude data indicates that the shooting angle of the AR device is completely away from the calendar.
  • If, during the continued display, the relative distance between the AR device and the calendar determined based on the second display position data is less than the preset threshold, and the shooting angle of the AR device can still capture part of the calendar, the user can view part of the displayed AR picture through the AR device, for example, see the AR picture matching the visible part of the calendar.
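A rough sketch of this visibility check (the distance threshold and field-of-view test are illustrative assumptions; the disclosure does not specify the exact criterion): the calendar is considered at least partially capturable when it is closer than the preset threshold and lies within the camera's viewing cone.

```python
import numpy as np

def can_show_partially(device_pos, calendar_pos, view_dir,
                       dist_threshold, half_fov_rad):
    """Whether the AR device can still capture (part of) the calendar:
    the relative distance must be below the preset threshold, and the
    direction to the calendar must fall within the half field of view."""
    rel = np.asarray(calendar_pos, float) - np.asarray(device_pos, float)
    dist = np.linalg.norm(rel)
    if dist >= dist_threshold:
        return False  # device has completely left the display range
    v = np.asarray(view_dir, float)
    cos_angle = rel @ v / (dist * np.linalg.norm(v))
    return np.arccos(np.clip(cos_angle, -1.0, 1.0)) <= half_fov_rad
```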
  • In the case that the scene image contains the target entity object, the first display position data of the AR special effect matching the target entity object is determined, and the AR device is controlled to display the AR special effect based on the first display position data; in the process of displaying the AR special effect, when the target entity object is not recognized in the current scene image, the second display position data of the AR special effect is determined, and based on the second display position data, the AR device is controlled to continue to display the AR special effect according to the displayed progress of the AR special effect.
  • the control terminal 203 uploads the display position information and the AR special effect to the network 202 , and sends the information to the current scene image acquisition terminal 201 through the network 202 .
  • the current scene image acquisition terminal 201 may be a vision processing device with a video capture module, or a host with a camera.
  • the display method in the augmented reality scene according to the embodiment of the present disclosure may be executed by the current scene image acquisition terminal 201 , and the above-mentioned system architecture may not include the network 202 and the control terminal 203 .
  • In some embodiments, when it is recognized that the current scene image contains the target entity object, before the first display position data of the AR special effect matching the target entity object is determined, whether the current scene image includes the target entity object can be identified in the following manner, as shown in FIG. 3, including the following S201 to S202:
  • S201 Extract feature points on the current scene image to obtain feature information respectively corresponding to multiple feature points included in the current scene image; the multiple feature points are located in a target detection area in the current scene image.
  • An image detection algorithm may be used to locate the target detection area containing the entity object in the current scene image.
  • Feature point extraction is then performed in the target detection area.
  • For example, the feature points located on the outline of the entity object, the feature points located in the identification pattern area, and the feature points located in the text area of the target detection area can be extracted.
  • The feature points can also be uniformly extracted from the location area corresponding to the target entity object in the current scene image. For example, when the target entity object is a calendar, feature points can be uniformly extracted from the rectangular area corresponding to the calendar cover in the current scene image.
  • the feature information included in the feature point extracted here may include information that can represent the feature of the feature point, such as texture feature value, RGB feature value, gray value, etc. corresponding to the feature point.
  • the target entity object may be photographed in advance in the same manner to obtain and save feature information corresponding to multiple feature points included in the target entity object.
  • When comparing, the first feature vector corresponding to the target detection area in the current scene image may first be determined based on the feature information respectively corresponding to the multiple feature points extracted from the current scene image, and the second feature vector corresponding to the target entity object may be determined based on the feature information corresponding to the multiple feature points included in the target entity object.
  • The similarity between the target detection area and the target entity object can then be determined from the first feature vector and the second feature vector, for example, through the cosine similarity formula.
  • In the case that the similarity between the first feature vector and the second feature vector is greater than or equal to a preset similarity threshold, it is determined that the current scene image contains the target entity object. Conversely, in the case that the similarity is less than the preset similarity threshold, it is determined that the current scene image does not contain the target entity object.
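The comparison step can be sketched as follows, aggregating the per-feature-point information into one vector per side and applying the cosine similarity formula (the aggregation by concatenation and the threshold value are illustrative assumptions, not the disclosed implementation):

```python
import numpy as np

def contains_target(region_features, target_features, sim_threshold=0.9):
    """Decide whether the target entity object appears in the detection
    area by comparing aggregated feature vectors with cosine similarity.

    region_features: per-feature-point info extracted from the detection area.
    target_features: pre-stored per-feature-point info of the target object.
    """
    v1 = np.concatenate([np.ravel(f) for f in region_features]).astype(float)
    v2 = np.concatenate([np.ravel(f) for f in target_features]).astype(float)
    # Cosine similarity between the first and second feature vectors.
    sim = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return sim >= sim_threshold
```

Identical feature sets give a similarity of 1 and pass the threshold; orthogonal feature vectors give a similarity of 0 and fail it.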
  • In some embodiments, determining the first display position data of the AR special effect matching the target entity object may include the following S301 to S303:
  • S301 Obtain position information of a target entity object in a current scene image.
  • an image coordinate system can be established based on the current scene image, and the image coordinate values of multiple feature points included in the target entity object in the image coordinate system can be obtained to obtain position information of the target entity object in the current scene image.
  • The position data of the target entity object in the world coordinate system may be determined based on the position information, the transformation relationship between the image coordinate system and the camera coordinate system corresponding to the AR device, and the transformation relationship between that camera coordinate system and the world coordinate system.
  • The camera coordinate system corresponding to the AR device may be a three-dimensional rectangular coordinate system that takes the focus center of the image acquisition component included in the AR device as the origin and the optical axis as the Z axis. After the AR device captures the current scene image, the position data of the target entity object in the camera coordinate system can be determined based on the transformation relationship between the image coordinate system and the camera coordinate system.
  • For example, the pre-established world coordinate system can take the center point of the target entity object (for example, a calendar) as the origin, the long side passing through the center of the calendar as the X axis, the short side passing through the center of the calendar as the Y axis, and the line passing through the center of the calendar and perpendicular to the cover of the calendar as the Z axis.
  • the conversion between the camera coordinate system and the world coordinate system is a rigid body conversion, that is, a conversion method in which the camera coordinate system can be rotated and translated to coincide with the world coordinate system.
  • the conversion relationship between the camera coordinate system and the world coordinate system can be determined by the position coordinates of multiple position points in the target entity object under the world coordinate system and the corresponding position coordinates under the camera coordinate system.
  • the position data of the target entity object in the world coordinate system can be determined based on the conversion relationship between the camera coordinate system corresponding to the AR device and the world coordinate system.
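The chain of coordinate transformations described above (image coordinates, then camera coordinates, then world coordinates) can be sketched as follows. This is a simplified pinhole-camera illustration: the intrinsics fx, fy, cx, cy, the per-pixel depth, and the rigid-body parameters R and t are assumed to already be known, whereas the text obtains them from the device and from matched position points.

```python
import numpy as np

def image_to_camera(u, v, depth, fx, fy, cx, cy):
    # Back-project a pixel (u, v) with known depth into the camera
    # coordinate system (origin at the focus center, Z along the optical axis).
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def camera_to_world(p_cam, R, t):
    # Rigid-body conversion: rotate and translate a camera-frame point so
    # that it is expressed in the pre-established world coordinate system.
    return R @ p_cam + t
```

The rigid-body step matches the description above: the camera coordinate system can be rotated (R) and translated (t) to coincide with the world coordinate system.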
  • The position data of the AR device in the world coordinate system may be determined using the current scene image captured by the AR device, for example by selecting feature points in the current scene image; the position data of the AR device in the world coordinate system when the current scene image is captured can then be determined from the position coordinates of the selected feature points of the target entity object in the world coordinate system and their position coordinates in the camera coordinate system corresponding to the AR device.
  • Considering that the AR special effect and the target entity object have a preset positional relationship in the same coordinate system, the first display position data of the AR special effect relative to the AR device can be determined here based on the position data of the target entity object and of the AR device in the same world coordinate system.
  • After the first display position data changes, for example after the position data of the target entity object in the world coordinate system changes, the AR device can display the AR special effect changing along with the position data of the target entity object; or, after the position data of the AR device in the world coordinate system changes, the AR device can display the AR special effect changing along with the position data of the AR device. After the relative position data changes because the target entity object and the AR device move simultaneously, the first display position data of the AR special effect also changes, which changes the display of the AR special effect. For example, after the orientation of the AR device moves from the left side of the target entity object to its right side, the display of the AR special effect that the user sees through the AR device undergoes a corresponding transformation.
  • the process of determining the first display gesture data is similar to the process of determining the first display position data.
  • In this way, the first display position data of the AR special effect relative to the AR device in the same world coordinate system can be determined more accurately, which makes the augmented reality scene displayed through the AR device more realistic.
  • The position data of the AR special effect in the world coordinate system may include the position and attitude of the AR picture in the world coordinate system.
  • the position of the AR image in the world coordinate system may be represented by the coordinate value of the center point of the AR image in the world coordinate system.
  • The attitude of the AR picture in the world coordinate system can be represented by the included angles between a specified direction of the AR picture and the coordinate axes of the world coordinate system; similarly, the attitude of the image acquisition component in the world coordinate system can be represented by the included angles between the orientation direction of the camera in the image acquisition component and the coordinate axes of the world coordinate system.
  • The first display position data may be determined by the position of the AR special effect in the world coordinate system and the position of the AR device in the world coordinate system.
  • the first displayed pose data can be determined by the pose of the AR special effect in the world coordinate system and the pose of the AR device in the world coordinate system.
  • In this way, the position data of the target entity object and the AR device in the world coordinate system can be accurately determined based on the current scene image, so as to obtain the first display position data of the AR special effect accurately and quickly.
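Combining the two world-frame positions into display data, as described above, might look like the following sketch. The function name and the use of a device rotation matrix `R_device_world` are illustrative assumptions, not the disclosed algorithm.

```python
import numpy as np

def first_display_position(p_effect_world, p_device_world, R_device_world):
    # Express the AR effect's world-frame position relative to the AR
    # device: subtract the device's world position, then rotate the
    # difference into the device's frame for rendering.
    return R_device_world.T @ (np.asarray(p_effect_world) -
                               np.asarray(p_device_world))
```

With an identity device orientation the result is simply the world-frame offset from the device to the effect.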
  • Determining the second display position data of the AR special effect, as shown in FIG. 6, may include the following S401 to S402:
  • Starting from the AR device shooting the first frame of the scene image containing the target entity object, the position data of the AR device in the world coordinate system when the first frame of the scene image is captured can be determined based on the world coordinate system established with the center point of the target entity object as the origin, and on the position coordinates of the feature points selected in the first frame of the scene image in both the world coordinate system and the camera coordinate system corresponding to the AR device.
  • When the AR device captures the second frame of the scene image, the target feature points contained in the first frame can be found in the second frame, and the position offset of the AR device when shooting the second frame relative to when shooting the first frame can be determined based on the position data of these target feature points in the camera coordinate system in the two frames. Then, based on this position offset and the relative position data between the AR device and the target entity object in the pre-established world coordinate system when the first frame was captured, the relative position data between the AR device and the target entity object in the pre-established world coordinate system when the second frame is captured is determined. By analogy, the position offset of the AR device when shooting the current scene image relative to when shooting the second frame can be determined, so that, combined with the relative position data of the AR device and the target entity object in the pre-established world coordinate system when the second frame was captured, the relative position data of the AR device and the target entity object in the pre-established world coordinate system when the current scene image is captured is determined.
  • The determination process of the relative pose data between the AR device and the target entity object when shooting the current scene image is similar to the determination process of the above-mentioned relative position data.
  • Considering that the AR special effect and the target entity object have a preset positional relationship in the same coordinate system, the second display position data of the AR special effect relative to the AR device can also be determined here based on the relative position data between the AR device and the target entity object when the current scene image is captured.
  • In this way, by using the current scene image, the historical scene image, and the relative position data of the AR device and the target entity object in the world coordinate system when the historical scene image was shot, the relative position data between the AR device and the target entity object when shooting the current scene image can be determined more accurately, so that the second display position data of the AR special effect can be determined based on this accurate relative position data for display.
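The frame-to-frame propagation described above can be sketched as follows. This is a deliberately simplified, translation-only illustration in which each per-frame offset of the AR device is assumed to have already been estimated from matched target feature points; orientation handling is omitted.

```python
import numpy as np

def propagate_relative_position(initial_relative, per_frame_offsets):
    # Starting from the device/object relative position measured when the
    # first frame was captured, accumulate the device's estimated motion
    # between consecutive frames to obtain the relative position for the
    # current frame, even when the target object is no longer recognized.
    relative = np.asarray(initial_relative, dtype=float)
    for offset in per_frame_offsets:
        relative = relative + np.asarray(offset, dtype=float)
    return relative
```

Each loop iteration corresponds to one "second frame relative to first frame" step in the text, chained up to the current scene image.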
  • the AR special effect may further include an AR picture and audio content matching the AR picture.
  • Controlling the AR device to continue displaying the AR special effect according to the displayed progress of the AR special effect may include: in the case that the target entity object is not recognized in the current scene image and the AR picture has not finished displaying, controlling the AR device based on the second display position data to continue displaying, according to the displayed progress of the AR picture, the audio content matching the part of the AR picture that has not been displayed.
  • After the AR device leaves the display position range of the AR picture, if the display progress of the AR picture has not yet ended, the AR picture continues to progress; the user cannot view the AR picture through the AR device and can only hear the audio content matching it, which still brings the user the experience of the AR special effect.
  • After the AR device returns to the display position range of the AR picture, if the AR picture has not finished displaying, the undisplayed AR picture and the matching audio content can continue to be displayed to the user, bringing the user a consistent AR experience.
  • In this way, the audio content matching the AR picture can continue to be played, which increases the display continuity of the AR special effect and makes its display more realistic.
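The behaviour described above, keeping the effect's progress and audio running while the AR picture is out of view, can be sketched as a small piece of per-frame control logic. The function and flag names here are illustrative assumptions, not part of the disclosure.

```python
def continue_effect(picture_in_view: bool, effect_finished: bool) -> dict:
    # Per-frame decision: the effect's progress and its matching audio
    # continue as long as the effect has not finished; the AR picture is
    # rendered only while its display position is inside the device's view.
    return {
        "advance_progress": not effect_finished,
        "render_picture": picture_in_view and not effect_finished,
        "play_audio": not effect_finished,
    }
```

When the device returns to the picture's display position range, `render_picture` becomes true again and the picture resumes from the already-displayed progress.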
  • After the second display position data of the AR special effect is determined and, based on the second display position data, the AR device is controlled to continue displaying the AR special effect according to its displayed progress, the method provided by the embodiment of the present disclosure further includes: in response to the target entity object being re-recognized in the current scene image, controlling the AR device to continue displaying the AR special effect according to the displayed progress of the AR special effect.
  • During the display of the AR special effect, image recognition can continue to be performed on the current scene image captured by the AR device to identify whether the current scene image contains the target entity object.
  • In the case that the target entity object is re-recognized, the first display position data determined by the first positioning method is again used to control the AR device to continue displaying the AR special effect according to its displayed progress.
  • In this way, the AR special effect continues to be displayed based on the more accurate first display position data, which can improve the consistency and stability of the AR special effect during display and makes its presentation more realistic.
  • Before controlling the AR device to display the AR special effect, the following process may be performed:
  • the AR special effect contains special effect data corresponding to multiple virtual objects.
  • the AR special effect matching the target entity object can be acquired.
  • the special effect data corresponding to each virtual object in the AR special effect may include data such as appearance, color, and corresponding audio content of the virtual object when displayed through the AR device.
  • the AR device is controlled to display the special effect data corresponding to the multiple virtual objects in sequence.
  • The display sequence of the virtual objects when displayed in the AR device can be preset, or the display sequence of the special effect data corresponding to the multiple virtual objects can be determined based on the attribute information of each virtual object and a preset display order for different attributes.
  • the attribute information may include static objects, dynamic objects, and dynamic characters.
  • AR special effects composed of multiple virtual objects can be displayed to the user, so that the display of AR special effects is more vivid.
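Ordering the virtual objects by a preset attribute order, as described above, can be sketched as follows. The attribute labels mirror the example in the text (static objects, dynamic objects, dynamic characters); the data shapes are assumptions for illustration.

```python
def order_virtual_objects(objects, attribute_order):
    # objects: list of (name, attribute) pairs; attribute_order: the
    # preset display order for attributes. Objects are shown in the order
    # of their attribute's rank, preserving the original order within a
    # rank (sorted() is stable).
    rank = {attr: i for i, attr in enumerate(attribute_order)}
    return sorted(objects, key=lambda obj: rank[obj[1]])
```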
  • the target entity object includes a calendar
  • the display method proposed by the embodiments of the present disclosure further includes:
  • the AR special effect includes the first special effect data generated based on the cover content of the calendar.
  • the first special effect data here may be first special effect data respectively corresponding to a plurality of virtual objects corresponding to the cover content of the calendar.
  • the AR special effects include virtual objects such as a cartoon dragon, a cartoon squirrel, a virtual calendar cover wrapping the calendar cover, virtual text, auspicious clouds, etc., and may also include the display sequence of each virtual object when displayed through AR.
  • the AR device is controlled to display AR special effects matching the cover content of the calendar based on the first special effect data.
  • For example, the display of the virtual calendar cover wrapping the cover of the calendar can be triggered first, and the title of the calendar is then displayed on the virtual calendar cover in a dynamic form; auspicious clouds can then appear, after which the cartoon dragon and the cartoon squirrel are displayed on the virtual calendar cover and introduce the calendar in detail in the form of a dialogue; after the introduction, the display of the AR special effect ends.
  • As shown in FIG. 7, an AR picture is displayed during the process of controlling the AR device to display the AR special effect matching the cover content of the calendar; users can learn about the calendar by watching the AR picture and listening to the corresponding audio content.
  • AR special effects introducing the calendar can be displayed on the calendar cover, so as to enrich the display content of the calendar and improve the user's interest in viewing the calendar.
  • the target entity object includes a calendar
  • the display method provided by the embodiments of the present disclosure further includes:
  • The AR special effect matching the calendar is obtained; the AR special effect includes second special effect data generated based on marked events that occurred, in the same historical period, on at least one preset date in the calendar.
  • the calendar includes some preset dates where certain events occurred in the same period of history, for example, January 1st is New Year's Day, and AR special effects can be generated based on events that occurred on New Year's Day in history.
  • the second special effect data may include virtual text, audio content, and virtual images generated based on events that occurred in the same period in history, and may also include the display sequence of each second special effect data.
  • the AR device is controlled to display an AR effect matching the at least one preset date in the calendar.
  • the AR device may be controlled to display and introduce events that occurred during the same historical period on the preset date according to the second special effect data.
  • AR special effects corresponding to the preset date can also be displayed to the user to enrich the display content of the calendar.
  • It can be understood that the writing order of the steps above does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • Based on the same concept, the embodiment of the present disclosure also provides a display apparatus in an augmented reality scene corresponding to the display method in an augmented reality scene; since the principle by which the apparatus solves the problem is similar to that of the method, the implementation of the apparatus can refer to the implementation of the method.
  • the display device includes:
  • An acquisition module 801 configured to acquire a current scene image captured by an augmented reality AR device
  • The first control module 802 is configured to, when it is recognized that the current scene image contains the target entity object, determine the first display position data of the AR special effect matching the target entity object, and control the AR device to display the AR special effect based on the first display position data;
  • the second control module 803 is configured to, in the process of displaying the AR special effect, in response to the target entity object not being recognized in the current scene image, determine the second display position data of the AR special effect, and control the AR effect based on the second display position data The device continues to display AR effects according to the progress of AR effects.
  • the acquiring module 801 is configured to identify whether the current scene image contains a target entity object in the following manner: extract feature points from the current scene image, and obtain a plurality of feature points contained in the current scene image corresponding to feature information; multiple feature points are located in the target detection area in the current scene image; based on the feature information corresponding to the multiple feature points and the feature information corresponding to the multiple feature points contained in the pre-stored target entity object are compared. , to determine whether the current scene image contains the target entity object.
  • the first control module 802 is configured to obtain the position information of the target entity object in the current scene image; based on the position information, determine the position data of the target entity object in the pre-established world coordinate system; and determine the position data of the AR device in the world coordinate system based on the current scene image; determine the first display position data based on the position data of the target entity object in the world coordinate system and the position data of the AR device in the world coordinate system.
  • the first control module 802 is configured to be based on the position information, the transformation relationship between the image coordinate system and the camera coordinate system corresponding to the AR device, and the camera coordinate system and the world coordinate system corresponding to the AR device The conversion relationship between them determines the position data of the target entity object in the world coordinate system; the first control module 802 is configured to determine the position of the AR special effect in the world coordinate system based on the position data of the target entity object in the world coordinate system. Data; the first display position data is determined based on the position data of the AR special effect in the world coordinate system and the position data of the AR device in the world coordinate system.
  • the second control module 803 is configured to be based on the current scene image, the historical scene image, and the relative position of the AR device and the target entity object in the pre-established world coordinate system when shooting the historical scene image data, to determine the relative position data between the AR device and the target entity object when capturing the current scene image; based on the relative position data, determine the second display position data of the AR special effect.
  • In a possible implementation, the AR special effect includes an AR picture and audio content matching the AR picture, and the second control module 803 is configured to, in the case that the target entity object is not recognized in the current scene image and the AR picture has not finished displaying, control the AR device based on the second display position data to continue displaying, according to the displayed progress of the AR picture, the audio content matching the part of the AR picture that has not been displayed.
  • the first control module 802 is configured to control the AR device based on the first display position data determined by the first positioning method when the target entity object is re-identified in the current scene image. Continue to display AR effects according to the displayed progress of AR effects.
  • the obtaining module 801 is configured to obtain AR special effects matching the target entity object; the AR special effects include multiple virtual objects corresponding special effect data; the first control module 802 is configured to control the AR device to sequentially display the special effect data corresponding to the plurality of virtual objects according to the display sequence of the special effect data corresponding to the plurality of virtual objects.
  • In a possible implementation, the target entity object includes a calendar; the obtaining module 801 is configured to obtain the AR special effect matching the calendar, where the AR special effect includes first special effect data generated based on the cover content of the calendar; and the first control module 802 is configured to, in the case that the cover content of the calendar is recognized, control the AR device to display the AR special effect matching the cover content of the calendar based on the first special effect data.
  • In a possible implementation, the target entity object includes a calendar; the obtaining module 801 is configured to obtain the AR special effect matching the calendar, where the AR special effect includes second special effect data generated based on marked events, in the same historical period, of at least one preset date in the calendar; and the first control module 802 is configured to, in the case that at least one preset date in the calendar is recognized, control the AR device, based on the second special effect data, to display the AR special effect matching the at least one preset date in the calendar.
  • an embodiment of the present disclosure further provides an electronic device 900 .
  • the schematic structural diagram of the electronic device 900 provided by the embodiment of the present disclosure includes:
  • The processor 91 and the memory 92 communicate through the bus 93; when the machine-readable instructions are executed by the processor 91, the display method in the augmented reality scene in any of the foregoing embodiments is executed.
  • The memory 92 is configured to store execution instructions and includes the internal memory 921 and the external memory 922; the internal memory 921 is used to temporarily store operation data in the processor 91 and data exchanged with the external memory 922, such as a hard disk, and the processor 91 exchanges data with the external memory 922 through the internal memory 921.
  • the processor 91 and the memory 92 communicate through the bus 93, so that the processor 91 executes the following instructions:
  • Embodiments of the present disclosure further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the display method in the augmented reality scenario described in the foregoing method embodiments is executed .
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • Embodiments of the present disclosure further provide a computer program, where the computer program includes computer-readable codes, and when the computer-readable codes run in an electronic device, the processor of the electronic device executes the augmented reality scene described in any of the foregoing embodiments display method below.
  • Embodiments of the present disclosure further provide a computer program product, where the computer program product carries program code, and the instructions included in the program code can be configured to execute the display method in the augmented reality scenario described in the above method embodiments.
  • the above-mentioned computer program product can be realized by means of hardware, software or a combination thereof.
  • the computer program product may be embodied as a computer storage medium, and in other embodiments, the computer program product may be embodied as a software product, such as a software development kit (Software Development Kit, SDK) and the like.
  • the apparatus involved in the embodiments of the present disclosure may be at least one of a system, a method, and a computer program product.
  • the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present disclosure.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • Examples (a non-exhaustive list) of computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or flash memory, static random-access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital video discs (DVDs), memory sticks, floppy disks, mechanical encoding devices such as punch cards or raised structures in grooves on which instructions are stored, and any suitable combination of the above.
  • Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber optic cables), or electrical signals transmitted through electrical wires.
  • The computer-readable program instructions described herein may be downloaded to various computing/processing devices from a computer-readable storage medium, or to an external computer or external storage device over a network such as at least one of the Internet, a local area network, a wide area network, and a wireless network.
  • the network may include at least one of copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers, and edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • The computer program instructions for carrying out the operations of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server implement.
  • the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or Wide Area Network (WAN), or may be connected to an external computer (eg, using Internet service provider to connect via the Internet).
  • Electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can execute computer-readable program instructions to implement various aspects of the present disclosure.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-executable non-volatile computer-readable storage medium.
  • The computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • The aforementioned storage medium includes a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disc, and other media that can store program code.
  • Embodiments of the present disclosure provide a display method, apparatus, device, medium, and program in an augmented reality scene, wherein the method is executed by an electronic device, and the display method includes: acquiring a current scene image captured by an augmented reality AR device; In the case where it is recognized that the current scene image contains a target entity object, first display position data of an AR special effect matching the target entity object is determined, and based on the first display position data, the AR device is controlled Displaying the AR special effect; in the process of displaying the AR special effect, in response to the target entity object not being recognized in the current scene image, determine the second display position data of the AR special effect, and based on the The second display position data controls the AR device to continue to display the AR special effect according to the displayed progress of the AR special effect.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

本公开实施例提供了一种增强现实场景下的展示方法、装置、设备、介质及程序,其中,所述方法由电子设备执行,该展示方法包括:获取增强现实AR设备拍摄的当前场景图像;在识别到所述当前场景图像中包含目标实体对象的情况下,确定与所述目标实体对象匹配的AR特效的第一展示位置数据,并基于所述第一展示位置数据,控制所述AR设备展示所述AR特效;在展示所述AR特效的过程中,响应于在所述当前场景图像中未识别到所述目标实体对象,确定所述AR特效的第二展示位置数据,并基于所述第二展示位置数据,控制所述AR设备按照所述AR特效的已展示进度继续展示所述AR特效。

Description

增强现实场景下的展示方法、装置、设备、介质及程序
相关申请的交叉引用
本专利申请要求2020年11月06日提交的中国专利申请号为202011232913.8、申请人为北京市商汤科技开发有限公司,申请名称为“增强现实场景下的展示方法、装置、电子设备及存储介质”的优先权,该申请的全文以引用的方式并入本申请中。
技术领域
本公开涉及增强现实技术领域,尤其涉及一种增强现实场景下的展示方法、装置、设备、介质及程序。
背景技术
增强现实(Augmented Reality,AR)技术,通过将实体信息(视觉信息、声音、触觉等)通过模拟仿真后,叠加到真实世界中,从而将真实的环境和虚拟的物体实时地在同一个画面或空间呈现。
相关技术中,随着AR技术的发展,AR设备的应用领域越来越广,可以通过AR设备展示叠加在实体对象上的AR特效,在AR特效在展示过程中,一般需要确定AR特效的展示位置,在一些情况下,实体对象或者AR设备可能发生移动,在移动过程中,若实体对象的位置发生变化,如何能够继续确定AR特效的展示位置,从而对AR特效进行连贯展示,以提供更加逼真的展示效果,是值得研究的问题。
发明内容
本公开实施例提供一种增强现实场景下的展示方案。
本公开实施例提供了一种增强现实场景下的展示方法,所述方法由电子设备执行,所述方法包括:
获取AR设备拍摄的当前场景图像;
在识别到所述当前场景图像中包含目标实体对象的情况下,确定与所述目标实体对象匹配的AR特效的第一展示位置数据,并基于所述第一展示位置数据,控制所述AR设备展示所述AR特效;
在展示所述AR特效的过程中,响应于在所述当前场景图像中未识别到所述目标实体对象,确定所述AR特效的第二展示位置数据,并基于所述第二展示位置数据,控制所述AR设备按照所述AR特效的已展示进度继续展示所述AR特效。
如此,能够通过目标实体对象的识别结果即可触发匹配的特效数据进行展示,使其展示效果能够与目标实体对象紧密关联,能够更有针对性的去展示特效数据。并且,基于目标实体对象的识别结果采用不同的定位方式控制AR设备对AR特效进行展示,能够提高AR特效在展示过程中的连贯性和稳定性,使得AR特效的展示更加逼真。
在本公开的一些实施例中,所述在识别到所述当前场景图像中包含目标实体对象的情况下,确定与所述目标实体对象匹配的AR特效的第一展示位置数据之前,按照以下方式识别所述当前场景图像中是否包含所述目标实体对象:对所述当前场景图像进行特征点提取,得到所述当前场景图像包含的多个特征点分别对应的特征信息;所述多个特征点位于所述当前场景图像中的目标检测区域中;基于所述多个特征点分别对应的特征信息与预先存储的所述目标实体对象包含的多个特征点分别对应的特征信息进行比对,确定所述当前场景图像中是否包含所述目标实体对象。如此,通过提取目标检测区域中包含的多个特征点,对当前场景图像中是否包含目标实体对象进行识别,通过特征点比对的方式可以快速准确的确定当前场景图像中是否包含目标实体对象。
在本公开的一些实施例中,所述确定与所述目标实体对象匹配的AR特效的第一展示位置数据,包括:获取所述目标实体对象在所述当前场景图像中的位置信息;基于所述位置信息,确定所述目标实体对象在预先建立的世界坐标系下的位置数据;以及,基于所述当前场景图像,确定所述AR设备在所述世界坐标系下的位置数据;基于所述目标实体对象在所述世界坐标系下的位置数据和所述AR设备在所述世界坐标系下的位置数据,确定所述第一展示位置数据。如此,通过确定出目标实体对象、AR设备在同一世界坐标系下的位置数据,能够更加精准地确定出AR特效相对于AR设备在同一世界坐标系下的第一展示位置数据,这样使得通过AR设备展示的增强现实场景更为逼真。
在本公开的一些实施例中,所述基于所述位置信息,确定所述目标实体对象在预先建立的世界坐标系下的位置数据,包括:基于所述位置信息、图像坐标系和AR设备对应的相机坐标系之间的转换关系、以及AR设备对应的相机坐标系与所述世界坐标系之间的转换关系,确定所述目标实体对象在所述世界坐标系下的位置数据;所述基于所述目标实体对象在所述世界坐标系下的位置数据和所述AR设备在所述世界坐标系下的位置数据,确定所述第一展示位置数据,包括:基于所述目标实体对象在所述世界坐标系下的位置数据,确定所述AR特效在所述世界坐标系下的位置数据;基于所述AR特效在所述世界坐标系下的位置数据和所述AR设备在所述世界坐标系下的位置数据,确定所述第一展示位置数据。如此,在识别到当前场景图像中包含目标实体对象的情况下,可以基于当前场景图像来准确的确定出目标实体对象和AR设备在世界坐标系下位置数据,从而能够准确快速的得到AR特效的第一展示位置数据。
在本公开的一些实施例中,所述确定所述AR特效的第二展示位置数据,包括:基于所述当前场景图像、历史场景图像、以及所述AR设备在拍摄所述历史场景图像时与所述目标实体对象在预先建立的世界坐标系下的相对位置数据,确定所述AR设备在拍摄当前场景图像时,与所述目标实体对象之间的相对位置数据;基于所述相对位置数据,确定所述AR特效的第二展示位置数据。如此,利用当前场景图像、历史场景图像、以及AR设备在拍摄历史场景图像时与目标实体对象在世界坐标系下的相对位置数据,能够较为准确的确定AR设备在拍摄当前场景图像时,与目标实体对象之间的相对位置数据,这样可以基于准确的相对位置数据,确定AR特效的第二展示位置数据,能够实现在识别不到目标实体对象的情况下,还可以对AR特效进行展示。
在本公开的一些实施例中,所述AR特效包括AR画面和与所述AR画面匹配的音频内容,所述基于所述第二展示位置数据,控制所述AR设备按照所述AR特效的已展示进度继续展示所述AR特效,包括:在所述当前场景图像中未识别到所述目标实体对象,且所述AR画面未展示完毕的情况下,基于所述第二展示位置数据控制所述AR设备按照所述AR画面的已展示进度,继续展示与未展示的所述AR画面匹配的音频内容。如此,在无法识别到目标实体对象的情况下,若AR画面没有展示完毕,还可以继续对与AR画面匹配的音频内容进行展示,增加AR特效的展示连贯性,使得AR特效能够更加逼真的展示。
在本公开的一些实施例中,所述在展示所述AR特效的过程中,响应于在所述当前场景图像中未识别到所述目标实体对象,确定所述AR特效的第二展示位置数据,并基于所述第二展示位置数据,控制所述AR设备按照所述AR特效的已展示进度继续展示所述AR特效之后,所述方法还包括:在所述当前场景图像中重新识别到所述目标实体对象的情况下,重新基于确定的所述第一展示位置数据,控制AR设备按照所述AR特效的已展示进度继续展示所述AR特效。如此,在重新识别到目标实体对象后,重新基于准确度较高的第一展示位置数据,对AR特效继续进行展示,能够提高AR特效在展示过程中的连贯性和稳定性,使得AR特效的展示更加逼真。
在本公开的一些实施例中,在控制所述AR设备展示所述AR特效之前,所述方法还包括:获取与所述目标实体对象匹配的AR特效;所述AR特效中包含多个虚拟对象分别对应的特效数据;所述控制所述AR设备展示所述AR特效,包括:按照所述多个虚拟对象分别对应的特效数据的展示顺序,控制所述AR设备依次展示所述多个虚拟对象分别对应的特效数据。如此,可以向用户展示多个虚拟对象构成的AR特效,使得AR特效的展示更加生动。
在本公开的一些实施例中,所述目标实体对象包含日历,在控制所述AR设备展示所述AR特效之前,所述方法还包括:获取与所述日历匹配的AR特效;所述AR特效中包含基于所述日历的封面内容生成的第一特效数据;所述控制所述AR设备展示所述AR特效,包括:在识别到所述日历的封面内容的情况下,基于所述第一特效数据,控制所述AR设备展示与所述日历的封面内容匹配的所述AR特效。如此,在识别到日历封面的情况下,可以在日历封面上展示介绍日历的AR特效,丰富日历的展示内容,提高用户对日历的观看兴趣。
在本公开的一些实施例中，所述目标实体对象包含日历，在控制所述AR设备展示所述AR特效之前，所述方法还包括：获取与所述日历匹配的AR特效；所述AR特效中包含基于所述日历中的至少一个预设日期在历史同期的标记事件生成的第二特效数据；所述控制所述AR设备展示所述AR特效，包括：在识别到所述日历中的至少一个预设日期的情况下，基于所述第二特效数据，控制所述AR设备展示与所述日历中的至少一个预设日期匹配的所述AR特效。如此，在向用户展示日历的同时，在获取到日历上的预设日期的情况下，还可以向用户展示与预设日期对应的AR特效，丰富日历的展示内容。
以下装置、电子设备等的效果描述参见上述增强现实场景下的展示方法的说明。
本公开实施例提供了一种增强现实场景下的展示装置,包括:
获取模块,配置为获取增强现实AR设备拍摄的当前场景图像;
第一控制模块,配置为在识别到所述当前场景图像中包含目标实体对象的情况下,确定与所述目标实体对象匹配的AR特效的第一展示位置数据,并基于所述第一展示位置数据,控制所述AR设备展示所述AR特效;
第二控制模块,配置为在展示所述AR特效的过程中,响应于在所述当前场景图像中未识别到所述目标实体对象,确定所述AR特效的第二展示位置数据,并基于所述第二展示位置数据,控制所述AR设备按照所述AR特效的已展示进度继续展示所述AR特效。
在本公开的一些实施例中,所述获取模块,配置为按照以下方式识别所述当前场景图像中是否包含所述目标实体对象:对所述当前场景图像进行特征点提取,得到所述当前场景图像包含的多个特征点分别对应的特征信息;所述多个特征点位于所述当前场景图像中的目标检测区域中;基于所述多个特征点分别对应的特征信息与预先存储的所述目标实体对象包含的多个特征点分别对应的特征信息进行比对,确定所述当前场景图像中是否包含所述目标实体对象。
在本公开的一些实施例中,所述第一控制模块,配置为获取所述目标实体对象在所述当前场景图像中的位置信息;基于所述位置信息,确定所述目标实体对象在预先建立的世界坐标系下的位置数据;以及,基于所述当前场景图像,确定所述AR设备在所述世界坐标系下的位置数据;基于所述目标实体对象在所述世界坐标系下的位置数据和所述AR设备在所述世界坐标系下的位置数据,确定所述第一展示位置数据。
在本公开的一些实施例中,所述第一控制模块,配置为基于所述位置信息、图像坐标系和AR设备对应的相机坐标系之间的转换关系、以及AR设备对应的相机坐标系与所述世界坐标系之间的转换关系,确定所述目标实体对象在所述世界坐标系下的位置数据;所述第一控制模块,配置为基于所述目标实体对象在所述世界坐标系下的位置数据,确定所述AR特效在所述世界坐标系下的位置数据;基于所述AR特效在所述世界坐标系下的位置数据和所述AR设备在所述世界坐标系下的位置数据,确定所述第一展示位置数据。
在本公开的一些实施例中,所述第二控制模块,配置为基于所述当前场景图像、历史场景图像、以及所述AR设备在拍摄所述历史场景图像时与所述目标实体对象在预先建立的世界坐标系下的相对位置数据,确定所述AR设备在拍摄当前场景图像时,与所述目标实体对象之间的相对位置数据;基于所述相对位置数据,确定所述AR特效的第二展示位置数据。
在本公开的一些实施例中,所述AR特效包括AR画面和与所述AR画面匹配的音频内容,所述第二控制模块,配置为在所述当前场景图像中未识别到所述目标实体对象,且所述AR画面未展示完毕的情况下,基于所述第二展示位置数据控制所述AR设备按照所述AR画面的已展示进度,继续展示与未展示的所述AR画面匹配的音频内容。
在本公开的一些实施例中，所述第一控制模块，配置为在所述当前场景图像中重新识别到所述目标实体对象的情况下，重新基于确定的所述第一展示位置数据，控制AR设备按照所述AR特效的已展示进度继续展示所述AR特效。
在本公开的一些实施例中,在所述第一控制模块配置为控制所述AR设备展示所述AR特效之前,所述获取模块,配置为获取与所述目标实体对象匹配的AR特效;所述AR特效中包含多个虚拟对象分别对应的特效数据;所述第一控制模块,配置为按照所述多个虚拟对象分别对应的特效数据的展示顺序,控制所述AR设备依次展示所述多个虚拟对象分别对应的特效数据。
在本公开的一些实施例中,所述目标实体对象包含日历,在所述第一控制模块配置为控制所述AR设备展示所述AR特效之前,所述获取模块,配置为获取与所述日历匹配的AR特效;所述AR特效中包含基于所述日历的封面内容生成的第一特效数据;所述第一控制模块,配置为在识别到所述日历的封面内容的情况下,基于所述第一特效数据,控制所述AR设备展示与所述日历的封面内容匹配的所述AR特效。
在本公开的一些实施例中,所述目标实体对象包含日历,在所述第一控制模块配置为控制所述AR设备展示所述AR特效之前,所述获取模块,配置为获取与所述日历匹配的AR特效;所述AR特效中包含基于所述日历中的至少一个预设日期在历史同期的标记事件生成的第二特效数据;所述第一控制模块,配置为在识别到所述日历中的至少一个预设日期的情况下,基于所述第二特效数据,控制所述AR设备展示与所述日历中的至少一个预设日期匹配的所述AR特效。
本公开实施例还提供一种电子设备,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当电子设备运行时,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行时执行任一实施例所述的增强现实场景下的展示方法。
本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行任一实施例所述的增强现实场景下的展示方法。
本公开实施例还提供一种计算机程序,所述计算机程序包括计算机可读代码,在所述计算机可读代码在电子设备中运行的情况下,所述电子设备的处理器执行上述任一实施例所述的增强现实场景下的展示方法。
本公开实施例至少提供一种增强现实场景下的展示方法、装置、设备、介质及程序,能够通过目标实体对象的识别结果即可触发匹配的特效数据进行展示,使其展示效果能够与目标实体对象紧密关联,能够更有针对性的去展示特效数据。并且,基于目标实体对象的识别结果采用不同的定位方式控制AR设备对AR特效进行展示,能够提高AR特效在展示过程中的连贯性和稳定性,使得AR特效的展示更加逼真。
为使本公开的上述目的、特征和优点能更明显易懂,下文特举较佳实施例,并配合所附附图,作详细说明如下。
附图说明
为了更清楚地说明本公开实施例的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,此处的附图被并入说明书中并构成本说明书中的一部分,这些附图示出了符合本公开的实施例,并与说明书一起用于说明本公开实施例的技术方案。应当理解,以下附图仅示出了本公开的某些实施例,因此不应被看作是对范围的限定,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他相关的附图。
图1示出了本公开实施例所提供的一种增强现实场景下的展示方法的流程示意图;
图2示出可以应用本公开实施例的增强现实场景下的展示方法的一种系统架构示意图;
图3示出了本公开实施例所提供的一种确定当前场景图像中是否包含目标实体对象的方法的流程示意图;
图4示出了本公开实施例所提供的一种确定AR特效的展示位置数据的方法流程示意图;
图5示出了本公开实施例所提供的另一种确定AR特效的展示位置数据的方法流程示意图;
图6示出了本公开实施例所提供的另一种确定AR特效的展示位置数据的方法流程示意图;
图7示出了本公开实施例所提供的一种AR特效的展示画面示意图;
图8示出了本公开实施例所提供的一种增强现实场景下的展示装置800的结构示意图;
图9示出了本公开实施例所提供的一种电子设备900的结构示意图。
具体实施方式
为使本公开实施例的目的、技术方案和优点更加清楚，下面将结合本公开实施例中附图，对本公开实施例中的技术方案进行清楚、完整地描述，显然，所描述的实施例仅仅是本公开一部分实施例，而不是全部的实施例。通常在此处附图中描述和示出的本公开实施例的组件可以以各种不同的配置来布置和设计。因此，以下对在附图中提供的本公开的实施例的详细描述并非旨在限制要求保护的本公开的范围，而是仅仅表示本公开的选定实施例。基于本公开的实施例，本领域技术人员在没有做出创造性劳动的前提下所获得的所有其他实施例，都属于本公开保护的范围。
应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步定义和解释。
本文中术语“和/或”,仅仅是描述一种关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中术语“至少一种”表示多种中的任意一种或多种中的至少两种的任意组合,例如,包括A、B、C中的至少一种,可以表示包括从A、B和C构成的集合中选择的任意一个或多个元素。
本公开实施例中的多个或者多种可以分别指的是至少两个或者至少两种。
随着AR技术的发展，AR技术逐渐被应用于多种领域中，比如可以在实体对象上叠加AR特效，通过AR特效向用户形象生动地介绍实体对象。AR特效在展示过程中，一般需要确定AR特效的展示位置。相关技术中，在向用户展示AR特效的过程中，实体对象或者AR设备可能发生移动，若在移动过程中实体对象的位置发生变化，如何能够继续确定AR特效的展示位置，从而对AR特效进行连贯展示，以提供更加逼真的展示效果，是值得研究的问题。
基于上述研究,本公开实施例提供了一种增强现实场景下的展示方法、装置、设备、介质及程序,能够通过目标实体对象的识别结果即可触发匹配的特效数据进行展示,使其展示效果能够与目标实体对象紧密关联,能够更有针对性的去展示特效数据。并且,基于目标实体对象的识别结果采用不同的定位方式控制AR设备对AR特效进行展示,能够提高AR特效在展示过程中的连贯性和稳定性,使得AR特效的展示更加逼真。
为便于对本公开实施例进行理解,首先对本公开实施例所公开的一种增强现实场景下的展示方法进行详细介绍,本公开实施例所提供的增强现实场景下的展示方法的执行主体一般为具有一定计算能力的计算机设备,该计算机设备例如包括:终端设备或服务器或其它处理设备,终端设备可以是具有AR功能的AR设备,比如可以包括AR眼镜、平板电脑、智能手机、智能穿戴式设备等具有显示功能和数据处理能力的设备,本公开实施例中不作限定。在本公开的一些实施例中,该增强现实场景下的展示方法可以通过处理器调用存储器中存储的计算机可读指令的方式来实现。
参见图1所示,为本公开实施例提供的增强现实场景下的展示方法的流程示意图,该展示方法包括以下S101至S103:
S101,获取AR设备拍摄的当前场景图像。
示例性地,AR设备可以包括但不限于AR眼镜、平板电脑、智能手机、智能穿戴式设备等具有显示功能和数据处理能力的设备,这些AR设备中可以安装用于展示AR场景内容的应用程序,用户可以在该应用程序中体验AR场景内容。
示例性地,AR设备还可以包含用于拍摄图像的图像采集部件,比如三原色(Red Green Blue,RGB)摄像头,在获取到AR设备拍摄的当前场景图像后,可以对该当前场景图像进行识别,识别是否包含触发AR特效进行展示的目标实体对象。
S102,在识别到当前场景图像中包含目标实体对象的情况下,确定与目标实体对象匹配的AR特效的第一展示位置数据,并基于第一展示位置数据,控制AR设备展示AR特效。
示例性地,针对不同的应用场景,目标实体对象可以为具有特定形态的物体,比如可以为书本、字画、建筑物等实体物体。在不同的应用场景下,目标实体对象可以为该应用场景下的实体物体,通过AR特效可以对该实体物体进行介绍,增加用户对实体物体的了解。
示例性地,与目标实体对象匹配的AR特效,包括与目标实体对象具有预设相对位置关系的AR特效和与目标实体对象的内容具有关联关系的AR特效中的至少之一。
示例性地，与目标实体对象具有预设相对位置关系的AR特效，可以指代AR特效和目标实体对象在同一坐标系下具有预设相对位置关系。在本公开的一些实施例中，AR特效可以包含三维的AR画面和音频内容，其中三维的AR画面可以在预先构建的三维场景模型中生成。在该三维场景模型中可以提前设置好三维的AR画面的形貌、尺寸、位置、姿态等数据，以及AR画面与目标实体对象之间的位置关系、相对姿态关系；该三维场景模型对应的三维坐标系和目标实体对象所在的世界坐标系之间具有预设的转换关系。这样，在确定目标实体对象在世界坐标系下的位置数据后，可以基于目标实体对象在世界坐标系下的位置数据，确定出AR画面在世界坐标系下的位置数据。另外，还可以基于目标实体对象在世界坐标系下的姿态数据，同时确定出AR画面在世界坐标系下的姿态数据。
在本公开的一些实施例中,还可以预先设定目标实体对象与AR特效的三维画面之间的位置关系和姿态关系,比如可以设定AR画面和目标实体对象在同一坐标系(可以为预先建立的世界坐标系)下的位置关系和姿态关系。这样在得到目标实体对象在世界坐标系下的位置数据后,可以确定出AR画面在世界坐标系下的位置数据,还可以在得到目标实体对象在世界坐标系下的姿态数据后,可以确定出AR画面在世界坐标系下的姿态数据(AR画面的指定方向分别与世界坐标系的X轴、Y轴和Z轴之间的夹角),该过程无需构建三维场景模型,在针对AR特效的展示过程中,更加方便快捷。
示例性地，针对目标实体对象为日历的情况，可以以日历中心为原点、以通过日历中心的长边为X轴、以通过日历中心的短边为Y轴、以通过日历中心且垂直于日历封面的直线为Z轴建立世界坐标系。若AR画面与目标实体对象之间的位置关系包括AR画面在日历上表面且距离日历中心点预设距离的地方进行展示，则在确定出日历在世界坐标系下的位置数据后，可以基于此确定出AR画面的展示位置。另外，还可以基于日历在世界坐标系下的姿态数据，以及预先设定的AR画面与日历之间的姿态关系，确定出AR画面在世界坐标系下的姿态数据。
示例性地,与目标实体对象的内容具有关联关系的AR特效,可以指AR特效的展示内容包含目标实体对象的内容、目标实体对象具有的作用或者为吸引用户了解目标实体对象的AR画面。在本公开的一些实施例中,针对目标实体对象为日历的情况,AR特效的展示内容可以包含与该日历对应的年份相关的AR画面,以及介绍该日历包含内容的音频内容等。
示例性地,在识别到当前场景图像中包含目标实体对象的情况下,可以使用第一定位方式来确定目标实体对象在当前场景图像中的位置信息,进而基于目标实体对象在当前场景图像中的位置信息,来确定AR特效的第一展示位置数据;其中,第一定位方式可以是利用标识物(marker)进行定位的方式,即利用目标实体对象的图像作为marker,确定目标实体对象在当前场景图像中的图像位置信息。在一些实施例中,还可以基于目标实体对象在当前场景图像中的位置信息,来确定AR特效的第一展示姿态数据,基于图像识别技术,可以较为准确地确定出目标实体对象在当前场景图像中的位置信息。因此,这里基于目标实体对象的位置信息可以较为准确的得到AR特效的第一展示位置数据和第一展示姿态数据,从而为AR特效的准确展示提供支持。
示例性地,还可以确定AR特效的第一展示姿态数据。在识别到当前场景图像中包含目标实体对象的情况下,通过第一定位方式确定与目标实体对象匹配的AR特效的第一展示位置数据和第一展示姿态数据,并基于第一展示位置数据和第一展示姿态数据,控制AR设备展示AR特效。
S103,在展示AR特效的过程中,响应于在当前场景图像中未识别到目标实体对象,确定AR特效的第二展示位置数据,并基于第二展示位置数据,控制AR设备按照AR特效的已展示进度继续展示AR特效。
在控制AR设备展示AR特效的过程中,目标实体对象和AR设备中的至少之一发生移动,可能会导致发生以下两种情况中的至少之一,即情况一:目标实体对象和AR设备之间的相对位置数据发生变化,情况二:目标实体对象和AR设备之间的相对姿态数据发生变化。此时AR设备在拍摄当前场景图像时,可能无法拍摄到目标实体对象,或者无法拍摄到完整的目标实体对象。这样,在对当前场景图像进行识别时,可能存在无法识别到目标实体对象的情况。
在本公开的一些实施例中，无法识别到目标实体对象的情况，可以包含两种情况，一种是当前场景图像中不包含目标实体对象，另一种是当前场景图像中包含部分目标实体对象，比如目标实体对象为日历的情况下，当前场景图像中只包含日历的一个边角，这种情况可能无法检测到目标实体对象包含的足够的特征点，因此无法识别出目标实体对象。
考虑到在识别到当前场景图像中包含目标实体对象的情况下，通过第一定位方式确定的第一展示位置数据，是基于目标实体对象在当前场景图像中的位置信息来确定的。因此，在基于第一定位方式对目标实体对象进行定位的过程中，可以同时确定出AR设备在拍摄每张场景图像时，与目标实体对象之间的相对位置数据，并保存该相对位置数据，这样在当前场景图像中未识别到目标实体对象的情况下，可以结合保存的AR设备与目标实体对象之间的相对位置数据，以及实时定位与地图构建（Simultaneous Localization And Mapping，SLAM）技术，确定出AR设备在拍摄当前场景图像时，与目标实体对象之间的相对位置数据。在本公开的一些实施例中，可以基于该相对位置数据以及AR特效与目标实体对象的相对位置关系，确定出AR特效的第二展示位置数据，该过程将在后文进行详细阐述。
此外,在展示AR特效的过程中,响应于在当前场景图像中未识别到目标实体对象,还可以通过第二定位方式确定AR特效的第二展示姿态数据;其中,第二定位方式即为SLAM定位方式,并基于第二展示位置数据和第二展示姿态数据,控制AR设备按照AR特效的已展示进度继续展示AR特效,其中,第二展示姿态数据的确定过程与第二展示位置数据的确定过程相似。
示例性地,目标实体对象为日历,AR特效包括动态展示的总时长为30s的AR画面,若在该AR画面展示到第10s时,在AR设备拍摄的当前场景图像中识别不到目标实体对象,此时可以根据基于第二定位方式确定的AR特效的第二展示位置数据,或者基于第二展示位置数据和第二展示姿态数据,控制AR设备按照AR画面已展示的进度继续进行展示(继续从第10s处进行展示)。若在继续展示过程中,第二展示位置数据指示AR设备完全离开AR画面的展示位置范围,比如AR设备与日历之间的相对距离大于或等于预设阈值,或者,第二展示姿态数据指示AR设备的拍摄角度完全离开日历,此时尽管AR特效还在继续展示,但是用户无法通过AR设备观看到AR特效的AR画面。若在继续展示过程中,基于第二展示位置数据确定AR设备与日历之间的相对距离小于预设阈值,且AR设备的拍摄角度还可以拍摄到日历的部分区域,此时用户可以通过AR设备观看到展示的部分AR画面,比如看到与日历的部分区域匹配的AR画面。
本公开实施例中,在识别到当前场景图像中包含目标实体对象的情况下,基于确定的第一展示位置数据,控制AR设备展示AR特效。在向用户展示AR特效的过程中,若在当前场景图像中未识别到目标实体对象时,还可以根据确定出AR特效的第二展示位置数据,并基于第二展示位置数据控制AR设备继续对未展示的AR特效进行展示,能够提高AR特效在展示过程中的连贯性和稳定性,使得AR特效的展示更加逼真。
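上述在两种定位方式之间切换、并按已展示进度连贯展示AR特效的逻辑，可以用如下Python代码草图示意。其中的类名、字段与取值均为便于说明而设的假设，并非本公开方案的实际实现：

```python
def select_display_position(recognized, marker_pose, slam_pose):
    """识别到目标实体对象时使用基于标识物(marker)的第一展示位置数据，
    否则回退到基于SLAM推算的第二展示位置数据。"""
    return marker_pose if recognized else slam_pose


class AREffectPlayer:
    """按已展示进度连贯播放AR特效的最小示意。"""

    def __init__(self, total_seconds):
        self.total = total_seconds
        self.progress = 0.0  # 已展示进度（秒）

    def step(self, dt, display_pose):
        """推进dt秒并返回当前帧的展示数据；特效播放完毕时返回None。"""
        if self.progress >= self.total:
            return None
        # 无论定位方式如何切换，进度不重置，从而实现连贯展示
        self.progress = min(self.progress + dt, self.total)
        return {"pose": display_pose, "progress": self.progress}
```

例如，总时长30秒的AR画面展示到第10秒时识别不到目标实体对象，`step` 仍从第10秒继续推进，只是展示位置改由SLAM方式给出，与上文描述的连贯展示效果一致。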
图2示出可以应用本公开实施例的增强现实场景下的展示方法的一种系统架构示意图;如图2所示,该系统架构中包括:当前场景图像获取终端201、网络202和控制终端203。为实现支撑一个示例性应用,当前场景图像获取终端201和控制终端203通过网络202建立通信连接,当前场景图像获取终端201通过网络202向控制终端203上报当前场景图像,控制终端203在识别到当前场景图像中包含目标实体对象的情况下,确定与目标实体对象匹配的AR特效的第一展示位置数据,并基于第一展示位置数据,控制AR设备展示AR特效;在展示AR特效的过程中,响应于在当前场景图像中未识别到目标实体对象,确定AR特效的第二展示位置数据,并基于第二展示位置数据,控制AR设备按照AR特效的已展示进度继续展示AR特效。最后,控制终端203将展示位置信息和AR特效上传至网络202,并通过网络202发送给当前场景图像获取终端201。
作为示例,当前场景图像获取终端201可以包括图像采集设备,控制终端203可以包括具有视觉信息处理能力的视觉处理设备或远程服务器。网络202可以采用有线或无线连接方式。其中,当控制终端203为视觉处理设备时,当前场景图像获取终端201可以通过有线连接的方式与视觉处理设备通信连接,例如通过总线进行数据通信;当控制终端203为远程服务器时,当前场景图像获取终端201可以通过无线网络与远程服务器进行数据交互。
或者,在一些场景中,当前场景图像获取终端201可以是带有视频采集模组的视觉处理设备,可以是带有摄像头的主机。这时,本公开实施例的增强现实场景下的展示方法可以由当前场景图像获取终端201执行,上述系统架构可以不包含网络202和控制终端203。
在本公开的一些实施例中,在识别到当前场景图像中包含目标实体对象的情况下,确定与目标实体对象匹配的AR特效的第一展示位置数据之前,可以按照以下方式识别当前场景图像中是否包含目标实体对象,如图3所示,包括以下S201至S202:
S201,对当前场景图像进行特征点提取,得到当前场景图像包含的多个特征点分别对应的特征信息;多个特征点位于当前场景图像中的目标检测区域中。
示例性地,在对当前场景图像进行识别过程中,首先,可以通过图像检测算法,定位出当前场景图像中包含实体对象的目标检测区域。然后在目标检测区域中进行特征点提取,比如可以提取目标检测区域中位于实体对象轮廓上的特征点、位于标识图案区域的特征点以及位于文字区域上的特征点等。示例性地,为了使得提取到的特征点能够完整的表示目标实体对象,特征点可以基于目标实体对象在当前场景图像中对应的位置区域进行均匀提取,比如目标实体对象为日历的情况下,可以在日历封面在当前场景图像中对应的矩形区域中进行均匀提取。
示例性地,这里提取到的特征点包含的特征信息可以包含特征点对应的纹理特征值、RGB特征值、灰度值等能够表示该特征点特征的信息。
S202,基于多个特征点分别对应的特征信息与预先存储的目标实体对象包含的多个特征点分别对应的特征信息进行比对,确定当前场景图像中是否包含目标实体对象。
示例性地,可以按照相同的方式预先对目标实体对象进行拍摄,得到并保存目标实体对象包含的多个特征点分别对应的特征信息。
示例性地,在基于多个特征点分别对应的特征信息与预先存储的目标实体对象包含的多个特征点分别对应的特征信息进行比对时,可以先基于当前场景图像提取到的多个特征点分别对应的特征信息确定当前场景图像中目标检测区域对应的第一特征向量,以及基于目标实体对象包含的多个特征点分别对应的特征信息确定目标实体对象对应的第二特征向量。然后可以通过第一特征向量和第二特征向量确定目标检测区域和目标实体对象之间的相似度,比如可以通过余弦公式进行确定。
示例性地,在确定第一特征向量和第二特征向量之间的相似度大于或等于预设相似度阈值的情况下,确定当前场景图像中包含目标实体对象。反之,在确定第一特征向量和第二特征向量之间的相似度小于预设相似度阈值的情况下,确定当前场景图像中不包含目标实体对象。
本公开实施例中，通过提取目标检测区域中包含的多个特征点，对当前场景图像中是否包含目标实体对象进行识别，通过特征点比对的方式可以快速、准确地确定当前场景图像中是否包含目标实体对象。
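上述通过余弦公式计算第一特征向量与第二特征向量的相似度、并与预设相似度阈值比较的过程，可以用如下最小的Python草图示意。其中阈值0.9仅为假设取值，实际阈值应根据应用场景设定：

```python
import math


def cosine_similarity(v1, v2):
    """计算第一特征向量与第二特征向量之间的余弦相似度。"""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return dot / (n1 * n2) if n1 and n2 else 0.0


def contains_target(scene_vec, stored_vec, threshold=0.9):
    """相似度大于或等于阈值时，判定当前场景图像中包含目标实体对象。"""
    return cosine_similarity(scene_vec, stored_vec) >= threshold
```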
针对上述S102,确定与目标实体对象匹配的AR特效的第一展示位置数据,如图4所示,可以包括以下S301至S303:
S301,获取目标实体对象在当前场景图像中的位置信息。
示例性地,可以以当前场景图像建立图像坐标系,获取目标实体对象包含的多个特征点在图像坐标系中的图像坐标值,得到目标实体对象在当前场景图像中的位置信息。
S302,基于位置信息,确定目标实体对象在预先建立的世界坐标系下的位置数据;以及,基于当前场景图像,确定AR设备在世界坐标系下的位置数据;
在本公开的一些实施例中,可以基于位置信息、图像坐标系和AR设备对应的相机坐标系之间的转换关系、以及AR设备对应的相机坐标系与世界坐标系之间的转换关系,确定目标实体对象在世界坐标系下的位置数据。
示例性地，AR设备对应的相机坐标系可以是以AR设备包含的图像采集部件的聚焦中心为原点、以光轴为Z轴建立的三维直角坐标系，在AR设备拍摄到当前场景图像后，可以基于图像坐标系和相机坐标系之间的转换关系，确定出目标实体对象在相机坐标系下的位置数据。
示例性地,预先建立的世界坐标系可以以目标实体对象的中心点为原点进行建立,比如上文提到的在目标实体对象为日历的情况下,可以以日历的中心为原点,以通过日历中心的长边为X轴、以通过日历中心的短边为Y轴、以通过日历中心且垂直于日历封面的直线为Z轴进行建立的。
其中，相机坐标系和世界坐标系之间的转换为刚体转换，即相机坐标系经过旋转、平移可以与世界坐标系重合的一种转换方式。相机坐标系和世界坐标系之间的转换关系可以通过目标实体对象中的多个位置点在世界坐标系下的位置坐标，以及在相机坐标系下对应的位置坐标进行确定。这里在得到目标实体对象在相机坐标系下的位置数据后，可以基于AR设备对应的相机坐标系与世界坐标系之间的转换关系，确定出目标实体对象在世界坐标系下的位置数据。
示例性地,AR设备在世界坐标系下的位置数据可以通过AR设备拍摄的当前场景图像来确定,比如在当前场景图像中选定特征点,通过确定选定的特征点在以目标实体对象建立的世界坐标系下的位置坐标,以及选定的特征点在AR设备对应的相机坐标系下的位置坐标,可以确定出AR设备在拍摄当前场景图像时在世界坐标系下的位置数据。
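上述由图像坐标系经相机坐标系再到世界坐标系的坐标转换，可以按针孔相机模型与刚体变换写成如下草图。其中内参矩阵K、旋转矩阵R与平移向量t均假设已通过标定或求解得到，仅作示意：

```python
import numpy as np


def image_to_camera(u, v, depth, K):
    """按针孔相机模型，利用内参矩阵K将图像坐标(u, v)与深度值
    反投影到AR设备对应的相机坐标系。"""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])


def camera_to_world(p_cam, R, t):
    """相机坐标系到世界坐标系的刚体转换：p_world = R @ p_cam + t。"""
    return R @ p_cam + t
```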
S303,基于目标实体对象在世界坐标系下的位置数据和AR设备在世界坐标系下的位置数据,确定第一展示位置数据。
考虑到AR特效与目标实体对象在相同坐标系下具有预设位置关系,因此这里基于目标实体对象和AR设备在相同的世界坐标系下的位置数据,可以确定出AR特效相对于AR设备的第一展示位置数据。
示例性地，随着目标实体对象和AR设备中的至少之一的移动，该第一展示位置数据会发生变化，比如目标实体对象在世界坐标系下的位置数据发生变化，可以通过AR设备展示出随着目标实体对象的位置数据变化而变化的AR特效；或者，在AR设备在世界坐标系下的位置数据发生变化后，也可以通过AR设备展示出随着AR设备的位置数据变化而变化的AR特效。目标实体对象和AR设备同时发生移动导致的相对位置数据变化后，同样会导致AR特效的第一展示位置数据的变化，使得AR特效的展示发生变化，通过这样的方式可以带给用户更真实的AR体验，比如AR设备的朝向由目标实体对象的左侧移动到目标实体对象的右侧，则用户通过AR设备可以看到AR特效的展示也发生了对应的转换。
在本公开的一些实施例中,第一展示姿态数据的确定过程与第一展示位置数据的确定过程相似。
本公开实施例中,通过确定出目标实体对象、AR设备在同一世界坐标系下的位置数据,能够更加精准地确定出AR特效相对于AR设备在同一世界坐标系下的第一展示位置数据,这样使得通过AR设备中展示的增强现实场景更为逼真。
在本公开的一些实施例中,针对S303,基于目标实体对象在世界坐标系下的位置数据和AR设备在世界坐标系下的位置数据,确定第一展示位置数据,如图5所示,可以包括以下S3031至S3032:
S3031,基于目标实体对象在世界坐标系下的位置数据,确定AR特效在世界坐标系下的位置数据。
示例性地,可以按照目标实体对象在世界坐标系下的位置数据,以及预先设置的AR特效与目标实体对象在相同坐标系下的位置关系(详见上文描述),确定出AR特效在世界坐标系下的位置数据。
S3032,基于AR特效在世界坐标系下的位置数据和AR设备在世界坐标系下的位置数据,确定第一展示位置数据。
示例性地,在AR特效包含AR画面的情况下,AR特效在世界坐标系下的位置数据可以包含AR画面在世界坐标系下的位置。其中,AR画面在世界坐标系下的位置可以通过AR画面的中心点在世界坐标系下的坐标值表示。
在本公开的一些实施例中,在确定上文提到的第一展示姿态数据时,确定AR画面在世界坐标系下的姿态,可以通过AR画面的指定方向与世界坐标系各个坐标轴之间的夹角表示。
在本公开的一些实施例中,AR设备在世界坐标系下的位置数据可以包含AR设备中的图像采集部件在世界坐标系下的位置。其中,图像采集部件在世界坐标系下的位置可以通过图像采集部件的设定位置点在世界坐标系下的坐标值表示。
在本公开的一些实施例中,在确定上文提到的第一展示姿态数据时,确定图像采集部件在世界坐标系下的姿态,可以通过图像采集部件中摄像头的朝向方向与世界坐标系各个坐标轴之间的夹角表示。
示例性地，第一展示位置数据可以通过AR特效在世界坐标系下的位置和AR设备在世界坐标系下的位置确定。第一展示姿态数据可以通过AR特效在世界坐标系下的姿态和AR设备在世界坐标系下的姿态确定。
本公开实施例中,在识别到当前场景图像中包含目标实体对象的情况下,可以基于当前场景图像来准确的确定出目标实体对象和AR设备在世界坐标系下位置数据,从而能够准确快速的得到AR特效的第一展示位置数据。
针对上述S103,确定AR特效的第二展示位置数据,如图6所示,可以包括以下S401至S402:
S401,基于当前场景图像、历史场景图像、以及AR设备在拍摄历史场景图像时与目标实体对象在预先建立的世界坐标系下的相对位置数据,确定AR设备在拍摄当前场景图像时,与目标实体对象之间的相对位置数据。
示例性地，下面以当前场景图像为AR设备拍摄的第三帧场景图像为例，结合SLAM技术简要说明如何确定AR设备在拍摄当前场景图像时，与目标实体对象之间的相对位置数据。
从AR设备拍摄第一帧包含目标实体对象的场景图像开始，可以基于以目标实体对象的中心点为原点建立的世界坐标系，以及AR设备拍摄的第一帧场景图像中选定的特征点分别在世界坐标系和AR设备对应的相机坐标系下的位置坐标，确定出AR设备在拍摄第一帧场景图像时在世界坐标系下的位置数据。同时还可以确定出目标实体对象在AR设备拍摄第一帧场景图像时在世界坐标系下的位置数据。基于AR设备在拍摄第一帧场景图像时在世界坐标系下的位置数据，以及目标实体对象在AR设备拍摄第一帧场景图像时在世界坐标系下的位置数据，可以确定出AR设备拍摄第一帧场景图像时与目标实体对象在预先建立的世界坐标系下的相对位置数据。
在本公开的一些实施例中,当AR设备拍摄第二帧场景图像时,可以在第二帧场景图像中找到第一帧场景图像中包含的目标特征点,基于目标特征点分别在AR设备拍摄这两帧场景图像时在相机坐标系下的位置数据,确定出AR设备在拍摄第二帧场景图像时相对于拍摄第一帧场景图像时的位置偏移量。然后基于该位置偏移量,以及AR设备在拍摄第一帧场景图像时与目标实体对象在预先建立的世界坐标系下的相对位置数据,确定出AR设备在拍摄第二帧场景图像时与目标实体对象在预先建立的世界坐标系下的相对位置数据。
在本公开的一些实施例中，可以通过相同的方式，确定出AR设备在拍摄当前场景图像时，相对于拍摄第二帧场景图像时的位置偏移量，这样可以结合AR设备拍摄当前场景图像时相比拍摄第二帧场景图像时的位置偏移量，以及AR设备在拍摄第二帧场景图像时与目标实体对象在预先建立的世界坐标系下的相对位置数据，确定出AR设备在拍摄当前场景图像时与目标实体对象在预先建立的世界坐标系下的相对位置数据。
此外,还可以基于当前场景图像、历史场景图像、以及AR设备在拍摄历史场景图像时与目标实体对象在预先建立的世界坐标系下的相对姿态数据,确定AR设备在拍摄当前场景图像时,与目标实体对象之间的相对姿态数据。其中,相对姿态数据的确定过程与上述相对位置数据的确定过程相似。
S402,基于相对位置数据,确定AR特效的第二展示位置数据。
示例性地，考虑到AR特效与目标实体对象在相同坐标系下具有预设位置关系，因此这里同样可以基于AR设备在拍摄当前场景图像时，与目标实体对象之间的相对位置数据，确定出AR特效相对于AR设备的第二展示位置数据。
本公开实施例中,利用当前场景图像、历史场景图像、以及AR设备在拍摄历史场景图像时与目标实体对象在世界坐标系下的相对位置数据,能够较为准确的确定AR设备在拍摄当前场景图像时,与目标实体对象之间的相对位置数据,这样可以基于准确的相对位置数据,确定AR特效的第二展示位置数据,能够实现在识别不到目标实体对象的情况下,还可以对AR特效进行展示。
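上述由逐帧位置偏移量推算相对位置数据、再得到第二展示位置数据的过程，可以用如下Python草图示意。这里假设目标实体对象静止，相对位置定义为AR设备相对目标实体对象在世界坐标系下的位置，各向量取值均为说明用的假设：

```python
import numpy as np


def propagate_relative_position(prev_relative, device_offset):
    """在未识别到目标实体对象时，利用相邻两帧之间AR设备的位置偏移量，
    由上一帧保存的设备与目标实体对象的相对位置数据，推算当前帧的
    相对位置数据（假设目标实体对象静止）。"""
    return prev_relative + device_offset


def second_display_position(relative_device_to_target, effect_offset_from_target):
    """AR特效相对于AR设备的第二展示位置数据 =
    特效相对目标实体对象的预设偏移 - 设备相对目标实体对象的位置。"""
    return effect_offset_from_target - relative_device_to_target
```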
在本公开的一些实施例中,AR特效还可以包括AR画面和与AR画面匹配的音频内容,针对上述S103,基于第二展示位置数据,控制AR设备按照AR特效的已展示进度继续展示AR特效,可以包括:
在当前场景图像中未识别到目标实体对象,且AR画面未展示完毕的情况下,基于第二展示位置数据控制AR设备按照AR画面的已展示进度,继续展示与未展示的AR画面匹配的音频内容。
示例性地,在AR设备离开AR画面的展示位置范围的情况下,若此时AR画面的展示进度还未结束,此时AR画面的进度可以继续进行,但是用户无法通过AR设备观看到AR画面,只能听到与AR画面匹配的音频内容,依然带给用户AR特效的体验。在AR设备重新回到AR画面的展示位置范围的情况下,若AR画面依然未展示完毕,可以继续为用户展示未展示的AR画面和与该AR画面匹配的音频内容,带给用户连贯性的AR体验。
本公开实施例中,在当前场景图像中无法识别到目标实体对象的情况下,若AR画面没有展示完毕,还可以继续对与AR画面匹配的音频内容进行展示,增加AR特效的展示连贯性,使得AR特效能够更加逼真的展示。
在本公开的一些实施例中,在展示AR特效的过程中,响应于在当前场景图像中未识别到目标实体对象,确定AR特效的第二展示位置数据,并基于第二展示位置数据,控制AR设备按照AR特效的已展示进度继续展示AR特效之后,本公开实施例提供的方法还包括:
在当前场景图像中重新识别到目标实体对象的情况下，重新基于确定的第一展示位置数据，控制AR设备按照AR特效的已展示进度继续展示AR特效。
示例性地，在基于第二展示位置数据，控制AR设备按照AR特效的已展示进度继续展示AR特效的过程中，针对AR设备拍摄的当前场景图像，可以继续进行图像识别，识别当前场景图像中是否包含目标实体对象。在重新识别到当前场景图像中包含目标实体对象的情况下，依旧基于通过第一定位方式确定的第一展示位置数据，控制AR设备按照AR特效的已展示进度继续展示AR特效。
本公开实施例中,在重新识别到目标实体对象后,重新基于准确度较高的第一展示位置数据,对AR特效继续进行展示,能够提高AR特效在展示过程中的连贯性和稳定性,使得AR特效的展示更加逼真。
在本公开的一些实施例中,在控制AR设备展示AR特效之前,可以执行以下过程:
获取与目标实体对象匹配的AR特效;AR特效中包含多个虚拟对象分别对应的特效数据。
示例性地,在首次识别到目标实体对象时,可以获取与该目标实体对象匹配的AR特效。
AR特效中每个虚拟对象对应的特效数据可以包含该虚拟对象在通过AR设备展示时的形貌、颜色、以及对应的音频内容等数据。
在控制AR设备展示AR特效时,包括:
按照多个虚拟对象分别对应的特效数据的展示顺序,控制AR设备依次展示多个虚拟对象分别对应的特效数据。
示例性地，在AR特效包含多个虚拟对象时，可以预先设定好每个虚拟对象在AR设备中展示时的展示顺序，也可以基于每个虚拟对象的属性信息，以及预先设定的不同属性的展示顺序，确定多个虚拟对象分别对应的特效数据的展示顺序。其中，属性信息可以包含静态物体、动态物体以及动态人物等。
本公开实施例中,可以向用户展示多个虚拟对象构成的AR特效,使得AR特效的展示更加生动。
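上述按属性信息确定多个虚拟对象特效数据展示顺序的做法，可以用如下Python草图示意。其中属性类别的取值（static、dynamic_object、dynamic_character）及字段名均为举例假设：

```python
def order_effect_data(virtual_objects,
                      priority=("static", "dynamic_object", "dynamic_character")):
    """按预先设定的属性展示顺序，对多个虚拟对象对应的特效数据排序，
    以便控制AR设备依次展示；未知属性的对象排在最后。"""
    rank = {attr: i for i, attr in enumerate(priority)}
    return sorted(virtual_objects,
                  key=lambda obj: rank.get(obj["attr"], len(priority)))
```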
在本公开的一些实施例中,目标实体对象包含日历,在控制AR设备展示AR特效之前,本公开实施例提出的展示方法还包括:
获取与日历匹配的AR特效;AR特效中包含基于日历的封面内容生成的第一特效数据。
这里的第一特效数据可以为与日历的封面内容对应的多个虚拟对象分别对应的第一特效数据。
示例性地,AR特效中包含虚拟对象卡通龙、卡通松鼠、包裹日历封面的虚拟日历封面、虚拟文字、祥云等,还可以包含各个虚拟对象在通过AR展示时的展示顺序。
在控制AR设备展示AR特效时,可以包括:
在识别到日历的封面内容的情况下,基于第一特效数据,控制AR设备展示与日历的封面内容匹配的AR特效。
示例性地，在识别到日历的封面内容的情况下，基于第一特效数据，以及设置好的展示顺序，可以首先触发展示出包裹日历封面的虚拟日历封面，其次以动态形式展示出日历的虚拟标题，接着可以出现祥云，然后卡通龙和卡通松鼠展示在虚拟日历封面上，开始以对话的形式对日历进行详情介绍，并在介绍完毕后，结束AR特效的展示。
如图7所示,是在控制AR设备展示与日历封面内容匹配的AR特效的过程中展示的AR画面,用户可以通过观看AR画面并听对应的音频内容,来了解该日历。
本公开实施例中,在识别到日历封面的情况下,可以在日历封面上展示介绍日历的AR特效,丰富日历的展示内容,提高用户对日历的观看兴趣。
在本公开的一些实施例中,目标实体对象包含日历,在控制AR设备展示AR特效之前,本公开实施例提供的展示方法还包括:
获取与日历匹配的AR特效;AR特效中包含基于日历中的至少一个预设日期在历史同期的标记事件生成的第二特效数据。
示例性地,日历中包含一些在历史同期中发生过特定事件的预设日期,比如1月1号为元旦节,可以基于在历史中的元旦节发生过的事件生成AR特效,该AR特效中的第二特效数据可以包含基于历史同期发生的事件生成的虚拟文字、音频内容以及虚拟画面等,同时还可以包含各个第二特效数据之间的展示顺序。
在控制AR设备展示AR特效时,可以包括:
在识别到日历中的至少一个预设日期的情况下,基于第二特效数据,控制AR设备展示与日历中的至少一个预设日期匹配的AR特效。
示例性地,在AR设备拍摄的当前场景图像中包含预设日期时,可以按照第二特效数据,控制AR设备对预设日期在历史同期发生的事件进行展示介绍。
本公开实施例中,在向用户展示日历的同时,在获取到日历上的预设日期的情况下,还可以向用户展示与预设日期对应的AR特效,丰富日历的展示内容。
本领域技术人员可以理解,在上述方法公开的实施例中,各步骤的撰写顺序并不意味着严格的执行顺序而对实施过程构成任何限定,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
基于同一技术构思,本公开实施例中还提供了与增强现实场景下的展示方法对应的增强现实场景下的展示装置,由于本公开实施例中的装置解决问题的原理与本公开实施例上述展示方法相似,因此装置的实施可以参见方法的实施。
参照图8所示,为本公开实施例提供的一种增强现实场景下的展示装置800的结构示意图,该展示装置包括:
获取模块801,配置为获取增强现实AR设备拍摄的当前场景图像;
第一控制模块802,配置为在识别到当前场景图像中包含目标实体对象的情况下,确定与目标实体对象匹配的AR特效的第一展示位置数据,并基于第一展示位置数据,控制AR设备展示AR特效;
第二控制模块803,配置为在展示AR特效的过程中,响应于在当前场景图像中未识别到目标实体对象,确定AR特效的第二展示位置数据,并基于第二展示位置数据,控制AR设备按照AR特效的已展示进度继续展示AR特效。
在本公开的一些实施例中,获取模块801,配置为按照以下方式识别当前场景图像中是否包含目标实体对象:对当前场景图像进行特征点提取,得到当前场景图像包含的多个特征点分别对应的特征信息;多个特征点位于当前场景图像中的目标检测区域中;基于多个特征点分别对应的特征信息与预先存储的目标实体对象包含的多个特征点分别对应的特征信息进行比对,确定当前场景图像中是否包含目标实体对象。
在本公开的一些实施例中,第一控制模块802,配置为获取目标实体对象在当前场景图像中的位置信息;基于位置信息,确定目标实体对象在预先建立的世界坐标系下的位置数据;以及基于当前场景图像确定AR设备在世界坐标系下的位置数据;基于目标实体对象在世界坐标系下的位置数据和AR设备在世界坐标系下的位置数据,确定第一展示位置数据。
在本公开的一些实施例中，第一控制模块802，配置为基于位置信息、图像坐标系和AR设备对应的相机坐标系之间的转换关系、以及AR设备对应的相机坐标系与世界坐标系之间的转换关系，确定目标实体对象在世界坐标系下的位置数据；第一控制模块802，配置为基于目标实体对象在世界坐标系下的位置数据，确定AR特效在世界坐标系下的位置数据；基于AR特效在世界坐标系下的位置数据和AR设备在世界坐标系下的位置数据，确定第一展示位置数据。
在本公开的一些实施例中,第二控制模块803,配置为基于当前场景图像、历史场景图像、以及AR设备在拍摄历史场景图像时与目标实体对象在预先建立的世界坐标系下的相对位置数据,确定AR设备在拍摄当前场景图像时,与目标实体对象之间的相对位置数据;基于相对位置数据,确定AR特效的第二展示位置数据。
在本公开的一些实施例中,AR特效包括AR画面和与AR画面匹配的音频内容,第二控制模块803,配置为在当前场景图像中未识别到目标实体对象,且AR画面未展示完毕的情况下,基于第二展示位置数据控制AR设备按照AR画面的已展示进度,继续展示与未展示的AR画面匹配的音频内容。
在本公开的一些实施例中,第一控制模块802,配置为在当前场景图像中重新识别到目标实体对象的情况下,重新基于通过第一定位方式确定的第一展示位置数据,控制AR设备按照AR特效的已展示进度继续展示AR特效。
在本公开的一些实施例中,在第一控制模块802,配置为控制AR设备展示AR特效之前,获取模块801,配置为获取与目标实体对象匹配的AR特效;AR特效中包含多个虚拟对象分别对应的特效数据;第一控制模块802,配置为按照多个虚拟对象分别对应的特效数据的展示顺序,控制AR设备依次展示多个虚拟对象分别对应的特效数据。
在本公开的一些实施例中,目标实体对象包含日历,在第一控制模块802,配置为控制AR设备展示AR特效之前,获取模块801,配置为获取与日历匹配的AR特效;AR特效中包含基于日历的封面内容生成的第一特效数据;第一控制模块802,配置为在识别到日历的封面内容的情况下,基于第一特效数据,控制AR设备展示与日历的封面内容匹配的AR特效。
在本公开的一些实施例中,目标实体对象包含日历,在第一控制模块802,配置为控制AR设备展示AR特效之前,获取模块801,配置为获取与日历匹配的AR特效;AR特效中包含基于日历中的至少一个预设日期在历史同期的标记事件生成的第二特效数据;第一控制模块802,配置为在识别到日历中的至少一个预设日期的情况下,基于第二特效数据,控制AR设备展示与日历中的至少一个预设日期匹配的AR特效。
关于装置中的各模块的处理流程、以及各模块之间的交互流程的描述可以参照上述方法实施例中的相关说明,这里不再详述。
对应于图1中的增强现实场景下的展示方法,本公开实施例还提供了一种电子设备900,如图9所示,为本公开实施例提供的电子设备900结构示意图,包括:
处理器91、存储器92、和总线93;存储器92存储有处理器91可执行的机器可读指令,当电子设备900运行时,处理器91与存储器92之间通过总线93通信,机器可读指令被处理器91执行时执行上述任一实施例中的增强现实场景下的展示方法。
存储器92配置为存储执行指令,包括内存921和外部存储器922;这里的内存921也称内存储器,用于暂时存放处理器91中的运算数据,以及与硬盘等外部存储器922交换的数据,处理器91通过内存921与外部存储器922进行数据交换,当电子设备900运行时,处理器91与存储器92之间通过总线93通信,使得处理器91执行以下指令:
获取增强现实AR设备拍摄的当前场景图像;在识别到当前场景图像中包含目标实体对象的情况下,确定与目标实体对象匹配的AR特效的第一展示位置数据,并基于第一展示位置数据,控制AR设备展示AR特效;在展示AR特效的过程中,响应于在当前场景图像中未识别到目标实体对象,确定AR特效的第二展示位置数据,并基于第二展示位置数据,控制AR设备按照AR特效的已展示进度继续展示AR特效。
本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述方法实施例中所述的增强现实场景下的展示方法。其中,该存储介质可以是易失性或非易失的计算机可读取存储介质。
本公开实施例还提供一种计算机程序,计算机程序包括计算机可读代码,在计算机可读代码在电子设备中运行的情况下,电子设备的处理器执行如上述任一实施例所述增强现实场景下的展示方法。
本公开实施例还提供另一种计算机程序产品，该计算机程序产品承载有程序代码，程序代码包括的指令可配置为执行上述方法实施例中所述的增强现实场景下的展示方法，具体可参见上述方法实施例。
其中,上述计算机程序产品可以通过硬件、软件或其结合的方式实现。在一些实施例中,所述计算机程序产品可以体现为计算机存储介质,在另一些实施例中,计算机程序产品可以体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。
本公开实施例中涉及的设备可以是系统、方法和计算机程序产品中的至少之一。计算机程序产品可以包括计算机可读存储介质,其上载有用于使处理器实现本公开的各个方面的计算机可读程序指令。
计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是但不限于电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质的例子(非穷举的列表)包括:便携式计算机盘、硬盘、随机存取存储器(Random Access Memory,RAM)、只读存储器(Read-Only Memory,ROM)、可擦除可编程只读存储器(Electrical Programmable Read Only Memory,EPROM)或闪存、静态随机存取存储器(Static Random-Access Memory,SRAM)、便携式压缩盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、数字多功能盘(Digital Video Disc,DVD)、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。这里所使用的计算机可读存储介质不被解释为瞬时信号本身,诸如无线电波或者其他自由传播的电磁波、通过波导或其他传输媒介传播的电磁波(例如,通过光纤电缆的光脉冲)、或者通过电线传输的电信号。
这里所描述的计算机可读程序指令可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和无线网中的至少之一下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和边缘服务器中的至少之一。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。
用于执行本公开操作的计算机程序指令可以是汇编指令、指令集架构(Industry Standard Architecture,ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码,所述编程语言包括面向对象的编程语言—诸如Smalltalk、C++等,以及常规的过程式编程语言,诸如“C”语言或类似的编程语言。计算机可读程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络,包括局域网(Local Area Network,LAN)或广域网(Wide Area Network,WAN)连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。在一些实施例中,通过利用计算机可读程序指令的状态信息来个性化定制电子电路,例如可编程逻辑电路、FPGA或可编程逻辑阵列(Programmable Logic Arrays,PLA),该电子电路可以执行计算机可读程序指令,从而实现本公开的各个方面。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统和装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。在本公开所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的，作为单元显示的部件可以是或者也可以不是物理单元，即可以位于一个地方，或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是：以上所述实施例，仅为本公开的具体实施方式，用以说明本公开的技术方案，而非对其限制，本公开的保护范围并不局限于此，尽管参照前述实施例对本公开进行了详细的说明，本领域的普通技术人员应当理解：任何熟悉本技术领域的技术人员在本公开揭露的技术范围内，其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化，或者对其中部分技术特征进行等同替换；而这些修改、变化或者替换，并不使相应技术方案的本质脱离本公开实施例技术方案的精神和范围，都应涵盖在本公开的保护范围之内。因此，本公开的保护范围应以权利要求的保护范围为准。
工业实用性
本公开实施例提供了一种增强现实场景下的展示方法、装置、设备、介质及程序,其中,所述方法由电子设备执行,该展示方法包括:获取增强现实AR设备拍摄的当前场景图像;在识别到所述当前场景图像中包含目标实体对象的情况下,确定与所述目标实体对象匹配的AR特效的第一展示位置数据,并基于所述第一展示位置数据,控制所述AR设备展示所述AR特效;在展示所述AR特效的过程中,响应于在所述当前场景图像中未识别到所述目标实体对象,确定所述AR特效的第二展示位置数据,并基于所述第二展示位置数据,控制所述AR设备按照所述AR特效的已展示进度继续展示所述AR特效。

Claims (14)

  1. 一种增强现实场景下的展示方法,所述方法由电子设备执行,所述方法包括:
    获取增强现实AR设备拍摄的当前场景图像;
    在识别到所述当前场景图像中包含目标实体对象的情况下,确定与所述目标实体对象匹配的AR特效的第一展示位置数据,并基于所述第一展示位置数据,控制所述AR设备展示所述AR特效;
    在展示所述AR特效的过程中,响应于在所述当前场景图像中未识别到所述目标实体对象,确定所述AR特效的第二展示位置数据,并基于所述第二展示位置数据,控制所述AR设备按照所述AR特效的已展示进度继续展示所述AR特效。
  2. 根据权利要求1所述的方法,其中,所述在识别到所述当前场景图像中包含目标实体对象的情况下,确定与所述目标实体对象匹配的AR特效的第一展示位置数据之前,按照以下方式识别所述当前场景图像中是否包含所述目标实体对象:
    对所述当前场景图像进行特征点提取,得到所述当前场景图像包含的多个特征点分别对应的特征信息;所述多个特征点位于所述当前场景图像中的目标检测区域中;
    基于所述多个特征点分别对应的特征信息与预先存储的所述目标实体对象包含的多个特征点分别对应的特征信息进行比对,确定所述当前场景图像中是否包含所述目标实体对象。
  3. 根据权利要求1或2所述的方法,其中,所述确定与所述目标实体对象匹配的AR特效的第一展示位置数据,包括:
    获取所述目标实体对象在所述当前场景图像中的位置信息;
    基于所述位置信息,确定所述目标实体对象在预先建立的世界坐标系下的位置数据;以及,基于所述当前场景图像,确定所述AR设备在所述世界坐标系下的位置数据;
    基于所述目标实体对象在所述世界坐标系下的位置数据和所述AR设备在所述世界坐标系下的位置数据,确定所述第一展示位置数据。
  4. 根据权利要求3所述的方法,其中,所述基于所述位置信息,确定所述目标实体对象在预先建立的世界坐标系下的位置数据,包括:
    基于所述位置信息、图像坐标系和AR设备对应的相机坐标系之间的转换关系、以及AR设备对应的相机坐标系与所述世界坐标系之间的转换关系,确定所述目标实体对象在所述世界坐标系下的位置数据;
    所述基于所述目标实体对象在所述世界坐标系下的位置数据和所述AR设备在所述世界坐标系下的位置数据,确定所述第一展示位置数据,包括:
    基于所述目标实体对象在所述世界坐标系下的位置数据,确定所述AR特效在所述世界坐标系下的位置数据;
    基于所述AR特效在所述世界坐标系下的位置数据和所述AR设备在所述世界坐标系下的位置数据,确定所述第一展示位置数据。
  5. 根据权利要求1至4任一所述的方法,其中,所述确定所述AR特效的第二展示位置数据,包括:
    基于所述当前场景图像、历史场景图像、以及所述AR设备在拍摄所述历史场景图像时与所述目标实体对象在预先建立的世界坐标系下的相对位置数据,确定所述AR设备在拍摄当前场景图像时,与所述目标实体对象之间的相对位置数据;
    基于所述相对位置数据,确定所述AR特效的第二展示位置数据。
  6. 根据权利要求1至5任一所述的方法,其中,所述AR特效包括AR画面和与所述AR画面匹配的音频内容,所述基于所述第二展示位置数据,控制所述AR设备按照所述AR特效的已展示进度继续展示所述AR特效,包括:
    在所述当前场景图像中未识别到所述目标实体对象,且所述AR画面未展示完毕的情况下,基于所述第二展示位置数据控制所述AR设备按照所述AR画面的已展示进度,继续展示与未展示的所述AR画面匹配的音频内容。
  7. 根据权利要求1至6任一所述的方法,其中,所述在展示所述AR特效的过程中,响应于在所述当前场景图像中未识别到所述目标实体对象,确定所述AR特效的第二展示位置数据,并基于所述第二展示位置数据,控制所述AR设备按照所述AR特效的已展示进度继续展示所述AR特效之后,所述方法还包括:
    在所述当前场景图像中重新识别到所述目标实体对象的情况下,重新基于确定的所述第一展示位置数据,控制所述AR设备按照所述AR特效的已展示进度继续展示所述AR特效。
  8. 根据权利要求1至7任一所述的方法,其中,在控制所述AR设备展示所述AR特效之前,所述方法还包括:
    获取与所述目标实体对象匹配的AR特效;所述AR特效中包含多个虚拟对象分别对应的特效数据;
    所述控制所述AR设备展示所述AR特效,包括:
    按照所述多个虚拟对象分别对应的特效数据的展示顺序,控制所述AR设备依次展示所述多个虚拟对象分别对应的特效数据。
  9. 根据权利要求1至8任一所述的方法,其中,所述目标实体对象包含日历,在控制所述AR设备展示所述AR特效之前,所述方法还包括:
    获取与所述日历匹配的AR特效;所述AR特效中包含基于所述日历的封面内容生成的第一特效数据;
    所述控制所述AR设备展示所述AR特效,包括:
    在识别到所述日历的封面内容的情况下,基于所述第一特效数据,控制所述AR设备展示与所述日历的封面内容匹配的所述AR特效。
  10. 根据权利要求1至9任一所述的方法,其中,所述目标实体对象包含日历,在控制所述AR设备展示所述AR特效之前,所述方法还包括:
    获取与所述日历匹配的AR特效;所述AR特效中包含基于所述日历中的至少一个预设日期在历史同期的标记事件生成的第二特效数据;
    所述控制所述AR设备展示所述AR特效,包括:
    在识别到所述日历中的至少一个预设日期的情况下,基于所述第二特效数据,控制所述AR设备展示与所述日历中的至少一个预设日期匹配的所述AR特效。
  11. 一种增强现实场景下的展示装置,包括:
    获取模块,配置为获取增强现实AR设备拍摄的当前场景图像;
    第一控制模块,配置为在识别到所述当前场景图像中包含目标实体对象的情况下,确定与所述目标实体对象匹配的AR特效的第一展示位置数据,并基于所述第一展示位置数据,控制所述AR设备展示所述AR特效;
    第二控制模块,配置为在展示所述AR特效的过程中,响应于在所述当前场景图像中未识别到所述目标实体对象,确定所述AR特效的第二展示位置数据,并基于所述第二展示位置数据,控制所述AR设备按照所述AR特效的已展示进度继续展示所述AR特效。
  12. 一种电子设备,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当电子设备运行时,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行时执行如权利要求1至10任一所述的增强现实场景下的展示方法。
  13. 一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行如权利要求1至10任一所述的增强现实场景下的展示方法。
  14. 一种计算机程序，所述计算机程序包括计算机可读代码，在所述计算机可读代码在电子设备中运行的情况下，所述电子设备的处理器执行用于实现如权利要求1至10任一所述的增强现实场景下的展示方法。
PCT/CN2021/102206 2020-11-06 2021-06-24 增强现实场景下的展示方法、装置、设备、介质及程序 WO2022095468A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022530223A JP2023504608A (ja) 2020-11-06 2021-06-24 拡張現実場面における表示方法、装置、機器、媒体及びプログラム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011232913.8 2020-11-06
CN202011232913.8A CN112348968B (zh) 2020-11-06 2020-11-06 增强现实场景下的展示方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022095468A1 true WO2022095468A1 (zh) 2022-05-12

Family

ID=74428956

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/102206 WO2022095468A1 (zh) 2020-11-06 2021-06-24 增强现实场景下的展示方法、装置、设备、介质及程序

Country Status (3)

Country Link
JP (1) JP2023504608A (zh)
CN (1) CN112348968B (zh)
WO (1) WO2022095468A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116663329A (zh) * 2023-07-26 2023-08-29 西安深信科创信息技术有限公司 自动驾驶仿真测试场景生成方法、装置、设备及存储介质

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN112348968B (zh) * 2020-11-06 2023-04-25 北京市商汤科技开发有限公司 增强现实场景下的展示方法、装置、电子设备及存储介质
CN113240819A (zh) * 2021-05-24 2021-08-10 中国农业银行股份有限公司 穿戴效果的确定方法、装置和电子设备
CN113359986B (zh) * 2021-06-03 2023-06-20 北京市商汤科技开发有限公司 增强现实数据展示方法、装置、电子设备及存储介质
CN113867875A (zh) * 2021-09-30 2021-12-31 北京市商汤科技开发有限公司 标记对象的编辑及显示方法、装置、设备、存储介质
CN114327059A (zh) * 2021-12-24 2022-04-12 北京百度网讯科技有限公司 手势处理方法、装置、设备以及存储介质

Citations (4)

Publication number Priority date Publication date Assignee Title
US20170092001A1 (en) * 2015-09-25 2017-03-30 Intel Corporation Augmented reality with off-screen motion sensing
CN110475150A (zh) * 2019-09-11 2019-11-19 广州华多网络科技有限公司 虚拟礼物特效的渲染方法和装置、直播系统
US20200342681A1 (en) * 2018-06-19 2020-10-29 Google Llc Interaction system for augmented reality objects
CN112348968A (zh) * 2020-11-06 2021-02-09 北京市商汤科技开发有限公司 增强现实场景下的展示方法、装置、电子设备及存储介质

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JP5145444B2 (ja) * 2011-06-27 2013-02-20 株式会社コナミデジタルエンタテインメント 画像処理装置、画像処理装置の制御方法、及びプログラム
CN110180167B (zh) * 2019-06-13 2022-08-09 张洋 增强现实中智能玩具追踪移动终端的方法
CN110716645A (zh) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 一种增强现实数据呈现方法、装置、电子设备及存储介质
CN111640169A (zh) * 2020-06-08 2020-09-08 上海商汤智能科技有限公司 历史事件呈现方法、装置、电子设备及存储介质
CN111667588A (zh) * 2020-06-12 2020-09-15 上海商汤智能科技有限公司 人物图像处理方法、装置、ar设备以及存储介质

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US20170092001A1 (en) * 2015-09-25 2017-03-30 Intel Corporation Augmented reality with off-screen motion sensing
US20200342681A1 (en) * 2018-06-19 2020-10-29 Google Llc Interaction system for augmented reality objects
CN110475150A (zh) * 2019-09-11 2019-11-19 广州华多网络科技有限公司 虚拟礼物特效的渲染方法和装置、直播系统
CN112348968A (zh) * 2020-11-06 2021-02-09 北京市商汤科技开发有限公司 增强现实场景下的展示方法、装置、电子设备及存储介质

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116663329A (zh) * 2023-07-26 2023-08-29 西安深信科创信息技术有限公司 自动驾驶仿真测试场景生成方法、装置、设备及存储介质
CN116663329B (zh) * 2023-07-26 2024-03-29 安徽深信科创信息技术有限公司 自动驾驶仿真测试场景生成方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN112348968B (zh) 2023-04-25
JP2023504608A (ja) 2023-02-06
CN112348968A (zh) 2021-02-09

Similar Documents

Publication Publication Date Title
WO2022095467A1 (zh) 增强现实场景下的展示方法、装置、设备、介质及程序
WO2022095468A1 (zh) 增强现实场景下的展示方法、装置、设备、介质及程序
US10055888B2 (en) Producing and consuming metadata within multi-dimensional data
US8644467B2 (en) Video conferencing system, method, and computer program storage device
US9595127B2 (en) Three-dimensional collaboration
US20220319139A1 (en) Multi-endpoint mixed-reality meetings
CN106846497B (zh) 应用于终端的呈现三维地图的方法和装置
US20120162384A1 (en) Three-Dimensional Collaboration
KR20140082610A (ko) 휴대용 단말을 이용한 증강현실 전시 콘텐츠 재생 방법 및 장치
US20230073750A1 (en) Augmented reality (ar) imprinting methods and systems
WO2023051356A1 (zh) 一种虚拟对象的显示方法及装置、电子设备和存储介质
JP7150894B2 (ja) Arシーン画像処理方法及び装置、電子機器並びに記憶媒体
KR102442637B1 (ko) 증강현실 추적 알고리즘을 위한 카메라 움직임 추정 방법 및 그 시스템
WO2022252688A1 (zh) 增强现实数据呈现方法、装置、电子设备及存储介质
WO2022166173A1 (zh) 视频资源处理方法、装置、计算机设备、存储介质及程序
CN104331241A (zh) 一种全景互动移动终端展示系统及方法
CN113178017A (zh) Ar数据展示方法、装置、电子设备及存储介质
CN105892890A (zh) 一种全景互动移动终端展示系统及方法
WO2019114092A1 (zh) 图像增强现实的方法、装置、增强现实显示设备及终端
CN109636917B (zh) 三维模型的生成方法、装置、硬件装置
KR20200103278A (ko) 뷰 방향이 표시되는 vr 컨텐츠 제공 시스템 및 방법
CN113031846B (zh) 用于展示任务的描述信息的方法、装置及电子设备
JP2023542598A (ja) 文字の表示方法、装置、電子機器及び記憶媒体
JP7214926B1 (ja) 画像処理方法、装置、電子機器及びコンピュータ読み取り可能な記憶媒体
US11915371B2 (en) Method and apparatus of constructing chess playing model

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022530223

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21888171

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21888171

Country of ref document: EP

Kind code of ref document: A1