WO2023045964A1 - Display method, apparatus, device, computer-readable storage medium, computer program product and computer program - Google Patents


Info

Publication number
WO2023045964A1
WO2023045964A1 (PCT/CN2022/120170)
Authority
WO
WIPO (PCT)
Prior art keywords
augmented reality
reality environment
target object
target
identification code
Prior art date
Application number
PCT/CN2022/120170
Other languages
English (en)
French (fr)
Inventor
欧华富
文一凡
李斌
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2023045964A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14 Methods or arrangements for sensing record carriers using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404 Methods for optical code recognition
    • G06K 7/1408 Methods for optical code recognition, the method being specifically adapted for the type of code
    • G06K 7/1417 2D bar codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • The present disclosure relates to, but is not limited to, terminal technologies, and in particular to a display method, apparatus, device, computer-readable storage medium, computer program product and computer program.
  • Augmented Reality (AR)
  • In related solutions, after scanning a specific identification code through a browser or a third-party identification code scanner, at least one page or link jump is required before entering the AR environment that displays the AR effect corresponding to the current identification code. This requires additional waiting time, which affects the user's viewing experience, and the reliance on a third-party identification code scanner limits the application range of AR effect display.
  • Embodiments of the present disclosure provide a display method, apparatus, device, and computer-readable storage medium.
  • An embodiment of the present disclosure provides a display method, including:
  • in response to a scanning operation on a first target identification code in an augmented reality environment, determining a first target object and first virtual effect data based on the first target identification code; and in response to recognizing the first target object in the augmented reality environment, displaying, based on the first virtual effect data, a first augmented reality effect corresponding to the first target object in the augmented reality environment.
  • In some embodiments, determining the first target object and the first virtual effect data based on the first target identification code includes: parsing the first target identification code to obtain target identification information; determining the first target object corresponding to the target identification information based on a preset mapping relationship between identification information and target objects; and determining the first virtual effect data corresponding to the target identification information based on a preset mapping relationship between identification information and virtual effect data.
  • In some embodiments, the method further includes: when the display of the first augmented reality effect ends, in response to a scanning operation on a second target identification code in the augmented reality environment, determining a second target object and second virtual effect data based on the second target identification code; and in response to recognizing the second target object in the augmented reality environment, displaying, based on the second virtual effect data, a second augmented reality effect corresponding to the second target object in the augmented reality environment.
  • the method further includes: entering the augmented reality environment in response to a start operation on the augmented reality environment.
  • In some embodiments, the scanning operation on the first target identification code in the augmented reality environment includes scanning the first target identification code with an image acquisition device of an electronic device in the augmented reality environment; the method further includes activating the image acquisition device in the augmented reality environment in response to entering the augmented reality environment.
  • In some embodiments, the method further includes: in the augmented reality environment, acquiring an image to be recognized in real time using the image acquisition device, and recognizing the first target object in the image to be recognized. Displaying the first augmented reality effect then includes: in response to recognizing the first target object in the image to be recognized, displaying, based on the first virtual effect data, the first augmented reality effect corresponding to the first target object in the augmented reality environment.
  • In some embodiments, entering the augmented reality environment in response to the start operation on the augmented reality environment includes: entering the augmented reality environment in response to an access operation on the entry address of the augmented reality environment.
  • the augmented reality environment may include at least one interactive interface among an application platform, an application program or a small program running on an electronic device for presenting augmented reality effects.
  • the first target object may include a two-dimensional object and/or a three-dimensional object associated with the first target identification code.
  • The two-dimensional object may include at least one of the following: a photo of an exhibit, a portrait of a person, or a car poster; the three-dimensional object may include at least one of the following: exhibits, people, buildings, or vehicles in a real scene.
  • An embodiment of the present disclosure provides a display device, including: a first determining part configured to, in response to a scanning operation on a first target identification code in an augmented reality environment, determine a first target object and first virtual effect data based on the first target identification code; and a first display part configured to, in response to recognizing the first target object in the augmented reality environment, display, based on the first virtual effect data, a first augmented reality effect corresponding to the first target object in the augmented reality environment.
  • An embodiment of the present disclosure provides an electronic device, including: a display screen; a memory for storing an executable computer program; and a processor for executing the executable computer program stored in the memory, in combination with the display screen to realize the above-mentioned display method.
  • An embodiment of the present disclosure provides a computer-readable storage medium storing a computer program for realizing the above-mentioned display method when executed by a processor.
  • In the embodiments of the present disclosure, in response to a scanning operation on a first target identification code in an augmented reality environment, the first target object and the first virtual effect data are determined based on the first target identification code; and in response to the first target object being recognized in the augmented reality environment, a first augmented reality effect corresponding to the first target object is displayed in the augmented reality environment based on the first virtual effect data.
  • On the one hand, the first target identification code can be scanned in the augmented reality environment without relying on a third-party identification code scanner, making augmented reality effect display more widely applicable; on the other hand, after the first target identification code is scanned, the first target object can be directly recognized and the first augmented reality effect displayed based on the first virtual effect data, which improves the display efficiency of the augmented reality effect and enhances the user's viewing experience.
  • FIG. 1A is a schematic diagram of an implementation architecture of a display system provided by an embodiment of the present disclosure
  • FIG. 1B is a schematic diagram of an implementation flow of a display method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of an implementation flow of a display method provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of an implementation flow of a display method provided by an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of an implementation flow of a display method provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of an implementation flow of a display method provided by an embodiment of the present disclosure.
  • FIG. 6A is a schematic diagram of an implementation flow of an AR effect display method in a single AR effect viewing scene provided by an embodiment of the present disclosure
  • FIG. 6B is a schematic diagram of an implementation flow of an AR effect display method in a multi-AR effect viewing scene provided by an embodiment of the present disclosure
  • FIG. 7 is a schematic diagram of the composition and structure of a display device provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of a hardware entity of an electronic device provided by an embodiment of the present disclosure.
  • A Mini Program, also known as a Web Program, is a program developed in a front-end-oriented language (such as JavaScript) and implemented in Hyper Text Markup Language (HTML) pages. It is downloaded by a client (such as a browser or any client with an embedded browser core) via a network (such as the Internet) and interpreted and executed in the client's browser environment, saving the step of installation on the client.
  • For example, a mini program for realizing a singing service can be downloaded and run in a social network client.
  • Augmented Reality (AR)
  • AR (augmented reality) technology is a relatively new technology that promotes the integration of real-world information and virtual-world information. Physical information that is difficult to experience in the real world is simulated on the basis of computers and other science and technology, and the virtual information content is superimposed on the real world, where it can be perceived by human senses, thereby achieving a sensory experience beyond reality. After the real environment and virtual objects are overlaid, they can exist simultaneously in the same picture and space.
  • WebView, a web browsing control, can be embedded in the client to realize hybrid front-end development, and is used for request processing, web page loading, rendering, web page interaction, etc.
  • In a related AR effect display solution, after scanning a specific identification code through a browser or a third-party identification code scanner, at least one page or link jump is required before the AR environment can display the AR effect corresponding to the current identification code. On the one hand, relying on a third-party identification code scanner limits the application range of AR effect display; on the other hand, because at least one page or link jump is needed, the user can view the AR effect corresponding to the current identification code only after the jump is completed, which requires additional waiting time and affects the viewing experience. Especially in a scene with multiple AR effects, the user must repeat the scan, jump, and view steps to experience the AR effects corresponding to different identification codes; this cumbersome interaction further degrades the user experience.
  • the embodiments of the present disclosure provide a display method, which can make the display of augmented reality effects more widely used, improve the display efficiency of augmented reality effects, and improve the viewing experience of users.
  • the display method provided by the embodiments of the present disclosure can be applied to electronic devices.
  • the electronic device provided by the embodiment of the present disclosure can be implemented as AR glasses, notebook computer, tablet computer, desktop computer, set-top box, mobile device (for example, mobile phone, portable music player, personal digital assistant, dedicated message device, portable game device) and other types of terminals.
  • the display method provided by the embodiments of the present disclosure may be applied to a client application platform of an electronic device.
  • the client application platform may be a network (Web) application platform or a small program.
  • the display method provided by the embodiments of the present disclosure may also be applied to an application program of an electronic device.
  • FIG. 1A is a schematic diagram of an implementation architecture of a display system provided by an embodiment of the present disclosure.
  • Electronic devices: the terminal 400-1 and the terminal 400-2
  • the network 300 may be a wide area network or a local area network, or a combination of both.
  • The electronic device is configured to send the first target identification code to the server 200 in response to a scanning operation on the first target identification code in the augmented reality environment; the server 200 is configured to determine the first target object and the first virtual effect data based on the first target identification code, and return the first target object and the first virtual effect data to the electronic device. After receiving them, the electronic device, in response to recognizing the first target object in the augmented reality environment, displays, based on the first virtual effect data, a first augmented reality effect corresponding to the first target object in its display interface (the display interface 401-1 of the terminal 400-1 and the display interface 401-2 of the terminal 400-2 are shown as examples). In this way, the AR effect is presented on the electronic device.
  • The server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
  • the terminal and the server may be connected directly or indirectly through wired or wireless communication, which is not limited in this embodiment of the present disclosure.
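The device/server division of labor described above can be sketched as follows. This is a minimal illustration under assumed names (`EFFECT_TABLE`, `server_resolve`, `device_scan` are not from the disclosure), and the network transport between terminal and server is elided.

```python
# Hypothetical sketch of the exchange between the electronic device and server 200.
# All names and table contents are illustrative assumptions.

# Server side: preset mapping from identification code to target object and effect data.
EFFECT_TABLE = {
    "code-001": {"target_object": "exhibit-photo-A", "virtual_effect": "virtual-calendar-sticker"},
    "code-002": {"target_object": "car-poster-B", "virtual_effect": "virtual-narrator-animation"},
}

def server_resolve(identification_code: str) -> dict:
    """Server 200: determine the target object and virtual effect data from the code."""
    record = EFFECT_TABLE.get(identification_code)
    if record is None:
        raise KeyError(f"unknown identification code: {identification_code}")
    return record

def device_scan(identification_code: str) -> dict:
    """Electronic device: send the scanned code to the server and keep the reply."""
    return server_resolve(identification_code)  # network call omitted in this sketch

reply = device_scan("code-001")
# The device now knows which object to recognize and which effect to display.
```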
  • FIG. 1B is a schematic diagram of an implementation flow of a display method provided by an embodiment of the present disclosure. As shown in FIG. 1B , the method includes:
  • Step S101 in response to a scanning operation on a first target identification code in an augmented reality environment, based on the first target identification code, determine a first target object and first virtual effect data.
  • the augmented reality environment may be any suitable interactive interface for presenting augmented reality effects, and may be implemented based on native augmented reality technology, or may be realized using webpage-based augmented reality technology, which is not limited here.
  • the augmented reality environment may be an interactive interface of an application platform, application program, or applet running on an electronic device for presenting augmented reality effects.
  • the electronic device can scan or recognize any object in a real scene in an augmented reality environment, and can also scan or recognize an object in a pre-acquired image.
  • the first target identification code may be a two-dimensional code, a barcode, or other scannable codes, which are not limited in this embodiment of the present disclosure.
  • The scanning operation on the first target identification code may be an operation of scanning the first target identification code in a real scene using the camera of the electronic device in the augmented reality environment, or an operation of scanning the first target identification code in a pre-acquired image in the augmented reality environment, which is not limited in this embodiment of the present disclosure.
  • The first target object can be any suitable object associated with the first target identification code. It can be a two-dimensional image, such as a photo of an exhibit, a portrait of a person, or a car poster, or a three-dimensional object, such as an exhibit, person, building, or vehicle in a real scene, which is not limited here.
  • the first virtual effect data may be virtual special effect data for displaying an augmented reality effect corresponding to the first target object in an augmented reality environment.
  • the first virtual effect data may include at least one of the following: virtual stickers, virtual animations, and virtual items.
  • the virtual sticker can be two-dimensional or three-dimensional virtual additional information added to the real scene image collected by the electronic device.
  • For example, the virtual sticker can be a virtual calendar added to the real scene image in the augmented reality environment. The virtual animation can be a two-dimensional or three-dimensional virtual object, added to the real scene image, that moves according to preset actions; virtual objects can include virtual characters, virtual plants, virtual animals, etc. For example, a virtual animation can be a virtual narrator used to guide navigation routes in a map navigation application. The virtual item can be a two-dimensional or three-dimensional decoration added to the real scene image collected by the electronic device; for example, a virtual item can be virtual glasses added to a portrait in the real scene image in the augmented reality environment.
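The three kinds of virtual effect data just described (stickers, animations, items) could be represented by a small tagged structure such as the sketch below. The field names, kind tags, and example payloads are assumptions for illustration, not a format defined by the disclosure.

```python
from dataclasses import dataclass, field

# Minimal sketch of the three virtual effect kinds named above.
# Field names and "kind" tags are illustrative assumptions.

@dataclass
class VirtualEffect:
    kind: str                  # "sticker" | "animation" | "item"
    dimensions: int            # 2 or 3, per the 2D/3D distinction above
    payload: dict = field(default_factory=dict)

calendar_sticker = VirtualEffect(kind="sticker", dimensions=2,
                                 payload={"asset": "virtual_calendar.png"})
narrator = VirtualEffect(kind="animation", dimensions=3,
                         payload={"actions": ["walk", "point"], "role": "narrator"})
glasses = VirtualEffect(kind="item", dimensions=3,
                        payload={"anchor": "face", "asset": "glasses.glb"})

# All three kinds from the text are covered.
assert {e.kind for e in (calendar_sticker, narrator, glasses)} == {"sticker", "animation", "item"}
```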
  • the first target object and the first virtual effect data can be directly carried in the first target identification code, and the first target object and the first virtual effect data can be obtained by parsing the first target identification code.
  • the mapping relationship between the identification code, the object and the virtual effect data can be preset, and the first target object and the first virtual effect data corresponding to the first target identification code can be determined by querying the mapping relationship.
  • Step S102 in response to recognizing the first target object in the augmented reality environment, based on the first virtual effect data, display a first augmented reality corresponding to the first target object in the augmented reality environment realistic effect.
  • the electronic device may turn on its own image collection device (for example, a camera) to collect images, and recognize the collected images. If the first target object is recognized, based on the first virtual effect data, the first augmented reality effect corresponding to the first target object may be displayed in the augmented reality environment. During implementation, the first augmented reality effect corresponding to the first target object may be displayed by rendering the first virtual effect data in an augmented reality environment.
  • The electronic device can also recognize a pre-acquired image in the augmented reality environment, and if the first target object is recognized, display, based on the first virtual effect data, the first augmented reality effect corresponding to the first target object in the augmented reality environment.
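The recognize-then-display behavior of step S102 can be sketched as a loop over captured frames. The recognizer and renderer below are simplified stand-ins with assumed names, not the actual image-matching or AR rendering pipeline.

```python
def recognize(frame: str, target_object: str) -> bool:
    """Stand-in recognizer: a real system would run image matching on the frame."""
    return target_object in frame

def render_effect(effect_data: str) -> str:
    """Stand-in for rendering the virtual effect data in the AR environment."""
    return f"AR effect rendered: {effect_data}"

def display_loop(frames, target_object, effect_data):
    # Step S102: once the target object is recognized in a frame,
    # display the corresponding AR effect based on the effect data.
    for frame in frames:
        if recognize(frame, target_object):
            return render_effect(effect_data)
    return None  # target never recognized; nothing displayed

result = display_loop(["street scene", "exhibit-photo-A on wall"],
                      "exhibit-photo-A", "virtual-calendar-sticker")
```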
  • the determination of the first target object and the first virtual effect data based on the first target identification code in the above step S101 may include the following steps S111 to S113:
  • Step S111 analyzing the first target identification code to obtain target identification information.
  • the target identification information may be identification information carried in the first target identification code and used to characterize the first target identification code.
  • those skilled in the art may carry appropriate target identification information in the first target identification code according to actual conditions, which is not limited here.
  • the target identification information is encoding information of the first target identification code, and different identification codes have different encoding information.
  • Step S112 Determine a first target object corresponding to the target identification information based on the preset mapping relationship between the identification information and the target object.
  • The mapping relationship between the identification information and the target object may be set in advance according to the actual situation, which is not limited here.
  • the first target object corresponding to the target identification information can be obtained by querying the mapping relationship between the identification information and the target object by using the target identification information.
  • Step S113 based on the preset mapping relationship between the identification information and the virtual effect data, determine the first virtual effect data corresponding to the target identification information.
  • The mapping relationship between the identification information and the virtual effect data may be set in advance according to the actual situation, which is not limited here.
  • the first virtual effect data corresponding to the target identification information can be obtained by querying the mapping relationship between the identification information and the virtual effect data by using the target identification information.
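Taken together, steps S111 to S113 amount to one parse followed by two lookups in the preset mappings. A minimal sketch, with assumed table contents and a stand-in decoder:

```python
# Step S111: parse the identification code to get its encoding information.
def parse_identification_code(code_image: str) -> str:
    # Stand-in: a real implementation would decode a QR code or barcode here.
    return code_image.split(":", 1)[-1]

# Preset mappings (assumed example contents).
ID_TO_OBJECT = {"enc-42": "exhibit-photo-A"}   # identification info -> target object
ID_TO_EFFECT = {"enc-42": "virtual-calendar-sticker"}  # identification info -> effect data

def determine_object_and_effect(code_image: str):
    target_id = parse_identification_code(code_image)  # S111: parse
    target_object = ID_TO_OBJECT[target_id]            # S112: query object mapping
    effect_data = ID_TO_EFFECT[target_id]              # S113: query effect mapping
    return target_object, effect_data

obj, effect = determine_object_and_effect("qr:enc-42")
```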
  • the augmented reality environment may include at least one interactive interface among an application platform, an application program or a small program running on an electronic device for presenting augmented reality effects.
  • the first target object may include a two-dimensional object and/or a three-dimensional object associated with the first target identification code.
  • The two-dimensional object may include at least one of the following: a photo of an exhibit, a portrait of a person, or a car poster; the three-dimensional object may include at least one of the following: exhibits, people, buildings, or vehicles in a real scene.
  • In the embodiments of the present disclosure, in response to a scanning operation on the first target identification code in the augmented reality environment, the first target object and the first virtual effect data are determined; and in response to the first target object being recognized in the augmented reality environment, a first augmented reality effect corresponding to the first target object is displayed in the augmented reality environment based on the first virtual effect data.
  • On the one hand, the first target identification code can be scanned in the augmented reality environment without relying on a third-party identification code scanner, making augmented reality effect display more widely applicable; on the other hand, after the first target identification code is scanned, the first target object can be directly recognized and the first augmented reality effect displayed based on the first virtual effect data, improving the display efficiency of the augmented reality effect and enhancing the user's viewing experience.
  • FIG. 2 is a schematic diagram of an implementation flow of a display method provided by an embodiment of the present disclosure. As shown in FIG. 2 , the method includes:
  • Step S201 in response to a scanning operation on a first target identification code in an augmented reality environment, based on the first target identification code, determine a first target object and first virtual effect data.
  • Step S202 in response to recognizing the first target object in the augmented reality environment, based on the first virtual effect data, display a first augmented reality corresponding to the first target object in the augmented reality environment realistic effect.
  • the above steps S201 to S202 correspond to the above steps S101 to S102, and for implementation, reference may be made to the specific implementation manners of the above steps S101 to S102.
  • Step S203 when the display of the first augmented reality effect ends, in response to the scanning operation of the second target identification code in the augmented reality environment, based on the second target identification code, determine the second target object and second virtual effect data.
  • the first augmented reality effect may be displayed in the form of an augmented reality dynamic graph, or may be displayed in the form of an augmented reality video, which is not limited in this embodiment of the present disclosure.
  • the user may continue to scan the second target identification code in the augmented reality environment after the display of the first augmented reality effect is completed.
  • the second target identification code may be the same as the first target identification code, or different from the first target identification code, which is not limited here.
  • The second target object can be any suitable object associated with the second target identification code. It can be a two-dimensional image, such as a photo of an exhibit, a portrait of a person, or a car poster, or a three-dimensional object, such as an exhibit, person, building, or vehicle in a real scene, which is not limited here.
  • the second virtual effect data may be virtual special effect data for displaying augmented reality effects corresponding to the second target object in an augmented reality environment, such as virtual stickers, virtual animations, virtual items, and the like.
  • the second target object and the second virtual effect data can be directly carried in the second target identification code, and the second target object and the second virtual effect data can be obtained by parsing the second target identification code.
  • the mapping relationship between the identification code, the object and the virtual effect data may be preset, and the second target object and the second virtual effect data corresponding to the second target identification code are determined by querying the mapping relationship.
  • Step S204 in response to recognizing the second target object in the augmented reality environment, based on the second virtual effect data, display a second augmented reality corresponding to the second target object in the augmented reality environment realistic effect.
  • step S204 corresponds to the above-mentioned step S202, and for implementation, reference may be made to the specific implementation manner of the above-mentioned step S202.
  • In the embodiments of the present disclosure, when the display of the first augmented reality effect ends, in response to a scanning operation on the second target identification code in the augmented reality environment, the second target object and the second virtual effect data are determined based on the second target identification code; and in response to recognizing the second target object in the augmented reality environment, a second augmented reality effect corresponding to the second target object is displayed in the augmented reality environment based on the second virtual effect data.
  • In this way, the user can directly scan the second target identification code in the same augmented reality environment and, by recognizing the second target object, display the second augmented reality effect in that environment. This makes the interaction during the viewing of multiple augmented reality effects easier and improves the display efficiency of multiple augmented reality effects, thereby further improving the user's viewing experience.
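The multi-effect flow of FIG. 2, where the user stays in one augmented reality environment and scans one code after another, can be sketched as a simple session loop. The table contents and helper names are illustrative assumptions; recognition of each target object in the camera feed is elided.

```python
# Sketch of the FIG. 2 flow: one AR environment handles a sequence of
# identification codes without leaving or re-entering the environment.

CODE_TABLE = {
    "code-1": ("object-1", "effect-1"),
    "code-2": ("object-2", "effect-2"),
}

def handle_scan(code: str) -> str:
    target_object, effect_data = CODE_TABLE[code]  # determine object + effect data
    # (recognizing target_object in the camera feed is elided in this sketch)
    return f"{target_object}:{effect_data}"        # effect displayed for the object

def session(scans):
    shown = []
    for code in scans:  # each new scan starts after the previous effect ends
        shown.append(handle_scan(code))
    return shown

displayed = session(["code-1", "code-2", "code-1"])
```

Note that the same code may be scanned again (the second target identification code can equal the first, per the text above), which this loop handles without special-casing.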
  • FIG. 3 is a schematic diagram of an implementation flow of a display method provided by an embodiment of the present disclosure. As shown in FIG. 3 , the method includes:
  • Step S301 in response to the start operation of the augmented reality environment, enter the augmented reality environment.
  • The start operation on the augmented reality environment may be any suitable operation that triggers the electronic device to display an interactive interface for presenting augmented reality effects, including but not limited to launching a mini program that presents augmented reality effects, opening an entry link to the augmented reality environment, etc.
  • During implementation, the user can use an appropriate start operation on the electronic device according to the actual situation to enter the augmented reality environment, which is not limited here.
  • Step S302 in response to the scanning operation of the first target identification code in the augmented reality environment, based on the first target identification code, determine the first target object and the first virtual effect data.
  • Step S303 in response to recognizing the first target object in the augmented reality environment, based on the first virtual effect data, display a first augmented reality corresponding to the first target object in the augmented reality environment realistic effect.
  • steps S302 to S303 correspond to the above-mentioned steps S101 to S102, and for implementation, reference may be made to the specific implementation manners of the above-mentioned steps S101 to S102.
  • the above step S301 may include:
  • Step S311 in response to the access operation to the entry address of the augmented reality environment, enter the augmented reality environment.
  • the entry address of the augmented reality environment may include, but is not limited to, one or more of an entry button in an application program, a mini program in an application platform, an entry link, etc., which are not limited here.
  • the user may enter the augmented reality environment in response to a startup operation of the augmented reality environment before scanning the first target identification code.
  • the first target object can be identified directly after the first target identification code is scanned, and the time spent waiting for the augmented reality environment to load after scanning can be reduced, thereby improving the display efficiency of the augmented reality effect and the user's viewing experience.
  • FIG. 4 is a schematic diagram of an implementation flow of a display method provided by an embodiment of the present disclosure. As shown in FIG. 4 , the method includes:
  • Step S401 in response to the start operation of the augmented reality environment, enter the augmented reality environment.
  • step S401 corresponds to the above-mentioned step S301, and for implementation, reference may be made to the specific implementation manner of the above-mentioned step S301.
  • Step S402 in response to entering the augmented reality environment, start the image acquisition device of the electronic device in the augmented reality environment.
  • the image acquisition device may be a camera installed at any suitable position on the electronic device; it may be a front camera or a rear camera, and it may be a built-in camera or an external camera, which is not limited here.
  • any appropriate instruction can be executed to start the image acquisition device of the electronic device.
  • Step S403 in response to the scanning operation of the first target identification code by using the image acquisition device of the electronic device in the augmented reality environment, based on the first target identification code, determine the first target object and the first virtual effect data .
  • Step S404 in response to recognizing the first target object in the augmented reality environment, display, based on the first virtual effect data, a first augmented reality effect corresponding to the first target object in the augmented reality environment.
  • steps S403 to S404 correspond to the above steps S101 to S102, and for implementation, reference may be made to the specific implementation manners of the above steps S101 to S102.
  • in response to entering the augmented reality environment, the image acquisition device of the electronic device is started in the augmented reality environment; and in response to the scanning operation performed on the first target identification code by using the image acquisition device in the augmented reality environment, the first target object and the first virtual effect data are determined based on the first target identification code.
  • in this way, the image acquisition device can be started automatically after entering the augmented reality environment and used to scan the first target identification code, thereby further simplifying the user's operation, reducing the user's waiting time, and further improving the user experience.
  • FIG. 5 is a schematic diagram of an implementation flow of a display method provided by an embodiment of the present disclosure. As shown in FIG. 5 , the method includes:
  • Step S501 in response to a startup operation on the augmented reality environment, enter the augmented reality environment.
  • Step S502 in response to entering the augmented reality environment, start the image acquisition device in the augmented reality environment.
  • Step S503 in response to the scanning operation of the first target identification code by using the image acquisition device of the electronic device in the augmented reality environment, based on the first target identification code, determine the first target object and the first virtual effect data .
  • Step S504 in the augmented reality environment, use the image acquisition device to acquire the image to be recognized in real time.
  • the image to be recognized may be an image collected by the image collection device in a real scene.
  • the electronic device may prompt the user for the first target object to be identified by displaying prompt text on the display screen or sending out a prompt voice through a speaker.
  • the user can point the image acquisition device at the first target object according to the prompt text or prompt voice, and the electronic device can use the image acquisition device to collect real-time images of the area where the first target object is located in the real scene in the augmented reality environment, and display the image as the image to be recognized.
  • Step S505 identifying the first target object in the image to be identified.
  • any suitable target recognition algorithm may be used to identify the first target object in the image to be recognized, such as a key point detection algorithm, a sliding window algorithm, a candidate region algorithm, etc., which are not limited here.
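  As an illustration of the sliding window algorithm mentioned above (the disclosure does not fix any particular recognition algorithm), a minimal exact-match sliding-window search over a grayscale grid might look like the following sketch; the grid representation and function names are assumptions, not part of the disclosure:

```typescript
// Hedged sketch: a naive sliding-window template match, standing in for the
// unspecified target-recognition step. Images are plain 2D number grids.
type Gray = number[][];

// Return the top-left position where `template` exactly matches inside `image`,
// or null if no window matches. Real systems would use tolerant scoring.
function slidingWindowMatch(image: Gray, template: Gray): { row: number; col: number } | null {
  const th = template.length, tw = template[0].length;
  for (let r = 0; r + th <= image.length; r++) {
    for (let c = 0; c + tw <= image[0].length; c++) {
      let ok = true;
      for (let i = 0; i < th && ok; i++) {
        for (let j = 0; j < tw && ok; j++) {
          if (image[r + i][c + j] !== template[i][j]) ok = false;
        }
      }
      if (ok) return { row: r, col: c };
    }
  }
  return null;
}
```

  In practice the recognizer would score candidate windows rather than demand exact equality, but the scan-every-window structure is the same.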
  • Step S506 in response to recognizing the first target object in the image to be recognized, display, based on the first virtual effect data, a first augmented reality effect corresponding to the first target object in the augmented reality environment.
  • the image acquisition device is used to collect the image to be identified in real time, and the first target object in the image to be identified is identified.
  • a first target object is identified in the image to be identified, and based on the first virtual effect data, a first augmented reality effect corresponding to the first target object is displayed in an augmented reality environment.
  • In the following, the case where the target identification code is a target QR code, the target object is a target image, and the AR effect is a web-based AR effect is taken as an example for illustration.
  • web-based AR effect display combining QR codes and image recognition has attracted much attention due to its lightweight and engaging nature, and has been applied in various fields.
  • the process of the AR effect display solution includes the following steps:
  • Step S601 the user scans the QR code through a third-party QR code scanner in a browser, application platform or applet running on the electronic device;
  • Step S602 the electronic device jumps to the AR environment according to the link identified from the two-dimensional code, and displays a specific AR effect when a specific image is recognized;
  • step S603 the user exits the AR environment after watching the AR effect, and repeats the above steps S601 and S602 to watch the next AR effect.
  • the AR effect display solution in the above-mentioned related technologies implements QR code scanning and image recognition independently and in stages.
  • the operations of scanning, jumping, and viewing need to be repeated, and the interaction is cumbersome, which affects the user experience.
  • most of the above solutions rely on third-party applications or QR code scanners in the application platform, which adds additional jump interactions and limits the application range of AR effects.
  • the embodiment of the present disclosure provides a non-perceptual AR effect display method based on two-dimensional code and image recognition.
  • Users can enter the AR environment through various entry addresses of the AR environment to experience the viewing experience of the AR effect.
  • the entry addresses may include, but are not limited to, the address of a WebView environment provided by social platform official accounts, mini programs, browsers, or other applications.
  • After the electronic device enters the AR environment, it can automatically turn on the camera for QR code scanning and image recognition, and display the AR effect bound to the QR code.
  • Users can seamlessly switch between the display scenes of various AR effects in the AR environment. For example, in an exhibition, exhibitors need to display a variety of products covered by AR effects. When these products are displayed in relatively concentrated positions, users can use the AR effect display method provided by the embodiments of the present disclosure to experience the various AR effects seamlessly, without repeating the process of scanning, jumping, and viewing, which greatly improves the user's viewing experience.
  • the AR effect display method may include the following steps S611 to S616:
  • Step S611 acquiring the created first AR effect, and generating a first target QR code corresponding to the first AR effect.
  • Step S612 determine the first target image corresponding to the first target QR code, and bind the first target QR code with the first target image; the first target QR code is bound to the first target image by establishing a mapping relationship between the target identification information in the first target QR code and the first target image.
  • Step S613 generating an entry link of the AR environment.
  • Step S614 access the entry link through a browser containing a web view (WebView) control, enter the AR environment, and turn on the camera.
  • Step S615 in the current AR environment, scan the first target QR code, parse the target identification information in the first target QR code, and determine the first target image through the mapping relationship between the target identification information and the first target image.
  • Step S616 in the current AR environment, recognize the first target image in the real scene, and display the first AR effect corresponding to the first target QR code when the first target image is recognized.
  • the user may close the browser.
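  The single-effect flow of steps S611 to S616 can be sketched as plain data transformations; the payload format, identifiers, and function names below are illustrative assumptions, not taken from the disclosure:

```typescript
// Hedged sketch of the single-effect flow (steps S611–S616). The QR payload
// stands in for target identification information; the binding table stands
// in for the preset mapping relationships.
interface Binding { imageId: string; effectId: string }

// S611–S612: the backend binds each QR code to a target image and an AR effect.
const bindings = new Map<string, Binding>([
  ["qr-001", { imageId: "img-exhibit", effectId: "fx-exhibit" }],
]);

// S615: scanning parses the identification info and resolves the bound image.
function onScan(qrPayload: string): Binding | undefined {
  return bindings.get(qrPayload);
}

// S616: once the bound image is recognized in the real scene, the effect plays.
function onRecognize(binding: Binding, recognizedImageId: string): string | null {
  return recognizedImageId === binding.imageId ? `display:${binding.effectId}` : null;
}
```

  The point of the flow is that both steps happen inside the same AR environment, so no page jump separates the scan from the recognition.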
  • the AR effect display method provided by the embodiments of the present disclosure may include the following steps S621 to S628:
  • Step S621 acquiring the created first and second AR effects, and generating a first target QR code corresponding to the first AR effect, and a second target QR code corresponding to the second AR effect.
  • Step S622 determine a first target image corresponding to the first target QR code and a second target image corresponding to the second target QR code, and bind the first target QR code with the first target image and the second target QR code with the second target image; the first target QR code is bound to the first target image by establishing a mapping relationship between the target identification information in the first target QR code and the first target image, and the second target QR code is bound to the second target image by establishing a mapping relationship between the target identification information in the second target QR code and the second target image.
  • step S623 an entry link of the AR environment is generated.
  • Step S624 access the entry link through the browser containing the WebView control, enter the AR environment, and open the camera.
  • Step S625 in the current AR environment, scan the first target QR code, parse the target identification information in the first target QR code, and determine the first target image through the mapping relationship between the target identification information and the first target image.
  • Step S626 in the current AR environment, recognize the first target image in the real scene, and display the first AR effect corresponding to the first target QR code when the first target image is recognized.
  • Step S627 after the first AR effect has been displayed, in the current AR environment, scan the second target QR code, parse the target identification information in the second target QR code, and determine the second target image through the mapping relationship between the target identification information and the second target image.
  • Step S628 in the current AR environment, recognize the second target image in the real scene, and display the second AR effect corresponding to the second target QR code when the second target image is recognized.
  • the user can close the browser, or continue to scan the next target QR code to watch the next AR effect.
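  The seamless multi-effect loop of steps S621 to S628 can be sketched as a small event-driven session that never leaves the AR environment; all identifiers below are made up for illustration:

```typescript
// Hedged sketch of the multi-effect loop (steps S621–S628): one session
// alternates scan and recognize events, with no page jump between effects.
type Event =
  | { kind: "scan"; qr: string }
  | { kind: "recognize"; imageId: string };

// Stand-ins for the preset QR-to-image and QR-to-effect bindings.
const boundImage: Record<string, string> = { "qr-1": "img-1", "qr-2": "img-2" };
const boundEffect: Record<string, string> = { "qr-1": "fx-1", "qr-2": "fx-2" };

// Replays a session: each scan arms the next expected image; a matching
// recognize event displays the effect bound to the most recent scan.
function runSession(events: Event[]): string[] {
  const displayed: string[] = [];
  let armedQr: string | null = null;
  for (const ev of events) {
    if (ev.kind === "scan" && ev.qr in boundImage) {
      armedQr = ev.qr;
    } else if (ev.kind === "recognize" && armedQr && boundImage[armedQr] === ev.imageId) {
      displayed.push(boundEffect[armedQr]); // no exit/re-entry between effects
      armedQr = null;
    }
  }
  return displayed;
}
```

  Compared with the related-technology flow (steps S601 to S603), the session state persists across effects, which is what removes the repeated scan-jump-view cycle.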
  • the first target QR code and the second target QR code can correspond to the aforementioned first target identification code and second target identification code respectively, and the first target image and the second target image can correspond to the aforementioned first target object and second target object respectively.
  • the embodiments of the present disclosure provide a display device, which includes all of the units included and all of the parts included in each unit, and can be implemented by a processor in an electronic device; of course, it can also be implemented by a specific logic circuit. In the course of implementation, the processor can be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field programmable gate array (FPGA).
  • FIG. 7 is a schematic diagram of the composition and structure of a display device provided by an embodiment of the present disclosure. As shown in FIG. 7 , the display device 700 includes: a first determination part 710 and a first display part 720, wherein:
  • the first determining part 710 is configured to determine the first target object and the first virtual effect data based on the first target identification code in response to the scanning operation of the first target identification code in the augmented reality environment;
  • the first display part 720 is configured to, in response to recognizing the first target object in the augmented reality environment, display, based on the first virtual effect data, a first augmented reality effect corresponding to the first target object in the augmented reality environment.
  • the first determining part is further configured to: parse the first target identification code to obtain target identification information; determine the first target object corresponding to the target identification information based on a preset mapping relationship between identification information and target objects; and determine the first virtual effect data corresponding to the target identification information based on a preset mapping relationship between identification information and virtual effect data.
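  A minimal sketch of the first determining part's logic, assuming a hypothetical `ar://<id>` payload format and in-memory mapping tables (neither is specified by the disclosure):

```typescript
// Hedged sketch: parse the identification code to get identification info,
// then consult two preset mappings (info → target object, info → effect data).
// The payload format and table contents are illustrative assumptions.
const idToObject = new Map<string, string>([["id-42", "poster-A"]]);
const idToEffect = new Map<string, string>([["id-42", "fx-sticker-A"]]);

// Extract the target identification information from a scanned payload.
function parseIdentificationCode(payload: string): string | null {
  const m = /^ar:\/\/(.+)$/.exec(payload);
  return m ? m[1] : null;
}

// Resolve both the target object and the virtual effect data in one step.
function determine(payload: string): { targetObject: string; effectData: string } | null {
  const id = parseIdentificationCode(payload);
  if (!id) return null;
  const targetObject = idToObject.get(id);
  const effectData = idToEffect.get(id);
  return targetObject && effectData ? { targetObject, effectData } : null;
}
```

  In a deployed system the two maps would live server-side, but the two-lookup shape is the same.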
  • the device further includes: a second determining part configured to, when the display of the first augmented reality effect ends, determine a second target object and second virtual effect data based on a second target identification code in response to a scanning operation performed on the second target identification code in the augmented reality environment; and a second display part configured to, in response to recognizing the second target object in the augmented reality environment, display a second augmented reality effect corresponding to the second target object in the augmented reality environment based on the second virtual effect data.
  • the apparatus further includes: a first starting part configured to enter the augmented reality environment in response to a start operation on the augmented reality environment.
  • the first determining part is further configured to respond to the scanning operation of the first target identification code by using the image acquisition device of the electronic device in the augmented reality environment, based on the first target identification code, A first target object and first virtual effect data are determined; the device further includes: a second activation part configured to activate the image acquisition device in the augmented reality environment in response to entering the augmented reality environment.
  • the device further includes: a collection part configured to use the image acquisition device to collect an image to be recognized in real time in the augmented reality environment; and a recognition part configured to recognize the first target object in the image to be recognized; the display part is further configured to: in response to recognizing the first target object in the image to be recognized, display, based on the first virtual effect data, a first augmented reality effect corresponding to the first target object in the augmented reality environment.
  • the first launching part is further configured to enter the augmented reality environment in response to an access operation to an entry address of the augmented reality environment.
  • the augmented reality environment may include at least one interactive interface among an application platform, an application program or a small program running on an electronic device for presenting augmented reality effects.
  • the first target object may include a two-dimensional object and/or a three-dimensional object associated with the first target identification code.
  • the two-dimensional object may include at least one of the following: a photo of an exhibit, a portrait of a person, a car poster; the three-dimensional object may include at least one of the following: exhibits, people, buildings, and vehicles in real scenes.
  • a "part" may be part of a circuit, part of a processor, part of a program or software, etc.; of course, it may also be a unit, and it may be modular or non-modular.
  • This disclosure relates to the field of augmented reality.
  • By acquiring the image information of the target object in the real environment and then using various vision-related algorithms to detect or identify the relevant features, states, and attributes of the target object, an AR effect combining the virtual and the real that matches the specific application is obtained.
  • the target object may involve faces, limbs, gestures, actions, etc. related to the human body, or markers related to objects, or sand tables, display areas, or display items related to venues or places.
  • Vision-related algorithms can involve visual positioning, SLAM, 3D reconstruction, image registration, background segmentation, object key point extraction and tracking, object pose or depth detection, etc.
  • Specific applications can involve not only interactive scenes such as guided tours, navigation, explanation, reconstruction, and virtual effect overlay and display related to real scenes or objects, but also special effects processing related to people, such as interactive scenarios including makeup beautification, body beautification, special effect display, and virtual model display.
  • the relevant features, states and attributes of the target object can be detected or identified through the convolutional neural network.
  • the above-mentioned convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
  • If the above display method is implemented in the form of software functional parts and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solutions of the embodiments of the present disclosure, or the part that contributes to the related technology, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk.
  • embodiments of the present disclosure are not limited to any specific combination of hardware and software.
  • An embodiment of the present disclosure provides an electronic device, including: a display screen; a memory for storing an executable computer program; and a processor for executing the executable computer program stored in the memory and, in combination with the display screen, implementing the steps in the above display method.
  • An embodiment of the present disclosure provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps in the above method are implemented.
  • Embodiments of the present disclosure also provide a computer program, the computer program includes computer readable codes, and when the computer readable codes are read and executed by a computer, part or all of the steps in the above method embodiments are implemented.
  • Embodiments of the present disclosure also provide a computer program product, the computer program product carries a program code, and the instructions included in the program code can be used to execute some or all of the steps in the methods described in the above method embodiments.
  • the above-mentioned computer program product may be specifically implemented by means of hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium, and in other embodiments, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK) and the like.
  • FIG. 8 is a schematic diagram of a hardware entity of an electronic device in an embodiment of the present disclosure.
  • the hardware entity of the electronic device 800 includes: a display screen 801, a memory 802, and a processor 803, where the display screen 801, the memory 802, and the processor 803 are connected through a communication bus 804; the memory 802 is used to store executable computer programs; the processor 803 is used to execute the executable computer programs stored in the memory 802 and, in combination with the display screen 801, implement the method provided by the embodiments of the present disclosure, for example, the display method provided by the embodiments of the present disclosure.
  • the memory 802 can be configured to store instructions and applications executable by the processor 803, and can also cache data to be processed or already processed by the processor 803 and by various modules in the electronic device 800 (for example, image data, audio data, voice communication data, and video communication data), which can be implemented by flash memory (FLASH) or random access memory (RAM).
  • the embodiment of the present disclosure provides a computer-readable storage medium, on which a computer program is stored, configured to cause the processor 803 to implement the method provided in the embodiment of the present disclosure, for example, the display method provided in the embodiment of the present disclosure.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • all the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may serve as a single unit, or two or more units may be integrated into one unit; the above integrated unit can be realized in the form of hardware or in the form of hardware plus software functional units.
  • If the above integrated units of the present disclosure are realized in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solutions of the present disclosure, or the part that contributes to the related technology, can be embodied in the form of software products. The computer software products are stored in a storage medium and include several instructions for causing an electronic device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage medium includes various media capable of storing program codes such as removable storage devices, ROMs, magnetic disks or optical disks.
  • The embodiments of the present disclosure disclose a display method, apparatus, device, computer-readable storage medium, computer program product, and computer program.
  • the method includes: in response to a scanning operation performed on a first target identification code in an augmented reality environment, determining a first target object and first virtual effect data based on the first target identification code; and in response to recognizing the first target object in the augmented reality environment, displaying, based on the first virtual effect data, a first augmented reality effect corresponding to the first target object in the augmented reality environment.


Abstract

Embodiments of the present disclosure disclose a display method, apparatus, device, computer-readable storage medium, computer program product, and computer program. The method includes: in response to a scanning operation performed on a first target identification code in an augmented reality environment, determining a first target object and first virtual effect data based on the first target identification code; and in response to recognizing the first target object in the augmented reality environment, displaying, based on the first virtual effect data, a first augmented reality effect corresponding to the first target object in the augmented reality environment.

Description

Display method, apparatus, device, computer-readable storage medium, computer program product, and computer program
Cross-reference to related applications
The present disclosure is based on, and claims priority to, the Chinese patent application with application number 202111134428.1, filed on September 27, 2021 and entitled "Display method, apparatus, device, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to, but is not limited to, terminal technology, and in particular to a display method, apparatus, device, computer-readable storage medium, computer program product, and computer program.
Background
In the related technology, in display solutions for augmented reality (AR) effects, after a specific identification code is scanned through a browser or a third-party identification code scanner, at least one page or link jump is usually required before the AR environment can be entered to display the AR effect corresponding to the current identification code. This requires additional waiting time and affects the user's viewing experience, and the dependence on a third-party identification code scanner limits the application range of AR effect display.
Summary of the invention
Embodiments of the present disclosure provide a display method, apparatus, device, and computer-readable storage medium.
The technical solutions of the embodiments of the present disclosure are implemented as follows:
An embodiment of the present disclosure provides a display method, including:
in response to a scanning operation performed on a first target identification code in an augmented reality environment, determining a first target object and first virtual effect data based on the first target identification code;
in response to recognizing the first target object in the augmented reality environment, displaying, based on the first virtual effect data, a first augmented reality effect corresponding to the first target object in the augmented reality environment.
In some embodiments, determining the first target object and the first virtual effect data based on the first target identification code includes: parsing the first target identification code to obtain target identification information; determining the first target object corresponding to the target identification information based on a preset mapping relationship between identification information and target objects; and determining the first virtual effect data corresponding to the target identification information based on a preset mapping relationship between identification information and virtual effect data.
In some embodiments, the method further includes: when the display of the first augmented reality effect ends, in response to a scanning operation performed on a second target identification code in the augmented reality environment, determining a second target object and second virtual effect data based on the second target identification code; and in response to recognizing the second target object in the augmented reality environment, displaying, based on the second virtual effect data, a second augmented reality effect corresponding to the second target object in the augmented reality environment.
In some embodiments, the method further includes: entering the augmented reality environment in response to a start operation on the augmented reality environment.
In some embodiments, the scanning operation performed on the first target identification code in the augmented reality environment includes: a scanning operation performed on the first target identification code by using an image acquisition device of an electronic device in the augmented reality environment; and the method further includes: in response to entering the augmented reality environment, starting the image acquisition device in the augmented reality environment.
In some embodiments, the method further includes: in the augmented reality environment, using the image acquisition device to acquire an image to be recognized in real time; and recognizing the first target object in the image to be recognized; and displaying, in response to recognizing the first target object in the augmented reality environment and based on the first virtual effect data, the first augmented reality effect corresponding to the first target object in the augmented reality environment includes: in response to recognizing the first target object in the image to be recognized, displaying, based on the first virtual effect data, the first augmented reality effect corresponding to the first target object in the augmented reality environment.
In some embodiments, entering the augmented reality environment in response to the start operation on the augmented reality environment includes: entering the augmented reality environment in response to an access operation on an entry address of the augmented reality environment.
In some embodiments, the augmented reality environment may include at least one interactive interface of an application platform, an application program, or a mini program running on an electronic device for presenting augmented reality effects.
In some embodiments, the first target object may include a two-dimensional object and/or a three-dimensional object associated with the first target identification code.
In some embodiments, the two-dimensional object may include at least one of the following: a photo of an exhibit, a portrait of a person, a car poster; the three-dimensional object may include at least one of the following: exhibits, people, buildings, and vehicles in real scenes.
An embodiment of the present disclosure provides a display apparatus, including: a first determining part configured to, in response to a scanning operation performed on a first target identification code in an augmented reality environment, determine a first target object and first virtual effect data based on the first target identification code; and a first display part configured to, in response to recognizing the first target object in the augmented reality environment, display, based on the first virtual effect data, a first augmented reality effect corresponding to the first target object in the augmented reality environment.
An embodiment of the present disclosure provides an electronic device, including: a display screen; a memory for storing an executable computer program; and a processor for executing the executable computer program stored in the memory and, in combination with the display screen, implementing the above display method.
An embodiment of the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above display method.
In the embodiments of the present disclosure, in response to a scanning operation performed on a first target identification code in an augmented reality environment, a first target object and first virtual effect data are determined based on the first target identification code; and in response to recognizing the first target object in the augmented reality environment, a first augmented reality effect corresponding to the first target object is displayed in the augmented reality environment based on the first virtual effect data. In this way, on the one hand, the first target identification code can be scanned in the augmented reality environment without relying on a third-party identification code scanner, so that AR effect display can be applied more widely; on the other hand, after the first target identification code is scanned, the first target object can be recognized directly and the first augmented reality effect can be displayed based on the first virtual effect data, thereby improving the display efficiency of the augmented reality effect and the user's viewing experience.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure.
FIG. 1A is a schematic diagram of an implementation architecture of a display system provided by an embodiment of the present disclosure;
FIG. 1B is a schematic flowchart of a display method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of a display method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of a display method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of a display method provided by an embodiment of the present disclosure;
FIG. 5 is a schematic flowchart of a display method provided by an embodiment of the present disclosure;
FIG. 6A is a schematic flowchart of an AR effect display method in a single-AR-effect viewing scenario provided by an embodiment of the present disclosure;
FIG. 6B is a schematic flowchart of an AR effect display method in a multi-AR-effect viewing scenario provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of the composition and structure of a display apparatus provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a hardware entity of an electronic device provided by an embodiment of the present disclosure.
Detailed description
In order to make the objectives, technical solutions, and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the accompanying drawings. The described embodiments should not be regarded as limiting the present disclosure, and all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
In the following description, reference is made to "some embodiments", which describes a subset of all possible embodiments. It should be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and they may be combined with each other without conflict.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present disclosure. The terms used herein are only for the purpose of describing the embodiments of the present disclosure and are not intended to limit the present disclosure.
Before the embodiments of the present disclosure are described in further detail, the nouns and terms involved in the embodiments of the present disclosure are explained; the nouns and terms involved in the embodiments of the present disclosure are applicable to the following interpretations.
1) A mini program (Mini Program), also called a web program (Web Program), is a program developed in a front-end-oriented language (such as JavaScript) that implements a service within a Hyper Text Markup Language (HTML) page. It is software downloaded by a client (such as a browser or any client with an embedded browser core) over a network (such as the Internet) and interpreted and executed in the client's browser environment, saving the step of installation in the client. For example, a mini program implementing a singing service can be downloaded and run in a social network client.
2) Augmented reality (AR). Augmented reality technology is a relatively new technology that integrates real-world information and virtual-world information. Entity information that would otherwise be difficult to experience within the spatial scope of the real world is simulated on the basis of computers and other science and technology, and the virtual information content is superimposed on the real world for effective application; in this process it can be perceived by human senses, thereby achieving a sensory experience that transcends reality. After the real environment and the virtual object are overlaid, they can exist simultaneously in the same picture and the same space.
3) A web view (WebView) is a web browsing control that can be embedded in a client to realize hybrid front-end development; it is used to handle requests, web page loading, rendering, web page interaction, etc.
In the related technology, in display solutions for AR effects, after a specific identification code is scanned through a browser or a third-party identification code scanner, at least one page or link jump is usually required before the AR environment can be entered to display the AR effect corresponding to the current identification code. In this way, on the one hand, a third-party identification code scanner is relied upon, which limits the application range of AR effect display; on the other hand, since at least one page or link jump is required, after scanning a specific identification code the user has to wait for the jump to complete before viewing the AR effect corresponding to the current identification code, which requires additional waiting time and affects the user's viewing experience. In particular, in a scenario with multiple AR effects, the user has to repeat the steps of scanning, jumping, and viewing to experience the AR effects corresponding to different identification codes; the interaction is cumbersome and further affects the user experience.
Embodiments of the present disclosure provide a display method that can make AR effect display applicable more widely, improve the display efficiency of AR effects, and improve the user's viewing experience. The display method provided by the embodiments of the present disclosure can be applied to an electronic device. The electronic device provided by the embodiments of the present disclosure can be implemented as various types of terminals such as AR glasses, a notebook computer, a tablet computer, a desktop computer, a set-top box, or a mobile device (for example, a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, or a portable game device). In some implementations, the display method provided by the embodiments of the present disclosure can be applied to a client application platform of the electronic device, where the client application platform can be a web application platform or a mini program. In some implementations, the display method provided by the embodiments of the present disclosure can also be applied to an application program of the electronic device.
Referring to FIG. 1A, FIG. 1A is a schematic diagram of an implementation architecture of a display system provided by an embodiment of the present disclosure. To support a client application platform, in the display system 100, electronic devices (terminal 400-1 and terminal 400-2 are shown as examples) are connected to the server 200 through a network 300, where the network 300 may be a wide area network, a local area network, or a combination of the two. The electronic device is configured to, in response to a scanning operation performed on a first target identification code in an augmented reality environment, send the first target identification code to the server 200; the server 200 is configured to determine a first target object and first virtual effect data based on the first target identification code, and return the first target object and the first virtual effect data to the electronic device; after receiving the first target object and the first virtual effect data, the electronic device, in response to recognizing the first target object in the augmented reality environment, displays, in the display interface of the electronic device (display interface 401-1 of terminal 400-1 and display interface 401-2 of terminal 400-2 are shown as examples), a first augmented reality effect corresponding to the first target object in the augmented reality environment based on the first virtual effect data; in this way, the AR effect is presented on the electronic device.
In some embodiments, the server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of the present disclosure.
In the following, the display method provided by the embodiments of the present disclosure is described with reference to exemplary applications and implementations of the electronic device provided by the embodiments of the present disclosure.
本公开实施例提供一种显示方法,图1B是本公开实施例提供的一种显示方法的实现流程示意图,如图1B所示,该方法包括:
步骤S101,响应于在增强现实环境下对第一目标识别码进行的扫描操作,基于所述第一目标识别码,确定第一目标对象和第一虚拟效果数据。
这里,增强现实环境可以是任意合适的用于呈现增强现实效果的交互界面,可以是基于原生增强现实技术实现的,也可以是利用基于网页的增强现实技术实现的,这里并不限定。例如,增强现实环境可以是运行在电子设备上用于呈现增强现实效果的应用平台、应用程序或小程序等的交互界面。电子设备可以在增强现实环境下对真实场景中的任意对象进行扫描或识别,也可以对预先获取的图像中的对象进行扫描或识别。
第一目标识别码可以是二维码、条码,还可以是其他可以扫描的码,本公开实施例对此不作限定。
对第一目标识别码进行的扫描操作可以是在增强现实环境下利用电子设备的摄像头对真实场景中的第一目标识别码进行扫描的操作,也可以是在增强现实环境下对预先获取的图像中的第一目标识别码进行扫描的操作,本公开实施例对此不作限定。
第一目标对象可以是与第一目标识别码关联的任意合适的对象,可以是二维的图像,如展品的照片、人物的肖像图、汽车海报等,也可以是三维的物体,如真实场景中的展品、人、建筑物、车辆等,这里并不限定。
第一虚拟效果数据可以是用于在增强现实环境下展示与第一目标对象对应的增强现实效果的虚拟特效数据。在一些实施例中,第一虚拟效果数据可以包括以下中的至少一个:虚拟贴纸、虚拟动画和虚拟物品。其中,虚拟贴纸可以是在电子设备采集的真实场景图像中添加的二维或三维的虚拟附加信息,例如,虚拟贴纸可以是在增强现实环境下为真实场景图像添加的虚拟日历;虚拟动画可以是在真实场景图像中添加的按照预设动作进行运动的二维或三维的虚拟对象,虚拟对象可以包括虚拟人物、虚拟植物、虚拟动物等,例如,虚拟动画可以是地图导航类应用中指引导航线路的虚拟讲解员;虚拟物品可以是在电子设备采集的真实场景图像中进行装饰的二维或三维的装饰物,例如,虚拟物品可以是在增强现实环境下为真实场景图像中的人像添加的虚拟眼镜。
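上述三类虚拟效果数据可以用如下示意性的数据结构加以组织。以下为一个假设性的Python草图,类名、字段名均为示例,并非本公开限定的实现方式:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualEffectData:
    """虚拟效果数据:虚拟贴纸、虚拟动画、虚拟物品中的至少一种。"""
    stickers: List[str] = field(default_factory=list)    # 虚拟贴纸资源标识
    animations: List[str] = field(default_factory=list)  # 虚拟动画资源标识
    items: List[str] = field(default_factory=list)       # 虚拟物品资源标识

    def is_empty(self) -> bool:
        # 有效的虚拟效果数据至少应包含上述三类中的一种
        return not (self.stickers or self.animations or self.items)

# 示例:同时包含虚拟贴纸(虚拟日历)与虚拟物品(虚拟眼镜)的效果数据
effect = VirtualEffectData(stickers=["virtual_calendar"], items=["virtual_glasses"])
```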
在一些实施方式中,可以在第一目标识别码中直接携带第一目标对象和第一虚拟效果数据,通过解析第一目标识别码可以得到第一目标对象和第一虚拟效果数据。在另一些实施方式中,可以预先设定识别码、对象以及虚拟效果数据之间的映射关系,通过查询该映射关系确定与第一目标识别码对应的第一目标对象和第一虚拟效果数据。
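对于第一种实施方式(识别码中直接携带第一目标对象和第一虚拟效果数据),扫描后在本地解析即可得到所需数据,无需查询映射关系。以下为一个假设性的Python草图,其中识别码内容采用JSON负载、字段名均为示例:

```python
import json

def parse_code_payload(code_content: str):
    """解析识别码中直接携带的目标对象与虚拟效果数据。

    假设识别码内容为一个 JSON 负载,形如:
    {"target_object": "...", "effect_data": {...}}
    """
    payload = json.loads(code_content)
    return payload["target_object"], payload["effect_data"]

# 示例:扫描第一目标识别码得到的内容(示例数据)
content = json.dumps({
    "target_object": "exhibit_photo_01",
    "effect_data": {"animations": ["guide_avatar"]},
})
obj, effect = parse_code_payload(content)
```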
步骤S102,响应于在所述增强现实环境下识别到所述第一目标对象,基于所述第一虚拟效果数据,在所述增强现实环境下展示与所述第一目标对象对应的第一增强现实效果。
在一些实施方式中,在增强现实环境下,电子设备可以开启自身的图像采集装置(例如,摄像头)进行图像采集,并对采集到的图像进行识别。在识别到第一目标对象的情况下,可以基于第一虚拟效果数据,在增强现实环境下展示与该第一目标对象对应的第一增强现实效果。在实施时,可以通过在增强现实环境下对第一虚拟效果数据进行渲染,以展示与该第一目标对象对应的第一增强现实效果。
在一些实施方式中,电子设备可以在增强现实环境下对预先获取的图像进行识别,在识别到第一目标对象的情况下,基于第一虚拟效果数据,在增强现实环境下展示与该第一目标对象对应的第一增强现实效果。
在一些实施方式中,上述步骤S101中所述的基于所述第一目标识别码,确定第一目标对象和第一虚拟效果数据,可以包括如下步骤S111至步骤S113:
步骤S111,对所述第一目标识别码进行解析,得到目标识别信息。
这里,目标识别信息可以是第一目标识别码中携带的用于表征该第一目标识别码的标识信息。在实施时,本领域技术人员可以根据实际情况在第一目标识别码中携带合适的目标识别信息,这里并不限定。
在一些实施方式中,目标识别信息为第一目标识别码的编码信息,不同的识别码具有不同的编码信息。
步骤S112,基于预设的识别信息与目标对象之间的映射关系,确定与所述目标识别信息对应的第一目标对象。
这里,识别信息与目标对象之间的映射关系可以是预先根据实际情况设定的,这里并不限定。利用目标识别信息查询该识别信息与目标对象之间的映射关系,可以得到与该目标识别信息对应的第一目标对象。
步骤S113,基于预设的识别信息与虚拟效果数据之间的映射关系,确定与所述目标识别信息对应的第一虚拟效果数据。
这里,识别信息与虚拟效果数据之间的映射关系可以是预先根据实际情况设定的,这里并不限定。利用目标识别信息查询该识别信息与虚拟效果数据之间的映射关系,可以得到与该目标识别信息对应的第一虚拟效果数据。
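上述步骤S111至步骤S113可以概括为:先解析识别码得到目标识别信息,再分别查询两张预设的映射表。以下为一个极简的Python示意实现,其中映射关系的内容与编码信息的格式均为假设:

```python
# 预设的识别信息与目标对象之间的映射关系(示例数据)
CODE_TO_OBJECT = {"code_001": "car_poster", "code_002": "exhibit_photo"}
# 预设的识别信息与虚拟效果数据之间的映射关系(示例数据)
CODE_TO_EFFECT = {"code_001": {"animations": ["car_3d"]},
                  "code_002": {"stickers": ["calendar"]}}

def parse_identification(code_content: str) -> str:
    # 步骤S111:解析第一目标识别码,得到目标识别信息
    # (此处假设识别码内容本身即为编码信息)
    return code_content.strip()

def resolve(code_content: str):
    info = parse_identification(code_content)
    # 步骤S112:基于映射关系确定与目标识别信息对应的第一目标对象
    target = CODE_TO_OBJECT[info]
    # 步骤S113:基于映射关系确定与目标识别信息对应的第一虚拟效果数据
    effect = CODE_TO_EFFECT[info]
    return target, effect

target, effect = resolve("code_001")
```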
在一些实施例中,所述增强现实环境可以包括运行在电子设备上的用于呈现增强现实效果的应用平台、应用程序或小程序中的至少一个交互界面。
在一些实施例中,所述第一目标对象可以包括与所述第一目标识别码关联的二维对象和/或三维对象。
在一些实施例中,所述二维对象可以包括以下至少之一:展品的照片、人物的肖像图、汽车海报;所述三维对象可以包括以下至少之一:真实场景中的展品、人、建筑物、车辆。
在本公开实施例中,通过响应于在增强现实环境下对第一目标识别码进行的扫描操作,基于该第一目标识别码,确定第一目标对象和第一虚拟效果数据;并响应于在该增强现实环境下识别到该第一目标对象,基于该第一虚拟效果数据,在该增强现实环境下展示与该第一目标对象对应的第一增强现实效果。这样,一方面,可以在增强现实环境下扫描第一目标识别码,不依赖第三方的识别码扫描器,使得增强现实效果展示的应用更广泛;另一方面,在扫描第一目标识别码后可以直接识别第一目标对象,基于第一虚拟效果数据展示第一增强现实效果,从而可以提高增强现实效果的展示效率,提升用户的观看体验。
本公开实施例提供一种显示方法,图2是本公开实施例提供的一种显示方法的实现流程示意图,如图2所示,该方法包括:
步骤S201,响应于在增强现实环境下对第一目标识别码进行的扫描操作,基于所述第一目标识别码,确定第一目标对象和第一虚拟效果数据。
步骤S202,响应于在所述增强现实环境下识别到所述第一目标对象,基于所述第一虚拟效果数据,在所述增强现实环境下展示与所述第一目标对象对应的第一增强现实效果。
这里,上述步骤S201至步骤S202对应于前述步骤S101至步骤S102,在实施时,可以参照前述步骤S101至步骤S102的具体实施方式。
步骤S203,在所述第一增强现实效果展示结束的情况下,响应于在所述增强现实环境下对第二目标识别码进行的扫描操作,基于所述第二目标识别码,确定第二目标对象和第二虚拟效果数据。
这里,第一增强现实效果可以是通过增强现实动态图的形式展示,也可以通过增强现实视频的形式展示,本公开实施例对此不作限定。
用户可以在第一增强现实效果展示结束后,在该增强现实环境下继续对第二目标识别码进行扫描。第二目标识别码可以与第一目标识别码相同,也可以与第一目标识别码不同,这里并不限定。
第二目标对象可以是与第二目标识别码关联的任意合适的对象,可以是二维的图像,如展品的照片、人物的肖像图、汽车海报等,也可以是三维的物体,如真实场景中的展品、人、建筑物、汽车等,这里并不限定。第二虚拟效果数据可以是用于在增强现实环境下展示与第二目标对象对应的增强现实效果的虚拟特效数据,如虚拟贴纸、虚拟动画、虚拟物品等。
在一些实施方式中,可以在第二目标识别码中直接携带第二目标对象和第二虚拟效果数据,通过解析第二目标识别码可以得到第二目标对象和第二虚拟效果数据。在另一些实施方式中,可以预先设定识别码、对象以及虚拟效果数据之间的映射关系,通过查询该映射关系确定与第二目标识别码对应的第二目标对象和第二虚拟效果数据。
步骤S204,响应于在所述增强现实环境下识别到所述第二目标对象,基于所述第二虚拟效果数据,在所述增强现实环境下展示与所述第二目标对象对应的第二增强现实效果。
这里,上述步骤S204对应于前述步骤S202,在实施时,可以参照前述步骤S202的具体实施方式。
在本公开实施例中,在第一增强现实效果展示结束的情况下,响应于在增强现实环境下对第二目标识别码进行的扫描操作,基于第二目标识别码,确定第二目标对象和第二虚拟效果数据,并响应于在该增强现实环境下识别到该第二目标对象,基于该第二虚拟效果数据,在该增强现实环境下展示与该第二目标对象对应的第二增强现实效果。这样,用户可以在观看完第一增强现实效果后,在同一个增强现实环境下直接对第二目标识别码进行扫描,并通过对第二目标对象进行识别,在该增强现实环境下展示第二增强现实效果,从而可以使得多个增强现实效果的观看过程中交互更加简单,并能提高多个增强现实效果的展示效率,从而进一步提高用户的观看体验。
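在同一增强现实环境下连续观看多个增强现实效果的过程,可以概括为"扫描→识别→展示"的循环。以下Python草图仅示意该控制流程,各回调函数均为假设性占位,并非具体实现:

```python
def show_effects(scanned_codes, resolve, recognize, render):
    """在同一增强现实环境下依次展示多个增强现实效果。

    scanned_codes: 依次扫描到的识别码内容序列
    resolve:   识别码 -> (目标对象, 虚拟效果数据)
    recognize: 判断当前画面中是否识别到目标对象
    render:    基于虚拟效果数据展示增强现实效果,返回展示记录
    """
    shown = []
    for code in scanned_codes:      # 每次扫描停留在同一环境,无需页面跳转
        target, effect = resolve(code)
        if recognize(target):       # 识别到目标对象后才展示对应效果
            shown.append(render(target, effect))
    return shown

# 示例:依次扫描两个识别码并展示对应效果(回调均为示例)
records = show_effects(
    ["code_a", "code_b"],
    resolve=lambda c: (c + "_obj", {"effect": c}),
    recognize=lambda t: True,
    render=lambda t, e: (t, e["effect"]),
)
```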
本公开实施例提供一种显示方法,图3是本公开实施例提供的一种显示方法的实现流程示意图,如图3所示,该方法包括:
步骤S301,响应于对增强现实环境的启动操作,进入所述增强现实环境。
这里,对所述增强现实环境的启动操作可以是任意合适的触发电子设备显示用于呈现增强现实效果的交互界面的操作,包括但不限于启动呈现增强现实效果的小程序、在浏览器中打开增强现实环境的入口链接等。在实施时,用户可以根据实际情况,采用合适的启动操作在电子设备上启动并进入增强现实环境,这里并不限定。
步骤S302,响应于在增强现实环境下对第一目标识别码进行的扫描操作,基于所述第一目标识别码,确定第一目标对象和第一虚拟效果数据。
步骤S303,响应于在所述增强现实环境下识别到所述第一目标对象,基于所述第一虚拟效果数据,在所述增强现实环境下展示与所述第一目标对象对应的第一增强现实效果。
这里,上述步骤S302至步骤S303对应于前述步骤S101至步骤S102,在实施时,可以参照前述步骤S101至步骤S102的具体实施方式。
在一些实施例中,上述步骤S301可以包括:
步骤S311,响应于对所述增强现实环境的入口地址的访问操作,进入所述增强现实环境。
这里,增强现实环境的入口地址可以包括但不限于在应用程序中的入口按钮、应用平台中的入口小程序、入口链接等中的一种或多种,这里并不限定。例如,可以通过在应用程序中对增强现实环境的入口按钮进行点击,进入该增强现实环境;也可以通过在应用平台中点击增强现实环境的入口小程序,进入该增强现实环境;还可以通过在浏览器中打开增强现实环境的入口链接,进入该增强现实环境。
在本公开实施例中,用户可以对第一目标识别码进行扫描之前,响应于对该增强现实环境的启动操作,进入该增强现实环境。这样,在扫描第一目标识别码后可以直接识别第一目标对象,可以减少在扫描第一目标识别码后等待增强现实环境跳转的时间,从而可以提高增强现实效果的展示效率,提升用户的观看体验。
本公开实施例提供一种显示方法,图4是本公开实施例提供的一种显示方法的实现流程示意图,如图4所示,该方法包括:
步骤S401,响应于对增强现实环境的启动操作,进入所述增强现实环境。
这里,上述步骤S401对应于前述步骤S301,在实施时,可以参照前述步骤S301的具体实施方式。
步骤S402,响应于进入所述增强现实环境,在所述增强现实环境下启动电子设备的图像采集装置。
在一些实施方式中,图像采集装置可以是安装在电子设备上任意合适位置处的摄像头,可以是前置摄像头,也可以是后置摄像头,可以是内置摄像头,也可以是外置摄像头,这里并不限定。在实施时,可以在电子设备进入增强现实环境后,执行任意合适的指令启动电子设备的图像采集装置。
步骤S403,响应于在增强现实环境下利用所述电子设备的图像采集装置对第一目标识别码进行的扫描操作,基于所述第一目标识别码,确定第一目标对象和第一虚拟效果数据。
步骤S404,响应于在所述增强现实环境下识别到所述第一目标对象,基于所述第一虚拟效果数据,在所述增强现实环境下展示与所述第一目标对象对应的第一增强现实效果。
这里,上述步骤S403至步骤S404对应于前述步骤S101至步骤S102,在实施时,可以参照前述步骤S101至步骤S102的具体实施方式。
在本公开实施例中,响应于进入增强现实环境,在该增强现实环境下启动电子设备的图像采集装置,并响应于在增强现实环境下利用该图像采集装置对第一目标识别码进行的扫描操作,基于第一目标识别码,确定第一目标对象和第一虚拟效果数据。这样,可以在进入增强现实环境后自动启动图像采集装置,并利用该图像采集装置对第一目标识别码进行扫描,从而可以进一步简化用户的操作,减少用户的等待时间,从而进一步提升用户的使用体验。
本公开实施例提供一种显示方法,图5是本公开实施例提供的一种显示方法的实现流程示意图,如图5所示,该方法包括:
步骤S501,响应于对所述增强现实环境的启动操作,进入所述增强现实环境。
步骤S502,响应于进入所述增强现实环境,在所述增强现实环境下启动所述图像采集装置。
步骤S503,响应于在增强现实环境下利用所述电子设备的图像采集装置对第一目标识别码进行的扫描操作,基于所述第一目标识别码,确定第一目标对象和第一虚拟效果数据。
这里,上述步骤S501至步骤S503对应于前述步骤S401至步骤S403,在实施时,可以参照前述步骤S401至步骤S403的具体实施方式。
步骤S504,在所述增强现实环境下,利用所述图像采集装置实时采集待识别的图像。
这里,待识别的图像可以是图像采集装置在真实场景中采集到的图像。
在一些实施方式中,确定第一目标对象后,电子设备可以通过在显示屏上显示提示文本或者通过扬声器发出提示语音,以提示用户待识别的第一目标对象。用户可以根据提示文本或提示语音,将图像采集装置对准第一目标对象,电子设备可以在增强现实环境下,利用图像采集装置实时采集真实场景中第一目标对象所在区域的图像,并将该图像作为待识别的图像。
步骤S505,对所述待识别的图像中的第一目标对象进行识别。
这里,可以采用任意合适的目标识别算法对待识别的图像中的第一目标对象进行识别,例如关键点检测算法、滑动窗口算法、候选区域算法等,这里并不限定。
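步骤S504至步骤S505的实时识别过程可以理解为"逐帧采集、逐帧识别、命中即展示"。以下Python草图仅示意该控制流程,具体识别算法(关键点检测、滑动窗口、候选区域等)以可替换的回调表示,示例中以集合代替真实图像:

```python
def recognize_in_stream(frames, detector, target):
    """在实时采集的图像流中识别第一目标对象。

    frames:   待识别的图像序列(示例中以对象集合代替真实图像)
    detector: 目标识别算法回调,返回该帧中检测到的对象集合
    target:   待识别的第一目标对象
    返回首次识别到目标对象的帧序号;未识别到则返回 -1。
    """
    for idx, frame in enumerate(frames):
        if target in detector(frame):
            return idx  # 识别到第一目标对象,可触发第一增强现实效果的展示
    return -1

# 示例:第 2 帧(序号 1)中首次出现目标对象 "exhibit"
frames = [{"bg"}, {"bg", "exhibit"}, {"exhibit"}]
hit = recognize_in_stream(frames, detector=lambda f: f, target="exhibit")
```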
步骤S506,响应于在所述待识别的图像中识别到所述第一目标对象,基于所述第一虚拟效果数据,在所述增强现实环境下展示与所述第一目标对象对应的第一增强现实效果。
在本公开实施例中,在扫描到第一目标识别码后,在增强现实环境下,利用图像采集装置实时采集待识别的图像,对待识别的图像中的第一目标对象进行识别,响应于在待识别的图像中识别到第一目标对象,基于第一虚拟效果数据,在增强现实环境下展示与第一目标对象对应的第一增强现实效果。这样,在扫描第一目标识别码后可以直接利用图像采集装置识别真实场景中的第一目标对象,从而可以提高第一目标对象识别的灵活性,进一步提升用户的观看体验。
下面将说明本公开实施例在实际应用场景中的示例性应用。以目标识别码为目标二维码、目标对象为目标图像、AR效果为基于网页的AR效果为例进行说明。
随着二维码技术的全面普及以及基于网页的AR技术的发展,二维码与图像识别结合的基于网页的AR效果展示由于轻量化、趣味性的特点备受关注,在各个领域得到应用。
相关技术中,AR效果展示方案的流程包括如下步骤:
步骤S601,用户通过电子设备上运行的浏览器、应用平台或小程序中的第三方二维码扫描器扫描二维码;
步骤S602,电子设备根据从二维码中识别的链接跳转到AR环境,在识别到特定图像的情况下,展示特定的AR效果;
步骤S603,用户观看AR效果结束后退出AR环境,并重复进行上述步骤S601和步骤S602观看下一个AR效果。
上述相关技术中的AR效果展示方案将二维码扫描和图像的识别独立分阶段实施,在多AR效果的展示场景中,需要重复进行扫描、跳转、观看的操作,交互繁琐,影响用户体验;而且上述方案大多依赖第三方应用程序或应用平台中的二维码扫描器,增加了额外的跳转交互,同时限制了AR效果的应用范围。
本公开实施例提供一种基于二维码和图像识别的无感知AR效果展示方法,用户可以通过多种AR环境的入口地址进入AR环境,进行AR效果的观看体验,入口地址可以包括但不限于社交平台公众号、小程序、浏览器、或其它应用程序等提供的WebView环境的地址。电子设备进入AR环境后可以自动打开摄像头进行二维码扫描和图像识别,展示与二维码绑定的AR效果。用户可以在AR环境下无感知切换多种AR效果的展示场景。比如在展会中,参展商需要展示多款涵盖AR效果的产品,在这些产品展示位置相对集中的情况下,用户可以采用本公开实施例提供的AR效果展示方法,无感知体验多种AR效果,而不需要重复扫描、跳转和观看的流程,极大地提升了用户的观看体验。
在一些实施例中,如图6A所示,在单一AR效果的观看场景下,本公开实施例提供的AR效果展示方法可以包括如下步骤S611至步骤S616:
步骤S611,获取制作的第一AR效果,生成与该第一AR效果对应的第一目标二维码。
步骤S612,确定与该第一目标二维码对应的第一目标图像,并将该第一目标二维码与该第一目标图像绑定;这里,可以通过建立第一目标二维码中的目标识别信息与第一目标图像之间的映射关系将该第一目标二维码与该第一目标图像绑定。
步骤S613,生成AR环境的入口链接。
步骤S614,通过包含网页视图(WebView)控件的浏览器访问该入口链接,进入AR环境,并打开摄像头。
步骤S615,在当前AR环境下,通过对第一目标二维码进行扫描,解析第一目标二维码中的目标识别信息,通过目标识别信息与第一目标图像之间的映射关系确定第一目标图像。
步骤S616,在当前AR环境下,对真实场景中的第一目标图像进行识别,并在识别到该第一目标图像的情况下,展示与该第一目标二维码对应的第一AR效果。
用户观看完毕第一AR效果后,可以关闭浏览器。
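上述步骤S611至步骤S612中"生成目标二维码并与目标图像绑定"的过程,可以示意为在服务端登记一条绑定记录。以下Python草图中目标识别信息的生成方式(哈希截断)与数据结构均为假设,仅用于说明绑定关系的建立:

```python
import hashlib

# 目标识别信息 -> (目标图像, AR效果) 的绑定记录(示例存储)
BINDINGS = {}

def create_binding(effect_id: str, target_image: str) -> str:
    """为制作好的AR效果生成目标识别信息,并与目标图像绑定。

    返回值即二维码中携带的目标识别信息(此处用哈希截断示意)。
    """
    info = hashlib.sha1(effect_id.encode("utf-8")).hexdigest()[:8]
    BINDINGS[info] = (target_image, effect_id)
    return info

# 示例:为第一AR效果生成第一目标二维码的识别信息,并绑定第一目标图像
info = create_binding("ar_effect_1", "target_image_1")
image, effect = BINDINGS[info]
```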
在一些实施例中,如图6B所示,在多AR效果的观看场景下,本公开实施例提供的AR效果展示方法可以包括如下步骤S621至步骤S628:
步骤S621,获取制作的第一AR效果和第二AR效果,并生成与该第一AR效果对应的第一目标二维码,以及与该第二AR效果对应的第二目标二维码。
步骤S622,确定与该第一目标二维码对应的第一目标图像,以及与该第二目标二维码对应的第二目标图像,并将该第一目标二维码与该第一目标图像绑定,以及将该第二目标二维码与该第二目标图像绑定;这里,可以通过建立第一目标二维码中的目标识别信息与第一目标图像之间的映射关系将该第一目标二维码与该第一目标图像绑定,通过建立第二目标二维码中的目标识别信息与第二目标图像之间的映射关系将该第二目标二维码与该第二目标图像绑定。
步骤S623,生成AR环境的入口链接。
步骤S624,通过包含WebView控件的浏览器访问该入口链接,进入AR环境,并打开摄像头。
步骤S625,在当前AR环境下,通过对第一目标二维码进行扫描,解析第一目标二维码中的目标识别信息,通过该目标识别信息与第一目标图像之间的映射关系确定第一目标图像。
步骤S626,在当前AR环境下,对真实场景中的第一目标图像进行识别,并在识别到该第一目标图像的情况下,展示与该第一目标二维码对应的第一AR效果。
步骤S627,在第一AR效果展示结束后,在当前AR环境下,通过对第二目标二维码进行扫描,解析第二目标二维码中的目标识别信息,通过该目标识别信息与第二目标图像之间的映射关系确定第二目标图像。
步骤S628,在当前AR环境下,对真实场景中的第二目标图像进行识别,并在识别到该第二目标图像的情况下,展示与该第二目标二维码对应的第二AR效果。
用户观看完毕第二AR效果后,可以关闭浏览器,也可以继续扫描下一个目标二维码以观看下一个AR效果。
需要说明的是,在实施时,上述第一目标二维码、第二目标二维码可以分别对应前述的第一目标识别码和第二目标识别码,第一目标图像、第二目标图像可以分别对应前述的第一目标对象和第二目标对象。
基于前述的实施例,本公开实施例提供一种显示装置,该装置所包括的各单元、以及各单元所包括的各部分,均可以通过电子设备中的处理器来实现;当然也可通过具体的逻辑电路实现;在实施的过程中,处理器可以为中央处理器(CPU)、微处理器(MPU)、数字信号处理器(DSP)或现场可编程门阵列(FPGA)等。
图7为本公开实施例提供的一种显示装置的组成结构示意图,如图7所示,该显示装置700包括:第一确定部分710和第一显示部分720,其中:
第一确定部分710,被配置为响应于在增强现实环境下对第一目标识别码进行的扫描操作,基于所述第一目标识别码,确定第一目标对象和第一虚拟效果数据;
第一显示部分720,被配置为响应于在所述增强现实环境下识别到所述第一目标对象,基于所述第一虚拟效果数据,在所述增强现实环境下展示与所述第一目标对象对应的第一增强现实效果。
在一些实施例中,所述第一确定部分还被配置为:对所述第一目标识别码进行解析,得到目标识别信息;基于预设的识别信息与目标对象之间的映射关系,确定与所述目标识别信息对应的第一目标对象;基于预设的识别信息与虚拟效果数据之间的映射关系,确定与所述目标识别信息对应的第一虚拟效果数据。
在一些实施例中,所述装置还包括:第二确定部分,被配置为在所述第一增强现实效果展示结束的情况下,响应于在所述增强现实环境下对第二目标识别码进行的扫描操作,基于所述第二目标识别码,确定第二目标对象和第二虚拟效果数据;第二显示部分,被配置为响应于在所述增强现实环境下识别到所述第二目标对象,基于所述第二虚拟效果数据,在所述增强现实环境下展示与所述第二目标对象对应的第二增强现实效果。
在一些实施例中,所述装置还包括:第一启动部分,被配置为响应于对所述增强现实环境的启动操作,进入所述增强现实环境。
在一些实施例中,所述第一确定部分还被配置为响应于在增强现实环境下利用电子设备的图像采集装置对第一目标识别码进行的扫描操作,基于所述第一目标识别码,确定第一目标对象和第一虚拟效果数据;所述装置还包括:第二启动部分,被配置为响应于进入所述增强现实环境,在所述增强现实环境下启动所述图像采集装置。
在一些实施例中,所述装置还包括:采集部分,被配置为在所述增强现实环境下,利用所述图像采集装置实时采集待识别的图像;识别部分,被配置为对所述待识别的图像中的第一目标对象进行识别;所述第一显示部分还被配置为:响应于在所述待识别的图像中识别到所述第一目标对象,基于所述第一虚拟效果数据,在所述增强现实环境下展示与所述第一目标对象对应的第一增强现实效果。
在一些实施例中,所述第一启动部分还被配置为响应于对所述增强现实环境的入口地址的访问操作,进入所述增强现实环境。
在一些实施例中,所述增强现实环境可以包括运行在电子设备上的用于呈现增强现实效果的应用平台、应用程序或小程序中的至少一个交互界面。
在一些实施例中,所述第一目标对象可以包括与所述第一目标识别码关联的二维对象和/或三维对象。
在一些实施例中,所述二维对象可以包括以下至少之一:展品的照片、人物的肖像图、汽车海报;所述三维对象可以包括以下至少之一:真实场景中的展品、人、建筑物、车辆。
以上装置实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本公开装置实施例中未披露的技术细节,请参照本公开方法实施例的描述而理解。
在本公开实施例以及其他的实施例中,“部分”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是单元,还可以是模块也可以是非模块化的。
本公开涉及增强现实领域,通过获取现实环境中的目标对象的图像信息,进而借助各类视觉相关算法实现对目标对象的相关特征、状态及属性进行检测或识别处理,从而得到与具体应用匹配的虚拟与现实相结合的AR效果。示例性的,目标对象可涉及与人体相关的脸部、肢体、手势、动作等,或者与物体相关的标识物、标志物,或者与场馆或场所相关的沙盘、展示区域或展示物品等。视觉相关算法可涉及视觉定位、SLAM、三维重建、图像注册、背景分割、对象的关键点提取及跟踪、对象的位姿或深度检测等。具体应用不仅可以涉及跟真实场景或物品相关的导览、导航、讲解、重建、虚拟效果叠加展示等交互场景,还可以涉及与人相关的特效处理,比如妆容美化、肢体美化、特效展示、虚拟模型展示等交互场景。可通过卷积神经网络,实现对目标对象的相关特征、状态及属性进行检测或识别处理。上述卷积神经网络是基于深度学习框架进行模型训练而得到的网络模型。
需要说明的是,本公开实施例中,如果以软件功能部分的形式实现上述的显示方法,并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本公开实施例的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一台电子设备(可以是个人计算机、服务器、或者网络设备等)执行本公开各个实施例所述方法的全部或部分。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、磁碟或者光盘等各种可以存储程序代码的介质。这样,本公开实施例不限制于任何特定的硬件和软件结合。
本公开实施例提供一种电子设备,包括显示屏;存储器,用于存储可执行计算机程序;处理器,用于执行所述存储器中存储的可执行计算机程序时,结合所述显示屏实现上述的显示方法中的步骤。
本公开实施例提供一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现上述方法中的步骤。
本公开实施例还提供一种计算机程序,所述计算机程序包括计算机可读代码,在所述计算机可读代码被计算机读取并执行的情况下,实现上述方法实施例中的部分或全部步骤。
本公开实施例还提供一种计算机程序产品,该计算机程序产品承载有程序代码,所述程序代码包括的指令可用于执行上述方法实施例中所述方法中的部分或全部步骤,具体可参见上述方法实施例。其中,上述计算机程序产品可以具体通过硬件、软件或其结合的方式实现。在一些实施例中,所述计算机程序产品具体体现为计算机存储介质,在另一些实施例中,计算机程序产品具体体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。
这里需要指出的是:以上存储介质和设备实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本公开存储介质、设备、计算机程序以及计算机程序产品实施例中未披露的技术细节,请参照本公开方法实施例的描述而理解。
需要说明的是,图8为本公开实施例中电子设备的一种硬件实体示意图,如图8所示,该电子设备800的硬件实体包括:显示屏801、存储器802和处理器803,其中,显示屏801、存储器802和处理器803通过通信总线804连接;存储器802,用于存储可执行计算机程序;处理器803,用于执行存储器802中存储的可执行计算机程序时,结合显示屏801,实现本公开实施例提供的方法,例如,本公开实施例提供的显示方法。
存储器802可以配置为存储由处理器803可执行的指令和应用,还可以缓存处理器803以及电子设备800中各模块待处理或已经处理的数据(例如,图像数据、音频数据、语音通信数据和视频通信数据),可以通过闪存(FLASH)或随机访问存储器(Random Access Memory,RAM)实现。
本公开实施例提供一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器803执行时,实现本公开实施例提供的方法,例如,本公开实施例提供的显示方法。
应理解,说明书通篇中提到的“一个实施例”或“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本公开的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”未必一定指相同的实施例。此外,这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。应理解,在本公开的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本公开实施例的实施过程构成任何限定。上述本公开实施例序号仅仅为了描述,不代表实施例的优劣。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
在本公开所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个单元或组件可以结合,或可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元;既可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本公开实施例中的各功能单元可以全部集成在一个处理单元中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、只读存储器(Read Only Memory,ROM)、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本公开上述集成的单元如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台电子设备(可以是个人计算机、服务器、或者网络设备等)执行本公开各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本公开的实施方式,但本公开的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本公开的保护范围之内。
工业实用性
本公开实施例公开了一种显示方法、装置、设备、计算机可读存储介质、计算机程序产品及计算机程序。该方法包括:响应于在增强现实环境下对第一目标识别码进行的扫描操作,基于所述第一目标识别码,确定第一目标对象和第一虚拟效果数据;响应于在所述增强现实环境下识别到所述第一目标对象,基于所述第一虚拟效果数据,在所述增强现实环境下展示与所述第一目标对象对应的第一增强现实效果。通过本公开实施例,能够使得增强现实效果展示应用更广泛,并能提高增强现实效果的展示效率,提升用户的观看体验。

Claims (24)

  1. 一种显示方法,包括:
    响应于在增强现实环境下对第一目标识别码进行的扫描操作,基于所述第一目标识别码,确定第一目标对象和第一虚拟效果数据;
    响应于在所述增强现实环境下识别到所述第一目标对象,基于所述第一虚拟效果数据,在所述增强现实环境下展示与所述第一目标对象对应的第一增强现实效果。
  2. 根据权利要求1所述的方法,其中,所述基于所述第一目标识别码,确定第一目标对象和第一虚拟效果数据,包括:
    对所述第一目标识别码进行解析,得到目标识别信息;
    基于预设的识别信息与目标对象之间的映射关系,确定与所述目标识别信息对应的第一目标对象;
    基于预设的识别信息与虚拟效果数据之间的映射关系,确定与所述目标识别信息对应的第一虚拟效果数据。
  3. 根据权利要求1或2所述的方法,所述方法还包括:
    在所述第一增强现实效果展示结束的情况下,响应于在所述增强现实环境下对第二目标识别码进行的扫描操作,基于所述第二目标识别码,确定第二目标对象和第二虚拟效果数据;
    响应于在所述增强现实环境下识别到所述第二目标对象,基于所述第二虚拟效果数据,在所述增强现实环境下展示与所述第二目标对象对应的第二增强现实效果。
  4. 根据权利要求1至3中任一项所述的方法,所述方法还包括:
    响应于对所述增强现实环境的启动操作,进入所述增强现实环境。
  5. 根据权利要求4所述的方法,其中,所述在增强现实环境下对第一目标识别码进行的扫描操作,包括:在增强现实环境下,利用电子设备的图像采集装置对第一目标识别码进行的扫描操作;
    所述方法还包括:
    响应于进入所述增强现实环境,在所述增强现实环境下启动所述图像采集装置。
  6. 根据权利要求5所述的方法,所述方法还包括:
    在所述增强现实环境下,利用所述图像采集装置实时采集待识别的图像;
    对所述待识别的图像中的第一目标对象进行识别;
    所述响应于在所述增强现实环境下识别到所述第一目标对象,基于所述第一虚拟效果数据,在所述增强现实环境下展示与所述第一目标对象对应的第一增强现实效果,包括:
    响应于在所述待识别的图像中识别到所述第一目标对象,基于所述第一虚拟效果数据,在所述增强现实环境下展示与所述第一目标对象对应的第一增强现实效果。
  7. 根据权利要求4至6中任一项所述的方法,其中,所述响应于对所述增强现实环境的启动操作,进入所述增强现实环境,包括:
    响应于对所述增强现实环境的入口地址的访问操作,进入所述增强现实环境。
  8. 根据权利要求1至7中任一项所述的方法,其中,所述增强现实环境包括运行在电子设备上的用于呈现增强现实效果的应用平台、应用程序或小程序中的至少一个交互界面。
  9. 根据权利要求1至8中任一项所述的方法,其中,所述第一目标对象包括与所述第一目标识别码关联的二维对象和/或三维对象。
  10. 根据权利要求9所述的方法,其中,
    所述二维对象包括以下至少之一:展品的照片、人物的肖像图、汽车海报;
    所述三维对象包括以下至少之一:真实场景中的展品、人、建筑物、车辆。
  11. 一种显示装置,包括:
    第一确定部分,被配置为响应于在增强现实环境下对第一目标识别码进行的扫描操作,基于所述第一目标识别码,确定第一目标对象和第一虚拟效果数据;
    第一显示部分,被配置为响应于在所述增强现实环境下识别到所述第一目标对象,基于所述第一虚拟效果数据,在所述增强现实环境下展示与所述第一目标对象对应的第一增强现实效果。
  12. 根据权利要求11所述的装置,其中,所述第一确定部分还被配置为:对所述第一目标识别码进行解析,得到目标识别信息;基于预设的识别信息与目标对象之间的映射关系,确定与所述目标识别信息对应的第一目标对象;基于预设的识别信息与虚拟效果数据之间的映射关系,确定与所述目标识别信息对应的第一虚拟效果数据。
  13. 根据权利要求11或12所述的装置,其中,所述装置还包括:第二确定部分,被配置为在所述第一增强现实效果展示结束的情况下,响应于在所述增强现实环境下对第二目标识别码进行的扫描操作,基于所述第二目标识别码,确定第二目标对象和第二虚拟效果数据;第二显示部分,被配置为响应于在所述增强现实环境下识别到所述第二目标对象,基于所述第二虚拟效果数据,在所述增强现实环境下展示与所述第二目标对象对应的第二增强现实效果。
  14. 根据权利要求11至13中任一项所述的装置,其中,所述装置还包括:第一启动部分,被配置为响应于对所述增强现实环境的启动操作,进入所述增强现实环境。
  15. 根据权利要求14所述的装置,其中,所述第一确定部分还被配置为响应于在增强现实环境下利用电子设备的图像采集装置对第一目标识别码进行的扫描操作,基于所述第一目标识别码,确定第一目标对象和第一虚拟效果数据;所述装置还包括:第二启动部分,被配置为响应于进入所述增强现实环境,在所述增强现实环境下启动所述图像采集装置。
  16. 根据权利要求15所述的装置,其中,所述装置还包括:采集部分,被配置为在所述增强现实环境下,利用所述图像采集装置实时采集待识别的图像;识别部分,被配置为对所述待识别的图像中的第一目标对象进行识别;所述第一显示部分还被配置为:响应于在所述待识别的图像中识别到所述第一目标对象,基于所述第一虚拟效果数据,在所述增强现实环境下展示与所述第一目标对象对应的第一增强现实效果。
  17. 根据权利要求14至16中任一项所述的装置,其中,所述第一启动部分还被配置为响应于对所述增强现实环境的入口地址的访问操作,进入所述增强现实环境。
  18. 根据权利要求11至17中任一项所述的装置,其中,所述增强现实环境可以包括运行在电子设备上的用于呈现增强现实效果的应用平台、应用程序或小程序中的至少一个交互界面。
  19. 根据权利要求11至18中任一项所述的装置,其中,所述第一目标对象可以包括与所述第一目标识别码关联的二维对象和/或三维对象。
  20. 根据权利要求19所述的装置,其中,所述二维对象可以包括以下至少之一:展品的照片、人物的肖像图、汽车海报;所述三维对象可以包括以下至少之一:真实场景中的展品、人、建筑物、车辆。
  21. 一种电子设备,包括:
    显示屏;存储器,用于存储可执行计算机程序;
    处理器,用于执行所述存储器中存储的可执行计算机程序时,结合所述显示屏实现权利要求1至10中任一项所述的方法。
  22. 一种计算机可读存储介质,其上存储有计算机程序,用于引起处理器执行时,实现权利要求1至10中任一项所述的方法。
  23. 一种计算机程序,包括计算机可读代码,在计算机可读代码在设备上运行的情况下,设备中的处理器执行用于实现权利要求1至10中任一所述的方法。
  24. 一种计算机程序产品,配置为存储计算机可读指令,所述计算机可读指令被执行时使得计算机执行权利要求1至10中任一所述的方法。
PCT/CN2022/120170 2021-09-27 2022-09-21 显示方法、装置、设备、计算机可读存储介质、计算机程序产品及计算机程序 WO2023045964A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111134428.1A CN113867528A (zh) 2021-09-27 2021-09-27 显示方法、装置、设备及计算机可读存储介质
CN202111134428.1 2021-09-27

Publications (1)

Publication Number Publication Date
WO2023045964A1 true WO2023045964A1 (zh) 2023-03-30

Family

ID=78991006

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/120170 WO2023045964A1 (zh) 2021-09-27 2022-09-21 显示方法、装置、设备、计算机可读存储介质、计算机程序产品及计算机程序

Country Status (2)

Country Link
CN (1) CN113867528A (zh)
WO (1) WO2023045964A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117131888A (zh) * 2023-04-10 2023-11-28 荣耀终端有限公司 一种自动扫描虚拟空间二维码方法、电子设备及系统

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN113867528A (zh) * 2021-09-27 2021-12-31 北京市商汤科技开发有限公司 显示方法、装置、设备及计算机可读存储介质

Citations (7)

Publication number Priority date Publication date Assignee Title
CN111626183A (zh) * 2020-05-25 2020-09-04 深圳市商汤科技有限公司 一种目标对象展示方法及装置、电子设备和存储介质
CN111918114A (zh) * 2020-07-31 2020-11-10 北京市商汤科技开发有限公司 图像显示方法、装置、显示设备及计算机可读存储介质
CN112148197A (zh) * 2020-09-23 2020-12-29 北京市商汤科技开发有限公司 增强现实ar交互方法、装置、电子设备及存储介质
US20210201030A1 (en) * 2019-12-26 2021-07-01 Paypal Inc Securing virtual objects tracked in an augmented reality experience between multiple devices
CN113326709A (zh) * 2021-06-17 2021-08-31 北京市商汤科技开发有限公司 展示方法、装置、设备及计算机可读存储介质
CN113409474A (zh) * 2021-07-09 2021-09-17 上海哔哩哔哩科技有限公司 基于增强现实的对象展示方法及装置
CN113867528A (zh) * 2021-09-27 2021-12-31 北京市商汤科技开发有限公司 显示方法、装置、设备及计算机可读存储介质

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN102902710B (zh) * 2012-08-08 2015-08-26 成都理想境界科技有限公司 基于条形码的增强现实方法、系统及移动终端
CN108269307B (zh) * 2018-01-15 2023-04-07 歌尔科技有限公司 一种增强现实交互方法及设备
CN109360275B (zh) * 2018-09-30 2023-06-20 北京观动科技有限公司 一种物品的展示方法、移动终端及存储介质
CN110716645A (zh) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 一种增强现实数据呈现方法、装置、电子设备及存储介质
CN113127126B (zh) * 2021-04-30 2023-06-27 上海哔哩哔哩科技有限公司 对象展示方法及装置
CN113359985A (zh) * 2021-06-03 2021-09-07 北京市商汤科技开发有限公司 数据展示方法、装置、计算机设备以及存储介质

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
US20210201030A1 (en) * 2019-12-26 2021-07-01 Paypal Inc Securing virtual objects tracked in an augmented reality experience between multiple devices
CN111626183A (zh) * 2020-05-25 2020-09-04 深圳市商汤科技有限公司 一种目标对象展示方法及装置、电子设备和存储介质
CN111918114A (zh) * 2020-07-31 2020-11-10 北京市商汤科技开发有限公司 图像显示方法、装置、显示设备及计算机可读存储介质
CN112148197A (zh) * 2020-09-23 2020-12-29 北京市商汤科技开发有限公司 增强现实ar交互方法、装置、电子设备及存储介质
CN113326709A (zh) * 2021-06-17 2021-08-31 北京市商汤科技开发有限公司 展示方法、装置、设备及计算机可读存储介质
CN113409474A (zh) * 2021-07-09 2021-09-17 上海哔哩哔哩科技有限公司 基于增强现实的对象展示方法及装置
CN113867528A (zh) * 2021-09-27 2021-12-31 北京市商汤科技开发有限公司 显示方法、装置、设备及计算机可读存储介质

Cited By (1)

Publication number Priority date Publication date Assignee Title
CN117131888A (zh) * 2023-04-10 2023-11-28 荣耀终端有限公司 一种自动扫描虚拟空间二维码方法、电子设备及系统

Also Published As

Publication number Publication date
CN113867528A (zh) 2021-12-31

Similar Documents

Publication Publication Date Title
WO2023045964A1 (zh) 显示方法、装置、设备、计算机可读存储介质、计算机程序产品及计算机程序
US10839605B2 (en) Sharing links in an augmented reality environment
US9870633B2 (en) Automated highlighting of identified text
WO2023020622A1 (zh) 一种显示方法、装置、电子设备、计算机可读存储介质、计算机程序及计算机程序产品
US20190333478A1 (en) Adaptive fiducials for image match recognition and tracking
CN106982240B (zh) 信息的显示方法和装置
US20160012136A1 (en) Simultaneous Local and Cloud Searching System and Method
CN112684894A (zh) 增强现实场景的交互方法、装置、电子设备及存储介质
JP2014524062A (ja) ライブビューの拡張
US10176500B1 (en) Content classification based on data recognition
CN113342221A (zh) 评论信息引导方法、装置、存储介质及电子设备
CN112990043A (zh) 一种服务交互方法、装置、电子设备及存储介质
CN111815782A (zh) Ar场景内容的显示方法、装置、设备及计算机存储介质
CN113326709B (zh) 展示方法、装置、设备及计算机可读存储介质
WO2023155477A1 (zh) 画作展示方法、装置、电子设备、存储介质和程序产品
CN114296627B (zh) 内容显示方法、装置、设备及存储介质
CN111652986B (zh) 舞台效果呈现方法、装置、电子设备及存储介质
CN114049467A (zh) 显示方法、装置、设备、存储介质及程序产品
CN111665947B (zh) 一种宝箱展示方法、装置、电子设备及存储介质
CN114356087A (zh) 一种基于增强现实的交互方法、装置、设备及存储介质
CN116137662A (zh) 页面展示方法及装置、电子设备、存储介质和程序产品
Raposo et al. Revisiting the city, augmented with digital technologies: the SeeARch tool
CN114511671A (zh) 展品展示方法、导览方法、装置、电子设备与存储介质
CN114063785A (zh) 信息输出方法、头戴式显示设备及可读存储介质
CN113867874A (zh) 页面编辑及显示方法、装置、设备、计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22872010

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE