CN113326709B - Display method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN113326709B
CN113326709B (application CN202110672282.XA)
Authority
CN
China
Prior art keywords
user
special effect
information
article
number information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110672282.XA
Other languages
Chinese (zh)
Other versions
CN113326709A (en)
Inventor
田真
李斌
欧华富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority application: CN202110672282.XA
Publication of CN113326709A
PCT application: PCT/CN2022/085589 (published as WO2022262379A1)
Taiwan application: TW111119948A (published as TW202301188A)
Application granted
Publication of CN113326709B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10: Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/10544: Methods or arrangements for sensing record carriers by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the disclosure provide a display method, apparatus, device, and computer-readable storage medium. The method comprises: entering an augmented reality environment by scanning an information code arranged on an identification article; identifying the identification article in the augmented reality environment to obtain an article identification result; and acquiring and playing, according to number information corresponding to the article identification result, introduction content of the article object corresponding to the number information. The method increases the diversity and flexibility of the ways in which a terminal displays content when interacting with a user.

Description

Display method, device, equipment and computer readable storage medium
Technical Field
The present disclosure relates to terminal technologies, and in particular, to a display method, apparatus, device, and computer readable storage medium.
Background
At present, when a terminal interacts with a user, it typically displays the corresponding content only when the user clicks a display button, or automatically presents further pages after certain content has been shown. That is, in the related art, a terminal has few ways of displaying content when interacting with a user, and the display manner is not flexible enough.
Disclosure of Invention
Embodiments of the disclosure provide a display method, apparatus, device, and computer-readable storage medium, which can increase the diversity and flexibility of the ways in which a terminal displays content.
The technical solutions of the embodiments of the disclosure are realized as follows:
an embodiment of the disclosure provides a display method, comprising: entering an augmented reality environment by scanning an information code for an identification article, the information code being arranged on the identification article; identifying the identification article in the augmented reality environment to obtain an article identification result; and acquiring and playing, according to number information corresponding to the article identification result, introduction content of the article object corresponding to the number information.
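As a concrete illustration of the steps above, the flow can be sketched as follows. The content store, the identity-to-number mapping, and the function names are all hypothetical assumptions; the patent does not prescribe any particular implementation.

```python
# Hypothetical sketch of the claimed flow: recognize the identification
# article, map its identity to number information, then fetch the article
# object's introduction content to play. All data and names are invented.
INTRO_CONTENT = {
    "duck-001": "How this roast duck is prepared ...",
    "duck-002": "The story behind this dish ...",
}
NUMBER_BY_IDENTITY = {"card-A": "duck-001", "card-B": "duck-002"}

def recognize_identification_article(frame: dict) -> str:
    """Stand-in for AR recognition: maps a captured frame to identity info."""
    return frame["identity"]

def show(frame: dict) -> str:
    identity = recognize_identification_article(frame)  # article identification result
    number = NUMBER_BY_IDENTITY[identity]               # associated number information
    return INTRO_CONTENT[number]                        # introduction content to play
```

For example, `show({"identity": "card-A"})` returns the introduction content associated with the first card's number information.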
In the above method, the identification article includes: an article identification card; the number information includes: article number information; the article identification result includes: identity information of the article identification card, where the identity information corresponding to different article identification cards is different; and the introduction content includes: first introduction content. Acquiring and playing the introduction content of the article object corresponding to the number information according to the number information corresponding to the article identification result includes: determining the article number information associated with the identity information according to the identity information of the article identification card; and determining one class of introduction content corresponding to the article number information from preset multiple classes of introduction content of the article object, and determining, from that class of introduction content, the first introduction content corresponding to the article number information to play.
In the above method, the identification article includes: an identity identification card; the number information includes: user number information; the article identification result includes: identity information of the identity identification card, where the identity information corresponding to different identity identification cards is different; and the introduction content includes: second introduction content. Acquiring and playing the introduction content of the article object corresponding to the number information according to the number information corresponding to the article identification result includes: determining the user number information associated with the identity information according to the identity information of the identity identification card; and determining, according to the user number information, one class of introduction content corresponding to the user number information from at least one preset class of introduction content associated with the article object, and determining, from that class of introduction content, the second introduction content corresponding to the user number information to play.
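The two-stage selection described above (first one class of introduction content, then one piece of content within that class) can be sketched minimally as follows. The modulo selection rule is an assumption made for illustration; the patent leaves the actual selection rule open.

```python
# Illustrative two-stage lookup: number information picks a class of
# introduction content, then a specific piece within that class.
CATEGORIES = [
    ["making step video 1", "making step video 2"],   # class: making the food
    ["story video 1", "story video 2"],               # class: associated stories
]

def pick_introduction(number_info: int) -> str:
    category = CATEGORIES[number_info % len(CATEGORIES)]  # choose one class
    return category[number_info % len(category)]          # choose one piece
```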
The method further comprises: acquiring a face image in the augmented reality environment; generating, in a case where the face image is acquired, special effect data corresponding to the user to which the user number information belongs, and previewing the special effect data; and obtaining a special effect image or a special effect video based on the special effect data in a case where a shooting control operation is detected.
The method further comprises: sharing the special effect image or the special effect video in a case where a sharing operation is detected; or re-shooting the special effect image or the special effect video in a case where a re-shooting operation is detected.
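The shooting, sharing, and re-shooting interactions above can be sketched as a small operation handler. The operation names and the returned state/result pair are assumptions made purely for illustration.

```python
# Toy handler for the capture / share / retake operations described above.
def handle(operation, effect_data, captured=None):
    if operation == "capture":                      # shooting control operation
        return ("captured", f"image+{effect_data}")  # effect applied to the shot
    if operation == "share" and captured:           # sharing operation
        return ("shared", captured)
    if operation == "retake":                       # re-shooting operation
        return ("previewing", None)                 # back to effect preview
    return ("previewing", captured)
```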
In the above method, generating the special effect data corresponding to the user to which the user number information belongs in the case where the face image is acquired includes: acquiring material information in the case where the face image is acquired, wherein the material information characterizes special effect style information; and generating, according to the material information, the special effect data corresponding to the user to which the user number information belongs.
In the above method, the material information includes at least one of the following: a specific-date special effect, a number special effect, and a user-exclusive special effect.
In the above method, generating the special effect data corresponding to the user to which the user number information belongs according to the material information includes: in a case where the material information is a user-exclusive special effect corresponding to a registration time for a preset application program, determining the user registration time according to the user number information; and updating the user-exclusive special effect based on the user registration time to generate the special effect data.
In the above method, generating the special effect data corresponding to the user to which the user number information belongs according to the material information includes: in a case where the material information is a user-exclusive special effect corresponding to a number of accesses to a preset application program, determining the user's number of accesses according to the user number information; and updating the user-exclusive special effect based on the number of accesses to generate the special effect data.
In the above method, generating the special effect data corresponding to the user to which the user number information belongs according to the material information includes: in a case where the material information is a user-exclusive special effect corresponding to an access time for a preset application program, determining the user access time according to the user number information; and updating the user-exclusive special effect based on the user access time to generate the special effect data.
In the above method, generating the special effect data corresponding to the user to which the user number information belongs according to the material information includes: acquiring current date information in a case where the material information is the specific-date special effect; and, when the date information is specific date information, updating the specific-date special effect based on the date information to generate the special effect data.
In the above method, generating the special effect data corresponding to the user to which the user number information belongs according to the material information includes: in a case where the material information is the number special effect, updating the number special effect based on the user number information to generate the special effect data.
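The material-information cases above amount to a dispatch on the material type. The sketch below illustrates that dispatch only; the user records, material names, and output strings are invented assumptions, not the patent's data model.

```python
from datetime import date

# Hypothetical per-user records keyed by user number information.
USERS = {7: {"registered": "2021-06-01", "visits": 5, "last_access": "20:15"}}

def generate_effect(material, user_number, today=None):
    """Dispatch on material type to produce illustrative special effect data."""
    user = USERS[user_number]
    if material == "registration-time effect":
        return f"effect:registered {user['registered']}"
    if material == "access-count effect":
        return f"effect:visit #{user['visits']}"
    if material == "access-time effect":
        return f"effect:accessed at {user['last_access']}"
    if material == "specific-date effect":
        today = today or date.today()
        if (today.month, today.day) == (1, 1):  # e.g. New Year's Day
            return f"effect:happy {today.year}"
        return None                             # not a specific date: no update
    if material == "number effect":
        return f"effect:user no. {user_number}"
    raise ValueError(material)
```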
In the above method, after acquiring the face image in the augmented reality environment, the method further includes: displaying preset special effect data corresponding to the article object in a case where the face image is acquired.
In the above method, entering the augmented reality environment by scanning the information code for the identification article includes: displaying a code scanning entry by scanning the information code for the identification article; and entering the augmented reality environment in a case where a code scanning operation on the code scanning entry is detected.
In the above method, the article object is a food product. The preset multiple classes of introduction content include: preset multiple classes of introduction content for making the food product; the one class of introduction content includes: one class of introduction content for making the food product; and the first introduction content includes: one piece of content for making the food product. Alternatively, the preset multiple classes of introduction content include: preset multiple classes of story content associated with the food product; the one class of introduction content includes: one class of story content; and the first introduction content includes: one piece of story content.
An embodiment of the disclosure provides a display device, comprising: an identification unit, configured to enter an augmented reality environment by scanning an information code for an identification article, the information code being arranged on the identification article, and to identify the identification article in the augmented reality environment to obtain an article identification result; and a playing unit, configured to acquire and play, according to number information corresponding to the article identification result, introduction content of the article object corresponding to the number information.
In the above device, the identification article includes: an article identification card; the number information includes: article number information; the article identification result includes: identity information of the article identification card, where the identity information corresponding to different article identification cards is different; and the introduction content includes: first introduction content. The playing unit is further configured to determine the article number information associated with the identity information according to the identity information of the article identification card; and to determine one class of introduction content corresponding to the article number information from preset multiple classes of introduction content of the article object, and determine, from that class of introduction content, the first introduction content corresponding to the article number information to play.
In the above device, the identification article includes: an identity identification card; the number information includes: user number information; the article identification result includes: identity information of the identity identification card, where the identity information corresponding to different identity identification cards is different; and the introduction content includes: second introduction content. The playing unit is further configured to determine the user number information associated with the identity information according to the identity information of the identity identification card; and to determine, according to the user number information, one class of introduction content corresponding to the user number information from at least one preset class of introduction content associated with the article object, and determine, from that class of introduction content, the second introduction content corresponding to the user number information to play.
The device further comprises: the acquisition unit is used for acquiring the face image in the augmented reality environment; the generating unit is used for generating special effect data corresponding to the user to which the user number information belongs under the condition that the face image is acquired, and previewing the special effect data; and the acquisition unit is also used for acquiring a special effect image or a special effect video based on the special effect data under the condition that shooting control operation is detected.
The device further comprises: the sharing unit is used for sharing the special effect image or the special effect video under the condition that the sharing operation is detected; or the acquisition unit is also used for re-shooting the special effect image or the special effect video under the condition that the re-shooting operation is detected.
In the above device, the generating unit is further configured to acquire material information under the condition that the face image is acquired, where the material information characterizes the special effect style information; and generating the special effect data corresponding to the user to which the user number information belongs according to the material information.
In the above device, the material information includes at least one of the following: a specific-date special effect, a number special effect, and a user-exclusive special effect.
In the above device, the generating unit is further configured to determine the user registration time according to the user number information in a case where the material information is a user-exclusive special effect corresponding to a registration time for a preset application program, and to update the user-exclusive special effect based on the user registration time to generate the special effect data.
In the above device, the generating unit is further configured to determine the user's number of accesses according to the user number information in a case where the material information is a user-exclusive special effect corresponding to a number of accesses to a preset application program, and to update the user-exclusive special effect based on the number of accesses to generate the special effect data.
In the above device, the generating unit is further configured to determine the user access time according to the user number information in a case where the material information is a user-exclusive special effect corresponding to an access time for a preset application program, and to update the user-exclusive special effect based on the user access time to generate the special effect data.
In the above device, the generating unit is further configured to acquire current date information in a case where the material information is the specific-date special effect, and, when the date information is specific date information, to update the specific-date special effect based on the date information to generate the special effect data.
In the above device, the generating unit is further configured to update the number special effect based on the user number information to generate the special effect data in a case where the material information is the number special effect.
The device further comprises: a display unit, configured to display, after the face image is acquired in the augmented reality environment, preset special effect data corresponding to the article object in a case where the face image is acquired.
In the above device, the identification unit is further configured to display a code scanning entry by scanning an information code for identifying the article; and entering the augmented reality environment under the condition that the code scanning operation aiming at the code scanning inlet is detected.
In the above device, the article object is a food product. The preset multiple classes of introduction content include: preset multiple classes of introduction content for making the food product; the one class of introduction content includes: one class of introduction content for making the food product; and the first introduction content includes: one piece of content for making the food product. Alternatively, the preset multiple classes of introduction content include: preset multiple classes of story content associated with the food product; the one class of introduction content includes: one class of story content; and the first introduction content includes: one piece of story content.
An embodiment of the present disclosure provides an electronic device, comprising: a display; a memory for storing an executable computer program; and a processor configured to implement, in conjunction with the display, the above display method when executing the executable computer program stored in the memory.
Embodiments of the present disclosure provide a computer-readable storage medium having stored thereon a computer program for causing a processor to execute the above display method.
With the display method, apparatus, device, and computer-readable storage medium provided by the embodiments of the disclosure, the terminal enters an augmented reality environment by scanning the information code arranged on the identification article, identifies the identification article in that environment to obtain an article identification result, and acquires and plays, according to the number information corresponding to the article identification result, the introduction content of the article object corresponding to the number information. In this way, the "one object, one code" technique can be combined to play the introduction content of the article object corresponding to each identification article. This increases the diversity of the ways in which the terminal displays content when interacting with a user, and improves the flexibility of the terminal's display of content.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 is a schematic flowchart of an alternative display method provided by an embodiment of the disclosure;
Fig. 2 is a schematic flowchart of an alternative display method provided by an embodiment of the disclosure;
Fig. 3 is a schematic flowchart of an alternative display method provided by an embodiment of the disclosure;
Fig. 4 is a schematic flowchart of an alternative display method provided by an embodiment of the disclosure;
Fig. 5 is a schematic flowchart of an alternative display method provided by an embodiment of the disclosure;
Fig. 6 is a schematic flowchart of an alternative display method provided by an embodiment of the disclosure;
Fig. 7 is a schematic flowchart of an alternative display method provided by an embodiment of the disclosure;
Fig. 8A is a schematic flowchart of an alternative display method provided by an embodiment of the disclosure;
Fig. 8B is a schematic diagram of a display effect of an exemplary 3D text special effect provided by an embodiment of the disclosure;
Fig. 9A is a schematic flowchart of an alternative display method provided by an embodiment of the disclosure;
Fig. 9B is another schematic diagram of a display effect of an exemplary 3D text special effect provided by an embodiment of the disclosure;
Fig. 10 is a schematic flowchart of an alternative display method provided by an embodiment of the disclosure;
Fig. 11 is a schematic flowchart of an alternative display method provided by an embodiment of the disclosure;
Fig. 12 is a schematic flowchart of an alternative display method provided by an embodiment of the disclosure;
Fig. 13 is a schematic flowchart of an alternative display method provided by an embodiment of the disclosure;
Fig. 14 is a schematic diagram of a display effect of an exemplary roast duck postcard provided by an embodiment of the disclosure;
Fig. 15 is a schematic diagram of a display effect of an exemplary H5 loading page and of identifying a roast duck postcard in an AR environment, provided by an embodiment of the disclosure;
Fig. 16 is a schematic diagram of a display effect of a video frame during exemplary story video playback, provided by an embodiment of the disclosure;
Fig. 17 is a schematic flowchart of an alternative display method provided by an embodiment of the disclosure;
Fig. 18 is a schematic diagram of a display effect of an exemplary roast duck identification card provided by an embodiment of the disclosure;
Fig. 19 is a schematic diagram of a display effect of an exemplary H5 loading page provided by an embodiment of the disclosure;
Fig. 20 is a schematic diagram of a display effect of a video frame during exemplary enterprise advertisement video playback, provided by an embodiment of the disclosure;
Fig. 21 is a schematic diagram of a display effect of an exemplary lovely duck sticker displayed on a captured face image, provided by an embodiment of the disclosure;
Fig. 22 is a schematic diagram of a display effect of an exemplary prompt window provided by an embodiment of the disclosure;
Fig. 23 is a schematic structural diagram of a display device provided by an embodiment of the disclosure;
Fig. 24 is a schematic structural diagram of an electronic device provided by an embodiment of the disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the present disclosure clearer, the disclosure is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the disclosure; all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of protection of the present disclosure.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terminology used herein is for the purpose of describing embodiments of the disclosure only and is not intended to be limiting of the disclosure.
Before explaining the embodiments of the present disclosure in further detail, terms and terminology involved in the embodiments of the present disclosure are explained, and the terms and terminology involved in the embodiments of the present disclosure are applicable to the following explanation.
1) An applet (Mini Program), also known as a web program, is a program developed in a front-end language (e.g., JavaScript) that implements services within Hypertext Markup Language (HTML) pages. It is software downloaded by a client (e.g., a browser, or any client with an embedded browser core) via a network (e.g., the Internet) and interpreted and executed in the client's browser environment, which saves the step of installing it on the client. For example, an applet implementing a singing service may be downloaded and run in a social networking client.
2) Augmented reality (AR) is a technology that integrates real-world information with virtual-world content. By means of computing technology, it simulates entity information that would otherwise be difficult to experience within the spatial range of the real world and superimposes the resulting virtual content onto the real world, where it can be perceived by the human senses, thereby achieving a sensory experience beyond reality. After the superimposition, the real environment and the virtual object coexist in the same picture and space.
3) unionID: if a developer owns multiple mobile applications, web applications, and official accounts (including applets), the uniqueness of a user can be distinguished by the unionID, because a user's unionID is unique across the mobile applications, web applications, and official accounts (including applets) under the same application open platform account. In other words, the unionID of the same user is identical across different applications under the same open platform account.
4) H5 has two meanings: in the narrow sense, H5 is simply a programming language, the fifth generation of the Hypertext Markup Language (HTML5); in the broad sense, H5 covers most pages on the Internet that use HTML5 technology.
"One object, one code" is not a simple code-scanning promotion tool. Based on a cloud platform, it connects consumers, brand owners, and distribution channels, and provides an enterprise-level Software as a Service (SaaS) technology offering comprehensive solutions for digital marketing, anti-counterfeiting tracing, warehousing and logistics, and the like; it is an effective technology for the digital transformation of enterprise marketing.
A brand owner uses the "one object, one code" technique to assign each commodity an intelligent two-dimensional code with a digital identity. While code scanners (such as consumers, shopping guides, and channel providers) fulfil anti-counterfeiting, tracing, and other needs, the brand also completes its data collection. After a user scans the code, "one object, one code" automatically establishes a personal account for the code scanner based on the third-party tool and the brand owner involved in the scan, helping the brand collect the code scanner's basic data (such as gender and age), behavior data (such as which activities were participated in), and the like.
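The account-creation step just described can be sketched as follows. The data model and field names are invented for illustration and are not part of any "one object, one code" platform's actual interface.

```python
# Toy sketch of the "one object, one code" account flow: scanning a
# commodity's unique code creates or updates a per-scanner profile.
ACCOUNTS = {}

def on_scan(scanner_id, commodity_code, activity=None):
    """Record a scan, creating the scanner's personal account if needed."""
    account = ACCOUNTS.setdefault(scanner_id, {"scans": [], "activities": []})
    account["scans"].append(commodity_code)      # behavior data: what was scanned
    if activity:
        account["activities"].append(activity)   # behavior data: what was joined
    return account
```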
At present, when a terminal interacts with a user, it typically displays the corresponding content only when the user clicks a display button, or automatically presents further pages after certain content has been shown. That is, in the related art, a terminal has few ways of displaying content when interacting with a user, and the display manner is not flexible enough.
Embodiments of the disclosure provide a display method that can increase the diversity and flexibility of the ways in which a terminal displays content. The display method provided by the embodiments of the disclosure is applied to an electronic device.
An exemplary application of the electronic device provided by the embodiments of the present disclosure is described below. The electronic device provided by the embodiments of the present disclosure may be implemented as various types of user terminals (hereinafter referred to as terminals), such as augmented reality (AR) glasses, notebook computers, tablet computers, desktop computers, set-top boxes, and mobile devices (e.g., mobile phones, portable music players, smart watches, personal digital assistants, dedicated information devices, and portable game devices).
Fig. 1 is a schematic flow chart of an alternative method for displaying according to an embodiment of the disclosure, which will be described with reference to the steps shown in fig. 1.
S101, entering an augmented reality environment by scanning an information code of an identification article; the information code is arranged on the identification article.
In the embodiment of the disclosure, each article object corresponds to an identification article, the identification articles corresponding to different article objects are different, each identification article is provided with an information code, and the terminal can enter the AR environment by scanning the information codes on the identification articles.
In the embodiment of the present disclosure, the information code may be a two-dimensional code, a bar code, or other scannable codes, which is not limited in the embodiment of the present disclosure.
In some embodiments, the identification article may be an article identification card, or the identification article may be an identity identification card. In other embodiments, the identification article may include both an identity identification card and an article identification card.
S102, identifying the identified object in the augmented reality environment to obtain an object identification result.
In the embodiment of the disclosure, when the terminal enters the AR environment, the terminal may start its image acquisition device (for example, a camera) to acquire images, and when an identification article is captured, the terminal identifies the captured identification article to obtain an article identification result. Since each article object corresponds to one identification article and the identification articles corresponding to different article objects are different, the article identification result of each identification article is also different from that of other identification articles. It should be noted that the article identification result may be the identity information of the identification article.
In some embodiments, the terminal may recognize the identification article by recognizing the shape, color, etc. of the identification article, or by recognizing the pattern, text, etc. on the identification article.
In other embodiments, the terminal may also identify the identification article by scanning the information code on the identification article; based on this implementation, S102 described above may be implemented as follows: identifying the information code of the identification article in the AR environment to obtain the article identification result. In addition, when the terminal identifies the information code on the identification article, the article identification result may be the two-dimensional code identifier of the identification article obtained by the identification.
S103, acquiring and playing the introduction content of the object corresponding to the number information according to the number information corresponding to the object identification result.
In embodiments of the present disclosure, the identification article and the article object may be associated with each other. Illustratively, each article object corresponds to one identification article, the information codes on the identification articles corresponding to different article objects are different, and contents such as the pattern, text and color on those identification articles are also different. The article identification result of each identification article corresponds to one piece of number information, and the number information corresponding to the article identification results of different identification articles is different.
In some embodiments, the item object may be a food item, for example, a roast duck; and the identification article may be an identification card, and each roast duck corresponds to the identification card.
It should be noted that the introduction content of the article object may be at least one of a video, audio, animation, and a two-dimensional/three-dimensional virtual model. For example, the introduction content of the article object is a two-dimensional/three-dimensional introduction video associated with the article object, or a two-dimensional/three-dimensional introduction video of other articles associated with the article object; embodiments of the present disclosure are not limited thereto.
In some embodiments, in the case where the introduction content of the article object is at least one of a video, animation, and a two-dimensional/three-dimensional virtual model, the introduction content corresponding to the identification article may include the number information corresponding to the article object. For example, while the introduction content is being played, the number information may be displayed on the playing screen of the introduction content.
The description will be given taking the case where the introduction content is an introduction video as an example. When the terminal obtains the article identification result, the terminal may acquire the introduction video of the article object corresponding to the number information according to the number information corresponding to the article identification result, and play the introduction video. For example, continuing with the roast duck example, when the terminal identifies identification card A in the AR environment and obtains the article identification result corresponding to identification card A, the terminal may determine the corresponding number information according to the article identification result, and determine the introduction video of the roast duck corresponding to that number information.
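The two lookups above (article identification result → number information → introduction content) can be sketched as follows. This is a hypothetical illustration only: the identifiers, numbers and video paths are invented placeholders, not part of the disclosed method.

```python
# Hypothetical sketch of S103. All table contents are illustrative placeholders.

# Each identification article's recognition result maps to a unique piece of
# number information.
RESULT_TO_NUMBER = {
    "identification_card_a": "No.0001",
    "identification_card_b": "No.0002",
}

# Each piece of number information maps to the introduction content of the
# corresponding article object.
NUMBER_TO_INTRO = {
    "No.0001": "videos/roast_duck_0001_intro.mp4",
    "No.0002": "videos/roast_duck_0002_intro.mp4",
}

def introduction_for(article_identification_result: str) -> str:
    """Return the introduction video path for a recognized identification article."""
    number = RESULT_TO_NUMBER[article_identification_result]
    return NUMBER_TO_INTRO[number]
```

Because each identification article yields a distinct identification result, each scan resolves to its own introduction content.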
In the embodiment of the disclosure, the information code arranged on the identification article is scanned to enter the augmented reality environment, the identification article is identified in that environment to obtain the article identification result, and the introduction content of the article object corresponding to the number information is acquired and played according to the number information corresponding to the article identification result. In this way, the "one object, one code" technology can be combined to play the introduction content of the article object corresponding to each identification article; therefore, compared with the related art, this increases the diversity of the manners in which the terminal displays content when interacting with the user, and improves the flexibility of content display by the terminal.
In some embodiments of the present disclosure, S101 may be implemented through S1011-S1012, and fig. 2 is an optional flowchart of a display method provided by an embodiment of the present disclosure, which will be described with reference to the steps shown in fig. 2.
S1011, displaying a code scanning entry by scanning the information code of the identification article.
S1012, in a case where a code scanning operation for the code scanning entry is detected, entering an augmented reality environment.
In an embodiment of the disclosure, after scanning the information code on the identification article, the terminal may display a code scanning entry and jump to the AR environment if a code scanning operation of the user for the code scanning entry is detected.
In some embodiments, the terminal may jump to the H5 loading page in the case of scanning the information code on the identification item, and display a scan code entry on the H5 loading page, and jump from the H5 loading page to the AR environment in the case of detecting a scan code operation of the user for the scan code entry on the H5 loading page.
In some embodiments, the terminal may display an opening prompt message, a confirmation portal and a return portal when receiving a code scanning operation of a user for the code scanning portal, prompt the user whether to open the camera by opening the prompt message, and open the camera and enter the AR environment when detecting a confirmation operation of the user for the confirmation portal; in the case that the return operation of the user for the return entry is detected, the return continues to display the H5 load page, or returns to other pages.
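The confirmation flow described above might be dispatched roughly as follows; the state names and return values are assumptions for illustration only, not part of the disclosure.

```python
# Hypothetical sketch of the camera-opening prompt: a confirm operation opens
# the camera and enters the AR environment; a return operation goes back to
# the H5 loading page. State names are invented for illustration.

def on_prompt_action(action: str) -> str:
    """Dispatch the user's action on the camera-opening prompt window."""
    if action == "confirm":
        return "camera_on:ar_environment"
    if action == "return":
        return "h5_loading_page"
    return "show_prompt"  # no recognized action yet: keep showing the prompt
```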
It will be appreciated that an "entry" may represent a triggerable virtual button such as a link or control, for example, a "swipe code entry" may represent a swipe code link or swipe code control.
In the embodiment of the disclosure, a code scanning entry is displayed by scanning the information code of the identification article, and the AR environment is entered when a code scanning operation for the code scanning entry is detected; this improves the intelligence of the terminal.
In some embodiments of the present disclosure, the above S103 may be implemented through S1031-S1032; taking the implementation based on fig. 1 as an example, the description will be made with reference to the steps shown in fig. 3.
S1031, determining article number information associated with the identity information according to the identity information of the article identification card; the identification article includes: an article identification card; the number information includes: the article number information; the article identification result includes: the identity information of the article identification card; the identity information corresponding to different article identification cards is different; the introduction content includes: a first introduction content.
S1032, determining one type of introduction content corresponding to the article number information from the preset multiple types of introduction content of the article object, and determining a first introduction content corresponding to the article number information from the one type of introduction content for playing.
In an embodiment of the disclosure, the identifying article may include an article identification card, the identifying of the article identification card may obtain identity information of the article identification card, and article number information associated with the identity information of the article identification card may be obtained according to the identity information of the article identification card, and identity information corresponding to different article identification cards is different, and article number information associated with the identity information of different article identification cards is also different. For example, the article object K1 corresponds to the article identification card M1, the article object K2 corresponds to the article identification card M2, the identity information F1 is obtained by identifying the article identification card M1, the associated number information B1 is the article number information of the article object K1 according to the F1, the identity information F2 is obtained by identifying the article identification card M2, the associated number information B2 is the article number information of the article object K2 according to the F2, and the B1 and the B2 are different.
For example, in the case where the item object is a roast duck, the item identification card may be a postcard, and the item number information may be a number of each roast duck.
In an embodiment of the disclosure, the object corresponds to a preset plurality of types of introductions, and each type of introductions includes at least one introductions, and each item number information corresponds to one of the plurality of types of introductions. When the terminal obtains the article number information, according to the article number information, a corresponding type of introduction content is determined from a plurality of types of introduction contents corresponding to the article object, and a first introduction content corresponding to the article number information is determined from the type of introduction content.
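A minimal sketch of this two-level selection (S1031-S1032) is given below; the identity information values, number information, type names and introduction contents are invented placeholders, under the assumption that each article number selects both a type and one item within it.

```python
# Hypothetical sketch of S1031-S1032: identity information resolves to article
# number information, which selects one type of introduction content and one
# first introduction content within that type. All mappings are illustrative.

# Identity information of each article identification card -> article number.
IDENTITY_TO_NUMBER = {"F1": "B1", "F2": "B2"}

# Article number -> (type of introduction content, index within that type).
NUMBER_TO_SELECTION = {"B1": ("roasting", 0), "B2": ("seasoning", 1)}

INTRO_TYPES = {
    "roasting": ["charcoal-roasting introduction", "oven-roasting introduction"],
    "seasoning": ["classic-sauce introduction", "spicy-sauce introduction"],
}

def first_introduction(identity_info: str) -> str:
    """Resolve identity information to the first introduction content to play."""
    number = IDENTITY_TO_NUMBER[identity_info]
    intro_type, index = NUMBER_TO_SELECTION[number]
    return INTRO_TYPES[intro_type][index]
```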
In some embodiments, where the terminal identifies an information code on the item identification card, the item identification result may be an information code identification of the item identification card.
In the embodiment of the disclosure, the introduction content corresponding to the article number information is determined to be played through the article number information corresponding to the article identification card, so that the diversity of the mode of displaying the content by the terminal when the terminal interacts with the user is increased, and the flexibility of displaying the content by the terminal is improved.
In some embodiments of the present disclosure, the above S1032 may be implemented by S1:
S1, according to the article number information, determining one type of food preparation introduction content corresponding to the article number information from preset multiple types of food preparation introduction contents, and determining, from the one type of food preparation introduction content, one food preparation introduction content corresponding to the article number information for playing; the article object is a food, and the preset multiple types of introduction contents include: the preset multiple types of food preparation introduction contents; the one type of introduction content includes: the one type of food preparation introduction content; the first introduction content includes: the one food preparation introduction content.
In some embodiments of the present disclosure, the article object is a food, so the preset multiple types of introduction contents corresponding to the article object are the preset multiple types of food preparation introduction contents, and the first introduction content is one food preparation introduction content; each piece of article number information corresponds to a certain content within a certain type of food preparation introduction content among the preset multiple types. When the terminal obtains the article number information, the terminal may determine the corresponding type of food preparation introduction content from the preset multiple types according to the article number information, determine the one food preparation introduction content corresponding to the article number information from that type, and play it.
Illustratively, the food may be roast duck; the preset multiple types of food preparation introduction contents may be introductions of all preparation links of the roast duck; one type of food preparation introduction content may be the introduction of a certain preparation link of the roast duck; and one food preparation introduction content (the first introduction content) may be the introduction of a certain preparation method adopted in that preparation link.
In some embodiments of the present disclosure, the above S1032 may also be implemented by S2:
S2, according to the article number information, determining one type of story content corresponding to the article number information from preset multiple types of story contents, and determining, from the one type of story content, one story content corresponding to the article number information for playing; the article object is a food; the preset multiple types of introduction contents include: the preset multiple types of story contents associated with the food; the one type of introduction content includes: the one type of story content; the first introduction content includes: the one story content.
In some embodiments of the present disclosure, the article object is a food, so the preset multiple types of introduction contents corresponding to the article object are the preset multiple types of story contents related to the food, and the first introduction content is one story content related to the food; each piece of article number information corresponds to a certain type of story content among the preset multiple types, and to one story content within that type. When the terminal obtains the article number information, the terminal determines the corresponding type of story content from the preset multiple types of story contents related to the food according to the article number information, determines the one story content corresponding to the article number information from that type, and plays it.
Illustratively, the food may be roast duck, and the preset multiple types of story contents related to the food may be various types of story contents related to the roast duck, for example, family stories, love stories, friendship stories, humorous stories, etc. In the case where the one type of story content is family stories, one story content (the first introduction content) may be the family story content of a certain user; in the case where the one type of story content is love stories, one story content related to the food may be the love story content of a certain user; and in the case where the one type of story content is friendship stories, one story content may be the friendship story content of a certain user.
In the embodiment of the disclosure, one food preparation introduction content or one story content corresponding to the article number information is played according to the article number information, which increases the diversity of the introduction contents played by the terminal.
In some embodiments of the present disclosure, the above S103 may also be implemented by S1033-S1034; taking the implementation based on fig. 1 as an example, the description will be made with reference to the steps shown in fig. 4.
S1033, determining user number information associated with the identity information according to the identity information of the identity identification card; the identification article includes: an identity identification card; the number information includes: the user number information; the article identification result includes: the identity information of the identity identification card; the identity information corresponding to different identity identification cards is different; the introduction content includes: a second introduction content.
S1034, according to the user number information, determining one type of introduction content corresponding to the user number information from at least one type of preset introduction content associated with the article object, and determining, from the one type of introduction content, a second introduction content corresponding to the user number information for playing.
In the embodiment of the disclosure, the identification article may include an identity identification card; the identity identification card is identified to obtain its identity information, and the associated user number information may be obtained according to that identity information, where the user number information is the user number information of the user to which the article object corresponding to the identity identification card belongs. The identity information of different identity identification cards is different, and the user number information associated with the identity information of different identity identification cards is also different. For example, the article object K1 corresponds to identity identification card S1, and the article object K2 corresponds to identity identification card S2; identifying S1 obtains identity information f1, from which the user number information Y1 of the user to which K1 belongs is obtained; identifying S2 obtains identity information f2, from which the user number information Y2 of the user to which K2 belongs is obtained; and Y1 is different from Y2.
Illustratively, in the case where the article object is a roast duck, the identity identification card may be a roast duck identity card, each roast duck has one roast duck identity card, and the user number information may be the card number of the roast duck identity card, representing the number of the user who purchased each roast duck.
In an embodiment of the present disclosure, the object corresponds to at least one type of preset introductions associated with the object, and each type of introductions includes at least one introductions, and each user number information corresponds to one of at least one type of preset introductions associated with the object. When the terminal obtains the user number information, one type of introduction content can be determined from the at least one type of preset introduction content associated with the object according to the user number information, and a second introduction content corresponding to the user number information can be determined from the one type of introduction content.
Illustratively, in the case where the food is roast duck, the at least one type of preset introduction content associated with the article object may be publicity introduction contents of different aspects of the enterprises (or merchants, etc.) related to the roast duck; for example, it may include: an introduction to the enterprise culture, an introduction to the enterprise scale, an introduction to the enterprise development history, and the like. In the case where one type of introduction content is the introduction of the enterprise culture, a certain introduction content (the second introduction content) in that type may be an introduction of one aspect of the enterprise culture; in the case where one type is the enterprise scale introduction, a certain introduction content in that type may be the scale introduction of a certain subsidiary, factory or department of the enterprise; and in the case where one type is the enterprise development history, a certain introduction content in that type may be the development history of a certain period within the whole enterprise development history.
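A comparable sketch for the user-number-keyed selection (S1033-S1034) follows; the user numbers, type names and introduction texts are invented placeholders rather than anything specified by the disclosure.

```python
# Hypothetical sketch of S1033-S1034: user number information selects one type
# of preset introduction content associated with the article object, then one
# second introduction content within that type. All mappings are illustrative.

USER_NUMBER_TO_TYPE = {"Y1": "enterprise_culture", "Y2": "development_history"}

ENTERPRISE_INTROS = {
    "enterprise_culture": {"Y1": "core-values introduction"},
    "development_history": {"Y2": "1990s development history"},
}

def second_introduction(user_number: str) -> str:
    """Resolve user number information to the second introduction content."""
    intro_type = USER_NUMBER_TO_TYPE[user_number]
    return ENTERPRISE_INTROS[intro_type][user_number]
```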
In some embodiments, when the terminal recognizes the information code on the identity identification card, the article identification result may be the information code identifier of the identity identification card.
In the embodiment of the disclosure, the introduction content corresponding to the user number information is determined and played through the user number information corresponding to the identity identification card, which increases the diversity of the manners in which the terminal displays content when interacting with the user, and improves the flexibility of content display by the terminal.
In some embodiments of the present disclosure, S201-S203 may be further included after S101; taking the case where S201-S203 are performed after S103 in fig. 1 as an example, the description will be made with reference to the steps shown in fig. 5.
S201, acquiring a face image in an augmented reality environment.
S202, generating special effect data corresponding to the user to which the user number information belongs under the condition that the face image is acquired, and previewing.
In the embodiment of the disclosure, the terminal may collect a face image in an AR environment, generate special effect data corresponding to a user to which user number information belongs in the case of collecting the face image, and display the special effect data on a screen for previewing.
In the embodiment of the present disclosure, the special effect data may be 3D special effect data, two-dimensional special effect data, or the like. For example, the special effects data may be a 3D special effects decal.
In some embodiments, the terminal may enter the AR environment after the playing of the first introduction content or the second introduction content ends, and collect the face image. The end of playing may mean that the playing ends after the first introduction content or the second introduction content finishes playing all of its content, or may mean that the playing stops when a stop-playing operation of the user is received during playing.
In other embodiments, the terminal may also enter the AR environment and collect the face image while the first introduction or the second introduction is being played. In this manner, the terminal may play the first introduction content or the second introduction content through one window on the same page, and collect the face image in the AR environment through another window on the page, which may also be implemented in other manners, and the embodiment of the present disclosure is not limited to this.
And S203, obtaining a special effect image or a special effect video based on the special effect data under the condition that shooting control operation is detected.
In the embodiment of the disclosure, the terminal may acquire a corresponding special effect image or special effect video according to the acquired face and the previewed special effect data under the conditions that the face image is acquired and the shooting control operation of the user is detected; for example, in the case where the photographing control operation is a photographing operation, the terminal may photograph to obtain a special effect image; and in the case that the shooting control operation is a video recording operation, the terminal can shoot to obtain the special effect video.
In some embodiments, the terminal may display the generated special effect data around the face or body of the user in case of detecting the face image, and control the special effect data to move simultaneously in case of the face movement of the user. In other embodiments, the terminal may match and display the generated special effect data at the face position of the user according to the size of the face of the user in the case of detecting the face image, for example, in the case of the special effect data being a mask sticker, the terminal may display a mask sticker suitable for the size of the face of the user on the face of the user according to the size of the face of the user in the collected face image.
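The face-size matching described above can be sketched as a simple scaling step; the (x, y, width, height) face-box format and uniform scaling are assumptions, since the disclosure does not specify them.

```python
# Hypothetical sketch: scale a mask sticker so its width matches the width of
# the detected face before overlaying it. The face box is assumed to be
# (x, y, width, height); scaling is assumed to be uniform.

def fit_sticker_to_face(sticker_size: tuple, face_box: tuple) -> tuple:
    """Return the sticker (width, height) scaled to the detected face width."""
    sticker_w, sticker_h = sticker_size
    _x, _y, face_w, _face_h = face_box
    scale = face_w / sticker_w
    return (round(sticker_w * scale), round(sticker_h * scale))
```

When the face moves or its size changes between frames, re-running this fit on each new face box keeps the displayed special effect data matched to the face.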
In the embodiment of the disclosure, a terminal acquires a face image in an augmented reality environment, generates special effect data corresponding to a user to which user number information belongs and previews the special effect data under the condition that the face image is acquired, and obtains a special effect image or a special effect video based on the special effect data under the condition that shooting control operation is detected; the flexibility and diversity of content display of the terminal in the process of interaction with the user are further improved.
In some embodiments of the present disclosure, after the step S203, the method further includes: s104 or S105, fig. 6 is an optional flowchart of a display method provided in an embodiment of the present disclosure, and will be described with reference to the steps shown in fig. 6.
And S104, sharing the special effect image or the special effect video under the condition that the sharing operation is detected.
In the embodiment of the disclosure, the terminal may share the captured special effect image or special effect video to the user or the website designated by the sharing operation under the condition that the sharing operation of the user is detected.
In some embodiments, after capturing the special effect image or the special effect video, the terminal may display a prompt window on a display page of the special effect image or the special effect video, display the sharing portal through the prompt window, and share the captured special effect image or the captured special effect video to the user or the website designated by the sharing operation when the triggering operation of the user on the sharing portal is detected.
S105, re-shooting the special effect image or the special effect video under the condition that the re-shooting operation is detected.
In the embodiment of the disclosure, when the re-shooting operation is detected, the terminal returns to the AR environment, continues to collect the face in the AR environment, and re-shoots to obtain the special effect image or special effect video based on the special effect data when the shooting control operation is detected.
In some embodiments, after capturing the special effect image or the special effect video, the terminal may display a prompt window on a display page of the special effect image or the special effect video, display a re-shooting entrance through the prompt window, return to the AR environment when detecting a triggering operation of the user for the re-shooting entrance, and continue face collection in the AR environment, and re-capture based on the special effect data to obtain the special effect image or the special effect video when detecting a capturing control operation.
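The prompt-window branching for sharing versus re-shooting (S104/S105) might be dispatched as follows; the operation names, target parameter and state strings are assumptions for illustration only.

```python
# Hypothetical sketch of the prompt window after shooting: a share operation
# forwards the captured result to a designated target; a re-shoot operation
# returns to the AR environment. Names and states are invented placeholders.

def handle_shot_result(operation: str, result: str, target: str = "") -> str:
    """Dispatch the user's operation on a captured special effect image/video."""
    if operation == "share":
        return f"shared {result} to {target}"
    if operation == "reshoot":
        return "return_to_ar_environment"
    return "keep_displaying_result"
```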
In the embodiment of the disclosure, the terminal shares the special effect image or the special effect video when the sharing operation is detected, or shoots the special effect image or the special effect video again when the re-shooting operation is detected; the intelligence of the terminal is improved.
In some embodiments of the present disclosure, after the step S201, the method further includes:
s204, under the condition that the face image is acquired, displaying preset special effect data corresponding to the object.
In the embodiment of the disclosure, the object corresponds to preset special effect data, and the terminal can display the preset special effect data under the condition that the terminal acquires the face image.
In some embodiments, the terminal may match and display preset effect data at a user's face position according to a size of the user's face, and change partial data of the effect data displayed at the user's face position according to a change in an expression of the user's face. For example, in the case where the special effect data is a mask sticker, the terminal may display a mask sticker suitable for the size of the user's face on the user's face according to the size of the user's face in the collected face image, and change the facial expression, shape, color, etc. of the mask according to the change of the facial expression of the user.
For example, in the case that the object is a roast duck, the preset special effect data may be a special effect sticker of the cartoon duck, for example, a cartoon duck mask, etc., and the terminal may display a cartoon duck mask sticker suitable for the face size of the user on the face of the user according to the size of the user face in the collected face image, and change the expression of the cartoon duck mask according to the expression change of the face of the user.
In the embodiment of the disclosure, under the condition that a terminal acquires a face image, preset special effect data corresponding to an object is displayed; the diversity of the content displayed by the terminal in the process of interacting with the user is further improved.
In some embodiments of the present disclosure, the above S202 may be implemented by S2021-S2022; taking the implementation of S202 in fig. 5 through S2021-S2022 as an example, the description will be made with reference to the steps shown in fig. 7.
S2021, under the condition that the face image is acquired, acquiring material information, wherein the material information represents special effect style information.
S2022, generating special effect data corresponding to the user to which the user number information belongs according to the material information.
In the embodiment of the disclosure, each piece of user number information corresponds to one user, and the user number information corresponding to different users is different. When the face image is acquired, the terminal may acquire the special effect style information used to generate the special effect data, and generate the special effect data corresponding to the user to which the user number information belongs according to the acquired special effect style information.
In the embodiment of the disclosure, under the condition that a face image is acquired, material information is acquired, the material information characterizes special effect style information, and special effect data corresponding to a user to which user number information belongs is generated according to the material information; the diversity of the displayed content is improved when the terminal interacts with the user.
In some embodiments of the present disclosure, the material information includes at least one of: a specific-date special effect, a number special effect, and a user-specific special effect. It may be appreciated that the material information can be a special effect style template for generating special effect data. For example, the specific-date special effect may be one or more style templates for generating all specific-date special effects, where a specific date may be the Dragon Boat Festival, the Spring Festival, etc.; the number special effect may be one or more style templates for generating all number special effects, where a number may be 100, 203, etc.; and the user-specific special effect may be one or more style templates for generating the user-specific special effect corresponding to each user.
In some embodiments of the present disclosure, before generating the special effect data, the terminal may further acquire user information, for example, the user's registration time, number of accesses, and access time with respect to a preset application program; the terminal can then select the corresponding user-specific special effect according to the user information. It may be appreciated that the article object may correspond to a preset application program, which may be an applet or a third-party application program; embodiments of the present disclosure are not limited in this respect. The registration time may be the date on which the user first created an account on the preset application program; the number of accesses may be the historical number of times the user has logged in to (accessed) the preset application program since the registration time; and the access time may be the date of each login to the preset application program together with the number of days elapsed since the previous login.
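The user information fields described above can be sketched as a small record type; the field names and the day-count helper below are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class UserInfo:
    # All field names are illustrative; the disclosure only specifies
    # that these three kinds of user information may be acquired.
    user_number: str         # user number information (unique per user)
    registration_time: date  # date the account was first created
    access_count: int        # historical number of logins since registration
    last_access_time: date   # date of the most recent login

    def days_since_last_access(self, today: date) -> int:
        """Days elapsed since the previous login, as used by the access-time effect."""
        return (today - self.last_access_time).days

info = UserInfo("100", date(2020, 6, 1), 20, date(2021, 6, 10))
```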
In some embodiments of the present disclosure, the above S2022 may be implemented by S301-S302; fig. 8A is a schematic flow chart of an alternative method for displaying provided in an embodiment of the disclosure, and will be described with reference to the steps shown in fig. 8A.
S301, determining user registration time according to user number information under the condition that the material information is a special effect of the user corresponding to the registration time of a preset application program.
S302, updating the special effects of the user based on the user registration time to generate special effect data.
In the embodiment of the present disclosure, when the material information is the user-specific special effect corresponding to the registration time for the preset application program, the terminal may determine, according to the user number information, the user (e.g., the user name, etc.) to which the user number information belongs, and further determine the registration time of the user. The terminal then updates the user-specific special effect corresponding to the registration time with the user's registration time, and takes the updated user-specific special effect as the special effect data corresponding to the user.
It may be appreciated that the user-specific special effect corresponding to the registration time may be a registration-time special effect style template, so that the special effect data generated for a certain user after updating corresponds to that user's registration time. For example, after acquiring the user's registration time, the terminal may push a specific style of special effect related to how early or late the registration time is. For example, when the user registered early, the generated special effect data may be the 3D text special effect "Old friend of the store since xx/xx/xx, hello"; fig. 8B is a schematic diagram illustrating an exemplary display effect of the 3D text effect on a screen of the terminal, and as shown in fig. 8B, the 3D text effect may be displayed on one side of the face image.
In some embodiments of the present disclosure, the terminal may update the user-specific special effect based on the user registration time in any of the following ways. The terminal may generate a layer containing the text of the user registration time and superimpose it on the layer where the user-specific special effect corresponding to the registration time is located, obtaining the updated user-specific special effect (the generated special effect data) through layer superposition. Alternatively, the terminal may generate an image from the user registration time and fuse this image with the image of the user-specific special effect corresponding to the registration time, obtaining the updated user-specific special effect through image fusion. Alternatively, the terminal may render the special effect corresponding to the registration time with the registration time; for example, when the user-specific special effect corresponding to the registration time is a special effect sticker, the terminal may render the sticker area corresponding to the registration time according to the registration time, obtaining the updated user-specific special effect through area rendering. It will be appreciated that there are many ways to generate the special effect data, and the present disclosure is not limited in this respect.
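As a minimal sketch of the first two strategies (text-layer superposition and image fusion): the template string, its hypothetical `{date}` placeholder, and the per-pixel blend are all illustrative assumptions, not the disclosed implementation.

```python
def render_registration_layer(template: str, registration_time: str) -> str:
    # Fill the registration-time slot of a style template; the template
    # text and its "{date}" placeholder are assumptions for illustration.
    return template.format(date=registration_time)

def blend_pixel(base, overlay, alpha):
    # Naive per-pixel fusion of two RGB tuples, standing in for the
    # image-fusion variant described above.
    return tuple(round(b * (1 - alpha) + o * alpha) for b, o in zip(base, overlay))

effect_text = render_registration_layer("Old friend since {date}, hello!", "2020-06-01")
fused = blend_pixel((0, 0, 0), (255, 255, 255), 0.5)
```

In a real renderer the blend would run over whole images (e.g. via an image library's alpha compositing) rather than single pixels.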
In the embodiment of the disclosure, when the material information is a special effect of a user corresponding to a registration time of a preset application program, determining a user registration time according to user number information, and updating the special effect of the user based on the user registration time to generate special effect data; the diversity of special effect data generated by the terminal is increased.
In some embodiments of the present disclosure, the above S2022 may be implemented by S401-S402; fig. 9A is a schematic flow chart of an alternative method for displaying provided in an embodiment of the disclosure, and will be described with reference to the steps shown in fig. 9A.
S401, determining the access times of the user according to the user number information under the condition that the material information is the special effect of the user corresponding to the access times of the preset application program.
And S402, updating the special effects of the user based on the access times of the user to generate special effect data.
In the embodiment of the present disclosure, when the material information is the user-specific special effect corresponding to the number of accesses for the preset application program, the terminal may determine, according to the user number information, the user (e.g., the user name, etc.) to which the user number information belongs, and further determine the historical number of times the user has accessed the preset application program. The terminal then updates the user-specific special effect corresponding to the number of accesses with the user's number of accesses, and takes the updated user-specific special effect as the special effect data corresponding to the user.
It may be understood that the user-specific special effect corresponding to the number of accesses may be an access-count special effect style template, so that the special effect data generated for a certain user after updating corresponds to that user's number of accesses. After obtaining the number of accesses of the user, the terminal may generate the corresponding special effect data according to the number of accesses; for example, in the case of 20 accesses, the terminal may generate the 3D text special effect "This is your 20th visit". Fig. 9B is a schematic diagram illustrating an exemplary display effect of the 3D text effect on a screen of the terminal, and as shown in fig. 9B, the 3D text effect may be displayed at an upper position of the face image. After obtaining the user's number of accesses, the terminal may also push a specific style of special effect related to the number of accesses for the user.
It should be noted that, here, the principle of updating the user-specific special effect based on the number of user accesses is the same as the above principle of updating the user-specific special effect based on the user registration time.
In the embodiment of the disclosure, under the condition that the material information is a special effect of a user corresponding to the access times of a preset application program, determining the access times of the user according to the user number information, and updating the special effect of the user based on the access times of the user to generate special effect data; the diversity of special effect data generated by the terminal is increased.
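One possible reading of the access-count update is a text template chosen by the count; the tiers and wordings below are hypothetical, not specified by the disclosure:

```python
def access_count_effect(count: int) -> str:
    # Pick a style template by the number of accesses; the thresholds
    # and texts are illustrative assumptions.
    if count == 1:
        template = "Welcome to your first visit!"
    elif count % 10 == 0:
        template = "This is visit No. {n}, a milestone!"
    else:
        template = "This is visit No. {n}"
    return template.format(n=count)
```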
In some embodiments of the present disclosure, the above S2022 may be implemented by S501-S502; fig. 10 is a schematic flow chart of an alternative method for displaying provided in the embodiment of the present disclosure, and will be described with reference to the steps shown in fig. 10.
S501, determining user access time according to user number information under the condition that the material information is a special effect of the user corresponding to the access time of the preset application program.
S502, updating the special effect of the user based on the access time of the user to generate special effect data.
In the embodiment of the present disclosure, when the material information is the user-specific special effect corresponding to the access time for the preset application program, the terminal may determine, according to the user number information, the user (e.g., the user name, etc.) to which the user number information belongs, and further determine the number of days elapsed between the user's previous access and the most recent access to the preset application program. The terminal then updates the user-specific special effect corresponding to the access time with this elapsed time, and takes the updated user-specific special effect as the special effect data corresponding to the user.
In some embodiments, the user-specific special effect corresponding to the access time may be an access time special effect pattern template, so that the special effect data corresponding to the certain user generated after the update may be special effect data corresponding to the access time of the certain user. For example, the generated special effects data corresponding to the user may be 3D text special effects "xxx days have passed since you last visited bookstore".
In some embodiments, in the process of generating the special effect data corresponding to the access time of a certain user, the terminal may determine, according to the time when that user last accessed the preset application program, the access-time special effect data last generated for that user. The terminal may then select, from multiple access-time special effect style templates, a style template different from the one used to generate the previous access-time special effect data, and generate the special effect data corresponding to the user's access time by filling the selected template with the elapsed time between the user's previous access and the most recent access.
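The template-rotation rule above (never reuse the template of the previous access-time effect) can be sketched as a simple cycle over the template pool; cycling in order is one possible reading, not the only compliant policy:

```python
def pick_access_time_template(templates: list, last_used_index) -> int:
    # Return the index of the next access-time style template, avoiding
    # the one used for the previous effect by cycling through the pool.
    if last_used_index is None:  # no access-time effect generated yet
        return 0
    return (last_used_index + 1) % len(templates)
```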
It should be noted that, here, the principle of updating the user-specific special effect based on the user access time is the same as the above principle of updating the user-specific special effect based on the user registration time.
In the embodiment of the disclosure, when the material information is a special effect of a user corresponding to access time of a preset application program, determining access time of the user according to user number information, and updating the special effect of the user based on the access time of the user to generate special effect data; the diversity of special effect data generated by the terminal is increased.
In some embodiments of the present disclosure, the above S2022 may be implemented by S601-S602; fig. 11 is a schematic flow chart of an alternative method for displaying provided in an embodiment of the present disclosure, and will be described with reference to the steps shown in fig. 11.
S601, acquiring current date information when the material information is specific to a specific date.
S602, when the date information is specific date information, updating the specific date special effects based on the date information to generate special effect data.
In the embodiment of the disclosure, when the material information is the specific-date special effect, the terminal acquires the current date information; when the acquired current date is a specific date (for example, a legal holiday, a Saturday, a Sunday, etc.), the terminal updates the specific-date special effect with the acquired current date information and takes the updated specific-date special effect as the special effect data corresponding to the user.
It may be understood that the specific-date special effect corresponding to the date information may be a specific-date special effect style template, so that the generated special effect data corresponds to the date information. There may be one specific-date special effect style template shared by all specific dates, or a separate template for each specific date. For example, when the date information is the Dragon Boat Festival, the generated special effect data may be special effect data for the Dragon Boat Festival, for example, a sticker containing rice dumplings and a dragon boat, or the 3D text special effect "Happy Dragon Boat Festival", etc.
It should be noted that, here, the principle of updating the special date special effect based on the acquired date information is the same as the above principle of updating the user-specific special effect based on the user registration time.
In the embodiment of the disclosure, when the material information is specific date special effects, current date information is acquired, and when the date information is specific date information, the specific date special effects are updated based on the date information, so that special effect data are generated; the diversity of special effect data generated by the terminal is increased.
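A minimal sketch of the specific-date check, assuming a hypothetical month/day table (2021's Dragon Boat Festival fell on June 14) and treating weekends as specific dates per the text above:

```python
from datetime import date
from typing import Optional

# Hypothetical mapping of (month, day) to effect text; a real deployment
# would need year-aware lunar-calendar dates for festivals like this one.
SPECIAL_DATES = {(6, 14): "Happy Dragon Boat Festival"}

def specific_date_effect(today: date) -> Optional[str]:
    text = SPECIAL_DATES.get((today.month, today.day))
    if text:
        return text
    if today.weekday() >= 5:  # Saturday (5) or Sunday (6)
        return "Happy weekend"
    return None               # not a specific date: no effect generated
```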
In some embodiments of the present disclosure, the above S2022 may be implemented by S701; s2022 in fig. 7 will be described by taking an implementation as an example by S701 in conjunction with the steps shown in fig. 12.
And S701, updating the serial special effects based on the user serial information to generate special effect data when the material information is the serial special effects.
In the embodiment of the present disclosure, when the material information is the number special effect, the terminal may update the number special effect with the acquired user number information of a certain user, and take the updated number special effect as the special effect data corresponding to the user to which that user number information belongs. For example, the special effect data corresponding to a certain user may be a sticker; for example, when the user number information is 100, the corresponding special effect data may be a sticker reading "You are the 100th person to xxxx".
In the embodiment of the disclosure, under the condition that the material information is serial number special effects, based on user serial number information, the serial number special effects are updated to generate special effect data; the diversity of special effect data generated by the terminal is increased.
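The number-effect update reduces to filling the user number into a template; in the sketch below the elided "xxxx" activity text is passed in as a parameter rather than guessed, and the sentence template is an illustrative assumption:

```python
def number_effect(user_number: int, activity: str) -> str:
    # "activity" stands in for the elided "xxxx" in the example above.
    return f"You are the No. {user_number} person to {activity}"
```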
The special effect data generated above and corresponding to the user includes: the special effect data generated by updating the user-specific special effect corresponding to the registration time, the special effect data generated by updating the user-specific special effect corresponding to the number of user accesses, the special effect data generated by updating the user-specific special effect corresponding to the user access time, the special effect data generated by updating the specific-date special effect corresponding to the date information, and the special effect data generated by updating the number special effect corresponding to the user number information. When at least one of the above five kinds of special effect data exists, the at least one kind of special effect data and the preset special effect data corresponding to the article object may be displayed superimposed at the same time or displayed sequentially, which is not limited in the embodiments of the disclosure.
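The two display modes described above (superimposed at the same time, or one after another) can be sketched as a small scheduling helper; the frame-list return shape is an illustrative assumption:

```python
def schedule_effects(effects: list, mode: str = "sequential") -> list:
    # "superimposed": one display frame containing every effect at once;
    # "sequential": one frame per effect, shown in order.
    if mode == "superimposed":
        return [tuple(effects)]
    return [(e,) for e in effects]
```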
The following exemplarily describes the above display method in combination with several practical application scenarios, taking as an example a scenario in which the article object is a roast duck, the introduction content is an introduction video, the article identification card is a roast duck postcard, and the identity card is a roast duck identity card.
Fig. 13 is a schematic flow chart of an alternative method for displaying provided in an embodiment of the present disclosure, and will be described with reference to the steps shown in fig. 13.
S11, the terminal jumps to an H5 loading page by scanning the two-dimensional code of the roast duck postcard, and displays a code scanning control and code scanning prompt information through the H5 loading page.
And S12, the terminal enters an AR environment under the condition that the terminal detects the code scanning operation aiming at the code scanning control.
S13, the terminal identifies the roast duck postcard in the AR environment to obtain a two-dimensional code identification of the roast duck postcard.
Illustratively, fig. 14 shows a schematic illustration of the display effect of a roast duck postcard. Fig. 15 shows a schematic display effect of an H5 loading page and of identifying a roast duck postcard in an AR environment. As shown in fig. 15, the code scanning control is the "…" control at the upper right corner of the H5 loading page, and the code scanning prompt information is "click the upper right corner '…' and select the xxx browser to scan the roast duck postcard cover"; when receiving the user's code scanning operation for the "…" control, the terminal jumps to the AR environment and identifies the roast duck postcard in the AR environment. It should be noted that, as shown in fig. 15, while displaying the prompt information, the terminal may also display the icon of the browser after the text "browser".
In some embodiments, when receiving the user's code scanning operation for the "…" control, the terminal may display an opening prompt message, a confirmation control and a return control, and prompt the user through the opening prompt message whether to open the camera. When detecting the user's confirmation operation on the confirmation control, the terminal opens the camera and enters the AR environment; when detecting the user's return operation on the return control, the terminal returns to continue displaying the H5 loading page or returns to another page.
S141, the terminal determines, according to the roast duck number associated with the two-dimensional code identification of the roast duck postcard, a class of roast duck introduction videos corresponding to the roast duck number from preset multiple classes of roast duck introduction videos, and acquires and plays one introduction video corresponding to the roast duck number from that class of introduction videos.
S142, the terminal determines one class of story videos corresponding to the roast duck numbers from preset multi-class story videos of the roast ducks according to the roast duck numbers, and acquires one story video corresponding to the roast duck numbers from the one class of story videos and plays the one story video.
Illustratively, fig. 16 is a schematic illustration of a presentation effect of a video frame during a story video.
Fig. 17 is a schematic flow chart of an alternative method for displaying provided in an embodiment of the present disclosure, and will be described with reference to the steps shown in fig. 17.
S21, the terminal jumps to an H5 loading page by scanning the two-dimensional code of the roast duck identity card, and displays a code scanning control and code scanning prompt information through the H5 loading page.
Illustratively, fig. 18 shows a schematic view of the display effect of the roast duck identification card. FIG. 19 shows a schematic presentation effect of an H5 load page; as shown in FIG. 19, the code scanning control is a "…" control of the upper right corner of the H5 loading page, and the code scanning prompt message is "click the upper right corner '…' to select the xxx browser to scan the roast duck identification card cover". In addition, under the condition that the terminal receives the code scanning operation of the user aiming at the control of …, the terminal can jump to the AR environment and identify the roast duck identity card in the AR environment.
S22, the terminal enters an AR environment under the condition that the terminal detects code scanning operation aiming at the code scanning control.
S23, the terminal identifies the roast duck identity card in the AR environment, and a two-dimensional code identification of the roast duck identity card is obtained.
S24, the terminal determines, according to the user identity card number associated with the two-dimensional code identification of the roast duck identity card, a class of enterprise propaganda videos corresponding to the user identity card number from at least one preset class of enterprise propaganda videos corresponding to the roast duck, and determines and plays the enterprise propaganda video corresponding to the user identity card number from that class of enterprise propaganda videos.
Fig. 20 is a schematic view showing a video frame during the process of playing a promotional video of an enterprise.
S25, under the condition that the enterprise propaganda video playing is finished, face images are collected in the AR environment.
And S261, displaying cartoon duck stickers corresponding to the roast ducks on the collected face images under the condition that the face images are collected.
Illustratively, fig. 21 is a schematic illustration of a display effect of a cartoon duck decal on an acquired face image.
S262, under the condition that the face image is acquired, at least one of a specific-date special effect, a number special effect and a user-specific special effect is acquired, and a special effect sticker corresponding to the user to which the user identity card number belongs is generated according to the at least one acquired special effect.
S27, the terminal recognizes the expression in the acquired face image; when recognizing that the user's expression has changed, the terminal renders the special effect data corresponding to the cartoon duck sticker based on the user's expression, obtains a cartoon duck sticker with the same expression as the user, and previews it.
And S28, under the condition that shooting control operation is detected, based on the special effect sticker, a special effect image is obtained, a prompt window is displayed on the special effect image, and a re-shooting control and a sharing friend control are displayed through the prompt window.
Illustratively, fig. 22 is an effect diagram of a prompt window displayed on a special effect image.
S291, sharing the special effect image under the condition that the sharing operation aiming at the sharing friend control is detected.
And S292, re-shooting the special effect image or the special effect video under the condition that the re-shooting operation for the re-shooting control is detected.
The present disclosure also provides a display device. Fig. 23 is a schematic structural view of a display device according to an embodiment of the disclosure; as shown in fig. 23, the display device 1 includes: an identification unit 10 for entering an augmented reality environment by scanning an information code identifying an article; the information code is arranged on the identification article; identifying the identified object in the augmented reality environment to obtain an object identification result; and the playing unit 20 acquires and plays the introduction content of the object corresponding to the number information according to the number information corresponding to the object identification result.
In some embodiments of the present disclosure, the identifying the article comprises: an article identification card; the number information includes: item number information; the article identification result comprises: identity information of the article identification card; the identity information corresponding to different article identification cards is different; the introduction includes: a first introduction; the playing unit 20 is further configured to determine, according to the identity information of the article identification card, article number information associated with the identity information; and determining one type of introduction content corresponding to the article number information from preset multiple types of introduction content of the article object, and determining the first introduction content corresponding to the article number information from the one type of introduction content to play.
In some embodiments of the present disclosure, the identifying the article comprises: an identity card; the number information includes: user numbering information; the identification result comprises: identity information of the identity identification card; the identity information corresponding to different identity identification cards is different; the introduction includes: a second introduction; the playing unit 20 is further configured to determine user number information associated with the identity information according to the identity information of the identity card; according to the user number information, one type of introduction content corresponding to the user number information is determined from at least one type of preset introduction content associated with the object of the object, and the second introduction content corresponding to the user number information is determined from the one type of introduction content to play.
In some embodiments of the present disclosure, the display device 1 further comprises: an acquisition unit 30 for acquiring a face image in the augmented reality environment; a generating unit 40, configured to generate special effect data corresponding to a user to which the user number information belongs and preview the special effect data when the face image is acquired; and the acquisition unit 30 is further configured to obtain a special effect image or a special effect video based on the special effect data when the shooting control operation is detected.
In some embodiments of the present disclosure, the display device 1 further comprises: the sharing unit 50 is configured to share the special effect image or the special effect video when a sharing operation is detected; or, the acquisition unit 30 is further configured to re-capture the special effect image or the special effect video when the re-shooting operation is detected.
In some embodiments of the present disclosure, the generating unit 40 is further configured to acquire material information, where the material information characterizes the special effect style information, when the face image is acquired; and generating the special effect data corresponding to the user to which the user number information belongs according to the material information.
In some embodiments of the present disclosure, the material information includes at least one of: specific date-like effects, number-like effects, and user-specific effects.
In some embodiments of the present disclosure, the generating unit 40 is further configured to determine, when the material information is a user-specific special effect corresponding to a registration time for a preset application, a user registration time according to the user number information; and updating the special effect of the user based on the user registration time to generate the special effect data.
In some embodiments of the present disclosure, the generating unit 40 is further configured to determine, according to the user number information, a user access number when the material information is a user-specific special effect corresponding to the access number for a preset application; and updating the special effect of the user based on the access times of the user to generate the special effect data.
In some embodiments of the present disclosure, the generating unit 40 is further configured to determine, according to the user number information, a user access time when the material information is a user-specific special effect corresponding to the access time for the preset application; and updating the special effect of the user based on the user access time to generate the special effect data.
In some embodiments of the present disclosure, the generating unit 40 is further configured to obtain current date information when the material information is the specific date special effect; and when the date information is specific date information, updating the specific date special effect based on the date information to generate the special effect data.
In some embodiments of the present disclosure, the generating unit 40 is further configured to update the serial special effects based on the user serial information to generate the special effect data when the material information is the serial special effect.
In some embodiments of the present disclosure, the display device 1 further comprises: and the display unit 60 is configured to display preset special effect data corresponding to the object in a case where the face image is acquired after the face image is acquired in the augmented reality environment.
In some embodiments of the present disclosure, the identification unit 10 is further configured to display a scan code entry by scanning an information code identifying an article; and entering the augmented reality environment under the condition that the code scanning operation aiming at the code scanning inlet is detected.
In some embodiments of the present disclosure, the article object is a food item; the preset multiple classes of introduction content include: preset multiple classes of introduction content for making the food; the class of introduction content includes: a class of introduction content for making the food; and the first introduction content includes: introduction content for making the food. Alternatively, the preset multiple classes of introduction content include: preset multiple classes of story content associated with the food; the class of introduction content includes: a class of story content; and the first introduction content includes: a story content.
The embodiments of the present disclosure further provide an electronic device. Fig. 24 is a schematic structural diagram of an electronic device 2 according to an embodiment of the present disclosure. As shown in Fig. 24, the electronic device 2 includes a display 201, a memory 202, and a processor 203, which are connected by a communication bus 204. The memory 202 is configured to store an executable computer program; the processor 203 is configured to, when executing the executable computer program stored in the memory 202, implement, in conjunction with the display 201, a method provided by the embodiments of the present disclosure, for example, the display method provided by the embodiments of the present disclosure.
The embodiments of the present disclosure provide a computer-readable storage medium having a computer program stored thereon which, when executed by the processor 203, implements a method provided by the embodiments of the present disclosure, for example, the display method provided by the embodiments of the present disclosure.
In some embodiments of the present disclosure, the storage medium may be an FRAM, a ROM, a PROM, an EPROM, an EEPROM, a flash memory, a magnetic surface memory, an optical disc, or a CD-ROM; or it may be any device including one of, or any combination of, the above memories.
In some embodiments of the disclosure, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or, alternatively, distributed across multiple sites and interconnected by a communication network.
In summary, in the above technical solution, the terminal enters an augmented reality environment by scanning the information code arranged on an identification article, identifies the article in that environment to obtain an article identification result, and, according to the number information corresponding to that result, obtains and plays the introduction content associated with the number information. By combining the "one object, one code" technique, different introduction content can thus be played for each identification article. Compared with the prior art, this increases the diversity of the ways in which the terminal displays content when interacting with the user and improves the flexibility of content display.
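As a toy end-to-end illustration of the summarized flow (scan, identify, look up number information, select content, play), the sketch below wires the pieces together with in-memory stand-ins. Every name here (the registry and catalog structures, the `played` list standing in for playback) is invented for illustration and is not an API from this disclosure.

```python
# One code per article maps to that article's number information.
ITEM_REGISTRY = {"code-001": {"number": 7}}
# Preset classes of introduction content, each a list of pieces.
INTRO_CATALOG = [["recipe-1", "recipe-2"], ["story-1", "story-2", "story-3"]]

def handle_scan(info_code, played):
    """Resolve a scanned information code to one piece of content and 'play' it."""
    number = ITEM_REGISTRY[info_code]["number"]        # per-article number info
    category = INTRO_CATALOG[number % len(INTRO_CATALOG)]  # one class of content
    content = category[number % len(category)]         # one piece within the class
    played.append(content)                             # stand-in for playback
    return content
```

Two articles carrying different codes resolve to different number information and therefore, under a mapping like this, to different introduction content.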
The foregoing describes merely exemplary embodiments of the present disclosure and is not intended to limit the protection scope of the present disclosure. Any modification, equivalent substitution, or improvement made within the spirit and scope of the present disclosure shall fall within the protection scope of the present disclosure.

Claims (15)

1. A display method, comprising:
entering an augmented reality environment by scanning an information code for identifying an article; the information code is arranged on the identification article; the identification article comprises: an article identification card and an identity identification card;
identifying the identified object in the augmented reality environment to obtain an object identification result; the article identification result comprises: identity information of the article identification card and identity information of the identity identification card; the identity information corresponding to different article identification cards is different; the identity information corresponding to different identity identification cards is different;
determining article number information associated with the identity information according to the identity information of the article identification card;
determining one type of introduction content corresponding to the article number information from preset multiple types of introduction content of the article object, and determining a first introduction content corresponding to the article number information from the one type of introduction content to play;
determining user number information associated with the identity information according to the identity information of the identity identification card;
according to the user number information, determining one type of introduction content corresponding to the user number information from at least one type of preset introduction content associated with the article object, and determining a second introduction content corresponding to the user number information from the one type of introduction content to play;
acquiring a face image in the augmented reality environment;
generating special effect data corresponding to the user to which the user number information belongs under the condition that the face image is acquired, and previewing;
and under the condition that shooting control operation is detected, obtaining a special effect image or a special effect video based on the special effect data.
2. The method according to claim 1, wherein the method further comprises:
under the condition that sharing operation is detected, sharing the special effect image or the special effect video; or,
and re-shooting the special effect image or the special effect video when the re-shooting operation is detected.
3. The method according to claim 1, wherein generating special effects data corresponding to the user to which the user number information belongs in the case of acquiring a face image includes:
acquiring material information under the condition that the face image is acquired, wherein the material information represents special effect style information;
and generating the special effect data corresponding to the user to which the user number information belongs according to the material information.
4. The method of claim 3, wherein the material information includes at least one of: a specific date special effect, a serial number special effect, and a user-specific special effect.
5. The method according to claim 4, wherein the generating the special effects data corresponding to the user to which the user number information belongs based on the material information includes:
under the condition that the material information is a user special effect corresponding to the registration time of a preset application program, determining the user registration time according to the user number information;
and updating the special effect of the user based on the user registration time to generate the special effect data.
6. The method according to claim 4, wherein the generating the special effects data corresponding to the user to which the user number information belongs based on the material information includes:
under the condition that the material information is a user special effect corresponding to the access times of a preset application program, determining the access times of the user according to the user number information;
and updating the special effect of the user based on the access times of the user to generate the special effect data.
7. The method according to claim 4, wherein the generating the special effects data corresponding to the user to which the user number information belongs based on the material information includes:
under the condition that the material information is a user exclusive special effect corresponding to the access time of a preset application program, determining the access time of a user according to the user number information;
and updating the special effect of the user based on the user access time to generate the special effect data.
8. The method according to claim 4, wherein the generating the special effects data corresponding to the user to which the user number information belongs based on the material information includes:
acquiring current date information under the condition that the material information is the specific date special effect;
and when the date information is specific date information, updating the specific date special effect based on the date information to generate the special effect data.
9. The method according to claim 4, wherein the generating the special effects data corresponding to the user to which the user number information belongs based on the material information includes:
and updating the serial number special effects based on the user serial number information under the condition that the material information is the serial number special effects, and generating the special effect data.
10. The method according to any one of claims 3-9, wherein after the acquisition of the face image in the augmented reality environment, the method further comprises:
and under the condition that the face image is acquired, displaying preset special effect data corresponding to the object.
11. The method according to any of claims 1-10, wherein said entering an augmented reality environment by scanning an information code identifying an item comprises:
displaying a code scanning inlet by scanning an information code for identifying an article;
and entering the augmented reality environment under the condition that the code scanning operation aiming at the code scanning inlet is detected.
12. The method of claim 1, the item object being a food product;
the preset multi-class introduction content comprises: preset multiple classes of introduction content for making the food product; the class of introduction content comprises: a class of introduction content for making the food product; the first introduction content comprises: one piece of content for making the food product; or,
the preset multi-class introduction content comprises: preset multiple classes of story content associated with the food product; the class of introduction content comprises: a class of story content; the first introduction content comprises: one piece of story content.
13. A display device, comprising:
the identification unit is used for entering an augmented reality environment by scanning an information code for identifying an article, the information code being arranged on the identification article, and for identifying the identification article in the augmented reality environment to obtain an article identification result; wherein the identification article comprises: an article identification card and an identity identification card; the article identification result comprises: identity information of the article identification card and identity information of the identity identification card; the identity information corresponding to different article identification cards is different; and the identity information corresponding to different identity identification cards is different;
the playing unit is used for determining article number information associated with the identity information according to the identity information of the article identification card; determining one type of introduction content corresponding to the article number information from preset multiple types of introduction content of the article object, and determining a first introduction content corresponding to the article number information from the one type of introduction content to play;
the playing unit is further used for determining user number information associated with the identity information according to the identity information of the identity identification card; and, according to the user number information, determining one type of introduction content corresponding to the user number information from at least one type of preset introduction content associated with the article object, and determining a second introduction content corresponding to the user number information from the one type of introduction content to play;
the acquisition unit is used for acquiring the face image in the augmented reality environment;
the generating unit is used for generating special effect data corresponding to the user to which the user number information belongs under the condition of the acquired face image and previewing the special effect data;
the acquisition unit is also used for acquiring a special effect image or a special effect video based on the special effect data under the condition that shooting control operation is detected.
14. An electronic device, comprising:
a display;
a memory for storing an executable computer program;
a processor for implementing the method of any one of claims 1 to 12 in combination with the display when executing the executable computer program stored in the memory.
15. A computer readable storage medium, having stored thereon a computer program for causing a processor to perform the method of any one of claims 1 to 12.
CN202110672282.XA 2021-06-17 2021-06-17 Display method, device, equipment and computer readable storage medium Active CN113326709B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110672282.XA CN113326709B (en) 2021-06-17 2021-06-17 Display method, device, equipment and computer readable storage medium
PCT/CN2022/085589 WO2022262379A1 (en) 2021-06-17 2022-04-07 Display method and apparatus, device, computer readable storage medium, and computer program
TW111119948A TW202301188A (en) 2021-06-17 2022-05-27 Display method, equipment and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110672282.XA CN113326709B (en) 2021-06-17 2021-06-17 Display method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113326709A CN113326709A (en) 2021-08-31
CN113326709B true CN113326709B (en) 2023-07-04

Family

ID=77423757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110672282.XA Active CN113326709B (en) 2021-06-17 2021-06-17 Display method, device, equipment and computer readable storage medium

Country Status (3)

Country Link
CN (1) CN113326709B (en)
TW (1) TW202301188A (en)
WO (1) WO2022262379A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326709B (en) * 2021-06-17 2023-07-04 北京市商汤科技开发有限公司 Display method, device, equipment and computer readable storage medium
CN113867528A (en) * 2021-09-27 2021-12-31 北京市商汤科技开发有限公司 Display method, device, equipment and computer readable storage medium

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102299898A (en) * 2010-06-23 2011-12-28 上海博路信息技术有限公司 Barcode-based enhanced display method for article information
CN104008357A (en) * 2013-02-25 2014-08-27 陈郁文 Audio-video interaction system and operation carrier for audio-visual interaction system
CN106816077B (en) * 2015-12-08 2019-03-22 张涛 Interactive sandbox methods of exhibiting based on two dimensional code and augmented reality
CN106844456A (en) * 2016-12-12 2017-06-13 烟台魔眼网络科技有限公司 Augmented reality rich-media content presentation mode based on Quick Response Code
US9754168B1 (en) * 2017-05-16 2017-09-05 Sounds Food, Inc. Incentivizing foodstuff consumption through the use of augmented reality features
CN107895312A (en) * 2017-12-08 2018-04-10 快创科技(大连)有限公司 A kind of shopping online experiencing system based on AR technologies
CN108829250A (en) * 2018-06-04 2018-11-16 苏州市职业大学 A kind of object interaction display method based on augmented reality AR
CN109167936A (en) * 2018-10-29 2019-01-08 Oppo广东移动通信有限公司 A kind of image processing method, terminal and storage medium
CN110209852A (en) * 2019-06-12 2019-09-06 北京我的天科技有限公司 Brand recognition method and apparatus based on AR technology
JP7247048B2 (en) * 2019-07-31 2023-03-28 エヌ・ティ・ティ・コミュニケーションズ株式会社 Information presentation system, information presentation method, server device and its program
CN111178864A (en) * 2020-03-31 2020-05-19 青岛麦卡智厨科技有限公司 System for be applied to unmanned scene of selling goods and shoot discernment article through cell-phone and carry out shopping
CN111640199B (en) * 2020-06-10 2024-01-09 浙江商汤科技开发有限公司 AR special effect data generation method and device
CN113326709B (en) * 2021-06-17 2023-07-04 北京市商汤科技开发有限公司 Display method, device, equipment and computer readable storage medium

Also Published As

Publication number Publication date
TW202301188A (en) 2023-01-01
CN113326709A (en) 2021-08-31
WO2022262379A1 (en) 2022-12-22

Similar Documents

Publication Publication Date Title
CN109478124B (en) Augmented reality device and augmented reality method
US8745502B2 (en) System and method for interfacing interactive systems with social networks and media playback devices
WO2023020622A1 (en) Display method and apparatus, electronic device, computer-readable storage medium, computer program, and computer program product
CN113326709B (en) Display method, device, equipment and computer readable storage medium
US20160012136A1 (en) Simultaneous Local and Cloud Searching System and Method
WO2019171128A1 (en) In-media and with controls advertisement, ephemeral, actionable and multi page photo filters on photo, automated integration of external contents, automated feed scrolling, template based advertisement post and actions and reaction controls on recognized objects in photo or video
US20070276721A1 (en) Computer implemented shopping system
EP4088247A1 (en) Systems for identifying products within audio-visual content
WO2013120851A1 (en) Method for sharing emotions through the creation of three-dimensional avatars and their interaction through a cloud-based platform
CN108446927A (en) Consumer drives ad system
Paasonen Online pornography: Ubiquitous and effaced
US20160132216A1 (en) Business-to-business solution for picture-, animation- and video-based customer experience rating, voting and providing feedback or opinion based on mobile application or web browser
CN113283969A (en) Display method, device, equipment and computer readable storage medium
WO2023045964A1 (en) Display method and apparatus, device, computer readable storage medium, computer program product, and computer program
US10922744B1 (en) Object identification in social media post
CN113722619A (en) Content display method, device, equipment and computer readable storage medium
JP2006215842A (en) Human movement line tracing system and advertisement display control system
KR102102572B1 (en) System and method for providing online shopping mall
KR102290855B1 (en) Digital sinage system
WO2022262389A1 (en) Interaction method and apparatus, computer device and program product, storage medium
CN114049467A (en) Display method, display device, display apparatus, storage medium, and program product
CN115687816A (en) Resource processing method and device
CN114092166A (en) Information recommendation processing method, device, equipment and computer readable storage medium
JP7482300B1 (en) Property information distribution system, property information distribution device, property information distribution program, and property information distribution method
JP7281012B1 (en) Program, information processing method and information processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40051355

Country of ref document: HK

GR01 Patent grant