CN111311339A - Target object display method and device and electronic equipment - Google Patents

Target object display method and device and electronic equipment

Info

Publication number
CN111311339A
Authority
CN
China
Prior art keywords
terminal
service
pedestrian
resource
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010384115.0A
Other languages
Chinese (zh)
Inventor
詹劲
方渊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202010384115.0A priority Critical patent/CN111311339A/en
Publication of CN111311339A publication Critical patent/CN111311339A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0252Targeted advertisements based on events or environment, e.g. weather or festivals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0269Targeted advertisements based on user profile or attribute
    • G06Q30/0271Personalized advertisement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0276Advertisement creation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Environmental & Geological Engineering (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of this specification disclose a target object display method and apparatus, and an electronic device. A pedestrian is identified; when the pedestrian is identified, a service resource material corresponding to the service scene configured for the terminal is acquired; an environment resource material corresponding to the environment information around the terminal's location is acquired; the service resource material and the environment resource material are combined to obtain rendering data; and the rendering data is rendered to obtain a target object displayed on the terminal's interface, so that the environment information is reflected in the service scene presented by the target object.

Description

Target object display method and device and electronic equipment
Technical Field
The embodiment of the specification relates to the technical field of computers, in particular to a target object display method and device and electronic equipment.
Background
When advertisements are delivered or goods are promoted, advertisement content or promotion content produced in advance, such as advertisement animations, short videos and images, is generally delivered to designated devices for display, so as to expand the viewing audience, raise users' interest in the displayed advertisement or promotion content, and let more users learn about the goods.
In the prior art, when new advertisement content needs to be delivered, updated content is usually produced in the background in advance and then substituted for the original content for batch delivery.
Disclosure of Invention
In view of this, embodiments of the present specification provide a target object display method, an apparatus, and an electronic device, which are used to solve the prior-art problems of poor extensibility and limited user-interaction scenarios caused by uniform, unchanging advertisement or promotion content.
The embodiment of the specification adopts the following technical scheme:
an embodiment of the present specification provides a target object display method, which is applied to a terminal, and includes:
identifying a pedestrian;
when the pedestrian is identified, acquiring a service resource material corresponding to the service scene configured for the terminal;
acquiring an environment resource material corresponding to the environment information around the position of the terminal;
combining the service resource material and the environment resource material to obtain rendering data;
rendering the rendering data to obtain a target object displayed on an interface of the terminal so as to reflect the environment information in the service scene displayed by the target object.
An embodiment of the present specification further provides a target object display apparatus, which is applied to a terminal, and includes:
an identification module that identifies a pedestrian;
the first acquisition module is used for acquiring, when the pedestrian is identified, a service resource material corresponding to the service scene configured for the terminal;
the second acquisition module is used for acquiring environment resource materials corresponding to the environment information around the position of the terminal;
the combination module is used for combining the service resource material and the environment resource material to obtain rendering data;
and the rendering module is used for rendering the rendering data to obtain a target object displayed on the interface of the terminal so as to reflect the environment information in the service scene displayed by the target object.
Embodiments of the present specification further provide an electronic device, including at least one processor and a memory, where the memory stores a program configured to enable the at least one processor to execute the following steps:
identifying a pedestrian;
when the pedestrian is identified, acquiring a service resource material corresponding to the service scene configured for the terminal;
acquiring an environment resource material corresponding to the environment information around the position of the terminal;
combining the service resource material and the environment resource material to obtain rendering data;
rendering the rendering data to obtain a target object displayed on an interface of the terminal so as to reflect the environment information in the service scene displayed by the target object.
The embodiment of the specification adopts at least one technical scheme which can achieve the following beneficial effects:
the method comprises the steps that a terminal is utilized to carry out real-time monitoring and identification on pedestrians passing by, when the pedestrians are identified, a service resource material corresponding to a service scene of the terminal is obtained, and an environment resource material corresponding to the surrounding environment information of the terminal is obtained, rendering data are obtained after the service resource material and the environment resource material are combined, and after the rendering data are rendered, a target object capable of reflecting the service scene and the environment information of the terminal can be obtained.
Therefore, the terminal can acquire different environment resource materials corresponding to different environment information according to the change of the ambient environment information, so that the change of the ambient environment information can be embodied in a service scene displayed by the target object, the visual effect and the attraction of the target object are improved, the visual fatigue of pedestrians caused by the fact that the terminal displays a single target object is avoided, and the user experience is good.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the specification and are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the specification and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart of a target object displaying method provided in an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a target object displaying method provided in an embodiment of the present specification;
fig. 3 is a schematic flowchart of a target object displaying method provided in an embodiment of the present specification;
fig. 4 is a schematic flowchart illustrating a process of generating a target object by performing combined rendering on multiple resource attribute data in a target object displaying method according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a target object display apparatus provided in an embodiment of the present specification.
Detailed Description
Generally, when advertisements are delivered or goods are promoted, advertisement content or promotion content is produced in advance and then delivered uniformly to each terminal device. The result is a single piece of advertisement content visible everywhere; after it has been delivered for a long time it easily causes visual fatigue in pedestrians and is unlikely to attract them to learn about the displayed goods.
Conventionally, several versions of advertisement content may be prepared in advance for the same product, for example versions featuring different celebrities. This, however, offers only limited extension and variation: the content is difficult to change according to the current scene or pedestrian, extensibility and interactivity are poor, and it is hard to attract pedestrians' attention.
In addition, for display terminals with an interactive function, a preset interaction file is configured in the terminal to interact with pedestrians. Because the interaction file is preset, only a relatively fixed set of interactions can be offered, and the terminal cannot adapt to changes in the scene or in the pedestrians. Whenever new advertisement content or a new interaction needs to be displayed, it must be configured again, so extensibility is poor.
Therefore, the present application provides a target object display method and apparatus and an electronic device. A terminal monitors and identifies passing pedestrians in real time; when a pedestrian is identified, a service resource material corresponding to the service scene configured for the terminal and an environment resource material corresponding to the environment information around the terminal's location are obtained, the two materials are combined to obtain rendering data, and rendering that data yields a target object that reflects both the terminal's service scene and the environment information.
In this way, the terminal can acquire different environment resource materials as the surrounding environment information changes, so that those changes are embodied in the service scene presented by the target object. This improves the visual effect and appeal of the target object, avoids the visual fatigue caused by a terminal showing a single, unchanging target object, and provides a better user experience.
In order to make the objects, technical solutions and advantages of the present application more clear, the technical solutions of the present application will be clearly and completely described below with reference to the specific embodiments of the present specification and the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step are within the scope of the present application.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a target object displaying method provided in an embodiment of the present specification, where the target object displaying method provided in the embodiment of the present specification is applied to a terminal.
S101: Identify a pedestrian.
In this specification, a pedestrian refers to a person passing near the terminal. To draw the pedestrian's attention to the target object displayed by the terminal, pedestrians passing the terminal can be monitored and identified in real time, so as to capture in real time whether a pedestrian is looking at the displayed target object and whether the pedestrian is interested in it; a targeted target object can then be displayed and varied according to the monitored pedestrian information.
The terminal may be a terminal device that displays a target object according to business needs in order to attract pedestrians' attention, and is configured with a monitoring and recognition device for monitoring pedestrians. The terminal may be a fixed terminal, for example an advertisement delivery device or a self-service payment machine, such as an advertisement playing device or self-service inquiry device in a mall, or a playing device on public transportation such as a bus or subway; no specific limitation is imposed here.
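By way of illustration only, a minimal Python sketch of such a monitoring loop might look as follows; the RecognitionDevice class and the detect_pedestrian and monitor_loop names are hypothetical placeholders, not part of this disclosure, and a real terminal would run an actual person-detection model on camera frames.
    import time

    class RecognitionDevice:
        """Stand-in for the terminal's monitoring and recognition hardware (camera, sensors)."""
        def capture_frame(self):
            return {"timestamp": time.time(), "pixels": b""}  # placeholder frame data

    def detect_pedestrian(frame) -> bool:
        """Placeholder detector; a real terminal would run a person-detection model here."""
        return False

    def monitor_loop(device, on_pedestrian, poll_interval=0.2):
        """Continuously monitor passers-by (S101) and trigger the display flow when one is detected."""
        while True:
            frame = device.capture_frame()
            if detect_pedestrian(frame):
                on_pedestrian(frame)
            time.sleep(poll_interval)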
S103: When the pedestrian is identified, acquire a service resource material corresponding to the service scene configured for the terminal.
In the embodiments of the present specification, the service resource material can be understood as the material data needed to configure the rendering data corresponding to a target object; by obtaining a service resource material matched to the service scene, a target object that presents that service scene can be obtained after the rendering data is rendered.
Different terminals can be configured with different service scenes according to their use requirements, and each terminal can then acquire the service resource materials corresponding to its configured service scene, so that the target object it displays presents business service information matched to that scene. For example, for a terminal in a shopping mall, the configured service scene may be shopping or catering and entertainment; the terminal acquires the corresponding service resource materials so that the displayed target object matches the shopping or catering-and-entertainment atmosphere of the mall.
It should be noted that the service resource material may be stored on a server or locally at the terminal; this embodiment imposes no limitation.
Specifically, when the target object is a three-dimensional animation, the service resource material may include the following different types of resource attribute data:
model mesh;
skeletal animation;
material maps;
voice data;
script code.
In this embodiment of the present specification, the resource attribute data may refer to configuration files of various types required for configuring rendering data, and the corresponding rendering data may be configured by assembling different types of resource attribute data. The embodiments of the present description may configure different types of resource attribute data according to different service scenarios, so that the terminal may configure corresponding resource attribute data according to the service scenario of the current location.
The model mesh refers to the basic mesh model of the target object. In skeletal animation, the model has a skeletal structure of interconnected "bones," and animation can be generated by changing the orientation and position of those bones. The voice data refers to speech configured according to the scene or the feature information to make the target object more vivid, and may be, without limitation, a local dialect, the voice of a cartoon character, a foreign language, or the like. The script code refers to code that runs the skeletal animation at a specified orientation and position.
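For illustration, the resource attribute data listed above could be grouped roughly as in the following Python sketch; the ResourceAttributeData and ServiceResourceMaterial names and fields are assumed for the example only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ResourceAttributeData:
        """One bundle of the resource attribute types listed above (all fields illustrative)."""
        model_mesh: str = ""                                     # base mesh model of the target object
        skeletal_animation: str = ""                             # clip driving bone orientations and positions
        material_maps: List[str] = field(default_factory=list)   # texture / material map assets
        voice_data: str = ""                                     # e.g. local dialect or cartoon-character voice
        script_code: str = ""                                    # script that plays the animation at a given pose

    @dataclass
    class ServiceResourceMaterial:
        """Service resource material matched to the terminal's configured service scene."""
        service_scene: str                                       # e.g. "mall_coffee_shop"
        attributes: ResourceAttributeData = field(default_factory=ResourceAttributeData)

    # Example: a material for a coffee-selling scene with only some slots filled in.
    material = ServiceResourceMaterial(
        service_scene="mall_coffee_shop",
        attributes=ResourceAttributeData(model_mesh="barista_mesh", voice_data="greeting_clip"),
    )
    print(material)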
As an application embodiment, when the pedestrian is identified, acquiring a service resource material corresponding to the service scene configured for the terminal may include:
when the pedestrian is identified, judging whether the terminal displays a target object or not;
and if not, acquiring a service resource material corresponding to the service scene of the terminal.
In the embodiments of the specification, the terminal continuously identifies and monitors pedestrians coming and going, so different pedestrians may be identified while a target object is already being displayed. In that case the terminal is in a display cooling period, to avoid interfering with the display of the current target object. Therefore, even if the terminal identifies a different pedestrian while a target object is being displayed, it waits until the current display is complete before performing a new round of target object display for the newly identified pedestrian.
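A minimal sketch of this display-cooling check, assuming a hypothetical TerminalState structure and fetch_service_material callback, might be:
    from dataclasses import dataclass

    @dataclass
    class TerminalState:
        service_scene: str            # e.g. "mall_coffee_shop"
        is_displaying: bool = False   # True while a target object is on screen

    def on_pedestrian_identified(terminal, fetch_service_material):
        """Start a new display round only when no target object is currently being shown."""
        if terminal.is_displaying:    # display "cooling" period: do not interrupt the current object
            return None
        return fetch_service_material(terminal.service_scene)

    # Example: the second call is ignored because a display round is already in progress.
    terminal = TerminalState(service_scene="mall_coffee_shop")
    print(on_pedestrian_identified(terminal, lambda scene: f"material_for_{scene}"))
    terminal.is_displaying = True
    print(on_pedestrian_identified(terminal, lambda scene: f"material_for_{scene}"))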
In a specific application scenario, acquiring a service resource material corresponding to the service scene configured for the terminal may include:
determining the business service information corresponding to the service scene configured for the terminal;
and acquiring a service resource material corresponding to the business service information.
Specifically, the business service information corresponding to the configured service scene identifies the specific business service the terminal performs. For example, for a terminal located in a coffee shop in a mall, the configured service scene is a coffee business scene, so the terminal's business service can be determined to be coffee selling; service resource materials related to the coffee-selling service can then be obtained, so that a target object fitting the coffee-selling scene is displayed on the terminal's interface.
Further, acquiring the service resource material corresponding to the business service information may include:
acquiring, from the different service resource materials corresponding to the business service information, the service resource material adapted to the environment information.
In the embodiments of the present specification, in order to better combine the service resource material with the environment resource material, different service resource materials can be configured for the same business service information according to changes in the environment information. When the service resource material corresponding to the business service information is obtained, the material adapted to the current environment information can therefore be selected, so that the service resource material and the environment resource material fit together.
For example, for rainy weather, a material showing coffee being enjoyed by the window while watching the rain can be configured; for snowy weather around the New Year, a material showing coffee being enjoyed while watching New Year fireworks can be configured.
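One simple way such environment-adapted selection might be expressed, with an assumed variants table keyed by business service and weather, is:
    def select_business_material(service_info: str, environment: dict, variants: dict):
        """Pick, among the materials for the same business service, the one adapted to the environment.
        `variants` maps (service_info, weather) keys to material identifiers (illustrative only)."""
        key = (service_info, environment.get("weather", "default"))
        return variants.get(key, variants.get((service_info, "default")))

    # Example: the same coffee-selling service rendered differently in rain vs. snow.
    variants = {
        ("coffee_sale", "rain"): "coffee_by_window_watching_rain",
        ("coffee_sale", "snow"): "coffee_with_new_year_fireworks",
        ("coffee_sale", "default"): "coffee_on_terrace",
    }
    print(select_business_material("coffee_sale", {"weather": "rain"}, variants))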
As another application example, when the pedestrian is identified, the method may further include:
acquiring an image of the pedestrian;
extracting feature information of the pedestrian from the image;
acquiring a characteristic resource material corresponding to the characteristic information;
combining the business resource materials and the environment resource materials, including:
and combining the service resource material, the characteristic resource material and the environment resource material to obtain rendering data.
In this embodiment, the pedestrian's image is the trigger condition that causes the terminal to display a target object. It may specifically be a face image, a human body image, a posture image or a gesture image of the pedestrian, where a human body image may be a whole-body or partial image. When a pedestrian approaches the terminal, the image recognition device on the terminal recognizes the pedestrian's image and triggers the terminal to display the target object corresponding to that image, so that the terminal actively interacts with the pedestrian by displaying the target object and draws the pedestrian's attention to it.
The feature information of the pedestrian may be understood as information reflecting the attribute features of the pedestrian, such as gender, age group, skin color, dressing style, language habit, and the like, and is not specifically limited herein.
Further, acquiring the feature resource material corresponding to the feature information may include:
and acquiring characteristic resource materials matched with the environment information and the service scene from different characteristic resource materials corresponding to the characteristic information.
In the embodiments of the present specification, different feature resource materials can be configured for the same feature information according to different environment information and different service scenes, so that the terminal can select feature resource materials matching its configured service scene and the environment information of its location, allowing the feature resource materials, environment resource materials and service resource materials to be combined more effectively.
Specifically, the pedestrian's voice may also be collected when the pedestrian's image is recognized, so that the feature information can be determined from both the image and the voice. The terminal can thereby determine the pedestrian's visual features as well as language features and generate a target object carrying those feature elements.
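For illustration, merging image-derived and voice-derived attributes into one set of feature information could be sketched as follows; the attribute keys are examples only, and the upstream recognition models are assumed rather than shown.
    def extract_feature_information(image_attributes: dict, voice_attributes: dict) -> dict:
        """Merge attributes estimated from the pedestrian's image (e.g. gender, age group,
        dressing style) with attributes estimated from captured voice (e.g. dialect or language habit).
        Both inputs would come from recognition models on the terminal; here they are plain dicts."""
        features = dict(image_attributes)
        features.update(voice_attributes)
        return features

    print(extract_feature_information(
        {"age_group": "child", "style": "casual"},
        {"language_habit": "local_dialect"},
    ))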
In a specific application embodiment, identifying the image of the pedestrian may include:
when the pedestrian faces the interface of the terminal, identifying at least one of the following images of the pedestrian:
a face image of the pedestrian;
a human body image of the pedestrian;
a pose image of the pedestrian;
a gesture image of the pedestrian.
For example, when a terminal such as an advertisement playing device, an inquiry and navigation device or a goods promotion device in a mall recognizes a pedestrian's face image, the corresponding feature information can be determined from the recognized image, and a matching advertisement animation can be generated, so that the terminal actively interacts with the pedestrian and increases the pedestrian's attention to and interest in the advertised goods. For instance, when the recognized face image is that of a child, an advertisement animation featuring a cartoon character can be generated and played in the cartoon character's voice to attract the pedestrian's attention.
As another application example, when the pedestrian is identified, the method may further include:
collecting a human body image of the pedestrian;
extracting gesture information of the pedestrian from the human body image;
acquiring an interactive resource material corresponding to the gesture information;
and combining the service resource material, the interaction resource material and the environment resource material to obtain rendering data.
In order to enhance the interaction between the terminal and the pedestrian, the corresponding interactive resource materials can be configured according to different gesture information, so that the displayed target object can change along with the gesture of the pedestrian to form interaction, and the interest is increased.
In the embodiments of the present specification, the pedestrian's gesture information can be extracted from the collected human body image, the corresponding interactive resource material can be obtained according to that gesture information, and rendering data can be obtained by combining the service resource material, the interactive resource material and the environment resource material, so that the target object displayed on the terminal interacts with the pedestrian according to the pedestrian's gestures.
In a specific application scenario, when a pedestrian is detected within a preset range of the terminal, the pedestrian's human body image can be recognized and a corresponding advertisement animation can be generated, for example one greeting the pedestrian or reminding the pedestrian of the current weather and date, so as to draw the pedestrian into learning more about the advertised goods. When the pedestrian makes a specified gesture toward the terminal (such as waving), the gesture image can be recognized; similarly, when the pedestrian issues a specified voice instruction (for example, saying they like cars, upon which the terminal plays an advertisement animation containing car elements), the terminal can be triggered to generate a corresponding target object, so that advertisement content is displayed while interacting with the pedestrian, attracting the pedestrian.
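A minimal sketch of mapping a recognized gesture to an interactive resource material, with an assumed gesture-to-material table, might be:
    INTERACTIVE_MATERIALS = {   # illustrative gesture -> interactive resource material table
        "wave": "character_waves_back_and_greets",
        "thumbs_up": "character_shows_today_special_offer",
    }

    def interactive_material_for(gesture_label: str, default="character_idle_greeting"):
        """Return the interactive resource material configured for a recognized gesture."""
        return INTERACTIVE_MATERIALS.get(gesture_label, default)

    print(interactive_material_for("wave"))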
S105: Acquire an environment resource material corresponding to the environment information around the location of the terminal.
In the embodiments of the present specification, the environment information can be understood as data reflecting the environment the terminal is currently in, such as the time, geographic location, information about the venue, weather and date, without specific limitation here. By obtaining the environment information, the corresponding environment resource material can be matched to the current environment, so that different target objects are displayed as the environment information changes, attracting pedestrians' attention and avoiding the weakened visual effect caused by showing the same object for a long time.
The environment resource material can be understood as a resource material capable of reflecting the environment information, so that the current environment information is reflected in the service scene presented by the target object and the displayed content is closely matched to the current scene. The specific meaning and composition of the environment resource material are the same as those of the service resource material described in the above embodiments and are not repeated here.
As an application embodiment, acquiring an environmental resource material corresponding to environmental information around the location of the terminal may include:
collecting the environment information of the position of the terminal;
and acquiring the environment resource materials corresponding to the environment information from a server.
In the embodiment of the specification, the terminal can monitor the environmental information of the position in real time, and can acquire the current environmental information in real time when a target object needs to be displayed, so as to acquire the environmental resource material corresponding to the current environmental information from the server.
Further, collecting the environment information of the location of the terminal may include:
and when the environmental information of the terminal is monitored to change, acquiring the environmental information of the terminal.
In this embodiment, the terminal may automatically display the target object according to the change of the current environment information, and in this case, when monitoring that the environment information changes, the terminal may automatically acquire the current environment information to acquire the corresponding environment resource material according to the current environment information, so that the target object displayed in the terminal can adapt to the change of the environment information.
Specifically, a change in the environment information can be one of the trigger events for the terminal to update the displayed target object. For example, for an advertisement terminal in a store, at seven in the evening, when the weather turns from sunny to rain or snow, environment resource materials related to night, rain or snow can be acquired according to the current environment information to render a target object combining the current environment with the advertisement theme, while a voice reminding pedestrians to mind the weather change is played to attract their attention. In this way, the target object displayed on the terminal better fits the current scene and raises pedestrians' interest in the displayed advertisement content.
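For illustration, an environment-change watcher of this kind might be sketched as follows, assuming hypothetical read_environment and fetch_environment_material callbacks supplied by the terminal:
    import time

    def watch_environment(read_environment, fetch_environment_material, on_update, poll=60):
        """Poll the environment information (time of day, weather, date, ...) and, when it changes,
        fetch the matching environment resource material and hand it to the display pipeline."""
        last = None
        while True:
            current = read_environment()            # e.g. {"weather": "rain", "period": "night"}
            if current != last:                     # change detected -> update the target object
                on_update(fetch_environment_material(current))
                last = current
            time.sleep(poll)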
S107: Combine the service resource material and the environment resource material to obtain rendering data.
In this embodiment of the present specification, the rendering data may be understood as a configuration file required for configuring the target object, and the target object may be obtained by rendering the configuration file by using a rendering engine in the terminal, so that the target object may be displayed on an interface of the terminal.
Because the rendering data is obtained by combining the service resource materials with the environment resource materials, the corresponding materials can be combined in real time as the service scene and the environment information change. Different rendering data can thus be configured in real time, and the displayed target object changes with the service scene and the environment information, fits the current scene, and attracts the attention of passers-by.
Further, in combination with the above-described embodiments, after the image of the pedestrian is recognized, the feature information of the pedestrian is determined according to the recognized image, so that the feature resource material corresponding to the feature information can be acquired. The characteristic resource material may refer to rendering resource data capable of reflecting characteristic information in the target object, and the specific meaning and composition of the characteristic resource material are similar to those of the service resource material and the environment resource material described in the above embodiments, and are not described herein again.
As an application embodiment, the combining the business resource material and the environment resource material may further include:
and combining the service resource material, the environment resource material and the feature resource material to obtain rendering data, so that the environment information and the feature information are reflected in the service scene displayed by the target object.
For convenience of description, the business resource material, the environment resource material and the feature resource material may be collectively referred to as a resource material.
In a specific application scenario, the same resource material may be applicable to different service scenes, environment information and/or feature information. Compared with presetting rendering data or target objects in a server or terminal for each combination of service scene, environment information and/or feature information, presetting different resource materials improves the extensibility of the displayed target objects, adapts to real-time changes in the terminal's service scene, environment information and/or feature information, and reduces the storage space occupied on the terminal.
Specifically, for a terminal in a coffee shop in a mall, when a pedestrian faces the terminal, the terminal identifies the pedestrian's human body image and determines feature information such as a young, casually dressed male. A corresponding feature resource material can then be obtained according to this feature information and combined with the current service resource material and environment resource material to obtain rendering data, for example a target animation of a young woman leisurely drinking coffee under a sunshade outside the coffee shop, accompanied by a voice in which she greets the young male pedestrian or introduces the shop's signature coffee, thereby attracting the pedestrian's attention to the coffee shop and increasing customer traffic.
In this way, the terminal can obtain the matching service, environment and feature resource materials according to the current service scene, the environment information and the pedestrian's determined feature information, combine and render them in real time, and display on its interface a target animation matched to the current scene and the pedestrian's features. Because the displayed target object changes accordingly, interest is enhanced and pedestrians' attention is attracted while visual fatigue is avoided.
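A minimal sketch of step S107, combining the acquired materials into one rendering-data bundle, might look like the following; the dictionary slots mirror the resource attribute types above and are assumptions of this example.
    from typing import Optional

    def combine_materials(service_material: dict,
                          environment_material: dict,
                          feature_material: Optional[dict] = None) -> dict:
        """Assemble the acquired resource materials into one rendering-data bundle (S107).
        Later materials override earlier ones for the same slot, e.g. a feature-specific voice."""
        rendering_data = {}
        for material in (service_material, environment_material, feature_material or {}):
            rendering_data.update(material)
        return rendering_data

    rendering_data = combine_materials(
        {"model_mesh": "barista", "script_code": "pour_coffee"},
        {"material_maps": ["rainy_window"], "ambience": "evening"},
        {"voice_data": "dialect_greeting_young_casual"},
    )
    print(rendering_data)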
In a specific application scenario, if the terminal identifies images of multiple pedestrians, the obtaining of a feature resource material corresponding to the feature information may include:
determining relation characteristic information of the pedestrians according to the images of the pedestrians;
and acquiring corresponding characteristic resource materials according to the relation characteristic information, wherein the characteristic resource materials comprise multi-person interaction actions and multi-person interaction dialogue voices so as to reflect the multi-person interaction actions and the multi-person interaction dialogue voices in a service scene displayed by the target object.
Since several pedestrians may be together and face the terminal's display interface at the same time, the terminal may recognize images of multiple pedestrians. The relationship feature information of these pedestrians can be determined from the recognized images, and corresponding feature resource materials, which may include multi-person interactive actions and multi-person interactive dialogue voices, can then be obtained, so that, combined with the terminal's current service scene and environment information, a target object in which multiple characters interact and converse is displayed to the pedestrians.
The relationship feature information refers to the interpersonal relationship among the pedestrians that the terminal determines from the collected images. Specifically, the closeness among the pedestrians can be judged from their distances, actions or expressions, and the relationship feature information, such as friends, a couple or relatives, is determined from that judgment, without specific limitation here.
In a specific application scenario, if the terminal detects that the distance between a pedestrian and the terminal is decreasing and it can recognize the pedestrian's face image, the pedestrian is approaching the display interface. In that case, combining the current service scene and environment information, the terminal can acquire feature resource materials of an interactive character that keeps growing larger and display it on the interface as the pedestrian approaches, creating the visual experience of the character in the target object walking toward the pedestrian.
For another example, in a snowy winter scene at a railway station, a self-service payment machine in a shop may seek to attract pedestrians to pay by face recognition. When a pedestrian approaches the machine to pay, her face image can be recognized and her feature information determined to be, for instance, a young woman. Matching resource attribute data are then selected by combining the current scene information with this feature information, and an animation of a female cartoon character carrying a suitcase home through the snow is displayed, with a voice broadcast such as "Happy New Year! Swipe your face to draw a fortune card and bring good fortune home!" to draw the pedestrian's attention to the face-payment function, completing one round of interaction with the pedestrian.
Furthermore, voice data in the dialect of the terminal's geographic location can be configured for the terminal, so that when the target object is displayed it is accompanied by voice interaction in the local dialect, bringing it closer to local pedestrians. In addition, after the local-dialect voice interaction is played, a corresponding Mandarin or English translation can be played to suit the language habits of non-local pedestrians, improving the extensibility and interactivity of the target objects the terminal displays.
As another application embodiment, acquiring a feature resource material corresponding to the feature information may include:
and matching the characteristic resource materials corresponding to the characteristic information from the characteristic resource materials stored in the terminal.
In the embodiment of the specification, different feature resource materials configured according to possible feature information can be stored locally in the terminal, so that the terminal can directly call the locally stored feature resource materials according to the current feature information, and the locally stored feature resource materials can be combined, rendered and displayed conveniently, the reaction time of the terminal is shortened, and the user experience is improved.
It should be noted that, the storage modes of the service resource material and the environment resource material may be the same, and are not described herein again.
Furthermore, when the resource attribute data is updated, the terminal can download updated characteristic resource materials, business resource materials, environment resource materials and the like from the server so as to meet the requirements of constantly changing environment information and characteristic information, and the target object corresponding to new environment information and/or characteristic information does not need to be rendered in advance and then stored in the terminal for displaying.
In addition, the terminal can also send a resource material acquisition request to the server according to the current service scene, the environmental information and/or the characteristic information, so that the server configures the corresponding resource material according to the service scene, the environmental information and/or the characteristic information, then the configured resource material is returned to the terminal, and the terminal performs operations such as combination, rendering and the like on the received resource material, so as to reduce the occupied storage space of the terminal.
S109: rendering the rendering data to obtain a target object displayed on an interface of the terminal so as to reflect the environment information in the service scene displayed by the target object.
In the embodiments of the present specification, the target object is an object displayed on the terminal's interface according to business needs, and may specifically be an animation, a short video, an image or the like, without specific limitation here. The terminal can generate a corresponding target object for display by combining the scene and/or feature information of its location with the business needs; for example, an advertisement animation adapted to the scene can be generated by combining scene information such as the venue, geographic location, time, holiday and weather, avoiding the visual fatigue caused by playing the same advertisement content in every scene.
The terminal can use its built-in real-time rendering engine to render the combined rendering data in real time and obtain the target object corresponding to the current environment information and/or feature information, so that different target objects are rendered and generated on the terminal's interface as the environment information and/or feature information change, attracting pedestrians' interest in the displayed target object.
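For illustration, handing the combined rendering data to a real-time rendering engine and showing the result (step S109) might be sketched as follows; RealTimeRenderingEngine and Screen are stand-ins, not a specific engine API.
    class RealTimeRenderingEngine:
        """Stand-in for the real-time rendering engine configured in the terminal."""
        def render(self, rendering_data: dict) -> dict:
            # A real engine would build the 3D scene from the mesh, skeleton, maps and script
            # and produce displayable frames; here the data is simply wrapped and returned.
            return {"target_object": rendering_data}

    class Screen:
        """Stand-in for the terminal's display interface."""
        def show(self, target_object: dict) -> None:
            print("displaying:", target_object)

    def display_target_object(engine, screen, rendering_data: dict) -> None:
        """Render the combined rendering data and show the resulting target object (S109)."""
        screen.show(engine.render(rendering_data))

    display_target_object(RealTimeRenderingEngine(), Screen(), {"model_mesh": "barista"})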
As an application embodiment, rendering the rendering data to obtain a target object displayed on an interface of the terminal may include:
rendering by using the rendering data to obtain a target animation;
carrying out transparent effect processing on the target animation to obtain the target transparent animation;
and displaying the target transparent animation on the display content of the current interface of the terminal.
In the embodiments of the present specification, the target transparent animation is a target object that can be shown on the terminal's interface without covering the content already displayed there. The existing content therefore continues to be displayed while a target transparent animation matching the current scene information and/or feature information is shown, adding interest to the existing content and enabling real-time interaction with the pedestrian.
For example, for a self-service payment machine in a supermarket, when a user settles a bill, the user's face image can be recognized and the user's feature information determined. Combined with the current business service information and environment information, a target transparent animation promoting the face-payment function can then be shown over the order detail page or payment page, so that its content matches the current service scene and environment information without blocking the order details and payment information on the page. It can also present promotional offers for using face payment, assisted by voice broadcasts and greetings, to attract the user's attention.
Further, the displaying the target transparent animation on the display content of the current interface of the terminal may include:
and displaying the target transparent animation at the specified position of the display content of the current interface of the terminal.
In the embodiments of the specification, the target transparent animation can be displayed at a designated position within the existing display content, based on the relationship between the content currently shown on the terminal's interface and the transparency-processed animation, so that the animation blends naturally into the existing content rather than appearing obtrusive, improving the visual effect.
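One common way to realize such a transparent overlay is per-pixel alpha blending of the animation frame onto the existing interface content at the designated position; a pure-Python sketch (with nested-list pixel buffers assumed for simplicity) is:
    def overlay_transparent_frame(background, frame, alpha, x, y):
        """Alpha-blend one frame of the target transparent animation onto the existing display
        content at the designated position (x, y); pixels are (R, G, B) tuples in nested lists,
        and alpha values range from 0.0 (fully transparent) to 1.0 (opaque)."""
        for row in range(len(frame)):
            for col in range(len(frame[0])):
                by, bx = y + row, x + col
                if 0 <= by < len(background) and 0 <= bx < len(background[0]):
                    a = alpha[row][col]
                    fg, bg = frame[row][col], background[by][bx]
                    background[by][bx] = tuple(
                        round(a * f + (1 - a) * b) for f, b in zip(fg, bg)
                    )
        return background

    # A 1x1 semi-transparent white pixel composited onto a 2x2 black background at (1, 0).
    bg = [[(0, 0, 0), (0, 0, 0)], [(0, 0, 0), (0, 0, 0)]]
    print(overlay_transparent_frame(bg, [[(255, 255, 255)]], [[0.5]], x=1, y=0))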
Continuing the above example, in the snowy winter scene at the railway station, the in-store self-service payment machine attracts pedestrians to pay by face recognition by showing an animation of a little ant carrying a suitcase home through the snow, accompanied by a voice broadcast such as "Happy New Year! Swipe your face to draw a fortune card and bring good fortune home!". When it is recognized that a parent has brought a child to the payment machine before paying, a target transparent animation of two cartoon characters playing in the snow is rendered from the recognized face information and displayed next to the little ant in the existing animation, attracting the pedestrians' interest without covering the original animation content.
In the target object display method provided by the embodiments of the present specification, the terminal monitors and identifies passing pedestrians in real time. When a pedestrian is identified, a service resource material corresponding to the service scene configured for the terminal and an environment resource material corresponding to the environment information around the terminal's location are obtained, the two materials are combined to obtain rendering data, and rendering that data yields a target object that reflects both the terminal's service scene and the environment information.
In this way, the terminal can acquire different environment resource materials as the surrounding environment information changes, so that those changes are embodied in the service scene presented by the target object. This improves the visual effect and appeal of the target object, avoids the visual fatigue caused by a terminal showing a single, unchanging target object, and provides a better user experience.
Fig. 2 is a schematic flowchart of a target object display method provided in an embodiment of the present specification, taking as a specific application scenario an advertisement playing terminal that changes the displayed advertisement content as the scene changes.
S201: When the terminal identifies an approaching pedestrian, that is, when the terminal identifies the pedestrian, collect the current environment information of the terminal's location.
In the embodiments of the present specification, the terminal can monitor and identify in real time the environment information of its location, such as geographic location, time, weather, date and traffic volume, so that it senses changes in the environment information in time and can adapt the displayed advertisement content accordingly.
S203: Determine whether the environment information has changed.
In the embodiments of the present specification, the collected environment information can be evaluated in real time against a preset environment-change rule, for example whether the terminal's location has moved, whether the weather has turned from sunny to rain or snow, whether the time has passed from day to night, or whether the date is a holiday, without limitation here.
S205: If not, continue displaying the current advertisement content.
S207: If so, acquire an environment resource material matching the current environment information, where the environment resource material can reflect that environment information.
In the embodiments of the present specification, the specific meaning of the environment resource material is the same as described in the above embodiments and is not repeated here.
S209: Combine the acquired environment resource material with the service resource material to obtain the corresponding rendering data.
S211: Render the combined rendering data in real time with the real-time rendering engine in the terminal to obtain new advertisement content.
S213: Display the new advertisement content on the terminal's interface.
In the target object display method provided in the embodiments of the present specification, the terminal's current scene information is collected in real time and checked for changes. If it has changed, corresponding resource attribute data is configured using the current scene information, the multiple resource attribute data are combined into rendering data, and the rendering data is rendered in real time, so that new advertisement content is generated as the terminal's scene changes. This improves the extensibility and interest of the advertisement content the terminal displays and avoids the visual fatigue caused by displaying fixed advertisement content.
Fig. 3 is a schematic flowchart of a target object processing method provided in an embodiment of the present specification, where the embodiment of the present specification takes interaction between an advertisement playing terminal and a user as a specific application scenario.
S301: Identify a pedestrian.
S303: When the face recognition engine of the terminal recognizes a pedestrian's face image, determine the pedestrian's feature information from the recognized face image.
In the embodiments of the present specification, the feature information refers to data indicating the pedestrian's features, such as gender, estimated age, and whether the pedestrian is accompanied, without particular limitation here.
S305: Determine whether interactive advertisement content is currently being displayed, that is, whether the terminal is currently interacting with another pedestrian.
In the embodiment of the present specification, the interactive advertisement content may refer to a target advertisement displayed by interacting with a pedestrian.
S307: if yes, the current interactive advertisement content is continuously displayed.
In this case, it indicates that the terminal is currently interacting with other pedestrians, and in order to avoid mutual interference between pedestrians, the terminal does not respond to the newly recognized image, and after the current pedestrian interaction is finished, the terminal needs to be triggered again to perform a new round of pedestrian interaction.
S309: if not, acquiring a feature resource material corresponding to the feature information, an environment resource material corresponding to the current environment information and a service resource material corresponding to the service scene.
In the embodiment of the present specification, the obtained feature information and the current environment information may be combined to obtain a corresponding resource material, for example, when the identified face is a young woman, the feature resource material with elements such as an animation character of the young woman may be obtained on the basis of the current environment information, so as to attract the attention of pedestrians, and make the pedestrians continuously and deeply understand the displayed advertisement goods.
S311: combine the acquired resource materials to obtain the corresponding rendering data.
In the embodiments of the present specification, the rendering data corresponds to the resource data described in the above embodiments; its specific meaning is the same as that of the resource data in the above embodiments and is not repeated here.
S313: render the obtained rendering data in real time with the real-time rendering engine in the terminal to obtain new interactive advertisement content.
S315: display the new interactive advertisement content on the interface of the terminal, thereby completing one interaction with the pedestrian.
In the target object display method provided by the embodiments of the present specification, the terminal monitors and recognizes pedestrians in real time. When the face image of a pedestrian is recognized, the terminal determines whether it is already interacting with another pedestrian; if so, the current interactive advertisement content continues to be displayed until the current interaction ends, so as to avoid mutual interference between pedestrians.
If not, the feature information of the pedestrian is determined from the face image, the corresponding resource materials are obtained according to the feature information, the current environment information and the service scene, and the resource materials are combined to obtain the corresponding rendering data. After the rendering data are rendered in real time, new interactive advertisement content can be generated according to the service scene of the terminal, the environment information and the feature information of the pedestrian, with the feature information of the pedestrian reflected in the interactive advertisement content through corresponding display elements. This improves the expandability and interest of the advertisement content displayed by the terminal and avoids the visual fatigue caused by the terminal displaying fixed advertisement content.
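The interaction gate described above can be pictured with the following sketch (Python). It is purely illustrative: face_engine, material_store, renderer and state are hypothetical stand-ins for the terminal's face recognition engine, material source, rendering engine and interaction state, not components named in this specification.

def handle_frame(face_engine, material_store, renderer, state, frame):
    face = face_engine.detect(frame)                    # S301/S303: look for a pedestrian's face
    if face is None:
        return state.current_content
    if state.is_interacting:                            # S305: already serving another pedestrian?
        return state.current_content                    # S307: keep the current interactive content
    features = face_engine.extract_features(face)       # e.g. gender, estimated age, company
    rendering_data = {                                  # S309/S311: gather and combine materials
        **material_store.feature_materials(features),
        **material_store.environment_materials(state.environment),
        **material_store.service_materials(state.service_scene),
    }
    content = renderer.render(rendering_data)           # S313: real-time rendering
    state.begin_interaction(content)                    # S315: display and start this interaction
    return content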
Fig. 4 is a schematic flowchart of a method for generating a target object by combining and rendering multiple pieces of resource attribute data according to an embodiment of the present specification.
S401: combine the multiple pieces of resource attribute data to generate 3D rendering data, where the resource attribute data are matched according to the current scene information of the terminal and/or the feature information of the pedestrian.
In the embodiments of the present specification, a corresponding resource attribute data set may be configured according to preset scene information and pedestrian feature information. According to the type of the resource attribute data, the resource attribute data set may include a model mesh subset, a skeleton animation subset, a texture map subset, a speech information subset, a script code subset, and the like, which is not specifically limited herein.
Each subset is populated with various resource attribute data according to the preset scene information and pedestrian feature information.
After the multiple pieces of resource attribute data matched to the current scene information of the terminal and/or the feature information of the pedestrian are combined, a corresponding rendering animation package, namely the 3D rendering data, is obtained.
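One possible way to organize such a data set into typed subsets and bundle the matched entries into a rendering package is sketched below (Python). The field and key names are assumptions made for the example, not the data model of this specification.

from dataclasses import dataclass, field

@dataclass
class ResourceAttributeSet:
    model_meshes: dict = field(default_factory=dict)         # model mesh subset
    skeleton_animations: dict = field(default_factory=dict)  # skeleton animation subset
    texture_maps: dict = field(default_factory=dict)         # texture map subset
    speech_clips: dict = field(default_factory=dict)         # speech information subset
    script_snippets: dict = field(default_factory=dict)      # script code subset

    def combine(self, scene_key, feature_key):
        # Select the entries matched to the current scene and/or pedestrian features
        # and bundle them into a single rendering package (the 3D rendering data).
        keys = {scene_key, feature_key}
        return {
            "meshes": [v for k, v in self.model_meshes.items() if k in keys],
            "animations": [v for k, v in self.skeleton_animations.items() if k in keys],
            "textures": [v for k, v in self.texture_maps.items() if k in keys],
            "speech": [v for k, v in self.speech_clips.items() if k in keys],
            "scripts": [v for k, v in self.script_snippets.items() if k in keys],
        }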
S403: render the 3D rendering data in real time to obtain a target object, and display the target object on the interface of the terminal.
The terminal can use its real-time rendering engine to render the combined 3D rendering data in real time and obtain a target object matched to the current scene information of the terminal and/or the feature information of the pedestrian, so that elements corresponding to the current scene information and/or the pedestrian feature information are embodied in the target object displayed on the interface of the terminal.
Fig. 5 is a schematic structural diagram of a target object display apparatus provided in an embodiment of the present specification; the apparatus is applied to a terminal and includes:
an identification module 501, configured to identify a pedestrian;
a first obtaining module 502, configured to obtain a service resource material corresponding to the service scene of the terminal when the pedestrian is identified;
a second obtaining module 503, configured to obtain an environment resource material corresponding to the environment information around the location of the terminal;
a combining module 504, configured to combine the service resource material and the environment resource material to obtain rendering data; and
a rendering module 505, configured to render the rendering data to obtain a target object displayed on the interface of the terminal, so that the environment information is reflected in the service scene displayed by the target object.
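The module decomposition of Fig. 5 can be expressed, for illustration only, as the following sketch (Python), in which each module is reduced to a callable supplied by the caller; none of this is the patented implementation.

class TargetObjectDisplayDevice:
    def __init__(self, identify, get_service_material, get_env_material, combine, render):
        self.identify = identify                            # identification module 501
        self.get_service_material = get_service_material    # first obtaining module 502
        self.get_env_material = get_env_material            # second obtaining module 503
        self.combine = combine                              # combining module 504
        self.render = render                                # rendering module 505

    def on_frame(self, frame, service_scene, location):
        if not self.identify(frame):
            return None                                     # no pedestrian: nothing to update
        svc = self.get_service_material(service_scene)
        env = self.get_env_material(location)
        rendering_data = self.combine(svc, env)
        return self.render(rendering_data)                  # target object shown on the terminal interface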
Further, when the pedestrian is identified, acquiring the service resource material corresponding to the service scene configured for the terminal includes:
when the pedestrian is identified, determining whether the terminal is currently displaying a target object; and
if not, acquiring the service resource material corresponding to the service scene of the terminal.
Further, acquiring the service resource material corresponding to the service scene configured for the terminal includes:
determining the business service information corresponding to the service scene configured for the terminal; and
acquiring the service resource material corresponding to the business service information.
Further, when the pedestrian is identified, the method further includes:
acquiring an image of the pedestrian;
extracting feature information of the pedestrian from the image; and
acquiring a feature resource material corresponding to the feature information.
Accordingly, combining the service resource material and the environment resource material includes:
combining the service resource material, the feature resource material and the environment resource material to obtain the rendering data.
Further, acquiring the environment resource material corresponding to the environment information around the location of the terminal includes:
collecting the environment information of the location of the terminal; and
acquiring the environment resource material corresponding to the environment information from a server.
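The following is a purely illustrative sketch (Python) of fetching environment resource material from a server keyed by locally collected environment information; the endpoint path and payload fields are invented for the example and not defined by this specification.

import json
import urllib.request

def fetch_environment_material(server_url, environment_info, timeout=5):
    # environment_info might describe weather, time of day, temperature, etc.
    payload = json.dumps(environment_info).encode("utf-8")
    request = urllib.request.Request(
        server_url + "/environment-materials",   # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=timeout) as response:
        return json.loads(response.read().decode("utf-8"))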
Further, rendering the rendering data to obtain the target object displayed on the interface of the terminal includes:
rendering the rendering data to obtain a target animation;
performing transparency processing on the target animation to obtain a target transparent animation; and
displaying the target transparent animation over the display content of the current interface of the terminal.
Further, displaying the target transparent animation over the display content of the current interface of the terminal includes:
displaying the target transparent animation at a specified position over the display content of the current interface of the terminal.
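How such a transparent animation frame might be composited onto the current interface at a specified position is sketched below (Python, using the Pillow imaging library). The frame sources and the position are assumptions for the example only.

from PIL import Image

def overlay_transparent_frame(interface_frame, animation_frame, position=(0, 0)):
    base = interface_frame.convert("RGBA")
    overlay = animation_frame.convert("RGBA")
    # Paste using the overlay's own alpha channel as the mask, so the interface
    # content stays visible wherever the animation is transparent.
    base.paste(overlay, position, mask=overlay)
    return base.convert("RGB")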
With the target object display apparatus provided by the embodiments of the present specification, the terminal monitors and identifies passing pedestrians in real time. When a pedestrian is identified, the apparatus obtains the service resource material corresponding to the service scene configured for the terminal and the environment resource material corresponding to the environment information around the location of the terminal, combines the two to obtain rendering data, and, after the rendering data is rendered, obtains a target object that reflects both the service scene and the environment information of the terminal.
Therefore, the terminal can obtain different environment resource materials corresponding to different environment information as the surrounding environment changes, so that the change in the surrounding environment is reflected in the service scene displayed by the target object. This improves the visual effect and attractiveness of the target object, avoids the visual fatigue caused by the terminal displaying a single fixed target object, and provides a better user experience.
Based on the same inventive concept, the embodiments of the present specification further provide an electronic device including at least one processor and a memory, where the memory stores a program configured to be executed by the at least one processor to perform the following steps:
identifying a pedestrian;
when the pedestrian is identified, acquiring a service resource material corresponding to a service scene for configuring the terminal;
acquiring an environment resource material corresponding to the environment information around the position of the terminal;
combining the service resource material and the environment resource material to obtain rendering data;
rendering the rendering data to obtain a target object displayed on an interface of the terminal so as to reflect the environment information in the service scene displayed by the target object.
For other functions of the processor, reference may also be made to the contents described in the above embodiments, which are not described in detail herein.
Based on the same inventive concept, embodiments of the present specification further provide a computer-readable storage medium including a program for use in conjunction with an electronic device, the program being executable by a processor to perform the steps of:
identifying a pedestrian;
when the pedestrian is identified, acquiring a service resource material corresponding to a service scene for configuring the terminal;
acquiring an environment resource material corresponding to the environment information around the position of the terminal;
combining the service resource material and the environment resource material to obtain rendering data;
rendering the rendering data to obtain a target object displayed on an interface of the terminal so as to reflect the environment information in the service scene displayed by the target object.
For other functions of the processor, reference may also be made to the contents described in the above embodiments, which are not described in detail herein.
In the 1990s, an improvement in a technology could be clearly distinguished as either an improvement in hardware (for example, an improvement in a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement in a method flow). However, as technology develops, many of today's improvements in method flows can be regarded as direct improvements in hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement in a method flow cannot be implemented by a hardware entity module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is now mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled is written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can be readily obtained simply by briefly programming the method flow into an integrated circuit using the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, the same functionality can be achieved by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for implementing various functions may also be regarded as structures within the hardware component. Or even the means for implementing various functions may be regarded both as software modules for implementing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above apparatus is described as being divided into various modules and/or units by function. Of course, when the present application is implemented, the functions of the modules and/or units may be implemented in one or more pieces of software and/or hardware.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (17)

1. A target object display method is applied to a terminal and comprises the following steps:
identifying a pedestrian;
when the pedestrian is identified, acquiring a service resource material corresponding to a service scene for configuring the terminal;
acquiring an environment resource material corresponding to the environment information around the position of the terminal;
combining the service resource material and the environment resource material to obtain rendering data;
rendering the rendering data to obtain a target object displayed on an interface of the terminal so as to reflect the environment information in the service scene displayed by the target object.
2. The method of claim 1, when the pedestrian is identified, obtaining service resource materials corresponding to a service scenario configuring the terminal, comprising:
when the pedestrian is identified, judging whether the terminal displays a target object or not;
and if not, acquiring a service resource material corresponding to the service scene of the terminal.
3. The method according to claim 1, wherein the obtaining of the service resource material corresponding to the service scenario configuring the terminal comprises:
determining business service information corresponding to a business scene for configuring the terminal;
and acquiring a business resource material corresponding to the business service information.
4. The method of claim 3, wherein obtaining business resource materials corresponding to the business service information comprises:
and acquiring the business resource materials adaptive to the environment information from different business resource materials corresponding to the business service information.
5. The method of claim 1, upon identifying the pedestrian, further comprising:
acquiring an image of the pedestrian;
extracting feature information of the pedestrian from the image;
acquiring a characteristic resource material corresponding to the characteristic information;
combining the business resource materials and the environment resource materials, including:
and combining the service resource material, the characteristic resource material and the environment resource material to obtain rendering data.
6. The method of claim 5, obtaining feature resource material corresponding to the feature information, comprising:
and acquiring characteristic resource materials matched with the environment information and the service scene from different characteristic resource materials corresponding to the characteristic information.
7. The method according to claim 5, wherein if the terminal acquires images of a plurality of pedestrians, acquiring a feature resource material corresponding to the feature information, including:
determining relation characteristic information of the pedestrians according to the images of the pedestrians;
and acquiring corresponding characteristic resource materials according to the relation characteristic information, wherein the characteristic resource materials comprise multi-person interaction actions and multi-person interaction dialogue voices so as to reflect the multi-person interaction actions and the multi-person interaction dialogue voices in a service scene displayed by the target object.
8. The method of claim 1, upon identifying the pedestrian, further comprising:
collecting a human body image of the pedestrian;
extracting gesture information of the pedestrian from the human body image;
acquiring an interactive resource material corresponding to the gesture information;
and combining the service resource material, the interaction resource material and the environment resource material to obtain rendering data.
9. The method of claim 1, wherein obtaining environmental resource materials corresponding to environmental information around the location of the terminal comprises:
collecting the environment information of the position of the terminal;
and acquiring the environment resource materials corresponding to the environment information from a server.
10. The method of claim 1, rendering the rendering data to obtain a target object displayed on an interface of the terminal, comprising:
rendering by using the rendering data to obtain a target animation;
carrying out transparent effect processing on the target animation to obtain the target transparent animation;
and displaying the target transparent animation on the display content of the current interface of the terminal.
11. A target object display device is applied to a terminal and comprises:
an identification module, configured to identify a pedestrian;
a first acquisition module, configured to acquire a service resource material corresponding to a service scene for configuring the terminal when the pedestrian is identified;
a second acquisition module, configured to acquire an environment resource material corresponding to environment information around a position of the terminal;
a combination module, configured to combine the service resource material and the environment resource material to obtain rendering data; and
a rendering module, configured to render the rendering data to obtain a target object displayed on an interface of the terminal, so as to reflect the environment information in the service scene displayed by the target object.
12. The apparatus of claim 11, when the pedestrian is identified, acquiring service resource materials corresponding to a service scenario configuring the terminal, comprising:
when the pedestrian is identified, judging whether the terminal displays a target object or not;
and if not, acquiring a service resource material corresponding to the service scene of the terminal.
13. The apparatus of claim 11, wherein the obtaining of service resource materials corresponding to service scenarios configuring the terminal includes:
determining business service information corresponding to a business scene for configuring the terminal;
and acquiring a business resource material corresponding to the business service information.
14. The apparatus of claim 11, upon identifying the pedestrian, further comprising:
acquiring an image of the pedestrian;
extracting feature information of the pedestrian from the image;
acquiring a characteristic resource material corresponding to the characteristic information;
combining the business resource materials and the environment resource materials, including:
and combining the service resource material, the characteristic resource material and the environment resource material to obtain rendering data.
15. The apparatus of claim 11, wherein obtaining environmental resource materials corresponding to environmental information around the location of the terminal comprises:
collecting the environment information of the position of the terminal;
and acquiring the environment resource materials corresponding to the environment information from a server.
16. The apparatus of claim 11, rendering the rendering data to obtain a target object displayed on an interface of the terminal, comprising:
rendering by using the rendering data to obtain a target animation;
carrying out transparent effect processing on the target animation to obtain the target transparent animation;
and displaying the target transparent animation on the display content of the current interface of the terminal.
17. An electronic device comprising at least one processor and a memory, the memory storing a program and configured for the at least one processor to perform the steps of:
identifying a pedestrian;
when the pedestrian is identified, acquiring a service resource material corresponding to a service scene for configuring the terminal;
acquiring an environment resource material corresponding to the environment information around the position of the terminal;
combining the service resource material and the environment resource material to obtain rendering data;
rendering the rendering data to obtain a target object displayed on an interface of the terminal so as to reflect the environment information in the service scene displayed by the target object.
CN202010384115.0A 2020-05-09 2020-05-09 Target object display method and device and electronic equipment Pending CN111311339A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010384115.0A CN111311339A (en) 2020-05-09 2020-05-09 Target object display method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010384115.0A CN111311339A (en) 2020-05-09 2020-05-09 Target object display method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111311339A true CN111311339A (en) 2020-06-19

Family

ID=71162782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010384115.0A Pending CN111311339A (en) 2020-05-09 2020-05-09 Target object display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111311339A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106354457A (en) * 2016-08-25 2017-01-25 武克易 Targeted advertising through multi-screen display
CN107798714A (en) * 2017-09-11 2018-03-13 深圳创维数字技术有限公司 A kind of image data display method and relevant apparatus and computer-readable storage medium
CN108494836A (en) * 2018-03-09 2018-09-04 上海星视度科技有限公司 Information-pushing method, device and equipment
CN108428158A (en) * 2018-04-25 2018-08-21 华南理工大学 A kind of large-size screen monitors advertisement orientation jettison system and method based on multi-source heterogeneous data analysis
CN110490668A (en) * 2019-08-27 2019-11-22 微梦创科网络科技(中国)有限公司 A kind of advertisement dynamic rendering method and device

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111813491A (en) * 2020-08-19 2020-10-23 广州汽车集团股份有限公司 Vehicle-mounted assistant anthropomorphic interaction method and device and automobile
CN112114918A (en) * 2020-09-07 2020-12-22 泰康保险集团股份有限公司 Intelligent device, server, intelligent system and related interface display method
CN112270578A (en) * 2020-11-23 2021-01-26 支付宝(杭州)信息技术有限公司 Object display method and device and electronic equipment
CN112270578B (en) * 2020-11-23 2023-10-27 支付宝(杭州)信息技术有限公司 Object display method and device and electronic equipment
CN112494941A (en) * 2020-12-14 2021-03-16 网易(杭州)网络有限公司 Display control method and device of virtual object, storage medium and electronic equipment
CN112494941B (en) * 2020-12-14 2023-11-28 网易(杭州)网络有限公司 Virtual object display control method and device, storage medium and electronic equipment
CN114638629A (en) * 2020-12-15 2022-06-17 支付宝(杭州)信息技术有限公司 Information pushing method and device
CN115134663A (en) * 2022-07-11 2022-09-30 京东方科技集团股份有限公司 Information display method, device and system and electronic equipment
CN115134663B (en) * 2022-07-11 2024-06-04 京东方科技集团股份有限公司 Information display method, device and system and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200619)