CN114721562B - Processing method, apparatus, device, medium and product for digital object

Info

Publication number
CN114721562B
Authority
CN
China
Prior art keywords: digital object, target, interaction, target scene, action
Prior art date
Legal status: Active
Application number
CN202210392409.7A
Other languages
Chinese (zh)
Other versions
CN114721562A
Inventor
彭建
何建斌
高治力
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210392409.7A
Publication of CN114721562A
Application granted
Publication of CN114721562B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object


Abstract

The disclosure provides a processing method, apparatus, device, medium, and product for a digital object, relating to the technical field of artificial intelligence and, in particular, to the technical fields of computer vision and augmented reality. The specific implementation scheme is as follows: in response to a first interactive operation performed by a user on a target scene, a first interaction instruction corresponding to the first interactive operation is obtained; a first interaction action of a first digital object in the target scene is then determined based on the first interaction instruction.

Description

Processing method, apparatus, device, medium and product for digital object
Technical Field
The present disclosure relates to the field of computer vision and augmented reality technologies in the field of artificial intelligence technologies, and in particular, to a processing method, apparatus, device, medium, and product for a digital object.
Background
A digital human, digital animal, or similar entity is a three-dimensional object constructed from a mathematical model; any such entity may be referred to as a digital object. A digital object may be placed in a scene for interaction with a user. In the related art, the scene in which a digital object is located is generally static and cannot change. In such a scene, the digital object is decoupled from its surroundings, and the user cannot effectively interact with the scene while interacting with the digital object.
Disclosure of Invention
The present disclosure provides a processing method, apparatus, device, medium and product for digital objects to improve scene interaction effectiveness.
According to a first aspect of the present disclosure, there is provided a processing method for a digital object, comprising:
in response to a first interactive operation performed by a user on a target scene, obtaining a first interaction instruction corresponding to the first interactive operation; and
determining a first interaction action of a first digital object in the target scene based on the first interaction instruction.
According to a second aspect of the present disclosure there is provided a processing apparatus for a digital object, comprising:
a first response unit, configured to obtain, in response to a first interactive operation performed by a user on a target scene, a first interaction instruction corresponding to the first interactive operation; and
a target determination unit, configured to determine a first interaction action of a first digital object in the target scene based on the first interaction instruction.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program; the at least one processor executes the computer program to cause the electronic device to perform the method of the first aspect.
The disclosed technology solves the problem in the related art that the interaction between a user and a digital object cannot be effectively associated with the scene. Through the user's interaction with the target scene, interaction processing between the digital object and the scene is realized, providing richer interaction content.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a system diagram of an application of a processing method for a digital object provided in accordance with an embodiment of the present disclosure;
FIG. 2 is a flowchart of a processing method for a digital object provided in accordance with a first embodiment of the present disclosure;
FIG. 3 is a flowchart of a processing method for a digital object provided in accordance with a second embodiment of the present disclosure;
FIG. 4 is a flowchart of a processing method for a digital object provided in accordance with a third embodiment of the present disclosure;
FIG. 5 is a flowchart of a processing method for a digital object provided in accordance with a fourth embodiment of the present disclosure;
FIG. 6 is a flowchart of a processing method for a digital object provided in accordance with a fifth embodiment of the present disclosure;
FIG. 7 is a flowchart of a processing method for a digital object provided in accordance with a sixth embodiment of the present disclosure;
FIG. 8 is a flowchart of a processing method for a digital object provided in accordance with a seventh embodiment of the present disclosure;
FIG. 9 is a flowchart of a processing method for a digital object provided in accordance with an eighth embodiment of the present disclosure;
FIG. 10 is a schematic structural diagram of a processing apparatus for a digital object provided in accordance with a ninth embodiment of the present disclosure;
FIG. 11 is a block diagram of an electronic device for implementing a processing method for a digital object in accordance with an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments are included to facilitate understanding and should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
The technical solution of the present disclosure can be applied to interactive digital-human scenarios. By detecting an interactive operation performed by a user on a target scene, a digital object in the target scene can be controlled to execute the corresponding interaction action according to the interaction instruction corresponding to that operation. This realizes interactive control between the user and the digital object in the target scene and improves interaction efficiency and effectiveness.
In the related art, a digital person is a simulated person constructed using a mathematical model; three-dimensional objects such as digital items and digital animals can likewise be constructed from mathematical models. A digital person, digital animal, digital cartoon character, and the like may all be referred to as digital objects. In practice, a digital object may be placed in a scene, and a user may control the digital object located in the scene, for example to conduct a knowledge question and answer. Typically, the scene set for a digital object is static and does not change; the scene may simply be a background image placed directly behind the digital object. A static image used as the scene of a digital object cannot change, so the digital person is decoupled from the scene it is in. During the control process between the user and the digital object, the interaction content is limited (for example, only intelligent question answering is possible), and no effective scene-associated interaction can be produced.
To solve the above problems, the present disclosure considers placing the digital object in a three-dimensional scene. When a user initiates an interaction with the digital object, the interaction between the user and the digital object can be carried through the target scene. Through interaction detection in the target scene, the user's interaction with the digital object acquires an association with the target scene, providing richer interaction content.
The disclosure provides a processing method, apparatus, device, medium, and product for a digital object, which can be applied to the technical fields of computer vision and augmented reality within the field of artificial intelligence, so as to achieve effective scene-associated interaction and improve interaction efficiency and reliability.
The technical scheme of the present disclosure will be described in detail with reference to the accompanying drawings.
As shown in FIG. 1, a system schematic diagram for applying a processing method for a digital object according to an embodiment of the disclosure is provided; the system may include a flow configuration module 11, an interaction control module 12, and a rendering engine module 13.
The flow configuration module 11 may be used to configure the target scene and the correspondence among interactive operations, interaction instructions, and interaction actions.
The interaction control module 12 may receive the content configured by the flow configuration module 11 and implements the processing method of the present disclosure for digital objects. That is, it may detect a first interactive operation initiated by the user, obtain a first interaction instruction corresponding to that operation, and determine a first interaction action of a first digital object in the target scene based on the first interaction instruction.
In addition, in some embodiments, a moving image of the first digital object may be generated according to the first interaction action, so that the first digital object is displayed in motion in the target scene. This yields interaction between the digital object and the user that is more meaningfully associated with the scene, improving interaction efficiency and scene association. When a target scene with a moving image is rendered, the interaction control module 12 can direct the rendering engine module 13 to execute the specific rendering operations.
As shown in FIG. 2, a flowchart of a processing method for a digital object according to a first embodiment of the present disclosure is provided. The method may be implemented by a processing apparatus for a digital object, which may be located in an electronic device, and may include the following steps:
201: and responding to the first interactive operation executed by the user aiming at the target scene, and obtaining a first interactive instruction corresponding to the first interactive operation.
In practical applications, the target scene may comprise a spatial scene established based on a two-dimensional, three-dimensional, four-dimensional or even more dimensional coordinate system, and the target scene may comprise a three-dimensional scene or a multi-dimensional scene.
Wherein the electronic device may be configured with an output unit, which may comprise, for example, a display screen. An output unit of the electronic device may display the target scene and the one or more digital objects. The target scene may be a scene script that is pre-generated using a generation tool or model of the target scene. When the target scene needs to be displayed, the script can be loaded to trigger the display of the target scene. Of course, in some complex target scenarios, multiple sub-scripts may be included, each of which may include one sub-scenario of the target scenario, and the first interaction performed by the user with respect to the target scenario may also be an interaction with respect to any one of the sub-scenarios. Taking the exhibition hall target scene as an example, a plurality of exhibition stands can be included, and each exhibition stand can correspond to one sub-script. When the three-dimensional exhibition hall is displayed, each sub-script can be utilized to display the corresponding exhibition stand, and the interactive operation triggered by the user aiming at any exhibition stand can be detected.
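The scene-script structure described above might be organized as in the following Python sketch. All names here (SceneScript, SubScript, display) are illustrative assumptions for exposition, not identifiers from the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class SubScript:
    """One sub-scene of the target scene, e.g. a single exhibition stand."""
    name: str
    assets: list[str] = field(default_factory=list)

@dataclass
class SceneScript:
    """Pre-generated script for the whole target scene."""
    name: str
    sub_scripts: list[SubScript] = field(default_factory=list)

    def display(self) -> None:
        # Loading the script triggers display; a complex scene displays
        # each sub-scene from its own sub-script.
        for sub in self.sub_scripts:
            print(f"rendering sub-scene '{sub.name}' with assets {sub.assets}")

hall = SceneScript("exhibition_hall", [SubScript("stand_A"), SubScript("stand_B")])
hall.display()
```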
The user may perform the first interactive operation on any location in the target scene or on a first digital object in the target scene; that is, the first interactive operation may target any position in the target scene or the first digital object itself.
The first interactive operation may include any one of a position clicking operation, an object moving operation, a position input operation, or a sliding operation along an object movement trajectory. In practical applications, the interactive operation may further include closing, opening, inputting, moving, and the like. The interactive operation carries an interaction signal; when the operation is detected, a finger may serve as the sensing object of the interaction signal, as may an eyeball, a gesture, a brain wave, and the like. The interactive operation is obtained by identifying the operation type of the interaction signal; the identification process may be the same as in the related art and is not described here. The interactive operations referred to in this disclosure are merely illustrative and should not be construed as limiting the technical solution; any interactive operation or manner related to the target scene may fall within the protection scope of this technology.
The first interaction instruction may be determined from the first interactive operation: by identifying the first interactive operation, the corresponding first interaction instruction can be looked up in a correspondence list of operations and instructions.
The first interaction instruction may be an instruction to perform an object action with respect to the first digital object. So that the action to be performed by the first digital object can be determined accurately, the first interaction instruction may be configured to include an identifier of that action.
202: a first interaction of a first digital object in a target scene is determined based on the first interaction instruction.
The first interaction action may be determined from the first interaction instruction, which may include the action identifier of the action the first digital object is required to perform; the first interaction action corresponding to that identifier is then determined.
The first interaction action may be one or more of the actions that the physical object emulated by the first digital object is capable of performing. For example, when the emulated physical object is a person, the first digital object may be a digital person simulating actions a person can perform, such as walking, turning around, drinking water, eating, depositing money, querying content, running, opening a door, driving a vehicle, or triggering map navigation. The present disclosure does not specifically limit the action type of the first interaction action; any action executable by the physical object may serve as an interaction action.
For example, if the first interaction instruction is "walk to the cockpit and start driving", the first interaction action of the first digital object, namely "walk to the cockpit" plus "start driving", may be determined from the instruction. A moving image of the first digital object in the target scene can then be generated from this first interaction action: a walking motion corresponding to "walk to the cockpit" followed by a driving motion corresponding to "start driving", and the corresponding moving image is generated from the two motions so as to render the target scene with the moving image.
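The chain from detected operation to instruction to action sequence can be illustrated with a minimal lookup-table sketch. The table contents below are assumptions built from the cockpit example; a real system would populate them through the flow configuration module:

```python
# Correspondence list of operations and instructions (step 201).
OPERATION_TO_INSTRUCTION = {
    "click:cockpit": "walk_to_cockpit_and_start_driving",
}

# Each instruction carries the identifiers of the actions to perform (step 202).
INSTRUCTION_TO_ACTIONS = {
    "walk_to_cockpit_and_start_driving": ["walk_to_cockpit", "start_driving"],
}

def resolve_interaction(operation: str) -> list[str]:
    """Map a detected interactive operation to the first interaction action."""
    instruction = OPERATION_TO_INSTRUCTION[operation]
    return INSTRUCTION_TO_ACTIONS[instruction]

assert resolve_interaction("click:cockpit") == ["walk_to_cockpit", "start_driving"]
```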
In the embodiment of the disclosure, the first interaction instruction corresponding to the first interactive operation is obtained in response to the first interactive operation performed by the user on the target scene, and is used to determine the first interaction action of the first digital object in the target scene. Action control of the first digital object is thus achieved through interaction with the target scene, realizing interactive control between the digital object and the scene.
As shown in FIG. 3, a flowchart of a processing method for a digital object according to a second embodiment of the present disclosure is provided. The method may be implemented by a processing apparatus for a digital object, which may be located in an electronic device, and may include the following steps:
301: In response to a first interactive operation performed by the user on the target scene, obtain a first interaction instruction corresponding to the first interactive operation.
Some steps in this embodiment are the same as those in the foregoing embodiment; reference may be made to the descriptions above, and they are not repeated here.
302: a first interaction of a first digital object in a target scene is determined based on the first interaction instruction.
303: a motion image of the first digital object in the target scene is generated in accordance with the first interaction.
The first interactive action is a series of consecutive interactive sub-actions. And splicing the series of connected interaction sub-actions into one interaction action, and applying the first interaction action to the first digital object, so that one action video of the first digital object can be obtained through the first interaction action, and combining the action video with the target scene, so that the moving image of the first digital object in the target scene can be obtained.
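As a rough sketch of this splicing step, the renderer and compositor below are hypothetical stand-ins; the point is only the structure of sub-actions, action video, and moving image:

```python
def render_sub_action(sub_action: str) -> list[str]:
    # Stand-in renderer: a real system would emit frames of the digital object.
    return [f"{sub_action}:frame{i}" for i in range(3)]

def composite(frame: str, scene: str) -> str:
    # Stand-in compositor: places an object frame into the target scene.
    return f"{scene}|{frame}"

def build_moving_image(sub_actions: list[str], scene: str) -> list[str]:
    frames: list[str] = []
    for sub_action in sub_actions:
        # Splice the consecutive interaction sub-actions into one action video.
        frames.extend(render_sub_action(sub_action))
    # Combine the action video with the target scene to get the moving image.
    return [composite(frame, scene) for frame in frames]

video = build_moving_image(["walk_to_cockpit", "start_driving"], "cockpit_scene")
```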
304: rendering a target scene with moving pictures.
Alternatively, a target scene with moving images may be rendered by a rendering engine, in which the moving images are displayed in the target scene, and a scene page displayed in real time in the target scene may be changed as the moving images move. The coordinates of the moving image in the target scene can be provided for the rendering engine, and the rendering engine renders according to the coordinates of the moving image in the target scene. The user can view the target scene with the moving image rendered by the rendering engine.
The moving image may be generated from moving images of the first digital object at least one target track point, respectively. The moving image may include an action object formed by the first digital object according to the interactive sub-action. The coordinates of the moving image in the target scene may be referred to by coordinates of the key position of the first digital object therein in the target scene, for example, the coordinates of the nose tip of the first digital object in the target scene may be regarded as the coordinates of the moving image in the target scene. The coordinates change with the change of the moving image.
The coordinates of the moving image in the target scene may be associated with a virtual camera of the target scene, and a lens center of the virtual camera may correspond to the coordinates of the moving image in the target scene to set the lens center of the virtual camera as the first digital object. The fact that the lens center of the virtual camera corresponds to the coordinates of the moving image in the target scene may mean that the lens center of the virtual camera may be obtained by coordinate transformation of the coordinates of the moving image in the target scene.
In the embodiment of the present disclosure, a digital object may be placed in a target scene, and a user may perform a first interactive operation on the target scene. The electronic device detects the first interactive operation to obtain the corresponding first interaction instruction, from which the first interaction action of the first digital object located in the target scene can be determined, realizing the step from the user's interactive operation to the digital object's interaction action. A moving image of the first digital object in the target scene can then be generated according to the first interaction action, so that the target scene is rendered with the moving image. Through this interactive control of the first digital object in the target scene, the user can interact with the target scene and the first digital object with a higher degree of association, improving the effectiveness of scene interaction.
In practice, the user's interaction with the first digital object in the target scene usually involves travel, and the track points of the first digital object in the target scene need to be determined during the interaction.
As shown in FIG. 4, a flowchart of a processing method for a digital object according to a third embodiment of the present disclosure is provided. The method may be implemented by a processing apparatus for a digital object, which may be located in an electronic device, and may include the following steps:
401: In response to a first interactive operation performed by the user on the target scene, obtain a first interaction instruction corresponding to the first interactive operation.
Some steps in the embodiments of the present disclosure are the same as those in the foregoing embodiments, and are not described herein for brevity.
402: a first interaction of a first digital object in a target scene is determined based on the first interaction instruction.
At least one target track point corresponding to the first interaction is determined 403.
The at least one target track point may be one or more track points generated when the first digital object moves in the target scene. Of course, when the target track point is one, the first digital object does not move in the target scene.
The at least one target track point may be a series of coordinate points located on the same track formed according to the motion track of the first digital object, and the coordinate points may be determined by a scene coordinate system of the target scene.
404: Generate moving images respectively corresponding to the first digital object at the at least one target track point.
That is, a moving image of the first digital object is obtained at each of the at least one target track point.
Each target track point has a position in a track traversal order, which can be determined sequentially according to the track movement direction and the position coordinates. The moving images corresponding to the first digital object at the target track points can then be generated in that order.
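One simple way to realize such a traversal order, assuming points are visited in order of distance from the starting position, is sketched below; this ordering rule is an illustrative assumption:

```python
import math

def order_track_points(start: tuple[float, float],
                       points: list[tuple[float, float]]) -> list[tuple[float, float]]:
    # Order target track points by distance from the start position, as a
    # stand-in for ordering by track movement direction and coordinates.
    return sorted(points, key=lambda p: math.dist(start, p))

track = order_track_points((0.0, 0.0), [(4.0, 0.0), (1.0, 0.0), (2.5, 0.5)])
# -> [(1.0, 0.0), (2.5, 0.5), (4.0, 0.0)]
```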
405: Generate the moving image of the first digital object in the target scene according to the moving images respectively corresponding to the first digital object at the at least one target track point.
Optionally, the moving images corresponding to the digital object at the target track points can be synthesized into the moving image according to the traversal order of the track points. The specific synthesis method may follow an image-to-video synthesis algorithm in the related art and is not described here.
406: Render the target scene with the moving image.
In the embodiment of the disclosure, when the moving image is generated for the first digital object, the at least one target track point of the first interaction action is determined first, so that the track points of the first digital object are acquired accurately. From the track points, the moving images of the first digital object at the target track points can be generated, and the corresponding moving image is then produced from them. Acquiring at least one target track point of the first digital object allows the moving image to be generated accurately, achieving a high degree of matching between the moving image and the motion track of the first digital object and improving the motion effect of the first digital object in the target scene.
So that the reader can more fully understand the implementation principles of the present disclosure, the embodiment shown in FIG. 4 will now be further refined with reference to FIGS. 5 to 7.
As a possible implementation, in order to improve the interaction effect of the first digital object with the target scene, FIG. 5 shows a flowchart of a processing method for a digital object according to a fourth embodiment of the present disclosure. It differs from the foregoing embodiment in that the step of generating the moving image corresponding to any one target track point may include:
501: and determining the interaction sub-action corresponding to the first digital object at the target track point.
502: and generating an action image of the first digital object at the target track point based on the interaction sub-action corresponding to the first digital object at the target track point.
503: and determining a moving image corresponding to the first digital object at the target track point by using the action image.
The interaction action to be performed by the first digital object may be decomposed into interaction sub-actions corresponding to the respective track points, so that the plurality of sub-actions can later be synthesized back into the interaction action.
After the first interaction action is obtained, it can be split according to the at least one target track point into the interaction sub-actions corresponding to the target track points. Specifically, the first interaction action is split into at least one interaction sub-action, and the at least one target track point is matched with the at least one interaction sub-action, yielding the sub-action corresponding to each target track point. Matching track points with sub-actions makes the action change with the track point, producing an animation effect. The interaction sub-action corresponding to a target track point can be used to generate the action image of the first digital object at that point; the action image may include a scene image containing the first digital object performing the interaction sub-action in the target scene.
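A minimal sketch of this matching step, assuming a one-to-one pairing between ordered track points and sub-actions (the disclosure does not fix the pairing rule):

```python
def match_sub_actions(track_points: list[tuple[float, float]],
                      sub_actions: list[str]) -> list[tuple[tuple[float, float], str]]:
    # Pair each target track point with its interaction sub-action so the
    # action changes with the track point, producing an animation effect.
    if len(track_points) != len(sub_actions):
        raise ValueError("each target track point needs one interaction sub-action")
    return list(zip(track_points, sub_actions))

pairs = match_sub_actions([(1.0, 0.0), (2.0, 0.0)], ["step_forward", "turn_left"])
```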
In the embodiment of the disclosure, in the process of generating the moving image, the interaction sub-action corresponding to each target track point of the first digital object is determined first, and the action image of the first digital object at the track point is generated from that sub-action. Generating action images in this way ties the interaction action of the first digital object to the target track point, closely linking action and track point and improving the accuracy of the action output. The moving image of the first digital object at the target track point can then be produced from the action image, giving an accurate moving image and realizing motion control of the first digital object.
As one embodiment, step 503, determining, using the action image, the moving image of the first digital object corresponding to the target track point, may include:
and determining the environment information corresponding to the target track point.
And carrying out environment configuration on the action image according to the environment information corresponding to the target track point to obtain the motion image corresponding to the first digital object at the target track point.
The environment information is set for the first digital object. Since the target scene can simulate the real environment, when generating the moving image of the first digital object, it can be considered that the environment information is set for the first digital object at the target track point, the environment information corresponding to the target track point. The environmental information may include at least one of light information, solar environmental information over time, and text content information corresponding to the scene image.
Through environment configuration, the moving image of the target track point is related to the environment information thereof, so that the moving image of the first data object is effectively acquired, the actions of each track point are tightly combined with the environment, and the interaction efficiency is improved.
In the embodiment of the disclosure, the environmental information of the first digital object at the target track point may be obtained, so as to generate a corresponding moving image for the moving image corresponding to the first digital object at the target track point by using the environmental information. The obtained moving image can contain the environment information of the target track point, so that the moving image is tightly combined with the environment in the target scene, the environment combination rate of the user and the target scene is improved, and the interaction effect is improved.
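A sketch of the environment-configuration step follows. The EnvironmentInfo fields mirror the three kinds of environment information listed above; every name is an illustrative assumption:

```python
from dataclasses import dataclass

@dataclass
class EnvironmentInfo:
    lighting: str   # light information at the track point
    sunlight: str   # solar environment for the current simulated time
    caption: str    # text content tied to the scene image

def configure_environment(action_image: dict, env: EnvironmentInfo) -> dict:
    # The moving image at this track point carries the point's environment.
    return {**action_image,
            "lighting": env.lighting,
            "sunlight": env.sunlight,
            "caption": env.caption}
```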
In one possible design, step 502, generating the action image of the first digital object at the target track point based on the corresponding interaction sub-action, includes the following steps:
determining a virtual camera of the target scene;
acquiring the action object formed by the first digital object performing the interaction sub-action corresponding to the target track point; and
taking the action object as the lens center of the virtual camera, embedding the action object into the scene image of the virtual camera corresponding to the target scene to obtain the action image of the first digital object at the target track point.
Optionally, embedding the action object into the scene image that the virtual camera captures of the target scene may be done according to the position of the first digital object in that scene image, yielding the action image of the first digital object at the target track point.
The position of the first digital object in the scene image may be determined from the coordinates of the first digital object in the target scene, and can be obtained by converting between the coordinates of the virtual camera's scene image in the target scene and those of the first digital object.
Taking the action object as the lens center of the virtual camera means that, in the action image, the action object is located at the center of the scene image, so that the user can view the action object intuitively.
In the embodiment of the disclosure, when the action image of the first digital object is generated from the interaction sub-action, the virtual camera of the target scene is obtained. After the action object formed by the first digital object performing the sub-action at the target track point is acquired, the action object is taken as the lens center of the virtual camera and embedded into the scene image that the camera captures of the target scene, producing the action image of the first digital object at the target track point. The action image displays the action object and the target scene together, so that the first digital object is embedded accurately in the target scene and an action image with a high degree of matching between scene and object is obtained.
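The lens-center placement might look like the following sketch, where a plain translation stands in for the real coordinate transform and the action object is pasted at the center of the captured scene image; both helpers are assumptions:

```python
def lens_center_from_object(object_xy: tuple[float, float],
                            offset: tuple[float, float] = (0.0, 0.0)) -> tuple[float, float]:
    # The lens center is obtained by a coordinate transform of the moving
    # image's coordinates in the target scene; a translation stands in here.
    return (object_xy[0] + offset[0], object_xy[1] + offset[1])

def paste_position(scene_size: tuple[int, int],
                   object_size: tuple[int, int]) -> tuple[int, int]:
    # Top-left pixel that places the action object at the image center,
    # i.e. at the lens center of the virtual camera.
    return ((scene_size[0] - object_size[0]) // 2,
            (scene_size[1] - object_size[1]) // 2)
```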
In one possible design, as shown in FIG. 6, a flowchart of a processing method for a digital object according to a fifth embodiment of the disclosure is provided. It differs from the above embodiment in that step 403, determining at least one target track point corresponding to the first interaction action, may include the following steps:
601: Obtain the target position corresponding to the first interaction action.
602: Determine at least one target track point of the first digital object in the target scene based on the target position and the starting position corresponding to the first digital object.
The target position corresponding to the first interactive operation can be obtained by reading the operation information in the first interactive operation performed by the user; the operation information can be matched to an operation position to obtain the target position. For example, when the target scene is a three-dimensional exhibition hall, the user may perform a selection operation on any exhibition stand while interacting with the hall; the stand information in the first interactive operation is then read to obtain the target position corresponding to that stand.
The starting position corresponding to the first digital object may be its real-time position in the target scene. For example, when the scene is loaded, the starting position may be the default starting position of the first digital object; while the first digital object moves in the scene, the starting position may be its real-time position during movement. Both the starting position and the target position may be coordinate points in the scene coordinate system corresponding to the target scene.
In the embodiment of the disclosure, the at least one target track point of the first digital object in the target scene is determined from the target position corresponding to the first interactive operation performed by the user and the current starting position of the first digital object. With the target position and the starting position, the track of the first digital object in the target scene can be acquired accurately, yielding accurate target track points.
In one possible design, the trajectory of the first digital object may be generated accurately in order to obtain accurate track points. As shown in FIG. 7, a flowchart of a processing method for a digital object according to a sixth embodiment of the disclosure is provided. It differs from the above embodiment in that step 602, determining at least one target track point of the first digital object in the target scene based on the target position and the starting position corresponding to the first digital object, may include the following steps:
701: Determine an electronic map corresponding to the target scene.
702: Using the electronic map, and combining the target position and the starting position corresponding to the first digital object, determine at least one track point corresponding to the motion track of the first digital object in the electronic map.
703: Map the at least one track point from the electronic map into the target scene to obtain at least one target track point of the first digital object in the target scene.
The electronic map corresponding to the target scene may be a two-dimensional electronic map, through which the motion track can be determined rapidly in two dimensions to obtain the corresponding track points.
In the embodiment of the disclosure, the electronic map corresponding to the target scene is acquired, and at least one track point corresponding to the motion track of the first digital object is determined in the map. The track points are then mapped from the electronic map into the target scene, giving the at least one target track point of the first digital object in the target scene. Using the electronic map of the target scene, the track points of the first digital object can be acquired accurately, improving both the efficiency and the accuracy of track-point acquisition.
In one possible design, the above step 702, determining at least one track point corresponding to the motion track of the first digital object in the electronic map by using the electronic map in combination with the target position and the starting position corresponding to the first digital object, may specifically include the following steps:
mapping the target position to the electronic map to obtain a first position;
mapping the starting position corresponding to the first digital object to the electronic map to obtain a second position; and
determining at least one track point corresponding to the motion track of the first digital object in the electronic map by using the electronic map in combination with the first position and the second position.
From the first position and the second position in the electronic map, navigation planning can be carried out on the map to obtain a navigation path of the first digital object, with the second position as the start point and the first position as the end point. The navigation path serves as the motion track and is converted into at least one track point according to a motion offset.
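The conversion from navigation path to track points by motion offset might be realized as below; the polyline resampling and the fixed step length are assumptions standing in for the planner's actual motion offset:

```python
import math

def sample_track_points(path: list[tuple[float, float]],
                        step: float = 1.0) -> list[tuple[float, float]]:
    """Resample a polyline path (second position -> first position) into
    track points spaced roughly one motion offset apart."""
    points = [path[0]]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg = math.dist((x0, y0), (x1, y1))
        n = max(1, round(seg / step))  # subdivisions of this segment
        points.extend((x0 + i / n * (x1 - x0), y0 + i / n * (y1 - y0))
                      for i in range(1, n + 1))
    return points

track = sample_track_points([(0.0, 0.0), (3.0, 0.0), (3.0, 2.0)], step=1.0)
```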
In the embodiment of the disclosure, mapping the target position into the electronic map yields the first position, and mapping the current starting position of the first digital object into the electronic map yields the second position. These are the two positions between which a travel path must be planned on the map, so the motion track is planned accurately for the first digital object using the two positions, and the track points corresponding to the resulting track are obtained, improving the precision and efficiency of track-point acquisition.
As shown in FIG. 8, a flowchart of a processing method for a digital object according to a seventh embodiment of the present disclosure is provided. The method may be implemented by a processing apparatus for a digital object, which may be located in an electronic device, and may include the following steps:
801: In response to a first interactive operation performed by the user on the target scene, obtain a first interaction instruction corresponding to the first interactive operation.
Some steps in the embodiments of the present disclosure are the same as those in the foregoing embodiments, and for brevity of description, they are not described in detail herein.
802: a first interaction of a first digital object in a target scene is determined based on the first interaction instruction.
803: a motion image of the first digital object in the target scene is generated in accordance with the first interaction.
804: first content information matching the first interaction instruction is queried.
The first content information may be recommended content that matches the first interaction instruction. The first content information may be matched with the first interaction and the scene background of the target scene, and may be preset according to the first interaction or the first interaction instruction. For example, when the target scene is a showroom scene, the first content information may be introduction content to the showroom.
805: rendering a target scene with moving images and displaying first content information.
The first content information may be output in synchronization with the moving picture in the target scene. For example, the moving image may be output to the "a-article", and the first content information may be detailed description information of the "a-article", and the "a-article" and the first content information may be simultaneously output.
In the embodiment of the disclosure, on the basis of the first interaction operation performed by the user, in addition to obtaining the moving image of the first digital object in the target scene, the first content information matched with the first interaction instruction may be queried. The first content information can be interactive content added on the basis of the moving image, and the first content information is displayed in the process of rendering the target scene with the moving image, so that synchronous display of the moving image and the first content information can be realized, the interactive content is effectively expanded, and the interactive effectiveness is improved.
In one possible design, the above step 805, rendering the target scene with the moving image and displaying the first content information, may include the following steps:
determining the voice corresponding to the first content information; and
outputting the voice corresponding to the first content information while the moving image is displayed in the target scene.
The first content information may include text content. A speech generation algorithm may be employed to convert the first content information into speech.
The first content information may also include multimedia information such as video or audio, from which the voice may be extracted. The information type of the first content information is not specifically limited in this disclosure.
In the embodiment of the disclosure, the voice corresponding to the first content information is determined and output while the moving image is displayed in the target scene. Outputting the interactive content as voice can improve the content output speed and effectively expands the content output modes.
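The voice path can be sketched as below. Both helpers are hypothetical placeholders; the disclosure names no specific speech-generation algorithm or audio demuxer:

```python
def voice_for_content(content) -> bytes:
    # Text content is synthesized into speech; multimedia content has its
    # audio track extracted instead.
    if isinstance(content, str):
        return synthesize_speech(content)
    return extract_audio(content)

def synthesize_speech(text: str) -> bytes:
    # Placeholder: a real system would call a speech-generation algorithm.
    return text.encode("utf-8")

def extract_audio(media: bytes) -> bytes:
    # Placeholder: a real system would demux the audio stream here.
    return bytes(media)
```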
As shown in FIG. 9, a flowchart of a processing method for a digital object according to an eighth embodiment of the present disclosure is provided. The method may be implemented by a processing apparatus for a digital object, which may be located in an electronic device, and may include the following steps:
901: In response to a first interactive operation performed by the user on the target scene, obtain a first interaction instruction corresponding to the first interactive operation.
Some steps in the embodiments of the present disclosure are the same as those in the foregoing embodiments, and are not described herein for brevity.
902: a first interaction of a first digital object in a target scene is determined based on the first interaction instruction.
903: a motion image of the first digital object in the target scene is generated in accordance with the first interaction.
904: rendering a target scene with moving pictures.
905: and if the target scene with the moving image is determined to be rendered, detecting a second interactive operation triggered by the user.
The second interactive operation may be an interactive operation performed again by the user on the target scene after observing that playback of the moving image has ended. The operation type and the instruction-determination procedure of the second interactive operation may be the same as those of the first interactive operation; for specifics, refer to the description of the first interactive operation.
906: Determine, according to the second interaction instruction corresponding to the second interactive operation, a second interaction action matching the second interaction instruction.
The principle of the second interaction action is the same as that of the first interaction action and is not repeated here.
907: Control the first digital object to perform the second interaction action in the target scene.
In the embodiment of the disclosure, if it is determined that rendering of the target scene with the moving image has finished, the second interactive operation triggered by the user is detected; the second interaction action matching the corresponding second interaction instruction is determined, and the first digital object is controlled to perform it in the target scene. Acquiring the second interaction action allows continuous interaction between the user and the target scene, improving interaction continuity.
In one embodiment, after the second interaction action matching the second interaction instruction is determined, the method further comprises:
determining second content information matching the second interaction instruction; and
rendering the second content information in an information output control of the target scene.
The second content information may be video content, and the information output control may be a video player.
The second interactive operation triggered by the user may include a content playback operation, a product introduction operation, or other operations used to trigger video output. The second content information may be preconfigured according to the scene function of the target scene. For example, when the target scene is a plant-species introduction scene, the second interactive operation initiated by the user may be an introduction operation for plant A; the second content information matching the corresponding interaction instruction may then be an introduction video of plant A, output in the video player.
In the embodiment of the disclosure, after the second content information matching the second interaction instruction is determined, it is rendered in the information output control of the target scene. Outputting the second content information through a control inside the target scene realizes control-based output interaction within the scene and improves the interaction rate.
In some embodiments, the second interaction action comprises a control action of the first digital object on a second digital object in the target scene, and controlling the first digital object to perform the second interaction action in the target scene comprises:
controlling the first digital object to perform the corresponding control action on the second digital object in the target scene.
The second interactive operation may also be a control operation indicating that the first digital object is to control a second digital object in the target scene; the second interaction action is then a control action of the first digital object on the second digital object.
In some application scenarios, the first digital object may be a digital person, and the second digital object may be an item object in the target scene, such as a teacup, a television, or a vehicle. The control action may be determined by the specific type of the second digital object. For example, when the second digital object is a teacup, the control action may be "move", "pour water", or "take the teacup and drink"; when the second digital object is a vehicle, the control action may be "open the door", "enter the driving position", or "start the vehicle". The control action may include an interaction between the first digital object and the second digital object; the present disclosure does not restrict its specific action type.
In the embodiment of the disclosure, the second interaction action may further include a control action of the first digital object on the second digital object in the target scene, realizing interaction with the second digital object. Through the interaction of the first and second digital objects, object-to-object interaction inside the target scene is achieved, making the interaction more engaging and extending the interaction of the target scene.
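Dispatching control actions by the type of the second digital object, following the teacup and vehicle examples above, could look like this sketch (the action lists are illustrative):

```python
CONTROL_ACTIONS = {
    "teacup": ["move", "pour water", "take the teacup and drink"],
    "vehicle": ["open the door", "enter the driving position", "start the vehicle"],
}

def control_second_object(first_object: str, second_type: str, action: str) -> str:
    # The first digital object performs a control action on the second
    # digital object, validated against the second object's type.
    if action not in CONTROL_ACTIONS.get(second_type, []):
        raise ValueError(f"'{second_type}' does not support '{action}'")
    return f"{first_object} performs '{action}' on the {second_type}"

print(control_second_object("digital person", "vehicle", "open the door"))
```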
As shown in FIG. 10, a schematic structural diagram of a processing apparatus for a digital object according to a ninth embodiment of the present disclosure is provided. The apparatus may implement the processing method for a digital object and may be located in an electronic device. The processing apparatus 1000 for a digital object may comprise the following units:
a first response unit 1001, configured to obtain, in response to a first interactive operation performed by a user on a target scene, a first interaction instruction corresponding to the first interactive operation; and
a target determination unit 1002, configured to determine a first interaction action of a first digital object in the target scene based on the first interaction instruction.
The apparatus may further include:
an image generation unit, configured to generate a moving image of the first digital object in the target scene according to the first interaction action; and
a scene rendering unit, configured to render the target scene with the moving image.
In the embodiment of the present disclosure, a digital object may be placed in a target scene, and a user may perform a first interactive operation on the target scene. The electronic device detects the first interactive operation to obtain the corresponding first interaction instruction, from which the first interaction action of the first digital object located in the target scene can be determined, realizing the step from the user's interactive operation to the digital object's interaction action. A moving image of the first digital object in the target scene can then be generated according to the first interaction action, so that the target scene is rendered with the moving image. Through this interactive control of the first digital object in the target scene, the user can interact with the target scene and the first digital object with a higher degree of association, and richer interaction content is provided.
As one embodiment, the image generation unit includes:
a track determination module, configured to determine at least one target track point corresponding to the first interaction action;
an image generation module, configured to generate moving images of the first digital object respectively corresponding to the at least one target track point;
the image generation module is further configured to generate the moving image of the first digital object in the target scene according to the moving images respectively corresponding to the at least one target track point.
In some embodiments, the image generation module includes:
an action acquisition sub-module, configured to determine an interaction sub-action corresponding to the first digital object at a target track point;
an action generation sub-module, configured to generate an action image of the first digital object at the target track point based on the interaction sub-action corresponding to the first digital object at the target track point;
an image determination sub-module, configured to determine, using the action image, the moving image corresponding to the first digital object at the target track point.
In one possible design, the image determination sub-module is specifically configured to:
determine the environment information corresponding to the target track point; and
perform environment configuration on the action image according to the environment information corresponding to the target track point, so as to obtain the moving image corresponding to the first digital object at the target track point.
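The per-track-point chain formed by these sub-modules can be sketched as follows; the dictionary-based images and the environment fields are assumptions made only for illustration.

```python
# Illustrative only: interaction sub-action -> action image -> moving image,
# with environment configuration applied at each target track point.
def moving_image_at(digital_object: str, track_point: tuple, environment: dict) -> dict:
    # Action acquisition sub-module: the interaction sub-action at this point.
    sub_action = {"object": digital_object, "step_toward": track_point}
    # Action generation sub-module: render the sub-action into an action image.
    action_image = {"pose": sub_action}
    # Image determination sub-module: configure the action image with the
    # environment information (e.g., lighting) attached to this track point.
    action_image["environment"] = environment
    return action_image

print(moving_image_at("digital_person", (3.0, 4.0), {"lighting": "dusk"}))
```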
In yet another possible design, the action generation sub-module is specifically configured to:
determine a virtual camera of the target scene;
acquire the action object formed by the first digital object according to the interaction sub-action corresponding to the target track point; and
take the action object as the lens center of the virtual camera and embed it into the scene image of the virtual camera corresponding to the target scene, so as to obtain the action image of the first digital object at the target track point.
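A hedged sketch of this virtual-camera step is given below; VirtualCamera and the dictionary-based scene image are stand-ins, since the disclosure does not specify how the composition is performed.

```python
# Place the action object at the virtual camera's lens center and embed it
# into the camera's scene image of the target scene (illustrative only).
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    lens_center: tuple  # lens center in scene-image coordinates

def embed_action_object(camera: VirtualCamera, scene_image: dict, action_object: str) -> dict:
    action_image = dict(scene_image)  # copy so the original scene image is kept
    # Anchoring the action object at the lens center keeps the first digital
    # object's action in the middle of the frame.
    action_image["embedded"] = (action_object, camera.lens_center)
    return action_image

camera = VirtualCamera(lens_center=(640, 360))
print(embed_action_object(camera, {"scene": "living_room"}, "digital_person_waving"))
```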
In some embodiments, the track determination module includes:
a position determination sub-module, configured to obtain a target position corresponding to the first interaction action;
a track determination sub-module, configured to determine at least one target track point of the first digital object in the target scene based on the target position and an initial position corresponding to the first digital object.
In one possible design, the track determination sub-module is specifically configured to:
determine an electronic map corresponding to the target scene;
determine, using the electronic map, at least one track point corresponding to a motion track of the first digital object in the electronic map; and
map the at least one track point from the electronic map to the target scene, respectively, so as to obtain at least one target track point of the first digital object in the target scene.
In one possible design, the track determination sub-module is specifically configured to:
map the target position to the electronic map to obtain a first position;
map the initial position corresponding to the first digital object to the electronic map to obtain a second position; and
determine, using the electronic map and combining the first position and the second position, at least one track point corresponding to the motion track of the first digital object in the electronic map.
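Assuming the scene-to-map transform is a simple scaling and letting linear interpolation stand in for whatever path planner the electronic map actually uses, the first-position/second-position flow might look like:

```python
# A minimal sketch of the electronic-map trajectory step. to_map/to_scene are
# hypothetical transforms; a real planner (e.g., a shortest-path search over
# the map) would replace the linear interpolation below.
def to_map(point, scale=0.1):
    return (point[0] * scale, point[1] * scale)

def to_scene(point, scale=0.1):
    return (point[0] / scale, point[1] / scale)

def target_track_points(initial_pos, target_pos, n=5):
    second = to_map(initial_pos)  # second position: mapped initial position
    first = to_map(target_pos)    # first position: mapped target position
    # Placeholder planner: n evenly spaced track points from second to first.
    track = [
        (second[0] + (first[0] - second[0]) * i / (n - 1),
         second[1] + (first[1] - second[1]) * i / (n - 1))
        for i in range(n)
    ]
    # Map each track point back into the target scene.
    return [to_scene(p) for p in track]

print(target_track_points((0.0, 0.0), (10.0, 5.0)))
```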
As an alternative embodiment, the apparatus further comprises:
an information query unit, configured to query first content information matched with the first interaction instruction;
an information display unit, configured to render the target scene with the moving image and display the first content information.
In one possible design, the information display unit includes:
a voice conversion module, configured to determine the voice corresponding to the first content information;
a voice output module, configured to output the voice corresponding to the first content information while the moving image is displayed in the target scene.
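The voice conversion and output modules could be sketched as below; synthesize() is a placeholder, since the disclosure does not name any text-to-speech engine.

```python
# Illustrative only: convert the first content information to speech and
# output it while the moving image is displayed in the target scene.
def synthesize(text: str) -> bytes:
    return text.encode("utf-8")  # placeholder for real synthesized audio

def present(frames: list, first_content_info: str) -> None:
    audio = synthesize(first_content_info)          # voice conversion module
    print("playing", len(audio), "bytes of audio")  # voice output module
    for frame in frames:                            # display the moving image
        print("showing", frame)

present(["frame_0", "frame_1"], "Here is the information you asked about.")
```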
As an alternative embodiment, the apparatus further comprises:
an interaction detection unit, configured to detect a second interactive operation triggered by the user once it is determined that the target scene with the moving image has been rendered;
an interaction determination unit, configured to determine, according to a second interaction instruction corresponding to the second interactive operation, a second interaction action matched with the second interaction instruction;
an action interaction unit, configured to control the first digital object to perform the second interaction action in the target scene.
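A minimal sketch of this detect/determine/execute flow follows; the instruction-to-action table and the dictionary representation of the first digital object are hypothetical.

```python
# Illustrative only: a second interactive operation is matched to a second
# interaction action, which the first digital object then performs.
SECOND_ACTIONS = {"wave": "wave_hand", "sit": "sit_down"}  # hypothetical table

def handle_second_interaction(operation: str, first_digital_object: dict) -> dict:
    instruction = operation.strip().lower()              # interaction detection unit
    action = SECOND_ACTIONS.get(instruction)             # interaction determination unit
    if action is not None:
        first_digital_object["current_action"] = action  # action interaction unit
    return first_digital_object

print(handle_second_interaction("Wave", {"name": "digital_person"}))
```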
In one possible design, the apparatus further comprises:
a content determination unit, configured to determine second content information matched with the second interaction instruction;
a content rendering unit, configured to render the second content information in an information output control of the target scene.
In yet another possible design, the second interaction action comprises a control action of the first digital object on a second digital object in the target scene, and the action interaction unit comprises:
an interaction control module, configured to control the first digital object to perform the corresponding control action on the second digital object in the target scene.
It should be noted that the digital object in this embodiment is not a digital person corresponding to any specific user and cannot reflect the personal information of any specific user. The digital object in this embodiment is derived from a public data set.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of user personal information all comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program so that the electronic device performs the solution provided by any one of the embodiments described above.
Fig. 11 illustrates a schematic block diagram of an example electronic device 1100 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 11, the device 1100 includes a computing unit 1101 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1102 or a computer program loaded from a storage unit 1108 into a Random Access Memory (RAM) 1103. In the RAM 1103, various programs and data required for the operation of the device 1100 can also be stored. The computing unit 1101, ROM 1102, and RAM 1103 are connected to each other by a bus 1104. An input/output (I/O) interface 1105 is also connected to bus 1104.
Various components in device 1100 are connected to I/O interface 1105, including: an input unit 1106 such as a keyboard, a mouse, etc.; an output unit 1107 such as various types of displays, speakers, and the like; a storage unit 1108, such as a magnetic disk, optical disk, etc.; and a communication unit 1109 such as a network card, modem, wireless communication transceiver, or the like. The communication unit 1109 allows the device 1100 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 1101 may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1101 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1101 performs the various methods and processes described above, such as the processing method for digital objects. For example, in some embodiments, the processing method for a digital object may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 1108. In some embodiments, some or all of the computer programs may be loaded and/or installed onto device 1100 via ROM 1102 and/or communication unit 1109. When the computer program is loaded into RAM 1103 and executed by computing unit 1101, one or more of the steps of the processing method for a digital object described above may be performed. Alternatively, in other embodiments, the computing unit 1101 may be configured to perform the processing method for the digital object by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or a middleware component (e.g., an application server), or a front-end component (e.g., a user computer having a graphical user interface or a web browser through which the user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in the cloud computing service system and overcomes the defects of high management difficulty and weak service scalability found in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system or a server incorporating a blockchain.
It should be appreciated that the various forms of flow shown above may be used, with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (12)

1. A processing method for a digital object, comprising:
responding to a first interactive operation executed by a user aiming at a target scene, and obtaining a first interactive instruction corresponding to the first interactive operation;
acquiring a first interaction action of a first digital object in the target scene based on the first interaction instruction;
obtaining a target position corresponding to the first interaction action;
determining an electronic map corresponding to the target scene;
mapping the target position into the electronic map to obtain a first position;
mapping the initial position corresponding to the first digital object to the electronic map to obtain a second position;
determining at least one track point corresponding to a motion track of the first digital object in the electronic map by utilizing the electronic map and combining the first position and the second position;
mapping the at least one track point from the electronic map to the target scene, respectively, to obtain at least one target track point of the first digital object in the target scene;
generating moving images of the first digital object respectively corresponding to the at least one target track point;
generating a moving image of the first digital object in the target scene according to the moving images of the first digital object respectively corresponding to the at least one target track point;
rendering the target scene with the moving image.
2. The method according to claim 1, wherein the generating step of the moving image corresponding to any one of the target track points includes:
determining interaction sub-actions corresponding to the first digital object at the target track points;
generating an action image of the first digital object at the target track point based on the interaction sub-action corresponding to the first digital object at the target track point;
and determining a moving image corresponding to the first digital object at the target track point by using the action image.
3. The method of claim 2, wherein the determining, using the action image, the moving image of the first digital object corresponding to the target track point comprises:
determining environment information corresponding to the target track point;
and carrying out environment configuration on the action image according to the environment information corresponding to the target track point to obtain a moving image corresponding to the first digital object at the target track point.
4. The method of claim 2, wherein the generating an action image of the first digital object at the target track point based on the interaction sub-action corresponding to the first digital object at the target track point comprises:
determining a virtual camera of the target scene;
acquiring an action object formed by the first digital object according to the interaction sub-action corresponding to the target track point;
and taking the action object as the lens center of the virtual camera, embedding the action object into a scene image of the virtual camera corresponding to the target scene, and obtaining an action image of the first digital object at the target track point.
5. The method of any of claims 1-4, further comprising:
querying first content information matched with the first interaction instruction;
rendering the target scene with the moving image and displaying the first content information.
6. The method of claim 5, wherein the rendering the target scene with the moving image and displaying the first content information comprises:
determining the voice corresponding to the first content information;
and outputting the voice corresponding to the first content information in the process of displaying the moving image in the target scene.
7. The method of any of claims 1-4, further comprising:
if it is determined that the target scene with the moving image has been rendered, detecting a second interactive operation triggered by the user;
determining a second interaction action matched with the second interaction instruction according to the second interaction instruction corresponding to the second interaction operation;
Controlling the first digital object to perform the second interactive action in the target scene.
8. The method of claim 7, wherein the determining a second interaction that matches the second interaction instruction is followed by:
determining second content information matched with the second interaction instruction;
and rendering the second content information in the information output control of the target scene.
9. The method of claim 7, wherein the second interaction action comprises a control action of the first digital object on a second digital object in the target scene; and the controlling the first digital object to perform the second interaction action in the target scene comprises:
and controlling the first digital object to execute corresponding control actions on the second digital object in the target scene.
10. A processing apparatus for a digital object, comprising:
a first response unit, configured to obtain, in response to a first interactive operation executed by a user with respect to a target scene, a first interaction instruction corresponding to the first interactive operation;
a target determination unit, configured to determine, based on the first interaction instruction, a first interaction action of a first digital object in the target scene;
an image generation unit, configured to generate a moving image of the first digital object in the target scene according to the first interaction action;
a scene rendering unit, configured to render the target scene with the moving image;
the image generation unit includes:
a track determination module, configured to determine at least one target track point corresponding to the first interaction action;
an image generation module, configured to generate moving images of the first digital object respectively corresponding to the at least one target track point;
the image generation module is further configured to generate the moving image of the first digital object in the target scene according to the moving images respectively corresponding to the at least one target track point;
the track determination module is specifically configured to: obtain a target position corresponding to the first interaction action;
determine an electronic map corresponding to the target scene;
map the target position into the electronic map to obtain a first position;
map the initial position corresponding to the first digital object into the electronic map to obtain a second position;
determine, using the electronic map and combining the first position and the second position, at least one track point corresponding to a motion track of the first digital object in the electronic map; and
map the at least one track point from the electronic map to the target scene, respectively, to obtain at least one target track point of the first digital object in the target scene.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
12. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-9.
CN202210392409.7A 2022-04-15 2022-04-15 Processing method, apparatus, device, medium and product for digital object Active CN114721562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210392409.7A CN114721562B (en) 2022-04-15 2022-04-15 Processing method, apparatus, device, medium and product for digital object

Publications (2)

Publication Number Publication Date
CN114721562A CN114721562A (en) 2022-07-08
CN114721562B true CN114721562B (en) 2024-01-16

Family

ID=82243347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210392409.7A Active CN114721562B (en) 2022-04-15 2022-04-15 Processing method, apparatus, device, medium and product for digital object

Country Status (1)

Country Link
CN (1) CN114721562B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113359995A (en) * 2021-07-02 2021-09-07 北京百度网讯科技有限公司 Man-machine interaction method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109557998B (en) * 2017-09-25 2021-10-15 腾讯科技(深圳)有限公司 Information interaction method and device, storage medium and electronic device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An Intelligent Maritime Scene Frame Prediction Based on Digital Twins Technology; Zhong-Zheng Guo et al.; IEEE Xplore; full text *
Augmented Reality Interaction Technology for Design Tasks; Huang Youqun; Wang Lu; Chang Yan; Journal of Shenyang University of Technology (02); full text *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant