CN111583415B - Information processing method and device and electronic equipment - Google Patents

Information processing method and device and electronic equipment

Info

Publication number
CN111583415B
CN111583415B CN202010390278.XA
Authority
CN
China
Prior art keywords
information
target
image
image information
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010390278.XA
Other languages
Chinese (zh)
Other versions
CN111583415A (en)
Inventor
康石长
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010390278.XA
Publication of CN111583415A
Priority to PCT/CN2021/091975
Application granted
Publication of CN111583415B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an information processing method, an information processing apparatus, and an electronic device, belonging to the field of information processing. The information processing method includes the following steps: acquiring target point cloud data corresponding to a target object and environment information of a target space, where the target point cloud data includes image information of the target object; adjusting the target point cloud data when the environment information and the image information satisfy a preset condition; and generating a virtual image of the target object in the target space according to the adjusted target point cloud data. The application can generate a virtual image adapted to the environment information and avoid conflict between the virtual image and the environment, thereby better meeting user needs, improving the intelligence of information processing, and well solving the problem that existing information processing schemes are not intelligent enough and cannot meet user needs well.

Description

Information processing method and device and electronic equipment
Technical Field
The application belongs to the field of information processing, and particularly relates to an information processing method, an information processing device and electronic equipment.
Background
Augmented Reality (AR) is a technique that seamlessly merges virtual-world information with real-world information. With the rapid development of AR technology, AR is increasingly used in people's daily life and work. For example, an AR conference brings a realistic on-site conference experience to participants in different spaces. However, in existing AR conference scenarios, users' visual and auditory requirements are not well met. In AR technology, existing information processing schemes therefore remain not intelligent enough and cannot satisfy user needs well.
Disclosure of Invention
The embodiments of the application aim to provide an information processing method, an information processing apparatus, and an electronic device, which can solve the problem that existing information processing schemes are not intelligent enough and cannot meet user needs well.
In order to solve the technical problems, the application is realized as follows:
in a first aspect, an embodiment of the present application provides an information processing method, including:
acquiring target point cloud data corresponding to a target object and environment information of a target space; wherein the target point cloud data includes image information of the target object;
adjusting the target point cloud data under the condition that the environment information and the image information meet preset conditions;
and generating a virtual image of the target object in the target space according to the adjusted target point cloud data.
In a second aspect, an embodiment of the present application provides an information processing apparatus, including:
the first acquisition module is used for acquiring target point cloud data corresponding to the target object and environment information of the target space; wherein the target point cloud data includes image information of the target object;
the first adjusting module is used for adjusting the target point cloud data under the condition that the environment information and the image information meet preset conditions;
the first generation module is used for generating a virtual image of the target object in the target space according to the adjusted target point cloud data.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instruction stored on the memory and executable on the processor, the program or instruction implementing the steps of the method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, including a processor and a communication interface, the communication interface being coupled to the processor, and the processor being configured to run a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, target point cloud data corresponding to a target object and environment information of a target space are acquired, where the target point cloud data includes image information of the target object; the target point cloud data is adjusted when the environment information and the image information satisfy a preset condition; and a virtual image of the target object is generated in the target space according to the adjusted target point cloud data. In this way, a virtual image adapted to the environment information can be generated and conflict between the virtual image and the environment avoided, thereby better meeting user needs, improving the intelligence of information processing, and well solving the problem that existing information processing schemes are not intelligent enough and cannot meet user needs well.
Drawings
FIG. 1 is a flowchart of an information processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an application flow of an information processing method according to an embodiment of the present application;
FIG. 3 is a second flowchart of an information processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an information processing apparatus according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are some, but not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of protection of the application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The information processing method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
The information processing method provided by the application, as shown in FIG. 1, includes the following steps:
step 11: acquiring target point cloud data corresponding to a target object and environment information of a target space; wherein the target point cloud data includes image information of the target object.
Taking an AR communication scenario as an example, step 11 may specifically be: receiving target point cloud data sent by a peer device performing AR communication with the home-terminal device, and scanning by the home-terminal device to obtain the environment information. Here, the home-terminal device is the electronic device executing the present scheme, hereinafter referred to as the first AR device, and the peer device is hereinafter referred to as the second AR device. Of course, the scenario involving the virtual image may also be a VR (Virtual Reality) communication scenario, an MR (Mixed Reality) communication scenario, or the like, which is not limited here.
The environmental information may include scene type, environmental color, etc. The image information may include information of color, hairstyle, face shape, body shape, clothing, etc.
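For illustration only, the environment information and image information described above could be organized as simple records. The following Python sketch is not part of the application; all field names are hypothetical choices:

    from dataclasses import dataclass

    @dataclass
    class EnvironmentInfo:
        # Environment information of the target space (hypothetical fields).
        scene_type: str           # e.g. "conference_room" or "outdoor"
        background_color: tuple   # dominant background color as (R, G, B), 0-255

    @dataclass
    class ImageInfo:
        # Image information of the target object (hypothetical fields).
        color: tuple              # dominant avatar color as (R, G, B), 0-255
        hairstyle: str            # e.g. "short", "long"
        face_shape: str           # e.g. "oval"
        body_shape: str           # e.g. "slim", "average", "heavy"
        clothing_type: str        # e.g. "business_suit", "casual"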
Step 12: adjusting the target point cloud data when the environment information and the image information satisfy a preset condition.
The preset condition may be at least one of: the color difference being too small, or a type mismatch. In particular, all or part of the target point cloud data may be adjusted.
Before step 12, an initial virtual image of the target object may of course be generated from the target point cloud data obtained in step 11, for the user to interact with.
Step 13: generating a virtual image of the target object in the target space according to the adjusted target point cloud data.
That is, the virtual image of the target object presented in the target space in step 13 matches, rather than conflicts with, the environment information. The virtual image may also be understood as an avatar or a virtual object (such as a virtual character).
Specifically, the preset condition includes at least one of the following conditions: the difference value between the background color information in the environment information and the color information in the image information is smaller than a preset value; the scene type information in the environment information is not matched with the clothing type information in the image information; the scene type information in the environment information is not matched with the hairstyle information in the image information.
In this way, a virtual image matching the environment information can be ensured as far as possible.
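As a minimal sketch of how these preset conditions might be checked — assuming a Euclidean color difference, hypothetical scene-compatibility tables, and an illustrative threshold, none of which are specified by the application:

    # Hypothetical tables mapping a scene type to the clothing types and
    # hairstyles considered to match it.
    SCENE_CLOTHING = {"conference_room": {"business_suit", "shirt"},
                      "outdoor": {"casual", "sportswear"}}
    SCENE_HAIRSTYLE = {"conference_room": {"short", "tied_up"},
                       "outdoor": {"short", "long"}}

    COLOR_DIFF_THRESHOLD = 60.0  # illustrative stand-in for the "preset value"

    def color_difference(c1, c2):
        # Euclidean distance between two (R, G, B) colors in 0-255.
        return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

    def needs_adjustment(scene_type, background_color,
                         avatar_color, clothing_type, hairstyle):
        # True if any of the three preset conditions above holds.
        too_similar = color_difference(background_color, avatar_color) < COLOR_DIFF_THRESHOLD
        clothing_mismatch = clothing_type not in SCENE_CLOTHING.get(scene_type, set())
        hairstyle_mismatch = hairstyle not in SCENE_HAIRSTYLE.get(scene_type, set())
        return too_similar or clothing_mismatch or hairstyle_mismatch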
Wherein the adjusting the target point cloud data includes: and adjusting the image information in the target point cloud data to obtain target image information matched with the environment information.
This reduces the amount of data processing while still obtaining a virtual image compatible with the environment information.
Further, after the target point cloud data is adjusted when the environment information and the image information satisfy the preset condition, the method further includes: acquiring target audio data corresponding to the target object; and adjusting the target audio data according to the image information in the adjusted target point cloud data, and playing the adjusted target audio data.
In this way, the voice of the virtual image becomes more recognizable, improving the user experience.
Specifically, the image information includes: at least one of face type information, hairstyle information, body type information, and clothing information; the adjusting the target audio data according to the image information in the adjusted target point cloud data comprises: and adjusting the tone quality characteristic data in the target audio data according to the image information in the adjusted target point cloud data.
This improves the recognizability of the voice while keeping the amount of data processing low.
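As one rough illustration of adjusting tone quality characteristic data from image information, the sketch below derives a pitch factor from the avatar's body shape and applies a naive resampling pitch shift. The body-shape-to-pitch mapping is invented for this example, and a real system would more likely use a dedicated voice-conversion model:

    import numpy as np

    # Hypothetical mapping from avatar body shape to a pitch factor (>1 raises pitch).
    PITCH_BY_BODY_SHAPE = {"slim": 1.15, "average": 1.0, "heavy": 0.85}

    def shift_pitch(samples, factor):
        # Naive pitch shift by resampling; this also changes the duration,
        # which a phase vocoder or neural model would avoid.
        samples = np.asarray(samples, dtype=np.float32)
        idx = np.arange(0.0, len(samples) - 1, factor)
        return np.interp(idx, np.arange(len(samples)), samples)

    def adjust_audio(samples, body_shape):
        return shift_pitch(samples, PITCH_BY_BODY_SHAPE.get(body_shape, 1.0))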
The information processing method provided by the embodiment of the application is further described below, taking a target person as the target object, a virtual character as the corresponding virtual image, and an AR teleconference as the application scenario.
In view of the above technical problems, an embodiment of the present application provides an information processing method, which mainly relates to adjustment of point cloud data and adjustment of audio data for a target object, and an exemplary scheme provided by the embodiment of the present application is described below.
Example one:
the embodiment of the application provides an information processing method, which is embodied as a method for adapting virtual characters and environments, based on an electronic device for AR communication, and according to the environments for AR communication (particularly, the environments of a target space), changing the material colors (such as skin colors, clothes colors, hair colors and the like) of the virtual character images.
During AR communication, the scanning device corresponding to the second AR device scans the participant, and three-dimensional point cloud data of the person is formed through data filtering and optimization.
The second AR device then transmits the three-dimensional point cloud data to the first AR device over a network.
After receiving the three-dimensional point cloud data, the first AR device generates a virtual character through three-dimensional reconstruction and rendering. It collects the virtual character's image information, such as hair and clothing, and the hue style of the environment in which the first AR device is located (i.e., the environment information). Through analysis and comparison, the first AR device appropriately adjusts the material color of the virtual character so that the character adapts to the environment rather than clashing with its colors, improving the character's visual distinctness and giving the user a better visual experience during AR communication (such as an AR conference or live broadcast).
As shown in FIG. 2, this exemplary solution may specifically include the following steps:
step 21: the first AR device and the second AR device enter into AR communication.
Specifically, for example, the user wears a device such as AR glasses and enters an AR conference, ensuring that all software and hardware systems are available and that the conference is successfully connected and ready.
Step 22: the scanning device corresponding to the second AR device scans the participant to form three-dimensional point cloud data.
Specifically, the scanning device scans the participant; when multiple scanning devices are used, they are calibrated so that the individual scans can be fused and aligned, and the participant's three-dimensional point cloud data is formed through data filtering and optimization.
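A numpy-only sketch of such a fusion step, assuming each device's calibration is given as a 4x4 homogeneous transform into a common frame; the simple voxel filter stands in for the data filtering and optimization mentioned above:

    import numpy as np

    def fuse_scans(scans, calibrations, voxel_size=0.01):
        # scans        : list of (N_i, 3) point arrays, one per scanning device
        # calibrations : list of 4x4 homogeneous transforms into the common frame
        # voxel_size   : edge length (in meters) of the deduplication grid
        merged = []
        for points, T in zip(scans, calibrations):
            homo = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
            merged.append((homo @ T.T)[:, :3])                     # align to common frame
        cloud = np.vstack(merged)

        # Simple voxel-grid filter: keep one point per occupied voxel.
        keys = np.floor(cloud / voxel_size).astype(np.int64)
        _, unique_idx = np.unique(keys, axis=0, return_index=True)
        return cloud[np.sort(unique_idx)]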
Step 23: the second AR device transmits the three-dimensional point cloud data to the first AR device via a network.
Specifically, for example, the second AR device transmits the three-dimensional point cloud data to the other end of the AR conference or AR live broadcast (the first AR device) through the network.
Step 24: the first AR device reconstructs and renders the virtual character from the three-dimensional point cloud data.
Specifically, as the first AR device continuously receives the three-dimensional point cloud data over the network protocol, it reconstructs the point cloud data in three dimensions using a three-dimensional engine and renders the reconstructed model, so that the user of the first AR device can see and interact with the generated virtual character.
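For reference, rebuilding a renderable mesh from the received points might look like the following sketch, assuming the Open3D library is available; the reconstruction depth is an illustrative choice:

    import numpy as np
    import open3d as o3d  # assumes the Open3D library is installed

    def reconstruct_mesh(points):
        # points: (N, 3) array holding the received three-dimensional point cloud.
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(np.asarray(points, dtype=np.float64))
        pcd.estimate_normals()  # Poisson reconstruction requires oriented normals
        mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
        return mesh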
Step 25: the first AR device collects avatar information such as hair and clothing of the avatar, and environmental information (e.g., hue style) of an environment in which the first AR device is located.
Specifically, the first AR device collects the virtual character's color data for hair, skin, clothing, and so on, and analyzes its color and hue; on the other hand, the first AR device may also collect color data of the current environment and analyze its hue style.
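One plausible reading of analyzing the color and tone is to reduce each source to a dominant hue; the sketch below takes a circular mean of pixel hues and is only an approximation of whatever analysis the application intends:

    import colorsys
    import numpy as np

    def dominant_hue(pixels):
        # pixels: (N, 3) array of RGB values in 0-255; returns a hue in 0.0-1.0.
        rgb = np.asarray(pixels, dtype=np.float32) / 255.0
        hues = [colorsys.rgb_to_hsv(r, g, b)[0] for r, g, b in rgb]
        # Average on the hue circle so reds near 0.0 and 1.0 do not cancel out.
        angles = np.array(hues) * 2 * np.pi
        mean_angle = np.arctan2(np.sin(angles).mean(), np.cos(angles).mean())
        return (mean_angle / (2 * np.pi)) % 1.0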
Step 26: the first AR device analyzes and compares the image information with the environment information (such as hue style), and appropriately adjusts the material color of the virtual character.
Specifically, the first AR device analyzes and compares the hues of the virtual character and of its own environment, and appropriately adjusts the material color of the virtual character so that the character adapts to the environment rather than clashing with its colors, improving the character's visual distinctness and giving the user a better visual experience during AR communication (such as an AR conference or live broadcast).
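A correspondingly simple stand-in for this adjustment is to rotate the avatar's hue away from the environment's dominant hue when the two are too close; the separation value below is an illustrative assumption:

    import colorsys

    def adjust_material_color(avatar_rgb, env_hue, min_separation=0.15):
        # avatar_rgb     : avatar material color as (R, G, B) in 0-255
        # env_hue        : environment hue in 0.0-1.0 (e.g. from dominant_hue above)
        # min_separation : minimal hue distance to enforce (fraction of the hue circle)
        r, g, b = [c / 255.0 for c in avatar_rgb]
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        delta = (h - env_hue + 0.5) % 1.0 - 0.5  # signed hue distance in [-0.5, 0.5)
        if abs(delta) < min_separation:          # too close: push away from env hue
            h = (env_hue + (min_separation if delta >= 0 else -min_separation)) % 1.0
        return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))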
Example two:
the embodiment of the application provides an information processing method based on electronic equipment for AR communication and combined with image information of an avatar for AR communication, and the method is embodied as a method for changing sound personalized tone quality.
During AR communication, the microphone of the second AR device collects audio data (i.e., the target audio data) and transmits it to the first AR device over a network; the second AR device also transmits the scanned image information of the virtual character's personal appearance, such as face shape, hairstyle, body shape, and clothing, to the first AR device. Through image analysis and big-data matching, the first AR device changes the sound corresponding to the virtual character (i.e., the target audio data) into a voice with matching characteristics, such as a dialect fitting the character's image or the personalized sound quality of a cartoon character. This improves the recognizability of the voice, brings the user a more interesting and engaging audio experience during AR communication (such as an AR teleconference or live broadcast), and satisfies the user's demand for personalized sound quality.
The first AR device then plays the processed audio, so the user hears the personalized, voice-changed sound.
As shown in FIG. 3, this exemplary solution may specifically include the following steps:
step 31: the first AR device and the second AR device enter into AR communication.
Specifically, for example, the user wears a device such as AR glasses and enters an AR conference, ensuring that all software and hardware systems are available and that the conference is successfully connected and ready.
Step 32: the microphone of the second AR device collects the user's audio data and transmits it to the first AR device over a network; the second AR device also transmits the scanned image information of the virtual character to the first AR device.
Specifically, the user begins speaking into the conference; the microphone collects the user's audio data, which enters the software system for processing and is then transmitted to the first AR device. In addition, the second AR device transmits the scanned image information of the virtual character's personal appearance, such as face shape, hairstyle, body shape, and clothing, to the first AR device.
Step 33: the first AR device performs image analysis and data matching based on the avatar (i.e., analyzes the virtual character's personal appearance and performs big-data matching on the analysis results).
Specifically, through image analysis and big-data matching, the first AR device finds a personalized sound-quality template, such as a dialect or cartoon voice, matching the user's personal image.
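Such image analysis and big data matching could be approximated by a scored lookup against a template library, as sketched below; the template names and scoring rule are invented for illustration:

    # Hypothetical voice-template library: each entry lists the avatar attributes
    # it fits best and the template identifier to apply.
    VOICE_TEMPLATES = [
        {"id": "cartoon_high", "hairstyle": "twin_tails", "body_shape": "slim"},
        {"id": "dialect_north", "hairstyle": "short", "body_shape": "average"},
        {"id": "deep_formal", "clothing_type": "business_suit", "body_shape": "heavy"},
    ]

    def match_voice_template(avatar):
        # avatar: dict of attribute name -> value, e.g. {"hairstyle": "short", ...}
        def score(template):
            return sum(1 for k, v in template.items()
                       if k != "id" and avatar.get(k) == v)
        return max(VOICE_TEMPLATES, key=score)["id"]

    # Example: a slim, twin-tailed avatar maps to the "cartoon_high" template.
    print(match_voice_template({"hairstyle": "twin_tails", "body_shape": "slim"}))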
Step 34: the first AR device converts the sound corresponding to the virtual character (i.e., the above target audio data) into a voice with matching characteristics, such as a dialect fitting the character or the personalized sound quality of a cartoon character.
Specifically, the voice corresponding to the virtual character is changed into one with matching characteristics, such as a dialect fitting the character's image or the personalized sound quality of a cartoon character. This improves the recognizability of the voice, makes AR communication (such as an AR teleconference or live broadcast) more interesting and engaging for the user, and satisfies the user's demand for personalized sound quality.
Step 35: the first AR device plays the voice-changed sound.
Specifically, the first AR device plays the voice-changed audio data through the system, so the user hears the personalized, voice-changed sound.
In summary, this example provides a voice-changing, personalized sound-quality service for users in AR teleconference scenarios. During AR communication, based on the virtual character's personal image transmitted by the second AR device, such as face shape, hairstyle, body shape, and clothing, the first AR device uses image analysis and big-data matching to change the user's voice into one with matching characteristics, such as a dialect fitting the image or the personalized sound quality of a cartoon character, bringing the user a more interesting and engaging audio experience during AR communication and satisfying the demand for personalized sound quality.
In summary, the application not only provides the traditional visual experience but also, by analyzing environment information such as the hue of the environment in which the first AR device is located, appropriately changes the material color of the virtual character so that the character adapts to the environment rather than clashing with its colors, improving the character's visual distinctness and giving the consumer a better visual experience during AR communication. In addition, combined with the virtual character's image, the application can provide a personalized, voice-changed listening experience, supplying dynamic, personalized, character-matched sound for the user's AR teleconferences, AR live broadcasts, AR remote assistance, and the like, improving the aesthetics and enjoyment of the overall system.
The voice changing described here is not limited to dialects or cartoon characters; various schemes such as animal sounds and simulated natural sounds are also supported. Nor is the application limited to the AR teleconference scenario; it also supports application scenarios such as AR remote video, AR live broadcast, AR teaching, and AR training. Likewise, the scheme can be applied not only on AR devices but also on VR devices, MR (Mixed Reality) devices, mobile phone terminals, simulation modules, and other electronic devices.
The information processing method provided by the embodiment of the application acquires target point cloud data corresponding to a target object and environment information of a target space, where the target point cloud data includes image information of the target object; adjusts the target point cloud data when the environment information and the image information satisfy a preset condition; and generates a virtual image of the target object in the target space according to the adjusted target point cloud data. This generates a virtual image adapted to the environment information and avoids conflict between the virtual image and the environment, thereby better meeting user needs, improving the intelligence of information processing, and well solving the problem that existing information processing schemes are not intelligent enough and cannot meet user needs well.
It should be noted that, for the information processing method provided in the embodiments of the present application, the execution body may be an information processing apparatus, or a control module in the information processing apparatus for executing the information processing method. In the embodiments of the present application, the information processing method is described taking an information processing apparatus executing the method as an example.
The embodiment of the application also provides an information processing device, as shown in FIG. 4, comprising:
a first obtaining module 41, configured to obtain target point cloud data corresponding to a target object and environmental information of a target space; wherein the target point cloud data includes image information of the target object;
a first adjustment module 42, configured to adjust the target point cloud data if the environment information and the image information meet a preset condition;
the first generation module 43 is configured to generate a virtual image of the target object in the target space according to the adjusted target point cloud data.
Wherein the preset conditions include at least one of the following conditions: the difference value between the background color information in the environment information and the color information in the image information is smaller than a preset value; the scene type information in the environment information is not matched with the clothing type information in the image information; the scene type information in the environment information is not matched with the hairstyle information in the image information.
Specifically, the first adjustment module includes: and the first adjusting sub-module is used for adjusting the image information in the target point cloud data to obtain target image information matched with the environment information.
Further, the information processing apparatus further includes: the second acquisition module is used for acquiring target audio data corresponding to the target object after adjusting the target point cloud data under the condition that the environment information and the image information meet preset conditions; the first processing module is used for adjusting the target audio data according to the image information in the adjusted target point cloud data and playing the adjusted target audio data.
Specifically, the image information includes: at least one of face type information, hairstyle information, body type information, and clothing information; the first processing module includes: and the second adjusting sub-module is used for adjusting the tone quality characteristic data in the target audio data according to the image information in the adjusted target point cloud data.
The information processing device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and embodiments of the present application are not limited in particular.
The information processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiment of the present application is not specifically limited.
The information processing device provided in the embodiment of the present application can implement each process implemented by the information processing device in the method embodiments of fig. 1 to 3, and in order to avoid repetition, a detailed description is omitted here.
The information processing device provided by the embodiment of the application acquires target point cloud data corresponding to a target object and environment information of a target space, where the target point cloud data includes image information of the target object; adjusts the target point cloud data when the environment information and the image information satisfy a preset condition; and generates a virtual image of the target object in the target space according to the adjusted target point cloud data. This generates a virtual image adapted to the environment information and avoids conflict between the virtual image and the environment, thereby better meeting user needs, improving the intelligence of information processing, and well solving the problem that existing information processing schemes are not intelligent enough and cannot meet user needs well.
Optionally, as shown in FIG. 5, the embodiment of the present application further provides an electronic device, including a processor 51, a memory 52, and a program or instruction stored in the memory 52 and executable on the processor 51. When executed by the processor 51, the program or instruction implements each process of the information processing method embodiments and achieves the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that, the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
FIG. 6 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 60 includes, but is not limited to: radio frequency unit 61, network module 62, audio output unit 63, input unit 64, sensor 65, display unit 66, user input unit 67, interface unit 68, memory 69, and processor 610.
Those skilled in the art will appreciate that the electronic device 60 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 610 through a power management system, thereby managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in FIG. 6 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which are not described in detail here.
The radio frequency unit 61 is configured to obtain target point cloud data corresponding to a target object; wherein the target point cloud data includes image information of the target object;
an input unit 64 for acquiring environmental information of a target space;
a processor 610, configured to adjust the target point cloud data if the environment information and the image information satisfy a preset condition; and generating a virtual image of the target object in the target space according to the adjusted target point cloud data.
In the embodiments of the application, target point cloud data corresponding to a target object and environment information of a target space are acquired, where the target point cloud data includes image information of the target object; the target point cloud data is adjusted when the environment information and the image information satisfy a preset condition; and a virtual image of the target object is generated in the target space according to the adjusted target point cloud data. In this way, a virtual image adapted to the environment information can be generated and conflict between the virtual image and the environment avoided, thereby better meeting user needs, improving the intelligence of information processing, and well solving the problem that existing information processing schemes are not intelligent enough and cannot meet user needs well.
Optionally, the preset condition includes at least one of the following conditions: the difference value between the background color information in the environment information and the color information in the image information is smaller than a preset value; the scene type information in the environment information is not matched with the clothing type information in the image information; the scene type information in the environment information is not matched with the hairstyle information in the image information.
Optionally, the processor 610 is specifically configured to adjust the image information in the target point cloud data to obtain target image information matched with the environment information.
Optionally, the radio frequency unit 61 is further configured to obtain target audio data corresponding to the target object after adjusting the target point cloud data if the environment information and the image information meet a preset condition;
the processor 610 is further configured to adjust the target audio data according to the image information in the adjusted target point cloud data, and play the adjusted target audio data by using the audio output unit 63.
Optionally, the image information includes: at least one of face type information, hairstyle information, body type information, and clothing information;
the processor 610 is specifically configured to adjust the tone quality feature data in the target audio data according to the image information in the adjusted target point cloud data.
In summary, the scheme provided by the embodiments of the application not only provides the traditional visual experience but also, by analyzing environment information of the environment in which the local terminal device is located, appropriately changes the material color of the virtual character so that the character adapts to the environment rather than clashing with its colors, improving the character's visual distinctness and giving the consumer a better visual experience during AR communication. In addition, combined with the virtual character's image, the scheme can provide a personalized, voice-changed listening experience, supplying dynamic, personalized, character-matched sound for the user's AR teleconferences, AR live broadcasts, AR remote assistance, and the like, improving the aesthetics and enjoyment of the overall system.
The embodiments of the application also provide a readable storage medium on which a program or instructions are stored. When executed by a processor, the program or instructions implement each process of the information processing method embodiments above and achieve the same technical effects; to avoid repetition, details are not repeated here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiments of the application further provide a chip, including a processor and a communication interface, the communication interface being coupled to the processor, and the processor being configured to run a program or instructions to implement each process of the information processing method embodiments and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-level chip, a chip system, or a system-on-a-chip.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprises a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; functions may also be performed in a substantially simultaneous manner or in reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present application, or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disk) and including instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Enlightened by the present application, those of ordinary skill in the art can devise many further forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (6)

1. An information processing method, characterized by comprising:
acquiring target point cloud data corresponding to a target object and environment information of a target space in an augmented reality communication scene; the target point cloud data comprises image information of the target object, and the image information comprises: at least one of face type information, hairstyle information, body type information, and clothing information;
under the condition that the environment information and the image information meet the preset conditions, the image information in the target point cloud data is adjusted to obtain target image information matched with the environment information;
generating a virtual image of the target object in the target space according to the target image information;
acquiring target audio data corresponding to the target object;
and according to the target image information, adjusting tone quality characteristic data in the target audio data, and playing the adjusted target audio data.
2. The information processing method according to claim 1, wherein the preset condition includes at least one of:
the difference value between the background color information in the environment information and the color information in the image information is smaller than a preset value;
the scene type information in the environment information is not matched with the clothing type information in the image information;
the scene type information in the environment information is not matched with the hairstyle information in the image information.
3. An information processing apparatus, characterized by comprising:
the first acquisition module is used for acquiring target point cloud data corresponding to the target object and environment information of a target space in the augmented reality communication scene; the target point cloud data comprises image information of the target object, and the image information comprises: at least one of face type information, hairstyle information, body type information, and clothing information;
the first adjusting module is used for adjusting the image information in the target point cloud data to obtain target image information matched with the environment information under the condition that the environment information and the image information meet preset conditions;
the first generation module is used for generating a virtual image of the target object in the target space according to the target image information;
the second acquisition module is used for acquiring target audio data corresponding to the target object;
and the first processing module is used for adjusting the tone quality characteristic data in the target audio data according to the target image information and playing the adjusted target audio data.
4. An information processing apparatus according to claim 3, wherein the preset condition includes at least one of:
the difference value between the background color information in the environment information and the color information in the image information is smaller than a preset value;
the scene type information in the environment information is not matched with the clothing type information in the image information;
the scene type information in the environment information is not matched with the hairstyle information in the image information.
5. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which program or instruction when executed by the processor implements the steps of the information processing method according to any of claims 1-2.
6. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the information processing method according to any of claims 1-2.
CN202010390278.XA 2020-05-08 2020-05-08 Information processing method and device and electronic equipment Active CN111583415B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010390278.XA CN111583415B (en) 2020-05-08 2020-05-08 Information processing method and device and electronic equipment
PCT/CN2021/091975 WO2021223724A1 (en) 2020-05-08 2021-05-07 Information processing method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010390278.XA CN111583415B (en) 2020-05-08 2020-05-08 Information processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111583415A CN111583415A (en) 2020-08-25
CN111583415B (en) 2023-11-24

Family

ID=72112248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010390278.XA Active CN111583415B (en) 2020-05-08 2020-05-08 Information processing method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN111583415B (en)
WO (1) WO2021223724A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583415B (en) * 2020-05-08 2023-11-24 Vivo Mobile Communication Co Ltd Information processing method and device and electronic equipment
CN112055167A (en) * 2020-09-18 2020-12-08 深圳随锐云网科技有限公司 Remote collaboration three-dimensional modeling system and method based on 5G cloud video conference
CN114612643B (en) * 2022-03-07 2024-04-12 北京字跳网络技术有限公司 Image adjustment method and device for virtual object, electronic equipment and storage medium
CN115222899B (en) * 2022-09-21 2023-02-21 湖南草根文化传媒有限公司 Virtual digital human generation method, system, computer device and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106303555A (en) * 2016-08-05 2017-01-04 深圳市豆娱科技有限公司 A kind of live broadcasting method based on mixed reality, device and system
CN106796771A (en) * 2014-10-15 2017-05-31 精工爱普生株式会社 The method and computer program of head-mounted display apparatus, control head-mounted display apparatus
CN107203266A (en) * 2017-05-17 2017-09-26 东莞市华睿电子科技有限公司 A kind of data processing method based on VR
CN108804546A (en) * 2018-05-18 2018-11-13 维沃移动通信有限公司 A kind of clothing matching recommends method and terminal
CN110072116A (en) * 2019-05-06 2019-07-30 广州虎牙信息科技有限公司 Virtual newscaster's recommended method, device and direct broadcast server
CN110084891A (en) * 2019-04-16 2019-08-02 淮南师范学院 A kind of color adjustment method and AR glasses of AR glasses
CN110297684A (en) * 2019-06-28 2019-10-01 腾讯科技(深圳)有限公司 Theme display methods, device and storage medium based on virtual portrait
CN111066042A * 2017-07-05 2020-04-24 Maria Francesca Jones Virtual conference participant response indication method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8982110B2 (en) * 2005-03-01 2015-03-17 Eyesmatch Ltd Method for image transformation, augmented reality, and teleperence
CN107170047A (en) * 2017-04-13 2017-09-15 北京小鸟看看科技有限公司 Update method, equipment and the virtual reality device of virtual reality scenario
CN108037863B (en) * 2017-12-12 2021-03-30 北京小米移动软件有限公司 Method and device for displaying image
CN110401810B (en) * 2019-06-28 2021-12-21 广东虚拟现实科技有限公司 Virtual picture processing method, device and system, electronic equipment and storage medium
CN111583415B (en) * 2020-05-08 2023-11-24 维沃移动通信有限公司 Information processing method and device and electronic equipment

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106796771A (en) * 2014-10-15 2017-05-31 精工爱普生株式会社 The method and computer program of head-mounted display apparatus, control head-mounted display apparatus
CN106303555A (en) * 2016-08-05 2017-01-04 深圳市豆娱科技有限公司 A kind of live broadcasting method based on mixed reality, device and system
CN107203266A (en) * 2017-05-17 2017-09-26 东莞市华睿电子科技有限公司 A kind of data processing method based on VR
CN111066042A * 2017-07-05 2020-04-24 Maria Francesca Jones Virtual conference participant response indication method and system
CN108804546A (en) * 2018-05-18 2018-11-13 维沃移动通信有限公司 A kind of clothing matching recommends method and terminal
CN110084891A (en) * 2019-04-16 2019-08-02 淮南师范学院 A kind of color adjustment method and AR glasses of AR glasses
CN110072116A (en) * 2019-05-06 2019-07-30 广州虎牙信息科技有限公司 Virtual newscaster's recommended method, device and direct broadcast server
CN110297684A (en) * 2019-06-28 2019-10-01 腾讯科技(深圳)有限公司 Theme display methods, device and storage medium based on virtual portrait

Also Published As

Publication number Publication date
CN111583415A (en) 2020-08-25
WO2021223724A1 (en) 2021-11-11

Similar Documents

Publication Publication Date Title
CN111583415B (en) Information processing method and device and electronic equipment
CN110519636B (en) Voice information playing method and device, computer equipment and storage medium
CN107392783B (en) Social contact method and device based on virtual reality
CN110401810B (en) Virtual picture processing method, device and system, electronic equipment and storage medium
CN109302628B (en) Live broadcast-based face processing method, device, equipment and storage medium
CN110413108B (en) Virtual picture processing method, device and system, electronic equipment and storage medium
EP4099709A1 (en) Data processing method and apparatus, device, and readable storage medium
US20010051535A1 (en) Communication system and communication method using animation and server as well as terminal device used therefor
CN113099298B (en) Method and device for changing virtual image and terminal equipment
CN116648729A (en) Head portrait display device, head portrait generation device, and program
CN110942501B (en) Virtual image switching method and device, electronic equipment and storage medium
CN110446000A (en) A kind of figural method and apparatus of generation dialogue
CN113066497A (en) Data processing method, device, system, electronic equipment and readable storage medium
CN115209180A (en) Video generation method and device
CN110794964A (en) Interaction method and device for virtual robot, electronic equipment and storage medium
CN110536095A (en) Call method, device, terminal and storage medium
CN113453027B (en) Live video and virtual make-up image processing method and device and electronic equipment
CN115049016A (en) Model driving method and device based on emotion recognition
CN111078005A (en) Virtual partner creating method and virtual partner system
CN114531564A (en) Processing method and electronic equipment
CN112866577A (en) Image processing method and device, computer readable medium and electronic equipment
CN115690281B (en) Role expression driving method and device, storage medium and electronic device
CN113079383B (en) Video processing method, device, electronic equipment and storage medium
CN116962742A (en) Live video image data transmission method, device and live video system
CN114425162A (en) Video processing method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant