WO2021223724A1 - Information processing method, device and electronic device - Google Patents

Information processing method, device and electronic device

Info

Publication number
WO2021223724A1
WO2021223724A1 · PCT/CN2021/091975 · CN2021091975W
Authority
WO
WIPO (PCT)
Prior art keywords
information
target
point cloud data
Application number
PCT/CN2021/091975
Other languages
English (en)
French (fr)
Inventor
康石长 (Kang Shichang)
Original Assignee
维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Application filed by 维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Publication of WO2021223724A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering

Definitions

  • This application belongs to the field of information processing, and specifically relates to an information processing method, device, and electronic equipment.
  • Augmented Reality (AR) is a technology that seamlessly integrates virtual-world information and real-world information.
  • AR is increasingly used in people's daily life and work.
  • AR conferences bring a realistic on-site meeting experience to participants in different spaces.
  • However, users' visual and auditory needs are still not well met; in AR technology, existing information processing solutions are not smart enough to meet user needs well.
  • The purpose of the embodiments of the present application is therefore to provide an information processing method, device, and electronic device that can solve this problem.
  • an embodiment of the present application provides an information processing method, which includes:
  • acquiring target point cloud data corresponding to a target object and environment information of a target space, where the target point cloud data includes image information of the target object;
  • adjusting the target point cloud data when the environment information and the image information meet a preset condition; and
  • generating a virtual image of the target object in the target space according to the adjusted target point cloud data.
  • an information processing device which includes:
  • the first acquisition module is configured to acquire target point cloud data corresponding to the target object and environmental information of the target space; wherein the target point cloud data includes the image information of the target object;
  • the first adjustment module is configured to adjust the target point cloud data when the environmental information and the image information meet preset conditions
  • the first generating module is configured to generate a virtual image of the target object in the target space according to the adjusted target point cloud data.
  • an embodiment of the present application provides an electronic device that includes a processor, a memory, and a program or instruction stored in the memory and runnable on the processor;
  • when the program or instruction is executed by the processor, the steps of the method described in the first aspect are implemented.
  • an embodiment of the present application provides a readable storage medium on which a program or instruction is stored; when the program or instruction is executed by a processor, the steps of the method described in the first aspect are implemented.
  • an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the method described in the first aspect.
  • the embodiments of the present application also provide a computer software product, where the computer software product is stored in a non-volatile storage medium and is configured to be executed by at least one processor to implement the steps of the method described in the first aspect.
  • an embodiment of the present application also provides an information processing device configured to execute the method described in the first aspect.
  • In the embodiments of the present application, target point cloud data corresponding to the target object and environment information of the target space are obtained, where the target point cloud data includes the image information of the target object; when the environment information and the image information meet a preset condition, the target point cloud data is adjusted; and a virtual image of the target object is generated in the target space according to the adjusted target point cloud data. In this way, a virtual image that is compatible with the environment information can be generated.
  • FIG. 1 is a schematic flowchart of an information processing method according to an embodiment of the present application
  • FIG. 2 is a schematic diagram 1 of a specific application flow of the information processing method according to an embodiment of the present application
  • FIG. 3 is a schematic diagram 2 of a specific application flow of the information processing method according to an embodiment of the present application
  • FIG. 4 is a schematic diagram of the structure of an information processing device according to an embodiment of the present application.
  • FIG. 5 is a first structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 6 is a second structural diagram of an electronic device according to an embodiment of the present application.
  • An information processing method provided by this application includes:
  • Step 11 Obtain target point cloud data corresponding to the target object and environmental information of the target space; wherein the target point cloud data includes the image information of the target object.
  • In step 11, the local device (that is, the electronic device that executes this solution, hereinafter referred to as the first AR device) may receive the target point cloud data sent by the peer device with which it performs AR communication (hereinafter referred to as the second AR device), and may scan its surroundings to obtain the environment information.
  • The application scenario is not limited to AR communication; it may also be a virtual reality (Virtual Reality, VR) communication scene, a mixed reality (Mixed Reality, MR) communication scene, or another scene involving virtual images, which is not limited here.
  • the environment information may include scene type, environment color, and so on.
  • the image information may include information such as color, hairstyle, face shape, body shape, and clothing.
  • Step 12 Adjust the target point cloud data when the environmental information and the image information meet preset conditions.
  • The preset condition may be at least one of: the color difference being too small, and the type not matching. Specifically, all or part of the target point cloud data may be adjusted.
  • Before the adjustment, an initial virtual image of the target object may be generated from the target point cloud data of step 11, for the user to interact with.
  • Step 13 Generate a virtual image of the target object in the target space according to the adjusted target point cloud data.
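Steps 11 to 13 above can be sketched as a minimal pipeline. The dict layout, the condition check, and the adjustment below are illustrative placeholders, not the application's actual implementation.

```python
# Hypothetical end-to-end sketch of steps 11-13: acquire data, conditionally
# adjust it, then hand it to reconstruction/rendering.

def process(cloud, env, condition_met, adjust):
    """Step 11 inputs -> step 12 conditional adjustment -> step 13 output."""
    if condition_met(env, cloud["image_info"]):   # step 12: check preset condition
        cloud = adjust(cloud, env)                # step 12: adjust point cloud data
    return {"virtual_image_from": cloud}          # step 13: stand-in for 3D rendering

result = process(
    {"points": [], "image_info": {"color": "red"}},
    {"background_color": "red"},
    # Toy condition: avatar color identical to the background triggers adjustment.
    condition_met=lambda env, img: env["background_color"] == img["color"],
    adjust=lambda cloud, env: {**cloud, "image_info": {"color": "blue"}},
)
```

Here the identical red avatar and background trigger the adjustment, so the virtual image is generated from the adjusted (blue) data.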
  • A virtual image can also be understood as a virtual picture or a virtual object (such as a virtual character).
  • the preset condition includes at least one of the following conditions: the difference between the background color information in the environment information and the color information in the image information is less than a preset value; the scene type information in the environment information does not match the clothing type information in the image information; and the scene type information in the environment information does not match the hairstyle information in the image information.
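A minimal sketch of the three preset conditions listed above. The RGB distance metric, the threshold value, and the scene/clothing/hairstyle compatibility tables are illustrative assumptions; the application does not specify them.

```python
# Hypothetical check of the three preset conditions described in the text.

def color_difference(rgb_a, rgb_b):
    """Euclidean distance between two RGB colors (0-255 per channel)."""
    return sum((a - b) ** 2 for a, b in zip(rgb_a, rgb_b)) ** 0.5

# Assumed compatibility tables: which clothing/hairstyle types "match" a scene.
SCENE_CLOTHING = {"conference": {"suit", "shirt"}, "outdoor": {"casual", "sportswear"}}
SCENE_HAIRSTYLE = {"conference": {"short", "tied"}, "outdoor": {"short", "long"}}

def preset_condition_met(env, image, color_threshold=60.0):
    """Return True if any of the three conditions holds, i.e. the target
    point cloud data should be adjusted."""
    too_similar = color_difference(env["background_color"], image["color"]) < color_threshold
    clothing_mismatch = image["clothing_type"] not in SCENE_CLOTHING.get(env["scene_type"], set())
    hairstyle_mismatch = image["hairstyle"] not in SCENE_HAIRSTYLE.get(env["scene_type"], set())
    return too_similar or clothing_mismatch or hairstyle_mismatch

# Background and avatar colors are nearly identical, so adjustment is triggered.
env = {"scene_type": "conference", "background_color": (200, 200, 205)}
image = {"color": (198, 202, 210), "clothing_type": "suit", "hairstyle": "short"}
```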
  • the adjusting the target point cloud data includes: adjusting the image information in the target point cloud data to obtain target image information that matches the environment information.
  • the method further includes: acquiring target audio data corresponding to the target object; adjusting the target audio data according to the image information in the adjusted target point cloud data; and playing the adjusted target audio data.
  • the image information includes at least one of face shape information, hairstyle information, body shape information, and clothing information; and adjusting the target audio data according to the image information in the adjusted target point cloud data includes: adjusting the sound quality characteristic data in the target audio data according to the image information in the adjusted target point cloud data.
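One way to read this step is that image features select timbre parameters for the audio adjustment. The feature-to-parameter mapping below is a hypothetical sketch; the application does not define concrete values.

```python
# Hypothetical mapping from the avatar's image information to sound-quality
# (timbre) parameters used to adjust the target audio data.

def sound_quality_params(image_info):
    """Derive pitch/formant adjustment factors from image features."""
    params = {"pitch_factor": 1.0, "formant_factor": 1.0}
    if image_info.get("body_shape") == "large":
        params["pitch_factor"] *= 0.85    # assumed: deeper voice for a larger figure
    if image_info.get("face_shape") == "round":
        params["formant_factor"] *= 1.1   # assumed: softer, rounder timbre
    if image_info.get("hairstyle") == "cartoon":
        params["pitch_factor"] *= 1.3     # assumed: playful, higher-pitched voice
    return params
```

A real system would feed these factors to a pitch/formant-shifting audio pipeline; here they only illustrate the image-to-sound dependency.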
  • In the following description, the target object is a target person, the virtual image is correspondingly an avatar, and the application scenario is an AR remote conference.
  • the embodiments of the present application provide an information processing method, which mainly involves the adjustment of point cloud data and audio data of the target object.
  • the solution provided in the embodiments of the present application will be illustrated below.
  • This example is based on an electronic device that performs AR communication. It changes the material colors of the avatar (such as skin color, clothing color, and hair color) according to the environment of the AR communication (specifically, the environment of the target space), providing an information processing method embodied as adapting the virtual character to the environment.
  • the scanning device corresponding to the second AR device scans the participants, and after data filtering and optimization, forms three-dimensional point cloud data of the characters.
  • the second AR device transmits the three-dimensional point cloud data to the first AR device via a network.
  • After receiving the three-dimensional point cloud data, the first AR device generates the virtual character through three-dimensional reconstruction and rendering; it then collects the image information of the virtual character's hair and clothing, as well as the tone style of the environment in which the first AR device is located (that is, the above-mentioned environment information). After analysis and comparison, the first AR device appropriately adjusts the material colors of the virtual character so that it better adapts to the environment, does not clash with the environment colors, and is easier to distinguish, bringing users a better visual experience during AR communication (such as an AR meeting or live broadcast).
  • the solution of this example may specifically include the following steps:
  • Step 21 The first AR device and the second AR device enter AR communication.
  • the user puts on AR glasses and other equipment and enters the AR meeting to ensure that all software and hardware systems are available, and the meeting is successfully connected and ready.
  • Step 22 The scanning device corresponding to the second AR device scans the participants to form 3D point cloud data.
  • the scanning device scans the participants.
  • The scanning device is calibrated so that the individual scans can be merged and aligned; after data filtering and optimization, three-dimensional point cloud data of the person is formed.
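The "data filtering and optimization" step can be sketched as statistical outlier removal: drop points that are unusually far from their neighbors. The brute-force distance computation and the `k`/`std_ratio` parameters are illustrative; a real system would use a spatial index such as a k-d tree.

```python
# Illustrative statistical outlier removal on a merged point cloud.
import numpy as np

def remove_outliers(points, k=4, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors is more
    than std_ratio standard deviations above the cloud-wide average."""
    diff = points[:, None, :] - points[None, :, :]   # pairwise offsets (N, N, 3)
    dist = np.sqrt((diff ** 2).sum(-1))              # pairwise distances (N, N)
    knn = np.sort(dist, axis=1)[:, 1:k + 1]          # skip self-distance 0
    mean_knn = knn.mean(axis=1)
    keep = mean_knn <= mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]

# Dense cluster plus one far-away stray point: the stray is removed.
cloud = np.vstack([np.random.default_rng(0).normal(0, 0.1, (50, 3)),
                   [[10.0, 10.0, 10.0]]])
filtered = remove_outliers(cloud)
```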
  • Step 23 The second AR device transmits the three-dimensional point cloud data to the first AR device via the network.
  • the second AR device transmits the three-dimensional point cloud data to the other end (the first AR device) of the AR conference or AR live broadcast via the network.
  • Step 24 The first AR device reconstructs the three-dimensional point cloud data and renders the virtual character.
  • The first AR device may continuously receive the three-dimensional point cloud data through a network protocol, use a 3D engine to perform three-dimensional reconstruction on the point cloud data, and render the reconstructed model so that the user of the first AR device can see the generated virtual character and interact with it.
  • Step 25 The first AR device collects image information such as hair and clothing of the avatar, and environmental information (such as color tone style) of the environment where the first AR device is located.
  • The first AR device collects color data of the avatar's hair, skin, clothing, and so on, and analyzes their colors and tones.
  • The first AR device can also collect and analyze the color data of the current environment, such as its hue style.
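The color analysis can be sketched as estimating a dominant hue from sampled pixels. Averaging hues this way ignores the circular wrap-around at red (hue 0/1), so it is only a simplification for illustration.

```python
# Illustrative dominant-hue estimation over sampled RGB pixels.
import colorsys

def dominant_hue(pixels):
    """Average HSV hue (0.0-1.0) over RGB pixels given in 0-255.
    Note: naive averaging; hues near the 0/1 wrap-around would need care."""
    hues = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0] for r, g, b in pixels]
    return sum(hues) / len(hues)

# Mostly-red environment samples give a hue near 0.
env_pixels = [(220, 30, 30), (200, 40, 35), (210, 45, 25)]
hue = dominant_hue(env_pixels)
```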
  • Step 26 The first AR device analyzes and compares the image information and the environment information (such as color tone style), and appropriately adjusts the material color of the virtual character.
  • The first AR device analyzes and compares the color tones of the virtual character and of the environment where the first AR device is located, and appropriately adjusts the material colors of the virtual character so that it better adapts to the environment and does not clash with the environment colors. This improves its recognizability and brings users a better visual experience during AR communication (such as an AR conference or live broadcast).
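The adjustment itself can be sketched as a hue rotation: if the avatar's material hue is too close to the environment's dominant hue, rotate it away. The 30-degree threshold and 60-degree shift below are arbitrary illustrative choices, not values from the application.

```python
# Illustrative material-color adjustment: rotate the hue away from the
# environment's dominant hue when the two are too close.
import colorsys

def adapt_color(material_rgb, env_hue, min_sep=30 / 360, shift=60 / 360):
    """Return an RGB color whose hue is at least min_sep away from env_hue."""
    r, g, b = (c / 255 for c in material_rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if min(abs(h - env_hue), 1 - abs(h - env_hue)) < min_sep:   # circular distance
        h = (h + shift) % 1.0                                   # rotate away
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

# Red clothing in a red-toned room gets shifted toward yellow.
new_rgb = adapt_color((220, 30, 30), env_hue=0.0)
```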
  • This example provides an information processing method based on an electronic device that performs AR communication, combined with the image information of the virtual character in the AR communication; it is specifically embodied as a method of voice changing and personalized sound quality.
  • The audio data collected by the microphone of the second AR device (that is, the target audio data) is transmitted to the first AR device via the network, and the second AR device also transmits the scanned image information of the avatar's personal appearance, such as face shape, hairstyle, body shape, and clothing, to the first AR device. After image analysis and big data matching, the first AR device changes the voice corresponding to the virtual character (that is, the target audio data) into a distinctive voice that matches its image, such as a dialect or a personalized cartoon voice. This improves its recognizability, brings users a more interesting and friendly voice experience during AR communication (such as an AR remote conference or live broadcast), and satisfies users' need for personalized sound quality.
  • the first AR device plays the processed audio, and the user can hear the personalized voice change.
  • the solution of this example may specifically include the following steps:
  • Step 31 The first AR device and the second AR device enter AR communication.
  • the user puts on AR glasses and other equipment and enters the AR meeting to ensure that all software and hardware systems are available, and the meeting is successfully connected and ready.
  • Step 32 The microphone of the second AR device collects user audio data and transmits it to the first AR device via the network; and the second AR device also transmits the image information of the avatar obtained by scanning to the first AR device.
  • The user starts to speak at the conference; the microphone collects the user's audio data, which enters the software system for processing and is then transmitted to the first AR device.
  • the second AR device also transmits the image information of the avatar's face, hairstyle, body shape, clothing, and other personal images obtained through the scan to the first AR device.
  • Step 33 The first AR device performs image analysis and big data matching on the personal image of the virtual character.
  • Based on the matching, the first AR device finds a personalized sound template, such as a dialect or cartoon voice, that matches the user's personal image.
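The "big data matching" can be sketched as a nearest-template lookup: score each voice template against the avatar's image tags and choose the best. The template library and tag scheme below are hypothetical.

```python
# Hypothetical voice-template library and nearest-template matching.
VOICE_TEMPLATES = {
    "cartoon_high": {"hairstyle": "cartoon", "body_shape": "small"},
    "deep_narrator": {"hairstyle": "short", "body_shape": "large"},
    "dialect_friendly": {"hairstyle": "long", "body_shape": "medium"},
}

def match_template(image_info):
    """Pick the template sharing the most tag values with the image info."""
    def score(tags):
        return sum(image_info.get(k) == v for k, v in tags.items())
    return max(VOICE_TEMPLATES, key=lambda name: score(VOICE_TEMPLATES[name]))

chosen = match_template({"hairstyle": "short", "body_shape": "large"})
```

A production system would replace the exact tag comparison with learned similarity over many more features; this only illustrates the selection step.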
  • Step 34 The first AR device converts the voice corresponding to the virtual character (that is, the aforementioned target audio data) into a distinctive voice such as a dialect or a cartoon character that matches the image.
  • The voice corresponding to the virtual character is changed into a distinctive voice that matches its image, such as a dialect or the personalized sound quality of a cartoon character. This improves its recognizability and brings users a more interesting and friendly voice experience during AR communication (such as an AR remote conference or live broadcast), meeting users' need for personalized sound quality.
  • Step 35 The first AR device plays the changed voice.
  • The first AR device plays the voice-changed audio data through the system, and the user hears the personalized, voice-changed sound.
  • This example provides users with a voice-changing, personalized sound-quality service for the AR remote conference scene. In AR communication, the first AR device uses the personal image transmitted by the second AR device, including the virtual character's face shape, hairstyle, body shape, and clothing; through image analysis and big data matching, it changes the user's voice into a distinctive voice that matches the image, such as a dialect or a personalized cartoon voice, bringing users a more interesting and friendly experience in AR communication and meeting their need for personalized sound quality.
  • This application can also appropriately change the material colors of the virtual character by analyzing environment information such as the color tone of the environment where the first AR device is located, so that the character better adapts to the environment, does not clash with the environment colors, and is easier to recognize, bringing users a better visual experience in AR communication. In addition, it can combine the image of the virtual character to provide a personalized, voice-changed auditory experience, providing dynamic, personalized, and friendly voices for AR remote conferences, AR live broadcasts, AR remote assistance, and so on, enhancing the appeal and fun of the entire system.
  • This application is not limited to changing voices into dialects or cartoon character voices; it also supports various voice-changing schemes such as animal sounds and natural onomatopoeia.
  • This application is not limited to AR remote conference usage scenarios, but also supports AR remote video, AR live broadcast, AR teaching, AR training and other application scenarios.
  • This application is not limited to AR equipment; electronic devices such as VR devices, MR (mixed reality) devices, mobile phone terminals, and simulation cabins can all apply this scheme.
  • In summary, the information processing method provided by the embodiments of the present application obtains target point cloud data corresponding to the target object and environment information of the target space, where the target point cloud data includes the image information of the target object; adjusts the target point cloud data when the environment information and the image information meet a preset condition; and generates a virtual image of the target object in the target space according to the adjusted target point cloud data. It can thus generate a virtual image that is compatible with the environment information and avoid conflicts between the virtual image and the environment, better meeting user needs, improving the intelligence of information processing, and solving the problem that existing information processing solutions are not smart enough to meet user needs well.
  • The execution subject of the information processing method provided by the embodiments of the present application may be an information processing device, or a control module in the information processing device for executing the information processing method.
  • Here, the information processing method being executed by an information processing device is taken as an example to illustrate the information processing method provided in the embodiments of this application.
  • An embodiment of the present application also provides an information processing device, as shown in FIG. 4, including:
  • the first obtaining module 41 is configured to obtain target point cloud data corresponding to the target object and environmental information of the target space; wherein the target point cloud data includes the image information of the target object;
  • the first adjustment module 42 is configured to adjust the target point cloud data when the environmental information and the image information meet preset conditions
  • the first generating module 43 is configured to generate a virtual image of the target object in the target space according to the adjusted target point cloud data.
  • the preset condition includes at least one of the following conditions: the difference between the background color information in the environment information and the color information in the image information is less than a preset value; the scene type information in the environment information does not match the clothing type information in the image information; and the scene type information in the environment information does not match the hairstyle information in the image information.
  • the first adjustment module includes: a first adjustment sub-module for adjusting the image information in the target point cloud data to obtain target image information that matches the environmental information.
  • the information processing device further includes: a second obtaining module, configured to obtain target audio data corresponding to the target object after the target point cloud data is adjusted when the environment information and the image information meet the preset condition; and a first processing module, configured to adjust the target audio data according to the image information in the adjusted target point cloud data and play the adjusted target audio data.
  • the image information includes: at least one of face shape information, hairstyle information, body shape information, and clothing information;
  • the first processing module includes: a second adjustment sub-module, configured to adjust the sound quality feature data in the target audio data according to the image information in the adjusted target point cloud data.
  • the information processing device in the embodiments of the present application may be a device, or a component, integrated circuit, or chip in a terminal.
  • the device can be a mobile electronic device or a non-mobile electronic device.
  • the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a handheld computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or the like.
  • non-mobile electronic devices may be servers, network attached storage (NAS), personal computers (PC), televisions (TV), teller machines, self-service machines, and the like.
  • the information processing device in the embodiment of the present application may be a device with an operating system.
  • the operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
  • the information processing device provided by the embodiments of the present application can implement each process implemented by the information processing device in the method embodiments of FIG. 1 to FIG. 3; to avoid repetition, details are not repeated here.
  • The information processing device obtains the target point cloud data corresponding to the target object and the environment information of the target space, where the target point cloud data includes the image information of the target object; adjusts the target point cloud data when the environment information and the image information meet a preset condition; and generates a virtual image of the target object in the target space according to the adjusted target point cloud data. It can thus generate a virtual image compatible with the environment information, avoid conflicts between the virtual image and the environment, better meet user needs, and improve the intelligence of information processing.
  • an embodiment of the present application further provides an electronic device, as shown in FIG. 5, including a processor 51, a memory 52, and a program or instruction stored in the memory 52 and runnable on the processor 51. When the program or instruction is executed by the processor 51, each process of the foregoing information processing method embodiment is realized and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
  • the electronic devices in the embodiments of the present application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 6 is a schematic diagram of the hardware structure of an electronic device that implements an embodiment of the present application.
  • the electronic device 60 includes but is not limited to: a radio frequency unit 61, a network module 62, an audio output unit 63, an input unit 64, a sensor 65, a display unit 66, a user input unit 67, an interface unit 68, a memory 69, a processor 610, and other components.
  • the electronic device 60 may also include a power source (such as a battery) for supplying power to various components.
  • the power source may be logically connected to the processor 610 through a power management system, so that the power management system can manage functions such as charging, discharging, and power consumption.
  • the structure of the electronic device shown in FIG. 6 does not constitute a limitation on the electronic device.
  • the electronic device may include more or fewer components than shown in the figure, combine some components, or use a different arrangement of components, which will not be repeated here.
  • the radio frequency unit 61 is configured to obtain target point cloud data corresponding to the target object; wherein the target point cloud data includes image information of the target object;
  • the input unit 64 is used to obtain environmental information of the target space
  • the processor 610 is configured to adjust the target point cloud data when the environment information and the image information meet preset conditions; generate the target point cloud data in the target space according to the adjusted target point cloud data A virtual image of the target object.
  • The target point cloud data corresponding to the target object and the environment information of the target space are obtained, where the target point cloud data includes the image information of the target object; when the environment information and the image information meet a preset condition, the target point cloud data is adjusted; and a virtual image of the target object is generated in the target space according to the adjusted target point cloud data. This generates a virtual image that is compatible with the environment information, avoids conflicts between the virtual image and the environment, better meets user needs, improves the intelligence of information processing, and solves the problem that existing information processing solutions are not smart enough to meet user needs.
  • the preset condition includes at least one of the following conditions: the difference between the background color information in the environment information and the color information in the image information is less than a preset value; the scene type information in the environment information does not match the clothing type information in the image information; and the scene type information in the environment information does not match the hairstyle information in the image information.
  • the processor 610 is specifically configured to adjust the image information in the target point cloud data to obtain target image information that matches the environment information.
  • the radio frequency unit 61 is further configured to obtain target audio data corresponding to the target object after adjusting the target point cloud data when the environmental information and the image information meet preset conditions;
  • the processor 610 is further configured to adjust the target audio data according to the image information in the adjusted target point cloud data, and use the audio output unit 63 to play the adjusted target audio data.
  • the image information includes: at least one of face shape information, hairstyle information, body shape information, and clothing information;
  • the processor 610 is specifically configured to adjust the sound quality feature data in the target audio data according to the image information in the adjusted target point cloud data.
  • The solution provided by the embodiments of this application can also appropriately change the material colors of the virtual character by analyzing the environment information of the environment where the local device is located, so that the character better adapts to the environment, does not clash with the environment colors, and is easier to recognize, bringing users a better visual experience in AR communication. In addition, it can combine the image of the virtual character to provide a personalized, voice-changed auditory experience, providing dynamic, personalized, and friendly voices for the user's AR remote conferences, AR live broadcasts, AR remote assistance, and so on, enhancing the appeal and fun of the entire system.
  • The embodiments of the present application also provide a readable storage medium with a program or instruction stored on it. When the program or instruction is executed by a processor, each process of the foregoing information processing method embodiment is realized and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
  • the processor is the processor in the electronic device described in the foregoing embodiment.
  • the readable storage medium includes a computer-readable storage medium, such as a computer Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
  • An embodiment of the present application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is used to run a program or an instruction to implement the information processing method in the foregoing embodiment of the information processing method.
  • the chips mentioned in the embodiments of the present application may also be referred to as system-level chips, system chips, chip systems, or system-on-chip chips.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the technical solution of this application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and includes several instructions to cause a terminal (which can be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the methods described in the various embodiments of the present application.
  • modules, units, and sub-units can be implemented in one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Digital Signal Processing Devices (DSPD), Programmable Logic Devices (PLD), Field-Programmable Gate Arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units used to implement the functions described in this disclosure, or a combination thereof.
  • the technology described in the embodiments of the present disclosure can be implemented by modules (for example, procedures, functions, etc.) that perform the functions described in the embodiments of the present disclosure.
  • the software codes can be stored in the memory and executed by the processor.
  • the memory can be implemented in the processor or external to the processor.


Abstract

An information processing method and apparatus, and an electronic device, belonging to the field of information processing. The information processing method includes: obtaining target point cloud data corresponding to a target object and environment information of a target space, where the target point cloud data includes image information of the target object (11); adjusting the target point cloud data in a case where the environment information and the image information meet a preset condition (12); and generating a virtual image of the target object in the target space according to the adjusted target point cloud data (13).

Description

Information processing method and apparatus, and electronic device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 202010390278.X, filed in China on May 8, 2020, the entire contents of which are incorporated herein by reference.
Technical field
This application belongs to the field of information processing, and specifically relates to an information processing method, an apparatus, and an electronic device.
Background
Augmented Reality (AR) is a technology that seamlessly fuses virtual world information with real world information. With the rapid development of AR technology, AR is increasingly widely applied in people's daily life and work, for example, AR conferences. An AR conference gives participants located in different spaces the experience of a real on-site conference. However, existing AR conference scenarios do not satisfy users' visual and auditory needs well. Therefore, in AR technology, existing information processing solutions still have the problem of being insufficiently intelligent and unable to satisfy user needs well.
Summary
The purpose of the embodiments of this application is to provide an information processing method, an apparatus, and an electronic device, which can solve the problem that existing information processing solutions are not intelligent enough and cannot satisfy user needs well.
To solve the above technical problem, this application is implemented as follows:
In a first aspect, an embodiment of this application provides an information processing method, including:
obtaining target point cloud data corresponding to a target object and environment information of a target space, where the target point cloud data includes image information of the target object;
adjusting the target point cloud data in a case where the environment information and the image information meet a preset condition; and
generating a virtual image of the target object in the target space according to the adjusted target point cloud data.
In a second aspect, an embodiment of this application provides an information processing apparatus, including:
a first obtaining module, configured to obtain target point cloud data corresponding to a target object and environment information of a target space, where the target point cloud data includes image information of the target object;
a first adjusting module, configured to adjust the target point cloud data in a case where the environment information and the image information meet a preset condition; and
a first generating module, configured to generate a virtual image of the target object in the target space according to the adjusted target point cloud data.
In a third aspect, an embodiment of this application provides an electronic device, including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method according to the first aspect.
In a fourth aspect, an embodiment of this application provides a readable storage medium storing a program or instruction, where the program or instruction, when executed by a processor, implements the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of this application provides a chip, including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the method according to the first aspect.
In a sixth aspect, an embodiment of this application further provides a computer software product stored in a non-volatile storage medium, where the software product is configured to be executed by at least one processor to implement the steps of the method according to the first aspect.
In a seventh aspect, an embodiment of this application further provides an information processing apparatus configured to execute the method according to the first aspect.
In the embodiments of this application, target point cloud data corresponding to a target object and environment information of a target space are obtained, where the target point cloud data includes image information of the target object; the target point cloud data is adjusted in a case where the environment information and the image information meet a preset condition; and a virtual image of the target object is generated in the target space according to the adjusted target point cloud data. This makes it possible to generate a virtual image adapted to the environment information and to avoid conflicts between the virtual image and the environment, thereby better satisfying user needs and improving the intelligence of information processing, which well solves the problem that existing information processing solutions are not intelligent enough and cannot satisfy user needs well.
Brief description of the drawings
Fig. 1 is a schematic flowchart of an information processing method according to an embodiment of this application;
Fig. 2 is a first schematic flowchart of a specific application of the information processing method according to an embodiment of this application;
Fig. 3 is a second schematic flowchart of a specific application of the information processing method according to an embodiment of this application;
Fig. 4 is a schematic structural diagram of an information processing apparatus according to an embodiment of this application;
Fig. 5 is a first schematic structural diagram of an electronic device according to an embodiment of this application;
Fig. 6 is a second schematic structural diagram of an electronic device according to an embodiment of this application.
Detailed description of the embodiments
The technical solutions in the embodiments of this application will be described clearly and completely below in conjunction with the accompanying drawings of the embodiments of this application. Obviously, the described embodiments are part, rather than all, of the embodiments of this application. Based on the embodiments in this application, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of this application.
The terms "first", "second", and the like in the specification and claims of this application are used to distinguish similar objects, and are not used to describe a specific order or sequence. It should be understood that the data used in this way are interchangeable under appropriate circumstances, so that the embodiments of this application can be implemented in orders other than those illustrated or described here. In addition, "and/or" in the specification and claims indicates at least one of the connected objects, and the character "/" generally indicates that the associated objects are in an "or" relationship.
The information processing method provided by the embodiments of this application is described in detail below through specific embodiments and their application scenarios, in conjunction with the accompanying drawings.
An information processing method provided by this application, as shown in Fig. 1, includes:
Step 11: obtain target point cloud data corresponding to a target object and environment information of a target space, where the target point cloud data includes image information of the target object.
Taking an AR communication scenario as an example, specifically, step 11 may be receiving the target point cloud data sent by a peer device (i.e., a second AR device communicating with the first AR device, hereinafter referred to as the second AR device) performing AR communication with the local device (i.e., the electronic device executing this solution, hereinafter referred to as the first AR device), and obtaining the environment information by scanning with the local device. Of course, the scenario may also be a Virtual Reality (VR) communication scenario, a Mixed Reality (MR) communication scenario, or another scenario involving virtual images, which is not limited here.
The environment information may include a scene type, an environment color, and the like. The image information may include information such as color, hairstyle, face shape, body shape, and clothing.
Step 12: adjust the target point cloud data in a case where the environment information and the image information meet a preset condition.
The preset condition may be at least one of the difference being too small and the types not matching. Specifically, all or part of the target point cloud data may be adjusted.
Before step 12, of course, an initial virtual image of the target object may first be generated according to the target point cloud data of step 11, for user interaction.
Step 13: generate a virtual image of the target object in the target space according to the adjusted target point cloud data.
That is, the virtual image of the target object is presented in the target space, and the virtual image generated in step 13 matches, and does not conflict with, the environment information. The virtual image may also be understood as a virtual figure or a virtual object (such as a virtual character).
Specifically, the preset condition includes at least one of the following conditions: a difference between background color information in the environment information and color information in the image information is smaller than a preset value; scene type information in the environment information does not match clothing type information in the image information; scene type information in the environment information does not match hairstyle information in the image information.
This can ensure, as far as possible, that a virtual image matching the environment information is obtained.
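The preset conditions above can be illustrated with a minimal sketch. The field names, the color representation, and the threshold value below are assumptions chosen for illustration only; they are not specified by this application.

```python
# Hypothetical check of the three preset conditions described above.
# Field names (background_color, scene_type, ...) and the threshold of 60
# are illustrative assumptions, not part of the patent's disclosure.

def color_distance(c1, c2):
    """Euclidean distance between two RGB colors."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def needs_adjustment(env, appearance, color_threshold=60.0):
    """Return True if any preset condition is met, i.e. adjustment is needed."""
    # Condition 1: background color too close to the avatar's color.
    if color_distance(env["background_color"], appearance["color"]) < color_threshold:
        return True
    # Condition 2: scene type does not match the clothing type.
    if env["scene_type"] not in appearance["suitable_scenes_for_clothing"]:
        return True
    # Condition 3: scene type does not match the hairstyle.
    if env["scene_type"] not in appearance["suitable_scenes_for_hairstyle"]:
        return True
    return False

env = {"background_color": (200, 200, 200), "scene_type": "meeting"}
appearance = {
    "color": (205, 198, 202),  # nearly the same as the background
    "suitable_scenes_for_clothing": {"meeting", "live"},
    "suitable_scenes_for_hairstyle": {"meeting"},
}
print(needs_adjustment(env, appearance))  # colors too close -> True
```

Any one condition firing is enough to trigger the adjustment of step 12, which mirrors the "at least one of the following conditions" wording above.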
The adjusting the target point cloud data includes: adjusting the image information in the target point cloud data to obtain target image information that matches the environment information.
This can reduce the amount of data processing while ensuring that a virtual image adapted to the environment information is obtained.
Further, after the target point cloud data is adjusted in the case where the environment information and the image information meet the preset condition, the method further includes: obtaining target audio data corresponding to the target object; adjusting the target audio data according to the image information in the adjusted target point cloud data, and playing the adjusted target audio data.
This can make the voice of the virtual image more recognizable and improve the user experience.
Specifically, the image information includes at least one of face shape information, hairstyle information, body shape information, and clothing information; the adjusting the target audio data according to the image information in the adjusted target point cloud data includes: adjusting sound quality feature data in the target audio data according to the image information in the adjusted target point cloud data.
This can reduce the amount of data processing while improving voice recognizability.
The information processing method provided by the embodiments of this application is further described below, taking a target person as an example of the target object, a virtual character as the corresponding virtual image, and an AR remote conference as the application scenario.
In view of the above technical problems, the embodiments of this application provide an information processing method that mainly involves adjusting the point cloud data and the audio data of the target object. The solutions provided by the embodiments of this application are illustrated with examples below.
Example one:
Based on electronic devices performing AR communication, this example changes the material colors of the virtual character (such as skin color, clothing color, and hair color) according to the environment in which the AR communication takes place (specifically, the environment of the target space), providing an information processing method embodied as a method for adapting a virtual character to the environment.
During AR communication, the scanning device corresponding to the above second AR device scans the participant and, after data filtering and optimization, forms three-dimensional point cloud data of the person.
The above second AR device transmits the three-dimensional point cloud data to the above first AR device over the network.
After receiving the three-dimensional point cloud data, the first AR device generates a virtual character through three-dimensional reconstruction and rendering; it also collects image information such as the hair and clothing of the virtual character, as well as the color tone and style of the environment in which the first AR device is located (i.e., the above environment information). After analysis and comparison, the first AR device appropriately adjusts the material colors of the virtual character so that it better adapts to the environment, does not conflict with the environment colors, and is more recognizable, giving the user a better visual experience during AR communication (such as an AR conference or live broadcast).
As shown in Fig. 2, the solution of this example may specifically include the following steps:
Step 21: the first AR device and the second AR device enter AR communication.
Specifically, for example, the user puts on a device such as AR glasses and enters an AR conference, ensuring that all software and hardware systems are available, the conference is connected successfully, and everything is ready.
Step 22: the scanning device corresponding to the second AR device scans the participant to form three-dimensional point cloud data.
Specifically, the scanning device scans the participant. In the case of multiple scanning devices, the devices are calibrated so that each piece of scanned data is fused and aligned; after data filtering and optimization, three-dimensional point cloud data of the person is formed.
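The fuse-and-align step for multiple calibrated scanners can be sketched as follows. For brevity the sketch uses a 2D rotation plus translation per scanner; a real system would use full 3D extrinsic calibration, which this application assumes but does not detail. All poses and point values below are invented for illustration.

```python
# Illustrative sketch: map each scanner's point cloud into a common frame
# using its calibrated pose, then merge. 2D stands in for the 3D case.
import math

def transform(points, angle_deg, tx, ty):
    """Rotate points about the origin by angle_deg, then translate by (tx, ty)."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(x * cos_a - y * sin_a + tx, x * sin_a + y * cos_a + ty)
            for x, y in points]

def fuse(clouds_with_poses):
    """Merge per-scanner clouds after mapping each into the common frame."""
    fused = []
    for points, (angle, tx, ty) in clouds_with_poses:
        fused.extend(transform(points, angle, tx, ty))
    return fused

scanner_a = [(0.0, 0.0), (1.0, 0.0)]  # already in the common frame
scanner_b = [(0.0, 0.0), (0.0, 1.0)]  # rotated 90 degrees relative to A
fused = fuse([(scanner_a, (0, 0.0, 0.0)), (scanner_b, (-90, 0.0, 1.0))])
print(len(fused))  # 4 points, now in one coordinate frame
```

After fusion, the filtering and optimization mentioned above (for example outlier removal and downsampling) would run on the merged cloud before it is transmitted.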
Step 23: the second AR device transmits the three-dimensional point cloud data to the first AR device over the network.
Specifically, for example, the second AR device transmits the three-dimensional point cloud data over the network to the other end of the AR conference or AR live broadcast (the first AR device).
Step 24: the first AR device reconstructs the three-dimensional point cloud data and renders the virtual character.
Specifically, after continuously receiving the three-dimensional point cloud data through a network protocol, the first AR device may use a three-dimensional engine to perform three-dimensional reconstruction on the point cloud data and render the reconstructed model, so that the user of the first AR device sees the generated virtual character and can interact with it.
Step 25: the first AR device collects image information such as the hair and clothing of the virtual character, as well as environment information (such as the color tone and style) of the environment in which the first AR device is located.
Specifically, the first AR device collects color data of the virtual character's hair, skin, clothing, and so on, and analyzes its colors and tones; on the other hand, the first AR device may also collect color data of the current environment and analyze its tone and style.
Step 26: the first AR device analyzes and compares the image information with the environment information (such as the color tone and style), and appropriately adjusts the material colors of the virtual character.
Specifically, the first AR device analyzes and compares the color tones of the virtual character and of the environment in which the first AR device is located, and appropriately adjusts the material colors of the virtual character so that it better adapts to the environment, does not conflict with the environment colors, and is more recognizable, giving the user a better visual experience during AR communication (such as an AR conference or live broadcast).
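One way the "appropriately adjust the material colors" step could work is to nudge the avatar's lightness away from the environment's when the two are too close. This is a minimal sketch under assumed thresholds; the patent does not prescribe a specific color-adjustment rule.

```python
# Hypothetical material-color adjustment: if the avatar's lightness is too
# close to the environment's, shift it away. min_delta=0.15 is an assumption.
import colorsys

def adjust_material_color(avatar_rgb, env_rgb, min_delta=0.15):
    """Return avatar_rgb, with its lightness pushed away from env_rgb if needed."""
    ah, al, asat = colorsys.rgb_to_hls(*(c / 255 for c in avatar_rgb))
    _, el, _ = colorsys.rgb_to_hls(*(c / 255 for c in env_rgb))
    if abs(al - el) >= min_delta:
        return avatar_rgb  # already distinguishable, leave unchanged
    # Move lightness away from the environment color, clamped to [0, 1].
    new_l = min(1.0, al + min_delta) if al >= el else max(0.0, al - min_delta)
    r, g, b = colorsys.hls_to_rgb(ah, new_l, asat)
    return tuple(round(c * 255) for c in (r, g, b))

print(adjust_material_color((128, 128, 128), (120, 120, 120)))  # → (166, 166, 166)
```

Because only lightness is shifted while hue and saturation are kept, the avatar stays visually similar to the transmitted appearance while becoming easier to distinguish from the background.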
Example two:
Based on electronic devices performing AR communication, this example provides an information processing method that combines the image information of the virtual character in the AR communication, embodied as a method of voice changing with personalized sound quality.
During AR communication, the audio data collected by the microphone of the above second AR device (i.e., the above target audio data) is transmitted to the first AR device over the network, and the second AR device also transmits, to the first AR device, the image information of the virtual character's personal image obtained by scanning, such as the face shape, hairstyle, body shape, and clothing. Through image analysis and big data matching, the first AR device changes the voice corresponding to the virtual character (i.e., the above target audio data) into a distinctive voice matching its image, such as a dialect or the personalized sound quality of a cartoon character, improving its recognizability, bringing the user a more interesting and friendly voice experience during AR communication (such as an AR remote conference or live broadcast), and meeting the user's need for personalized sound quality.
The first AR device plays the processed audio, and the user can hear the personalized voice after voice changing.
As shown in Fig. 3, the solution of this example may specifically include the following steps:
Step 31: the first AR device and the second AR device enter AR communication.
Specifically, for example, the user puts on a device such as AR glasses and enters an AR conference, ensuring that all software and hardware systems are available, the conference is connected successfully, and everything is ready.
Step 32: the microphone of the second AR device collects the user's audio data and transmits it to the first AR device over the network; and the second AR device also transmits, to the first AR device, the image information of the virtual character obtained by scanning.
Specifically, the user starts speaking to address the conference; the microphone collects the user's audio data, which enters the software system for processing and is then transmitted to the first AR device. In addition, the second AR device also transmits, to the first AR device, the image information of the virtual character's personal image obtained by scanning, such as the face shape, hairstyle, body shape, and clothing.
Step 33: the first AR device performs image analysis and data matching in combination with the image of the virtual character (that is, it analyzes the personal image of the virtual character and performs image analysis and big data matching).
Specifically, through image analysis and big data matching, the first AR device finds a personalized sound quality voice template of a dialect or cartoon character that matches the user's personal image.
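The matching step could be sketched as scoring a small template library against the extracted appearance attributes and picking the best match. The template names and attribute values below are invented for illustration; a real system would match against a much larger, data-driven library.

```python
# Hypothetical "image analysis + matching": score voice templates against the
# avatar's appearance attributes and pick the highest-scoring one.
VOICE_TEMPLATES = {
    "cartoon_high": {"face": "round", "hair": "short", "build": "small"},
    "dialect_warm": {"face": "square", "hair": "long", "build": "tall"},
}

def match_voice_template(appearance):
    """Pick the template sharing the most attributes with the avatar."""
    def score(template):
        return sum(appearance.get(k) == v for k, v in template.items())
    return max(VOICE_TEMPLATES, key=lambda name: score(VOICE_TEMPLATES[name]))

avatar = {"face": "round", "hair": "short", "build": "tall"}
print(match_voice_template(avatar))  # → cartoon_high (2 matching attributes)
```

The chosen template then drives step 34: the sound quality feature data of the incoming audio is adjusted toward the template's voice characteristics.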
Step 34: the first AR device changes the voice corresponding to the virtual character (i.e., the above target audio data) into a distinctive voice matching the image, such as a dialect or the personalized sound quality of a cartoon character.
Specifically, the voice corresponding to the virtual character is changed into a distinctive voice matching its image, such as a dialect or the personalized sound quality of a cartoon character, improving its recognizability, bringing the user a more interesting and friendly voice experience during AR communication (such as an AR remote conference or live broadcast), and meeting the user's need for personalized sound quality.
Step 35: the first AR device plays the changed voice.
Specifically, the first AR device plays the voice-changed audio data through the system, and the user can hear the personalized voice after voice changing.
It can be seen from the above that this example provides the user with a voice-changing personalized sound quality service for the AR remote conference scenario. In AR communication, the first AR device, based on the personal image of the virtual character transmitted by the second AR device, such as the face shape, hairstyle, body shape, and clothing, changes the user's voice, through image analysis and big data matching, into a distinctive voice matching that image, such as a dialect or the personalized sound quality of a cartoon character, bringing the user a more interesting and friendly voice experience during AR communication and meeting the user's need for personalized sound quality.
In summary, in addition to providing the user with a traditional visual experience, this application can also appropriately change the material colors of the virtual character by analyzing environment information such as the color tones of the environment in which the first AR device is located, so that the character better adapts to the environment, does not conflict with the environment colors, and is more recognizable, giving consumers a better visual experience during AR communication. In addition, it can also provide a personalized auditory experience after voice changing in combination with the image of the virtual character, providing dynamic, personalized, and friendly voices for the user's AR remote conferences, AR live broadcasts, AR remote assistance, and so on, enhancing the appeal and fun of the entire system.
It should be noted that this application is not limited to changing the voice into a dialect or a cartoon character's voice; it also supports various voice-changing schemes such as animal sounds and natural onomatopoeia. This application is not limited to the AR remote conference scenario; it also supports application scenarios such as AR remote video, AR live broadcast, AR teaching, and AR training. This application is not limited to AR devices; electronic devices such as VR devices, MR (Mixed Reality) devices, mobile phone terminals, and simulation cabins can all apply the solution of this application.
Through the information processing method provided by the embodiments of this application, target point cloud data corresponding to a target object and environment information of a target space are obtained, where the target point cloud data includes image information of the target object; the target point cloud data is adjusted in a case where the environment information and the image information meet a preset condition; and a virtual image of the target object is generated in the target space according to the adjusted target point cloud data. This makes it possible to generate a virtual image adapted to the environment information and to avoid conflicts between the virtual image and the environment, thereby better satisfying user needs and improving the intelligence of information processing, which well solves the problem that existing information processing solutions are not intelligent enough and cannot satisfy user needs well.
It should be noted that the information processing method provided by the embodiments of this application may be executed by an information processing apparatus, or by a control module in the information processing apparatus for executing and loading the information processing method. In the embodiments of this application, the information processing method provided by the embodiments of this application is described by taking the information processing apparatus executing and loading the information processing method as an example.
An embodiment of this application further provides an information processing apparatus, as shown in Fig. 4, including:
a first obtaining module 41, configured to obtain target point cloud data corresponding to a target object and environment information of a target space, where the target point cloud data includes image information of the target object;
a first adjusting module 42, configured to adjust the target point cloud data in a case where the environment information and the image information meet a preset condition; and
a first generating module 43, configured to generate a virtual image of the target object in the target space according to the adjusted target point cloud data.
The preset condition includes at least one of the following conditions: a difference between background color information in the environment information and color information in the image information is smaller than a preset value; scene type information in the environment information does not match clothing type information in the image information; scene type information in the environment information does not match hairstyle information in the image information.
Specifically, the first adjusting module includes: a first adjusting submodule, configured to adjust the image information in the target point cloud data to obtain target image information that matches the environment information.
Further, the information processing apparatus further includes: a second obtaining module, configured to obtain target audio data corresponding to the target object after the target point cloud data is adjusted in the case where the environment information and the image information meet the preset condition; and a first processing module, configured to adjust the target audio data according to the image information in the adjusted target point cloud data and play the adjusted target audio data.
Specifically, the image information includes at least one of face shape information, hairstyle information, body shape information, and clothing information; the first processing module includes: a second adjusting submodule, configured to adjust sound quality feature data in the target audio data according to the image information in the adjusted target point cloud data.
The information processing apparatus in the embodiments of this application may be an apparatus, or a component, integrated circuit, or chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. Exemplarily, the mobile electronic device may be a mobile phone, a tablet computer, a laptop computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine, which are not specifically limited in the embodiments of this application.
The information processing apparatus in the embodiments of this application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of this application.
The information processing apparatus provided by the embodiments of this application can implement each process implemented by the information processing apparatus in the method embodiments of Fig. 1 to Fig. 3; to avoid repetition, details are not repeated here.
Through the information processing apparatus provided by the embodiments of this application, target point cloud data corresponding to a target object and environment information of a target space are obtained, where the target point cloud data includes image information of the target object; the target point cloud data is adjusted in a case where the environment information and the image information meet a preset condition; and a virtual image of the target object is generated in the target space according to the adjusted target point cloud data. This makes it possible to generate a virtual image adapted to the environment information and to avoid conflicts between the virtual image and the environment, thereby better satisfying user needs and improving the intelligence of information processing, which well solves the problem that existing information processing solutions are not intelligent enough and cannot satisfy user needs well.
Optionally, an embodiment of this application further provides an electronic device, as shown in Fig. 5, including a processor 51, a memory 52, and a program or instruction stored in the memory 52 and executable on the processor 51. When the program or instruction is executed by the processor 51, each process of the above information processing method embodiments is implemented, with the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiments of this application includes the above-mentioned mobile electronic devices and non-mobile electronic devices.
Fig. 6 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of this application.
The electronic device 60 includes but is not limited to components such as a radio frequency unit 61, a network module 62, an audio output unit 63, an input unit 64, a sensor 65, a display unit 66, a user input unit 67, an interface unit 68, a memory 69, and a processor 610.
Those skilled in the art can understand that the electronic device 60 may further include a power supply (such as a battery) that supplies power to each component. The power supply may be logically connected to the processor 610 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The electronic device structure shown in Fig. 6 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different component arrangement, which is not repeated here.
The radio frequency unit 61 is configured to obtain target point cloud data corresponding to a target object, where the target point cloud data includes image information of the target object;
the input unit 64 is configured to obtain environment information of a target space;
the processor 610 is configured to adjust the target point cloud data in a case where the environment information and the image information meet a preset condition, and generate a virtual image of the target object in the target space according to the adjusted target point cloud data.
In the embodiments of this application, target point cloud data corresponding to a target object and environment information of a target space are obtained, where the target point cloud data includes image information of the target object; the target point cloud data is adjusted in a case where the environment information and the image information meet a preset condition; and a virtual image of the target object is generated in the target space according to the adjusted target point cloud data. This makes it possible to generate a virtual image adapted to the environment information and to avoid conflicts between the virtual image and the environment, thereby better satisfying user needs and improving the intelligence of information processing, which well solves the problem that existing information processing solutions are not intelligent enough and cannot satisfy user needs well.
Optionally, the preset condition includes at least one of the following conditions: a difference between background color information in the environment information and color information in the image information is smaller than a preset value; scene type information in the environment information does not match clothing type information in the image information; scene type information in the environment information does not match hairstyle information in the image information.
Optionally, the processor 610 is specifically configured to adjust the image information in the target point cloud data to obtain target image information that matches the environment information.
Optionally, the radio frequency unit 61 is further configured to obtain target audio data corresponding to the target object after the target point cloud data is adjusted in the case where the environment information and the image information meet the preset condition;
the processor 610 is further configured to adjust the target audio data according to the image information in the adjusted target point cloud data, and play the adjusted target audio data through the audio output unit 63.
Optionally, the image information includes at least one of face shape information, hairstyle information, body shape information, and clothing information;
the processor 610 is specifically configured to adjust sound quality feature data in the target audio data according to the image information in the adjusted target point cloud data.
In summary, in addition to providing the user with a traditional visual experience, the solution provided by the embodiments of this application can also appropriately change the material colors of the virtual character by analyzing the environment information of the environment in which the local device is located, so that the character better adapts to the environment, does not conflict with the environment colors, and is more recognizable, giving consumers a better visual experience during AR communication. In addition, it can also provide a personalized auditory experience after voice changing in combination with the image of the virtual character, providing dynamic, personalized, and friendly voices for the user's AR remote conferences, AR live broadcasts, AR remote assistance, and so on, enhancing the appeal and fun of the entire system.
An embodiment of this application further provides a readable storage medium on which a program or instruction is stored. When the program or instruction is executed by a processor, each process of the above information processing method embodiments is implemented, with the same technical effects; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
An embodiment of this application further provides a chip, which includes a processor and a communication interface. The communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement each process of the above information processing method embodiments, with the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chips mentioned in the embodiments of this application may also be referred to as system-level chips, system chips, chip systems, or system-on-chip chips.
It should be noted that, in this document, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements, but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further restrictions, an element defined by the statement "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus including that element. In addition, it should be pointed out that the scope of the methods and apparatuses in the embodiments of this application is not limited to performing functions in the order shown or discussed; it may also include performing functions in a substantially simultaneous manner or in a reverse order according to the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. Professionals may use different methods for each specific application to implement the described functions, but such implementations should not be considered beyond the scope of this disclosure.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, apparatuses, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
Through the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and includes several instructions to cause a terminal (which can be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the methods described in the various embodiments of this application.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by controlling the relevant hardware through a computer program. The program may be stored in a computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods.
It can be understood that the embodiments described in the embodiments of the present disclosure can be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof. For hardware implementation, modules, units, and sub-units can be implemented in one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Digital Signal Processing Devices (DSPD), Programmable Logic Devices (PLD), Field-Programmable Gate Arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units used to implement the functions described in this disclosure, or a combination thereof.
For software implementation, the technology described in the embodiments of the present disclosure can be implemented by modules (for example, procedures, functions, etc.) that perform the functions described in the embodiments of the present disclosure. The software codes can be stored in a memory and executed by a processor. The memory can be implemented in the processor or external to the processor.
The embodiments of this application have been described above in conjunction with the accompanying drawings, but this application is not limited to the above specific implementations. The above specific implementations are merely illustrative rather than restrictive. Inspired by this application, those of ordinary skill in the art can make many other forms without departing from the purpose of this application and the scope protected by the claims, all of which fall within the protection of this application.

Claims (15)

  1. An information processing method, comprising:
    obtaining target point cloud data corresponding to a target object and environment information of a target space, wherein the target point cloud data includes image information of the target object;
    adjusting the target point cloud data in a case where the environment information and the image information meet a preset condition; and
    generating a virtual image of the target object in the target space according to the adjusted target point cloud data.
  2. The information processing method according to claim 1, wherein the preset condition includes at least one of the following conditions:
    a difference between background color information in the environment information and color information in the image information is smaller than a preset value;
    scene type information in the environment information does not match clothing type information in the image information;
    scene type information in the environment information does not match hairstyle information in the image information.
  3. The information processing method according to claim 1, wherein the adjusting the target point cloud data comprises:
    adjusting the image information in the target point cloud data to obtain target image information that matches the environment information.
  4. The information processing method according to claim 1, wherein after adjusting the target point cloud data in the case where the environment information and the image information meet the preset condition, the method further comprises:
    obtaining target audio data corresponding to the target object; and
    adjusting the target audio data according to the image information in the adjusted target point cloud data, and playing the adjusted target audio data.
  5. The information processing method according to claim 4, wherein the image information includes at least one of face shape information, hairstyle information, body shape information, and clothing information;
    the adjusting the target audio data according to the image information in the adjusted target point cloud data comprises:
    adjusting sound quality feature data in the target audio data according to the image information in the adjusted target point cloud data.
  6. An information processing apparatus, comprising:
    a first obtaining module, configured to obtain target point cloud data corresponding to a target object and environment information of a target space, wherein the target point cloud data includes image information of the target object;
    a first adjusting module, configured to adjust the target point cloud data in a case where the environment information and the image information meet a preset condition; and
    a first generating module, configured to generate a virtual image of the target object in the target space according to the adjusted target point cloud data.
  7. The information processing apparatus according to claim 6, wherein the preset condition includes at least one of the following conditions:
    a difference between background color information in the environment information and color information in the image information is smaller than a preset value;
    scene type information in the environment information does not match clothing type information in the image information;
    scene type information in the environment information does not match hairstyle information in the image information.
  8. The information processing apparatus according to claim 6, wherein the first adjusting module comprises:
    a first adjusting submodule, configured to adjust the image information in the target point cloud data to obtain target image information that matches the environment information.
  9. The information processing apparatus according to claim 6, further comprising:
    a second obtaining module, configured to obtain target audio data corresponding to the target object after the target point cloud data is adjusted in the case where the environment information and the image information meet the preset condition; and
    a first processing module, configured to adjust the target audio data according to the image information in the adjusted target point cloud data, and play the adjusted target audio data.
  10. The information processing apparatus according to claim 9, wherein the image information includes at least one of face shape information, hairstyle information, body shape information, and clothing information;
    the first processing module comprises:
    a second adjusting submodule, configured to adjust sound quality feature data in the target audio data according to the image information in the adjusted target point cloud data.
  11. An electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the information processing method according to any one of claims 1 to 5.
  12. A readable storage medium, storing a program or instruction, wherein the program or instruction, when executed by a processor, implements the steps of the information processing method according to any one of claims 1 to 5.
  13. A chip, comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the information processing method according to any one of claims 1 to 5.
  14. A computer software product, stored in a non-volatile storage medium, wherein the software product is configured to be executed by at least one processor to implement the steps of the information processing method according to any one of claims 1 to 5.
  15. An information processing apparatus, configured to execute the information processing method according to any one of claims 1 to 5.
PCT/CN2021/091975 2020-05-08 2021-05-07 Information processing method and apparatus, and electronic device WO2021223724A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010390278.XA CN111583415B (zh) 2020-05-08 2020-05-08 Information processing method and apparatus, and electronic device
CN202010390278.X 2020-05-08

Publications (1)

Publication Number Publication Date
WO2021223724A1 true WO2021223724A1 (zh) 2021-11-11

Family

ID=72112248

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/091975 WO2021223724A1 (zh) 2020-05-08 2021-05-07 信息处理方法、装置和电子设备

Country Status (2)

Country Link
CN (1) CN111583415B (zh)
WO (1) WO2021223724A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115222899A (zh) * 2022-09-21 2022-10-21 湖南草根文化传媒有限公司 Virtual digital human generation method and system, computer device, and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583415B (zh) * 2020-05-08 2023-11-24 维沃移动通信有限公司 Information processing method and apparatus, and electronic device
CN112055167A (zh) * 2020-09-18 2020-12-08 深圳随锐云网科技有限公司 Remote collaborative three-dimensional modeling system and method based on a 5G cloud video conference
CN114612643B (zh) * 2022-03-07 2024-04-12 北京字跳网络技术有限公司 Image adjustment method and apparatus for a virtual object, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107170047A * 2017-04-13 2017-09-15 北京小鸟看看科技有限公司 Virtual reality scene updating method and device, and virtual reality device
CN108804546A * 2018-05-18 2018-11-13 维沃移动通信有限公司 Clothing matching recommendation method and terminal
US20190180486A1 (en) * 2017-12-12 2019-06-13 Beijing Xiaomi Mobile Software Co., Ltd. Method and device for displaying image
CN110401810A * 2019-06-28 2019-11-01 广东虚拟现实科技有限公司 Virtual picture processing method, apparatus, and system, electronic device, and storage medium
CN111583415A * 2020-05-08 2020-08-25 维沃移动通信有限公司 Information processing method and apparatus, and electronic device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8982110B2 (en) * 2005-03-01 2015-03-17 Eyesmatch Ltd Method for image transformation, augmented reality, and teleperence
US10210670B2 (en) * 2014-10-15 2019-02-19 Seiko Epson Corporation Head-mounted display device, method of controlling head-mounted display device, and computer program
CN106303555B * 2016-08-05 2019-12-03 深圳市摩登世纪科技有限公司 Live broadcast method, apparatus, and system based on mixed reality
CN107203266A * 2017-05-17 2017-09-26 东莞市华睿电子科技有限公司 VR-based data processing method
GB201710840D0 (en) * 2017-07-05 2017-08-16 Jones Maria Francisca Virtual meeting participant response indication method and system
CN110084891B * 2019-04-16 2023-02-17 淮南师范学院 Color adjustment method for AR glasses, and AR glasses
CN110072116A * 2019-05-06 2019-07-30 广州虎牙信息科技有限公司 Virtual anchor recommendation method and apparatus, and live broadcast server
CN110297684B * 2019-06-28 2023-06-30 腾讯科技(深圳)有限公司 Theme display method and apparatus based on a virtual character, and storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115222899A (zh) * 2022-09-21 2022-10-21 湖南草根文化传媒有限公司 Virtual digital human generation method and system, computer device, and storage medium
CN115222899B (zh) * 2022-09-21 2023-02-21 湖南草根文化传媒有限公司 Virtual digital human generation method and system, computer device, and storage medium

Also Published As

Publication number Publication date
CN111583415A (zh) 2020-08-25
CN111583415B (zh) 2023-11-24

Similar Documents

Publication Publication Date Title
WO2021223724A1 (zh) Information processing method and apparatus, and electronic device
CN104170318B Communication using interactive avatars
CN110401810B Virtual picture processing method, apparatus, and system, electronic device, and storage medium
EP4099709A1 Data processing method and apparatus, device, and readable storage medium
US6943794B2 Communication system and communication method using animation and server as well as terminal device used therefor
CN110418095B Virtual scene processing method and apparatus, electronic device, and storage medium
CN109189544B Method and apparatus for generating watch faces
CN110413108B Virtual picture processing method, apparatus, and system, electronic device, and storage medium
JP2016537922A Simulated video call method and terminal
WO2017072534A2 Communication system and method
CN113592985B Method and apparatus for outputting blend shape values, storage medium, and electronic apparatus
CN113362263B Method, device, medium, and program product for transforming the image of a virtual idol
WO2021143574A1 Augmented reality glasses, KTV implementation method based on augmented reality glasses, and medium
CN111539882A Interactive method for assisting makeup, terminal, and computer storage medium
CN115049016B Model driving method and device based on emotion recognition
CN114187547A Target video output method and apparatus, storage medium, and electronic apparatus
CN110794964A Interaction method and apparatus for a virtual robot, electronic device, and storage medium
CN113313797A Avatar driving method and apparatus, electronic device, and readable storage medium
CN108962254A Method, apparatus, and system for assisting hearing-impaired people, and augmented reality glasses
CN113301372A Live broadcast method, apparatus, terminal, and storage medium
CN112364144A Interaction method, apparatus, and device, and computer-readable medium
CN115690281B Method and apparatus for driving character expressions, storage medium, and electronic apparatus
CN109685741B Image processing method and apparatus, and computer storage medium
WO2023087929A1 Auxiliary photographing method and apparatus, terminal, and computer-readable storage medium
CN116433810A Server, display device, and virtual digital human interaction method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21799588

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21799588

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 21799588

Country of ref document: EP

Kind code of ref document: A1