WO2020056694A1 - Augmented reality communication method and electronic device - Google Patents

Augmented reality communication method and electronic device

Info

Publication number
WO2020056694A1
WO2020056694A1 (PCT/CN2018/106789, CN2018106789W)
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
model
communication
touch screen
user
Prior art date
Application number
PCT/CN2018/106789
Other languages
English (en)
French (fr)
Inventor
蒋东生
郭泽金
杜明亮
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to PCT/CN2018/106789 (WO2020056694A1)
Priority to US17/278,015 (US11743954B2)
Priority to CN201880090776.3A (CN111837381A)
Priority to EP18933804.9A (EP3833012A4)
Publication of WO2020056694A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/14 - Systems for two-way working
    • H04N 7/15 - Conference systems
    • H04N 7/157 - Conference systems defining a virtual conference space and using avatars or agents
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 76/00 - Connection management
    • H04W 76/10 - Connection setup
    • H04W 76/14 - Direct-mode setup
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/225 - Feedback of the input speech

Definitions

  • the embodiments of the present application relate to the field of communication technologies, and in particular, to an augmented reality communication method and an electronic device.
  • the embodiments of the present application provide a communication method for augmented reality, which can provide users with interactive services in a real scene during a voice communication or a video communication.
  • an embodiment of the present application provides an augmented reality communication method.
  • the method includes: in response to a first operation, a first electronic device sends AR communication request information to a second electronic device to request to perform augmented reality (AR) communication with the second electronic device; the first electronic device establishes an AR communication link with the second electronic device; the first electronic device displays on the touch screen a first AR communication interface (i.e., the first AR communication interface a), which includes an image of the first reality scene and, in the first reality scene, the first AR model (such as the first AR model a) and the second AR model (such as the second AR model a).
  • the first AR model makes corresponding expressions and actions according to the expressions and actions of the first user obtained by the first electronic device; the second AR model makes corresponding expressions and actions according to the expressions and actions of the second user obtained by the second electronic device.
  • the first AR communication interface displayed by the first electronic device includes an image of the first reality scene in which the first electronic device is located, and the first AR model and the second AR model in the first reality scene.
  • the first AR model can present the expressions and actions of the first user (that is, the calling party), and the second AR model can present the expressions and actions of the second user (that is, the called party).
  • the user can be provided with an interactive service in a real scene, and the communication experience of the user can be improved.
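For readers who prefer to see the flow spelled out, the following Kotlin sketch mirrors the caller-side sequence summarized above: the first operation triggers the AR communication request, the AR communication link is established, and the first AR communication interface is rendered with both AR models in the first reality scene. All type and function names (ArModel, ArCommunicationRequest, ArLink, FirstElectronicDevice) are invented for illustration; the patent defines no programming interface, and the real signalling and rendering are assumed to happen elsewhere.

```kotlin
// Hypothetical types; the patent does not define any API.
data class ArModel(val name: String)                      // e.g. first AR model a / second AR model a
data class ArCommunicationRequest(val from: String, val to: String)
data class ArLink(val peer: String)                       // stands in for the AR communication link

class FirstElectronicDevice(private val self: String) {
    // Step 1: in response to the first operation, send AR communication request information.
    fun onFirstOperation(peer: String): ArCommunicationRequest =
        ArCommunicationRequest(from = self, to = peer)

    // Step 2: after the peer agrees, establish the AR communication link.
    fun establishLink(request: ArCommunicationRequest): ArLink = ArLink(peer = request.to)

    // Step 3: display the first AR communication interface: an image of the first
    // reality scene plus the first and second AR models placed in that scene.
    fun showFirstArInterface(link: ArLink, scene: String, caller: ArModel, callee: ArModel) {
        println("AR link with ${link.peer}: rendering '$scene' with ${caller.name} and ${callee.name}")
    }
}

fun main() {
    val device = FirstElectronicDevice(self = "first electronic device")
    val request = device.onFirstOperation(peer = "second electronic device")
    val link = device.establishLink(request)
    device.showFirstArInterface(
        link, "first reality scene",
        ArModel("first AR model a"), ArModel("second AR model a")
    )
}
```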
  • the first electronic device may receive the first operation during a voice communication or a video communication with the second electronic device. Specifically, before the first electronic device sends the AR communication request information to the second electronic device in response to the first operation, the method in the embodiment of the present application may further include: the first electronic device performs voice communication or video communication with the second electronic device, and displays a graphical user interface of the voice communication or video communication on the touch screen.
  • the first operation may be a first preset gesture input on a touch screen to a graphical user interface of voice communication or video communication.
  • the graphical user interface of voice communication or video communication includes an AR communication button, and the first operation is a click operation on the AR communication button on the touch screen.
  • the first operation is a click operation on a first preset button during a voice communication or a video communication, and the first preset button is a physical button of the first electronic device.
  • the first electronic device may initiate AR communication during a voice communication or a video communication with the second electronic device, which may improve the user experience of the voice communication or the video communication.
  • an AR application is installed in the first electronic device and the second electronic device.
  • AR applications are clients used to provide AR communication services.
  • the first electronic device may request AR communication with the second electronic device through the AR application.
  • the method in the embodiment of the present application further includes: in response to a click operation on an application icon of the AR application, the first electronic device displays an AR application interface including at least one contact option, and the at least one contact option includes a contact option corresponding to the second electronic device.
  • the first operation may be a first user's click operation on a contact option corresponding to the second electronic device.
  • the first electronic device includes a first camera (such as a front camera) and a second camera (such as a rear camera), and the second electronic device includes a first camera (such as a front camera).
  • the above-mentioned image of the first reality scene is an image collected by the second camera of the first electronic device; the expressions and actions of the first user are collected by the first camera of the first electronic device; and the expressions and actions of the second user are collected by the first camera of the second electronic device.
  • the first electronic device may receive behavior information of the second user sent by the second electronic device through the AR communication link.
  • the behavior information of the second user is used to represent changes in the expression and movement of the second user.
  • the behavior information of the second user is collected by the first camera of the second electronic device.
  • the first electronic device may also send the behavior information of the first user to the second electronic device through the AR communication link.
  • the behavior information of the first user is used to represent changes in the expression and movement of the first user.
  • the behavior information of the first user is collected by the first camera of the first electronic device.
  • the second electronic device can display a dynamic image in which the expressions and actions of the first AR model (such as the first AR model b) in the second reality scene change with the expressions and actions of the first user. That is to say, through the above solution, the first user can control the expressions and actions of the first AR model b displayed by the second electronic device, which improves the user's communication experience.
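A minimal sketch of this behaviour-information exchange, under the assumption that the payload is a simple record of expression and movement changes (the patent does not specify an encoding), might look as follows; BehaviorInfo, ArModelView, and ArCommunicationLink are hypothetical names.

```kotlin
// Hypothetical behaviour payload: the patent only says it represents changes
// in a user's expression and movement, not how it is encoded.
data class BehaviorInfo(val user: String, val expression: String, val movement: String)

class ArModelView(private val label: String) {
    // Apply the received behaviour so the AR model mirrors the remote user.
    fun apply(info: BehaviorInfo) =
        println("$label now shows expression='${info.expression}', movement='${info.movement}'")
}

// Stand-in for the AR communication link between the two devices.
class ArCommunicationLink(private val onReceive: (BehaviorInfo) -> Unit) {
    fun send(info: BehaviorInfo) = onReceive(info)   // real code would serialise and transmit
}

fun main() {
    val firstArModelB = ArModelView("first AR model b on the second electronic device")
    val link = ArCommunicationLink(onReceive = firstArModelB::apply)
    // The first electronic device captures the first user with its front camera and sends the change.
    link.send(BehaviorInfo(user = "first user", expression = "smile", movement = "wave"))
}
```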
  • the first AR model is an AR model preset for the first electronic device in the first electronic device (that is, a default AR model), and the second AR model is an AR model preset for the second electronic device in the first electronic device.
  • the first electronic device may receive a user's operation (such as a click operation) on the "Settings" application icon on the main interface (i.e., the desktop) of the first electronic device; in response to the operation, the first electronic device 200 may display a setting interface including an AR model setting option.
  • the first electronic device may display an AR model setting interface.
  • the AR model setting interface may include an AR model setting option of the owner and an AR model setting option of multiple contacts. The first electronic device may receive a user's click operation on any setting option, and set an AR model for a corresponding user.
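Conceptually, these per-user settings are just a mapping from the owner or a contact to a preset AR model. The sketch below illustrates such a settings store; ArModelSettings and the model names are invented, and the fallback to a "default AR model" reflects the default mentioned above.

```kotlin
// Hypothetical settings store: maps the owner and each contact to a preset AR model.
class ArModelSettings {
    private val presets = mutableMapOf<String, String>()

    // Called when the user taps a setting option and picks a model on the setting interface.
    fun setPreset(who: String, model: String) { presets[who] = model }

    // Used later when an AR communication interface is built and a default model is needed.
    fun presetFor(who: String): String = presets[who] ?: "default AR model"
}

fun main() {
    val settings = ArModelSettings()
    settings.setPreset("owner", "emoji head")            // AR model setting option of the owner
    settings.setPreset("second user", "cartoon fox")     // AR model setting option of a contact
    println(settings.presetFor("second user"))           // cartoon fox
    println(settings.presetFor("unknown contact"))       // default AR model
}
```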
  • the foregoing first AR model and the second AR model may be AR models added by the first electronic device to the AR communication interface in response to a user operation.
  • displaying the first AR communication interface on the touch screen by the first electronic device includes: in response to establishing the AR communication link, the first electronic device turns on the first camera and the second camera of the first electronic device, so that the second camera of the first electronic device can collect images of the first reality scene and the first camera of the first electronic device can collect expressions and actions of the first user; the first electronic device displays on the touch screen a second AR communication interface (that is, the second AR communication interface a) that includes the image of the first reality scene but does not include the first AR model and the second AR model; that is, the AR communication interface displayed by the first electronic device at the beginning does not include the AR models, and the user is required to operate the first electronic device to add an AR model; in response to a second operation on the second AR communication interface on the touch screen, the first electronic device displays a model selection interface including a plurality of model options, from which the first AR model and the second AR model can be selected and added.
  • the AR communication interface first displayed by the first electronic device (such as the second AR communication interface) includes only the image of the first reality scene collected by the second camera of the first electronic device, and does not include the first AR model or the second AR model.
  • An AR model may be added by the user in this second AR communication interface.
  • in response to a click operation on the first AR model on the touch screen, the first electronic device may display the above model selection interface; in response to a click operation on a third model option among the plurality of model options on the touch screen, the first electronic device may replace the first AR model in the first AR communication interface with a third AR model corresponding to the third model option.
  • the first electronic device can change the AR model at any time according to the user's operation, and display the AR model that meets the user's preferences, which can improve the user's communication experience.
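The add-and-replace flow can be pictured as updating which model option is bound to each party's slot in the displayed interface. The following sketch uses invented option names and a deliberately simplified state object; it is not an actual rendering implementation.

```kotlin
// Hypothetical model-option catalogue shown in the model selection interface.
val modelOptions = listOf("cartoon fox", "emoji head", "robot")   // illustrative names only

// The AR communication interface holds at most one model per party.
data class ArInterfaceState(var firstArModel: String? = null, var secondArModel: String? = null)

fun selectModel(state: ArInterfaceState, slot: String, optionIndex: Int): ArInterfaceState {
    val chosen = modelOptions[optionIndex]
    when (slot) {
        "first" -> state.firstArModel = chosen       // e.g. adding or replacing the first AR model
        "second" -> state.secondArModel = chosen
    }
    return state
}

fun main() {
    val state = ArInterfaceState()
    selectModel(state, "first", 0)    // second AR interface -> third AR interface
    selectModel(state, "second", 1)   // third AR interface -> first AR interface
    selectModel(state, "first", 2)    // user taps the first AR model later and swaps it
    println(state)                    // ArInterfaceState(firstArModel=robot, secondArModel=emoji head)
}
```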
  • in response to a third operation (i.e., a recording operation) on the touch screen, the first electronic device may start recording video data of the AR communication between the first electronic device and the second electronic device.
  • the video data can be sent to other electronic devices, so that users of the other electronic devices can learn and understand the content involved in the communication.
  • the first electronic device may upload the video data to a public network platform, so that more users can learn and understand the content involved in the communication.
  • the image of the first reality scene changes with the framing content of the second camera of the first electronic device.
  • the image of the real scene displayed by the first electronic device can change in real time as the first electronic device moves, and the user's communication experience in the real scene can be improved.
  • a position of the first AR model in the first reality scene is different from a position of the second AR model in the first reality scene.
  • the first electronic device may display a dynamic image of the first AR model moving along a trajectory corresponding to the drag operation in the first reality scene in response to the drag operation of the first AR model on the touch screen.
  • in response to a fourth operation on the second AR model on the touch screen, the first electronic device may display contact information of the second user. The contact information includes at least one of the second user's phone number, email address, or avatar.
  • after the first electronic device displays the first AR communication interface on the touch screen, the first electronic device can recognize the voice data collected by the microphone of the first electronic device and the voice data converted from the audio electrical signal received from the second electronic device, and display the text of the recognized voice data on the touch screen. That is, the first electronic device can display the text (i.e., subtitles) of the voice data of the AR communication in the AR communication interface. By displaying the text of the voice data of the AR communication, the first electronic device can visually present to the user the content exchanged by the two users during the AR communication between the first electronic device and the second electronic device.
  • when the first electronic device recognizes voice data corresponding to a preset text, the first electronic device displays on the touch screen a dynamic image of the first AR model and the second AR model performing the action corresponding to the preset text.
  • the first electronic device may store a plurality of preset texts and actions corresponding to the preset texts. For example, the action corresponding to the preset text "hello” is a handshake. The action corresponding to the preset text "Goodbye” is waving.
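One plausible realisation of this preset-text behaviour is a lookup table from recognised phrases to animations, as in the sketch below; the map contents follow the "hello" and "Goodbye" examples above, and actionForSpeech is an invented helper that assumes speech recognition has already produced text.

```kotlin
// Preset texts and the actions the AR models should perform when they are recognised.
val presetActions = mapOf(
    "hello" to "handshake",
    "goodbye" to "waving"
)

// Called after the voice data has been recognised into text (recognition itself not shown).
fun actionForSpeech(recognizedText: String): String? =
    presetActions[recognizedText.trim().lowercase()]

fun main() {
    println(actionForSpeech("Hello"))     // handshake -> both AR models play the handshake animation
    println(actionForSpeech("Goodbye"))   // waving
    println(actionForSpeech("see you"))   // null -> no preset action, the models keep following the users
}
```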
  • after the first electronic device sends the AR communication request information to the second electronic device, if the first electronic device receives the first AR communication response information sent by the second electronic device, it means that the second electronic device agrees to perform the AR communication with the first electronic device. In response to the first AR communication response information, the first electronic device establishes an AR communication link with the second electronic device.
  • when the first electronic device requests AR communication with the second electronic device, the first electronic device cannot directly establish an AR communication link with the second electronic device; instead, in accordance with the wishes of the called party, the AR communication link is established only after the second electronic device (that is, the called party) agrees. In this way, the communication experience of the user can be improved.
  • the second electronic device may not agree to perform AR communication with the first electronic device. If the first electronic device receives the second AR communication response information sent by the second electronic device, it indicates that the second electronic device refuses to perform the AR communication with the first electronic device. In response to the second AR communication response information, the first electronic device may present the second prompt information. The second prompt information is used to indicate that the second electronic device refuses to perform the AR communication with the first electronic device.
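On the caller side, the two response cases reduce to a small branch: establish the link on the first AR communication response information, present the second prompt information on the second. The sealed-class sketch below is hypothetical and only models that decision.

```kotlin
// Hypothetical response types: the patent distinguishes only "agree" and "refuse".
sealed class ArResponse
object FirstArResponse : ArResponse()    // the second electronic device agrees
object SecondArResponse : ArResponse()   // the second electronic device refuses

fun handleResponse(response: ArResponse) = when (response) {
    is FirstArResponse -> println("Establishing the AR communication link with the second electronic device")
    is SecondArResponse -> println("Second prompt information: the peer refused the AR communication")
}

fun main() {
    handleResponse(FirstArResponse)
    handleResponse(SecondArResponse)
}
```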
  • an embodiment of the present application provides an augmented reality communication method.
  • the method may include: the second electronic device receives AR communication request information sent by the first electronic device, and the AR communication request information is used to request AR communication with the second electronic device; in response to the AR communication request information, the second electronic device establishes an AR communication link with the first electronic device; the second electronic device displays a first AR communication interface on the touch screen, and the first AR communication interface includes an image of a second reality scene, and a first AR model (such as the first AR model b) and a second AR model (such as the second AR model b) in the second reality scene.
  • the second reality scene is a reality scene in which the second electronic device is located.
  • the first AR model makes corresponding expressions and actions according to the expressions and actions of the first user obtained by the first electronic device; the second AR model makes corresponding expressions and actions according to the expressions and actions of the second user obtained by the second electronic device.
  • the first AR communication interface displayed by the second electronic device includes an image of the second reality scene where the second electronic device is located, and the first AR model and the second AR model in the second reality scene.
  • the expressions and actions of the first user can be presented on the first AR model
  • the expressions and actions of the second user can be presented on the second AR model.
  • the second electronic device may present the first prompt information in response to the AR communication request information.
  • the first prompt information is used to confirm whether the second electronic device performs AR communication with the first electronic device. Then, the user decides whether to perform AR communication with the first electronic device.
  • the second electronic device may establish the AR communication link with the first electronic device in response to an operation (that is, the fifth operation) by which the user agrees to perform the AR communication.
  • alternatively, the second electronic device may refuse to establish the AR communication link with the first electronic device.
  • if the second electronic device agrees to perform the AR communication, the second electronic device may send the first AR communication response information to the first electronic device.
  • if the second electronic device refuses to perform the AR communication, the second electronic device may send the second AR communication response information to the first electronic device.
  • in response to the AR communication request information, the second electronic device does not directly establish the AR communication link with the first electronic device, but decides whether to perform AR communication with the first electronic device according to the user's wishes, which can improve the user's communication experience.
  • the legality of the first electronic device may be authenticated before the second electronic device displays the first prompt information.
  • the second electronic device can determine whether the first electronic device is legal; if the first electronic device is legal, the second electronic device presents the first prompt information.
  • that the first electronic device is legal means that: the device identification information of the first electronic device is stored in the white list of the second electronic device, or the device identification information of the first electronic device is not stored in the black list of the second electronic device.
  • the device identification information of the first electronic device includes a phone number of the first electronic device.
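The legality check therefore amounts to looking up the caller's phone number in a white list or a black list. The sketch below assumes that a non-empty white list takes precedence and that the black list is consulted otherwise, which is one plausible reading of the two alternative conditions above; the class and the numbers are invented.

```kotlin
// Device identification information; the patent mentions at least the phone number.
data class DeviceId(val phoneNumber: String)

class CalledDevice(
    private val whiteList: Set<String> = emptySet(),
    private val blackList: Set<String> = emptySet()
) {
    // The first electronic device is "legal" if it is on the white list,
    // or (when no white list is maintained) simply not on the black list.
    fun isLegal(caller: DeviceId): Boolean = when {
        whiteList.isNotEmpty() -> caller.phoneNumber in whiteList
        else -> caller.phoneNumber !in blackList
    }
}

fun main() {
    val device = CalledDevice(whiteList = setOf("+86-130-0000-0001"))
    println(device.isLegal(DeviceId("+86-130-0000-0001")))  // true  -> present the first prompt information
    println(device.isLegal(DeviceId("+86-130-9999-9999")))  // false -> do not present the prompt
}
```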
  • the second electronic device and the first electronic device are performing voice communication or video communication, and a graphical user interface of the voice communication or video communication is displayed on the touch screen.
  • Initiating AR communication during the voice communication or video communication between the first electronic device and the second electronic device can improve the user experience of voice communication or video communication.
  • the AR application is installed in the first electronic device and the second electronic device.
  • the AR communication request information is sent by the first electronic device to the second electronic device in the AR application.
  • the second electronic device includes a first camera (such as a front camera) and a second camera (such as a rear camera).
  • the first electronic device includes a first camera (such as a front camera).
  • the image of the second reality scene is an image collected by the second camera of the second electronic device; the expressions and actions of the first user are collected by the first camera of the first electronic device; and the expressions and actions of the second user are collected by the first camera of the second electronic device.
  • the second electronic device may receive the behavior information of the first user sent by the first electronic device through the AR communication link.
  • the behavior information of the first user is used to represent changes in the expression and movement of the first user.
  • the behavior information of the first user is collected by the first camera of the first electronic device. That is, through the above solution, the first user can control the expression and movement of the first AR model displayed by the second electronic device, and improve the user's communication experience.
  • the second electronic device may also send the behavior information of the second user to the first electronic device through the AR communication link.
  • the behavior information of the second user is used to represent changes in the expression and movement of the second user.
  • the behavior information of the second user is collected by the first camera of the second electronic device.
  • the first electronic device can display a dynamic image in which the expressions and actions of the second AR model (such as the second AR model a) in the first reality scene change with the expressions and actions of the second user. That is to say, through the above solution, the second user can control the expressions and actions of the second AR model a displayed by the first electronic device, which improves the user's communication experience.
  • the first AR model is an AR model preset for the first electronic device in the second electronic device, and the second AR model is an AR model preset for the second electronic device in the second electronic device. For the method for setting the AR models in the second electronic device, reference may be made to the method for setting the AR models in the first electronic device.
  • the foregoing first AR model and the second AR model may be AR models added by the second electronic device in response to the second operation.
  • during the process in which the second electronic device adds the AR models in response to the second operation, the AR communication interfaces displayed by the second electronic device include the second AR communication interface b, the third AR communication interface b, and the first AR communication interface.
  • the second AR communication interface b includes an image of the second reality scene, but does not include the first AR model b and the second AR model b.
  • the third AR communication interface b includes an image of the second reality scene and the first AR model b, but does not include the second AR model b.
  • the second electronic device can change the AR model at any time according to the user's operation and display an AR model that meets the user's preferences, which can improve the user's communication experience.
  • in response to a third operation (i.e., a recording operation) on the touch screen, the second electronic device can start recording video data of the AR communication between the first electronic device and the second electronic device.
  • the video data can be sent to other electronic devices, so that users of the other electronic devices can learn and understand the content involved in the communication.
  • the second electronic device may upload the video data to a public network platform, so that more users can learn and understand the content involved in the communication.
  • the image of the second reality scene changes with the framing content of the second camera of the second electronic device.
  • the image of the real scene displayed by the second electronic device can change in real time as the second electronic device moves, and the user's communication experience in the real scene can be improved.
  • a position of the first AR model in the second reality scene is different from a position of the second AR model in the second reality scene.
  • in response to a drag operation on the first AR model on the touch screen, the second electronic device may display a dynamic image of the first AR model moving along a trajectory corresponding to the drag operation in the second reality scene.
  • after the second electronic device displays the first AR communication interface on the touch screen, in response to a fourth operation on the second AR model on the touch screen, it may display contact information of the second user.
  • for the method for displaying the contact information by the second electronic device, reference may be made to the method for displaying the contact information by the first electronic device, which is not described in the embodiment of the present application.
  • the second electronic device may recognize the voice data collected by the microphone of the second electronic device and the voice data converted from the audio electrical signal received from the first electronic device; the second electronic device displays the text of the recognized voice data on the touch screen. That is, the second electronic device can display the text (i.e., subtitles) of the voice data of the AR communication in the AR communication interface.
  • the second electronic device displays the text of the voice data of the AR communication, and can visually present to the user the content of the communication between the two users during the AR communication between the first electronic device and the second electronic device.
  • when the second electronic device recognizes voice data corresponding to a preset text, the touch screen displays a dynamic image of the first AR model and the second AR model performing the action corresponding to the preset text. The second electronic device may store a plurality of preset texts and actions corresponding to the preset texts. For example, the action corresponding to the preset text "hello" is a handshake, and the action corresponding to the preset text "Goodbye" is waving.
  • the first AR model and the second AR model interact according to the voice data of the users on both sides, so that the content displayed on the AR communication interface is more consistent with a face-to-face communication picture of the users in a real scene, which can enhance the sense of reality of the AR communication and improve the user's communication experience.
  • an embodiment of the present application provides an electronic device, and the electronic device is a first electronic device.
  • the electronic device may include: a processor, a memory, a touch screen, and a communication interface; the memory, the touch screen, and the communication interface are coupled to the processor; the memory is used to store computer program code; the computer program code includes computer instructions. When the processor executes the computer instructions, the processor is configured to receive a first operation of the user; the communication interface is configured to send augmented reality (AR) communication request information to the second electronic device in response to the first operation, the AR communication request information being used to request AR communication with the second electronic device, and to establish an AR communication link with the second electronic device; the touch screen is configured to display a first AR communication interface during the AR communication, the first AR communication interface including an image of a first reality scene, and the first AR model and the second AR model in the first reality scene.
  • the first reality scene is the reality scene where the first electronic device is located; the first AR model is an AR model of a first user corresponding to the first electronic device, and the second AR model is an AR model of a second user corresponding to the second electronic device. In the process of AR communication, the first AR model displayed on the touch screen makes corresponding expressions and actions according to the expressions and actions of the first user acquired by the first electronic device, and the second AR model displayed on the touch screen makes corresponding expressions and actions according to the expressions and actions of the second user acquired by the second electronic device.
  • the processor is further configured to perform voice communication or video communication with the second electronic device before the communication interface sends the AR communication request information to the second electronic device.
  • the touch screen is also used to display a graphical user interface for voice communication or video communication.
  • the first operation is a first preset gesture input on the touch screen to the graphical user interface of the voice communication or video communication; or the graphical user interface of the voice communication or video communication includes an AR communication button, and the first operation is a click operation on the AR communication button on the touch screen.
  • an AR application is installed in the foregoing electronic device and the second electronic device, and the AR application is a client for providing an AR communication service; related information of the AR application is stored in the memory.
  • the touch screen is further configured to: before the communication interface sends the AR communication request information to the second electronic device, in response to a click operation on an application icon of the AR application, display an AR application interface, where the AR application interface includes at least one contact option, and the at least one contact option includes a contact option corresponding to the second electronic device.
  • the first operation is a click operation of the first user on a contact option corresponding to the second electronic device.
  • the first AR model is an AR model preset for the first electronic device in the electronic device, and the second AR model is an AR model preset for the second electronic device in the electronic device.
  • the touch screen displaying the first AR communication interface includes: in response to establishing the AR communication link, the touch screen displays a second AR communication interface, where the second AR communication interface includes an image of the first reality scene but does not include the first AR model and the second AR model.
  • the foregoing electronic device further includes a first camera and a second camera
  • the second electronic device includes a first camera.
  • the processor is further configured to, in response to establishing the AR communication link, turn on the first camera and the second camera of the first electronic device; the second camera of the first electronic device is used to collect an image of the first reality scene, and the first camera of the first electronic device is used to collect the expressions and actions of the first user.
  • the above touch screen is further configured to: after the first AR communication interface is displayed, in response to a click operation on the first AR model on the touch screen, display a model selection interface, where the model selection interface includes multiple model options, each model option corresponding to an AR model; and in response to a click operation on a third model option of the multiple model options on the touch screen, display an AR communication interface in which the first AR model in the first AR communication interface is replaced with a third AR model corresponding to the third model option.
  • the processor is further configured to, after the first AR communication interface is displayed on the touch screen, start recording video data of the AR communication between the first electronic device and the second electronic device in response to the third operation on the touch screen.
  • the touch screen is further configured to display the contact information of the second user in response to the fourth operation on the second AR model on the touch screen after the first AR communication interface is displayed on the touch screen.
  • the contact information includes at least one of the second user's phone number, email address, or avatar.
  • the processor is further configured to recognize, after the first AR communication interface is displayed on the touch screen, the voice data collected by the microphone of the first electronic device and the voice data converted from the audio electrical signal received from the second electronic device.
  • the touch screen is also used to display the text of the voice data recognized by the processor.
  • the touch screen is further configured to display a dynamic image of the first AR model and the second AR model performing the action corresponding to a preset text when the processor recognizes voice data corresponding to the preset text.
  • an embodiment of the present application provides an electronic device, and the electronic device is a second electronic device.
  • the electronic device includes: a processor, a memory, a touch screen, and a communication interface; the memory, the touch screen, and the communication interface are coupled to the processor; the memory is used to store computer program code; the computer program code includes computer instructions. When the processor executes the computer instructions, the communication interface is configured to receive AR communication request information sent by the first electronic device, the AR communication request information being used to request AR communication with the second electronic device; the processor is configured to establish an AR communication link with the first electronic device in response to the AR communication request information; the touch screen is configured to display the first AR communication interface during the AR communication, the first AR communication interface including an image of a second reality scene, and the first AR model and the second AR model in the second reality scene.
  • the second reality scene is the reality scene where the second electronic device is located; the first AR model is an AR model of the first user corresponding to the first electronic device, and the second AR model is an AR model of the second user corresponding to the second electronic device. In the process of AR communication, the first AR model displayed on the touch screen makes corresponding expressions and actions according to the expressions and actions of the first user acquired by the first electronic device, and the second AR model displayed on the touch screen makes corresponding expressions and actions according to the expressions and actions of the second user acquired by the second electronic device.
  • the processor is further configured to present the first prompt information in response to the AR communication request information, where the first prompt information is used to confirm whether the second electronic device performs AR communication with the first electronic device; and to establish an AR communication link with the first electronic device in response to an operation of the user on the touch screen agreeing to perform the AR communication.
  • the processor being configured to present the first prompt information in response to the AR communication request information includes: the processor being configured to determine, in response to the AR communication request information, whether the first electronic device is legal, and to present the first prompt information if the first electronic device is legal.
  • that the first electronic device is legal means that: the device identification information of the first electronic device is stored in the white list of the second electronic device, or the device identification information of the first electronic device is not stored in the black list of the second electronic device; the device identification information of the first electronic device includes the phone number of the first electronic device; and the white list and the black list of the second electronic device are stored in the memory.
  • the processor is further configured to perform voice communication or video communication with the first electronic device before the communication interface receives the AR communication request information.
  • the touch screen is also used to display a graphical user interface for voice communication or video communication.
  • the first AR model displayed on the touch screen is an AR model preset for the first electronic device in the electronic device, and the second AR model displayed on the touch screen is an AR model preset for the second electronic device in the electronic device.
  • the touch screen for displaying a first AR communication interface includes: in response to establishing the AR communication link, the touch screen displays a second AR communication interface, which includes an image of the second reality scene but does not include the first AR model and the second AR model; in response to a second operation on the second AR communication interface on the touch screen, the touch screen displays a model selection interface, where the model selection interface includes a plurality of model options, each model option corresponding to an AR model; in response to a first selection operation on a first model option of the plurality of model options on the touch screen, the touch screen displays a third AR communication interface, which includes the image of the second reality scene and the first AR model corresponding to the first model option but does not include the second AR model; in response to the second operation on the third AR communication interface on the touch screen, the touch screen displays the model selection interface; and in response to a second selection operation on a second model option of the plurality of model options on the touch screen, the touch screen displays the first AR communication interface, which includes the image of the second reality scene, the first AR model, and the second AR model corresponding to the second model option.
  • the foregoing electronic device includes a first camera and a second camera, and the first electronic device includes a first camera.
  • the processor is further configured to, in response to establishing the AR communication link, turn on the first camera and the second camera of the electronic device; the second camera of the electronic device is used to collect an image of the second reality scene, and the first camera of the electronic device is used to collect the expressions and actions of the second user.
  • the touch screen is further configured to: after the first AR communication interface is displayed, in response to a click operation on the first AR model on the touch screen, display a model selection interface, where the model selection interface includes multiple model options, each model option corresponding to an AR model; and in response to a click operation on a third model option of the multiple model options on the touch screen, display an AR communication interface in which the first AR model in the first AR communication interface is replaced with a third AR model corresponding to the third model option.
  • the processor is further configured to, after the first AR communication interface is displayed on the touch screen, start recording video data of the AR communication between the second electronic device and the first electronic device in response to the third operation on the touch screen.
  • the foregoing touch screen is further configured to display the contact information of the first user in response to the fourth operation on the first AR model on the touch screen after the first AR communication interface is displayed; the contact information includes at least one of a phone number, an email address, or an avatar of the first user.
  • the processor is further configured to recognize, after the first AR communication interface is displayed on the touch screen, the voice data collected by the microphone of the second electronic device and the voice data converted from the audio electrical signal received from the first electronic device; the touch screen is also used to display the text of the voice data recognized by the processor.
  • the touch screen is further configured to display a dynamic image of the first AR model and the second AR model performing the action corresponding to a preset text when the processor recognizes voice data corresponding to the preset text.
  • an embodiment of the present application provides a computer storage medium.
  • the computer storage medium includes computer instructions, and when the computer instructions are run on an electronic device, the electronic device is caused to execute the augmented reality communication method according to the first aspect or the second aspect and any one of the possible design manners thereof.
  • an embodiment of the present application provides a computer program product, and when the computer program product runs on a computer, the computer is caused to execute the augmented reality communication method according to the first aspect or the second aspect and any one of the possible design manners thereof.
  • FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment
  • FIG. 2 is a schematic diagram of an example communication scenario applied to an augmented reality communication method according to an embodiment
  • FIGS. 3A-3D are schematic diagrams of an example of an AR communication interface provided by the embodiment.
  • FIGS. 4A-4B are schematic diagrams of an example of an AR communication interface provided by another embodiment
  • FIGS. 5A-5B are schematic diagrams of examples of an AR communication interface according to another embodiment.
  • FIGS. 6A-6D are schematic diagrams of examples of an AR communication interface according to another embodiment.
  • FIGS. 7A-7E are schematic diagrams of examples of an AR communication interface provided by another embodiment.
  • FIG. 8 is a schematic diagram of an example of a facial feature point cloud according to an embodiment
  • FIG. 9A is a schematic diagram of an example of a three-dimensional face model and an AR model according to another embodiment
  • FIG. 9B is a flowchart of an augmented reality communication method according to another embodiment.
  • FIG. 9C is a schematic diagram of an AR communication example according to another embodiment.
  • FIGS. 10A-10B are schematic diagrams of an example of an AR communication interface according to another embodiment.
  • FIGS. 11A-11D are schematic diagrams of examples of an AR communication interface provided by another embodiment.
  • FIG. 12 is a schematic diagram of an example communication scenario applied to an augmented reality communication method according to another embodiment.
  • FIG. 13 is a schematic diagram of an AR communication interface according to another embodiment
  • FIGS. 14A-14B are schematic diagrams of an example of an AR communication interface provided by another embodiment
  • FIG. 16 is a flowchart of an augmented reality communication method according to another embodiment
  • FIG. 17 is a flowchart of an augmented reality communication method according to another embodiment.
  • FIG. 18 is a flowchart of an augmented reality communication method according to another embodiment.
  • FIG. 19 is a flowchart of an augmented reality communication method according to another embodiment.
  • FIG. 21 is a flowchart of an augmented reality communication method according to another embodiment.
  • FIG. 22 is a schematic structural composition diagram of an electronic device according to another embodiment.
  • the electronic devices in the following embodiments may be portable computers (such as mobile phones), notebook computers, personal computers (PCs), wearable electronic devices (such as smart watches, smart glasses, or smart helmets), tablet computers, AR/virtual reality (VR) devices, on-board computers, and the like.
  • AR technology is a technology that can superimpose a virtual object into a real scene to achieve the fusion and interaction of the virtual and the real in the real scene.
  • the AR technology in the embodiment of the present application may include AR technology based on multiple cameras.
  • the AR technology may include a front camera-based AR technology and a rear camera-based AR technology.
  • the front camera-based AR technology refers to: an electronic device can turn on the front camera and collect the user's face and facial features to generate a three-dimensional face model; add an AR model (such as an emoji model) pre-stored in the electronic device to the AR scene; and establish a mapping relationship between the three-dimensional face model and the AR model, so that the user's facial expressions and body movements can control the AR model.
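One simplified way to picture the mapping relationship is as a transfer of expression parameters estimated from the three-dimensional face model onto the AR model's animation controls. The sketch below reduces this to three invented parameters and a one-to-one copy; real face tracking and emoji rigging are far richer.

```kotlin
// Simplified expression parameters derived from the 3D face model (real systems use many more).
data class FaceExpression(val mouthOpen: Double, val leftEyeOpen: Double, val rightEyeOpen: Double)

// The AR model exposes matching animation controls; the mapping is a direct transfer here.
class ArFaceModel(val name: String) {
    fun drive(expr: FaceExpression) =
        println("$name: mouthOpen=${expr.mouthOpen}, eyes=(${expr.leftEyeOpen}, ${expr.rightEyeOpen})")
}

// "Mapping relationship": in this toy version the parameters are copied one-to-one,
// so the user's facial expression directly controls the emoji model.
fun mapToArModel(expr: FaceExpression, model: ArFaceModel) = model.drive(expr)

fun main() {
    val emoji = ArFaceModel("emoji model")
    // A laughing, squinting expression captured by the front camera.
    mapToArModel(FaceExpression(mouthOpen = 0.8, leftEyeOpen = 0.2, rightEyeOpen = 0.2), emoji)
}
```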
  • AR technology based on the rear camera means that: the electronic device can turn on the rear camera to collect an image of a real scene and display the image collected by the rear camera on the touch screen; add an AR model to the real scene displayed on the touch screen; a sensor of the electronic device detects the position change and motion parameters of the electronic device in real time, and the coordinate changes of the AR model in the real scene displayed on the touch screen are calculated according to the detected parameters; and the interaction of the AR model in the real scene is realized according to the calculated coordinate changes.
  • the electronic device may respond to a user's operation on the AR model displayed on the touch screen (such as a drag operation, a click operation, etc.) to implement the interaction of the AR model in a real scene.
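The rear-camera loop can be summarised as: read motion parameters from the sensors, update the device pose, and recompute where the anchored AR model should appear, while a drag operation moves the model's anchor in the scene instead. The sketch below shrinks this to a two-dimensional translation purely for illustration; it is not the patent's algorithm.

```kotlin
// World-anchored AR model position and the device pose, reduced to 2D for brevity.
data class Vec2(val x: Double, val y: Double)
data class DevicePose(val position: Vec2)

// The AR model stays fixed in the real scene, so its on-screen coordinates are the
// world coordinates expressed relative to the current device pose.
fun screenCoords(modelWorldPos: Vec2, pose: DevicePose): Vec2 =
    Vec2(modelWorldPos.x - pose.position.x, modelWorldPos.y - pose.position.y)

fun main() {
    var modelWorldPos = Vec2(2.0, 1.0)
    var pose = DevicePose(Vec2(0.0, 0.0))
    println(screenCoords(modelWorldPos, pose))        // Vec2(x=2.0, y=1.0)

    // Sensors report that the device moved; the model appears to shift on screen.
    pose = DevicePose(Vec2(0.5, 0.0))
    println(screenCoords(modelWorldPos, pose))        // Vec2(x=1.5, y=1.0)

    // A drag operation on the touch screen moves the model's anchor in the scene instead.
    modelWorldPos = Vec2(3.0, 1.0)
    println(screenCoords(modelWorldPos, pose))        // Vec2(x=2.5, y=1.0)
}
```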
  • FIG. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment.
  • the electronic device 100 may be the electronic device 200 or the electronic device 300 described in this embodiment.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the camera 193 may include a first camera and a second camera. For example, the first camera may be a front camera, and the second camera may be a rear camera.
  • the first camera in the following embodiments may be a structured light camera, which is also referred to as a point cloud depth camera or a 3D structured light camera.
  • the structured light camera can collect facial feature point clouds, which can form 3D images of human faces.
  • the images presented on electronic devices are two-dimensional images.
  • the depth corresponding to each position on the image cannot be displayed in a two-dimensional image.
  • a structured light camera acquires a 3D image, it not only acquires the color of each position in the image, but also the depth of each position.
  • the principle of structured light is that a light source projects an invisible grating to form characteristic stripes or an image, and corresponding three-dimensional image data, such as a three-dimensional face model, is then calculated based on the distribution and distortion of the pattern.
  • the structured light camera may collect a facial feature point cloud 802 of the user 801.
  • the facial feature point cloud 802 may represent facial contour information of the user 801.
  • the electronic device 100 can construct a three-dimensional face model of the user 801.
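As a toy illustration of going from the facial feature point cloud to face geometry, the sketch below aggregates the collected points into a bounding box; an actual implementation would fit a deformable three-dimensional face mesh to the point cloud, which is well beyond this example.

```kotlin
// One sample of the facial feature point cloud: x/y on the image plane plus depth.
data class Point3(val x: Double, val y: Double, val z: Double)

// A trivially "fitted" face model: the bounding box of the point cloud.
// Real reconstruction would fit a deformable 3D face mesh to these points.
data class FaceBounds(val min: Point3, val max: Point3)

fun fitFaceBounds(cloud: List<Point3>): FaceBounds = FaceBounds(
    min = Point3(cloud.minOf { it.x }, cloud.minOf { it.y }, cloud.minOf { it.z }),
    max = Point3(cloud.maxOf { it.x }, cloud.maxOf { it.y }, cloud.maxOf { it.z })
)

fun main() {
    val cloud = listOf(
        Point3(0.10, 0.20, 0.45),   // forehead sample
        Point3(0.12, 0.35, 0.43),   // nose tip (closer to the camera -> smaller depth)
        Point3(0.08, 0.50, 0.47)    // chin sample
    )
    println(fitFaceBounds(cloud))
}
```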
  • the collection of the depth information of each position in the image is not limited to the structured light camera, and the electronic device 100 may also estimate the depth information of each position in the image based on an optical camera through algorithms such as deep learning.
  • the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
  • the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer parts than shown, or some parts may be combined, or some parts may be split, or different parts may be arranged.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and the like.
  • different processing units may be independent devices or integrated in one or more processors.
  • the controller may be a nerve center and a command center of the electronic device 100, and a decision maker who instructs each component of the electronic device 100 to coordinate work according to instructions.
  • the controller can generate operation control signals according to the instruction operation code and timing signals, and complete the control of fetching and executing instructions.
  • the processor 110 may further include a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instruction or data again, it can be directly called from the memory. Repeated accesses are avoided and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
  • the I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be respectively coupled to a touch sensor 180K, a charger, a flash, a camera 193, and the like through different I2C bus interfaces.
  • the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through an I2S interface, so as to implement a function of receiving a call through a Bluetooth headset.
  • the PCM interface can also be used for audio communication, to sample, quantize, and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement the function of receiving calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus for asynchronous communication.
  • the bus may be a two-way communication bus. It converts the data to be transferred between serial and parallel communications.
  • a UART interface is typically used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with a Bluetooth module in the wireless communication module 160 through a UART interface to implement a Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to implement a function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display 194, the camera 193, and the like.
  • the MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement a shooting function of the electronic device 100.
  • the processor 110 and the display screen 194 communicate through a DSI interface to implement a display function of the electronic device 100.
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and a peripheral device. It can also be used to connect headphones and play audio through headphones. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiments of the present invention is only a schematic description, and does not constitute a limitation on the structure of the electronic device 100.
  • the electronic device 100 may also adopt different interface connection modes or a combination of multiple interface connection modes in the above embodiments.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. While the charge management module 140 is charging the battery 142, the power management module 141 can also provide power to the electronic device.
  • the power management module 141 is used to connect the battery 142, the charge management module 140 and the processor 110.
  • the power management module 141 receives inputs from the battery 142 and / or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, number of battery cycles, battery health (leakage, impedance) and other parameters.
  • the power management module 141 may also be disposed in the processor 110.
  • the power management module 141 and the charge management module 140 may be provided in the same device.
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, and a baseband processor.
  • the antenna 1 and the antenna 2 are used for transmitting and receiving electromagnetic wave signals.
  • Each antenna in the electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization.
  • antenna 1 can be multiplexed into a diversity antenna for a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide a wireless communication solution including 2G / 3G / 4G / 5G and the like applied on the electronic device 100.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • the mobile communication module 150 may receive the electromagnetic wave by the antenna 1, and perform filtering, amplification, and other processing on the received electromagnetic wave, and transmit it to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic wave radiation through the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is configured to modulate a low-frequency baseband signal to be transmitted into a high-frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be a separate device.
  • the modem processor may be independent of the processor 110 and disposed in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 may provide wireless communication solutions applied to the electronic device 100, including a wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), a global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 160 may be one or more devices that integrate at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it into electromagnetic wave radiation through the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing and is connected to the display 194 and an application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • the display panel can adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini LED, a micro LED, a micro OLED, a quantum dot light emitting diode (QLED), or the like.
  • the electronic device 100 may include one or N display screens 194, where N is a positive integer greater than one.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP processes the data fed back from the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the photosensitive element of the camera through the lens, the optical signal is converted into an electrical signal, and the photosensitive element of the camera passes the electrical signal to the ISP for processing, to convert it into an image visible to the naked eye. The ISP can also optimize the noise, brightness, and skin tone of the image. The ISP can also optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, an ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • An object generates an optical image through a lens and projects it onto a photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs digital image signals to the DSP for processing.
  • DSP converts digital image signals into image signals in standard RGB, YUV and other formats.
  • the electronic device 100 may include one or N cameras 193, where N is a positive integer greater than 1.
  • a digital signal processor is used to process digital signals. In addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy and the like.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as: moving picture expert groups (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • the NPU is a neural-network (NN) computing processor.
  • the NPU can quickly process input information and continuously learn by itself.
  • the NPU can implement applications such as intelligent cognition of the electronic device 100, for example, image recognition, face recognition, speech recognition, and text understanding.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, save music, videos and other files on an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area may store an operating system, at least one application required by a function (such as a sound playback function, an image playback function, etc.) and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, and an application processor. Such as music playback, recording, etc.
  • the audio module 170 is configured to convert digital audio information into an analog audio signal and output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
  • the speaker 170A, also called a "horn", is used to convert an audio electrical signal into a sound signal.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also referred to as the "earpiece", is used to convert an audio electrical signal into a sound signal.
  • when the electronic device 100 answers a call or receives a voice message, the user can listen to the voice by holding the receiver 170B close to the ear.
  • the microphone 170C, also called a "mike" or "mic", is used to convert a sound signal into an electrical signal.
  • the user can make a sound through the mouth near the microphone 170C, and input a sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, in addition to collecting sound signals, it may also implement a noise reduction function. In other embodiments, the electronic device 100 may further be provided with three, four, or more microphones 170C to achieve sound signal collection, noise reduction, identification of sound sources, and directional recording.
  • the headset interface 170D is used to connect a wired headset.
  • the headphone interface 170D may be a USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense a pressure signal, and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be disposed on the display screen 194.
  • the capacitive pressure sensor may include at least two parallel plates made of a conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position based on the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation with a touch operation intensity lower than a first pressure threshold is applied to the short message application icon, an instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold is applied to the short message application icon, an instruction for creating a short message is executed.
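  • as an illustration of the threshold-based dispatch described above, the following minimal Java sketch maps a touch on the short message icon to an operation instruction; the threshold value and the instruction names are assumptions for illustration only and are not taken from this embodiment.
```java
/** Illustrative sketch: choose an operation instruction from the touch pressure. */
public class PressureDispatcher {
    // Hypothetical threshold; a real device would calibrate this value.
    private static final float FIRST_PRESSURE_THRESHOLD = 0.6f;

    enum Instruction { VIEW_SHORT_MESSAGE, CREATE_SHORT_MESSAGE }

    static Instruction dispatch(float touchPressure) {
        // Same touch position, different intensity -> different operation instruction.
        return (touchPressure < FIRST_PRESSURE_THRESHOLD)
                ? Instruction.VIEW_SHORT_MESSAGE
                : Instruction.CREATE_SHORT_MESSAGE;
    }

    public static void main(String[] args) {
        System.out.println(dispatch(0.3f)); // prints VIEW_SHORT_MESSAGE
        System.out.println(dispatch(0.9f)); // prints CREATE_SHORT_MESSAGE
    }
}
```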
  • the gyro sensor 180B may be used to determine a movement posture of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the angle at which the electronic device 100 shakes, calculates, according to the angle, the distance that the lens module needs to compensate, and allows the lens to cancel the shake of the electronic device 100 through reverse motion, thereby implementing image stabilization.
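  • a minimal Java sketch of the compensation calculation described above, assuming a simple pinhole model in which an angular shake displaces the image by roughly the focal length times the tangent of the shake angle; the model and the numeric values are assumptions and not part of this embodiment.
```java
/** Illustrative sketch: compute the lens compensation for image stabilization. */
public class StabilizationSketch {
    /**
     * Assumes a pinhole model: an angular shake of shakeAngleRad displaces the image by
     * roughly focalLength * tan(angle); the lens module moves by the same amount in the
     * opposite direction to cancel the shake.
     */
    static double compensationMillimeters(double shakeAngleRad, double focalLengthMm) {
        return -focalLengthMm * Math.tan(shakeAngleRad);
    }

    public static void main(String[] args) {
        double angle = Math.toRadians(0.5); // e.g. a 0.5 degree shake with a 4 mm lens
        System.out.printf("compensate by %.4f mm%n", compensationMillimeters(angle, 4.0));
    }
}
```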
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the barometric pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude through the air pressure value measured by the air pressure sensor 180C, and assists in positioning and navigation.
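  • the altitude calculation mentioned above can be sketched with the commonly used international barometric formula; the sea-level reference pressure and the choice of formula are assumptions, since this embodiment does not specify how the altitude is derived from the air pressure value.
```java
/** Illustrative sketch: estimate altitude from the pressure reported by sensor 180C. */
public class AltitudeSketch {
    private static final double SEA_LEVEL_HPA = 1013.25; // standard atmosphere, assumed reference

    // International barometric formula (one common approximation).
    static double altitudeMeters(double pressureHpa) {
        return 44330.0 * (1.0 - Math.pow(pressureHpa / SEA_LEVEL_HPA, 1.0 / 5.255));
    }

    public static void main(String[] args) {
        System.out.printf("~%.1f m%n", altitudeMeters(954.6)); // roughly 500 m
    }
}
```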
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can detect the opening and closing of the flip leather case by using the magnetic sensor 180D.
  • the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D. Further, according to the opened and closed state of the holster or the opened and closed state of the flip cover, characteristics such as automatic unlocking of the flip cover are set.
  • the acceleration sensor 180E can detect the magnitude of acceleration of the electronic device 100 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the electronic device 100 is stationary. It can also be used to recognize the posture of electronic devices, and is used in applications such as switching between horizontal and vertical screens, and pedometers.
  • the distance sensor 180F is used to measure a distance. The electronic device 100 can measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F to measure the distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light through a light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared reflected light from a nearby object. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficiently reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100.
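  • a minimal Java sketch of the reflected-light decision described above; the threshold is a made-up ADC value, since the embodiment only states that "sufficient" reflected light indicates a nearby object.
```java
/** Illustrative sketch: proximity decision from the photodiode reading. */
public class ProximitySketch {
    private static final int REFLECTED_LIGHT_THRESHOLD = 120; // hypothetical ADC counts

    static boolean objectNearby(int reflectedLightReading) {
        // Sufficient reflected infrared light -> an object is near the electronic device.
        return reflectedLightReading >= REFLECTED_LIGHT_THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(objectNearby(200)); // true  -> e.g. screen can be turned off during a call
        System.out.println(objectNearby(15));  // false
    }
}
```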
  • the electronic device 100 may use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • Ambient light sensor 180L can also be used to automatically adjust white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touch.
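  • one possible way to combine the two sensor readings for the pocket check is sketched below in Java; both thresholds are assumptions, and real devices would calibrate them per hardware.
```java
/** Illustrative sketch: combine ambient light and proximity readings to detect pocket mode. */
public class PocketModeSketch {
    // Hypothetical thresholds; not specified by this embodiment.
    private static final float DARK_LUX_THRESHOLD = 5.0f;
    private static final int PROXIMITY_NEAR_THRESHOLD = 120;

    /** A dark environment plus a nearby object suggests the device is in a pocket. */
    static boolean inPocket(float ambientLux, int reflectedLightReading) {
        return ambientLux < DARK_LUX_THRESHOLD && reflectedLightReading >= PROXIMITY_NEAR_THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(inPocket(1.2f, 200)); // true  -> ignore touches to prevent accidental operation
        System.out.println(inPocket(300f, 10));  // false
    }
}
```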
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 may use the collected fingerprint characteristics to realize fingerprint unlocking, access application lock, fingerprint photographing, fingerprint answering an incoming call, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds the threshold, the electronic device 100 performs a performance reduction of a processor located near the temperature sensor 180J so as to reduce power consumption and implement thermal protection.
  • when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown of the electronic device 100 caused by the low temperature.
  • when the temperature is lower than still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by the low temperature.
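  • the temperature processing strategy can be sketched as a simple threshold policy, as in the Java sketch below; all threshold values are assumptions, since the embodiment does not specify them.
```java
/** Illustrative sketch: a temperature processing strategy with hypothetical thresholds. */
public class ThermalPolicySketch {
    static String decide(double temperatureCelsius) {
        if (temperatureCelsius > 45.0) {
            // Throttle a processor near sensor 180J to reduce power consumption.
            return "REDUCE_PROCESSOR_PERFORMANCE";
        } else if (temperatureCelsius < 0.0) {
            // Heat the battery 142 to avoid an abnormal low-temperature shutdown.
            return "HEAT_BATTERY";
        } else if (temperatureCelsius < 5.0) {
            // Boost the battery output voltage to avoid an abnormal shutdown.
            return "BOOST_BATTERY_OUTPUT_VOLTAGE";
        }
        return "NO_ACTION";
    }

    public static void main(String[] args) {
        System.out.println(decide(50.0)); // REDUCE_PROCESSOR_PERFORMANCE
        System.out.println(decide(-3.0)); // HEAT_BATTERY
        System.out.println(decide(3.0));  // BOOST_BATTERY_OUTPUT_VOLTAGE
    }
}
```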
  • the touch sensor 180K is also called “touch panel”.
  • the touch sensor 180K may be disposed on the display screen 194, and the touch screen is composed of the touch sensor 180K and the display screen 194, which is also referred to as a "touch screen”.
  • the touch sensor 180K is used to detect a touch operation acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • a visual output related to the touch operation may be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the position where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire a vibration signal of a vibrating bone mass of a human vocal part.
  • Bone conduction sensor 180M can also contact the human pulse and receive blood pressure beating signals.
  • the bone conduction sensor 180M may also be disposed in the earphone and combined into a bone conduction earphone.
  • the audio module 170 may parse out a voice signal based on the vibration signal of the vibrating bone mass of the vocal part obtained by the bone conduction sensor 180M, to implement a voice function.
  • the application processor may analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M to implement a heart rate detection function.
  • the keys 190 include a power-on key, a volume key, and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 may receive a key input, and generate a key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 may generate a vibration alert.
  • the motor 191 can be used for vibration alert for incoming calls, and can also be used for touch vibration feedback.
  • the touch operation applied to different applications can correspond to different vibration feedback effects.
  • Acting on touch operations in different areas of the display screen 194, the motor 191 can also correspond to different vibration feedback effects.
  • Different application scenarios (such as time reminders, receiving information, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • Touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging status, power change, and can also be used to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be brought into contact with or separated from the electronic device 100 by being inserted into or removed from the SIM card interface 195.
  • the electronic device 100 may support one or N SIM card interfaces, and N is a positive integer greater than 1.
  • SIM card interface 195 can support Nano SIM card, MicroSIM card, SIM card, etc. Multiple SIM cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface 195 may also be compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through a SIM card to implement functions such as calling and data communication.
  • the electronic device 100 uses an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • FIG. 2 is a schematic diagram illustrating an example of a communication scenario to which an augmented reality communication method according to this embodiment is applied.
  • the first electronic device (such as the electronic device 200) and the second electronic device (such as the electronic device 300) each include a first camera (such as a front camera), a second camera (such as a rear camera), and a touch screen. For the structures of the electronic device 200 and the electronic device 300, reference may be made to the description of the electronic device 100 shown in FIG. 1 in the embodiment of the present application, and details are not described in the following embodiments.
  • a first user (such as the user 210) uses the electronic device 200 to perform wireless communication with a second user (such as the user 310) who uses the electronic device 300.
  • the scenario 220 shown in FIG. 2 is a realistic scenario in which the user 210 uses the electronic device 200, that is, a first realistic scenario.
  • the scene 320 shown in FIG. 2 is a real scene where the user 310 uses the electronic device 300, that is, a second real scene.
  • the communication between the electronic device 200 and the electronic device 300 may be voice communication or video communication.
  • the electronic device 200 may call a "phone" application or a third-party communication application (such as WeChat, QQ, etc.) to perform voice communication or video communication with the electronic device 300.
  • an application program (referred to as an AR application) for supporting the electronic device to perform AR communication may be installed in the electronic device 200 and the electronic device 300.
  • the electronic device 200 may call the AR application to perform AR communication with the electronic device 300.
  • a second camera (referred to as a second camera a, such as a rear camera a) of the electronic device 200 may collect an image of the scene 220.
  • the electronic device 200 may display the AR communication interface 301 shown in FIG. 3A on a touch screen or other display device.
  • the electronic device 200 may add the AR model of the user 310 (ie, the second AR model a, such as the AR model 311) at a specific position in the AR communication interface 301.
  • the second camera of the electronic device 300 can collect an image of the scene 320 where the user 310 is located, and the electronic device 300 displays the AR communication interface 302 shown in FIG. 3B on a touch screen or other display device.
  • the electronic device 300 may add the AR model of the user 210 (ie, the first AR model b, such as the AR model 211) at a specific position in the AR communication interface 302.
  • the AR model 211 can make corresponding behaviors according to the behavior information of the user 210 (such as facial expressions, body motions, etc.).
  • the AR model 311 can make corresponding behaviors according to the behaviors of the user 310 (for example, facial expressions or body motions, etc.).
  • through the first camera (referred to as the first camera b, such as the front camera b), the electronic device 300 may collect behavior information (for example, facial feature information or body motion information) of the user 310. The facial feature information is used to characterize the user's facial expressions and motion changes.
  • the body movement information can be used to characterize a user's body movement changes.
  • the electronic device 200 may receive the behavior information of the user 310 sent by the electronic device 300, and may control the facial expression and body motion of the AR model 311 according to the behavior information of the user 310.
  • the electronic device 200 may display a dynamic image of the AR model 311 talking with the user 210 in the scene 220 where the user 210 is located.
  • the facial expressions and body movements of the AR model 311 may change accordingly as the facial expressions and body movements of the user 310 change.
  • the electronic device 200 may provide the user 210 with a real experience in which the user 310 talks with the user 210 in the scene 220.
  • the above AR model 211 can make corresponding behaviors according to the behaviors of the user 210 (for example, facial expressions or body motions, etc.).
  • the electronic device 200 may collect the behavior information of the user 210 through the first camera (referred to as the first camera a, such as the front camera a). The electronic device 300 may receive the behavior information of the user 210 sent by the electronic device 200, and control the facial expression and body motion of the AR model 211 according to the behavior information of the user 210.
  • the electronic device 300 may display a dynamic image of the AR model 211 talking to the user 310 in the scene 320 where the user 310 is located.
  • the facial expression and body movement of the AR model 211 may change accordingly as the facial expression and body movement of the user 210 changes.
  • the electronic device 300 may provide the user 310 with a real experience in which the user 210 talks with the user 310 in the scene 320. Therefore, this method improves the user experience.
  • the electronic device 200 may further display a dynamic image of the AR model 311 and the AR model of the user 210 (ie, the first AR model a, such as the AR model 212) talking face-to-face in the scene 220.
  • the electronic device 200 may not only add the AR model 311 to the AR communication interface 303 shown in FIG. 3C, but also add the AR model 212 to the AR communication interface 303 shown in FIG. 3C.
  • the electronic device 200 may control the facial expression and body motion of the AR model 212 according to the behavior information of the user 210 collected by the front camera a. That is, the facial expressions and body movements of the AR model 212 change accordingly as the facial expressions and body movements of the user 210 change.
  • the electronic device 300 may also display a dynamic image of the AR model of the user 310 (ie, the second AR model b, such as the AR model 312) and the AR model 211 talking in the scene 320. Specifically, the electronic device 300 may not only add the AR model 211 to the AR communication interface 304 shown in FIG. 3D, but also add the AR model 312 to the AR communication interface 304 shown in FIG. 3D.
  • the electronic device 300 controls the facial expression and body movement of the AR model 211 according to the behavior information of the user 210 received from the electronic device 200.
  • the electronic device 300 controls the facial expression and body movement of the AR model 312 according to the behavior information of the user 310 that can be collected by the front camera b. That is, the facial expressions and body movements of the AR model 312 change accordingly as the facial expressions and body movements of the user 310 change.
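  • a minimal Java sketch of how received or locally collected behavior information could drive an AR model's facial expressions and body movements; the BehaviorInfo fields and the ArModel interface are hypothetical stand-ins for whatever the AR engine actually exposes, which this embodiment does not prescribe.
```java
import java.util.Map;

/** Illustrative sketch: drive a local AR model from a user's behavior information. */
public class ArModelDriver {

    /** Hypothetical container for the behavior information of a user. */
    static class BehaviorInfo {
        final Map<String, Float> facialBlendShapes; // e.g. "mouthOpen" -> 0.8f (facial expression)
        final float[] bodyJointAngles;              // body motion information

        BehaviorInfo(Map<String, Float> facialBlendShapes, float[] bodyJointAngles) {
            this.facialBlendShapes = facialBlendShapes;
            this.bodyJointAngles = bodyJointAngles;
        }
    }

    /** Hypothetical AR model handle; in practice this would come from the AR engine. */
    interface ArModel {
        void setBlendShape(String name, float weight);
        void setJointAngles(float[] anglesRad);
    }

    /** Apply the behavior information so the AR model mirrors the corresponding user. */
    static void apply(BehaviorInfo info, ArModel model) {
        info.facialBlendShapes.forEach(model::setBlendShape);
        model.setJointAngles(info.bodyJointAngles);
    }
}
```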
  • the electronic device 200 can provide the user 210 with a real experience of talking with the user 310 in the scenario 220.
  • the electronic device 300 may also provide the user 310 with a real experience in which the user 210 talks with the user 310 in the scene 320.
  • in response to a recording operation of the user, the electronic device 200 may record video data including the AR model 311 and the AR model 212 during the voice communication or video communication (recorded as video data 1).
  • the electronic device 300 may record video data (recorded as video data 2) of the AR model 312 and the AR model 211 during the voice communication or video communication between the electronic device 300 and the electronic device 200 in response to a recording operation of the user.
  • the electronic device 200 may send the video data 1 to other electronic devices, so that users of other electronic devices can learn and understand the content involved in the communication.
  • the electronic device 200 may upload the video data 1 to a public network platform, so that more users can learn and understand the content involved in this communication.
  • the AR model in this embodiment may be an AR model of a cartoon character or a real AR model of a user.
  • the AR model of the cartoon character may be an AR model downloaded by an electronic device from a server (such as an AR server).
  • the user's real AR model may be an AR model constructed by the electronic device according to the user's feature information (such as facial feature information and shape feature information).
  • the facial feature information of the user may include a facial feature point cloud of the user and a facial image of the user.
  • the user's physical feature information may include the user's physical feature point cloud and the user's physical image.
  • the “body” in the embodiment of the present application includes body parts other than the head in the user's body.
  • a real AR model of the user 210 is constructed by the electronic device 200 as an example.
  • the electronic device 200 may collect the facial feature information of the user 210 and the facial image of the user 210 through the front camera a or the rear camera a, then construct a 3D model of the user 210 (i.e., a real AR model of the user 210) according to the collected information, and save the 3D model.
  • the electronic device 200 may also construct a 3D face model of the user 210 according to the facial feature information of the user and the face image of the user.
  • the AR model 311 shown in FIGS. 3A and 3C and the AR model 211 shown in FIGS. 3B and 3D are AR models of cartoon characters.
  • the AR model 212 shown in FIG. 3C is a real AR model of the user 210
  • the AR model 312 shown in FIG. 3D is a real AR model of the user 310.
  • for the method for the electronic device to collect the user's feature information through the camera (the first camera or the second camera), and the specific method for the electronic device to construct the 3D model of the user (that is, the AR model) according to the feature information, reference may be made to methods for constructing a 3D model of a user in the prior art, and details are not described in the embodiments of the present application.
  • the communication between the electronic device 200 and the electronic device 300 may be a voice communication or a video communication.
  • the electronic device 200 receives a first operation of the user 210. This first operation is used to trigger the electronic device 200 to perform AR communication.
  • the electronic device 200 may receive a first operation of the user 210 (for example, S902 shown in FIG. 9B).
  • the electronic device 200 may receive a first operation of the user 210 on the voice communication interface or the video communication interface.
  • the first operation may be a first preset gesture input by a user on a voice communication interface or a video communication interface, such as an S-shaped gesture or a swipe-up gesture.
  • the first operation may be an S-shaped gesture input by the user in the voice communication interface 401 shown in FIG. 4A or the video communication interface 403 shown in FIG. 4B.
  • the above-mentioned voice communication interface or video communication interface may include an "AR communication” button.
  • the “AR communication” button may be used to trigger the electronic device 200 to perform AR communication.
  • the above first operation may be a user's operation on the "AR communication" button, such as a single-click operation, a double-click operation, or a long-press operation.
  • the voice communication interface 401 shown in FIG. 4A includes an “AR communication” button 402.
  • the video communication interface 403 shown in FIG. 4B includes an “AR communication” button 404.
  • the above-mentioned first operation may be a user's single-click operation on the "AR communication” button 402 or the "AR communication” button 404.
  • the electronic device 200 may receive a first operation of the user 210 on a first preset key in the electronic device 200.
  • the first operation may be a single-click operation, a double-click operation, or a long-press operation.
  • the first preset key is a physical key or a combination of a plurality of physical keys in the electronic device 200.
  • the first preset button may be a dedicated physical button in the electronic device 200 for triggering the electronic device 200 to perform AR communication.
  • the dedicated physical button may be disposed on a side frame or an upper frame of the electronic device 200.
  • the first preset key may be a combination key composed of a "volume +" key and a "volume-" key.
  • the electronic device 200 may send AR communication request information to the electronic device 300 (for example, S903 shown in FIG. 9B).
  • the electronic device 300 may start the front camera b and the rear camera b.
  • the rear camera b can collect images of the scene 320.
  • the electronic device 300 displays an image of the scene 320 collected by the rear camera b on the touch screen.
  • the electronic device 300 may display the AR communication interface 504 shown in FIG. 5B.
  • the front camera b can collect a facial feature point cloud of the user 310.
  • the facial feature point cloud of the user 310 is used to characterize the facial contour information of the user 310.
  • the electronic device 300 may construct a three-dimensional model b (such as a three-dimensional face model b) of the user 310 according to the characteristic point cloud of the user 310 (such as a facial feature point cloud and a shape feature point cloud) collected by the front camera b.
  • the electronic device 300 may present the first prompt information (for example, S904 shown in FIG. 9B).
  • the first prompt information is used to confirm whether the user 310 agrees that the electronic device 300 performs AR communication with the electronic device 200.
  • the electronic device 300 may send voice prompt information, that is, the first prompt information is voice prompt information.
  • the electronic device 300 may issue a voice prompt message “Please confirm whether to perform AR communication with the user 210?”.
  • the electronic device 300 may display a graphical user interface including the first prompt information.
  • the electronic device 300 may also display a graphical user interface including the first prompt information, and issue a vibration prompt or the above-mentioned voice prompt information.
  • the electronic device 300 may display a graphical user interface (GUI) 501 shown in FIG. 5A on the touch screen.
  • the graphical user interface 501 includes a first prompt message “Please confirm whether to perform AR communication with the user 210?”.
  • the electronic device 300 may first authenticate the legality of the electronic device 200. For example, the electronic device 300 may determine whether the device identification information of the electronic device 200 is in an AR communication whitelist or blacklist. Exemplarily, if the electronic device 300 determines that the device identification information of the electronic device 200 is in the AR communication whitelist (or is not in the AR communication blacklist), the above first prompt information is presented on the electronic device 300. If it is determined that the electronic device 200 is not in the AR communication whitelist (or is in the AR communication blacklist), the electronic device 300 may not respond to the AR communication request, or may return, to the electronic device 200, AR communication response information indicating that the AR communication is rejected. Exemplarily, the above device identification information may be a phone number of the electronic device 200.
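  • a minimal Java sketch of the legality check described above, matching the peer's device identification information (such as a phone number) against a whitelist and a blacklist; the policy for callers on neither list is an assumption, since the embodiment leaves it open.
```java
import java.util.Set;

/** Illustrative sketch: legality check of the peer before presenting the first prompt. */
public class ArCallScreening {
    enum Decision { PRESENT_FIRST_PROMPT, REJECT_OR_IGNORE }

    private final Set<String> whitelist; // device identification info, e.g. phone numbers
    private final Set<String> blacklist;

    ArCallScreening(Set<String> whitelist, Set<String> blacklist) {
        this.whitelist = whitelist;
        this.blacklist = blacklist;
    }

    Decision screen(String peerDeviceId) {
        if (blacklist.contains(peerDeviceId)) return Decision.REJECT_OR_IGNORE;
        if (whitelist.contains(peerDeviceId)) return Decision.PRESENT_FIRST_PROMPT;
        // Policy for unknown callers is implementation-specific; rejecting is one option.
        return Decision.REJECT_OR_IGNORE;
    }

    public static void main(String[] args) {
        ArCallScreening s = new ArCallScreening(Set.of("+861234567890"), Set.of("+860000000000"));
        System.out.println(s.screen("+861234567890")); // PRESENT_FIRST_PROMPT
    }
}
```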
  • the electronic device 300 may receive an operation (ie, a fifth operation) (for example, S905 shown in FIG. 9B) or a sixth operation (for example, S915 shown in FIG. 9B) that the user agrees to perform AR communication.
  • This fifth operation is used to instruct the user 310 to agree that the electronic device 300 performs AR communication with the electronic device 200.
  • the sixth operation is used to indicate that the user 310 does not agree that the electronic device 300 performs AR communication with the electronic device 200.
  • the fifth operation may be a second preset gesture (such as a swipe up gesture) input by the user on the touch screen of the electronic device 300 after the electronic device 300 presents the first prompt information, or a user's click operation on the second preset key.
  • the sixth operation may be a third preset gesture (such as a downward swipe gesture) input by the user on the touch screen of the electronic device 300 after the electronic device 300 presents the first prompt information, or a user's click operation on a third preset key.
  • the second preset gesture is different from the third preset gesture.
  • the second preset button and the third preset button may be a combination of multiple physical buttons in the electronic device 300.
  • the second preset button is different from the third preset button.
  • the second preset key may be a combination key composed of a “volume +” key and a “lock screen” key.
  • the third preset key may be a combination key composed of a "volume-" key and a "lock screen” key.
  • the graphic user interface 501 may further include an "OK" button 502 and a "NO” button 503.
  • the fifth operation may be a user's click operation (such as a click operation) on the "OK" button 502.
  • the sixth operation may be a user's click operation on the “NO” button 503.
  • the electronic device 300 responds to the fifth operation of the user 310 (such as the user's click operation on the “OK” button 502 shown in FIG. 5A).
  • the electronic device 300 may start the front camera b and the rear camera b (for example, S907 shown in FIG. 9B).
  • the image collected by the rear camera b is displayed on the touch screen of the electronic device 300, and a three-dimensional model b of the user 310 is constructed based on the feature point cloud b collected by the front camera b (for example, S908 shown in FIG. 9B).
  • the three-dimensional model b may form a mapping relationship with an AR model (such as the AR model 311 and the AR model 312) of the user 310. In this way, according to the mapping relationship, the facial expressions and body movements of the AR model 311 and the AR model 312 can be controlled by using the behavior information of the user 310.
  • the electronic device 300 may send the first AR communication response information to the electronic device 200 (for example, S906 shown in FIG. 9B).
  • the first AR communication response information is used to indicate that the electronic device 300 agrees to perform AR communication with the electronic device 200.
  • the electronic device 200 receives the first AR communication response information (for example, S910 shown in FIG. 9B) sent by the electronic device 300.
  • the electronic device 200 may establish an AR communication link with the electronic device 300. After the AR communication link is established, the electronic device 200 may start the front camera a and the rear camera a (for example, S911 shown in FIG. 9B).
  • the rear camera a can collect an image of the scene 220.
  • the electronic device 200 may display an image collected by the rear camera a on the touch screen (for example, S912 shown in FIG. 9B).
  • the electronic device 200 may display the AR communication interface 601 shown in FIG. 6A.
  • the AR communication interface 601 includes an image of a scene 220.
  • the front camera a can collect a real-time feature point cloud of the user 210.
  • the real-time feature point cloud of the user 210 is used to characterize the face contour information and shape contour information of the user 210.
  • the electronic device 200 may construct a three-dimensional model a (such as a three-dimensional face model a) of the user 210 according to the real-time feature point cloud of the user 210 (for example, S912 shown in FIG. 9B).
  • the three-dimensional model a may form a mapping relationship with an AR model (such as the AR model 211 and the AR model 212) of the user 210.
  • the facial expressions and body movements of the AR model 211 and the AR model 212 can be controlled by using the behavior information of the user 210.
  • if the electronic device 300 receives the sixth operation of the user 310 (such as a click operation, for example a single-click operation, on the "NO" button 503), it indicates that the user 310 refuses to perform AR communication with the electronic device 200 (for example, S915 shown in FIG. 9B).
  • the electronic device 300 may send the second AR communication response information to the electronic device 200 (for example, S916 shown in FIG. 9B).
  • the second AR communication response information is used to indicate that the electronic device 300 refuses to perform AR communication with the electronic device 200.
  • the electronic device 200 may receive the second AR communication response information (for example, S917 shown in FIG. 9B) sent by the electronic device 300.
  • the electronic device 200 may present second prompt information (for example, S918 shown in FIG. 9B). The second prompt information is used to indicate that the electronic device 300 refuses to perform AR communication with the electronic device 200.
  • the electronic device 200 and the electronic device 300 may continue to perform voice communication or video communication (for example, S919 shown in FIG. 9B).
  • the electronic device 200 and the electronic device 300 may end the voice communication or video communication described in S901.
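  • the request/response flow described above, seen from the calling side (the electronic device 200), can be sketched as follows; the Peer and Callbacks interfaces are hypothetical placeholders for the transport and user-interface layers, which this embodiment does not prescribe.
```java
/** Illustrative sketch of the AR communication request/response flow on the calling side. */
public class ArHandshakeSketch {
    enum ArResponse { AGREE, REJECT } // first / second AR communication response information

    /** Hypothetical transport; in practice this rides on the existing voice/video session. */
    interface Peer {
        ArResponse requestArCommunication(String callerDeviceId);
    }

    /** Hypothetical hooks into the calling device's UI and camera control. */
    interface Callbacks {
        void startCamerasAndShowArScene();   // start front/rear cameras, display the scene image
        void presentSecondPrompt();          // "the peer refused AR communication"
        void continueVoiceOrVideoCall();
    }

    static void onFirstOperation(Peer peer, Callbacks ui, String callerDeviceId) {
        // First operation received (gesture or "AR communication" button): send the request.
        ArResponse response = peer.requestArCommunication(callerDeviceId);
        if (response == ArResponse.AGREE) {
            ui.startCamerasAndShowArScene();  // AR communication link established
        } else {
            ui.presentSecondPrompt();
            ui.continueVoiceOrVideoCall();    // or end the call, per the embodiment
        }
    }
}
```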
  • the electronic device 300 may add an AR model to the image of the scene 320 in response to a user operation (such as a second operation) (for example, S909 shown in FIG. 9B).
  • the electronic device 200 may also add an AR model to the image of the scene 220 in response to a user operation (such as the second operation) (for example, S913 shown in FIG. 9B) .
  • the electronic device 200 is taken as an example to describe a method for adding an AR model to an image of a real scene by the electronic device:
  • the AR model 311 shown in FIG. 3A or FIG. 3C and the AR model 212 shown in FIG. 3C may be added to the image of the scene 220 by the electronic device 200 in response to an operation (such as the second operation) of the user 210.
  • the electronic device 200 may display the second AR communication interface a (such as the AR communication interface 601 shown in FIG. 6A).
  • the AR communication interface 601 includes an image of the scene 220, but does not include an AR model 311 and an AR model 212.
  • the electronic device 200 may receive the second operation of the AR communication interface 601 by the user 210; in response to the second operation, the electronic device 200 may add the AR model of the user 210 and / or the user 310 to the image of the scene 220.
  • the second operation may be a fourth preset gesture input by the user in the AR communication interface 601, such as an S-shaped gesture or a swipe-up gesture. In some embodiments, as shown in FIG. 6A, the AR communication interface 601 may further include an "AR model" button 602.
  • the above-mentioned second operation may be an operation on the "AR model" button 602 by the user 210, such as a single-click operation, a double-click operation, or a long-press operation.
  • the electronic device 200 may display the model selection interface 603 shown in FIG. 6B.
  • the model selection interface 603 includes a "called party AR model” option 604 and a "calling party AR model” option 605.
  • the model selection interface 603 may include options for at least two AR models.
  • the model selection interface 603 includes an "AR model 1" option 606 and an "AR model 2" option 607.
  • the AR model corresponding to the “AR model 1” option 606 and the AR model corresponding to the “AR model 2” option 607 may be stored in the electronic device 200 in advance.
  • the AR model corresponding to the "AR model 1" option 606 and the AR model corresponding to the "AR model 2" option 607 may be stored in the cloud (such as an AR server).
  • the electronic device 200 may download the AR model from the cloud in response to a user's selection operation (such as a first selection operation) of options of the AR model.
  • the electronic device 200 may further receive the AR model of the user 310 sent by the electronic device 300. That is, the electronic device 200 may choose to have the AR model of the user 310 set by the counterpart of the communication (i.e., the electronic device 300). Based on this situation, as shown in FIG. 6B, the model selection interface 603 may further include a "Set AR model by counterpart" option 608. In response to the user's selection operation of the "Set AR model by counterpart" option 608, the electronic device 200 sends an AR model acquisition request to the electronic device 300 to obtain the AR model set by the electronic device 300 for the user 310, and adds the AR model from the electronic device 300 to the AR scene image included in the AR communication interface 601.
  • the electronic device 200 may also receive the AR model actively sent by the electronic device 300; in response to a user's selection operation (such as the first selection operation) of the "Set AR model by counterpart" option 608, the electronic device 200 may add the AR model from the electronic device 300 to the AR scene image included in the AR communication interface 601.
  • the electronic device 200 may save the AR model.
  • the model selection interface 603 may further include an “AR model (user 310)” option 609.
  • the electronic device 200 may set the corresponding AR model saved by the electronic device 200 as the AR model of the user 310.
  • the electronic device 200 may display the AR model option selected by the user in a preset display manner.
  • for example, in response to a user's selection operation (such as a click operation) on the "AR model 1" option 606, the electronic device 200 may display the "AR model 1" option 606 in white text on a black background, as shown in FIG. 6C.
  • the above-mentioned preset display manners include, but are not limited to, the manner of white text on a black background as shown in FIG. 6C.
  • the preset display mode may also include a bold display or a preset color (such as red) for display.
  • the electronic device 200 may receive a user's click operation (such as a single-click operation) on the "OK" button 610 in the model selection interface 612.
  • the electronic device 200 may add the AR model 311 corresponding to the "AR model 1" option 606 to the AR scene image shown in FIG. 6A, and display the AR communication interface 613 shown in FIG. 6D (equivalent to the AR communication interface 301 shown in FIG. 3A), that is, the third AR communication interface a.
  • the AR communication interface 613 shown in FIG. 6D includes the image of the scene 220 and the AR model 311, but does not include the AR model 212.
  • the electronic device 200 may also receive a user's click operation on the "AR model" button 602 in the AR communication interface 613, and reset the AR model of the user 310, or add an AR model for the user 210.
  • for the method in which the electronic device 200 responds to the user's click operation on the "AR model" button 602 and resets or adds the AR model, reference may be made to the foregoing related description, which is not repeated in this embodiment of the present application.
  • the electronic device 200 may receive a user's click operation (such as a single-click operation) on the "calling party AR model" option 605 in the model selection interface 612.
  • the electronic device 200 may display a model selection interface 701 shown in FIG. 7A.
  • the model selection interface 701 includes an "AR model a (native)" option 702, an "AR model b" option 703, an "AR model c" option 704, and the like.
  • the AR model corresponding to the AR model option in the model selection interface 701 may be stored in the electronic device 200 in advance.
  • the AR model can be stored in the cloud.
  • the electronic device 200 may download an AR model from the cloud in response to a user's selection operation of an option of the AR model.
  • the electronic device 200 may display the AR model option selected by the user by using the foregoing preset display manner. For example, in response to a user's click operation on the "AR model a (native)" option 702 in the model selection interface 701, the electronic device 200 may display the "AR model a (native)" option 702 in the foregoing preset display manner. After the electronic device 200 displays the model selection interface 705 shown in FIG. 7B, the electronic device 200 may receive a user's click operation (such as a single-click operation) on the "OK" button 610 in the model selection interface 705.
  • the electronic device 200 may add, to the AR scene image shown in FIG. 6A, the AR model 311 corresponding to the "AR model 1" option 606 and the AR model 212 corresponding to the "AR model a (native)" option 702, and display the AR communication interface 706 shown in FIG. 7C (equivalent to the AR communication interface 303 shown in FIG. 3C), that is, the first AR communication interface a.
  • the AR communication interface 706 shown in FIG. 7C includes an image of the scene 220, an AR model 311, and an AR model 212.
  • the AR model corresponding to the "AR model a (native)" option 702 may be obtained as follows: the electronic device 200 collects feature information a (such as facial feature information a) of the user 210 through the front camera a or the rear camera a, and then builds a 3D model of the user 210 (i.e., a real AR model of the user 210), such as a 3D face model, based on the feature information a.
  • the AR model 212 shown in FIG. 7C is a real AR model of the user 210.
  • the electronic device 300 may receive a user's click operation on the "AR model" option 505 to add an AR model to the AR communication interface 504 shown in FIG. 5B.
  • the AR communication interface 504 shown in FIG. 5B is a second AR communication interface b.
  • the AR communication interface 504 includes an image of the scene 320, but does not include an AR model 211 and an AR model 312.
  • for the method for adding an AR model to the AR communication interface 504 shown in FIG. 5B by the electronic device 300, reference may be made to the method for adding an AR model to the AR communication interface 601 shown in FIG. 6A by the electronic device 200, and details are not repeated in this embodiment of the present application.
  • the electronic device 300 may display an AR communication interface 302 shown in FIG. 3B or an AR communication interface 304 shown in FIG. 3D after adding an AR model to the AR communication interface 504 shown in FIG. 5B.
  • the AR communication interface 302 shown in FIG. 3B is a third AR communication interface b.
  • the AR communication interface 302 includes an image of the scene 320 and an AR model 211, but does not include an AR model 312.
  • the AR communication interface 304 shown in FIG. 3D is a first AR communication interface b.
  • the AR communication interface 304 includes an image of the scene 320, an AR model 211, and an AR model 312.
  • the electronic device 200 may directly display an AR communication interface including a default AR model.
  • the electronic device 200 may set at least one default AR model for the local device.
  • the AR model corresponding to the "AR model a (local)" option shown in FIG. 7B may be a default AR model set for the local device in the electronic device 200.
  • the electronic device 200 may set a default AR model for the communication partner.
  • the electronic device 200 may set a default AR model for each contact.
  • the AR model corresponding to the “AR model (user 310)” option shown in FIG. 6B may be a default AR model set for the user 310 in the electronic device 200.
  • the user 210 may set an AR model for the owner and multiple contacts in the electronic device 200 in the “settings” application of the electronic device 200.
  • the electronic device 200 may receive a user's click operation (such as a single-click operation) on the "Settings" application icon on the main interface of the electronic device 200; in response to the click operation, the electronic device 200 may display the setting interface 707 shown in FIG. 7D.
  • the setting interface 707 includes an "airplane mode" option, a "WLAN" option, a "Bluetooth" option, a "mobile network" option, an "AR model setting" option 708, and the like.
  • the "AR model setting” option 708 is used to trigger the electronic device 200 to display an AR model setting interface for setting the AR model for the owner and the contact.
  • In response to a user's click operation (such as a single-click operation) on the "AR model setting" option 708, the electronic device 200 displays the AR model setting interface 709 shown in FIG. 7E.
  • the electronic device 200 (such as the address book of the electronic device 200) stores information of multiple contacts, including Bob, Michael, and the user 310.
  • As shown in FIG. 7E, the AR model setting interface 709 may include the "owner's AR model" setting option 710, the "Bob's AR model" setting option 711, the "Michael's AR model" setting option 712, the "user 310's AR model" setting option 713, and other setting options.
  • the electronic device 200 may receive a user's click operation on any setting option, and set an AR model for the corresponding user.
  • For the specific method for the electronic device 200 to set the AR model for a user, reference may be made to the description in the foregoing embodiment, which is not repeated in this embodiment.
  • If the electronic device 200 has set a default AR model for the calling party (such as the local device) and the called party (such as the contact Bob) in the Settings application, then after the electronic device 200 starts the rear camera a, it may directly display the AR communication interface including the default AR model of the calling party and the default AR model of the called party, instead of adding an AR model in the AR communication interface in response to a user's operation after the AR communication interface is displayed.
  • During the AR communication between the electronic device 200 and the electronic device 300, the electronic device 200 may receive a user's click operation on the "AR model" option in the AR communication interface, and reset or add the AR model.
  • the method for resetting or adding the AR model to the electronic device 200 can refer to the related descriptions above, which will not be repeated here in the embodiment of the present application.
  • the electronic device 200 may receive a user's click operation on an AR model (such as the AR model 211) in the AR communication interface, and reset the AR model of the user 210.
  • the electronic device 200 may change the AR model in response to a user's operation on the AR model in the AR communication interface. For example, the electronic device 200 may change the clothing, hairstyle, and accessories of the AR model.
  • the electronic device 200 can perform AR communication with the electronic device 300 (for example, S914 shown in FIG. 9B).
  • the AR communication in this embodiment includes AR communication based on a first camera and AR communication based on a second camera.
  • FIG. 9C illustrates an example of AR communication between the electronic device 200 and the electronic device 300:
  • the electronic device 200 is used as an example to describe the AR communication process of the electronic device 200 based on the second camera:
  • After the electronic device 200 starts the rear camera a (A1 shown in FIG. 9C), its Simultaneous Localization And Mapping (SLAM) positioning engine can establish a unified world coordinate system based on the electronic device 200 and the scene 220, and initialize the coordinate system (B1 shown in FIG. 9C).
  • the electronic device 200 can control the AR model according to the feature point cloud of the AR model (such as D1) shown in FIG. 9C.
  • the electronic device 200 may perform hybrid rendering on the AR scene image of the scene 220 and the AR model 311 through the rendering engine of the electronic device 200 (as shown in E1 in FIG. 9C).
  • FIG. 3A shows a mixed rendering effect diagram of the AR scene image of the scene 220 and the AR model 311.
  • the detailed description of SLAM in the embodiment of the present application can refer to the related description in https://zhuanlan.zhihu.com/p/23247395, which is not described in the embodiment of the present application.
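  • As an illustration only (not the patent's implementation), the following Python sketch shows the kind of math behind B1-E1: the SLAM positioning engine supplies the camera pose of the rear camera a in the unified world coordinate system, an AR model is anchored at a fixed position of the scene 220 (for example, on the sofa), and the model's points are projected into the camera image so they can be composited with the image of the real scene. The pose, intrinsics, and point data below are made-up placeholders.

```python
import numpy as np

def pose_to_matrix(rotation, translation):
    """Build a 4x4 rigid-transform matrix from a 3x3 rotation and a 3-vector."""
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = translation
    return m

def project_points(points_world, camera_from_world, intrinsics):
    """Project 3D world points into pixel coordinates with a pinhole camera model."""
    pts = np.hstack([points_world, np.ones((len(points_world), 1))])  # homogeneous coordinates
    pts_cam = (camera_from_world @ pts.T).T[:, :3]                    # world frame -> camera frame
    pts_img = (intrinsics @ pts_cam.T).T
    return pts_img[:, :2] / pts_img[:, 2:3]                           # perspective divide

# Placeholder camera pose reported by the SLAM positioning engine (world -> camera).
camera_from_world = pose_to_matrix(np.eye(3), np.array([0.0, 0.0, 2.0]))

# Placeholder pinhole intrinsics of the rear camera a.
intrinsics = np.array([[800.0, 0.0, 640.0],
                       [0.0, 800.0, 360.0],
                       [0.0, 0.0, 1.0]])

# A toy AR-model point cloud anchored at a fixed position in the world frame
# (for example, the spot on the sofa where the user placed the AR model).
anchor = np.array([0.3, -0.2, 0.0])
model_points = anchor + 0.05 * np.random.randn(10, 3)

pixels = project_points(model_points, camera_from_world, intrinsics)
print(pixels)  # 2D positions at which the model's points are composited over the scene image
```

  • Because the anchor is expressed in the world coordinate system rather than in screen coordinates, the AR model can stay attached to the same spot of the real scene as the electronic device 200 moves, which is consistent with the viewfinder behaviour described later in this embodiment.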
  • the electronic device 300 can execute A2-E2.
  • For the specific manner in which the electronic device 300 executes A2-E2, reference may be made to the detailed description of the electronic device 200 performing A1-E1 in the foregoing embodiment, and details are not described herein again in the embodiment of the present application.
  • This embodiment uses the electronic device 200 as an example to introduce the AR communication process of the electronic device 200 based on the first camera:
  • the front camera a of the electronic device 200 can collect the real-time feature point cloud of the user 210 (F1-G1 shown in FIG. 9C).
  • the real-time feature point cloud of the user 210 is used to characterize the real-time changes in the facial expressions and body movements of the user 210.
  • After the electronic device 200 adds the AR model 212 of the user 210 to the AR scene image shown in FIG. 3C, the electronic device 200 can establish a mapping relationship between the three-dimensional model a of the user 210 and the AR model 212 (H1 shown in FIG. 9C).
  • the electronic device 200 may determine the real-time characteristic point cloud of the AR model 212 according to the real-time characteristic point cloud of the user 210 and the mapping relationship between the three-dimensional model a and the AR model 212. Then, the electronic device 200 may display the AR model 212 in which facial expressions and body movements change in real time according to the real-time characteristic point cloud of the AR model 212. That is, the electronic device 200 can control the facial expression and body movement of the AR model 212 in real time according to the real-time feature point cloud of the AR model 212 (as shown in FIG. 9C). In this way, a direct interaction between the user 210 and the AR model 212 displayed by the electronic device 200 can be achieved.
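  • A minimal sketch of this per-frame control loop, under the assumption that the mapping between the three-dimensional model a of the user 210 and the AR model 212 is stored as point correspondences plus a rigid alignment transform; the class and helper names are illustrative, not part of the patent:

```python
import numpy as np

class ModelMapping:
    """Mapping between a user's 3D model and an AR model: point correspondences
    (e.g. user's nose <-> AR model's nose) plus a rigid alignment transform."""

    def __init__(self, correspondence, rotation, translation):
        self.correspondence = correspondence  # AR-model point i <- user point correspondence[i]
        self.rotation = rotation
        self.translation = translation

    def map_point_cloud(self, user_points):
        """Convert a real-time user feature point cloud into AR-model feature points."""
        matched = user_points[self.correspondence]
        return matched @ self.rotation.T + self.translation

def render_ar_model(ar_points):
    # Stand-in for the rendering engine: on the device this would re-pose and
    # re-render the AR model 212 in the AR communication interface.
    print("updated AR model with", len(ar_points), "feature points")

# Illustrative mapping: identity correspondence and alignment.
mapping = ModelMapping(correspondence=np.arange(68),
                       rotation=np.eye(3),
                       translation=np.zeros(3))

# Per-frame loop: each frame, the front camera a yields the user's feature point cloud.
for _ in range(3):
    user_frame_points = np.random.rand(68, 3)         # placeholder for camera output
    ar_frame_points = mapping.map_point_cloud(user_frame_points)
    render_ar_model(ar_frame_points)                   # expression / movement follows the user
```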
  • the electronic device 200 may indicate the AR model 311 to the electronic device 300 after adding the AR model 311 of the user 310 to the AR scene image shown in FIG. 3A or 3C. Specifically, if the AR model 311 is an AR model downloaded by the electronic device 200 from the cloud, the electronic device 200 may send the identification of the AR model 311 to the electronic device 300. The identification of the AR model 311 may uniquely identify the AR model 311. After receiving the identification of the AR model 311, the electronic device 300 can download the AR model 311 from the cloud according to the identification of the AR model 311. Alternatively, the electronic device 200 may send the AR model 311 to the electronic device 300.
  • After the electronic device 300 receives the AR model 311, it can establish a mapping relationship between the three-dimensional model b of the user 310 (such as the three-dimensional face model b) and the AR model 311 (H2 shown in FIG. 9C). For example, as shown in FIG. 9A, the electronic device 300 may establish a mapping relationship between the three-dimensional face model b and the AR model 311. Exemplarily, the electronic device 300 may establish the mapping relationship between the three-dimensional model b of the user 310 and the AR model 311 by using a point cloud matching algorithm such as Iterative Closest Point (ICP).
  • the features of the user 310 and the features of the AR model 311 have been matched with each other.
  • the nose of the user 310 and the nose of the AR model 311 have a one-to-one mapping relationship.
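  • The embodiment names Iterative Closest Point as one way to build this mapping. Below is a from-scratch ICP sketch (nearest-neighbour matching plus an SVD-based rigid fit) meant only to illustrate the idea; a real device would more likely use an optimized point-cloud registration library, and the toy data here stands in for the two feature point clouds.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping src onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_c).T @ (dst - dst_c)
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:          # avoid reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    t = dst_c - r @ src_c
    return r, t

def icp(source, target, iterations=20):
    """Align `source` (user's 3D model b) to `target` (AR model 311); return the
    accumulated rigid transform and the final point correspondences."""
    src = source.copy()
    r_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        # Nearest-neighbour correspondences (e.g. user's nose <-> AR model's nose).
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        idx = d.argmin(axis=1)
        r, t = best_rigid_transform(src, target[idx])
        src = src @ r.T + t
        r_total, t_total = r @ r_total, r @ t_total + t
    return r_total, t_total, idx

# Toy data standing in for the two feature point clouds.
user_model = np.random.rand(100, 3)
ar_model = user_model @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + np.array([0.1, 0.2, 0.0])
rotation, translation, correspondence = icp(user_model, ar_model)
```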
  • the front camera b of the electronic device 300 can collect real-time feature point clouds (such as a facial feature point cloud and a body shape feature point cloud) of the user 310 (F2-G2 shown in FIG. 9C).
  • the real-time feature point cloud of the user 310 is used to represent real-time changes in facial expressions and body movements of the user 310.
  • the electronic device 300 may determine the real-time characteristic point cloud of the AR model 311 according to the real-time characteristic point cloud of the user 310 and the mapping relationship between the three-dimensional model b and the AR model 311.
  • the electronic device 300 may send the real-time feature point cloud of the AR model 311 to the electronic device 200 (as shown in J2 in FIG. 9C).
  • the electronic device 200 may display the AR model 311 with real-time changes in facial expressions and body movements according to the real-time feature point cloud of the AR model 311. That is, the electronic device 200 can control the facial expression and body movement of the AR model 311 in real time according to the real-time characteristic point cloud of the AR model 311. In this way, the user 310 can directly interact with the AR model 311 displayed by the electronic device 200.
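  • These steps amount to streaming the AR model 311's real-time feature point cloud from the electronic device 300 to the electronic device 200 over the AR communication link. The sketch below shows one hypothetical way such frames could be packed and unpacked; the JSON layout and field names are assumptions for illustration, not the patent's wire format.

```python
import json
import numpy as np

def pack_feature_frame(model_id, frame_index, points):
    """Pack one real-time feature point cloud of an AR model into a message body."""
    return json.dumps({
        "model_id": model_id,          # identifies AR model 311 (e.g. downloaded from the cloud)
        "frame": frame_index,
        "points": np.asarray(points).round(4).tolist(),
    }).encode("utf-8")

def unpack_feature_frame(payload):
    """Recover the AR-model feature point cloud on the receiving device."""
    msg = json.loads(payload.decode("utf-8"))
    return msg["model_id"], msg["frame"], np.array(msg["points"])

# Sender side (electronic device 300): one frame of the AR model 311's point cloud.
payload = pack_feature_frame("ar-model-311", 42, np.random.rand(68, 3))

# Receiver side (electronic device 200): drive the displayed AR model 311 with it.
model_id, frame, points = unpack_feature_frame(payload)
print(model_id, frame, points.shape)
```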
  • Similarly, the electronic device 200 may execute H1-J1 shown in FIG. 9C and send the real-time feature point cloud of the AR model 211 to the electronic device 300; the method in which the electronic device 300 controls the real-time changes in facial expressions and body movements of the AR model 211, and the method in which the electronic device 300 executes H2-I2 shown in FIG. 9C to control the real-time changes in facial expressions and body movements of the AR model 312, will not be repeated here in the embodiment of the present application.
  • the AR communication interface in this embodiment is different from the ordinary graphical user interface.
  • the AR communication interface is an interface in which an image of a real scene and virtual objects are fused and displayed on the touch screen through AR technology, and with which information interaction can be performed.
  • In the AR communication interface, elements in the interface (such as icons, buttons, and text) may correspond to the real scene; if the collected real scene changes, the AR communication interface on the touch screen changes accordingly.
  • the AR communication interface may include an image obtained by adding one or more AR models (that is, virtual objects) to a real scene collected by a second camera (such as a rear camera) of an electronic device (such as the electronic device 200), and then fusing and rendering the real scene and the AR models.
  • For example, refer to any one of FIG. 3A to FIG. 3D, FIG. 5B, FIG. 6A, FIG. 6D, FIG. 7C, FIG. 11B, FIG. 11C, and FIG. 14A to FIG. 14B, which show example diagrams of the AR communication interface provided by this embodiment.
  • Taking the AR communication interface 303 shown in FIG. 3C as an example, the scene 220 in the AR communication interface 303 is a real scene, and the AR model 311 and the AR model 212 are virtual objects.
  • Taking the AR communication interface 304 shown in FIG. 3D as another example, the scene 320 in the AR communication interface 304 is a real scene, and the AR model 312 and the AR model 211 are virtual objects.
  • a common graphical user interface is an image generated by a processor of an electronic device in response to a user operation.
  • the common graphical user interface does not include an image obtained by fusing and rendering an image of a real scene and an AR model.
  • FIG. 4A, FIG. 4B, FIG. 5A, FIG. 6B, FIG. 6C, FIG. 7A to FIG. 7B, FIG. 7D to FIG. 7E, FIG. 10A to FIG. 10B, FIG. 11A, and FIG. 11D show schematic diagrams of examples of common graphical user interfaces provided by this embodiment.
  • FIG. 9B shows a flowchart of an augmented reality communication method provided by the foregoing embodiment. As shown in FIG. 9B, the electronic device 200 and the electronic device 300 perform AR communication through the AR server 900.
  • the electronic device 200 may request AR communication with the electronic device 300 during a process of invoking a “phone” application for voice communication or video communication with the electronic device 300.
  • the AR server 900 may include a base station of the electronic device 200 and a base station of the electronic device 300, and a core network device.
  • the electronic device 200 may send the AR communication request information to the base station of the electronic device 200 in response to the first operation; the base station of the electronic device 200 sends the AR communication request information to the base station of the electronic device 300 through the core network device; and after receiving the AR communication request information, the base station of the electronic device 300 sends the AR communication request information to the electronic device 300.
  • the first AR communication response information in S906 and the second AR communication response information in S916 can also be sent from the electronic device 300 to the electronic device 200 through the base station of the electronic device 300, the core network device, and the base station of the electronic device 200.
  • the electronic device 200 and the electronic device 300 may also exchange AR communication data through the AR server 900 (that is, the base station of the electronic device 200 and the base station of the electronic device 300, and the core network device).
  • the electronic device 200 may send the identification of the AR model 311, or the AR model 311 itself, to the electronic device 300 through the base station of the electronic device 200, the core network device, and the base station of the electronic device 300.
  • the electronic device 300 may send the real-time characteristic point cloud of the user 310 to the electronic device 200 through the base station of the electronic device 300, the core network device, and the base station of the electronic device 200.
  • Alternatively, the AR server 900 is not the base station of the electronic device 200, the base station of the electronic device 300, or the core network device.
  • the AR server 900 is a dedicated server for providing AR communication services for electronic devices such as the electronic device 200 and the electronic device 300.
  • In this case, when the electronic device 200 and the electronic device 300 execute S901 and S919, data is transmitted through the base station of the electronic device 200, the base station of the electronic device 300, and the core network device.
  • When the electronic device 200 executes S903, S910, and S917, when the electronic device 300 executes "receiving the AR communication request information" in S904, S906, and S916, and when the electronic device 200 and the electronic device 300 execute S914, data is transmitted through the AR server 900.
  • the electronic device 200 may request AR communication with the electronic device 300 during a process of calling a third-party application (such as a “WeChat” application) to perform voice communication or video communication with the electronic device 300.
  • the AR server is a server of the third-party application, such as a server of a "WeChat” application.
  • When the electronic device 200 and the electronic device 300 execute S901, S919, and S914, when the electronic device 200 executes S903, S910, and S917, and when the electronic device 300 executes "receiving the AR communication request information" in S904, S906, and S916, data is transmitted through the AR server 900.
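  • Whether the AR server 900 is a dedicated AR server or a third-party application's server, its signaling role is essentially to relay the AR communication request information and the response information between the two devices. A toy relay sketch under that assumption, in which in-process queues stand in for the real network path:

```python
from collections import defaultdict, deque

class ARServer:
    """Toy relay: forwards AR communication request/response messages between devices."""

    def __init__(self):
        self.inboxes = defaultdict(deque)

    def send(self, sender, receiver, message):
        self.inboxes[receiver].append((sender, message))

    def receive(self, device):
        return self.inboxes[device].popleft() if self.inboxes[device] else None

server = ARServer()

# Electronic device 200 (caller) requests AR communication with electronic device 300.
server.send("device-200", "device-300", {"type": "ar_request"})

# Electronic device 300 (callee) reads the request and answers; the answer plays the role
# of the AR communication response information of S906 / S916.
sender, request = server.receive("device-300")
if request["type"] == "ar_request":
    server.send("device-300", sender, {"type": "ar_response", "accepted": True})

# Electronic device 200 learns whether the AR communication link can be established.
_, response = server.receive("device-200")
print("establish AR link:", response["accepted"])
```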
  • the above-mentioned AR application is installed in the electronic device 200 and the electronic device 300.
  • the AR application is a client that can provide users with AR communication services.
  • the AR application installed in the electronic device 200 may perform data interaction with the AR application in the electronic device 300 through the AR server, and provide an AR communication service for the user 210 and the user 310.
  • the main interface (ie, the desktop) 1001 of the electronic device 200 includes an application icon 1002 of an AR application.
  • the desktop 1003 of the electronic device 300 includes an application icon 1004 of an AR application.
  • the electronic device 200 shown in FIG. 2 calls the AR application to perform AR communication with the electronic device 300 according to the embodiment of the present application.
  • the electronic device 200 may receive a user's click operation (such as a click operation) on the application icon 1002 shown in FIG. 10A, and display the AR application interface 1101 shown in FIG. 11A.
  • the AR application interface 1101 includes a "new friend" option 1102 and at least one contact option.
  • the at least one contact option includes Bob's contact option 1103 and the user 310's contact option 1104.
  • the "new friend" option 1102 is used to add a new contact.
  • the electronic device 200 can execute the AR communication method provided in the embodiment of the present application to perform AR communication with the electronic device 300.
  • the electronic device 200 may start the rear camera a and, as shown in FIG. 11B, display the AR communication interface 1105 including the AR scene image of the scene 220 collected by the rear camera a.
  • the AR communication interface 1105 includes third prompt information "Waiting for the other party to respond!" 1106 and a "Cancel" button 1107.
  • the “Cancel” button 1107 is used to trigger the electronic device 200 to cancel AR communication with the electronic device 300.
  • the electronic device 200 may also send the AR communication request information to the electronic device 300. After the electronic device 200 sends the AR communication request information to the electronic device 300, the specific method for the AR communication between the electronic device 200 and the electronic device 300 can refer to the detailed description in the foregoing embodiment, which will not be repeated here in the embodiment of the present application.
  • As the electronic device 200 moves, the framing content of the second camera a (such as the rear camera a) will change.
  • the image of the real scene collected by the rear camera a and the AR communication interface displayed by the electronic device 200 will change accordingly.
  • the AR model in the AR communication interface displayed by the electronic device 200 also moves, within the image of the real scene collected in real time by the electronic device 200, as the electronic device 200 moves.
  • the image of the scene 220 captured by the electronic device 200 is an image collected by the electronic device 200 in the living room of the user 210.
  • the image of scene 220 includes a sofa and a stool, the AR model 311 is sitting on the sofa, and the AR model 212 is sitting on the stool.
  • If the electronic device 200 is moved from the living room to the outdoor lawn, then as the electronic device 200 moves, the image of the real scene collected by the rear camera a of the electronic device 200 changes in real time, and the AR communication interface displayed by the electronic device 200 changes correspondingly.
  • the AR communication interface displayed by the electronic device 200 may include an image of a real scene that changes in real time during the journey from the living room to the outdoor lawn.
  • While the electronic device 200 displays the image of the real scene that changes in real time during the journey from the living room to the outdoor lawn, the AR model 311 and the AR model 212 can be displayed as moving from the living room to the outdoor lawn within that image of the real scene.
  • In another embodiment, after the electronic device 200 adds the AR model at a specific position of the image of a real scene (such as the scene 220), for example, on the sofa of the scene 220, if the viewfinder content of the second camera a (such as the rear camera a) of the electronic device 200 changes, that is, if the viewfinder content of the second camera a changes from the scene 220 to another scene that does not include the specific position, the AR model is no longer displayed; when the viewfinder content of the second camera a returns to the scene 220, the AR model appears again at the specific position of the image of the scene 220 (for example, on the sofa of the scene 220).
  • Take the case in which the electronic device 200 is smart glasses as an example.
  • the smart glasses include a first camera and a second camera.
  • the first camera is used to collect behavior information (for example, facial feature information or body motion information) of the user 210 wearing smart glasses.
  • the second camera is used to collect an image of the real scene 220.
  • the smart glasses can display the AR communication interface (including the AR model 311 shown in FIG. 3A), so that the user wearing the smart glasses can see the AR model 311 of the user 310 in the scene 220 and talk face to face with the AR model 311.
  • the viewfinder content of the second camera of the smart glasses changes (for example, the user 210 faces away from the scene 220 or the user 210 leaves the scene 220), the image of the real scene displayed by the smart glasses will change, and the AR model 311 will disappear.
  • When the viewfinder content of the second camera returns to the scene 220, the AR model 311 will appear again at the specific position of the image of the scene 220 (such as on the sofa of the scene 220).
  • the above embodiment only uses AR communication between the electronic device 200 and the electronic device 300 as an example to introduce the communication method of augmented reality provided by the embodiment of the present application.
  • the augmented reality communication method provided in the embodiment of the present application may also be applied to a process in which three or more electronic devices perform AR communication.
  • the electronic device 200 displays the AR communication interface 1108 shown in FIG. 11C.
  • the AR communication interface 1108 may further include an “Add” button 1109.
  • the electronic device 200 may display a contact selection interface 1110 shown in FIG. 11D in response to a user's click operation (such as a click operation) on the “Add” button 1109.
  • the contact selection interface 1110 includes a "complete" button 1115, a "cancel” button 1114, and a plurality of contact options, such as Bob's contact option 1111, Michael's contact option 1112, and user 310's contact option 1113.
  • the contact option 1113 of the user 310 in the contact selection interface 1110 is selected (eg, ticked).
  • the electronic device 200 may receive a user's selection operation on a contact option other than the contact option 1113 of the user 310 (for example, at least one of Bob's contact option 1111 or Michael's contact option 1112), and request AR communication with the electronic device corresponding to that contact.
  • For the method for the electronic device 200 to request AR communication with that electronic device, reference may be made to the method for the electronic device 200 to request AR communication with the electronic device 300 in the foregoing embodiment, which is not repeated in this embodiment of the present application.
  • FIG. 12 is a schematic diagram illustrating an example of a communication scenario to which an augmented reality communication method according to an embodiment of the present application is applied.
  • the electronic device 200 establishes a connection with a large-screen device (such as a smart TV) 1200.
  • the large-screen device 1200 includes a camera 1202.
  • the camera 1202 may be a built-in camera of the large-screen device 1200, or the camera 1202 may be an external camera of the large-screen device 1200.
  • the connection between the electronic device 200 and the large-screen device 1200 may be a wired connection or a wireless connection.
  • the wireless connection may be a Wi-Fi connection or a Bluetooth connection.
  • the electronic device 300 includes a front camera, a rear camera, and a touch screen. For the hardware structure of the electronic device 200 and the electronic device 300, reference may be made to the description of the electronic device 100 shown in FIG. 1 in the embodiment of the present application, and details are not described herein in the embodiment of the present application.
  • the large-screen device in the embodiment of the present application may be a large-screen electronic device such as a smart television, a personal computer (PC), a notebook computer, a tablet computer, a projector, and the like.
  • a method in which the large-screen device is a smart TV is described as an example.
  • For the manner in which the electronic device 200 initiates augmented reality communication to the electronic device 300, reference may be made to the manner described in the foregoing embodiment, which is not repeated here in this embodiment of the present application.
  • the user 210 uses the electronic device 200 to communicate with the user 310 using the electronic device 300.
  • the electronic device 200 may control the camera 1202 of the large-screen device 1200 to collect the AR scene image of the scene 220 and the real-time feature point cloud of the user 210, and control the large-screen device 1200 to display the AR scene image of the scene 220 collected by the camera 1202.
  • the large-screen device 1200 may display an AR communication interface 1201. Since the scene 220 and the user 210 are both within the framing range of the camera 1202, the AR communication interface 1201 displayed by the large-screen device 1200 includes the AR scene image of the scene 220 and the image of the user 210.
  • the electronic device 200 may add the AR model 311 of the user 310 to the AR scene image in the AR communication interface 1201 displayed on the large-screen device 1200 in response to a user's operation on the touch screen of the electronic device 200.
  • the electronic device 200 can be used as a touchpad, and the user can operate the electronic device 200 to add the AR model 311 of the user 310 to the AR scene image in the AR communication interface 1201 displayed on the large-screen device 1200.
  • For the specific method in which the electronic device 200 adds the AR model 311 of the user 310 to the AR scene image displayed by the large-screen device 1200, reference may be made to the foregoing embodiment; this embodiment of the present application will not repeat it here.
  • the large-screen device 1200 may also be connected with external devices (such as a mouse and a keyboard). As shown in FIG. 12, the AR communication interface 1201 displayed on the large-screen device 1200 may further include an “AR model” option 1203. The user can click on the "AR model” option 1203 through the above external device.
  • the electronic device 200 may control the large screen device 1200 to display the model selection interface 1301 shown in FIG. 13 in response to a user's click operation on the “AR model” option 1203 in the AR communication interface 1201 displayed on the large screen device 1200.
  • the large-screen device 1200 may receive a user operation in the model selection interface 1301 through an external device, and send corresponding operation information to the electronic device 200.
  • the electronic device 200 may add the AR model 311 of the user 310 to the AR scene image displayed by the large-screen device 1200 in response to the corresponding operation.
  • For the method in which the electronic device 200 responds to the user's operation in the model selection interface 1301 and adds the AR model 311 to the AR scene image displayed by the large-screen device 1200, reference may be made to the method in which the electronic device 200 responds to the user's operation in the model selection interface 612 shown in FIG. 6C and adds the AR model 311 to the AR scene image displayed by the electronic device 200, which is not repeated in this embodiment of the present application.
  • the electronic device 200 may indicate the AR model 311 to the electronic device 300.
  • After the electronic device 300 receives the AR model 311, a mapping relationship between the three-dimensional model b of the user 310 and the AR model 311 can be established.
  • the front camera b of the electronic device 300 can collect the real-time feature point cloud of the user 310.
  • the electronic device 300 may determine the real-time characteristic point cloud of the AR model 311 according to the real-time characteristic point cloud of the user 310 and the mapping relationship between the three-dimensional model b and the AR model 311.
  • the electronic device 300 may send the real-time feature point cloud of the AR model 311 to the electronic device 200.
  • the electronic device 200 can control the large-screen device 1200 to display the AR model 311 that changes facial expressions and body movements in real time according to the real-time feature point cloud of the AR model 311. In this way, a direct interaction between the user 310 and the AR model 311 displayed by the large-screen device 1200 can be achieved.
  • the electronic device 300 shown in FIG. 12 may also add the AR model of the user 210 and the AR model of the user 310 to the AR scene image displayed by the electronic device 300.
  • For the specific method in which the electronic device 300 shown in FIG. 12 adds the AR models of the user 210 and the user 310 to the AR scene image, the method for implementing direct interaction between the user 210 and the AR model of the user 210 displayed by the electronic device 300, and the method for implementing direct interaction between the user 310 and the AR model of the user 310 displayed by the electronic device 300, reference may be made to the foregoing related descriptions in the embodiment of the present application, which are not repeated here.
  • the electronic device 200 may receive an interactive operation performed by the user 210 on an AR model on the touch screen (for example, the user 210 taps the head of the AR model 311).
  • In response, the electronic device 200 can simulate the reflex action of a person whose head is tapped, and display a dynamic image of the AR model 311 performing that reflex action, thereby realizing the interaction between the user 210 and the AR model 311.
  • Moreover, the electronic device 200 may send the related information of the tapping action to the electronic device 300, so that the touch screen of the electronic device 300 can, according to the related information of the tapping action, display a dynamic image of the AR model 312 making the reflex action of having its head tapped. If the electronic device 300 is a smart helmet, the smart helmet may also give a vibration prompt after receiving the related information about the tapping action.
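  • The tap interaction could be structured roughly as follows: hit-test the on-screen tap against the displayed AR model, play the local reaction, and forward the event so the peer device can react as well. The region layout and callbacks below are illustrative stand-ins for the device's real rendering and communication components, not the patent's implementation.

```python
def hit_test(tap_xy, model_regions):
    """Return which region of the AR model (if any) the on-screen tap landed in."""
    x, y = tap_xy
    for name, (x0, y0, x1, y1) in model_regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def on_tap(tap_xy, model_regions, play_animation, send_to_peer):
    region = hit_test(tap_xy, model_regions)
    if region == "head":
        # Locally show AR model 311 reacting to being tapped on the head ...
        play_animation("head_tapped")
        # ... and tell the peer device so it can show (or vibrate for) the same event.
        send_to_peer({"type": "ar_interaction", "target": "head", "action": "tap"})

# Illustrative screen-space regions of the displayed AR model.
regions = {"head": (300, 80, 420, 200), "body": (280, 200, 440, 520)}
on_tap((350, 120), regions,
       play_animation=lambda name: print("play", name),
       send_to_peer=lambda msg: print("send", msg))
```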
  • the electronic device 200 may record and save video data of the face-to-face communication between the AR model 311 and the AR model 212 in response to a recording operation (that is, the third operation) of the user.
  • In this way, the electronic device 200 stores the video data of the face-to-face communication between the AR model 311 and the AR model 212.
  • the foregoing recording operation may be a fifth preset gesture input by the user 210 on the AR communication interface displayed by the electronic device 200, such as an S-shaped gesture or a swipe-up gesture.
  • the fifth preset gesture is different from the fourth preset gesture.
  • the AR communication interface 1401 includes a "record" button 1402.
  • the foregoing recording operation may be a click operation (such as a single-click operation, a double-click operation, or a long-press operation) of the "record" button 1402 by the user 210.
  • Alternatively, when the electronic device 200 starts AR communication with the electronic device 300, it can automatically record the video data of the face-to-face communication between the AR model 311 and the AR model 212.
  • the electronic device 200 may display prompt information for confirming whether to save the video data. If the user 210 confirms to save the video data, the electronic device 200 can save the recorded video data. If the user confirms that the video data is not saved, the electronic device 200 may delete the recorded video data. For example, when the AR communication between the electronic device 200 and the electronic device 300 ends, the electronic device 200 may display a prompt box 1403 shown in FIG. 14B.
  • the prompt box 1403 includes prompt information "End AR call, save call video?" 1404, "Save” button 1405, and "Do not save” button 1406.
  • the electronic device 200 may save the recorded video in response to a user's click operation (such as a click operation) on the "Save” button 1405.
  • the electronic device 200 may delete the recorded video data in response to a user's click operation (such as a single-click operation) on the "Do not save" button 1406.
  • the electronic device 200 may "App (ie system app) to save the recorded video data.
  • the user 210 can view the video data in the Photos application.
  • Alternatively, the electronic device 200 may use a third-party application to save the recorded video data.
  • the user 210 can view the video data in a third-party application.
  • the electronic device 200 may also save the recorded video data in a “photo” application (that is, a system application), which is not limited in this embodiment.
  • the electronic device 200 may receive a fourth operation (such as a double-click operation or a long-press operation) performed by the user 210 on the AR model 311 on the touch screen.
  • the electronic device 200 may display contact information of the user 310.
  • the contact information may include a phone number, an email address, and an avatar of the user 310.
  • the contact information is stored in the electronic device 200, or the contact information is obtained by the electronic device 200 from the cloud (such as an AR server) in response to the fourth operation.
  • the microphone of the electronic device 200 may receive voice data (ie, the first sound signal) sent by the user 210.
  • the electronic device 200 may convert a first sound signal into a first audio electrical signal.
  • the electronic device 200 then sends the first audio electrical signal to the electronic device 300.
  • the electronic device 300 may convert the first audio electrical signal from the electronic device 200 into a first sound signal, and play the first sound signal through a receiver (also referred to as a "handset"), or play the first sound signal through a speaker (also referred to as a "horn") of the electronic device 300.
  • the “voice data captured by the microphone 270C” is specifically the first sound signal captured by the microphone.
  • the microphone of the electronic device 300 can receive voice data (ie, a second sound signal) from the user 310.
  • the electronic device 300 may convert the second sound signal into a second audio electrical signal.
  • the electronic device 300 then sends the second audio electrical signal to the electronic device 200.
  • the electronic device 200 may convert the second audio electrical signal from the electronic device 300 into a second sound signal, and play the second sound signal through a receiver, or play the second sound signal through a speaker of the electronic device 200.
  • the electronic device 200 can recognize the voice data collected by the microphone of the electronic device 200 (that is, the first sound signal) and the voice data converted from the second audio electrical signal (that is, the second sound signal). If the electronic device 200 recognizes voice data corresponding to a preset text, the electronic device 200 may display a dynamic image of the AR model 311 and the AR model 212 performing the action corresponding to the preset text.
  • the electronic device 200 may store a plurality of preset texts and actions corresponding to the preset texts. For example, the action corresponding to the preset text "hello” is a handshake. The action corresponding to the preset text "Goodbye" is waving.
  • the user 210 and the user 310 may greet each other and say "hello”.
  • If the electronic device 200 recognizes the voice data "hello", it can display a dynamic image of the AR model 311 shaking hands with the AR model 212. That is, the electronic device can control the AR models to perform corresponding actions according to the users' voice data, increasing the direct interaction between the user and the AR model and the interaction between the AR models, which can improve the user experience of AR communication.
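  • The preset-text behaviour can be pictured as a simple lookup: the device keeps a table of preset texts and the joint action each one triggers, scans the recognized speech of either user, and plays the matching animation of the two AR models. A minimal sketch under those assumptions:

```python
# Preset texts and the AR-model action each one triggers (per the embodiment,
# "hello" -> handshake, "goodbye" -> waving).
PRESET_ACTIONS = {
    "hello": "handshake",
    "goodbye": "wave",
}

def action_for_speech(recognized_text):
    """Return the AR-model action for the first preset text found in the speech."""
    text = recognized_text.lower()
    for preset, action in PRESET_ACTIONS.items():
        if preset in text:
            return action
    return None

def on_recognized_speech(recognized_text, play_joint_animation):
    action = action_for_speech(recognized_text)
    if action is not None:
        # e.g. show AR model 311 and AR model 212 shaking hands.
        play_joint_animation(action)

on_recognized_speech("Hello, nice to see you",
                     play_joint_animation=lambda a: print("animate:", a))
```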
  • Optionally, the electronic device 200 may recognize the voice data collected by the microphone of the electronic device 200 (that is, the first sound signal) and the voice data converted from the second audio electrical signal (that is, the second sound signal), and convert the recognized voice data into text.
  • the electronic device 200 may display the converted text (ie, subtitles) in the AR communication interface.
  • this embodiment provides a communication method for augmented reality.
  • the method may include S1501-S1504:
  • the first electronic device sends AR communication request information to the second electronic device.
  • the second electronic device receives the AR communication request information sent by the first electronic device.
  • the first electronic device establishes an AR communication link with the second electronic device.
  • the second electronic device may establish an AR communication link with the first electronic device in response to the AR communication request information.
  • the first electronic device may establish an AR communication link with the second electronic device through the base station and the core network device.
  • the first electronic device may establish an AR communication link with the second electronic device through an AR server dedicated to providing an AR communication service.
  • the first electronic device displays a first AR communication interface a on the touch screen.
  • the first AR communication interface a includes an image of a first reality scene, and a first AR model a and a second AR model a in the first reality scene.
  • the first reality scene may be a reality scene collected by a second camera of the first electronic device.
  • the first AR model is an AR model of the first user corresponding to the first electronic device.
  • the first AR model can present the expression and actions of the first user and display it on the touch screen of the first electronic device.
  • the second AR model is an AR model of the second user corresponding to the second electronic device.
  • the second AR model can present the expression and actions of the second user and display it on the touch screen of the first electronic device.
  • the first electronic device is the electronic device 200 shown in FIG. 2, and the second electronic device is the electronic device 300 shown in FIG. 2 as an example.
  • the first AR communication interface a may be the AR communication interface 303 shown in FIG. 3C.
  • the AR communication interface displayed by the electronic device includes an image of a real scene where the electronic device is located, and the first AR model and the second AR model in the real scene.
  • the expressions and actions of the first user can be presented on the first AR model
  • the expressions and actions of the second user can be presented on the second AR model.
  • Optionally, the second electronic device may also display an AR communication interface.
  • the method in the embodiment of the present application may further include S1505:
  • the second electronic device displays the first AR communication interface b on the touch screen.
  • the first AR communication interface b includes an image of the second reality scene, and the first AR model b and the second AR model b in the second reality scene
  • the second reality scene is a reality scene collected by a second camera of the second electronic device.
  • the first AR model is an AR model of a first user corresponding to the first electronic device.
  • the second electronic device presents the expression and motion of the first user on the first AR model.
  • the second AR model is an AR model of a second user corresponding to the second electronic device.
  • the second electronic device presents the expression and action of the second user on the second AR model.
  • For example, the first AR communication interface b may be the AR communication interface 304 shown in FIG. 3D.
  • both electronic devices performing AR communication can display an AR communication interface. That is, both electronic devices performing AR communication can provide users with interactive services in real scenarios, which can improve the user's communication experience.
  • the above-mentioned augmented reality communication method may further include S1601:
  • the first electronic device performs voice communication or video communication with the second electronic device.
  • When the first electronic device performs voice communication or video communication with the second electronic device, the first electronic device displays, on its touch screen, a graphical user interface for the voice communication or the video communication with the second electronic device, and the second electronic device displays, on its touch screen, a graphical user interface for the voice communication or the video communication with the first electronic device.
  • the above-mentioned first operation may be a first preset gesture input on a touch screen of the first electronic device to a graphical user interface of voice communication or the video communication.
  • For example, the first preset gesture may be an S-shaped gesture or a swipe-up gesture.
  • the above-mentioned GUI for voice communication or video communication includes an AR communication button, and the first operation is a click operation on the AR communication button on the touch screen of the electronic device.
  • the GUI of the video communication may be the video communication interface 403 shown in FIG. 4B.
  • the video communication interface 403 includes an AR communication button 404.
  • the GUI of the voice communication may be the voice communication interface 401 shown in FIG. 4A.
  • the voice communication interface 401 includes an AR communication button 402.
  • Alternatively, the first operation is a click operation on a first preset button during the voice communication or the video communication, and the first preset button is a physical button of the first electronic device.
  • the first preset key may be a combination key of a "volume +" key and a "lock screen” key.
  • the first operation is a click operation on an AR communication button in a graphical user interface of voice communication or video communication as an example.
  • S1501 in FIG. 15 can be replaced with S1602:
  • In response to a click operation on the AR communication button in the graphical user interface of the voice communication or the video communication, the first electronic device sends the AR communication request information to the second electronic device.
  • the first electronic device may initiate AR communication during a voice communication or a video communication with the second electronic device, which may improve the user experience of the voice communication or the video communication.
  • an AR application is installed in the first electronic device and the second electronic device, and the AR application is a client for providing an AR communication service.
  • the above-mentioned augmented reality communication method may further include: in response to a click operation on an application icon of the AR application, the first electronic device displays an AR application interface.
  • the AR application interface includes at least one contact option, and the at least one contact option includes a contact option corresponding to the second electronic device.
  • the above-mentioned first operation is a first user's click operation on a contact option corresponding to the second electronic device.
  • the second electronic device may request the user to confirm whether the second electronic device agrees to perform AR communication with the first electronic device.
  • the method in the embodiment of the present application may further include S1701, and S1503 may include S1702:
  • the second electronic device presents first prompt information, and the first prompt information is used to confirm whether the second electronic device is allowed to perform AR communication with the first electronic device.
  • the second electronic device establishes an AR communication link with the first electronic device.
  • the fifth operation is used to confirm that the second electronic device agrees to perform AR communication with the first electronic device.
  • For the fifth operation, reference may be made to the related content in the foregoing embodiment, which is not repeated in the embodiment of the present application.
  • In response to the AR communication request information, the second electronic device does not directly establish the AR communication link with the first electronic device, but decides whether to perform AR communication with the first electronic device according to the user's wishes, which can improve the user's communication experience.
  • the second electronic device may authenticate the legality of the first electronic device.
  • S1701 shown in FIG. 17 may include S1801-S1802:
  • the second electronic device determines whether the first electronic device is legal.
  • If the first electronic device is legal, the second electronic device executes S1802; if the first electronic device is illegal, the second electronic device refuses to perform AR communication with the first electronic device.
  • That the first electronic device is legal includes: the device identification information of the first electronic device is stored in the white list of the second electronic device, or the device identification information of the first electronic device is not stored in the black list of the second electronic device.
  • the device identification information of the first electronic device includes the phone number of the first electronic device.
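  • The legality check in S1801 then reduces to looking up the caller's device identification information (for example, its phone number) in the second electronic device's white list or black list. A minimal sketch with placeholder list contents:

```python
def is_device_legal(device_id, whitelist, blacklist):
    """Legal if the caller is whitelisted, or at least not blacklisted."""
    if device_id in whitelist:
        return True
    return device_id not in blacklist

whitelist = {"+86-138-0000-0001"}          # trusted callers (placeholder numbers)
blacklist = {"+86-138-0000-0099"}          # blocked callers (placeholder numbers)

caller = "+86-138-0000-0001"               # device identification of the first electronic device
if is_device_legal(caller, whitelist, blacklist):
    print("present first prompt information to the user")    # proceed to S1802
else:
    print("refuse AR communication with the first electronic device")
```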
  • the second electronic device presents the first prompt information.
  • the second electronic device may send voice prompt information, that is, the first prompt information is voice prompt information.
  • the second electronic device may issue a voice prompt message "Please confirm whether to perform AR communication with the first electronic device?".
  • Alternatively, the second electronic device displays a graphical user interface including the first prompt information.
  • Alternatively, the second electronic device may display a graphical user interface including the first prompt information, and give a vibration prompt or the above-mentioned voice prompt information.
  • the second electronic device may display the GUI 501 shown in FIG. 5A on the touch screen.
  • the graphical user interface 501 includes a first prompt message “Please confirm whether to perform AR communication with the user 210?”.
  • the second electronic device only presents the first prompt information when the first electronic device is legal, which can block the harassment of the second electronic device by the illegal electronic device, and can improve the user's communication experience.
  • the first AR model a and the second AR model a are AR models preset in the first electronic device.
  • the first AR model b and the second AR model b are AR models set in advance in the second electronic device.
  • For a method of presetting an AR model in an electronic device, reference may be made to the description of FIG. 7D to FIG. 7E, which is not repeated in this embodiment of the present application.
  • As shown in FIG. 18, the above S1504 may include S1901-S1905:
  • the first electronic device displays a second AR communication interface a on the touch screen.
  • the second AR communication interface a includes an image of the first real scene, but does not include the first AR model a and the second AR model a.
  • the second AR communication interface a may be the AR communication interface 601 shown in FIG. 6A.
  • the first electronic device displays a model selection interface on the touch screen.
  • the model selection interface includes multiple model options, and each model option corresponds to an AR model.
  • the model selection interface may be the model selection interface 603 shown in FIG. 6B.
  • the model selection interface 603 includes an "AR model (user 310)" option 609, an "AR model set by the other party” option 608, an "AR model 1" option 606, an "AR model 2” option 607, and the like.
  • the first electronic device displays a third AR communication interface a on the touch screen; the third AR communication interface a includes the image of the first reality scene and the second AR model a corresponding to the second model option, but does not include the first AR model a.
  • the third AR communication interface a may be the AR communication interface 613 shown in FIG. 6D.
  • In response to the second operation on the third AR communication interface on the touch screen, the first electronic device displays the model selection interface on the touch screen.
  • the first electronic device displays the first AR communication interface a on the touch screen; the first AR communication interface a includes the image of the first reality scene, the first AR model a, and the second AR model a.
  • the first AR communication interface a may be the AR communication interface 706 shown in FIG. 7C (equivalent to the AR communication interface 303 shown in FIG. 3C).
  • In response to establishing the AR communication link, the first electronic device may turn on the first camera and the second camera.
  • the foregoing augmented reality communication method may further include S1900:
  • the first electronic device turns on the first camera a and the second camera a.
  • the second camera a is used to collect the image of the first reality scene, and the first camera a is used to collect the expressions and actions of the first user.
  • the second electronic device may also turn on the first camera b and the second camera b.
  • the second camera b is used to collect an image of the second reality scene
  • the first camera b is used to collect the expression and action of the second user.
  • the AR communication interface (such as the second AR communication interface) first displayed by the first electronic device only includes the image of the first reality scene collected by the second camera of the first electronic device, and does not include the first AR model or the second AR model.
  • An AR model may be added by the user in this second AR communication interface.
  • the above-mentioned augmented reality communication method may further include S2001-S2002:
  • the first electronic device responds to a click operation on the first AR model a on the touch screen, and displays a model selection interface.
  • the model selection interface includes multiple model options, and each model option corresponds to an AR model.
  • In response to a click operation on a third model option of the plurality of model options on the touch screen, the first electronic device replaces the first AR model in the first AR communication interface with a third AR model corresponding to the third model option.
  • the first electronic device may also change the second AR model a at any time after S1504 or S1905 in response to a user operation.
  • the second electronic device may also change the first AR model b and the second AR model b at any time after S1505 in response to a user operation.
  • For the method in which the first electronic device changes the second AR model a, and the method in which the second electronic device changes the first AR model b and the second AR model b, reference may be made to S2001-S2002, which is not repeated in this embodiment.
  • the first electronic device can change the AR model at any time according to the user's operation, and display the AR model that meets the user's preference, which can improve the user's communication experience.
  • the above-mentioned augmented reality communication method may further include: in response to a fourth operation on the second AR model on the touch screen, the first electronic device displays the contact information of the second user.
  • the contact information includes at least one of a second user's phone number, email address, or avatar.
  • In this way, the user only needs to operate the AR model, and the first electronic device can display the contact information of the corresponding user, without having to exit the AR communication interface and look up that user's contact information in the address book.
  • the foregoing augmented reality communication method may further include S2101-S2102:
  • the first electronic device recognizes the voice data collected by the microphone of the first electronic device and the voice data converted from the audio electrical signal from the second electronic device.
  • the first electronic device displays the text of the recognized voice data on the touch screen.
  • the second electronic device can also recognize the voice data collected by the microphone of the second electronic device and the voice data converted from the audio electrical signal from the first electronic device, and display the text of the recognized voice data on its touch screen.
  • the first electronic device can display text (ie, subtitles) of the voice data of AR communication in the AR communication interface.
  • the first electronic device displays the text of the voice data of the AR communication, and can visually present to the user the content exchanged by the users of the two parties during the AR communication between the first electronic device and the second electronic device.
  • the above-mentioned augmented reality communication method may further include S2103: if the recognized voice data matches a preset text, the first electronic device displays a dynamic image of the first AR model and the second AR model performing the action corresponding to the preset text.
  • the action corresponding to the preset text "hello” is a handshake.
  • the action corresponding to the preset text "Goodbye” is waving.
  • the first AR model and the second AR model interact according to the voice data of the users of both parties, so that the content displayed on the AR communication interface is more consistent with the picture of users communicating face to face in a real scene, which can improve the realism of AR communication and enhance the user's communication experience.
  • the foregoing electronic device (such as the electronic device 200 or the electronic device 300) includes a hardware structure and/or a software module corresponding to each function.
  • the embodiments of the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is performed by hardware or by hardware driven by computer software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of the embodiments of the present application.
  • the embodiments of the present application may divide the functional modules of the electronic device according to the foregoing method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules may be implemented in the form of hardware or software functional modules. It should be noted that the division of the modules in the embodiments of the present application is schematic, and is only a logical function division. In actual implementation, there may be another division manner.
  • FIG. 22 shows a possible structural diagram of the electronic device involved in the foregoing embodiment.
  • the electronic device 2200 includes a processing module 2201, a display module 2202, and a communication module 2203.
  • the processing module 2201 is configured to control and manage the actions of the electronic device 2200.
  • the display module 2202 is configured to display an image generated by the processing module 2201.
  • the communication module 2203 is configured to support communication between the electronic device 2200 and other network entities.
  • the electronic device 2200 can be used as the electronic device of the calling party or the electronic device of the called party.
  • the processing module 2201 may be used to support the electronic device 2200 to execute the method in the foregoing method embodiment.
  • the display module 2202 may be used to support the electronic device 2200 in performing the operation of "displaying a voice communication interface or a video communication interface" in the foregoing method embodiments, the operation of "displaying an AR communication interface", the operation of "displaying an image" in S912, the operation of "displaying an AR model" in S913, the operation of "presenting the second prompt information" in S918, S1504, S1901, S1902, S1903, S1904, S1905, S2001, S2102, S2103, and/or other processes used for the technology described herein.
  • the communication module 2203 may be used to support the electronic device 2200 in performing the operations of "interacting with the electronic device 300" in S901, S914, and S919 in the foregoing method embodiments, the operation of "interacting with the second electronic device" in S1602, and/or other processes for the techniques described herein.
  • alternatively, when the electronic device 2200 is used as the electronic device of the called party, the processing module 2201 may be used to support the electronic device 2200 in performing S901, S907, the operation of "building the three-dimensional face model b" in S908, the operation of "receiving the second operation" in S909, S915, S914, S919, S1503, S1601, S1702 in the foregoing method embodiments, and/or other processes for the technology described herein.
  • the display module 2202 may be used to support the electronic device 2200 in performing the operation of "displaying a voice communication interface or a video communication interface" in the foregoing method embodiments, the operation of "displaying an AR communication interface", and the operation of "presenting the first prompt information" in S904, S1701, and S1802.
  • the communication module 2203 may be used to support the electronic device 2200 in performing the operations of "interacting with the electronic device 200" in S901, S914, and S919 in the foregoing method embodiments, the operations of "receiving the AR communication request information" in S904, S906, S916, and S1502, the operations of "interacting with the first electronic device" in S1503, S1601, and S1702, the operation of "receiving the AR communication request information" in S1602, and/or other processes for the technology described herein.
  • the unit modules in the above-mentioned electronic device 2200 include, but are not limited to, the above-mentioned processing module 2201, display module 2202, and communication module 2203.
  • the electronic device 2200 may further include a storage module and an audio module.
  • the storage module is used to store program codes and data of the electronic device 2200.
  • the audio module is used to collect the voice data sent by the user during voice communication, and to play the voice data.
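To make the functional-module division above concrete, the following Kotlin sketch mirrors the listed responsibilities as plain interfaces aggregated by the device. It is only an illustration of the decomposition; the interface names and method signatures are assumptions introduced here and are not a device API defined by the patent.

```kotlin
// Hypothetical sketch of the functional-module division of electronic device 2200.

interface ProcessingModule {            // controls and manages the actions of the device
    fun handleUserOperation(operation: String)
}

interface DisplayModule {               // displays images generated by the processing module
    fun show(frame: ByteArray)
}

interface CommunicationModule {         // supports communication with other network entities
    fun send(payload: ByteArray)
    fun onReceive(handler: (ByteArray) -> Unit)
}

interface StorageModule {               // stores program code and data of the device
    fun put(key: String, value: ByteArray)
    fun get(key: String): ByteArray?
}

interface AudioModule {                 // collects and plays voice data during voice communication
    fun record(): ShortArray
    fun play(samples: ShortArray)
}

// The electronic device aggregates the modules; in terms of FIG. 1 the processing module
// corresponds to the processor, the display module to the touch screen, and the
// communication module to the mobile and wireless communication modules.
class ElectronicDevice2200(
    val processing: ProcessingModule,
    val display: DisplayModule,
    val communication: CommunicationModule,
    val storage: StorageModule,
    val audio: AudioModule
)
```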
  • the processing module 2201 may be a processor or a controller.
  • the processing module 2201 may be a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof.
  • the processor may include an application processor and a baseband processor. It may implement or execute various exemplary logical blocks, modules, and circuits described in connection with the present disclosure.
  • the processor may also be a combination that implements computing functions, such as a combination including one or more microprocessors, a combination of a DSP and a microprocessor, and so on.
  • the communication module 2203 may be a transceiver, a transceiver circuit, or the like.
  • the memory module may be a memory.
  • the processing module 2201 is a processor (such as the processor 110 shown in FIG. 1), and the communication module 2203 includes a mobile communication module (such as the mobile communication module 150 shown in FIG. 1) and a wireless communication module (such as the wireless communication module 160 shown in FIG. 1).
  • the mobile communication module and the wireless communication module may be collectively referred to as a communication interface.
  • the storage module may be a memory (the internal memory 121 shown in FIG. 1).
  • the display module 2202 may be a touch screen (such as a display screen 194 shown in FIG. 1, in which a display panel and a touch panel are integrated).
  • the audio module may include a microphone (such as the microphone 170C shown in FIG. 1) and a speaker (such as the speaker 170A shown in FIG. 1).
  • the electronic device 2200 provided in the embodiment of the present application may be the electronic device 100 shown in FIG. 1.
  • the processor, the memory, the communication interface, the touch screen, and the like may be connected together, for example, through a bus.
  • An embodiment of the present application further provides a computer storage medium.
  • the computer storage medium stores computer program code.
  • when the processor executes the computer program code, the electronic device 2200 performs the relevant method steps in any one of FIG. 9B and FIG. 15 to FIG. 21, so as to implement the method in the foregoing embodiments.
  • An embodiment of the present application further provides a computer program product.
  • when the computer program product runs on a computer, the computer is caused to execute the relevant method steps in any one of FIG. 9B and FIG. 15 to FIG. 21, so as to implement the method in the foregoing embodiments.
  • the electronic device 2200, the computer storage medium, and the computer program product provided in the embodiments of the present application are all used to execute the corresponding methods provided above. Therefore, for the beneficial effects that they can achieve, refer to the beneficial effects of the corresponding methods provided above; details are not repeated here.
  • the disclosed apparatus and method may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the modules or units is only a logical function division, and there may be another division manner in actual implementation.
  • for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, which may be electrical, mechanical or other forms.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • when the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium.
  • based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the foregoing storage medium includes various media that can store program codes, such as a U disk, a mobile hard disk, a ROM, a magnetic disk, or an optical disk.


Abstract

Embodiments of this application disclose an augmented reality communication method and an electronic device, which relate to the field of communication technologies and can provide users with an interactive service in a real scene during voice communication or video communication. The specific solution is as follows: in response to a first operation, a first electronic device sends AR communication request information to a second electronic device to request AR communication with the second electronic device; the first electronic device establishes an AR communication link with the second electronic device; and the first electronic device displays, on a touch screen, a first AR communication interface that includes an image of a first real scene and a first AR model and a second AR model located in the first real scene. In addition, during the AR communication, the first AR model makes expressions and actions corresponding to the expressions and actions of the first user obtained by the first device, and the second AR model makes expressions and actions corresponding to the expressions and actions of the second user obtained by the first device.

Description

增强现实的通信方法及电子设备 技术领域
本申请实施例涉及通信技术领域,尤其涉及一种增强现实的通信方法及电子设备。
背景技术
现有技术中,电子设备与其他电子设备通信(如语音通信或者视频通信)的过程中,无法实现主叫方与被叫方在现实场景中的互动。
发明内容
本申请实施例提供一种增强现实的通信方法,在语音通信或者视频通信的过程中,可以为用户提供在现实场景的互动的服务。
第一方面,本申请实施例提供一种增强现实的通信方法,该方法包括:第一电子设备响应于第一操作,向第二电子设备发送AR通信请求信息,以请求与第二电子设备进行增强现实(augmented reality,AR)通信;第一电子设备与第二电子设备建立AR通信链接;第一电子设备在触摸屏上显示包括第一现实场景的图像以及处于第一现实场景的第一AR模型(如第一AR模型a)和第二AR模型(如第二AR模型a)的第一AR通信界面(即第一AR通信界面a)。并且,在AR通信的过程中,第一AR模型按照第一设备获取的第一用户的表情和动作做出相应的表情和动作;第二AR模型按照第一设备获取的第二用户的表情和动作做出相应的表情和动作。
通过上述增强现实的通信方法,第一电子设备显示的第一AR通信界面中包括第一电子设备所处的第一现实场景的图像,以及处于第一现实场景的第一AR模型和第二AR模型。并且,第一AR模型做出第一用户(即主叫方)的表情和动作,第二AR模型可以做出第二用户(即被叫方)的表情和动作。如此,可以在电子设备进行AR通信的过程中,为用户提供在现实场景的互动的服务,可以提升用户的通信体验。
结合第一方面,在一种可能的设计方式中,第一电子设备可以在与第二电子设备进行语音通信或者视频通信的过程中,接收上述第一操作。具体的,在第一电子设备响应于第一操作,向第二电子设备发送AR通信请求信息之前,本申请实施例的方法还可以包括:第一电子设备与第二电子设备进行语音通信或者视频通信,在触摸屏上显示语音通信或者视频通信的图形用户界面。
例如,第一操作可以是在触摸屏上对语音通信或者视频通信的图形用户界面输入的第一预设手势。或者,语音通信或者视频通信的图形用户界面中包括AR通信按钮,第一操作是在触摸屏上对AR通信按钮的点击操作。或者,第一操作是语音通信或者视频通信的过程中,对第一预设按键的点击操作,第一预设按键的第一电子设备的物理按键。其中,第一电子设备可以在与第二电子设备进行语音通信或者视频通信的过程中发起AR通信,可以提高语音通信或者视频通信的用户体验。
结合第一方面,在另一种可能的设计方式中,第一电子设备与第二电子设备中安装有AR应用。AR应用是用于提供AR通信服务的客户端。第一电子设备可以通过 AR应用请求与第二电子设备进行AR通信。具体的,在第一电子设备响应于第一操作,向第二电子设备发送AR通信请求信息之前,本申请实施例的方法还包括:响应于对AR应用的应用图标的点击操作,第一电子设备显示包括至少一个联系人选项的AR应用界面,至少一个联系人选项中包括第二电子设备对应的联系人选项。上述第一操作可以是第一用户对第二电子设备对应的联系人选项的点击操作。
结合第一方面,在另一种可能的设计方式中,上述第一电子设备包括第一摄像头(如前置摄像头)和第二摄像头(如后置摄像头),第二电子设备包括第一摄像头(如前置摄像头)。上述第一现实场景的图像是第一电子设备的第二摄像头采集的图像;第一用户的表情和动作是第一电子设备的第一摄像头采集的;第二用户的表情和动作是第二电子设备的第一摄像头采集的。
其中,第一电子设备可以通过上述AR通信链接,接收第二电子设备发送的第二用户的行为动作信息。第二用户的行为动作信息用于表征第二用户的表情和动作的变化。第二用户的行为动作信息是第二电子设备的第一摄像头采集的。也就是说,通过上述方案,可以实现第二用户对第一电子设备显示的第二AR模型的表情和动作的控制,提升了用户的通信体验。
结合第一方面,在另一种可能的设计方式中,第一电子设备也可以通过AR通信链接向第二电子设备发送第一用户的行为动作信息。第一用户的行为动作信息用于表征第一用户的表情和动作的变化。第一用户的行为动作信息的第一电子设备的第一摄像头采集的。这样,第二电子设备便可以根据第一用户的行为动作信息,在第二现实场景中显示第一AR模型(如第一AR模型b)的表情和动作随着第一用户的表情和动作的变化相应变化的动态图像。也就是说,通过上述方案,可以实现第一用户对第二电子设备显示的第一AR模型b的表情和动作的控制,提升了用户的通信体验。
结合第一方面,在另一种可能的设计方式中,上述第一AR模型是第一电子设备中针对第一电子设备预先设定的AR模型(即默认的AR模型),第二AR模型是第一电子设备中针对第二电子设备预先设定的AR模型。例如,第一电子设备可以接收用户对第一电子设备的主界面(即桌面)上的“设置”应用图标的点击操作(如单击操作);响应于该点击操作,第一电子设备200可以显示包括AR模型设置选项的设置界面。响应于用户对AR模型设置选项的点击操作(如单击操作),第一电子设备可以显示AR模型设置界面。AR模型设置界面中可以包括机主的AR模型设置选项和多个联系人的AR模型设置选项。第一电子设备可以接收用户对任一个设置选项的点击操作,为对应的用户设置AR模型。
结合第一方面,在另一种可能的设计方式中,上述第一AR模型和第二AR模型可以是第一电子设备响应于用户的操作,在AR通信界面中添加的AR模型。具体的,上述第一电子设备在触摸屏上显示第一AR通信界面,包括:响应于建立AR通信链接,第一电子设备开启第一电子设备的第一摄像头和第二摄像头;这样,第一电子设备的第二摄像头便可以采集第一现实场景的图像,第一电子设备的第一摄像头便可以采集第一用户的表情和动作;第一电子设备在触摸屏上显示包括第一现实场景的图像、但不包括第一AR模型和第二AR模型的第二AR通信界面(即第二AR通信界面a);也就是说,第一电子设备刚开始显示的AR通信界面中是不包括AR模型,需要用户 操作第一电子设备添加AR模型;响应于在触摸屏上对所述第二AR通信界面的第二操作,第一电子设备在触摸屏上显示包括多个模型选项的模型选择界面,每个模型选项对应一个AR模型;响应于在触摸屏上对多个模型选项中的第一模型选项的第一选择操作,第一电子设备在触摸屏上显示包括第一现实场景的图像和第二模型选项对应的第二AR模型、但不包括第一AR模型的第三AR通信界面(即第三AR通信界面a);这样,第一电子设备便可以在第一AR模型上呈现第一电子设备的第一摄像头采集的表情和动作;响应于在触摸屏上第三AR通信界面的第二操作,第一电子设备在触摸屏上显示模型选择界面;响应于在触摸屏上对多个模型选项中的第一模型选项的第二选择操作,第一电子设备在所述触摸屏上显示第一AR通信界面(即第一AR通信界面a);如此,第一电子设备在第二AR模型上呈现第二电子设备的第一摄像头采集的表情和动作。
其中,第一电子设备刚开始显示的AR通信界面(如第二AR通信界面)中仅包括第一电子设备的第二摄像头采集的第一现实场景的图像,不包括第一AR模型和第二AR模型。可以由用户在该第二AR通信界面中添加AR模型。通过上述方法,可以由用户选择符合用户喜好的AR模型,可以提升用户的通信体验。
结合第一方面,在另一种可能的设计方式中,在第一电子设备与第二电子设备进行AR通信的过程中,第一电子设备响应于在触摸屏上对第一AR模型的点击操作,可以显示上述模型选择界面;响应于在触摸屏上对多个模型选项中的第三模型选项的点击操作,第一电子设备可以将第一AR通信界面中的第一AR模型更换为第三模型选项对应的第三AR模型。
其中,在第一电子设备与第二电子设备进行AR通信的过程中,第一电子设备可以根据用户的操作,随时变更AR模型,显示符合用户喜好的AR模型,可以提升用户的通信体验。
结合第一方面,在另一种可能的设计方式中,在第一电子设备与第二电子设备进行AR通信的过程中,第一电子设备响应于在触摸屏上的第三操作(即录制操作),第一电子设备可以开始录制第一电子设备与第二电子设备进行AR通信的视频数据。
其中,第一电子设备录制上述视频后,可以向其他电子设备发送该视频数据,使得其他电子设备的用户可以学习、了解本次通信所涉及的内容。或者,第一电子设备可以将该视频数据上传至公共网络平台,使得更多的用户可以学习、了解本次通信所涉及的内容。
结合第一方面,在另一种可能的设计方式中,第一现实场景的图像随着第一电子设备的第二摄像头的取景内容的变化而相应变化。这样,第一电子设备显示的现实场景的图像便可以随着第一电子设备的移动而实时变化,可以提升用户在现实场景中的通信体验。
结合第一方面,在另一种可能的设计方式中,第一AR模型在第一现实场景中的位置与第二AR模型在所述第一现实场景中的位置不同。第一电子设备响应于在触摸屏上对第一AR模型的拖动操作,可以显示第一AR模型在第一现实场景中沿着拖动操作对应的轨迹移动的动态图像。
结合第一方面,在另一种可能的设计方式中,在第一电子设备在触摸屏上显示第 一AR通信界面之后,响应于在触摸屏上对所述第二AR模型的第四操作,第一电子设备可以显示第二用户的联系人信息。例如,联系人信息包括第二用户的电话号码、电子邮箱地址或者头像中的至少一个。这样,在第一电子设备与第二电子设备进行AR通信的过程中,用户只需要操作AR模型的操作,第一电子设备便可以显示对应用户的联系人信息;而不需要退出AR通信界面,在通讯录中查找该用户的联系人信息。
结合第一方面,在另一种可能的设计方式中,在第一电子设备在触摸屏上显示第一AR通信界面之后,第一电子设备可以识别第一电子设备的麦克风采集的语音数据,以及由来自第二电子设备的音频电信号转换而来的语音数据;第一电子设备在触摸屏显示识别到的语音数据的文本。即第一电子设备可以在AR通信界面中显示AR通信的语音数据的文本(即字幕)。第一电子设备显示AR通信的语音数据的文本,可以从视觉上为用户呈现第一电子设备与第二电子设备进行AR通信的过程中,双方用户交流的内容。
结合第一方面,在另一种可能的设计方式中,第一电子设备识别到文本与预设文本对应的语音数据时,在触摸屏显示所述第一AR模型与第二AR模型执行预设文本对应的动作的动态图像。其中,第一电子设备中可以保存多个预设文本以及各个预设文本对应的动作。例如,预设文本“你好”对应的动作为握手。预设文本“再见”对应的动作为挥手。
结合第一方面,在另一种可能的设计方式中,在第一电子设备向第二电子设备发送AR通信请求信息之后,第一电子设备如果接收到二电子设备发送的第一AR通信响应信息,则表示第二电子设备同意与第一电子设备进行所述AR通信。响应于该第一AR通信响应信息,第一电子设备与第二电子设备建立AR通信链接。
第一电子设备请求与第二电子设备进行AR通信时,第一电子设备不能直接与第二电子设备建立AR通信链接,而是根据被叫方的意愿,在第二电子设备(即被叫方)同意后,第一电子设备才能与第二电子设备建立AR通信链接。这样,可以提升用户的通信体验。
结合第一方面,在另一种可能的设计方式中,第二电子设备可能并不同意与第一电子设备进行AR通信。如果第一电子设备接收到第二电子设备发送的第二AR通信响应信息则表示第二电子设备拒绝与第一电子设备进行所述AR通信。响应于第二AR通信响应信息,第一电子设备可以呈现第二提示信息。该第二提示信息用于指示第二电子设备拒绝与第一电子设备进行所述AR通信。
第二方面,本申请实施例提供一种增强现实的通信方法,该方法可以包括:第二电子设备接收第一电子设备发送的AR通信请求信息,该AR通信请求信息用于请求与第二电子设备进行AR通信;响应于AR通信请求信息,第二电子设备与第一电子设备建立AR通信链接;第二电子设备在触摸屏上显示第一AR通信界面,该第一AR通信界面中包括第二现实场景的图像,以及处于第二现实场景的第一AR模型(如第一AR模型b)和第二AR模型(如第二AR模型b)。其中,第二现实场景是第二电子设备所处的现实场景。并且,在AR通信的过程中,第一AR模型按照第一设备获取的第一用户的表情和动作做出相应的表情和动作;第二AR模型按照第一设备获取的第二用户的表情和动作做出相应的表情和动作。
通过上述增强现实的通信方法,第二电子设备显示的第二AR通信界面中包括第二电子设备所处的第二现实场景的图像,以及处于第二现实场景的第一AR模型和第二AR模型。并且,可以在第一AR模型上呈现第一用户(即主叫方)的表情和动作,在第二AR模型上呈现第二用户(即被叫方)的表情和动作。如此,可以在电子设备进行AR通信的过程中,为用户提供在现实场景的互动的服务,可以提升用户的通信体验。
结合第二方面,在一种可能的设计方式中,第二电子设备响应于上述AR通信请求信息,可以呈现第一提示信息。该第一提示信息用于确认所述第二电子设备是否与第一电子设备进行AR通信。然后,由用户决定是否与第一电子设备进行AR通信。例如,第二电子设备响应于同意进行AR通信的操作(即第五操作),可以与第二电子设备建立所述AR通信链接。响应于对第一提示信息的第六操作,可以拒绝与第一电子设备建立所述AR通信链接。其中,响应于第五操作,第二电子设备可以向第一电子设备发送第一AR通信响应信息。响应于第六操作,第二电子设备可以向第一电子设备发送第二AR通信响应信息。
通过本方案,第二电子设备响应于AR通信请求信息,不会直接与第一电子设备建立所述AR通信链接,而是根据用户的意愿决定是否与第一电子设备进行AR通信,可以提升用户的通信体验。
结合第二方面,在另一种可能的设计方式中,第二电子设备在显示第一提示信息之前,可以对第一电子设备的合法性进行鉴权。具体的,响应于AR通信请求信息,第二电子设备可以判断第一电子设备是否合法;如果第一电子设备合法,第二电子设备呈现第一提示信息。其中,第一电子设备合法包括:第二电子设备的白名单中保存有第一电子设备的设备标识信息,或者,第二电子设备的黑名单中未保存第一电子设备的设备标识信息。第一电子设备的设备标识信息包括第一电子设备的电话号码。
结合第二方面,在另一种可能的设计方式中,第二电子设备接收第一电子设备发送的AR通信请求信息之前,第二电子设备与第一电子设备正在进行语音通信或者视频通信,在触摸屏上显示语音通信或者视频通信的图形用户界面。
其中,第一电子设备与第二电子设备进行语音通信或者视频通信的过程中发起AR通信,可以提高语音通信或者视频通信的用户体验。
结合第二方面,在另一种可能的设计方式中,第一电子设备与第二电子设备中安装了上述AR应用。上述AR通信请求信息是第一电子设备在AR应用中向第二电子设备发送的。
结合第二方面,在另一种可能的设计方式中,上述第二电子设备包括第一摄像头(如前置摄像头)和第二摄像头(如后置摄像头)。第一电子设备包括第一摄像头(如前置摄像头)。第二现实场景的图像是第二电子设备的第二摄像头采集的图像;第一用户的表情和动作是第一电子设备的第一摄像头采集的;第二用户的表情和动作是所述第二电子设备的第一摄像头采集的。
其中,第二电子设备可以通过上述AR通信链接,接收第一电子设备发送的第一用户的行为动作信息。第一用户的行为动作信息用于表征第一用户的表情和动作的变化。第一用户的行为动作信息是第一电子设备的第一摄像头采集的。也就是说,通过 上述方案,可以实现第一用户对第二电子设备显示的第一AR模型的表情和动作的控制,提升了用户的通信体验。
结合第二方面,在另一种可能的设计方式中,第二电子设备也可以通过AR通信链接向第一电子设备发送第二用户的行为动作信息。第二用户的行为动作信息用于表征第二用户的表情和动作的变化。第二用户的行为动作信息的第二电子设备的第一摄像头采集的。这样,第一电子设备便可以根据第二用户的行为动作信息,在第一现实场景中显示第二AR模型(如第二AR模型a)的表情和动作随着第二用户的表情和动作的变化相应变化的动态图像。也就是说,通过上述方案,可以实现第二用户对第一电子设备显示的第二AR模型a的表情和动作的控制,提升了用户的通信体验。
结合第二方面,在另一种可能的设计方式中,第一AR模型是第二电子设备中针对第一电子设备预先设定的AR模型,第二AR模型是第二电子设备中针对第二电子设备预先设定的AR模型。其中,第二电子设备中设置AR模型的方法可以参考第一方面的可能的设计方式中的相关描述,本申请实施例这里不再赘述。
结合第二方面,在另一种可能的设计方式中,上述第一AR模型和第二AR模型可以是第二电子设备响应于第二操作添加的AR模型。其中,第二电子设备响应于第二操作添加的AR模型的方法,可以参考第一电子设备响应于第二操作添加的AR模型的方法,本申请实施例这里不予赘述。其中,在第二电子设备响应于第二操作添加的AR模型的过程中,第二电子设备显示的AR通信界面包括:第二AR通信界面b、第三AR通信界面b和第一AR通信界面。第二AR通信界面b中包括第二现实场景的图像、但不包括第一AR模型b和第二AR模型b。第三AR通信界面b中包括第二现实场景的图像和第一AR模型b、但不包括第二AR模型b。
结合第二方面,在另一种可能的设计方式中,在第二电子设备与第一电子设备进行AR通信的过程中,第二电子设备可以根据用户的操作,随时变更AR模型,显示符合用户喜好的AR模型,可以提升用户的通信体验。其中,第二电子设备在AR通信过程中变更AR模型的方法,可以参考第一电子设备在AR通信过程中变更AR模型的方法,本申请实施例这里不予赘述。
结合第二方面,在另一种可能的设计方式中,在第二电子设备与第一电子设备进行AR通信的过程中,第二电子设备响应于在触摸屏上的第三操作(即录制操作),可以开始录制第一电子设备与第二电子设备进行AR通信的视频数据。
其中,第二电子设备录制上述视频后,可以向其他电子设备发送该视频数据,使得其他电子设备的用户可以学习、了解本次通信所涉及的内容。或者,第二电子设备可以将该视频数据上传至公共网络平台,使得更多的用户可以学习、了解本次通信所涉及的内容。
结合第二方面,在另一种可能的设计方式中,第二现实场景的图像随着第一电子设备的第二摄像头的取景内容的变化而相应变化。这样,第二电子设备显示的现实场景的图像便可以随着第二电子设备的移动而实时变化,可以提升用户在现实场景中的通信体验。
结合第二方面,在另一种可能的设计方式中,第一AR模型在第二现实场景中的位置与第二AR模型在所述第二现实场景中的位置不同。第二电子设备响应于在触摸 屏上对第一AR模型的拖动操作,可以显示第一AR模型在第一现实场景中沿着拖动操作对应的轨迹移动的动态图像。
结合第二方面,在另一种可能的设计方式中,在第二电子设备在触摸屏上显示第一AR通信界面之后,响应于在触摸屏上对所述第二AR模型的第四操作,可以显示第二用户的联系人信息。其中,第二电子设备显示联系人信息的方法,可以参考第一电子设备显示联系人信息的方法,本申请实施例这里不予赘述。
结合第二方面,在另一种可能的设计方式中,在第二电子设备在触摸屏上显示第一AR通信界面之后,第二电子设备可以识别第二电子设备的麦克风采集的语音数据,以及由来自第一电子设备的音频电信号转换而来的语音数据;第二电子设备在触摸屏显示识别到的语音数据的文本。即第二电子设备可以在AR通信界面中显示AR通信的语音数据的文本(即字幕)。第二电子设备显示AR通信的语音数据的文本,可以从视觉上为用户呈现第一电子设备与第二电子设备进行AR通信的过程中,双方用户交流的内容。
结合第二方面,在另一种可能的设计方式中,第二电子设备识别到文本与预设文本对应的语音数据时,在触摸屏显示第一AR模型与第二AR模型执行预设文本对应的动作的动态图像。其中,第二电子设备中可以保存多个预设文本以及各个预设文本对应的动作。例如,预设文本“你好”对应的动作为握手。预设文本“再见”对应的动作为挥手。第一AR模型和第二AR模型根据双方用户的语音数据进行互动,使得AR通信界面所展示的内容更加符合现实场景中用户面对面的交流的画面,可以提升AR通信的真实感,提升用户的通信体验。
第三方面,本申请实施例提供一种电子设备,该电子设备是第一电子设备。该电子设备可以包括:处理器、存储器、触摸屏和通信接口;存储器、触摸屏和通信接口与处理器耦合;存储器用于存储计算机程序代码;计算机程序代码包括计算机指令,当处理器执行上述计算机指令时,上述处理器,用于接收用户的第一操作;通信接口,用于响应于第一操作,向第二电子设备发送增强现实AR通信请求信息,AR通信请求信息用于请求与第二电子设备进行AR通信;与第二电子设备建立AR通信链接;触摸屏,用于在AR通信过程中,显示第一AR通信界面,第一AR通信界面中包括第一现实场景的图像,以及处于第一现实场景的第一AR模型和第二AR模型。
其中,第一现实场景是第一电子设备所处的现实场景;第一AR模型是第一电子设备对应的第一用户的AR模型,第二AR模型是第二电子设备对应的第二用户的AR模型;在AR通信的过程中,触摸屏显示的第一AR模型按照第一设备获取的第一用户的表情和动作做出相应的表情和动作,触摸屏显示的第二AR模型按照第一设备获取的第二用户的表情和动作做出相应的表情和动作。
结合第三方面,在另一种可能的设计方式中,上述处理器,还用于在通信接口向第二电子设备发送AR通信请求信息之前,与第二电子设备进行语音通信或者视频通信。上述触摸屏,还用于显示语音通信或者视频通信的图形用户界面。
其中,第一操作是在触摸屏上对语音通信或者视频通信的图形用户界面输入的第一预设手势;或者,语音通信或者视频通信的图形用户界面中包括AR通信按钮,第一操作是在触摸屏上对AR通信按钮的点击操作;或者,第一操作是语音通信或者视 频通信的过程中,对第一预设按键的点击操作,第一预设按键的第一电子设备的物理按键。
结合第三方面,在另一种可能的设计方式中,上述电子设备与第二电子设备中安装有AR应用,AR应用是用于提供AR通信服务的客户端;存储器中保存AR应用的相关信息。触摸屏,还用于在通信接口向第二电子设备发送AR通信请求信息之前,响应于对AR应用的应用图标的点击操作,第一电子设备显示AR应用界面,AR应用界面包括至少一个联系人选项,至少一个联系人选项中包括第二电子设备对应的联系人选项。其中,第一操作是第一用户对第二电子设备对应的联系人选项的点击操作。
结合第三方面,在另一种可能的设计方式中,上述第一AR模型是电子设备中针对第一电子设备预先设定的AR模型,第二AR模型是电子设备中针对第二电子设备预先设定的AR模型。
结合第三方面,在另一种可能的设计方式中,上述触摸屏,显示第一AR通信界面,包括:响应于建立AR通信链接,触摸屏显示第二AR通信界面,第二AR通信界面中包括第一现实场景的图像、但不包括第一AR模型和第二AR模型;响应于在触摸屏上对第二AR通信界面的第二操作,触摸屏显示模型选择界面,模型选择界面包括多个模型选项,每个模型选项对应一个AR模型;响应于在触摸屏上对多个模型选项中的第一模型选项的第一选择操作,触摸屏显示第三AR通信界面,第三AR通信界面包括第一现实场景的图像和第二模型选项对应的第二AR模型、但不包括第一AR模型;响应于在触摸屏上第三AR通信界面的第二操作,触摸屏显示模型选择界面;响应于在触摸屏上对多个模型选项中的第二模型选项的第二选择操作,触摸屏显示第一AR通信界面,第一AR通信界面包括第一现实场景的图像、第二AR模型和第一模型选项对应的第一AR模型。
结合第三方面,在另一种可能的设计方式中,上述电子设备还包括第一摄像头和第二摄像头,第二电子设备包括第一摄像头。上述处理器,还用于响应于建立AR通信链接,开启第一电子设备的第一摄像头和第二摄像头,第一电子设备的第二摄像头用于采集第一现实场景的图像,第一电子设备的第一摄像头用于采集第一用户的表情和动作。
结合第三方面,在另一种可能的设计方式中,上述触摸屏,还用于在显示第一AR通信界面之后,响应于在触摸屏上对第一AR模型的点击操作,显示模型选择界面,模型选择界面包括多个模型选项,每个模型选项对应一个AR模型;响应于在触摸屏上对多个模型选项中的第三模型选项的点击操作,触摸屏显示第一AR通信界面中的第一AR模型更换为第三模型选项对应的第三AR模型的AR通信界面。
结合第三方面,在另一种可能的设计方式中,上述处理器,还用于在触摸屏显示第一AR通信界面之后,响应于在触摸屏上的第三操作,开始录制第一电子设备与第二电子设备进行AR通信的视频数据。
结合第三方面,在另一种可能的设计方式中,上述触摸屏,还用于在触摸屏显示第一AR通信界面之后,响应于在触摸屏上对第二AR模型的第四操作,显示第二用户的联系人信息;联系人信息包括第二用户的电话号码、电子邮箱地址或者头像中的至少一个。
结合第三方面,在另一种可能的设计方式中,上述处理器,还用于在触摸屏显示第一AR通信界面之后,识别第一电子设备的麦克风采集的语音数据,以及由来自第二电子设备的音频电信号转换而来的语音数据。触摸屏,还用于显示处理器识别到的语音数据的文本。
结合第三方面,在另一种可能的设计方式中,上述触摸屏,还用于处理器识别到文本与预设文本对应的语音数据时,显示第一AR模型与第二AR模型执行预设文本对应的动作的动态图像。
第四方面,本申请实施例提供一种电子设备,该电子设备是第二电子设备。该电子设备包括:处理器、存储器、触摸屏和通信接口;存储器、触摸屏和通信接口与处理器耦合;存储器用于存储计算机程序代码;计算机程序代码包括计算机指令,当处理器执行上述计算机指令时,通信接口,用于接收第一电子设备发送的增强现实AR通信请求信息,AR通信请求信息用于请求与第二电子设备进行AR通信;处理器,用于响应于AR通信请求信息,与第一电子设备建立AR通信链接;触摸屏,用于在AR通信过程中,显示第一AR通信界面,第一AR通信界面中包括第二现实场景的图像,以及处于第二现实场景的第一AR模型和第二AR模型。
其中,第二现实场景是第二电子设备所处的现实场景;第一AR模型是第一电子设备对应的第一用户的AR模型;第二AR模型是第二电子设备对应的第二用户的AR模型;在AR通信的过程中,触摸屏显示的第一AR模型按照第二设备获取的第一用户的表情和动作做出相应的表情和动作,触摸屏显示的第二AR模型按照第二设备获取的第二用户的表情和动作做出相应的表情和动作。
结合第四方面,在一种可能的设计方式中,上述处理器,还用于在处理器响应于AR通信请求信息,呈现第一提示信息,第一提示信息用于确认第二电子设备是否与第一电子设备进行AR通信;响应于用户在触摸屏上同意进行AR通信的操作,与第二电子设备建立AR通信链接。
结合第四方面,在另一种可能的设计方式中,上述处理器,用于响应于AR通信请求信息,呈现第一提示信息,包括:处理器,用于响应于AR通信请求信息,判断第一电子设备是否合法;如果第一电子设备合法,呈现第一提示信息。其中,第一电子设备合法包括:第二电子设备的白名单中保存有第一电子设备的设备标识信息,或者,第二电子设备的黑名单中未保存第一电子设备的设备标识信息;第一电子设备的设备标识信息包括第一电子设备的电话号码;第二电子设备的白名单和黑名单保存在存储器中。
结合第四方面,在另一种可能的设计方式中,上述处理器,还用于在通信接口接收AR通信请求信息之前,与第一电子设备进行语音通信或者视频通信。触摸屏,还用于显示语音通信或者视频通信的图形用户界面。
结合第四方面,在另一种可能的设计方式中,上述触摸屏显示的第一AR模型是电子设备中针对第一电子设备预先设定的AR模型,触摸屏显示的第二AR模型是电子设备中针对第二电子设备预先设定的AR模型。
结合第四方面,在另一种可能的设计方式中,上述触摸屏,用于显示第一AR通信界面,包括:响应于建立AR通信链接,触摸屏显示第二AR通信界面,第二AR 通信界面中包括第二现实场景的图像、但不包括第一AR模型和第二AR模型;响应于在触摸屏上对第二AR通信界面的第二操作,触摸屏显示显示模型选择界面,模型选择界面包括多个模型选项,每个模型选项对应一个AR模型;响应于在触摸屏上对多个模型选项中的第一模型选项的第一选择操作,触摸屏显示第三AR通信界面,第三AR通信界面包括第二现实场景的图像和第一模型选项对应的第一AR模型、但不包括第二AR模型;响应于在触摸屏上第三AR通信界面的第二操作,触摸屏显示模型选择界面;响应于在触摸屏上对多个模型选项中的第二模型选项的第二选择操作,触摸屏显示第一AR通信界面,第一AR通信界面包括第二现实场景的图像、第一AR模型和第二模型选项对应的第二AR模型。
结合第四方面,在另一种可能的设计方式中,上述电子设备包括第一摄像头和第二摄像头,第一电子设备包括第一摄像头。处理器,还用于响应于建立AR通信链接,开启电子设备的第一摄像头和第二摄像头,电子设备的第二摄像头用于采集第二现实场景的图像,电子设备的第一摄像头用于采集第二用户的表情和动作。
结合第四方面,在另一种可能的设计方式中,上述触摸屏,还用于在显示第一AR通信界面之后,响应于在触摸屏上对第一AR模型的点击操作,显示模型选择界面,模型选择界面包括多个模型选项,每个模型选项对应一个AR模型;响应于在触摸屏上对多个模型选项中的第三模型选项的点击操作,触摸屏显示第一AR通信界面中的第一AR模型更换为第三模型选项对应的第三AR模型的AR通信界面。
结合第四方面,在另一种可能的设计方式中,上述处理器,还用于在触摸屏显示第一AR通信界面之后,响应于在触摸屏上的第三操作,开始录制第二电子设备与第一电子设备进行AR通信的视频数据。
结合第四方面,在另一种可能的设计方式中,上述触摸屏,还用于在显示第一AR通信界面之后,响应于在触摸屏上对第一AR模型的第四操作,显示第一用户的联系人信息;联系人信息包括第一用户的电话号码、电子邮箱地址或者头像中的至少一个。
结合第四方面,在另一种可能的设计方式中,上述处理器,还用于在触摸屏显示第一AR通信界面之后,识别第二电子设备的麦克风采集的语音数据,以及由来自第一电子设备的音频电信号转换而来的语音数据;触摸屏,还用于显示处理器识别到的语音数据的文本。
结合第四方面,在另一种可能的设计方式中,上述触摸屏,还用于处理器识别到文本与预设文本对应的语音数据时,显示第一AR模型与第二AR模型执行预设文本对应的动作的动态图像。
第五方面,本申请实施例提供一种计算机存储介质,该计算机存储介质包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如第一方面或者第二方面及其任一种可能的设计方式所述的增强现实的通信方法。
第六方面,本申请实施例提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如第一方面或者第二方面及其任一种可能的设计方式所述的增强现实的通信方法。
另外,第三方面和第四方面及其任一种设计方式所述的电子设备,以及第五方面所述的计算机存储介质、第六方面所述的计算机程序产品所带来的技术效果可参见上 述第一方面及其不同设计方式所带来的技术效果,此处不再赘述。
附图说明
图1为一实施例提供的一种电子设备的结构示意图;
图2为一实施例提供的一种增强现实的通信方法所应用的通信场景实例示意图;
图3A-图3D为另一实施例提供的一种AR通信界面的实例示意图;
图4A-图4B为另一实施例提供的一种AR通信界面的实例示意图;
图5A-图5B为另一实施例提供的一种AR通信界面的实例示意图;
图6A-图6D为另一实施例提供的一种AR通信界面的实例示意图;
图7A-图7E为另一实施例提供的一种AR通信界面的实例示意图;
图8为一实施例提供的一种面部特征点云的实例示意图;
图9A为另一实施例提供的一种三维人脸模型和AR模型的实例示意图;
图9B为另一实施例提供的一种增强现实的通信方法流程图;
图9C为另一实施例提供的一种AR通信的实例示意图;
图10A-图10B为另一实施例提供的一种AR通信界面的实例示意图;
图11A-图11D为另一实施例提供的一种AR通信界面的实例示意图;
图12为另一实施例提供的一种增强现实的通信方法所应用的通信场景实例示意图;
图13为另一实施例提供的一种AR通信界面的实例示意图;
图14A-图14B为另一实施例提供的一种AR通信界面的实例示意图;
图15为另一实施例提供的一种增强现实的通信方法流程图;
图16为另一实施例提供的一种增强现实的通信方法流程图;
图17为另一实施例提供的一种增强现实的通信方法流程图;
图18为另一实施例提供的一种增强现实的通信方法流程图;
图19为另一实施例提供的一种增强现实的通信方法流程图;
图20为另一实施例提供的一种增强现实的通信方法流程图;
图21为另一实施例提供的一种增强现实的通信方法流程图;
图22为另一实施例提供的一种电子设备的结构组成示意图。
具体实施方式
本申请实施例中所使用的术语只是为了描述特定实施例的目的,而并非旨在作为对本申请的限制。如在本申请的说明书和所附权利要求书中所使用的那样,单数表达形式“一个”、“一种”、“所述”、“上述”、“该”和“这一”旨在也包括复数表达形式,除非其上下文中明确地有相反指示。还应当理解,本申请中使用的术语“和/或”是指并包含一个或多个相绑定的列出项目的任何或所有可能组合。以下实施例中的电子设备可以为便携式计算机(如手机等)、笔记本电脑、个人计算机(personal computer,PC)、可穿戴电子设备(如智能手表、智能眼镜或者智能头盔等)、平板电脑、AR\虚拟现实(virtual reality,VR)设备、车载电脑等,以下实施例对该电子设备的具体形式不做特殊限制。
AR技术是一种可以将虚拟物体叠加到现实场景中,实现现实场景中的虚实融合和互动的技术。本申请实施例中的AR技术可以包括基于多摄像头的AI技术。例如, AR技术可以包括前置摄像头的AR技术和基于后置摄像头的AR技术。
基于前置摄像头的AR技术是指:电子设备可以开启前置摄像头,采集用户的脸面部特征生成三维人脸模型;在AR场景中添加电子设备中预先存储的AR模型(如emoji模型);建立三维人脸模型与AR模型的映射关系,实现用户面部表情和身体动作对于AR模型的控制。
基于后置摄像头的AR技术是指:电子设备可以开启后置摄像头采集现实场景的图像,并在触摸屏显示后置摄像头采集的现实场景的图像;在触摸屏显示的现实场景中添加AR模型;电子设备的传感器实时检测电子设备的位置变化和运动参数,并根据检测到的参数计算AR模型在触摸屏显示的现实场景中的坐标变化;根据计算的坐标变化实现AR模型在现实场景中的互动。或者,电子设备可以响应于用户对触摸屏中显示的AR模型的操作(如拖动操作、点击操作等),实现AR模型在现实场景中的互动。
请参考图1,其示出一实施例提供的一种电子设备100的结构示意图。该电子设备100是本实施例中所述的电子设备200或电子设备300。如图1所示,电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中,摄像头193可以包括第一摄像头和第二摄像头。例如。第一摄像头可以为前置摄像头,第二摄像头可以为后置摄像头。
以下实施例中的第一摄像头可以为结构光摄像头,也称为点云深度摄像头或者3D结构光摄像头。结构光摄像头可以采集面部特征点云,面部特征点云可以构成人脸的3D图像。
一般而言,电子设备(如手机)上呈现的图像都是二维图像。二维图像中不能显示该图像上每个位置对应的深度。而结构光摄像头在采集3D图像时,不仅获取了图像中每个位置的颜色,还获取了每个位置的深度。结构光的原理就是通过光源发出一个不可见的光栅,以隔出一个特点的条纹或者图像,之后再根据图案的分布和扭曲程度,逆向计算出其所对应的三维图像数据,如三维人脸模型。例如,如图8所示,结构光摄像头可以采集到用户801的面部特征点云802。面部特征点云802可以表征用户801的面部轮廓信息。通过面部特征点云802,电子设备100可以构建用户801的三维人脸模型。在以下实施例中,图像中每个位置深度信息的采集不局限于结构光摄像头,电子设备100也可以基于光学摄像头通过深度学习等算法估计图像中每个位置的深度信息。
其中,传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本实施例示意的结构并不构成对电子设备100的具体限定。在本 申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备100的神经中枢和指挥中心,是指挥电子设备100的各个部件按照指令协调工作的决策者。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K,充电器,闪光灯,摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸传感器180K通过I2C总线接口通信,实现电子设备100的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接 口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serialinterface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低 噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic  light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。在另一些实施例中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,电子设备100根据压力传感器180A检测所述触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180B检测电子设备100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器180B还可以用于导航,体感游戏场景。
气压传感器180C用于测量气压。在一些实施例中,电子设备100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当电子设备100是翻盖机时,电子设备100可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备100通过发光二极管向外发射红外光。电子设备100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备100附近有物体。当检测到不充分的反射光时,电子设备100可以确定电子设备100附近没有物体。电子设备100可以利用接近光传感器180G检测用户手持电子设备100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备100执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备100对电池142加热,以避免低温导致电子设备100异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备100对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨 块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,MicroSIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备100中,不能和电子设备100分离。
请参考图2,其示出本实施例提供的一种增强现实的通信方法所应用的通信场景实例示意图。其中,第一电子设备(如电子设备200)与第二电子设备(如电子设备300)均包括第一摄像头(如前置摄像头)、第二摄像头(如后置摄像头)和触摸屏。电子设备200与电子设备300的结构可以参考本申请实施例对图1所示的电子设备100的描述,以下实施例中不再赘述。
如图2所示,第一用户(如用户210)使用电子设备200与使用电子设备300的第二用户(如用户310)进行无线通信。图2所示的场景220为用户210使用电子设备200所处的现实场景,即第一现实场景。图2所示的场景320为用户310使用电子设备300所处的现实场景,即第二现实场景。
其中,电子设备200与电子设备300的通信可以为语音通信或者视频通信。例如,电子设备200可以调用“电话”应用或者第三方通讯应用(如微信、QQ等),与电子设备300进行语音通信或者视频通信。再例如,电子设备200和电子设备300中可以安装用于支持电子设备进行AR通信的应用程序(记为AR应用)。电子设备200可以调用该AR应用与电子设备300进行AR通信。
在一个实施例中,在电子设备200与电子设备300通信的过程中,电子设备200的第二摄像头(记为第二摄像头a,如后置摄像头a)可以采集场景220的图像。电子设备200可以在触摸屏或者其他显示装置中显示图3A所示的AR通信界面301。如图3A所示,电子设备200可以在AR通信界面301中的特定位置添加用户310的AR模型(即第二AR模型a,如AR模型311)。
在电子设备300与电子设备200通信的过程中,电子设备300的第二摄像头(记为第二摄像头b,如后置摄像头b)可以采集用户310场景320的图像,电子设备300在触摸屏或者其他显示装置中显示图3B所示的AR通信界面302。如图3B所示,电子设备300可以在AR通信界面302中的特定位置添加用户210的AR模型(即第一AR模型b,如AR模型211)。该AR模型211可以根据用户210的行为动作信息(例如面部表情、身体动作等)来做出相应的行为动作。
上述AR模型311可以根据用户310的行为动作(例如,面部表情或身体动作等)来做出相应的行为动作。示例性地,电子设备300的第一摄像头(记为第一摄像头b,如前置摄像头b)可以采集用户310的行为动作信息(例如,面部特征信息或身体动作信息)。面部特征信息用于表征用户的面部表情和动作变化。身体动作信息可以用于表征用户的身体动作变化。电子设备200可以接收电子设备300发送的用户310的行为动作信息,根据用户310的行为动作信息可以控制AR模型311的面部表情和身体动作。
也就是说,电子设备200可以显示AR模型311在用户210所处的场景220中与用户210对话的动态图像。并且,AR模型311的面部表情和身体动作可以随着用户310的面部表情和身体动作的变化而相应地变化。换言之,电子设备200可以为用户210提供用户310在场景220中与用户210对话的真实体验。
上述AR模型211可以根据用户210的行为动作(例如,面部特征或身体动作等)来做出相应的行为动作。示例性地,电子设备200的第一摄像头(记为第一摄像头a,如前置摄像头a)可以采集用户210的行为动作信息(例如,面部特征信息或身体动作信息)。电子设备300可以接收电子设备200发送的用户210的行为动作信息,根据用户210的行为动作信息控制AR模型211的面部表情和身体动作。
也就是说,电子设备300可以显示AR模型211在用户310所处的场景320中与用户310对话的动态图像。并且,AR模型211的面部表情和身体动作可以随着用户210的面部表情和身体动作的变化而相应地变化。换言之,电子设备300可以为用户310提供用户210在场景320中与用户310对话的真实体验。因此,该方法提高了用户体验。
在另一实施例中,电子设备200还可以显示AR模型311与用户210的AR模型(即第一AR模型a,如AR模型212)在场景220中面对面对话的动态图像。具体的,电子设备200不仅可以在图3C所示的AR通信界面303中添加AR模型311,还可以在图3C所示的AR通信界面303中添加AR模型212。示例性地,电子设备200可以根据前置摄像头a采集的用户210的行为动作信息,控制AR模型212的面部表情和身体动作。也就是说,AR模型212的面部表情和身体动作随着用户210的面部表情和身体动作的变化而相应的变化。
电子设备300还可以显示用户310的AR模型(即第二AR模型b,如AR模型312)与AR模型211在场景320中对话的动态图像。具体的,电子设备300不仅可以在图3D所示的AR通信界面304中添加AR模型211,还可以在图3D所示的AR通信界面304中添加AR模型312。电子设备300根据接收自电子设备200的用户210的行为动作信息,控制AR模型211的面部表情和身体动作。电子设备300根据可以根据前置 摄像头b采集的用户310的行为动作信息,控制AR模型312的面部表情和身体动作。也就是说,AR模型312的面部表情和身体动作随着用户310的面部表情和身体动作的变化而相应的变化。
由上述描述可知,电子设备200可以为用户210提供与用户310在场景220中对话的真实体验。电子设备300也可以为用户310提供与用户310在场景320中对话的真实体验。
在一些实施例中,电子设备200可以响应于用户的录制操作,录制电子设备200与电子设备300进行语音通信或者视频通信过程中,包括AR模型311和AR模型212的视频数据(记录为视频数据1)。同样的,电子设备300可以响应于用户的录制操作,录制电子设备300与电子设备200进行语音通信或者视频通信过程中,包括AR模型312和AR模型211的视频数据(记录为视频数据2)。以电子设备200向其他用户共享视频数据1为例。电子设备200可以向其他电子设备发送该视频数据1,使得其他电子设备的用户可以学习、了解本次通信所涉及的内容。或者,电子设备200可以将该视频数据1上传至公共网络平台,使得更多的用户可以学习、了解本次通信所涉及的内容。
需要说明的是,本实施例中的上述AR模型可以是卡通人物的AR模型或者用户的真实AR模型。卡通人物的AR模型可以是电子设备从服务器(如AR服务器)中下载的AR模型。用户的真实AR模型可以是电子设备根据用户的特征信息(如面部特征信息和形体特征信息)构建的AR模型。其中,用户的面部特征信息可以包括该用户的面部特征点云和用户的面部图像。用户的形体特征信息可以包括该用户的形体特征点云和用户的形体图像。本申请实施例中的“形体”包括用户身体中除头部之外的其他身体部件。例如,以电子设备200构建用户210的真实AR模型为例。电子设备200可以通过前置摄像头a或者后置摄像头a采集用户210的面部特征信息和用户的面部图像,以及用户的形体特征信息和形体图像,然后根据采集的信息构建用户210的3D模型(即用户210的真实AR模型),并保存该3D模型。其中,电子设备200也可以根据用户的面部特征信息和用户的面部图像构建用户210的3D人脸模型。
示例性的,图3A和图3C中所示的AR模型311,以及图3B和图3D中所示的AR模型211均为卡通人物的AR模型。图3C中所示的AR模型212为用户210的真实AR模型,图3D中所示的AR模型312为用户310的真实AR模型。
本申请实施例中,电子设备通过摄像头(第一摄像头或者第二摄像头)采集用户的特征信息的方法,可以参考现有技术中电子设备通过摄像头采集用户的特征信息的方法,本申请实施例这里不予赘述。电子设备根据该特征信息构建用户的3D模型(即AR模型)的具体方法,可以参考现有技术中电子设备构建用户的3D模型的方法,本申请实施例这里不予赘述。
以下实施例对电子设备200与电子设备300实现增强现实的通信的具体方式进行详细描述:
在一些实施例中,电子设备200与电子设备300的通信可以为语音通信或者视频通信。电子设备200接收用户210的第一操作。该第一操作用于触发电子设备200进行AR通信。
在电子设备200与电子设备300进行语音通信或者视频通信(例如,图9B所示的S901)的过程中,电子设备200可以接收用户210的第一操作(例如,图9B所示的S902)。
在一些实施例中,电子设备200可以接收用户210对语音通信界面或者视频通信界面的第一操作。
示例性的,上述第一操作可以是用户在语音通信界面或者视频通信界面输入的第一预设手势,如S形手势或者向上滑动手势等。例如,该第一操作可以是用户在图4A所示的语音通信界面401或者图4B所示的视频通信界面403中输入的S形手势。
示例性的,上述语音通信界面或者视频通信界面中可以包括“AR通信”按钮。该“AR通信”按钮可以用于触发电子设备200执行AR通信。上述第一操作可以是用户对该“AR通信”按钮的点击操作,如单击操作、双击操作或者长按操作等。例如,图4A所示的语音通信界面401中包括“AR通信”按钮402。图4B所示的视频通信界面403中包括“AR通信”按钮404。上述第一操作可以是用户对“AR通信”按钮402或者“AR通信”按钮404的单击操作。
在另一些实施例中,电子设备200可以接收用户210对电子设备200中的第一预设按键的第一操作。该第一操作可以为单击操作、双击操作或者长按操作等。该第一预设按键是电子设备200中的物理按键或者多个物理按键的组合。例如,该第一预设按键可以为电子设备200中用于触发电子设备200进行AR通信的专用物理按键。该专用物理按键可以设置在电子设备200的侧边框或者上边框上。又例如,该第一预设按键可以是由“音量+”按键与“音量-”按键组成的组合按键。
响应于上述第一操作,如图4A或图4B所示,电子设备200可以向电子设备300发送AR通信请求信息(例如,图9B所示的S903)。电子设备300接收到AR通信请求信息后,可以启动前置摄像头b和后置摄像头b。其中,后置摄像头b可以采集场景320的图像。电子设备300在触摸屏显示后置摄像头b采集的场景320的图像。例如,电子设备300可以显示图5B所示的AR通信界面504。前置摄像头b可以采集用户310的面部特征点云。用户310的面部特征点云用于表征用户310的面部轮廓信息。电子设备300可以根据前置摄像头b采集的用户310的特征点云(如面部特征点云和形体特征点云)构建用户310的三维模型b(如三维人脸模型b)。
在一些实施例中,电子设备300接收到上述AR通信请求信息后,可以呈现第一提示信息(例如,图9B所示的S904)。该第一提示信息用于确认用户310是否同意电子设备300与电子设备200进行AR通信。
示例性的,电子设备300可以发出语音提示信息,即第一提示信息为语音提示信息。例如,电子设备300可以发出语音提示信息“请确认是否与用户210进行AR通信?”。或者,电子设备300显示包括第一提示信息的图像用户界面。或者,电子设备300可以显示包括第一提示信息的图像用户界面,并发出振动提示或者上述语音提示信息。例如,电子设备300可以在触摸屏上显示图5A所示的图形用户界面(graphical user interface,GUI)501。该图形用户界面501中包括第一提示信息“请确认是否与用户210进行AR通信?”。
在另外一些实施例中,在电子设备300接收到上述AR通信请求信息后,电子设 备300可以先对电子设备200的合法性进行鉴权。例如电子设备300可以判断电子设备200的设备标识信息是否在AR通信白名单或者黑名单中。示例性地,如果电子设备300确定电子设备200的设备标识信息是在AR通信白名单中(或者不在AR通信黑名单中),则在电子设备300上呈现上述第一提示信息,如果电子设备300确定电子设备200不在AR通信明白名单中(或者在AR通信黑名单中),则电子设备300可以不对上述AR通信请求进行应答,或者向上述电子设备200返回用于指示拒绝AR通信的AR通信响应信息。示例性地,上述设备标识信息可以是电子设备200的电话号码。
电子设备300可以接收用户的同意进行AR通信的操作(即第五操作)(例如,图9B所示的S905)或者第六操作(例如,图9B所示的S915)。该第五操作用于指示用户310同意电子设备300与电子设备200进行AR通信。第六操作用于指示用户310不同意电子设备300与电子设备200进行AR通信。例如,第五操作可以是电子设备300呈现第一提示信息后,用户在电子设备300的触摸屏上输入的第二预设手势(如向上滑动势)或者用户对第二预设按键的点击操作。第五操作可以是电子设备300呈现第一提示信息后,用户在电子设备300的触摸屏上输入的第三预设手势(如向下滑动势)或者用户对第三预设按键的点击操作。上述第二预设手势与第三预设手势不同。第二预设按键和第三预设按键可以是电子设备300中多个物理按键的组合。第二预设按键与第三预设按键不同。例如,第二预设按键可以是由“音量+”按键与“锁屏”按键组成的组合按键。第三预设按键可以是由“音量-”按键与“锁屏”按键组成的组合按键。
又例如,在电子设备300显示包括第一提示信息的图像用户界面(如图形用户界面501)的情况下,该图形用户界面501还可以包括“OK”按钮502和“NO”按钮503。第五操作可以是用户对“OK”按钮502的点击操作(如单击操作)。第六操作可以是用户对“NO”按钮503的点击操作。
电子设备300响应于用户310的第五操作(如用户对图5A所示的“OK”按钮502的点击操作),电子设备300可以与电子设备200建立AR通信链接。在建立AR通信链接后,电子设备300可以启动前置摄像头b和后置摄像头b(例如,图9B所示的S907)。然后,在电子设备300触摸屏上显示后置摄像头b采集的图像,根据前置摄像头b采集的特征点云b构建用户310的三维模型b(例如,图9B所示的S908)。三维模型b可以与用户310的AR模型(如AR模型311和AR模型312)形成映射关系。这样,便可以根据该映射关系,采用用户310的行为动作信息控制AR模型311和AR模型312的面部表情和身体动作。
其中,响应于用户的第五操作,电子设备300可以向电子设备200发送第一AR通信响应信息(例如图9B所示的S906)。该第一AR通信响应信息用于表明电子设备300同意与电子设备200进行AR通信。电子设备200接收电子设备300发送的第一AR通信响应信息(例如,图9B所示的S910)。响应于该第一AR通信响应信息,电子设备200可以与电子设300建立AR通信链接。在建立AR通信链接后,电子设备200可以启动前置摄像头a和后置摄像头a(例如,图9B所示的S911)。其中,后置摄像头a可以采集场景220的图像。电子设备200可以在触摸屏上显示后置摄像头a采集的图像(例如,图9B所示的S912)。例如,电子设备200可以显示图6A所示 的AR通信界面601。如图6A所示,AR通信界面601中包括场景220的图像。前置摄像头a可以采集用户210的实时特征点云。用户210的实时特征点云用于表征用户210的面部轮廓信息和形体轮廓信息。电子设备200可以根据用户210的实时特征点云构建用户210的三维人体模型(三维人脸模型a(例如,图9B所示的S912))。三维模型a可以与用户210的AR模型(如AR模型211和AR模型212)形成映射关系。这样,便可以根据该映射关系,采用用户210的行为动作信息控制AR模型211和AR模型212的面部表情和身体动作。
如果电子设备300接收到用户310的第六操作(如用户310对“NO”按钮503的点击操作,如单击操作),则表示用户310拒绝与电子设备200进行AR通信(例如,图9B所示的S915)。响应于第六操作,电子设备300可以向电子设备200发送第二AR通信响应信息(例如,图9B所示的S916)。该第二AR通信响应信息用于表明电子设备300拒绝与电子设备200进行AR通信。电子设备200可以接收电子设备300发送的第二AR通信响应信息(例如,图9B所示的S917)。响应于接收到上述第二AR通信响应信息,电子设备200则可以呈现第二提示信息(例如,图9B所示的S918);该第二提示信息用于指示电子设备300拒绝与电子设备200进行AR通信。其中,电子设备200呈现第二提示信息的方法,可以参考电子设备300呈现第一提示信息的方法,本申请实施例这里不予赘述。电子设备200接收到第二AR通信响应信息后,电子设备200与电子设备300可以继续进行语音通信或者视频通信(例如,图9B所示的S919)。或者,电子设备200接收到第二AR通信响应信息后,电子设备200与电子设备300可以结束S901所述的语音通信或者视频通信。
如果电子设备300接收到用户的第五操作,则表示用户310同意电子设备300与电子设备200进行AR通信。在这种情况下,电子设备300可以响应于用户的操作(如第二操作)在场景320的图像中添加AR模型(例如,图9B所示的S909)。电子设备200接收到电子设备300发送的第一AR通信响应信息后,也可以响应于用户的操作(如第二操作)在场景220的图像中添加AR模型(例如,图9B所示的S913)。本申请实施例这里以电子设备200为例,对电子设备在现实场景的图像中添加AR模型的方法进行说明:
在一些实施例中,图3A或者图3C所示的AR模型311,以及图3C所示的AR模型212可以是电子设备200响应于用户210的操作(如第二操作)添加至场景220的图像的。
示例性的,电子设备200开启后置摄像头a后,可以显示第二AR通信界面a(如图6A所示的AR通信界面601)。AR通信界面601中包括场景220的图像、但不包括AR模型311和AR模型212。电子设备200可以接收用户210对AR通信界面601的第二操作;响应于该第二操作,电子设备200可以在场景220的图像中添加用户210和/或用户310的AR模型。其中,上述第二操作可以是用户在AR通信界面601中输入的第四预设手势,如S形手势或者向上滑动手势等。在一些实施例中,如图6A所示,AR通信界面601中还可以包括“AR模型”按钮602。上述第二操作可以是用户210对“AR模型”按钮602的点击操作,如单击操作、双击操作或者长按操作等。
响应于上述第二操作,电子设备200可以显示图6B所示的模型选择界面603。该 模型选择界面603中包括“被叫方AR模型”选项604和“主叫方AR模型”选项605。如图6B所示,当“被叫方AR模型”选项604被选中时,用户可以在模型选择界面603中为用户310选择AR模型。当“主叫方AR模型”选项605被选中时,用户可以在模型选择界面603中为用户210选择AR模型。如图6A所示,以“被叫方AR模型”选项604被选中为例,模型选择界面603中可以包括至少两个AR模型的选项。例如,模型选择界面603中包括“AR模型1”选项606和“AR模型2”选项607。
其中,“AR模型1”选项606对应的AR模型和“AR模型2”选项607对应的AR模型可以预先保存在电子设备200中。或者,“AR模型1”选项606对应的AR模型和“AR模型2”选项607对应的AR模型可以保存在云端(如AR服务器)。电子设备200可以响应于用户对一个AR模型的选项的选择操作(如第一选择操作),从云端下载该AR模型。
在另外一些实施例中,电子设备200还可以接收电子设备300发送的用户310的AR模型。也就是说,电子设备200可以选择由通信的对方(即电子设备300)设置用户310的AR模型。基于这种情况,如图6B所示,模型选择界面603中还可以包括“由对方设置AR模型”选项608。响应于用户对“由对方设置AR模型”选项608的选择操作,电子设备200向电子设备300发送AR模型获取请求,以获取电子设备300为用户310设置的AR模型,并在AR通信界面601中包括的AR场景图像中添加来自电子设备300的AR模型。或者,电子设备200可以接收电子设备300主动发送的AR模型;响应于用户对“由对方设置AR模型”选项608的选择操作(如第一选择操作),电子设备200可以在AR通信界面601中包括的AR场景图像中添加来自电子设备300的AR模型。
在一些实施例中,如果电子设备200曾经与电子设备300进行过AR通信,并且在AR通信过程中,电子设备200接收了电子设备300为用户310设置的AR模型,则电子设备200可以保存该AR模型。在这种情况下,如图6B所示,模型选择界面603中还可以包括“AR模型(用户310)”选项609。响应于用户对“AR模型(用户310)”选项609的选择操作,电子设备200可以将电子设备200保存的对应AR模型设置为用户310的AR模型。
其中,如果电子设备200接收到用户对模型选择界面603中的任一AR模型选项的选择操作(如单击操作),电子设备200可以采用预设显示方式显示用户选择的AR模型选项。例如,假设电子设备200接收到用户对模型选择界面603中的“AR模型1”选项606的点击操作。电子设备200可以采用图6C所示的黑底白字的方式显示“AR模型1”选项606。其中,上述预设显示方式包括但不限于图6C所示的黑底白字的方式。例如,预设显示方式还可以包括加粗显示或者采用预设颜色(如红色)显示等。
电子设备200显示图6C所示的模型选择界面612后,电子设备200可以接收用户对模型选择界面612中“确定”按钮610的点击操作(如单击操作)。响应于用户对图6C所示的“确定”按钮610的点击操作,电子设备200可以在图6A所示的AR场景图像中添加“AR模型1”选项606对应的AR模型311,显示图6D所示的AR通信界面613(相当于图3A所示的AR通信界面301),即第三AR通信界面a。图6D所示的AR通信界面613中包括场景220的图像和AR模型311、但不包括AR模型212。
其中,在电子设备200与电子设备300进行AR通信的过程中,电子设备200还可以接收用户对AR通信界面613中的“AR模型”选项602的点击操作,重新设置用户310的AR模型,或者添加用户210的AR模型。电子设备200响应于用户对“AR模型”选项602的点击操作,重新设置或者添加AR模型的方法可以参考上述相关描述,本申请实施例这里不再赘述。
在另外一些实施例中,电子设备200显示图6C所示的模型选择界面612后,电子设备200可以接收用户对模型选择界面612中“主叫方AR模型”选项605的点击操作(如单击操作)。响应于用户对图6C所示的“主叫方AR模型”选项605的点击操作,电子设备200可以显示图7A所示的模型选择界面701。该模型选择界面701中包括“AR模型a(本机)”选项702、“AR模型b”选项703和“AR模型c”选项704等。
其中,模型选择界面701中AR模型选项对应的AR模型可以预先保存在电子设备200中。或者,AR模型可以保存在云端。电子设备200可以响应于用户对一个AR模型的选项的选择操作,从云端下载该AR模型。
其中,如果电子设备200接收到用户对模型选择界面701中的任一AR模型选项的选择操作(如单击操作),电子设备200可以采用上述预设显示方式显示用户选择的AR模型选项。例如,响应于用户对模型选择界面701中的“AR模型a(本机)”选项702的点击操作,电子设备200可以采用图7B所示的黑底白字的方式显示“AR模型a(本机)”选项702。电子设备200显示图7B所示的模型选择界面705后,电子设备200可以接收用户对模型选择界面705中“确定”按钮610的点击操作(如单击操作)。响应于用户对图7B所示的“确定”按钮610的点击操作,电子设备200可以在图6A所示的AR场景图像中添加“AR模型1”选项606对应的AR模型311和“AR模型a(本机)”选项702对应的AR模型212,显示图7C所示的AR通信界面706(相当于图3C所示的AR通信界面303),即第一AR通信界面a。图7C所示的AR通信界面706中包括场景220的图像、AR模型311和AR模型212。
需要注意的是,“AR模型a(本机)”选项702对应的AR模型可以为:电子设备200通过前置摄像头a或者后置摄像头a采集用户210的特征信息a(如面部特征信息a),然后根据该特征信息a构建的用户210的3D模型(即用户210的真实AR模型),如3D人脸模型。例如,图7C中所示的AR模型212为用户210的真实AR模型。
在一些实施例中,电子设备300显示图5B所示的AR通信界面504后,可以接收用户对“AR模型”选项505的点击操作,在图5B所示的AR通信界面504中添加AR模型。图5B所示的AR通信界面504为第二AR通信界面b。该AR通信界面504中包括场景320的图像、但不包括AR模型211和AR模型312。其中,电子设备300在图5B所示的AR通信界面504中添加AR模型的方法,可以参考电子设备200在图6A所示的AR通信界面601中添加AR模型的方法,本申请实施例不再赘述。其中,电子设备300可以在图5B所示的AR通信界面504中添加AR模型后,可以显示图3B所示的AR通信界面302或者图3D所示的AR通信界面304。图3B所示的AR通信界面302为第三AR通信界面b。该AR通信界面302中包括场景320的图像和AR模型211、但不包括AR模型312。图3D所示的AR通信界面304为第一AR通信界面b。该AR通信界面304中包括场景320的图像、AR模型211和AR模型312。
在另外一些实施例中,电子设备200启动后置摄像头a后,可以直接显示包括默认AR模型的AR通信界面。其中,电子设备200中可以针对本机设置至少一个默认AR模型。例如,图7B所示的“AR模型a(本机)”选项对应的AR模型可以为电子设备200中针对本机设置的默认AR模型。电子设备200中可以针对通信对方设置一个默认AR模型。或者,电子设备200中可以针对每一个联系人设置一个默认AR模型。例如,如图6B所示的“AR模型(用户310)”选项对应的AR模型可以为电子设备200中针对用户310设置的默认AR模型。
示例性的,用户210可以在电子设备200的“设置”应用中为机主和电子设备200中的多个联系人设置AR模型。例如,电子设备200可以接收用户对电子设备200的主界面上的“设置”应用图标的点击操作(如单击操作);响应于该点击操作,电子设备200可以显示图7D所示的设置界面707。设置界面707中包括“飞行模型”选项、“WLAN”选项、“蓝牙”选项、“移动网络”选项和“AR模型设置”选项708等。其中,“飞行模型”选项、“WLAN”选项、“蓝牙”选项和“移动网络”选项等选项的功能可以参考常规技术中的对应功能介绍,本实施例这里不予赘述。“AR模型设置”选项708用于触发电子设备200显示用于为机主和联系人设置AR模型的AR模型设置界面。响应于用户对“AR模型设置”选项708的点击操作(如单击操作),电子设备200显示图7E所示的AR模型设置界面709。假设电子设备200(如电子设备200的通讯录)中保存了包括Bob、迈克尔(Michael)和用户310等多个联系人的信息。如图7E所示,AR模型设置界面709中可以包括“机主的AR模型”设置选项710、“Bob的AR模型”设置选项711、“Michael的AR模型”设置选项712和“用户310的AR模型”设置选项713等多个设置选项。电子设备200可以接收用户对任一个设置选项的点击操作,为对应的用户设置AR模型。其中,电子设备200为用户设置AR模型的具体方法可以参考上述实施例中的描述,本实施例这里不再赘述。
其中,如果电子设备200在“设置”应用中为主叫方(如本机)和被叫方(如联系人Bob)设置了默认的AR模型,那么电子设备200启动后置摄像头a后,可以直接显示包括主叫方的默认AR模型和被叫方的默认AR模型的AR通信界面。而不是在电子设备200显示AR通信界面后,响应于用户的操作在AR通信界面中添加AR模型。在电子设备200与电子设备300进行AR通信的过程中,电子设备200可以接收用户对AR通信界面中的“AR模型”选项的点击操作,重新设置或者添加AR模型。电子设备200响应于用户对“AR模型”选项的点击操作,重新设置或者添加AR模型的方法可以参考上述相关描述,本申请实施例这里不再赘述。或者,电子设备200可以接收用户对AR通信界面中的AR模型(如AR模型211)的点击操作,重新设置用户210的AR模型。
在一些实施例中,在电子设备200与电子设备300进行AR通信的过程中,电子设备200可以响应于用户对AR通信界面中的AR模型的操作,为AR模型换装。例如,电子设备200可以更换AR模型的服装、发型和配饰等。
电子设备200在触摸屏显示的图像中添加AR模型之后,电子设备200便可以与电子设备300进行AR通信(例如,图9B所示的S914)。本实施例中的AR通信包括基于第一摄像头的AR通信和基于第二摄像头的AR通信。请参考图9C,其示出电 子设备200与电子设备300进行AR通信的实例示意图:
本实施例这里以电子设备200为例,对电子设备200基于第二摄像头的AR通信过程进行介绍:电子设备200启动后置摄像头a(如图9C所示的A1)之后,可以通过同步定位与地图构建(Simultaneous Localization And Mapping,SLAM)定位引擎基于电子设备200与场景220建立一个统一的世界坐标系,并进行该坐标系的初始化(如图9C所示的B1)。电子设备200接收用户在场景220的AR场景图像中添加AR模型(如AR模型311和AR模型212)(如图9C所示的C1)后,可以根据AR模型的特征点云控制AR模型(如图9C所示的D1)。其中,根据AR模型的特征点云控制AR模型的具体方式,可以参考对“基于第一摄像头的AR通信过程”的描述,本实施例这里不再赘述。随后,电子设备200可以通过电子设备200的渲染引擎,对场景220的AR场景图像与AR模型311进行混合渲染(如图9C所示的E1)。其中,图3A示出了场景220的AR场景图像与AR模型311的混合渲染效果图。其中,本申请实施例中的SLAM的详细介绍可以参考https://zhuanlan.zhihu.com/p/23247395中的相关描述,本申请实施例这里不予介绍。
其中,如图9C所示,电子设备300可以执行A2-E2。电子设备300执行A2-E2的具体方式可以参考上述实施例对电子设备200执行A1-E1的详细描述,本申请实施例这里不再赘述。
本实施例这里以电子设备200为例,对电子设备200基于第一摄像头的AR通信过程进行介绍:
在电子设备200与电子设备300通信的过程中,电子设备200的前置摄像头a可以采集用户210的实时特征点云(如图9C所示的F1-G1)。用户210的实时特征点云用于表征用户210面部表情和身体动作的实时变化。电子设备200在图3C所示的AR场景图像中添加用户210的AR模型212后,电子设备200可以建立用户210的三维模型a与AR模型212的映射关系(如图9C所示的H1)。电子设备200可以根据用户210的实时特征点云,以及三维模型a与AR模型212的映射关系,确定出AR模型212的实时特征点云。然后,电子设备200可以根据AR模型212的实时特征点云,显示面部表情和身体动作实时变化的AR模型212。即电子设备200可以根据AR模型212的实时特征点云(如图9C所示的I1)控制AR模型212的面部表情和身体动作实时变化。如此,便可以实现用户210对电子设备200显示的AR模型212的直接互动。
电子设备200在图3A或者图3C所示的AR场景图像中添加用户310的AR模型311后,可以向电子设备300指示该AR模型311。具体的,如果AR模型311是电子设备200从云端下载的AR模型,电子设备200可以向电子设备300发送该AR模型311的标识。其中,AR模型311的标识可以唯一标识AR模型311。电子设备300接收到AR模型311的标识后,可以根据AR模型311的标识从云端下载该AR模型311。或者,电子设备200可以向电子设备300发送该AR模型311。
电子设备300接收到AR模型311后,可以建立用户310的三维模型b(如三维人脸模型b)与AR模型311的映射关系(如图9C所示的H2)。例如,如图9A所示,电子设备300可以建立三维人脸模型b与AR模型311的映射关系。示例性的,电子设备300可以通过迭代最近点算法(Iterative Closest Point,ICP))等点云匹配算法, 建立用户310的b与AR模型311的映射关系。在建立用户310的b与AR模型311的映射关系后,用户310的五官和AR模型311的五官已经相互匹配。例如,用户310的鼻子与AR模型311的鼻子已经建立一一对应的映射关系。
在电子设备300与电子设备200通信的过程中,电子设备300的前置摄像头b可以采集用户310的实时特征点云(如面部特征点云和形体特征点云)(如图9C所示的F2-G2)。该用户310的实时特征点云用于表征用户310面部表情和身体动作的实时变化。电子设备300可以根据用户310的实时特征点云,以及三维模型b与AR模型311的映射关系,确定出AR模型311的实时特征点云。电子设备300可以向电子设备200发送该AR模型311的实时特征点云(如图9C所示的J2)。电子设备200接收到AR模型311的实时特征点云后,可以根据AR模型311的实时特征点云,显示面部表情和身体动作实时变化的AR模型311。即电子设备200可以根据AR模型311的实时特征点云控制AR模型311的面部表情和身体动作实时变化。如此,便可以实现用户310对电子设备200显示的AR模型311的直接互动。
其中,电子设备200执行图9C所示的H1-J1,向电子设备300发送AR模型211的实时特征点云,由电子设备300控制AR模型211的面部表情和身体动作实时变化的方法,以及电子设备300执行图9C所示的H2-I2控制AR模型312的面部表情和身体动作实时变化的方法,本申请实施例这里不予赘述。
需要注意的是,本实施例中的AR通信界面与普通的图形用户界面不同,AR通信界面是通过AR技术将采集到的现实场景的图像与触摸屏上显示的虚拟物体融合渲染,并能产生信息交互的界面。在AR通信界面中,界面中的元素例如图标、按钮、文本等可以与现实场景对应。如果采集到的现实场景发生了变化,那么触摸屏上的AR通信界面也会随之变化。在本实施例中,AR通信界面可以是在电子设备(如电子设备200)的第二摄像头(如后置摄像头)采集的现实场景中添加一个或多个AR模型(即虚拟物体),并对现实场景和AR模型进行融合渲染得到的图像。例如,请参考图3A-图3D、图5B、图6A、图6D、图7C、图11B、图11C和图14A-图14B中任一附图,其示出本实施例提供的AR通信界面实例示意图。以图3C所示的AR通信界面303为例,AR通信界面303中的场景220为现实场景,AR模型311和AR模型212是虚拟物体。以图3D所示的AR通信界面304为例,AR通信界面304中的场景320为现实场景,AR模型312和AR模型211是虚拟物体。
而普通的图形用户界面则是电子设备响应于用户的操作,由电子设备的处理器生成的图像。普通的图形用户界面中不包括对现实场景的图像和AR模型进行混合渲染得到的图像。例如,请参考图4A、图4B、图5A、图6B、图6C、图7A-图7B、图7D-图7E、图10A-图10B、图11A和图11D中任一附图,其示出本实施例提供的普通的图形用户界面实例示意图。
为了本领域技术人员可以更好的理解上述实施例的方法,清楚电子设备200与电子设备300的交互流程,图9B示出了上述实施例提供的一种增强现实的通信方法流程图。如图9B所示,电子设备200与电子设备300通过AR服务器900进行AR通信。
在一些实施例中,电子设备200可以在调用“电话”应用与电子设备300进行语音通信或者视频通信的过程中,请求与电子设备300进行AR通信。
在本实施例的一种情况下,AR服务器900可以包括电子设备200的基站和电子设备300的基站,以及核心网设备。S903中,电子设备200响应于第一操作,可以向电子设备200的基站发送AR通信请求信息;由电子设备200的基站通过核心网设备向电子设备300的基站发送该AR通信请求信息;电子设备300的基站接收到AR通信请求信息后,向电子设备300发送该AR通信请求信息。同样的,S906中的第一AR通信响应信息和S916中的第二AR通信响应信息,也可以由电子设备300通过电子设备300的基站、核心网设备和电子设备200的基站,向电子设备200发送。
并且,S914中,电子设备200与电子设备300也可以通过上述AR服务器900(即电子设备200的基站和电子设备300的基站,以及核心网设备)交互AR通信的数据。例如,电子设备200可以通过电子设备200通过电子设备200的基站、核心网设备和电子设备300的基站,向电子设备300发送AR模型311的标识或者AR模型311。又例如,电子设备300可以通过电子设备300的基站、核心网设备和电子设备200的基站,向电子设备200发送用户310的实时特征点云。
在本实施例的另一种情况下,AR服务器900不是电子设备200的基站和电子设备300的基站以及核心网设备。AR服务器900是用于为电子设备(如电子设备200和电子设备300)提供AR通信服务的专用服务器。
在这种情况下,电子设备200与电子设备300执行S901和S919是通过电子设备200的基站和电子设备300的基站,以及核心网设备传输数据的。而电子设备200执行S903、S910和S917,电子设备300执行S904中“接收AR通信请求信息”、S906和S916;以及电子设备200与电子设备300执行S914则是通过AR服务器900传输数据的。
在另一些实施例中,电子设备200可以在调用第三方应用(如“微信”应用)与电子设备300进行语音通信或者视频通信的过程中,请求与电子设备300进行AR通信。在本实施例中,AR服务器则为该第三方应用的服务器,如“微信”应用的服务器。
在本实施例中,电子设备200与电子设备300执行S901、S919和S914,电子设备200执行S903、S910和S917,以及电子设备300执行S904中“接收AR通信请求信息”、S906和S916,都是通过AR服务器900传输数据的。
在另一实施例中,电子设备200和电子设备300中安装有上述AR应用。该AR应用是可以为用户提供AR通信服务的客户端。电子设备200中安装的AR应用可以通过AR服务器与电子设备300中的AR应用进行数据交互,为用户210与用户310提供AR通信服务。例如,如图10A所示,电子设备200的主界面(即桌面)1001上包括AR应用的应用图标1002。如图10B所示,电子设备300的桌面1003上包括AR应用的应用图标1004。图2所示的电子设备200调用该AR应用与电子设备300进行本申请实施例所述的AR通信。例如,电子设备200可以接收用户对图10A所示的应用图标1002的点击操作(如单击操作),显示图11A所示的AR应用界面1101。AR应用界面1101中包括“新朋友”选项1102和至少一个联系人选项。例如,至少一个联系人选项包括鲍勃(Bob)的联系人选项1103和用户311的联系人选项1104。其中,“新朋友”选项1102用于添加新的联系人。电子设备200响应于用户对用户311的联系人选项1104的点击操作(如单击操作),可以执行本申请实施例提供的AR通信方法, 与电子设备300进行AR通信。
示例性的,响应于用户对联系人选项1104的点击操作,电子设备200可以启动后置摄像头a,如图11B所示,显示包括后置摄像头a采集的场景220的AR场景图像的AR通信界面1105。AR通信界面1105中包括第三提示信息“正在等待对方响应!”1106和“取消”按钮1107。“取消”按钮1107用于触发电子设备200取消与电子设备300进行AR通信。响应于用户对联系人选项1104的点击操作,电子设备200还可以向电子设备300发送AR通信请求信息。其中,电子设备200向电子设备300发送AR通信请求信息后,电子设备200与电子设备300进行AR通信的具体方法可以参考上述实施例中的详细描述,本申请实施例这里不予赘述。
在一些实施例中,电子设备200与电子设备300进行AR通信的过程中,随着电子设备200的移动,第二摄像头a(如后置摄像头a)的取景内容会发生变化。后置摄像头a所采集的现实场景的图像以及电子设备200显示的AR通信界面也会随之相应变化。并且,随着电子设备200采集的现实场景的图像的变化,电子设备200显示的AR通信界面中的AR模型也会随着电子设备200的移动,在电子设备200实时采集的现实场景的图像中移动。
例如,如图7C所示,电子设备200采集场景220的图像是电子设备200在用户210的客厅中采集的图像。场景220的图像中包括沙发和凳子,AR模型311坐在沙发上,AR模型212坐在凳子上。如果电子设备200由客厅移动到户外草坪中,那么随着电子设备200的移动,电子设备200的后置摄像头a采集的现实场景的图像会实时变化,电子设备200显示的AR通信界面也会随之相应变化。其中,电子设备200显示的AR通信界面可以包括:由客厅到户外草坪这段路程中实时变化的现实场景的图像。并且,在电子设备200显示由客厅到户外草坪这段路程中实时变化的现实场景的图像的同时,可以显示AR模型311和AR模型212在实时变化的现实场景的图像中,由客厅向户外草坪走动的动态图像。
在另一些实施例中,电子设备200在现实场景(如场景220)的图像的特定位置(如场景220的沙发上)添加AR模型后,如果电子设备200的第二摄像头a(如后置摄像头a)的取景内容发生变化,即第二摄像头a的取景内容由场景220变化为不包括上述特定位置的其他场景,那么不仅电子设备200显示的现实场景的图像会发生变化,在场景220的图像中添加的AR模型也会消失。当电子设备200的取景内容切换回上述场景220时,上述AR模型又会出现在场景220的图像的特定位置(如场景220的沙发上)。
例如,以电子设备200是智能眼镜为例。该智能眼镜包括第一摄像头和第二摄像头。第一摄像头用于采集佩戴智能眼镜的用户210的行为动作信息(例如,面部特征信息或身体动作信息)。第二摄像头用于采集现实场景220的图像。智能眼镜与其他电子设备进行AR通信的过程中,如果智能眼镜的第二摄像头的取景内容为图2所示的场景220,那么智能眼镜可以显示图3A所示的AR通信界面(包括AR模型311),使得佩戴该智能眼镜的用户可以在场景220中看到用户310的AR模型311,与AR模型311面对面交谈。如果智能眼镜的第二摄像头的取景内容发生变化(如用户210背对场景220或者用户210离开场景220),那么智能眼镜显示的现实场景的图像会发 生变化,并且AR模型311也会消失。当智能眼镜的取景内容切换回上述场景220时,上述AR模型311又会出现在场景220的图像的特定位置(如场景220的沙发上)。
需要注意的是,上述实施例仅以电子设备200与电子设备300进行AR通信为例,介绍本申请实施例提供的增强现实的通信方法。本申请实施例提供的增强显示的通信方法还可以应用于三个或者三个以上的电子设备进行AR通信的过程中。例如,在电子设备200与电子设备300进行AR通信的过程中,电子设备200显示图11C所示的AR通信界面1108。相比于图7C所示的AR通信界面706,AR通信界面1108中还可以包括“添加”按钮1109。电子设备200响应于用户对“添加”按钮1109的点击操作(如单击操作),可以显示图11D所示的选择联系人界面1110。选择联系人界面1110中包括“完成”按钮1115、“取消”按钮1114和多个联系人选项,如Bob的联系人选项1111、Michael的联系人选项1112和用户310的联系人选项1113等。其中,由于电子设备200与电子设备300(用户310的电子设备)正在进行AR通信;因此,在选择联系人界面1110中用户310的联系人选项1113为被选中的状态(如打钩)。电子设备200可以接收用户对除用户310的联系人选项1113之外的其他联系人选项(如Bob的联系人选项1111或者Michael的联系人选项1112等至少一个联系人选项)的选择操作,请求与对应的电子设备进行AR通信。其中,电子设备200请求与电子设备进行AR通信的方法,可以参考上述实施例中电子设备200请求与电子设备300进行AR通信的方法,本申请实施例这里不再赘述。
请参考图12,其示出本申请实施例提供的一种增强现实的通信方法所应用的通信场景实例示意图。其中,电子设备200与大屏设备(如智能电视机)1200建立连接。大屏设备1200包括摄像头1202。摄像头1202可以为大屏设备1200的内置摄像头,或者,摄像头1202可以为大屏设备1200的外置摄像头。电子设备200与大屏设备1200之间的连接可以为有线连接或者无线连接。该无线连接可以为Wi-Fi连接或者蓝牙连接。电子设备300包括前置摄像头、后置摄像头和触摸屏。电子设备200与电子设备300的硬件结构可以参考本申请实施例对图1所示的电子设备100的描述,本申请实施例这里不予赘述。
示例性的,本申请实施例中的大屏设备可以为智能电视机、个人计算机(PC)、笔记本电脑、平板电脑、投影仪等大屏电子设备。本申请实施例中以大屏设备是智能电视机为例,对本申请实施例的方法进行说明。
需要说明的是,图12所示的应用场景中,电子设备200向电子设备300发起增强显示通信的方式,可以参考上述实施例中,电子设备200向电子设备300发起增强显示通信的方式,本申请实施例这里不再赘述。
如图12所示,用户210使用电子设备200与使用电子设备300的用户310通信。为了提升用户210使用电子设备200与电子设备300进行AR通信的视觉体验。电子设备200可以控制大屏设备1200的摄像头1202采集场景220的AR场景图像和用户210的实时特征点云,并控制大屏设备1200显示摄像头1202采集的场景220的AR场景图像。如图12所示,大屏设备1200可以显示AR通信界面1201。由于场景220和用户210都在摄像头1202的取景范围内;因此,大屏设备1200显示的AR通信界面1201中包括场景220的AR场景图像和用户210的图像。
本申请实施例中,电子设备200可以响应于用户在电子设备200的触摸屏的操作,在大屏设备1200显示的AR通信界面1201中的AR场景图像中添加用户310的AR模型311。具体的,电子设备200可以作为触控板,用户可以操作电子设备200在大屏设备1200显示的AR通信界面1201中的AR场景图像中添加用户310的AR模型311。其中,电子设备200在大屏设备1200显示的AR场景图像中添加用户310的AR模型311的方法,可以参考电子设备200在电子设备200显示的AR场景图像中添加用户310的AR模型311的方法,本申请实施例这里不再赘述。
在一些实施例中,大屏设备1200还可以连接有外部设备(如鼠标和键盘)。如图12所示,大屏设备1200显示的AR通信界面1201中还可以包括“AR模型”选项1203。用户可以通过上述外部设备点击“AR模型”选项1203。电子设备200响应于用户对大屏设备1200显示的AR通信界面1201中的“AR模型”选项1203的点击操作,可以控制大屏设备1200显示图13所示的模型选择界面1301。大屏设备1200可以接收用户通过外部设备在模型选择界面1301中的操作,并向电子设备200发送相应的操作信息。电子设备200接收到该操作信息后,可以响应对应的操作,在大屏设备1200显示的AR场景图像中添加用户310的AR模型311。其中,电子设备200响应于用户在模型选择界面1301中的操作,在大屏设备1200显示的AR场景图像中添加AR模型311的方法,可以参考上述实施例中,电子设备200响应于用户在图6C所示的模型选择界面612中的操作,在电子设备200显示的AR场景图像中添加AR模型311的方法,本申请实施例这里不再赘述。
在大屏设备1200显示的AR场景图像中添加AR模型311后,电子设备200可以向电子设备300指示该AR模型311。其中,电子设备200向电子设备300指示该AR模型311的方法可以参考上述实施例中的具体介绍,本申请实施例这里不再赘述。电子设备300接收到AR模型311后,可以建立用户310的三维模型b与AR模型311的映射关系。在电子设备300与电子设备200通信的过程中,电子设备300的前置摄像头b可以采集用户310的实时特征点云。电子设备300可以根据用户310的实时特征点云,以及三维模型b与AR模型311的映射关系,确定出AR模型311的实时特征点云。电子设备300可以向电子设备200发送该AR模型311的实时特征点云。电子设备200接收到AR模型311的实时特征点云后,可以根据AR模型311的实时特征点云,控制大屏设备1200显示面部表情和身体动作实时变化的AR模型311。如此,便可以实现用户310对大屏设备1200显示的AR模型311的直接互动。
图12所示的电子设备300也可以在其所显示的AR场景图像中添加用户210的AR模型和用户310的AR模型。图12所示的电子设备300在AR场景图像中添加用户210和用户310的AR模型的具体方法,实现用户210对电子设备300显示的用户210的AR模型的直接互动的方法,以及实现用户310对电子设备300显示的用户310的AR模型的直接互动的方法,可以参考本申请实施例上述相关描述,本申请实施例这里不再赘述。
在一些实施例中,电子设备200显示包括用户310的AR模型311的AR通信界面后,电子设备200可以接收用户210在触摸屏上对AR模型311的互动操作(例如,用户310敲击AR模型311的头部)。响应于用户210对触摸屏显示的AR模型311 的头部的敲击操作,电子设备200可以模拟人体头部被敲击时的反射动作,显示AR模型311做出头部被敲击的反射动作的动态图像,实现用户210与AR模型311的互动。并且,响应于用户210对触摸屏显示的AR模型311的头部的敲击操作,电子设备200可以向电子设备300发送该敲击动作的相关信息,使得电子设备300的触摸屏可以根据该敲击动作的相关信息,显示AR模型312做出头部被敲击的反射动作的动态图像。如果电子设备300是智能头盔,智能头盔接收到上述敲击动作的相关信息,还可以发出振动提示。
在一些实施例中,电子设备200在与电子设备300进行AR通信的过程中,可以响应于用户的录制操作(即第三操作),录制并保存AR模型311和AR模型212面对面交流的视频数据。电子设备200与电子设备300结束后,电子设备200中保存了AR模型311和AR模型212面对面交流的视频数据。例如,上述录制操作可以为用户210在电子设备200显示的AR通信界面输入的第五预设手势,如S形手势或者向上滑动手势等。该第五预设手势与上述第四预设手势不同。又例如,如图14A所示,AR通信界面1401中包括“录制”按钮1402。上述录制操作可以是用户210对“录制”按钮1402的点击操作(如单击操作、双击操作或者长按操作等)。
在另一些实施例中,电子设备200开始与电子设备300进行AR通信时,便可以自动录制AR模型311和AR模型212面对面交流的视频数据。电子设备200与电子设备300的AR通信结束时,电子设备200可以显示用于确认是否保存上述视频数据的提示信息。如果用户210确认保存上述视频数据,电子设备200才可以保存录制的视频数据。如果用户确认不保存上述视频数据,电子设备200则可以删除录制的视频数据。例如,电子设备200与电子设备300的AR通信结束时,电子设备200可以显示图14B所示的提示框1403。提示框1403中包括提示信息“AR通话结束,是否保存通话视频?”1404、“保存”按钮1405和“不保存”按钮1406。电子设备200响应于用户对“保存”按钮1405的点击操作(如单击操作),可以保存录制的视频。电子设备200响应于用户对“不保存”按钮1406的点击操作(如单击操作),可以删除录制的视频数据。
如果上述AR通信是电子设备200调用“电话”应用(即系统应用)与电子设备300进行语音通信或者视频通信的过程中,发起的AR通信,那么电子设备200则可以在电子设备200的“照片”应用(即系统应用)中保存录制的视频数据。用户210可以在“照片”应用中查看该视频数据。
如果上述AR通信是电子设备200调用第三方应用(如“微信”应用或者上述AR应用)与电子设备300进行语音通信或者视频通信的过程中,发起的AR通信,那么电子设备200则可以在该第三方应用中保存录制的视频数据。用户210可以在第三方应用中查看该视频数据。当然,在这种情况下,电子设备200也可以在“照片”应用(即系统应用)中保存录制的视频数据,本实施例对此不作限制。
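下面给出一段示意性的Python代码,概括上述“AR通信结束时根据用户选择保存或删除录制的视频数据,并按发起AR通信的应用决定保存位置”的流程。其中finish_ar_call等名称与参数均为本文假设的简化表示。

def finish_ar_call(recorded_video: bytes, user_choice: str, initiated_from: str) -> str:
    """AR通信结束时的处理:user_choice为“保存”或“不保存”,initiated_from为发起AR通信的应用。"""
    if user_choice != "保存":
        return "删除录制的视频数据"
    # 系统“电话”应用发起的AR通信保存到“照片”应用;第三方应用发起的保存到该第三方应用
    target = "照片" if initiated_from == "电话" else initiated_from
    return f"将{len(recorded_video)}字节的通话视频保存到“{target}”应用中"

print(finish_ar_call(b"\x00" * 1024, "保存", "电话"))
print(finish_ar_call(b"\x00" * 1024, "不保存", "微信"))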
在另一些实施例中,电子设备200显示包括用户310的AR模型311的AR通信界面后,电子设备200可以接收用户210在触摸屏上对AR模型311的第四操作(如双击操作或者长按操作等)。响应于用户210对触摸屏显示的AR模型311的第四操作,电子设备200可以显示用户310的联系人信息。该联系人信息可以包括用户310的电话号码、电子邮箱地址和头像等。其中,该联系人信息保存在电子设备200中,或者该联系人信息是电子设备200响应于上述第四操作从云端(如AR服务器)获取的。
在另一些实施例中,在电子设备200与电子设备300进行AR通信的过程中,电子设备200的麦克风可以接收用户210发出的语音数据(即第一声音信号)。电子设备200可以将第一声音信号转换为第一音频电信号。然后,电子设备200向电子设备300发送该第一音频电信号。电子设备300可以将来自电子设备200的第一音频电信号转换为第一声音信号,并通过受话器(也称“听筒”)播放该第一声音信号,或者通过电子设备300的扬声器(也称“喇叭”)播放该第一声音信号。其中,上述“麦克风270C捕获的语音数据”具体为麦克风捕获的第一声音信号。同样的,电子设备300的麦克风可以接收用户310发出的语音数据(即第二声音信号)。电子设备300可以将第二声音信号转换为第二音频电信号。然后,电子设备300向电子设备200发送该第二音频电信号。电子设备200可以将来自电子设备300的第二音频电信号转换为第二声音信号,并通过受话器播放该第二声音信号,或者通过电子设备200的扬声器播放该第二声音信号。
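下面用一段示意性的Python代码概括上述语音数据的传输过程,其中用简单的编码/解码模拟“声音信号与音频电信号之间的转换”;capture_and_send、receive_and_play等名称均为本文为说明而作的假设。

def capture_and_send(sound: str) -> bytes:
    """发送端:麦克风接收声音信号,转换为音频电信号后发送。"""
    return sound.encode("utf-8")          # 模拟“声音信号 -> 音频电信号”

def receive_and_play(audio_signal: bytes) -> str:
    """接收端:把来自对端的音频电信号转换回声音信号,通过受话器或扬声器播放。"""
    sound = audio_signal.decode("utf-8")  # 模拟“音频电信号 -> 声音信号”
    return f"通过受话器/扬声器播放:{sound}"

signal = capture_and_send("你好")  # 电子设备200侧
print(receive_and_play(signal))    # 电子设备300侧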
参考图7C,在电子设备200与电子设备300进行AR通信的过程中,电子设备200可以识别电子设备200的麦克风采集的语音数据(即第一声音信号)和由第二音频电信号转换而来的语音数据(即第二声音信号)。如果电子设备200识别到文本与预设文本对应的语音数据,电子设备200则可以显示AR模型311与AR模型212执行与预设文本对应的动作的动态图像。其中,电子设备200中可以保存多个预设文本以及各个预设文本对应的动作。例如,预设文本“你好”对应的动作为握手。预设文本“再见”对应的动作为挥手。
一般而言,用户与其他用户交流之前都会互相打招呼。例如,电子设备200与电子设备300开始进行AR通信时,用户210与用户310可以相互打招呼说“你好”。电子设备200识别到语音数据“你好”,则可以显示AR模型311与AR模型212握手的动态图像。即电子设备可以控制AR模型根据用户的语音数据执行相应的动作,增加了用户与AR模型之间的互动,以及AR模型之间的互动,可以提升AR通信的用户体验。
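下面给出一段示意性的Python代码,说明“识别到预设文本时触发AR模型执行对应动作”的逻辑。其中preset_actions中的映射即上文“你好”对应握手、“再见”对应挥手的示例,函数名为本文假设。

preset_actions = {"你好": "握手", "再见": "挥手"}  # 预设文本与动作的对应关系

def on_recognized_text(text: str) -> str:
    """识别到的语音数据文本中包含预设文本时,触发两个AR模型执行对应动作。"""
    for preset, action in preset_actions.items():
        if preset in text:
            return f"显示AR模型311与AR模型212{action}的动态图像"
    return "仅显示字幕,不触发动作"

print(on_recognized_text("你好,很高兴见到你"))  # 触发握手
print(on_recognized_text("我们下次再聊,再见"))  # 触发挥手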
在另一些实施例中,在电子设备200与电子设备300进行AR通信的过程中,电子设备200可以识别电子设备200的麦克风采集的语音数据(即第一声音信号)和由第二音频电信号转换而来的语音数据(即第二声音信号),将识别到的语音数据转换为文本。电子设备200可以在AR通信界面中显示转换得到的文本(即字幕)。
结合上述实施例及附图,如图15所示,本实施例提供一种增强现实的通信方法,该方法可以包括S1501-S1504:
S1501、第一电子设备响应于第一操作,向第二电子设备发送AR通信请求信息。
其中,S1501可以参考上述实施例对S902的详细介绍,本申请实施例这里不再赘述。
S1502、第二电子设备接收第一电子设备发送的AR通信请求信息。
其中,S1502可以参考上述实施例对S903的详细介绍,本申请实施例这里不再赘述。
S1503、第一电子设备与第二电子设备建立AR通信链接。
第二电子设备响应于AR通信请求信息,可以与第一电子设备建立AR通信链接。其中,第一电子设备可以通过基站和核心网设备与第二电子设备建立AR通信链接。或者,第一电子设备可以通过专门用于提供AR通信服务的AR服务器与第二电子设备建立AR通信链接。第一电子设备与第二电子设备建立AR通信链接的具体方法可以参考上述实施例中电子设备200与电子设备300建立通信链接的方法。
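为便于理解S1501-S1503的消息交互,下面给出一段示意性的Python代码,用函数调用模拟两台设备之间的消息传递;经基站和核心网设备或AR服务器转发的细节被省略,消息字段与函数名均为本文为说明而作的假设。

def second_device_handle_request(request: dict, user_agrees: bool) -> dict:
    """第二电子设备:接收AR通信请求信息(S1502),根据用户意愿决定是否建立AR通信链接。"""
    if request.get("type") != "AR_COMM_REQUEST":
        return {"type": "IGNORED"}
    return {"type": "AR_COMM_ACCEPT" if user_agrees else "AR_COMM_REJECT"}

def first_device_start_ar_call(callee: str, user_agrees: bool) -> str:
    request = {"type": "AR_COMM_REQUEST", "from": "第一电子设备", "to": callee}  # S1501
    response = second_device_handle_request(request, user_agrees)
    if response["type"] == "AR_COMM_ACCEPT":
        return "AR通信链接建立成功,双方可以开始显示AR通信界面"                  # S1503
    return "对方未同意,不建立AR通信链接"

print(first_device_start_ar_call("第二电子设备", user_agrees=True))
print(first_device_start_ar_call("第二电子设备", user_agrees=False))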
S1504、第一电子设备在触摸屏上显示第一AR通信界面a。第一AR通信界面a中包括第一现实场景的图像,以及处于第一现实场景的第一AR模型a和第二AR模型a。
其中,第一现实场景可以是第一电子设备的第二摄像头采集的现实场景。第一AR模型是第一电子设备对应的第一用户的AR模型,在第一AR模型上可以呈现第一用户的表情和动作,并通过第一电子设备的触摸屏显示出来。第二AR模型是第二电子设备对应的第二用户的AR模型,在第二AR模型上可以呈现第二用户的表情和动作,并通过第一电子设备的触摸屏显示出来。
本实施例中,以第一电子设备是图2所示的电子设备200,第二电子设备是图2所示的电子设备300为例。第一AR通信界面a可以为图3C所示的AR通信界面303。
通过上述增强现实的通信方法,电子设备显示的AR通信界面中包括电子设备所处的现实场景的图像,以及处于现实场景的第一AR模型和第二AR模型。并且,可以在第一AR模型上呈现第一用户(即主叫方)的表情和动作,在第二AR模型上呈现第二用户(即被叫方)的表情和动作。如此,可以在电子设备进行AR通信的过程中,为用户提供在现实场景的互动的服务,可以提升用户的通信体验。
在一些实施例中,第二电子设备与第一电子设备进行AR通信的过程中,也可以显示AR通信界面。具体的,如图15所示,在S1503之后,本申请实施例的方法还可以包括S1505:
S1505、第二电子设备在触摸屏显示第一AR通信界面b。第一AR通信界面b中包括第二现实场景的图像,以及处于第二现实场景的第一AR模型b和第二AR模型b。
其中,第二现实场景是第二电子设备的第二摄像头采集的现实场景。第一AR模型是第一电子设备对应的第一用户的AR模型。第二电子设备在第一AR模型上呈现第一用户的表情和动作。第二AR模型是第二电子设备对应的第二用户的AR模型。第二电子设备在第二AR模型上呈现第二用户的表情和动作。例如,第一AR通信界面b可以为图3D所示的AR通信界面304。
通过上述增强现实的通信方法,进行AR通信的双方电子设备都可以显示AR通信界面。即进行AR通信的双方电子设备都可以为用户提供在现实场景的互动的服务,可以提升用户的通信体验。
在一些实施例中,如图16所示,在S1501和S1502之前,上述增强现实的通信方法还可以包括S1601:
S1601、第一电子设备与第二电子设备进行语音通信或者视频通信。
其中,第一电子设备与第二电子设备进行语音通信或者视频通信时,第一电子设备在触摸屏上显示与第二电子设备进行语音通信或者视频通信的图形用户界面;第二电子设备在触摸屏上显示与第一电子设备进行语音通信或者视频通信的图形用户界面。
上述第一操作可以是在第一电子设备的触摸屏上对语音通信或者所述视频通信的图形用户界面输入的第一预设手势。例如,S形手势或者向上滑动手势等。
或者,上述语音通信或者视频通信的GUI中包括AR通信按钮,第一操作是在第一电子设备的触摸屏上对AR通信按钮的点击操作。例如,视频通信的GUI可以为图4B所示的视频通信界面403。视频通信界面403中包括AR通信按钮404。语音通信的GUI可以为图4A所示的语音通信界面401。语音通信界面401中包括AR通信按钮402。
或者,第一操作是语音通信或者视频通信的过程中,对第一预设按键的点击操作,第一预设按键是第一电子设备的物理按键。例如,该第一预设按键可以为“音量+”按键与“锁屏”按键的组合键。
示例性的,以第一操作是对语音通信或者视频通信的图形用户界面中的AR通信按钮的点击操作为例。如图16所示,图15中的S1501可以替换为S1602:
S1602、响应于对语音通信或者视频通信的GUI中的AR通信按钮的点击操作,第一电子设备向第二电子设备发送AR通信请求信息。
本实施例中,第一电子设备可以在与第二电子设备进行语音通信或者视频通信的过程中发起AR通信,可以提高语音通信或者视频通信的用户体验。
在另一些实施例中,第一电子设备与第二电子设备中安装有AR应用,该AR应用是用于提供AR通信服务的客户端。在S1501之前,上述增强现实的通信方法还可以包括:响应于对AR应用的应用图标的点击操作,第一电子设备显示AR应用界面。该AR应用界面包括至少一个联系人选项,至少一个联系人选项中包括第二电子设备对应的联系人选项。在该实施例中,上述第一操作是第一用户对第二电子设备对应的联系人选项的点击操作。
在一些实施例中,在S1503之前,第二电子设备可以请求用户确认是否同意第二电子设备与第一电子设备进行AR通信。例如,如图17所示,在图15所示的S1503之前,本申请实施例的方法还可以包括S1701,S1503可以包括S1702:
S1701、响应于AR通信请求信息,第二电子设备呈现第一提示信息,第一提示信息用于确认是否同意第二电子设备与第一电子设备进行AR通信。
S1702、第二电子设备响应于第五操作,与第一电子设备建立AR通信链接。
其中,第五操作用于确认第二电子设备同意与第一电子设备进行AR通信。第五操作的详细描述可以参考上述实施例中的相关内容,本申请实施例这里不再赘述。
本实施例中,第二电子设备响应于AR通信请求信息,不会直接与第一电子设备建立AR通信链接,而是根据用户的意愿决定是否与第一电子设备进行AR通信,可以提升用户的通信体验。
在一些实施例中,在第二电子设备呈现第一提示信息之前,第二电子设备可以对第一电子设备的合法性进行鉴权。如图18所示,图17所示的S1701可以包括S1801-S1802:
S1801、响应于AR通信请求信息,第二电子设备判断第一电子设备是否合法。
其中,如果第一电子设备合法,第二电子设备则执行S1802;如果第一电子设备不合法,第二电子设备则拒绝与第一电子设备进行AR通信。
其中,第一电子设备合法包括:第二电子设备的白名单中保存有第一电子设备的设备标识信息;或者,第二电子设备的黑名单中未保存第一电子设备的设备标识信息;第一电子设备的设备标识信息包括第一电子设备的电话号码。
S1802、第二电子设备呈现第一提示信息。
例如,第二电子设备可以发出语音提示信息,即第一提示信息为语音提示信息。例如,第二电子设备可以发出语音提示信息“请确认是否与第一电子设备进行AR通信?”。或者,第二电子设备显示包括第一提示信息的图形用户界面。或者,第二电子设备可以显示包括第一提示信息的图形用户界面,并发出振动提示或者上述语音提示信息。例如,第二电子设备可以在触摸屏上显示图5A所示的GUI501。该图形用户界面501中包括第一提示信息“请确认是否与用户210进行AR通信?”。
通过上述增强现实的通信方法,第二电子设备在第一电子设备合法时,才会呈现第一提示信息,可以拦截非法电子设备对第二电子设备的骚扰,可以提升用户的通信体验。
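下面给出一段示意性的Python代码,说明S1801中基于白名单/黑名单对第一电子设备进行合法性鉴权的逻辑。其中以电话号码作为设备标识信息,名单内容与函数名均为本文假设的示例。

whitelist = {"+86-130-0000-0001"}   # 白名单:保存允许进行AR通信的设备标识信息
blacklist = {"+86-130-9999-9999"}   # 黑名单:保存拒绝进行AR通信的设备标识信息

def is_legal(caller_number: str, use_whitelist: bool = True) -> bool:
    """白名单中保存有对方的设备标识信息,或黑名单中未保存该信息,则认为第一电子设备合法。"""
    if use_whitelist:
        return caller_number in whitelist
    return caller_number not in blacklist

def handle_ar_request(caller_number: str) -> str:
    if not is_legal(caller_number):
        return "第一电子设备不合法,拒绝与其进行AR通信"   # 不呈现第一提示信息
    return "呈现第一提示信息:请确认是否进行AR通信?"      # S1802

print(handle_ar_request("+86-130-0000-0001"))
print(handle_ar_request("+86-131-1234-5678"))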
在一些实施例中,第一AR模型a和第二AR模型a是第一电子设备中预先设定的AR模型。第一AR模型b和第二AR模型b是第二电子设备中预先设定的AR模型。其中,本实施例中,在电子设备中预先设定AR模型的方法可以参考对图7D-图7E的介绍,本申请实施例这里不再赘述。
在另一些实施例中,上述S1504可以包括S1901-S1905。例如,如图19所示,图18所示的S1504可以包括S1901-S1905:
S1901、第一电子设备在触摸屏上显示第二AR通信界面a,第二AR通信界面a中包括第一现实场景的图像、但不包括第一AR模型a和第二AR模型a。
例如,第二AR通信界面a可以为图6A所示的AR通信界面601。
S1902、响应于在触摸屏上对第二AR通信界面a的第二操作,第一电子设备在触摸屏上显示模型选择界面,模型选择界面包括多个模型选项,每个模型选项对应一个AR模型。
例如,模型选择界面可以为图6B所示的模型选择界面603。模型选择界面603中包括“AR模型(用户310)”选项609、“由对方设置AR模型”选项608、“AR模型1”选项606和“AR模型2”选项607等。
S1903、响应于在触摸屏上对多个模型选项中的第一模型选项的第一选择操作,第一电子设备在触摸屏上显示第三AR通信界面a,第三AR通信界面a包括第一现实场景的图像和第一模型选项对应的第二AR模型a、但不包括第一AR模型a。
例如,第三AR通信界面a可以为图6D所示的AR通信界面613。
S1904、响应于在触摸屏上对第三AR通信界面a的第二操作,第一电子设备在触摸屏上显示模型选择界面。
S1905、响应于在触摸屏上对多个模型选项中的第二模型选项的第二选择操作,第一电子设备在触摸屏上显示第一AR通信界面a,第一AR通信界面a包括第一现实场景的图像、第二AR模型a和第二模型选项对应的第一AR模型a。
例如,第一AR通信界面a可以为图7C所示的AR通信界面706(相当于图3C所示的AR通信界面303)。
在一些实施例中,响应于建立AR通信链接,第一电子设备可以开启第一摄像头和第二摄像头。例如,如图19所示,在S1901之前,上述增强现实的通信方法还可以包括S1900:
S1900、响应于建立AR通信链接,第一电子设备开启第一摄像头a和第二摄像头a,第二摄像头a用于采集第一现实场景的图像,第一摄像头a用于采集第一用户的表情和动作。
当然,第二电子设备也可以开启第一摄像头b和第二摄像头b,第二摄像头b用于采集第二现实场景的图像,第一摄像头b用于采集第二用户的表情和动作。
其中,S1505中第二电子设备在触摸屏显示第一AR通信界面b的方法,可以参考S1901-S1905中对第一电子设备在触摸屏显示第一AR通信界面a的方法的描述,本实施例这里不再赘述。
通过上述增强现实的通信方法,第一电子设备刚开始显示的AR通信界面(如第二AR通信界面)中仅包括第一电子设备的第二摄像头采集的第一现实场景的图像,不包括第一AR模型和第二AR模型。可以由用户在该第二AR通信界面中添加AR模型。通过上述方法,可以由用户选择符合用户喜好的AR模型,可以提升用户的通信体验。
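下面用一段示意性的Python代码概括S1900-S1905中第一电子设备AR通信界面的变化过程。其中ArCallUi类以及模型选项与AR模型的对应关系均为本文为说明而作的简化假设。

class ArCallUi:
    """用一个简单的状态对象表示第一电子设备AR通信界面中已添加的AR模型。"""
    def __init__(self):
        self.cameras_on = False
        self.models = []

    def on_link_established(self) -> str:           # S1900与S1901
        self.cameras_on = True                      # 开启第一摄像头a和第二摄像头a
        return "第二AR通信界面:仅包含第一现实场景的图像"

    def add_model_from_selection(self, model_name: str) -> str:  # S1902-S1905
        self.models.append(model_name)
        if len(self.models) == 1:
            return f"第三AR通信界面:第一现实场景的图像 + {model_name}"
        return "第一AR通信界面:第一现实场景的图像 + " + " + ".join(self.models)

ui = ArCallUi()
print(ui.on_link_established())
print(ui.add_model_from_selection("第二AR模型a"))  # 第一次选择操作,先添加对方用户的AR模型
print(ui.add_model_from_selection("第一AR模型a"))  # 第二次选择操作,再添加本端用户的AR模型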
在一些实施例中,在上述S1504或者S1905之后,上述增强现实的通信方法还可以包括S2001-S2002。例如,如图20所示,在图19所示的S1905之后,上述增强现实的通信方法还可以包括S2001-S2002:
S2001、第一电子设备响应于在触摸屏上对第一AR模型a的点击操作,显示模型选择界面,模型选择界面包括多个模型选项,每个模型选项对应一个AR模型。
S2002、响应于在触摸屏上对所述多个模型选项中的第三模型选项的点击操作,第一电子设备将第一AR通信界面中的第一AR模型更换为第三模型选项对应的第三AR模型。
需要注意的是,第一电子设备也可以在S1504或S1905之后,响应于用户的操作随时变更第二AR模型a。第二电子设备也可以在S1505之后,响应于用户的操作随时变更第一AR模型b和第二AR模型b。第一电子设备变更第二AR模型a,以及第二电子设备变更第一AR模型b和第二AR模型b的方法,可以参考S2001-S2002,本实施例这里不再赘述。
在第二电子设备与第一电子设备进行AR通信的过程中,第一电子设备可以根据用户的操作,随时变更AR模型,显示符合用户喜好的AR模型,可以提升用户的通信体验。
在一些实施例中,在上述S1504或者S1905之后,上述增强现实的通信方法还可以包括:响应于在触摸屏上对所述第二AR模型的第四操作,第一电子设备显示第二用户的联系人信息。例如,联系人信息包括第二用户的电话号码、电子邮箱地址或者头像中的至少一个。
其中,在第一电子设备与第二电子设备进行AR通信的过程中,用户只需要在触摸屏上对AR模型执行操作,第一电子设备便可以显示对应用户的联系人信息;而不需要退出AR通信界面,在通讯录中查找该用户的联系人信息。
在一些实施例中,在上述S1504或者S1905之后,上述增强现实的通信方法还可以包括S2101-S2102。例如,如图21所示,在图19所示的S1905之后,上述增强现实的通信方法还可以包括S2101-S2102:
S2101、第一电子设备识别第一电子设备的麦克风采集的语音数据,以及由来自第二电子设备的音频电信号转换而来的语音数据。
S2102、第一电子设备在触摸屏显示识别到的语音数据的文本。
需要注意的是,第二电子设备也可以在S1505之后,识别第二电子设备的麦克风采集的语音数据,以及由来自第一电子设备的音频电信号转换而来的语音数据;在触摸屏显示识别到的语音数据的文本。
通过上述增强现实的通信方法,第一电子设备可以在AR通信界面中显示AR通信的语音数据的文本(即字幕)。第一电子设备显示AR通信的语音数据的文本,可以从视觉上为用户呈现第一电子设备与第二电子设备进行AR通信的过程中,双方用户交流的内容。
在一些实施例中,如图21所示,在上述S2101之后,上述增强现实的通信方法还可以包括S2103:
S2103、第一电子设备识别到文本与预设文本对应的语音数据时,在触摸屏显示第一AR模型与第二AR模型执行预设文本对应的动作的动态图像。
例如,预设文本“你好”对应的动作为握手。预设文本“再见”对应的动作为挥手。
通过上述增强现实的通信方法,第一AR模型和第二AR模型根据双方用户的语音数据进行互动,使得AR通信界面所展示的内容更加符合现实场景中用户面对面的交流的画面,可以提升AR通信的真实感,提升用户的通信体验。
可以理解的是,上述电子设备(如电子设备200或者电子设备300)为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请实施例能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请实施例的范围。
本申请实施例可以根据上述方法示例对上述电子设备进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用集成的单元的情况下,图22示出了上述实施例中所涉及的电子设备的一种可能的结构示意图。该电子设备2200包括:处理模块2201、显示模块2202和通信模块2203。
其中,处理模块2201用于对电子设备2200的动作进行控制管理。显示模块2202用于显示处理模块2201生成的图像。通信模块2203用于支持电子设备2200与其他网络实体的通信。电子设备2200可以作为主叫方的电子设备,也可以作为被叫方的电子设备。
在上述电子设备2200作为主叫方的电子设备(如上述实施例中的电子设备200或者第一电子设备)的情况下,上述处理模块2201可以用于支持电子设备2200执行上述方法实施例中的S901,S902,S911,S912中“构建三维人脸模型a”的操作,S913中“接收第二操作”的操作,S914,S919,S1503,S1601,S1702,S1801,S1900,S2002,S2101,和/或用于本文所描述的技术的其它过程。上述显示模块2202可以用于支持电子设备2200执行上述方法实施例中的“显示语音通信界面或者视频通信界面”的操作,“显示AR通信界面”的操作,S912中“显示图像”的操作,S913中“显示AR模型”的操作,S918中“显示第二提示信息”的操作,S1504,S1901,S1902,S1903,S1904,S1905,S2001,S2102,S2103,和/或用于本文所描述的技术的其它过程。上述通信模块2203可以用于支持电子设备2200执行上述方法实施例中的S901、S914和S919中“与电子设备300交互”的操作,S903,S910,S917,S1501,S1503、S1601和S1702中“与第二电子设备的交互”,S1602,和/或用于本文所描述的技术的其它过程。
在上述电子设备2200作为被叫方的电子设备(如上述实施例中的电子设备300或者第二电子设备)的情况下,上述处理模块2201可以用于支持电子设备2200执行上述方法实施例中的S901,S907,S908中“构建三维人脸模型b”的操作,S909中“接收第二操作”的操作,S915,S914,S919,S1503,S1601,S1702,和/或用于本文所描述的技术的其它过程。上述显示模块2202可以用于支持电子设备2200执行上述方法实施例中的“显示语音通信界面或者视频通信界面”的操作,“显示AR通信界面”的操作,S904、S1701和S1802中“显示第一提示信息”的操作,S908中“显示后置摄像头b采集的图像”的操作,S909中“显示添加的AR模型”的操作,S1505,和/或用于本文所描述的技术的其它过程。上述通信模块2203可以用于支持电子设备2200执行上述方法实施例中的S901、S914和S919中“与电子设备200交互”的操作,S904中“接收AR通信请求信息”的操作,S906,S916,S1502,S1503、S1601和S1702中“与第一电子设备的交互”,“接收S1602中的AR通信请求信息”的操作,和/或用于本文所描述的技术的其它过程。
当然,上述电子设备2200中的单元模块包括但不限于上述处理模块2201、显示模块2202和通信模块2203。例如,电子设备2200中还可以包括存储模块和音频模块等。存储模块用于保存电子设备2200的程序代码和数据。音频模块用于在语音通信过程中采集用户发出的语音数据,以及播放语音数据。
其中,处理模块2201可以是处理器或控制器,例如可以是中央处理器(Central Processing Unit,CPU),数字信号处理器(Digital Signal Processor,DSP),专用集成电路(Application-Specific Integrated Circuit,ASIC),现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。处理器可以包括应用处理器和基带处理器。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。所述处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。通信模块2203可以是收发器、收发电路等。存储模块可以是存储器。
例如,处理模块2201为处理器(如图1所示的处理器110),通信模块2203包括移动通信模块(如图1所示的移动通信模块150)和无线通信模块(如图1所示的无线通信模块160)。移动通信模块和无线通信模块可以统称为通信接口。存储模块可以为存储器(如图1所示的内部存储器121)。显示模块2202可以为触摸屏(如图1所示的显示屏194,该显示屏194中集成了显示面板和触控面板)。音频模块可以包括麦克风(如图1所示的麦克风170C)、扬声器(如图1所示的扬声器170A)、受话器(如图1所示的受话器170B)和耳机接口(如图1所示的耳机接口170D)。本申请实施例所提供的电子设备2200可以为图1所示的电子设备100。其中,上述处理器、存储器、通信接口和触摸屏等可以连接在一起,例如通过总线连接。
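下面给出一段示意性的Python代码,说明图22所示功能模块划分的一种可能组织方式;其中的类名与方法均为本文为说明而作的假设。实际实现中,处理模块2201、显示模块2202和通信模块2203可以分别对应上述处理器、触摸屏和通信接口。

class ProcessingModule:
    """处理模块2201:对电子设备2200的动作进行控制管理。"""
    def handle_first_operation(self) -> str:
        return "AR通信请求信息"

class DisplayModule:
    """显示模块2202:显示处理模块生成的图像。"""
    def show(self, interface: str) -> str:
        return f"触摸屏显示:{interface}"

class CommunicationModule:
    """通信模块2203:支持电子设备2200与其他网络实体的通信。"""
    def send(self, message: str, peer: str) -> str:
        return f"向{peer}发送:{message}"

class ElectronicDevice2200:
    def __init__(self):
        self.processing = ProcessingModule()
        self.display = DisplayModule()
        self.communication = CommunicationModule()

    def start_ar_call(self, peer: str):
        message = self.processing.handle_first_operation()
        print(self.communication.send(message, peer))
        print(self.display.show("第一AR通信界面"))

ElectronicDevice2200().start_ar_call("第二电子设备")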
本申请实施例还提供一种计算机存储介质,该计算机存储介质中存储有计算机程序代码,当上述处理器执行该计算机程序代码时,电子设备2200执行图9B、图15-图21中任一附图中的相关方法步骤实现上述实施例中的方法。
本申请实施例还提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行图9B、图15-图21中任一附图中的相关方法步骤实现上述实施例中的方法。
其中,本申请实施例提供的电子设备2200、计算机存储介质或者计算机程序产品均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以使用硬件的形式实现,也可以使用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、ROM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (48)

  1. 一种增强现实的通信方法,其特征在于,包括:
    第一电子设备响应于第一操作,向第二电子设备发送增强现实AR通信请求信息,所述AR通信请求信息用于请求与所述第二电子设备进行AR通信;
    所述第一电子设备与所述第二电子设备建立AR通信链接;
    所述第一电子设备在触摸屏上显示第一AR通信界面,所述第一AR通信界面中包括第一现实场景的图像,以及处于所述第一现实场景的第一AR模型和第二AR模型;
    其中,所述第一现实场景是所述第一电子设备所处的现实场景;所述第一AR模型是所述第一电子设备对应的第一用户的AR模型,所述第二AR模型是所述第二电子设备对应的第二用户的AR模型;在AR通信的过程中,所述第一AR模型按照所述第一电子设备获取的所述第一用户的表情和动作做出相应的表情和动作,所述第二AR模型按照所述第一电子设备获取的所述第二用户的表情和动作做出相应的表情和动作。
  2. 根据权利要求1所述的方法,其特征在于,在所述第一电子设备响应于第一操作,向第二电子设备发送AR通信请求信息之前,所述方法还包括:
    所述第一电子设备与所述第二电子设备进行语音通信或者视频通信,在所述触摸屏上显示所述语音通信或者所述视频通信的图形用户界面;
    其中,所述第一操作是在所述触摸屏上对所述语音通信或者所述视频通信的图形用户界面输入的第一预设手势;或者,所述语音通信或者所述视频通信的图形用户界面中包括AR通信按钮,所述第一操作是在所述触摸屏上对所述AR通信按钮的点击操作;或者,所述第一操作是所述语音通信或者所述视频通信的过程中,对第一预设按键的点击操作,所述第一预设按键是所述第一电子设备的物理按键。
  3. 根据权利要求1所述的方法,其特征在于,所述第一电子设备与所述第二电子设备中安装有AR应用,所述AR应用是用于提供AR通信服务的客户端;
    在所述第一电子设备响应于第一操作,向第二电子设备发送AR通信请求信息之前,所述方法还包括:
    响应于对所述AR应用的应用图标的点击操作,所述第一电子设备显示AR应用界面,所述AR应用界面包括至少一个联系人选项,所述至少一个联系人选项中包括所述第二电子设备对应的联系人选项;
    其中,所述第一操作是所述第一用户对所述第二电子设备对应的联系人选项的点击操作。
  4. 根据权利要求1-3中任意一项所述的方法,其特征在于,所述第一AR模型是所述第一电子设备中针对所述第一电子设备预先设定的AR模型,所述第二AR模型是所述第一电子设备中针对所述第二电子设备预先设定的AR模型。
  5. 根据权利要求1-3中任意一项所述的方法,其特征在于,所述第一电子设备在触摸屏上显示第一AR通信界面,包括:
    响应于建立所述AR通信链接,所述第一电子设备在所述触摸屏上显示第二AR通信界面,所述第二AR通信界面中包括所述第一现实场景的图像、但不包括所述第一AR模型和所述第二AR模型;
    响应于在所述触摸屏上对所述第二AR通信界面的第二操作,所述第一电子设备在所述触摸屏上显示模型选择界面,所述模型选择界面包括多个模型选项,每个模型选项对应一个AR模型;
    响应于在所述触摸屏上对所述多个模型选项中的第一模型选项的第一选择操作,所述第一电子设备在所述触摸屏上显示第三AR通信界面,所述第三AR通信界面包括所述第一现实场景的图像和所述第一模型选项对应的所述第二AR模型、但不包括所述第一AR模型;
    响应于在所述触摸屏上所述第三AR通信界面的第二操作,所述第一电子设备在所述触摸屏上显示所述模型选择界面;
    响应于在所述触摸屏上对所述多个模型选项中的第二模型选项的第二选择操作,所述第一电子设备在所述触摸屏上显示所述第一AR通信界面,所述第一AR通信界面包括所述第一现实场景的图像、所述第二AR模型和所述第二模型选项对应的所述第一AR模型。
  6. 根据权利要求1-5中任意一项所述的方法,其特征在于,所述第一电子设备包括第一摄像头和第二摄像头,所述第二电子设备包括第一摄像头;
    响应于建立所述AR通信链接,所述第一电子设备开启所述第一电子设备的第一摄像头和第二摄像头,所述第一电子设备的第二摄像头用于采集所述第一现实场景的图像,所述第一电子设备的第一摄像头用于采集所述第一用户的表情和动作。
  7. 根据权利要求1-6中任意一项所述的方法,其特征在于,在所述第一电子设备在触摸屏上显示第一AR通信界面之后,所述方法还包括:
    所述第一电子设备响应于在所述触摸屏上对所述第一AR模型的点击操作,显示模型选择界面,所述模型选择界面包括多个模型选项,每个模型选项对应一个AR模型;
    响应于在所述触摸屏上对所述多个模型选项中的第三模型选项的点击操作,所述第一电子设备将所述第一AR通信界面中的所述第一AR模型更换为所述第三模型选项对应的第三AR模型。
  8. 根据权利要求1-7中任意一项所述的方法,其特征在于,在所述第一电子设备在触摸屏上显示第一AR通信界面之后,所述方法还包括:
    响应于在所述触摸屏上的第三操作,所述第一电子设备开始录制所述第一电子设备与所述第二电子设备进行AR通信的视频数据。
  9. 根据权利要求1-8中任意一项所述的方法,其特征在于,在所述第一电子设备在触摸屏上显示第一AR通信界面之后,所述方法还包括:
    响应于在所述触摸屏上对所述第二AR模型的第四操作,所述第一电子设备显示所述第二用户的联系人信息;所述联系人信息包括所述第二用户的电话号码、电子邮箱地址或者头像中的至少一个。
  10. 根据权利要求1-9中任意一项所述的方法,其特征在于,在所述第一电子设备在触摸屏上显示第一AR通信界面之后,所述方法还包括:
    所述第一电子设备识别所述第一电子设备的麦克风采集的语音数据,以及由来自所述第二电子设备的音频电信号转换而来的语音数据;
    所述第一电子设备在所述触摸屏显示识别到的语音数据的文本。
  11. 根据权利要求1-10中任意一项所述的方法,其特征在于,所述方法还包括:
    所述第一电子设备识别到文本与预设文本对应的语音数据时,在所述触摸屏显示所述第一AR模型与所述第二AR模型执行所述预设文本对应的动作的动态图像。
  12. 一种增强现实的通信方法,其特征在于,包括:
    第二电子设备接收第一电子设备发送的增强现实AR通信请求信息,所述AR通信请求信息用于请求与所述第二电子设备进行AR通信;
    响应于所述AR通信请求信息,所述第二电子设备与所述第一电子设备建立AR通信链接;
    所述第二电子设备在触摸屏上显示第一AR通信界面,所述第一AR通信界面中包括第二现实场景的图像,以及处于所述第二现实场景的第一AR模型和第二AR模型;
    其中,所述第二现实场景是所述第二电子设备所处的现实场景;所述第一AR模型是所述第一电子设备对应的第一用户的AR模型;所述第二AR模型是所述第二电子设备对应的第二用户的AR模型;在AR通信的过程中,所述第一AR模型按照所述第二电子设备获取的所述第一用户的表情和动作做出相应的表情和动作,所述第二AR模型按照所述第二电子设备获取的所述第二用户的表情和动作做出相应的表情和动作。
  13. 根据权利要求12所述的方法,其特征在于,所述响应于所述AR通信请求信息,所述第二电子设备与所述第一电子设备建立AR通信链接,包括:
    响应于所述AR通信请求信息,所述第二电子设备呈现第一提示信息,所述第一提示信息用于确认所述第二电子设备是否与所述第一电子设备进行AR通信;
    响应于同意进行AR通信的操作,所述第二电子设备与所述第一电子设备建立所述AR通信链接。
  14. 根据权利要求13所述的方法,其特征在于,所述响应于所述AR通信请求信息,所述第二电子设备呈现第一提示信息,包括:
    响应于所述AR通信请求信息,所述第二电子设备判断所述第一电子设备是否合法;
    如果所述第一电子设备合法,所述第二电子设备呈现所述第一提示信息;
    其中,所述第一电子设备合法包括:所述第二电子设备的白名单中保存有所述第一电子设备的设备标识信息,或者,所述第二电子设备的黑名单中未保存所述第一电子设备的设备标识信息;所述第一电子设备的设备标识信息包括所述第一电子设备的电话号码。
  15. 根据权利要求12-14中任意一项所述的方法,其特征在于,在所述第二电子设备接收第一电子设备发送的AR通信请求信息之前,所述方法还包括:
    所述第二电子设备与所述第一电子设备进行语音通信或者视频通信,在所述触摸屏上显示所述语音通信或者所述视频通信的图形用户界面。
  16. 根据权利要求12-15中任意一项所述的方法,其特征在于,所述第一AR模型是所述第二电子设备中针对所述第一电子设备预先设定的AR模型,所述第二AR模型是所述第二电子设备中针对所述第二电子设备预先设定的AR模型。
  17. 根据权利要求12-15中任意一项所述的方法,其特征在于,所述第二电子设备在触摸屏上显示第一AR通信界面,包括:
    响应于建立所述AR通信链接,所述第二电子设备在所述触摸屏上显示第二AR通信界面,所述第二AR通信界面中包括所述第二现实场景的图像、但不包括所述第一AR模型和所述第二AR模型;
    响应于在所述触摸屏上对所述第二AR通信界面的第二操作,所述第二电子设备在所述触摸屏上显示模型选择界面,所述模型选择界面包括多个模型选项,每个模型选项对应一个AR模型;
    响应于在所述触摸屏上对所述多个模型选项中的第一模型选项的第一选择操作,所述第二电子设备在所述触摸屏上显示第三AR通信界面,所述第三AR通信界面包括所述第二现实场景的图像和所述第一模型选项对应的所述第一AR模型、但不包括所述第二AR模型;
    响应于在所述触摸屏上所述第三AR通信界面的第二操作,所述第二电子设备在所述触摸屏上显示所述模型选择界面;
    响应于在所述触摸屏上对所述多个模型选项中的第二模型选项的第二选择操作,所述第二电子设备在所述触摸屏上显示所述第一AR通信界面,所述第一AR通信界面包括所述第二现实场景的图像、所述第一AR模型和所述第二模型选项对应的所述第二AR模型。
  18. 根据权利要求12-17中任意一项所述的方法,其特征在于,所述第二电子设备包括第一摄像头和第二摄像头,所述第一电子设备包括第一摄像头;
    响应于建立所述AR通信链接,所述第二电子设备开启所述第二电子设备的第一摄像头和第二摄像头,所述第二电子设备的第二摄像头用于采集所述第二现实场景的图像,所述第二电子设备的第一摄像头用于采集所述第二用户的表情和动作。
  19. 根据权利要求12-18中任意一项所述的方法,其特征在于,在所述第二电子设备在触摸屏上显示第一AR通信界面之后,所述方法还包括:
    所述第二电子设备响应于在所述触摸屏上对所述第一AR模型的点击操作,显示模型选择界面,所述模型选择界面包括多个模型选项,每个模型选项对应一个AR模型;
    响应于在所述触摸屏上对所述多个模型选项中的第三模型选项的点击操作,所述第二电子设备将所述第一AR通信界面中的所述第一AR模型更换为所述第三模型选项对应的第三AR模型。
  20. 根据权利要求12-19中任意一项所述的方法,其特征在于,在所述第二电子设备在触摸屏上显示第一AR通信界面之后,所述方法还包括:
    响应于在所述触摸屏上的第三操作,所述第二电子设备开始录制所述第二电子设备与所述第一电子设备进行AR通信的视频数据。
  21. 根据权利要求12-20中任意一项所述的方法,其特征在于,在所述第二电子设备在触摸屏上显示第一AR通信界面之后,所述方法还包括:
    响应于在所述触摸屏上对所述第一AR模型的第四操作,所述第二电子设备显示所述第一用户的联系人信息;所述联系人信息包括所述第一用户的电话号码、电子邮箱地址或者头像中的至少一个。
  22. 根据权利要求12-21中任意一项所述的方法,其特征在于,在所述第二电子设备在触摸屏上显示第一AR通信界面之后,所述方法还包括:
    所述第二电子设备识别所述第二电子设备的麦克风采集的语音数据,以及由来自所述第一电子设备的音频电信号转换而来的语音数据;
    所述第二电子设备在所述触摸屏显示识别到的语音数据的文本。
  23. 根据权利要求12-22中任意一项所述的方法,其特征在于,所述方法还包括:
    所述第二电子设备识别到文本与预设文本对应的语音数据时,在所述触摸屏显示所述第一AR模型与所述第二AR模型执行所述预设文本对应的动作的动态图像。
  24. 一种电子设备,其特征在于,所述电子设备是第一电子设备,所述电子设备包括:处理器、存储器、触摸屏和通信接口;所述存储器、所述触摸屏和所述通信接口与所述处理器耦合;所述存储器用于存储计算机程序代码;所述计算机程序代码包括计算机指令,当所述处理器执行上述计算机指令时,
    所述处理器,用于接收用户的第一操作;
    所述通信接口,用于响应于第一操作,向第二电子设备发送增强现实AR通信请求信息,所述AR通信请求信息用于请求与所述第二电子设备进行AR通信;与所述第二电子设备建立AR通信链接;
    所述触摸屏,用于在AR通信过程中,显示第一AR通信界面,所述第一AR通信界面中包括第一现实场景的图像,以及处于所述第一现实场景的第一AR模型和第二AR模型;
    其中,所述第一现实场景是所述第一电子设备所处的现实场景;所述第一AR模型是所述第一电子设备对应的第一用户的AR模型,所述第二AR模型是所述第二电子设备对应的第二用户的AR模型;在所述AR通信的过程中,所述触摸屏显示的所述第一AR模型按照所述第一电子设备获取的所述第一用户的表情和动作做出相应的表情和动作,所述触摸屏显示的所述第二AR模型按照所述第一电子设备获取的所述第二用户的表情和动作做出相应的表情和动作。
  25. 根据权利要求24所述的电子设备,其特征在于,所述处理器,还用于在所述通信接口向第二电子设备发送AR通信请求信息之前,与所述第二电子设备进行语音通信或者视频通信;
    所述触摸屏,还用于显示所述语音通信或者所述视频通信的图形用户界面;
    其中,所述第一操作是在所述触摸屏上对所述语音通信或者所述视频通信的图形用户界面输入的第一预设手势;或者,所述语音通信或者所述视频通信的图形用户界面中包括AR通信按钮,所述第一操作是在所述触摸屏上对所述AR通信按钮的点击操作;或者,所述第一操作是所述语音通信或者所述视频通信的过程中,对第一预设按键的点击操作,所述第一预设按键是所述第一电子设备的物理按键。
  26. 根据权利要求24所述的电子设备,其特征在于,所述电子设备与所述第二电子设备中安装有AR应用,所述AR应用是用于提供AR通信服务的客户端;所述存储器中保存所述AR应用的相关信息;
    所述触摸屏,还用于在所述通信接口向所述第二电子设备发送所述AR通信请求信息之前,响应于对所述AR应用的应用图标的点击操作,所述第一电子设备显示AR应用界面,所述AR应用界面包括至少一个联系人选项,所述至少一个联系人选项中包括所述第二电子设备对应的联系人选项;
    其中,所述第一操作是所述第一用户对所述第二电子设备对应的联系人选项的点击操作。
  27. 根据权利要求24-26中任意一项所述的电子设备,其特征在于,所述第一AR模型是所述电子设备中针对所述第一电子设备预先设定的AR模型,所述第二AR模型是所述电子设备中针对所述第二电子设备预先设定的AR模型。
  28. 根据权利要求24-26中任意一项所述的电子设备,其特征在于,所述触摸屏,显示第一AR通信界面,包括:
    响应于建立所述AR通信链接,所述触摸屏显示第二AR通信界面,所述第二AR通信界面中包括所述第一现实场景的图像、但不包括所述第一AR模型和所述第二AR模型;
    响应于在所述触摸屏上对所述第二AR通信界面的第二操作,所述触摸屏显示模型选择界面,所述模型选择界面包括多个模型选项,每个模型选项对应一个AR模型;
    响应于在所述触摸屏上对所述多个模型选项中的第一模型选项的第一选择操作,所述触摸屏显示第三AR通信界面,所述第三AR通信界面包括所述第一现实场景的图像和所述第一模型选项对应的所述第二AR模型、但不包括所述第一AR模型;
    响应于在所述触摸屏上所述第三AR通信界面的第二操作,所述触摸屏显示所述模型选择界面;
    响应于在所述触摸屏上对所述多个模型选项中的第二模型选项的第二选择操作,所述触摸屏显示所述第一AR通信界面,所述第一AR通信界面包括所述第一现实场景的图像、所述第二AR模型和所述第二模型选项对应的所述第一AR模型。
  29. 根据权利要求24-28中任意一项所述的电子设备,其特征在于,所述电子设备还包括第一摄像头和第二摄像头,所述第二电子设备包括第一摄像头;
    所述处理器,还用于响应于建立所述AR通信链接,开启所述第一电子设备的第一摄像头和第二摄像头,所述第一电子设备的第二摄像头用于采集所述第一现实场景的图像,所述第一电子设备的第一摄像头用于采集所述第一用户的表情和动作。
  30. 根据权利要求24-29中任意一项所述的电子设备,其特征在于,所述触摸屏,还用于在显示所述第一AR通信界面之后,响应于在所述触摸屏上对所述第一AR模型的点击操作,显示模型选择界面,所述模型选择界面包括多个模型选项,每个模型选项对应一个AR模型;响应于在所述触摸屏上对所述多个模型选项中的第三模型选项的点击操作,所述触摸屏显示所述第一AR通信界面中的所述第一AR模型更换为所述第三模型选项对应的第三AR模型的AR通信界面。
  31. 根据权利要求24-30中任意一项所述的电子设备,其特征在于,所述处理器,还用于在所述触摸屏显示所述第一AR通信界面之后,响应于在所述触摸屏上的第三操作,开始录制所述第一电子设备与所述第二电子设备进行AR通信的视频数据。
  32. 根据权利要求24-31中任意一项所述的电子设备,其特征在于,所述触摸屏,还用于在所述触摸屏显示所述第一AR通信界面之后,响应于在所述触摸屏上对所述第二AR模型的第四操作,显示所述第二用户的联系人信息;所述联系人信息包括所述第二用户的电话号码、电子邮箱地址或者头像中的至少一个。
  33. 根据权利要求24-32中任意一项所述的电子设备,其特征在于,所述处理器,还用于在所述触摸屏显示所述第一AR通信界面之后,识别所述第一电子设备的麦克风采集的语音数据,以及由来自所述第二电子设备的音频电信号转换而来的语音数据;
    所述触摸屏,还用于显示所述处理器识别到的语音数据的文本。
  34. 根据权利要求24-33中任意一项所述的电子设备,其特征在于,所述触摸屏,还用于所述处理器识别到文本与预设文本对应的语音数据时,显示所述第一AR模型与所述第二AR模型执行所述预设文本对应的动作的动态图像。
  35. 一种电子设备,其特征在于,所述电子设备是第二电子设备,所述电子设备包括:处理器、存储器、触摸屏和通信接口;所述存储器、所述触摸屏和所述通信接口与所述处理器耦合;所述存储器用于存储计算机程序代码;所述计算机程序代码包括计算机指令,当所述处理器执行上述计算机指令时,
    所述通信接口,用于接收第一电子设备发送的增强现实AR通信请求信息,所述AR通信请求信息用于请求与所述第二电子设备进行AR通信;
    所述处理器,用于响应于所述AR通信请求信息,与所述第一电子设备建立AR通信链接;
    所述触摸屏,用于在AR通信过程中,显示第一AR通信界面,所述第一AR通信界面中包括第二现实场景的图像,以及处于所述第二现实场景的第一AR模型和第二AR模型;
    其中,所述第二现实场景是所述第二电子设备所处的现实场景;所述第一AR模型是所述第一电子设备对应的第一用户的AR模型;所述第二AR模型是所述第二电子设备对应的第二用户的AR模型;在所述AR通信的过程中,所述触摸屏显示的所述第一AR模型按照所述第二电子设备获取的所述第一用户的表情和动作做出相应的表情和动作,所述触摸屏显示的所述第二AR模型按照所述第二电子设备获取的所述第二用户的表情和动作做出相应的表情和动作。
  36. 根据权利要求35所述的电子设备,其特征在于,所述处理器,还用于响应于所述AR通信请求信息,呈现第一提示信息,所述第一提示信息用于确认所述第二电子设备是否与所述第一电子设备进行AR通信;响应于用户在所述触摸屏上同意进行AR通信的操作,与所述第一电子设备建立所述AR通信链接。
  37. 根据权利要求36所述的电子设备,其特征在于,所述处理器,用于响应于所述AR通信请求信息,呈现第一提示信息,包括:
    所述处理器,用于响应于所述AR通信请求信息,判断所述第一电子设备是否合法;如果所述第一电子设备合法,呈现所述第一提示信息;
    其中,所述第一电子设备合法包括:所述第二电子设备的白名单中保存有所述第一电子设备的设备标识信息,或者,所述第二电子设备的黑名单中未保存所述第一电子设备的设备标识信息;所述第一电子设备的设备标识信息包括所述第一电子设备的电话号码;所述第二电子设备的白名单和黑名单保存在所述存储器中。
  38. 根据权利要求35-37中任意一项所述的电子设备,其特征在于,所述处理器,还用于在所述通信接口接收所述AR通信请求信息之前,与所述第一电子设备进行语音通信或者视频通信;
    所述触摸屏,还用于显示所述语音通信或者所述视频通信的图形用户界面。
  39. 根据权利要求35-37中任意一项所述的电子设备,其特征在于,所述触摸屏显示的所述第一AR模型是所述电子设备中针对所述第一电子设备预先设定的AR模型,所述触摸屏显示的所述第二AR模型是所述电子设备中针对所述第二电子设备预先设定的AR模型。
  40. 根据权利要求35-37中任意一项所述的电子设备,其特征在于,所述触摸屏,用于显示所述第一AR通信界面,包括:
    响应于建立所述AR通信链接,所述触摸屏显示第二AR通信界面,所述第二AR通信界面中包括所述第二现实场景的图像、但不包括所述第一AR模型和所述第二AR模型;
    响应于在所述触摸屏上对所述第二AR通信界面的第二操作,所述触摸屏显示模型选择界面,所述模型选择界面包括多个模型选项,每个模型选项对应一个AR模型;
    响应于在所述触摸屏上对所述多个模型选项中的第一模型选项的第一选择操作,所述触摸屏显示第三AR通信界面,所述第三AR通信界面包括所述第二现实场景的图像和所述第一模型选项对应的所述第一AR模型、但不包括所述第二AR模型;
    响应于在所述触摸屏上所述第三AR通信界面的第二操作,所述触摸屏显示所述模型选择界面;
    响应于在所述触摸屏上对所述多个模型选项中的第二模型选项的第二选择操作,所述触摸屏显示所述第一AR通信界面,所述第一AR通信界面包括所述第二现实场景的图像、所述第一AR模型和所述第二模型选项对应的所述第二AR模型。
  41. 根据权利要求35-40中任意一项所述的电子设备,其特征在于,所述电子设备包括第一摄像头和第二摄像头,所述第一电子设备包括第一摄像头;
    所述处理器,还用于响应于建立所述AR通信链接,开启所述电子设备的第一摄像头和第二摄像头,所述电子设备的第二摄像头用于采集所述第二现实场景的图像,所述电子设备的第一摄像头用于采集所述第二用户的表情和动作。
  42. 根据权利要求35-41中任意一项所述的电子设备,其特征在于,所述触摸屏,还用于在显示所述第一AR通信界面之后,
    响应于在所述触摸屏上对所述第一AR模型的点击操作,显示模型选择界面,所述模型选择界面包括多个模型选项,每个模型选项对应一个AR模型;
    响应于在所述触摸屏上对所述多个模型选项中的第三模型选项的点击操作,所述触摸屏显示所述第一AR通信界面中的所述第一AR模型更换为所述第三模型选项对应的第三AR模型的AR通信界面。
  43. 根据权利要求35-42中任意一项所述的电子设备,其特征在于,所述处理器,还用于在所述触摸屏显示所述第一AR通信界面之后,响应于在所述触摸屏上的第三操作,开始录制所述第二电子设备与所述第一电子设备进行AR通信的视频数据。
  44. 根据权利要求35-43中任意一项所述的电子设备,其特征在于,所述触摸屏,还用于在显示所述第一AR通信界面之后,响应于在所述触摸屏上对所述第一AR模型的第四操作,显示所述第一用户的联系人信息;所述联系人信息包括所述第一用户的电话号码、电子邮箱地址或者头像中的至少一个。
  45. 根据权利要求35-44中任意一项所述的电子设备,其特征在于,所述处理器,还用于在所述触摸屏显示所述第一AR通信界面之后,识别所述第二电子设备的麦克风采集的语音数据,以及由来自所述第一电子设备的音频电信号转换而来的语音数据;
    所述触摸屏,还用于显示所述处理器识别到的语音数据的文本。
  46. 根据权利要求35-45中任意一项所述的电子设备,其特征在于,所述触摸屏,还用于所述处理器识别到文本与预设文本对应的语音数据时,显示所述第一AR模型与所述第二AR模型执行所述预设文本对应的动作的动态图像。
  47. 一种计算机存储介质,其特征在于,所述计算机存储介质包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-23中任意一项所述的增强现实的通信方法。
  48. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1-23中任意一项所述的增强现实的通信方法。
PCT/CN2018/106789 2018-09-20 2018-09-20 增强现实的通信方法及电子设备 WO2020056694A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2018/106789 WO2020056694A1 (zh) 2018-09-20 2018-09-20 增强现实的通信方法及电子设备
US17/278,015 US11743954B2 (en) 2018-09-20 2018-09-20 Augmented reality communication method and electronic device
CN201880090776.3A CN111837381A (zh) 2018-09-20 2018-09-20 增强现实的通信方法及电子设备
EP18933804.9A EP3833012A4 (en) 2018-09-20 2018-09-20 AUGMENTED REALITY COMMUNICATION PROCESS AND ELECTRONIC DEVICES

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/106789 WO2020056694A1 (zh) 2018-09-20 2018-09-20 增强现实的通信方法及电子设备

Publications (1)

Publication Number Publication Date
WO2020056694A1 true WO2020056694A1 (zh) 2020-03-26

Family

ID=69888158

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/106789 WO2020056694A1 (zh) 2018-09-20 2018-09-20 增强现实的通信方法及电子设备

Country Status (4)

Country Link
US (1) US11743954B2 (zh)
EP (1) EP3833012A4 (zh)
CN (1) CN111837381A (zh)
WO (1) WO2020056694A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112703684A (zh) * 2018-10-31 2021-04-23 富士通株式会社 信号发送方法、天线面板信息的指示方法、装置和系统

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8601386B2 (en) * 2007-04-20 2013-12-03 Ingenio Llc Methods and systems to facilitate real time communications in virtual reality
US9727128B2 (en) * 2010-09-02 2017-08-08 Nokia Technologies Oy Methods, apparatuses, and computer program products for enhancing activation of an augmented reality mode
US20130238778A1 (en) * 2011-08-26 2013-09-12 Reincloud Corporation Self-architecting/self-adaptive model
IN2014CN03530A (zh) * 2011-11-08 2015-07-03 Vidinoti Sa
EP2953099B1 (en) * 2013-02-01 2019-02-13 Sony Corporation Information processing device, terminal device, information processing method, and programme
GB2530460A (en) * 2013-06-06 2016-03-23 Standard & Poor S Financial Services Llc Financial information management system and user interface
US9191620B1 (en) 2013-12-20 2015-11-17 Sprint Communications Company L.P. Voice call using augmented reality
US9386270B2 (en) * 2014-01-15 2016-07-05 Cisco Technology, Inc. Displaying information about at least one participant in a video conference session
US9716796B2 (en) 2015-04-17 2017-07-25 Microsoft Technology Licensing, Llc Managing communication events
WO2017013936A1 (ja) 2015-07-21 2017-01-26 ソニー株式会社 情報処理装置、情報処理方法およびプログラム
CN105208273A (zh) 2015-09-24 2015-12-30 宇龙计算机通信科技(深圳)有限公司 使用双摄像头终端拍照的方法、装置及双摄像头终端
KR101768532B1 (ko) * 2016-06-08 2017-08-30 주식회사 맥스트 증강 현실을 이용한 화상 통화 시스템 및 방법
US20180063205A1 (en) * 2016-08-30 2018-03-01 Augre Mixed Reality Technologies, Llc Mixed reality collaboration
US10182153B2 (en) * 2016-12-01 2019-01-15 TechSee Augmented Vision Ltd. Remote distance assistance system and method
US10699461B2 (en) * 2016-12-20 2020-06-30 Sony Interactive Entertainment LLC Telepresence of multiple users in interactive virtual space
WO2018142222A1 (en) 2017-02-03 2018-08-09 Zyetric System Limited Augmented video reality
US20180324229A1 (en) * 2017-05-05 2018-11-08 Tsunami VR, Inc. Systems and methods for providing expert assistance from a remote expert to a user operating an augmented reality device
WO2018222756A1 (en) * 2017-05-30 2018-12-06 Ptc Inc. Object initiated communication
US11258734B1 (en) * 2017-08-04 2022-02-22 Grammarly, Inc. Artificial intelligence communication assistance for editing utilizing communication profiles
CN107544802A (zh) * 2017-08-30 2018-01-05 北京小米移动软件有限公司 设备识别方法及装置
US20190114675A1 (en) * 2017-10-18 2019-04-18 Yagerbomb Media Pvt. Ltd. Method and system for displaying relevant advertisements in pictures on real time dynamic basis
KR102479499B1 (ko) * 2017-11-22 2022-12-21 엘지전자 주식회사 이동 단말기
US11131973B2 (en) * 2017-12-06 2021-09-28 Arris Enterprises Llc System and method of IOT device control using augmented reality
US10937240B2 (en) * 2018-01-04 2021-03-02 Intel Corporation Augmented reality bindings of physical objects and virtual objects
US10755250B2 (en) * 2018-09-07 2020-08-25 Bank Of America Corporation Processing system for providing a teller assistant experience using enhanced reality interfaces

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103368816A (zh) * 2012-03-29 2013-10-23 深圳市腾讯计算机系统有限公司 基于虚拟人物形象的即时通讯方法及系统
US9402057B2 (en) * 2012-04-02 2016-07-26 Argela Yazilim ve Bilisim Teknolojileri San. ve Tic. A.S. Interactive avatars for telecommunication systems
CN103916621A (zh) * 2013-01-06 2014-07-09 腾讯科技(深圳)有限公司 视频通信方法及装置
CN108234276A (zh) * 2016-12-15 2018-06-29 腾讯科技(深圳)有限公司 一种虚拟形象之间互动的方法、终端及系统
CN106803921A (zh) * 2017-03-20 2017-06-06 深圳市丰巨泰科电子有限公司 基于ar技术的即时音视频通信方法及装置
CN107750005A (zh) * 2017-09-18 2018-03-02 迈吉客科技(北京)有限公司 虚拟互动方法和终端

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113709537A (zh) * 2020-05-21 2021-11-26 云米互联科技(广东)有限公司 基于5g电视的用户互动方法、5g电视及可读存储介质
CN113709537B (zh) * 2020-05-21 2023-06-13 云米互联科技(广东)有限公司 基于5g电视的用户互动方法、5g电视及可读存储介质
WO2022089224A1 (zh) * 2020-10-26 2022-05-05 腾讯科技(深圳)有限公司 一种视频通信方法、装置、电子设备、计算机可读存储介质及计算机程序产品

Also Published As

Publication number Publication date
EP3833012A1 (en) 2021-06-09
US11743954B2 (en) 2023-08-29
EP3833012A4 (en) 2021-08-04
CN111837381A (zh) 2020-10-27
US20210385890A1 (en) 2021-12-09

