CN111580658B - AR-based conference method and device and electronic equipment - Google Patents

AR-based conference method and device and electronic equipment

Info

Publication number
CN111580658B
CN111580658B (application CN202010386118.8A)
Authority
CN
China
Prior art keywords
participant
target model
editing
coordinate data
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010386118.8A
Other languages
Chinese (zh)
Other versions
CN111580658A (en)
Inventor
汪铭扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010386118.8A priority Critical patent/CN111580658B/en
Publication of CN111580658A publication Critical patent/CN111580658A/en
Application granted granted Critical
Publication of CN111580658B publication Critical patent/CN111580658B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application provides an AR-based conference method and device and electronic equipment, belonging to the technical field of communication. The method comprises the following steps: uploading position information of a target model and gesture information of a participant to a virtual conference room; synchronizing the operation gesture of the participant in the virtual conference room according to the gesture information of the participant; and editing the target model according to the operation gesture of the participant when the position information of the target model matches the gesture information of the participant. In the embodiment of the application, the operation gestures of the participants are synchronized in the virtual conference room, and the target model in the virtual conference room is edited according to those operation gestures. Thus, in an AR-based conference, a participant can edit the target model through different operation gestures; combining communication with model editing makes the communication modes of the AR-based conference richer and improves conference efficiency.

Description

AR-based conference method and device and electronic equipment
Technical Field
The embodiment of the application relates to the field of communication, in particular to an AR-based conference method and device and electronic equipment.
Background
With the development of visual imaging technology, augmented reality (Augmented Reality, AR) has entered people's field of view. AR is a tool that combines virtual images with the real environment to assist daily life or improve production efficiency.
As the pace of work accelerates, scheme review through remote multiparty conferences is becoming a trend, and AR technology is gradually being applied to such conferences. In current AR-based conferences, images related to the conference are usually displayed in a virtual conference room, while discussion among the participants is conducted mainly by voice or text; as a result, the intentions of the participants are sometimes not well understood, and conference efficiency is low.
Disclosure of Invention
The embodiment of the application aims to provide an AR-based conference method, an AR-based conference device and electronic equipment, which can solve the problem that the efficiency of the existing AR-based conference method is low.
In order to solve the technical problems, the application is realized as follows:
in a first aspect, an embodiment of the present application provides an AR-based conferencing method, the method including:
Uploading the position information of the target model and the gesture information of the participant to the virtual conference room;
synchronizing an operation gesture of the participant in the virtual conference room according to the gesture information of the participant;
Editing the target model according to the operation gesture of the participant under the condition that the position information of the target model is matched with the gesture information of the participant.
In a second aspect, an embodiment of the present application provides an AR-based conferencing apparatus, including:
the first uploading module is used for uploading the position information of the target model and the gesture information of the participant to the virtual conference room;
The synchronization module is used for synchronizing the operation gestures of the participants in the virtual conference room according to the gesture information of the participants;
And the editing module is used for editing the target model according to the operation gesture of the participant under the condition that the position information of the target model is matched with the gesture information of the participant.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the AR-based conferencing method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the AR-based conferencing method as described in the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In the embodiment of the application, the operation gestures of the participants are synchronized in the virtual conference room; and editing the target model in the virtual meeting room according to the operation gestures of the participant. Therefore, in the AR-based conference, the participant can edit the target model through different operation gestures, and the communication mode of the AR-based conference is richer by combining the model editing, so that the conference efficiency is improved.
Drawings
Fig. 1 is a flow chart of an AR-based conferencing method according to an embodiment of the present application;
fig. 2a is a schematic diagram of an application scenario provided in an embodiment of the present application;
FIG. 2b is a second schematic diagram of an application scenario provided in an embodiment of the present application;
FIG. 2c is a third schematic diagram of an application scenario provided in an embodiment of the present application;
FIG. 2d is a fourth schematic diagram of an application scenario provided in an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an AR-based conference device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first", "second" and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that embodiments of the application may be practiced in orders other than those specifically illustrated or described herein. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The AR-based conference method provided by the embodiment of the application is described in detail through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Referring to fig. 1, the present application provides an AR-based conferencing method, the method comprising:
step 101: uploading the position information of the target model and the gesture information of the participant to the virtual conference room;
In the embodiment of the application, the virtual conference room is a virtual space constructed by AR technology. Each participant in the AR conference can upload information to the virtual conference room and, correspondingly, acquire the information uploaded by the other participants through it; conference information transmission for the remote multiparty conference is thus realized through the virtual conference room.
The target model is a model used to display conference information in the AR-based conference. It may be a three-dimensional product model related to the problem under discussion, a two-dimensional document model related to that problem, or the like; the embodiment of the application does not limit the specific type of the target model.
By uploading the position information of the target model to the virtual conference room, the target model is presented in the virtual conference room so that each participant can view it.
In some embodiments, the target model is a three-dimensional model, and the position information of the target model includes its three-dimensional coordinate data. The three-dimensional coordinate data represents the model's coordinates along each reference direction in three-dimensional space, for example its coordinates on the X, Y and Z axes. Optionally, volume data of the target model may be constructed from the three-dimensional coordinate data; for example, if the three-dimensional coordinate data comprises 10 coordinates, the volume data of the target model is determined from those 10 coordinates. The volume data represents the size of the space occupied by the target model.
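For illustration, a minimal Python sketch of how volume data could be derived from a model's three-dimensional coordinate data; the bounding-box rule, function name and sample coordinates are assumptions, not the patent's construction:

    # Hedged sketch: approximate volume data from 3D coordinate data using an
    # axis-aligned bounding box. The patent does not specify the construction.
    from typing import List, Tuple

    Point3D = Tuple[float, float, float]

    def bounding_box_volume(coords: List[Point3D]) -> float:
        """Size of the axis-aligned box enclosing the model's X/Y/Z coordinates."""
        xs, ys, zs = zip(*coords)
        return (max(xs) - min(xs)) * (max(ys) - min(ys)) * (max(zs) - min(zs))

    # Example: a model described by 10 coordinates, as in the text above.
    model_coords = [(0.1 * i, 0.2 * i, 0.05 * i) for i in range(10)]
    print(bounding_box_volume(model_coords))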
In some embodiments, the target model is a two-dimensional model, and the position information of the target model includes two-dimensional coordinate data of the target model; the two-dimensional model may be a statistical chart for a plurality of items, or a planar design drawing of a certain product, which is not particularly limited in the embodiment of the present application.
Step 102: synchronizing the operation gestures of the participants in the virtual conference room according to the gesture information of the participants;
In the embodiment of the application, the operation gestures of the participant include movements of the participant's hand, such as clicking and swiping, as well as displacements of the participant's hand, such as waving and turning over.
In some embodiments, the gesture information of the participant includes first relative position information between the participant's hand and head, and second relative position information between the participant's hand and the virtual conference room. Optionally, the first relative position information may come from an AR headset worn by the participant and can represent fine gesture operations such as clicking and small-range swiping; the second relative position information may come from a holographic image sensor and represents gesture operations with larger displacements, such as turning over and large-range swiping.
It should be noted that the position information of the target model and the gesture information of the participant may be uploaded to the virtual conference room in the form of point cloud data.
Optionally, the physical body position information of the participant is generated as point cloud data to form real-time point cloud position data, which is recorded and uploaded to the virtual conference room so that a model of the participant can be built there synchronously.
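As an illustration of this upload step, a minimal sketch of packaging a participant's real-time point cloud position data for the virtual conference room; the message fields and function name are hypothetical, since the patent does not define a transport format:

    # Hypothetical upload message for real-time point cloud position data.
    import json
    import time

    def build_point_cloud_update(participant_id: str,
                                 body_points: list,
                                 hand_points: list) -> str:
        """Package body/hand point samples ([[x, y, z], ...]) as one update."""
        return json.dumps({
            "participant": participant_id,
            "timestamp": time.time(),
            "body_point_cloud": body_points,
            "hand_point_cloud": hand_points,
        })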
Further, in some embodiments, synchronizing the operation gestures of the participant in the virtual conference room according to the gesture information of the participant comprises: determining hand coordinate data of the participant according to the first relative position information and the second relative position information; and synchronizing the operation gestures of the participant according to the hand coordinate data of the participant.
In the embodiment of the application, based on the relative position information between the participant's hand and head and between the hand and the virtual conference room, the hand coordinate data of the participant is determined in the virtual conference room through a preset algorithm (an existing algorithm for synchronizing actions in a virtual conference room may be used), and the operation gestures of the participant are synchronized based on that hand coordinate data (for example, on differences between coordinate values). In this way, the target model, the participants and the participants' operation gestures are established and synchronized in a unified manner, achieving real-time coordination of all three in the three-dimensional virtual conference room across physical space.
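The preset algorithm itself is not disclosed; as one plausible reading, the sketch below resolves room-frame hand coordinates by combining the headset estimate (head position plus the hand-to-head offset) with the holographic-sensor estimate, using a weighted average. The fusion rule, the weighting and the availability of the head position in the room frame are assumptions, not the patent's algorithm:

    # Hedged sketch: fuse first (hand-to-head) and second (hand-to-room)
    # relative position information into room-frame hand coordinates.
    import numpy as np

    def hand_room_coordinates(head_pos_room, hand_rel_head, hand_rel_room,
                              w: float = 0.5):
        """Room-frame hand position; w weights the headset estimate."""
        headset_estimate = np.asarray(head_pos_room) + np.asarray(hand_rel_head)
        sensor_estimate = np.asarray(hand_rel_room)
        return w * headset_estimate + (1.0 - w) * sensor_estimate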
Step 103: editing the target model according to the operation gesture of the participant under the condition that the position information of the target model is matched with the gesture information of the participant;
In the embodiment of the application, after the target model has been established and the operation gestures of the participants have been synchronized, a participant can edit the target model through different operation gestures. Specifically, the target model is edited according to the operation gesture of the participant when the position information of the target model matches the gesture information of the participant. Such a match means that, in the virtual conference room, the participant's hand actions are associated with the target model; whether a match exists determines whether a gesture action is intended to edit the target model, and whose operation gestures may edit it.
In some embodiments, the position information of the target model matches the gesture information of the participant when coordinate data of the target model coincides with the hand coordinate data of the participant. For example, if the coordinates of the participant's hand equal the coordinate values of some part of the target model, the coordinate data are displayed as overlapping in the virtual conference room and the participant's hand touches the target model, so it can be determined that the participant is to edit the target model.
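A coincidence test of this kind might look like the following sketch; the tolerance value and the point-cloud representation of the model are assumptions for illustration:

    # Hedged sketch: test whether the hand coordinate coincides with the model.
    import numpy as np

    def coincides(hand_pos, model_points, tolerance: float = 0.01) -> bool:
        """True if the hand lies within `tolerance` (assumed metres) of any
        point of the model's coordinate data."""
        d = np.linalg.norm(np.asarray(model_points) - np.asarray(hand_pos), axis=1)
        return bool(d.min() <= tolerance)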
The editing may adjust the shape, position, colour, material, internal parameters, etc. of the target model, or may mark or paint on the target model; the embodiment of the application does not limit the specific form of editing.
In the embodiment of the application, the operation gestures of the participants are synchronized in the virtual conference room; and editing the target model in the virtual meeting room according to the operation gestures of the participant. Therefore, in the AR-based conference, the participant can edit the target model with different operation gestures, and the communication mode of the AR conference is richer by combining the model editing, so that the conference efficiency is improved.
Further, in some embodiments, when the target model is a three-dimensional model and the position information of the target model includes its three-dimensional coordinate data, editing the target model according to the operation gesture of the participant when the position information of the target model matches the gesture information of the participant specifically includes: acquiring the participant's editing permission for the target model when the participant's hand coordinate data coincides with the three-dimensional coordinate data of the target model; and editing the target model according to the editing permission for the target model and the operation gesture of the participant.
In the embodiment of the application, when the equipment detects that the participant's hand coordinate data coincides with the three-dimensional coordinate data of the target model, the participant's editing permission for the target model is acquired and the participant becomes the model editor by default. Only the editor can edit the model; the remaining participants are viewers and cannot edit it until granted permission. The participant edits the target model through the editing permission for the target model and his or her operation gestures.
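The default editor/viewer rule can be summarized by the sketch below, in which the first participant whose hand coincides with the model holds the editing permission and all others remain viewers; the class and its first-come policy are illustrative assumptions:

    # Hedged sketch of the default editor/viewer permission rule.
    class ModelPermissions:
        def __init__(self) -> None:
            self.editor = None  # participant currently holding edit rights

        def request_edit(self, participant_id: str, hand_coincides: bool) -> str:
            """Grant edit rights on coincidence if the model is unclaimed."""
            if hand_coincides and self.editor is None:
                self.editor = participant_id
            return "editor" if self.editor == participant_id else "viewer"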
Referring to fig. 2a, the AR-based conference shown includes three participants A, B and C. The hand coordinate data of participant A coincides with the three-dimensional coordinate data of the target model 20, so participant A is set as the model editor and participants B and C as viewers. Participant A can thus edit the target model through operation gestures while expressing himself in language, realizing diversified virtual interaction, transmitting information more efficiently and improving conference efficiency.
In some embodiments, when the target model is a two-dimensional model, the position information of the target model includes its two-dimensional coordinate data, and after the position information of the target model is uploaded to the virtual conference room, the AR-based conference method further includes: dividing the target model into a plurality of grid regions;
In the embodiment of the application, the target model is divided into grid regions to facilitate the user's operations in the relevant area and improve the precision of operations on the target model. Optionally, the position information and volume information of each grid region are recorded in real time, generated and uploaded to the virtual conference room. For example, if the target model is a data statistics chart, the chart can be divided into a plurality of grid regions, each corresponding to one statistical item. It will be appreciated that the division into grid regions may also be performed by a participant, who decides how the target model is divided.
Correspondingly, editing the target model according to the operation gesture of the participant when the position information of the target model matches the gesture information of the participant specifically includes: acquiring the participant's editing permission for a target grid region among the plurality of grid regions when the participant's hand coordinate data coincides with the two-dimensional coordinate data of the target grid region; and editing the target grid region according to the editing permission for the target grid region and the operation gesture of the participant.
In the embodiment of the application, when the equipment detects that the participant's hand coordinate data coincides with the two-dimensional coordinate data of a grid region of the target model, the system locks that grid region and the area near it for the participant, who becomes the editor and owner of the grid region by default; the other participants are viewers of the grid region and cannot edit its content until granted permission. The participant edits the target grid region through the editing permission for the target grid region and his or her operation gestures.
Thus, by dividing the target model into a plurality of grid regions, a participant can edit a single grid region, making editing more flexible; meanwhile, several participants can edit different grid regions at the same time, so that more participants take part in discussing and formulating the scheme, improving conference efficiency.
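One way to realize this grid-region mechanism is sketched below: the two-dimensional model is partitioned into rows and columns, a hand coordinate is mapped to a region index, and each region is locked to its first editor. The grid dimensions, the locking policy and the function names are assumptions for illustration:

    # Hedged sketch: map a 2D hand coordinate to a grid region and lock it.
    region_editor = {}  # grid index -> participant holding edit rights

    def grid_index(x, y, model_w, model_h, rows, cols):
        """Index of the grid region containing point (x, y) on the model."""
        col = min(int(x / model_w * cols), cols - 1)
        row = min(int(y / model_h * rows), rows - 1)
        return row * cols + col

    def request_region_edit(participant_id, x, y, model_w, model_h,
                            rows=1, cols=3):
        """Lock the touched region to its first editor; return (index, is_editor)."""
        idx = grid_index(x, y, model_w, model_h, rows, cols)
        region_editor.setdefault(idx, participant_id)
        return idx, region_editor[idx] == participant_id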
Referring to fig. 2b, the AR-based conference shown includes three participants A, B and C. The target model is a statistical chart divided by statistical item into three grid regions: region 21, region 22 and region 23. The hand coordinate data of participant A coincides with the two-dimensional coordinate data of region 21, so participant A is set as the editor of region 21; the hand coordinate data of participant B coincides with the two-dimensional coordinate data of region 23, so participant B is set as the editor of region 23; participant C is a viewer. Participants A and B can thus edit the chart contents of region 21 and region 23 respectively through operation gestures, realizing diversified virtual interaction, involving more participants in the discussion and formulation of the scheme and improving conference efficiency.
In some embodiments, for the case where the target model is a three-dimensional model, before editing the target model according to the operation gesture of the participant, the AR-based conference method further includes: uploading a physical material database of the target model to the virtual conference room;
In the embodiment of the application, the selectable physical material characteristics of the target model are classified by position attribute, color attribute, texture material attribute, reflectivity attribute, sound attribute, elastic deformation attribute or various other physical attributes, entered into a physical material database, and uploaded to the virtual conference room so that they can be called upon when the material of the target model is edited.
Correspondingly, editing the target model according to the operation gesture of the participant when the position information of the target model matches the gesture information of the participant includes: editing the material of the target model according to the operation gesture of the participant when the participant's hand coordinate data coincides with the three-dimensional coordinate data of the target model, the material of the target model being selected from the physical material database.
In the embodiment of the application, the participant selects the material of the target model from the physical material database. Optionally, as shown in fig. 2c, a table corresponding to the physical material database may be displayed in the virtual conference room; the table records the properties of the various materials, for example vibration feedback value, acoustic frequency and displacement rate, for the participants' reference.
Correspondingly, after the material of the target model is edited according to the operation gesture of the participant when the participant's hand coordinate data coincides with the three-dimensional coordinate data of the target model, the AR-based conference method further includes: feeding back a feedback signal corresponding to the operation gesture of the participant and the material of the target model, according to the operation gesture of the participant and the material of the target model at the position where the coordinate data coincide.
In the embodiment of the application, a corresponding feedback signal is generated according to the position at which the participant operates on the target model, combined with the physical properties of the material at that position; the feedback signal conveys to the participant the sensation of operating on that material.
As shown in fig. 2d, the participant's operation gesture is to stroke the surface of the target model, and the material at the stroked position is relatively coarse, so the feedback signal may be a vibration signal: the touch of a coarse material is simulated by vibration, letting the participant feel the tactile sensation of stroking it. Specifically, the participant can operate with a handle having a haptic function; according to the physical characteristics of different materials, the handle simulates the feel of the target model's material through haptic vibration feedback.
Similarly, the feedback signal may also include a sound signal. For example, if the participant's operation gesture is to strike the surface of the target model and the material at the struck position is a metal, the feedback may include both a vibration signal, which simulates the feel of the metal material, and a sound signal, which simulates the sound produced by striking it.
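A material-feedback lookup consistent with this description might be sketched as follows; the material names, property values and signal fields are invented examples, not entries from the patent's physical material database:

    # Hedged sketch: derive the feedback signal for a gesture on a material.
    MATERIALS = {
        "coarse": {"vibration_amplitude": 0.8, "strike_sound_hz": None},
        "metal":  {"vibration_amplitude": 0.4, "strike_sound_hz": 1200.0},
    }

    def feedback_signal(material: str, gesture: str) -> dict:
        """Vibration feedback, plus sound feedback for strikes where defined."""
        props = MATERIALS[material]
        signal = {"vibration": props["vibration_amplitude"]}
        if gesture == "strike" and props["strike_sound_hz"] is not None:
            signal["sound_hz"] = props["strike_sound_hz"]
        return signal

    # e.g. feedback_signal("metal", "strike") -> vibration plus a 1200 Hz tone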
In this way, when a participant operates the target model, a simulated sensation corresponding to the model's material is obtained. On the one hand this improves the sensory experience of virtual interaction; on the other hand it helps the scheme be determined faster, improving conference efficiency.
Optionally, after the participant completes the material selection, the sensory simulation of the virtual material can be shared with the other participants (for example, by sending them the data so that they obtain the same sensation), so that the participants can discuss the final design scheme more efficiently with the simulated sensory experience. The virtual material and the sensory data corresponding to it can also be fused with the target model, so that data such as the three-dimensional white model, the texture maps, the material rendering data and the physical material sensory simulation can be stored in the cloud.
It should be noted that, in the AR-based conference method provided in the embodiment of the present application, the executing body may be an AR-based conference device, or a control module in the AR-based conference device for executing the AR-based conference method. In the embodiment of the application, the AR-based conference method provided by the embodiment of the application is described taking an AR-based conference device executing the method as an example.
Referring to fig. 3, an embodiment of the present application provides an AR-based conferencing apparatus 300, including:
A first uploading module 301, configured to upload, to a virtual conference room, location information of a target model and gesture information of a participant;
a synchronizing module 302, configured to synchronize an operation gesture of the participant in the virtual conference room according to gesture information of the participant;
and the editing module 303 is configured to edit the target model according to the operation gesture of the participant if the position information of the target model is matched with the gesture information of the participant.
Optionally, the gesture information of the participant includes first relative position information between the participant's hand and head, and second relative position information between the participant's hand and the virtual conference room;
The synchronization module 302 includes:
The determining unit is used for determining hand coordinate data of the participant according to the first relative position information and the second relative position information;
And the synchronization unit is used for synchronizing the operation gestures of the participant according to the hand coordinate data of the participant.
Optionally, the target model is a three-dimensional model; the position information of the target model comprises three-dimensional coordinate data of the target model; the editing module 303 includes:
The first acquisition unit is used for acquiring editing rights of the participant to the target model under the condition that the hand coordinate data of the participant coincides with the three-dimensional coordinate data of the target model;
The first editing unit is used for editing the target model according to the editing authority of the target model and the operation gesture of the participant.
Optionally, the target model is a two-dimensional model; the position information of the target model comprises two-dimensional coordinate data of the target model; the apparatus 300 further comprises:
The dividing module is used for dividing the target model into a plurality of grid areas after uploading the position information of the target model to the virtual conference room;
the editing module 303 includes:
the second acquisition unit is used for acquiring editing permission of the participant to the target grid region under the condition that hand coordinate data of the participant is coincident with two-dimensional coordinate data of the target grid region in the multiple grid regions;
the second editing unit is used for editing the target grid region according to the editing authority of the target grid region and the operation gesture of the participant.
Optionally, the apparatus 300 further includes:
The second uploading module is used for uploading the physical material database of the target model to the virtual meeting room;
the editing module 303 includes:
the third editing unit is used for editing the material of the target model according to the operation gesture of the participant under the condition that the hand coordinate data of the participant is coincident with the three-dimensional coordinate data of the target model, and the material of the target model is selected from the physical material database;
the apparatus 300 further comprises:
and the feedback module is used for feeding back feedback signals corresponding to the operation gesture of the participant and the material of the target model according to the operation gesture of the participant and the material of the target model at the position where the coordinate data overlap.
In the embodiment of the application, the operation gestures of the participants are synchronized in the virtual conference room; and editing the target model in the virtual meeting room according to the operation gestures of the participant. In this way, in the AR-based conference, the participant can edit the target model with different operation gestures, so that virtual interaction between the participant and the model is realized, and the interaction mode of the AR conference is more diversified.
The conference device based on AR in the embodiment of the application can be a device, and also can be a component, an integrated circuit or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc., and the non-mobile electronic device may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a Television (TV), a teller machine, a self-service machine, etc., and the embodiments of the present application are not limited in particular.
The AR-based conferencing device in the embodiments of the present application may be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiment of the present application.
The AR-based conference device provided in the embodiment of the present application can implement each process implemented by the AR-based conference device in the method embodiments of fig. 1 to 2d; to avoid repetition, a detailed description is omitted here.
Optionally, the embodiment of the present application further provides an electronic device, including a processor 410, a memory 409, and a program or an instruction stored in the memory 409 and capable of running on the processor 410, where the program or the instruction implements each process of the above-mentioned AR-based conference method embodiment when executed by the processor 410, and the same technical effects can be achieved, so that repetition is avoided, and details are not repeated here.
It should be noted that, the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 4 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, and processor 410.
Those skilled in the art will appreciate that the electronic device 400 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 410 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The processor 410 is configured to upload location information of the target model and gesture information of the participant to the virtual conference room;
The processor 410 is further configured to synchronize an operation gesture of the participant in the virtual conference room according to gesture information of the participant;
The processor 410 is further configured to edit the target model according to the operation gesture of the participant if the position information of the target model matches the gesture information of the participant.
In the embodiment of the application, the operation gestures of the participants are synchronized in the virtual conference room; and editing the target model in the virtual meeting room according to the operation gestures of the participant. Therefore, in the AR-based conference, the participant can edit the target model with different operation gestures, and the communication mode of the AR-based conference is richer by combining the model editing, so that the conference efficiency is improved.
Optionally, the gesture information of the participant comprises first relative position information between the hands and the head of the participant and second relative position information between the hands of the participant and the virtual conference room;
The processor 410 is further configured to determine hand coordinate data of the participant according to the first relative position information and the second relative position information; and synchronizing the operation gestures of the participant according to the hand coordinate data of the participant.
Optionally, the target model is a three-dimensional model;
the position information of the target model comprises three-dimensional coordinate data of the target model;
The processor 410 is further configured to obtain editing rights of the participant to the target model when the hand coordinate data of the participant coincides with the three-dimensional coordinate data of the target model, and to edit the target model according to the editing authority of the target model and the operation gesture of the participant.
Optionally, the target model is a two-dimensional model;
the position information of the target model comprises two-dimensional coordinate data of the target model;
A processor 410 further configured to divide the object model into a plurality of grid areas; acquiring editing permission of the participant on a target grid region in the multiple grid regions under the condition that hand coordinate data of the participant coincides with two-dimensional coordinate data of the target grid region; editing the target grid region according to the editing authority of the target grid region and the operation gesture of the participant.
Optionally, the processor 410 is further configured to upload a physical texture database of the target model to the virtual meeting room;
The processor 410 is further configured to edit a material of the target model according to an operation gesture of the participant, where the hand coordinate data of the participant coincides with the three-dimensional coordinate data of the target model, and the material of the target model is selected from the physical material database;
The processor 410 is further configured to, after the material of the target model is edited according to the operation gesture of the participant when the hand coordinate data of the participant coincides with the three-dimensional coordinate data of the target model, feed back a feedback signal corresponding to the operation gesture of the participant and the material of the target model, according to the operation gesture of the participant and the material of the target model at the position where the coordinate data coincide.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above-mentioned AR-based conference method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the above-mentioned AR-based conference method embodiment, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (9)

1. An AR-based conferencing method, the method comprising:
Uploading the position information of the target model and the gesture information of the participant to the virtual conference room;
synchronizing an operation gesture of the participant in the virtual conference room according to the gesture information of the participant;
editing the target model according to the operation gesture of the participant under the condition that the position information of the target model is matched with the gesture information of the participant;
In the case where the target model is a three-dimensional model, before the editing of the target model according to the operation gesture of the participant, the method further includes:
uploading a physical material database of the target model to the virtual conference room, wherein one or more of position attribute, color attribute, texture material attribute, reflectivity attribute, sound attribute and elastic deformation attribute of the target model are stored in the physical material database;
And editing the target model according to the operation gesture of the participant under the condition that the position information of the target model is matched with the gesture information of the participant, wherein the editing comprises the following steps:
editing the material of the target model according to the operation gesture of the participant under the condition that the hand coordinate data of the participant is coincident with the three-dimensional coordinate data of the target model, wherein the material of the target model is selected from the physical material database;
After the editing of the material of the target model according to the operation gesture of the participant under the condition that the hand coordinate data of the participant coincides with the three-dimensional coordinate data of the target model, the method further comprises:
And feeding back feedback signals corresponding to the operation gestures of the participant and the materials of the target model according to the operation gestures of the participant and the materials of the target model at the overlapping position of the coordinate data, wherein the feedback signals comprise vibration signals and sound signals.
2. The method of claim 1, wherein
The gesture information of the participant comprises first relative position information between the hands and the head of the participant and second relative position information between the hands of the participant and the virtual conference room;
synchronizing an operation gesture of the participant in the virtual conference room according to the gesture information of the participant, including:
Determining hand coordinate data of the participant according to the first relative position information and the second relative position information;
And synchronizing the operation gestures of the participant according to the hand coordinate data of the participant.
3. The method of claim 2, wherein the target model is a three-dimensional model;
the position information of the target model comprises three-dimensional coordinate data of the target model;
And editing the target model according to the operation gesture of the participant under the condition that the position information of the target model is matched with the gesture information of the participant, wherein the editing comprises the following steps:
Acquiring editing permission of the participant on the target model under the condition that the hand coordinate data of the participant coincides with the three-dimensional coordinate data of the target model;
and editing the target model according to the editing authority of the target model and the operation gesture of the participant.
4. The method of claim 2, wherein the target model is a two-dimensional model;
the position information of the target model comprises two-dimensional coordinate data of the target model;
after the uploading the location information of the target model to the virtual conference room, the method further comprises:
dividing the target model into a plurality of grid areas;
And editing the target model according to the operation gesture of the participant under the condition that the position information of the target model is matched with the gesture information of the participant, wherein the editing comprises the following steps:
acquiring editing permission of the participant on a target grid region in the multiple grid regions under the condition that hand coordinate data of the participant coincides with two-dimensional coordinate data of the target grid region;
Editing the target grid region according to the editing authority of the target grid region and the operation gesture of the participant.
5. An AR-based conferencing device, comprising:
the first uploading module is used for uploading the position information of the target model and the gesture information of the participant to the virtual conference room;
The synchronization module is used for synchronizing the operation gestures of the participants in the virtual conference room according to the gesture information of the participants;
The editing module is used for editing the target model according to the operation gesture of the participant under the condition that the position information of the target model is matched with the gesture information of the participant;
The apparatus further comprises:
The second uploading module is used for uploading a physical material database of the target model to the virtual conference room when the target model is a three-dimensional model, wherein one or more of the position attribute, the color attribute, the texture material attribute, the reflectivity attribute, the sound attribute and the elastic deformation attribute of the target model are stored in the physical material database;
the editing module comprises:
the third editing unit is used for editing the material of the target model according to the operation gesture of the participant under the condition that the hand coordinate data of the participant is coincident with the three-dimensional coordinate data of the target model, and the material of the target model is selected from the physical material database;
The apparatus further comprises:
And the feedback module is used for feeding back feedback signals corresponding to the operation gesture of the participant and the material of the target model according to the operation gesture of the participant and the material of the target model at the superposition position of the coordinate data, wherein the feedback signals comprise vibration signals and sound signals.
6. The apparatus of claim 5, wherein
The gesture information of the participant comprises first relative position information between the hands and the head of the participant and second relative position information between the hands of the participant and the virtual conference room;
the synchronization module comprises:
The determining unit is used for determining hand coordinate data of the participant according to the first relative position information and the second relative position information;
And the synchronization unit is used for synchronizing the operation gestures of the participant according to the hand coordinate data of the participant.
7. The apparatus of claim 6, wherein the target model is a three-dimensional model;
the position information of the target model comprises three-dimensional coordinate data of the target model;
the editing module comprises:
The first acquisition unit is used for acquiring editing rights of the participant to the target model under the condition that the hand coordinate data of the participant coincides with the three-dimensional coordinate data of the target model;
The first editing unit is used for editing the target model according to the editing authority of the target model and the operation gesture of the participant.
8. The apparatus of claim 6, wherein the target model is a two-dimensional model;
the position information of the target model comprises two-dimensional coordinate data of the target model;
The apparatus further comprises:
The dividing module is used for dividing the target model into a plurality of grid areas after uploading the position information of the target model to the virtual conference room;
the editing module comprises:
The second acquisition unit is used for acquiring editing permission of the participant to the target grid region under the condition that the hand coordinate data of the participant is coincident with the two-dimensional coordinate data of the target grid region in the multiple grid regions;
the second editing unit is used for editing the target grid region according to the editing authority of the target grid region and the operation gesture of the participant.
9. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the AR-based conferencing method of any of claims 1 to 4.
CN202010386118.8A 2020-05-09 2020-05-09 AR-based conference method and device and electronic equipment Active CN111580658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010386118.8A CN111580658B (en) 2020-05-09 2020-05-09 AR-based conference method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010386118.8A CN111580658B (en) 2020-05-09 2020-05-09 AR-based conference method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111580658A CN111580658A (en) 2020-08-25
CN111580658B true CN111580658B (en) 2024-04-26

Family

ID=72112104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010386118.8A Active CN111580658B (en) 2020-05-09 2020-05-09 AR-based conference method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111580658B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112839196B (en) * 2020-12-30 2021-11-16 橙色云互联网设计有限公司 Method, device and storage medium for realizing online conference
CN114153316B (en) * 2021-12-15 2024-03-29 天翼电信终端有限公司 AR-based conference summary generation method, device, server and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104937641A (en) * 2013-02-01 2015-09-23 索尼公司 Information processing device, terminal device, information processing method, and programme
CN106125938A (en) * 2016-07-01 2016-11-16 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN107390875A (en) * 2017-07-28 2017-11-24 腾讯科技(上海)有限公司 Information processing method, device, terminal device and computer-readable recording medium
CN107430437A (en) * 2015-02-13 2017-12-01 厉动公司 The system and method that real crawl experience is created in virtual reality/augmented reality environment
CN108510597A (en) * 2018-03-09 2018-09-07 北京小米移动软件有限公司 Edit methods, device and the non-transitorycomputer readable storage medium of virtual scene
CN108537889A (en) * 2018-03-26 2018-09-14 广东欧珀移动通信有限公司 Method of adjustment, device, storage medium and the electronic equipment of augmented reality model
CN110413108A (en) * 2019-06-28 2019-11-05 广东虚拟现实科技有限公司 Processing method, device, system, electronic equipment and the storage medium of virtual screen

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9524588B2 (en) * 2014-01-24 2016-12-20 Avaya Inc. Enhanced communication between remote participants using augmented and virtual reality

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104937641A (en) * 2013-02-01 2015-09-23 索尼公司 Information processing device, terminal device, information processing method, and programme
CN107430437A (en) * 2015-02-13 2017-12-01 厉动公司 The system and method that real crawl experience is created in virtual reality/augmented reality environment
CN106125938A (en) * 2016-07-01 2016-11-16 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN107390875A (en) * 2017-07-28 2017-11-24 腾讯科技(上海)有限公司 Information processing method, device, terminal device and computer-readable recording medium
CN108510597A (en) * 2018-03-09 2018-09-07 北京小米移动软件有限公司 Edit methods, device and the non-transitorycomputer readable storage medium of virtual scene
CN108537889A (en) * 2018-03-26 2018-09-14 广东欧珀移动通信有限公司 Method of adjustment, device, storage medium and the electronic equipment of augmented reality model
CN110413108A (en) * 2019-06-28 2019-11-05 广东虚拟现实科技有限公司 Processing method, device, system, electronic equipment and the storage medium of virtual screen

Also Published As

Publication number Publication date
CN111580658A (en) 2020-08-25

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant