CN108874363A - Object control method, apparatus, equipment and storage medium for AR scene - Google Patents
- Publication number
- CN108874363A CN108874363A CN201810720007.9A CN201810720007A CN108874363A CN 108874363 A CN108874363 A CN 108874363A CN 201810720007 A CN201810720007 A CN 201810720007A CN 108874363 A CN108874363 A CN 108874363A
- Authority
- CN
- China
- Prior art keywords
- control
- virtual objects
- action type
- voice
- control voice
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Acoustics & Sound (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention provides an object control method, apparatus, device and storage medium for an AR scene. The method may include: obtaining a control voice directed at a virtual object, the virtual object being a virtual object displayed on an interface in the AR scene; and controlling, according to the control voice, the virtual object to execute the operation corresponding to the control voice. The present invention can improve the convenience with which a user operates virtual objects and thereby improve the user experience.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to an object control method, apparatus, device and storage medium for an augmented reality (AR) scene.
Background technique
AR, also referred to as mixed reality, superimposes computer-generated virtual information on a real scene to enhance the user's perception of the real world. The virtual information can be displayed on an interface through virtual objects such as virtual items.
In an AR scene, a user can control a virtual object by performing touch operations on it on the interface of a terminal device. In some scenarios, however, the user may find it inconvenient to perform touch operations and is then unable to control the virtual object.
As a result, operating virtual objects in an AR scene is not convenient enough, and the user experience suffers.
Summary of the invention
The present invention provides an object control method, apparatus, device and storage medium for an AR scene, so as to improve the convenience with which a user operates virtual objects and improve the user experience.
In a first aspect, the present invention provides an object control method for an augmented reality (AR) scene, including:
obtaining a control voice directed at a virtual object, the virtual object being a virtual object displayed on an interface in the AR scene; and
controlling, according to the control voice, the virtual object to execute the operation corresponding to the control voice.
In this object control method, the virtual object can be controlled through the control voice without the user having to input any touch operation, which improves operating convenience and the user experience.
In one implementation of the first aspect, obtaining the control voice directed at the virtual object includes:
receiving an input audio; and
performing speech recognition on the input audio to determine the control voice directed at the virtual object.
In another implementation of the first aspect, controlling, according to the control voice, the virtual object to execute the operation corresponding to the control voice includes:
performing semantic analysis on the control voice to determine the semantics of the control voice;
determining, according to the semantics, an operation type directed at the virtual object; and
controlling, according to the operation type, the virtual object to execute the operation corresponding to the operation type.
In another implementation of the first aspect, controlling, according to the operation type, the virtual object to execute the operation corresponding to the operation type includes:
generating, according to the operation type, a control instruction corresponding to the operation type; and
controlling, according to the control instruction, the virtual object to execute the operation corresponding to the operation type.
In another implementation of the first aspect, the operation corresponding to the control voice includes any of the following: a zoom-in operation, a zoom-out operation, an animation playback operation, an audio/video playback operation, or a dialogue control operation.
In a second aspect, the present invention provides an object control apparatus for an augmented reality (AR) scene, including:
an obtaining module, configured to obtain a control voice directed at a virtual object, the virtual object being a virtual object displayed on an interface in the AR scene; and
a control module, configured to control, according to the control voice, the virtual object to execute the operation corresponding to the control voice.
In one implementation of the second aspect, the obtaining module is specifically configured to receive an input audio, and to perform speech recognition on the input audio to determine the control voice directed at the virtual object.
In another implementation of the second aspect, the control module is specifically configured to perform semantic analysis on the control voice to determine the semantics of the control voice; determine, according to the semantics, an operation type directed at the virtual object; and control, according to the operation type, the virtual object to execute the operation corresponding to the operation type.
In another implementation of the second aspect, the control module is specifically configured to generate, according to the operation type, a control instruction corresponding to the operation type, and to control, according to the control instruction, the virtual object to execute the operation corresponding to the operation type.
In another implementation of the second aspect, the operation corresponding to the control voice includes any of the following: a zoom-in operation, a zoom-out operation, an animation playback operation, an audio/video playback operation, or a dialogue control operation.
In a third aspect, the present invention provides an augmented reality (AR) device, including a memory and a processor, the memory being connected to the processor;
the memory is configured to store program instructions; and
the processor is configured to implement, when the program instructions are executed, the object control method for an AR scene described in the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the object control method for an AR scene described in the first aspect.
The present invention provides an object control method, apparatus, device and storage medium for an AR scene, which can obtain a control voice directed at a virtual object, the virtual object being a virtual object displayed on an interface in the AR scene, and control, according to the control voice, the virtual object to execute the operation corresponding to the control voice. In this object control method, the virtual object can be controlled through the control voice without the user having to input any touch operation, freeing the user's hands for object control in the AR scene, improving operating convenience and improving the user experience.
Description of the drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a first flowchart of an object control method for an AR scene provided by an embodiment of the present invention;
Fig. 2 is a second flowchart of an object control method for an AR scene provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of an object control apparatus for an AR scene provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an AR device provided by an embodiment of the present invention.
Specific embodiment
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", "third" and so on in the description and drawings of the embodiments of the present invention are used to distinguish similar objects, not to describe a particular order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present invention described herein can be implemented in sequences other than those illustrated or described herein. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that contains a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to the process, method, product or device.
The method flowcharts involved in the following embodiments of the present invention are merely illustrative; they need not contain all of the content and steps, nor need they be executed in the described order. For example, some steps can be decomposed and some steps can be merged or partially merged, so the sequence actually executed may change according to the actual situation.
The functional modules in the block diagrams involved in the following embodiments of the present invention are only functional entities and do not necessarily correspond to physically separate entities. These functional entities can be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processors and/or microcontrollers.
The object control method, apparatus, electronic device and storage medium for an AR scene provided by the embodiments of the present invention are described below with reference to several examples.
Fig. 1 is a first flowchart of an object control method for an AR scene provided by an embodiment of the present invention. The object control method can be implemented by a processing device, such as a processor of an AR device, executing corresponding software code, or by the processor of the AR device executing corresponding software code in combination with other hardware entities. The AR device is, for example, any terminal device with an AR function, such as a desktop computer, a notebook computer, a personal digital assistant (PDA), a smartphone, a tablet computer or an AR wearable device. Illustratively, an AR application program can be installed on the terminal device with the AR function, and a server can be, for example, the application server of the AR application program.
The processor of the AR device can generate a graphical interaction interface by executing the AR application and rendering on the display device of the AR device, and can also render and generate virtual objects on the graphical interaction interface while generating it, so that the graphical interaction interface contains at least one virtual object.
The graphical interaction interface generated by executing the AR application can be called the interface in the AR scene, or the AR interface. The virtual object can be a virtual object on the AR interface, such as a virtual character, a virtual animal or another virtual item in the AR scene.
As shown in Fig. 1, the object control method for an AR scene of this embodiment may include the following steps:
S101: obtain a control voice directed at a virtual object, the virtual object being a virtual object displayed on an interface in the AR scene.
The control voice can be a control voice, obtained by the AR device while displaying the AR interface, that is directed at a virtual object on the AR interface.
Optionally, obtaining the control voice directed at the virtual object in S101 may include: performing speech recognition on a received input audio to determine the control voice directed at the virtual object.
In one example, the AR device can receive the input audio entered by the user through a voice input device of the AR device.
In another example, the AR device can receive the input audio entered by the user through a voice input device connected to the AR device, for example an input device such as an external microphone of the AR device.
In another example, the AR device can receive the input audio transmitted by another device, for example a control device connected to the AR device such as a mobile phone or another terminal.
The above are only some possible examples of the input audio; in the method provided by the embodiment of the present invention, the input audio can also be obtained in other ways, which are not described in detail here.
After receiving the input audio, the AR device can process it using automatic speech recognition (ASR) technology to obtain the control voice.
For example, a control button or a switch button can be displayed on the AR interface, and the input audio can be collected after a trigger operation corresponding to that button is received. Of course, no button need be displayed on the AR interface at all; the input audio can be collected whenever the user speaks.
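The patent describes S101 only in functional terms. The following Python sketch illustrates one minimal way the step could look, with `recognize_speech` stubbed in place of a real ASR engine and `KNOWN_OBJECTS` as a hypothetical registry of the virtual objects currently shown on the AR interface; all names are illustrative assumptions, not taken from the patent.

```python
def recognize_speech(input_audio: bytes) -> str:
    """Stub ASR: a real implementation would transcribe the raw audio."""
    # For illustration, pretend the audio always decodes to this command.
    return "make the puppy bigger"

KNOWN_OBJECTS = ("puppy", "robot")  # objects displayed on the AR interface

def obtain_control_voice(input_audio: bytes):
    """Return (recognized text, target object) when the recognized text
    addresses a known virtual object, or None when no object is mentioned."""
    text = recognize_speech(input_audio)
    for obj in KNOWN_OBJECTS:
        if obj in text:
            return text, obj
    return None

print(obtain_control_voice(b"\x00\x01"))  # ('make the puppy bigger', 'puppy')
```

A production system would replace the stub with a real ASR pipeline; the point here is only the shape of S101: raw audio in, a control voice bound to a displayed virtual object out.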
S102: control, according to the control voice, the virtual object to execute the operation corresponding to the control voice.
The control voice at least carries, explicitly or implicitly, information about the virtual object and information about the control operation directed at the virtual object.
After obtaining the control voice, the AR device can determine, according to the control voice, the virtual object targeted by the control voice and the operation corresponding to the control voice, and then control the virtual object to execute that operation.
After obtaining the control voice, the AR device can control the virtual object to execute the corresponding operation directly according to the control voice, or it can first preprocess the control voice, for example through noise reduction, and then control the virtual object to execute the corresponding operation based on the processed voice.
Optionally, the operation corresponding to the control voice may include, for example, any of the following: a zoom-in operation, a zoom-out operation, a playback operation, a stop operation, a playback progress adjustment operation, a dialogue control operation, an interactive operation, and so on. The playback operation includes any of an animation playback operation, an audio/video playback operation and the like. The playback progress adjustment operation may include a fast-forward or rewind operation.
In one example, if the operation corresponding to the control voice is a zoom-in or zoom-out operation, the virtual object can be controlled, by executing S102, to zoom in or out on the interface.
In another example, if the operation corresponding to the control voice is an animation or audio playback operation, the animation or audio/video corresponding to the virtual object can be played on the interface by executing S102.
In another example, if the operation corresponding to the control voice is a dialogue control operation, the virtual object can be controlled, by executing S102, to conduct a dialogue with the user or with another device, for example a multi-round voice dialogue.
In another example, if the operation corresponding to the control voice is an interactive operation, the virtual object can be controlled, by executing S102, to interact with a virtual object in another scene, or with another object in the scene.
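As a minimal sketch of how the operation types of S102 could be dispatched to a virtual object, the following assumes a toy `VirtualObject` with a display scale and playback state; the class, the handler names and the zoom factors are illustrative assumptions, not part of the patent.

```python
class VirtualObject:
    """Minimal stand-in for a virtual object rendered on the AR interface."""
    def __init__(self, name: str):
        self.name = name
        self.scale = 1.0     # current display scale on the interface
        self.playing = None  # media currently playing, if any

    def zoom(self, factor: float) -> None:
        self.scale *= factor

    def play(self, media: str) -> None:
        self.playing = media

# One handler per operation type the control voice may correspond to.
OPERATIONS = {
    "zoom_in":        lambda obj: obj.zoom(2.0),
    "zoom_out":       lambda obj: obj.zoom(0.5),
    "play_animation": lambda obj: obj.play("animation"),
    "play_av":        lambda obj: obj.play("audio_video"),
}

def execute(obj: VirtualObject, op_type: str) -> None:
    """S102: make the virtual object execute the operation for op_type."""
    OPERATIONS[op_type](obj)

puppy = VirtualObject("puppy")
execute(puppy, "zoom_in")
print(puppy.scale)  # 2.0
```

A table-driven dispatch like this keeps adding a new operation type (for example the stop or fast-forward operations mentioned above) to a one-line change.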
The object control method for an AR scene provided by this embodiment of the present invention can obtain a control voice directed at a virtual object, the virtual object being a virtual object displayed on an interface in the AR scene, and control, according to the control voice, the virtual object to execute the operation corresponding to the control voice. In this object control method, the virtual object can be controlled through the control voice without the user having to input any touch operation, freeing the user's hands for object control in the AR scene, improving operating convenience and improving the user experience.
On the basis of the above embodiment, an embodiment of the present invention also provides an object control method for an AR scene. Fig. 2 is a second flowchart of an object control method for an AR scene provided by an embodiment of the present invention. The method shown in Fig. 2 is one possible implementation of controlling the virtual object in the above method; the virtual object can also be controlled in other ways, which are not described in detail here.
As shown in Fig. 2, controlling, in S102 of the method above, the virtual object to execute the operation corresponding to the control voice according to the control voice may include:
S201: perform semantic analysis on the control voice to determine the semantics of the control voice.
In this method, a preset semantic model can be used to perform semantic analysis on the control voice and determine its semantics. The preset semantic model can also be called a preset semantic analysis algorithm, and the semantic analysis can also be called semantic parsing.
S202: determine, according to the semantics, an operation type directed at the virtual object.
Different semantics for the same virtual object can correspond to different operation types; different semantics for different virtual objects can correspond to the same or different operation types. That is, the semantics usually carry two layers of meaning: the object and the operation. Therefore, in this method, both the virtual object and the operation type directed at the virtual object need to be determined according to the semantics.
For example, in this method, the operation type corresponding to the semantics can be determined as the operation type directed at the virtual object according to the semantics and a preset correspondence between semantics and operation types.
For another example, in this method, the virtual object targeted by the control voice can also be determined according to the semantics and a preset correspondence between semantics and objects, and the operation type corresponding to the semantics can be determined as the operation type directed at the virtual object according to the semantics and the preset correspondence between semantics and operation types.
S203: control, according to the operation type, the virtual object to execute the operation corresponding to the operation type.
The operation corresponding to the control voice is the operation corresponding to the operation type. After the operation type directed at the virtual object is determined, the virtual object can be controlled, according to the operation type, to execute the corresponding operation, that is, the operation corresponding to the operation type.
Optionally, controlling, in S203 above, the virtual object to execute the operation corresponding to the operation type according to the operation type may include:
generating, according to the operation type, a control instruction corresponding to the operation type; and
controlling, according to the control instruction, the virtual object to execute the operation corresponding to the operation type.
Different operation types can correspond to different control instructions. Therefore, to make the control of the virtual object more accurate, in this method the control instruction corresponding to the operation type can be generated according to the operation type, and the virtual object can be controlled, based on the generated control instruction, to execute the operation corresponding to the operation type.
In this method, the semantics of the control voice are obtained by performing semantic analysis on the control voice, and the operation type directed at the virtual object is determined according to the semantics, which can make the operation control of the virtual object more accurate, better meet the operational requirements of the user, and improve the user experience.
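The S201-S203 pipeline above can be sketched as follows, with a simple keyword table standing in for both the preset semantic model and the preset correspondence between semantics and operation types; the rules, names and the instruction format are assumptions for illustration only.

```python
SEMANTIC_RULES = {   # preset correspondence: semantics -> operation type
    "bigger": "zoom_in",
    "smaller": "zoom_out",
    "dance": "play_animation",
}

def analyze_semantics(control_voice: str) -> str:
    """S201: toy semantic analysis, returning the first known keyword."""
    for keyword in SEMANTIC_RULES:
        if keyword in control_voice:
            return keyword
    raise ValueError("no known semantics in: " + control_voice)

def determine_operation_type(semantics: str) -> str:
    """S202: look up the operation type via the preset correspondence."""
    return SEMANTIC_RULES[semantics]

def build_control_instruction(op_type: str, target: str) -> dict:
    """S203: generate the control instruction for the operation type."""
    return {"op": op_type, "target": target}

semantics = analyze_semantics("make the puppy bigger")
instr = build_control_instruction(determine_operation_type(semantics), "puppy")
print(instr)  # {'op': 'zoom_in', 'target': 'puppy'}
```

A real semantic model would of course be far richer than keyword lookup, but the three-stage split (semantics, then operation type, then control instruction) mirrors S201-S203 directly.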
The following are apparatus embodiments of the present invention, which can be used to execute the above method embodiments; their implementation principles and technical effects are similar.
Fig. 3 is a schematic structural diagram of an object control apparatus for an AR scene provided by an embodiment of the present invention. The object control apparatus can be integrated in an AR device in the form of software and/or hardware. As shown in Fig. 3, the object control apparatus 30 for an AR scene of this embodiment may include:
an obtaining module 31, configured to obtain a control voice directed at a virtual object, the virtual object being a virtual object displayed on an interface in the AR scene; and
a control module 32, configured to control, according to the control voice, the virtual object to execute the operation corresponding to the control voice.
The object control apparatus for an AR scene of this embodiment can obtain a control voice directed at a virtual object, the virtual object being a virtual object displayed on an interface in the AR scene, and control, according to the control voice, the virtual object to execute the operation corresponding to the control voice. With this apparatus, the virtual object can be controlled through the control voice without the user having to input any touch operation, freeing the user's hands for object control in the AR scene, improving operating convenience and improving the user experience.
Optionally, the obtaining module 31 above is specifically configured to receive an input audio, and to perform speech recognition on the input audio to determine the control voice directed at the virtual object.
Optionally, the control module 32 above is specifically configured to perform semantic analysis on the control voice to determine the semantics of the control voice; determine, according to the semantics, an operation type directed at the virtual object; and control, according to the operation type, the virtual object to execute the operation corresponding to the operation type.
Optionally, the control module 32 above is specifically configured to generate, according to the operation type, a control instruction corresponding to the operation type, and to control, according to the control instruction, the virtual object to execute the operation corresponding to the operation type.
Optionally, the operation corresponding to the control voice includes any of the following: a zoom-in operation, a zoom-out operation, an animation playback operation, an audio/video playback operation, a dialogue control operation, and so on.
The object control apparatus for an AR scene provided by this embodiment can execute the object control method shown in any of Figs. 1 and 2; for the specific implementation and beneficial effects, reference can be made to the above, and details are not described here.
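A minimal sketch of the two-module structure of Fig. 3, with the obtaining module and the control module modeled as injected callables; this structure is a hypothetical illustration, since the patent does not prescribe any implementation.

```python
class ObjectControlApparatus:
    """Toy apparatus 30: obtaining module 31 feeds control module 32."""

    def __init__(self, obtaining_module, control_module):
        self.obtaining_module = obtaining_module  # input audio -> control voice
        self.control_module = control_module      # control voice -> executed operation

    def handle(self, input_audio: bytes) -> str:
        control_voice = self.obtaining_module(input_audio)
        return self.control_module(control_voice)

apparatus = ObjectControlApparatus(
    obtaining_module=lambda audio: "zoom in on the puppy",
    control_module=lambda voice: "zoom_in executed",
)
print(apparatus.handle(b""))  # zoom_in executed
```

Injecting the two modules keeps the apparatus testable and mirrors the patent's point that the functional entities need not be physically separate components.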
An embodiment of the present invention can also provide an AR device. Fig. 4 is a schematic structural diagram of an AR device provided by an embodiment of the present invention. As shown in Fig. 4, the AR device 40 of this embodiment includes a memory 41 and a processor 42, the memory 41 being connected to the processor 42 through a bus.
The memory 41 is configured to store program instructions.
The processor 42 is configured to perform, when the program instructions are executed, the following operations:
obtaining a control voice directed at a virtual object, the virtual object being a virtual object displayed on an interface in the AR scene; and
controlling, according to the control voice, the virtual object to execute the operation corresponding to the control voice.
Optionally, the processor 42 above is specifically configured to receive an input audio, and to perform speech recognition on the input audio to determine the control voice directed at the virtual object.
Optionally, the processor 42 above is specifically configured to perform semantic analysis on the control voice to determine the semantics of the control voice; determine, according to the semantics, an operation type directed at the virtual object; and control, according to the operation type, the virtual object to execute the operation corresponding to the operation type.
Optionally, the processor 42 above is specifically configured to generate, according to the operation type, a control instruction corresponding to the operation type, and to control, according to the control instruction, the virtual object to execute the operation corresponding to the operation type.
Optionally, the operation corresponding to the control voice includes any of the following: a zoom-in operation, a zoom-out operation, an animation playback operation, an audio/video playback operation, a dialogue control operation, and so on.
The AR device of this embodiment can execute the object control method for an AR scene shown in any of Figs. 1 and 2; for the specific implementation and beneficial effects, reference can be made to the above, and details are not described here.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored; the computer program can be executed by the processor 42 described in Fig. 4 above to implement the object control method for an AR scene of any of the above embodiments. For the specific implementation and beneficial effects, reference can be made to the above, and details are not described here.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be completed by program instructions together with related hardware. The aforementioned computer program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or make equivalent replacements for some or all of the technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (12)
1. An object control method for an augmented reality (AR) scene, characterized in that it comprises:
obtaining a control voice directed at a virtual object, the virtual object being a virtual object displayed on an interface in the AR scene; and
controlling, according to the control voice, the virtual object to perform an operation corresponding to the control voice.
2. The method according to claim 1, wherein obtaining the control voice directed at the virtual object comprises:
receiving input audio; and
performing speech recognition on the input audio to determine the control voice directed at the virtual object.
3. The method according to claim 1, wherein controlling, according to the control voice, the virtual object to perform the operation corresponding to the control voice comprises:
performing semantic analysis on the control voice to determine the semantics of the control voice;
determining, according to the semantics, an action type directed at the virtual object; and
controlling, according to the action type, the virtual object to perform an operation corresponding to the action type.
4. The method according to claim 3, wherein controlling, according to the action type, the virtual object to perform the operation corresponding to the action type comprises:
generating, according to the action type, a control instruction corresponding to the action type; and
controlling, according to the control instruction, the virtual object to perform the operation corresponding to the action type.
5. The method according to any one of claims 1-4, wherein the operation corresponding to the control voice comprises any one of the following:
a zoom-in operation, a zoom-out operation, an animation playback operation, an audio/video playback operation, and a dialogue control operation.
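The control flow claimed in claims 1-5 (receive audio, recognize the control voice, analyze its semantics into an action type, generate a control instruction, and execute the corresponding operation) can be sketched as below. This is an illustrative sketch only: every name (`recognize_speech`, `parse_action_type`, `OPERATIONS`, the keyword table) is a hypothetical stand-in, not from the patent, and a real implementation would call an actual ASR engine and semantic-analysis model.

```python
# Hypothetical sketch of claims 1-5: audio -> control voice -> action type -> operation.
# The virtual object is modeled as a plain dict for illustration.

OPERATIONS = {
    "zoom_in": lambda obj: obj.update(scale=obj["scale"] * 2),
    "zoom_out": lambda obj: obj.update(scale=obj["scale"] / 2),
    "play_animation": lambda obj: obj.update(animating=True),
}

def recognize_speech(audio: bytes) -> str:
    """Claim 2: speech recognition on the input audio (placeholder for an ASR engine)."""
    return audio.decode("utf-8")

def parse_action_type(control_voice: str) -> str:
    """Claim 3: semantic analysis mapping the control voice to an action type."""
    keywords = {"bigger": "zoom_in", "smaller": "zoom_out", "dance": "play_animation"}
    for word, action in keywords.items():
        if word in control_voice.lower():
            return action
    raise ValueError(f"no action type for: {control_voice!r}")

def control_virtual_object(obj: dict, audio: bytes) -> dict:
    """Claims 1 and 4 end to end: the generated 'control instruction' is the
    operation looked up for the determined action type."""
    control_voice = recognize_speech(audio)
    action_type = parse_action_type(control_voice)
    OPERATIONS[action_type](obj)
    return obj

sprite = {"scale": 1.0, "animating": False}
control_virtual_object(sprite, b"make it bigger")
print(sprite["scale"])  # prints 2.0
```

Under these assumptions, a zoom-in utterance doubles the object's display scale, and an utterance matched to `play_animation` triggers animation playback on the same object.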
6. An object control apparatus for an augmented reality (AR) scene, characterized in that it comprises:
an acquisition module, configured to obtain a control voice directed at a virtual object, the virtual object being a virtual object displayed on an interface in the AR scene; and
a control module, configured to control, according to the control voice, the virtual object to perform an operation corresponding to the control voice.
7. The apparatus according to claim 6, wherein the acquisition module is specifically configured to receive input audio and to perform speech recognition on the input audio to determine the control voice directed at the virtual object.
8. The apparatus according to claim 6, wherein the control module is specifically configured to perform semantic analysis on the control voice to determine the semantics of the control voice; determine, according to the semantics, an action type directed at the virtual object; and control, according to the action type, the virtual object to perform an operation corresponding to the action type.
9. The apparatus according to claim 8, wherein the control module is specifically configured to generate, according to the action type, a control instruction corresponding to the action type, and to control, according to the control instruction, the virtual object to perform the operation corresponding to the action type.
10. The apparatus according to any one of claims 6-9, wherein the operation corresponding to the control voice comprises any one of the following:
a zoom-in operation, a zoom-out operation, an animation playback operation, an audio/video playback operation, and a dialogue control operation.
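The two-module decomposition of the apparatus claims (an acquisition module that yields the control voice, feeding a control module that performs semantic analysis and executes the operation) could be wired together as below. The class and method names are illustrative assumptions, not from the patent, and the keyword table again stands in for a real semantic-analysis step.

```python
# Hypothetical module structure for claims 6-9.

class AcquisitionModule:
    """Claim 7: receives input audio and recognizes the control voice."""
    def get_control_voice(self, audio: bytes) -> str:
        return audio.decode("utf-8")  # placeholder for a real ASR engine

class ControlModule:
    """Claims 8-9: semantic analysis -> action type -> control instruction."""
    ACTIONS = {"bigger": "zoom_in", "smaller": "zoom_out"}

    def execute(self, obj: dict, control_voice: str) -> None:
        # Determine the action type from the semantics of the control voice.
        action_type = next(
            a for k, a in self.ACTIONS.items() if k in control_voice.lower()
        )
        # The generated control instruction drives the corresponding operation.
        if action_type == "zoom_in":
            obj["scale"] *= 2
        elif action_type == "zoom_out":
            obj["scale"] /= 2

class ObjectControlDevice:
    """Claim 6: the acquisition module feeding the control module."""
    def __init__(self) -> None:
        self.acquisition = AcquisitionModule()
        self.control = ControlModule()

    def handle(self, obj: dict, audio: bytes) -> None:
        self.control.execute(obj, self.acquisition.get_control_voice(audio))
```

A caller would construct one `ObjectControlDevice` and pass it the rendered object's state together with each captured audio buffer; the split mirrors the claims' separation between obtaining the control voice and acting on it.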
11. An augmented reality (AR) device, characterized in that it comprises a memory and a processor, the memory being connected to the processor;
the memory is configured to store program instructions; and
the processor, when executing the program instructions, implements the object control method for an AR scene according to any one of claims 1-5.
12. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the object control method for an AR scene according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810720007.9A CN108874363A (en) | 2018-07-03 | 2018-07-03 | Object control method, apparatus, equipment and storage medium for AR scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108874363A true CN108874363A (en) | 2018-11-23 |
Family
ID=64298518
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810720007.9A Pending CN108874363A (en) | 2018-07-03 | 2018-07-03 | Object control method, apparatus, equipment and storage medium for AR scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108874363A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105468142A (en) * | 2015-11-16 | 2016-04-06 | 上海璟世数字科技有限公司 | Interaction method and system based on augmented reality technique, and terminal |
US20160124501A1 (en) * | 2014-10-31 | 2016-05-05 | The United States Of America As Represented By The Secretary Of The Navy | Secured mobile maintenance and operator system including wearable augmented reality interface, voice command interface, and visual recognition systems and related methods |
CN106558310A (en) * | 2016-10-14 | 2017-04-05 | 北京百度网讯科技有限公司 | Virtual reality sound control method and device |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109615958A (en) * | 2018-12-04 | 2019-04-12 | 深圳市诺信连接科技有限责任公司 | The processing method and VR of interactive VR image |
CN110299138A (en) * | 2019-06-28 | 2019-10-01 | 北京机械设备研究所 | A kind of augmented reality assembly technology instructs system and method |
CN110517683A (en) * | 2019-09-04 | 2019-11-29 | 上海六感科技有限公司 | Wear-type VR/AR equipment and its control method |
CN111219855A (en) * | 2020-01-13 | 2020-06-02 | 青岛海尔空调器有限总公司 | Method, device, equipment and system for air conditioner control |
CN112957733A (en) * | 2021-03-31 | 2021-06-15 | 歌尔股份有限公司 | Game picture display method, positioning base station, host equipment and related equipment |
CN112957733B (en) * | 2021-03-31 | 2022-10-11 | 歌尔股份有限公司 | Game picture display method, positioning base station, host equipment and related equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108874363A (en) | Object control method, apparatus, equipment and storage medium for AR scene | |
CN109801644B (en) | Separation method, separation device, electronic equipment and readable medium for mixed sound signal | |
US10923102B2 (en) | Method and apparatus for broadcasting a response based on artificial intelligence, and storage medium | |
CN104252226B (en) | The method and electronic equipment of a kind of information processing | |
CN108597509A (en) | Intelligent sound interacts implementation method, device, computer equipment and storage medium | |
US20140036022A1 (en) | Providing a conversational video experience | |
WO2022022536A1 (en) | Audio playback method, audio playback apparatus, and electronic device | |
CN112309365B (en) | Training method and device of speech synthesis model, storage medium and electronic equipment | |
CN108156317A (en) | call voice control method, device and storage medium and mobile terminal | |
CN109461449A (en) | Voice awakening method and system for smart machine | |
CN110557699B (en) | Intelligent sound box interaction method, device, equipment and storage medium | |
WO2007070734A2 (en) | Method and system for directing attention during a conversation | |
CN113286641A (en) | Voice communication system of online game platform | |
CN108366299A (en) | A kind of media playing method and device | |
CN111524516A (en) | Control method based on voice interaction, server and display device | |
CN107623622A (en) | A kind of method and electronic equipment for sending speech animation | |
CN110377220A (en) | A kind of instruction response method, device, storage medium and electronic equipment | |
CN112291615A (en) | Audio output method and audio output device | |
CN107623830A (en) | A kind of video call method and electronic equipment | |
CN114630057A (en) | Method and device for determining special effect video, electronic equipment and storage medium | |
CN109445573A (en) | A kind of method and apparatus for avatar image interactive | |
CN111128120B (en) | Text-to-speech method and device | |
CN107329725A (en) | Method and apparatus for controlling many people's interactive applications | |
CN115396390A (en) | Interaction method, system and device based on video chat and electronic equipment | |
CN108495160A (en) | Intelligent control method, system, equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181123 |