CN117193531A - Display method, display device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN117193531A
CN117193531A (application CN202311143101.XA)
Authority
CN
China
Prior art keywords
image
audio
information
interface
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311143101.XA
Other languages
Chinese (zh)
Inventor
姚言章
陈作行
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202311143101.XA priority Critical patent/CN117193531A/en
Publication of CN117193531A publication Critical patent/CN117193531A/en
Pending legal-status Critical Current


Abstract

The application discloses a display method, a display device, an electronic device, and a readable storage medium, belonging to the field of electronic technology. The method includes: when first audio is detected in the environment, obtaining first orientation information corresponding to the source of the first audio; controlling, according to the first orientation information, a first camera corresponding to the first orientation information to capture an image; and, while the electronic device displays a first interface, displaying a first image in a first area of the first interface based on the image captured by the first camera; wherein the first audio is non-noise audio.

Description

Display method, display device, electronic equipment and readable storage medium
Technical Field
The application belongs to the field of electronic technology, and in particular relates to a display method, a display device, an electronic device, and a readable storage medium.
Background
With the development of virtual reality (VR) technology, VR devices have gradually entered people's daily lives, and users place ever-higher demands on the VR experience.
Typically, in order to give the user a better sense of immersion, a VR device isolates the user from the external environment, so the user is likely to miss interactions with that environment. For example, when another person enters the room, the user cannot perceive it in time.
Therefore, in the prior art, a user wearing a VR device cannot perceive urgent events in the external environment in a timely manner.
Disclosure of Invention
The embodiments of the application aim to provide a display method that solves the problem that a user cannot perceive urgent events in the external environment in a timely manner while using a VR device.
In a first aspect, an embodiment of the present application provides a display method, including: when first audio is detected in the environment, obtaining first orientation information corresponding to the source of the first audio; controlling, according to the first orientation information, a first camera corresponding to the first orientation information to capture an image; and, while the electronic device displays a first interface, displaying a first image in a first area of the first interface based on the image captured by the first camera; wherein the first audio is non-noise audio.
In a second aspect, an embodiment of the present application provides a display apparatus, including: a first acquisition module, configured to obtain first orientation information corresponding to the source of first audio when the first audio is detected in the environment; a capture module, configured to control, according to the first orientation information, a first camera corresponding to the first orientation information to capture an image; and a first display module, configured to display, while the electronic device displays a first interface, a first image in a first area of the first interface based on the image captured by the first camera; wherein the first audio is non-noise audio.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium storing a program or instructions which, when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, the chip including a processor and a communication interface coupled to the processor, the processor being configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, an embodiment of the present application provides a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
Thus, in this embodiment of the application, while the user is using the electronic device, the device continuously monitors audio in the environment. When first audio that is non-noise audio is detected, the device obtains the first orientation information corresponding to the source of that audio and, based on this information, controls the corresponding first camera to capture an image; the captured image contains the scene from which the first audio was emitted. A first image is then displayed in a first area of the currently displayed first interface according to the captured image. In this way, while the user is immersed in the experience, urgent events in the external environment are displayed on the device in time to alert the user, avoiding unnecessary loss caused by the user failing to perceive the situation in the external environment.
Drawings
FIG. 1 is a flowchart of a display method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 3 is a first schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 4 is a second schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 5 is a third schematic diagram of a display method according to an embodiment of the present application;
FIG. 6 is a fourth schematic diagram of a display method according to an embodiment of the present application;
FIG. 7 is a fifth schematic diagram of a display method according to an embodiment of the present application;
FIG. 8 is a sixth schematic diagram of a display method according to an embodiment of the present application;
FIG. 9 is a seventh schematic diagram of a display method according to an embodiment of the present application;
FIG. 10 is an eighth schematic diagram of a display method according to an embodiment of the present application;
FIG. 11 is a block diagram of a display apparatus according to an embodiment of the present application;
FIG. 12 is a first schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
FIG. 13 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application are described clearly below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish similar objects and do not necessarily describe a particular sequence or chronological order. It should be understood that terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more objects. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The display method provided by the embodiments of the present application may be executed by the display apparatus provided by the embodiments, or by an electronic device integrating the display apparatus, where the display apparatus may be implemented in hardware or software.
The display method provided by the embodiments of the present application is described in detail below through specific embodiments and their application scenarios, with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a display method according to an embodiment of the present application. Taking an electronic device as the executing body by way of example, the method includes:
Step 110: when first audio in the environment is detected, obtaining first orientation information corresponding to the source of the first audio.
Optionally, the electronic device in this embodiment is an augmented reality (AR) device or a VR device.
For example, referring to fig. 2, the electronic device in this embodiment is a pair of VR glasses or AR glasses.
In the application scenario of this embodiment, the user wears the electronic device to have a VR experience indoors. During the experience, the electronic device continuously monitors audio in the indoor environment; when the first audio is detected, the display method provided by this embodiment displays an image related to the first audio within the device's display environment, so that urgent events in the user's surroundings are brought to the user's attention in time during the VR experience.
Wherein the first audio is non-noise audio.
Optionally, when the first audio in the environment is detected, it is first judged whether the audio is noise. If so, no processing is performed, which eliminates interference from environmental noise; if not, the subsequent steps continue.
Optionally, the first audio is compared against an audio database to determine whether it is noise. Noise audio is, for example, white noise.
In this embodiment, after the first audio is determined to be non-noise audio, the first orientation information corresponding to the source of the first audio is obtained. The first orientation information indicates a direction; that is, based on this step, it can be determined from which direction the first audio was emitted.
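The patent does not specify how noise is distinguished from non-noise audio. As one hedged sketch (the function names and the flatness threshold are hypothetical, not from the patent), broadband noise such as white noise can be separated from structured sounds like speech or breaking glass using a spectral-flatness measure:

```python
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the power spectrum.
    Values near 1.0 indicate a flat, noise-like spectrum; values
    near 0.0 indicate tonal or structured audio."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # avoid log(0)
    geometric = np.exp(np.mean(np.log(spectrum)))
    arithmetic = np.mean(spectrum)
    return float(geometric / arithmetic)

def is_noise(signal: np.ndarray, threshold: float = 0.5) -> bool:
    """Classify a frame as noise when its spectrum is flat enough."""
    return spectral_flatness(signal) > threshold

# Two synthetic test frames: white noise vs. a 440 Hz tone at 16 kHz.
rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
t = np.arange(4096) / 16000.0
tone = np.sin(2 * np.pi * 440.0 * t)
```

A flatness near 1.0 means energy is spread evenly across bins (noise-like), while tonal or structured audio concentrates energy in a few bins and scores far lower. A production system would likely combine such a measure with an energy gate or a learned classifier rather than a single threshold.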
In some application scenarios, the first orientation information indicates an absolute direction in the environment, for example, southeast.
In other application scenarios, the first orientation information indicates a direction relative to the user, for example, to the user's right.
Step 120: controlling, according to the first orientation information, a first camera corresponding to the first orientation information to capture an image.
In this step, a first camera on the electronic device corresponding to the first orientation information is controlled, according to that information, to capture an image.
The captured image content is the scene from which the first audio is emitted.
For example, referring to fig. 3, when the user is indoors and someone opens the door, a door-opening sound is produced, and the first camera facing the door is controlled to capture an image.
As another example, referring to fig. 4, when the user is indoors and a cup falls to the ground, the breaking sound of the cup triggers control of the first camera facing the cup to capture an image.
Step 130: while the electronic device displays the first interface, displaying a first image in a first area of the first interface according to the image captured by the first camera.
In this step, the electronic device is displaying the first interface, in which the user is immersively experiencing a VR scene.
The first area is an area of the first interface, preferably one with little impact on the user's immersive VR experience.
Optionally, the first image is displayed in the first area with a particular visual effect, so that its appearance is not too abrupt.
Optionally, the first image is at least one picture; alternatively, the first image is at least one video.
For example, while sound continues in the environment, images are continuously captured until the scene falls silent; correspondingly, a video is output as the first image. Alternatively, a plurality of pictures is output as the first image, such as one picture per time interval.
As another example, after the first camera captures the image, a picture or a video is output based on the captured image.
Thus, in this embodiment of the application, while the user is using the electronic device, the device continuously monitors audio in the environment. When first audio that is non-noise audio is detected, the device obtains the first orientation information corresponding to the source of that audio and, based on this information, controls the corresponding first camera to capture an image; the captured image contains the scene from which the first audio was emitted. A first image is then displayed in a first area of the currently displayed first interface according to the captured image. In this way, while the user is immersed in the experience, urgent events in the external environment are displayed on the device in time to alert the user, avoiding unnecessary loss caused by the user failing to perceive the situation in the external environment.
In the flow of the display method according to another embodiment of the present application, step 120 includes:
Substep A1: identifying, according to characteristic information of the first audio and by comparison against a sound library, a first object that emits the first audio.
In this step, the characteristic information of the first audio is compared with the audio characteristic information in the sound library to identify the specific category of the first audio, such as a door-opening sound, a cup breaking, or a telephone ringing, and thereby to identify the first object that emits the first audio, such as a door, a cup, or a telephone.
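The patent does not detail the comparison algorithm. A minimal sketch, assuming the sound library stores one reference feature vector per category (the library contents, vector dimensionality, and function names below are hypothetical), is a nearest-neighbor match on extracted audio features:

```python
import math

# Hypothetical sound library: category -> reference feature vector
# (in practice these might be MFCC averages or learned embeddings).
SOUND_LIBRARY = {
    "door": [0.9, 0.1, 0.3],
    "cup": [0.2, 0.8, 0.5],
    "phone": [0.4, 0.4, 0.9],
}

def identify_source(features: list[float]) -> str:
    """Return the library category whose reference vector is nearest
    (Euclidean distance) to the feature vector of the detected audio."""
    return min(
        SOUND_LIBRARY,
        key=lambda category: math.dist(SOUND_LIBRARY[category], features),
    )
```

For example, a feature vector close to the "door" reference would be classified as a door-opening sound, identifying the door as the first object. A real implementation would use far richer features and likely a trained classifier rather than raw distance.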
Substep A2: controlling, according to the first orientation information, the first camera to capture an image of the first object performing a first action.
In this step, referring to fig. 5, the first camera is controlled to find, within its field of view (a), the first object 501 that is performing the first action, and to capture an image of it.
Referring to fig. 5, the first object 501 is a door, and the first action is opening.
In another example, a cup falls and breaks on the ground directly in front of the user. A first camera mounted at the front of the electronic device performs feature comparison within the field of view in front of the user to find an object with cup features, and then captures an image of the cup breaking on the ground.
Optionally, the electronic device has multiple cameras, and the first camera whose field angle covers the field of view indicated by the first orientation information is controlled to capture images.
In this embodiment, the first object emitting the first audio is identified first, and then an image including that object performing the first action is captured in the environment. This allows the user to see the environmental situation related to the first audio at a glance, making it easier to decide whether the VR experience needs to be interrupted in time.
In the flow of the display method according to another embodiment of the present application, step 130 includes:
Substep B1: acquiring a first image corresponding to the first object from the captured images.
Optionally, after the first camera captures the image, a portion of it may be cropped out as the first image, where the first image includes the first object.
For example, referring to fig. 3 and fig. 4, the first image 301 and the first image 401 in different scenes each depict only the scene in which the first audio is emitted, excluding other scenes in the environment.
In this embodiment, the displayed first image contains only the first object that emits the first audio and no other objects in the environment. The image content used to prompt the user is therefore simple, so that the user can quickly understand the event in the environment without having to rule out interfering factors.
In the flow of the display method according to another embodiment of the present application, step 120 includes:
Substep C1: acquiring gyroscope data, and determining, from the gyroscope data, first angle information of the rotation of the electronic device.
Substep C2: determining, according to the first angle information and the first orientation information, the first camera of the electronic device corresponding to the first orientation information.
When the user wears the electronic device on the head, the device rotates with the head as the user follows the VR image, and the cameras on the device rotate with it. Suppose the first audio is emitted in the environment and image capture is directed toward its source; if the electronic device then rotates, the current camera may no longer face that source. Therefore, according to the first angle information of the device's rotation and the first orientation information, the controlled camera is switched to another camera in time, so that the camera capturing images always points toward the place where the first audio is emitted.
Correspondingly, "first camera" denotes whichever camera is currently controlled to capture the image of the environment, not a single fixed camera.
Here, the first orientation information indicates a fixed direction in the environment.
For example, if the first orientation information indicates the southeast direction in the environment, it still indicates the southeast direction after the electronic device rotates.
In this embodiment, while the first camera is controlled to capture images, gyroscope data of the electronic device is obtained to continuously track the device's rotation, so that the controlled camera is adjusted in real time based on the first angle information and the first orientation information, ensuring that the sound-source direction is not lost due to rotation of the electronic device.
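The camera-switching logic above can be sketched as follows. This is a simplified model under stated assumptions: a hypothetical four-camera layout with known mounting azimuths in the device frame, a world-fixed source bearing, and a single yaw angle from the gyroscope (a real headset would use a full 3-D orientation).

```python
# Hypothetical camera layout: mounting azimuth of each camera's
# optical axis in the device frame, in degrees.
CAMERAS = {
    "front": 0.0,
    "right": 90.0,
    "rear": 180.0,
    "left": 270.0,
}

def select_camera(source_bearing_world: float, device_yaw: float) -> str:
    """Pick the camera whose optical axis is closest to the sound
    source after removing the device rotation reported by the gyro."""
    # Bearing of the source expressed in the (rotated) device frame.
    relative = (source_bearing_world - device_yaw) % 360.0

    def angular_gap(cam_azimuth: float) -> float:
        d = abs(relative - cam_azimuth) % 360.0
        return min(d, 360.0 - d)  # shortest angular distance

    return min(CAMERAS, key=lambda name: angular_gap(CAMERAS[name]))
```

For instance, a source due east (90°) is covered by the right camera while the device faces north, but by the front camera once the user turns 90° to face east, which is exactly the switch Substeps C1 and C2 describe.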
In the flow of the display method according to another embodiment of the present application, before step 130, the method further includes:
Step D1: acquiring a second area of the first interface corresponding to the user's gaze range.
In this step, referring to fig. 6, the user's gaze is determined to fall on a second area 601 of the first interface.
Optionally, eye-tracking technology is used to identify the second area in the first interface.
Step D2: determining, according to the first orientation information, a first area in the first interface outside the second area.
In this step, the first image transmitted to the first interface is displayed outside the second area, that is, outside the user's gaze range.
For example, referring to fig. 7, the first region 702 is determined in a portion of the interface other than the second region 701.
When the first orientation information indicates a first relative direction with respect to the user, the first area is located in that first relative direction with respect to the second area.
Optionally, in determining the first region, the rule applied is that the first region is placed closer to the sound source of the first audio.
For example, referring to fig. 7, the first orientation information indicates the right side of the user, and the first area 702 is determined on the right side of the second area 701.
Optionally, the first image is displayed floating over the first interface.
In this embodiment, eye-tracking technology is used to display the first image outside the user's gaze range, so that the user is reminded of the external environment without much interference with the VR experience. Because the display position of the first image is chosen according to the first orientation information, the user can quickly tell from which direction the first audio came, rather than searching the environment blindly.
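Steps D1 and D2 amount to a simple placement rule: put the alert region outside the gaze box, on the side the sound came from. A minimal 2-D sketch (screen coordinates, region sizes, and the 10-pixel margin are all hypothetical values, not from the patent):

```python
def place_alert_region(screen_w: int, screen_h: int,
                       gaze_box: tuple[int, int, int, int],
                       side: str,
                       size: tuple[int, int] = (200, 150)) -> tuple[int, int, int, int]:
    """Place the alert (first) region outside the gaze (second) region,
    on the side the first orientation information indicates.
    gaze_box is (x, y, w, h) in screen pixels; returns (x, y, w, h)."""
    gx, gy, gw, gh = gaze_box
    w, h = size
    if side == "right":
        x = min(gx + gw + 10, screen_w - w)   # just right of the gaze box
    elif side == "left":
        x = max(gx - w - 10, 0)               # just left of the gaze box
    else:
        x = gx                                 # fall back: same column
    y = max(0, min(gy, screen_h - h))          # clamp vertically on screen
    return (x, y, w, h)
```

For a 1920x1080 interface with the gaze box at (600, 400, 400, 300) and a sound from the user's right, the alert region lands at x = 1010, immediately to the right of the gaze box, matching the fig. 7 example.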
In the flow of the display method according to another embodiment of the present application, after step 120, the method further includes:
Step E1: outputting first prompt information in a first manner according to the first orientation information.
The first prompt information prompts the user to shift the gaze range toward the first area; it corresponds to a first level and is associated with the captured image content.
In one application scenario, the first image is displayed first, and the first prompt information is then output to prompt the user to view it.
In another application scenario, the first prompt information is output first, and the first image is displayed only after the user's feedback is received. This shortens the time the first image stays on the first interface as much as possible, avoiding the situation where the user's gaze range keeps changing while the first image stays fixed and blocks the VR picture the user is watching, thereby preserving the user's immersive experience.
When the user perceives the first prompt information, attention leaves the VR picture and the gaze position changes. When eye tracking detects that the user's gaze has moved from the second area to the first area, the user is considered to have responded to the first prompt information, and the first image is displayed in the first area.
Optionally, the first prompt information is audio information. For example, a prompt sound is output from the earphone.
Optionally, the first prompt information incorporates the first orientation information. For example, using spatial audio mixing, the prompt sound is output from the earphone closer to the sound source, guiding the user's gaze to move in that direction.
Optionally, the first prompt information is picture information.
For example, a visual effect such as a highlight or a blink is played in the first region. As another example, referring to fig. 8, a faint blinking highlight is played along a path 801 between the first region and the second region.
Optionally, when eye tracking detects that the user's gaze has moved from the second area to the first area 901 (as shown in fig. 9), the output of the first prompt information is stopped.
Optionally, multiple levels are set, and prompt information of different levels is output. The higher the level, the stronger the prompting effect; for example, at a high level the prompt information may be a played alarm sound.
The first level of the first prompt information may be determined according to the content of the first image, the first level being one of the set levels.
For example, if the content of the first image depicts a dangerous scene, the level of the first prompt information is higher.
In this embodiment, the first prompt information is output to direct the user's attention to the first area of the first interface. Because the first level is set according to the content of the first image, prompts of different intensities can be presented according to the urgency of the event in the environment, avoiding unnecessary loss to the user from environmental emergencies.
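The mapping from image content to prompt level can be sketched as a lookup table. The event categories, number of levels, and prompt actions below are illustrative assumptions; the patent only requires that more urgent content yield a stronger prompt:

```python
# Hypothetical mapping from recognized event category to prompt level.
EVENT_LEVELS = {
    "door_open": 1,    # mild: visual highlight only
    "phone_ring": 1,
    "glass_break": 2,  # stronger: add a directional audio cue
    "fire_alarm": 3,   # highest: play an alarm sound
}

LEVEL_ACTIONS = {
    1: "highlight first region",
    2: "highlight + spatial audio cue",
    3: "play alarm sound",
}

def prompt_for(event: str) -> tuple[int, str]:
    """Return (level, prompt action) for a recognized event,
    defaulting unknown events to the mildest level."""
    level = EVENT_LEVELS.get(event, 1)
    return level, LEVEL_ACTIONS[level]
```

A dangerous scene such as a fire alarm thus triggers the strongest prompt, while a routine door-opening only produces a quiet highlight in the first region.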
In the flow of the display method according to another embodiment of the present application, after step 130, the method further includes:
Step F1: receiving a first input, the first input being used to trigger interruption of the display of the first interface.
The first input includes a touch input made by the user on a screen, not limited to a click, slide, or drag. The first input may also be a contactless input, such as a gesture or facial action, or an input to a physical key on the device, not limited to a press. Moreover, the first input may comprise one or more inputs, which may be continuous or separated in time.
Step F2: in response to the first input, displaying an image preview interface corresponding to the external environment on the electronic device.
For example, when the user issues a command to exit the VR experience through voice or a handheld controller, the first interface stops being displayed, and an image preview interface is displayed on the display of the electronic device, showing the environment image captured by the camera, specifically the image captured by the first camera.
Otherwise, if the user issues a command not to exit the VR experience through voice or a handheld controller, the first interface continues to be displayed and the first image is no longer displayed.
In this embodiment, based on the display of the first image, the user may interrupt the VR experience in time, whereupon the VR device displays the environment image, so that the user can learn about the situation in the environment as soon as possible while still wearing the electronic device.
In the flow of the display method according to another embodiment of the present application, step 110 includes:
Substep G1: when at least two sound receivers of the electronic device each pick up the first audio, acquiring the decibel value of the first audio picked up by each receiver.
In this step, the first orientation information is obtained from the decibel value of the first audio received by each sound receiver on the electronic device.
Optionally, the sound receiver is a microphone (MIC).
Substep G2: acquiring the first orientation information according to the position, on the electronic device, of the receiver corresponding to the maximum decibel value.
The at least two receivers are distributed at different positions on the electronic device.
For example, referring to fig. 10, the electronic device worn by the user includes MIC1 10001 and MIC2 10002. If the decibel value received by MIC1 10001 is greater than that received by MIC2 10002, the sound source 10003 is closer to MIC1 10001.
The more receivers arranged on the electronic device, the more accurate the acquired first orientation information.
Optionally, based on the attitude of the electronic device when the first audio is detected, the direction in the environment of the receiver with the maximum decibel value can be determined, and first orientation information indicating that direction is acquired.
Optionally, based on the attitude of the electronic device when the first audio is detected, the direction of that receiver relative to the user can be determined, and first orientation information indicating that direction is acquired.
In this embodiment, the sound source of the first audio is located from the different decibel values of the first audio received by the multiple receivers on the electronic device, and the first orientation information is obtained accordingly.
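Substeps G1 and G2 reduce to picking the loudest microphone and reading off its mounting direction. A minimal sketch (the microphone names, levels, and azimuths are hypothetical example values; a real system might instead use time-difference-of-arrival for finer resolution):

```python
def estimate_direction(mic_levels: dict[str, float],
                       mic_azimuths: dict[str, float]) -> tuple[str, float]:
    """Coarse sound-source direction: return the name and mounting
    azimuth (degrees, device frame) of the microphone that reported
    the highest decibel level for the first audio."""
    loudest = max(mic_levels, key=mic_levels.get)
    return loudest, mic_azimuths[loudest]

# Example: two microphones on opposite sides of the headset.
levels = {"MIC1": 62.0, "MIC2": 48.0}          # measured dB per mic
positions = {"MIC1": 270.0, "MIC2": 90.0}       # left / right mounting
```

Here MIC1 reports the higher level, so the source is judged to lie on the MIC1 side, matching the fig. 10 example; adding more microphones narrows the angular resolution, as the embodiment notes.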
In summary, on the basis of VR technology and in combination with VR hardware, the application provides a new mechanism for interacting with the external environment. Changes in the user's external environment are judged in real time through the cameras and MICs, different prompt levels are generated according to the environmental conditions, and a corresponding prompt picture is generated in the user's virtual picture. The user can thus perceive the external environment in time without the experience being forcibly interrupted, and can decide independently whether to exit VR, improving the user experience. The application uses eye-tracking technology to judge in real time the user's gaze focus range in the virtual world, while using the MICs and cameras to judge environmental changes and their direction of origin, and generates an environment-change image at the corresponding position outside the gaze focus range, further ensuring the continuity of the user experience. In addition, by guiding the movement of the user's gaze and displaying the external picture only after the gaze reaches a certain area, occlusion of the VR picture by the external picture is further reduced, lessening the interference with the VR experience and thereby enhancing the user's immersion.
The display method provided by the embodiments of the present application may be executed by a display apparatus. In the embodiments of the present application, the display apparatus provided herein is described by taking a display apparatus executing the display method as an example.
Fig. 11 shows a block diagram of a display device according to an embodiment of the present application, the device including:
the first obtaining module 10 is configured to obtain first orientation information corresponding to a first audio source when a first audio in an environment is detected;
the acquisition module 20 is configured to control, according to the first orientation information, the first camera corresponding to the first orientation information to acquire an image;
the first display module 30 is configured to display, when the electronic device displays the first interface, a first image in a first area of the first interface according to the image acquired by the first camera;
wherein the first audio is non-noise audio.
Thus, in the embodiment of the application, while a user is using the electronic device, the device continuously detects audio in the environment. When first audio that is non-noise audio is detected in the environment, the first orientation information corresponding to the source of the first audio is acquired, and based on the first orientation information the corresponding first camera of the electronic device is controlled to acquire an image, so that the acquired image includes the scene in which the first audio is produced; a first image is then displayed in the first area of the current first interface according to the acquired image. In this way, when the user is in an immersive experience with the device, an external emergency is displayed in the device in time to remind the user, avoiding unnecessary loss caused by the user failing to perceive the external environment in time.
Optionally, the acquisition module 20 includes:
the recognition unit is configured to identify, according to the characteristic information of the first audio and after comparison against a sound library, the first object that emits the first audio;
and the control unit is configured to control, according to the first orientation information, the first camera to acquire an image of the first object performing the first action.
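The sound-library comparison can be sketched as a nearest-neighbour match on audio features. The feature vectors, library entries, and Euclidean metric below are illustrative assumptions; a real system would use spectral features (e.g. MFCCs) and a trained classifier.

```python
# Illustrative sketch (assumed details): a pre-built library maps labelled
# sounds to feature vectors; the captured audio's features are matched to
# the closest entry to identify the object emitting the first audio.
import math

SOUND_LIBRARY = {
    "doorbell": [0.9, 0.1, 0.3],
    "dog_bark": [0.2, 0.8, 0.5],
    "glass_breaking": [0.7, 0.6, 0.9],
}

def identify_source(features):
    """Return the library label whose feature vector is closest (Euclidean)."""
    def distance(label):
        return math.dist(features, SOUND_LIBRARY[label])
    return min(SOUND_LIBRARY, key=distance)
```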
Optionally, the first display module 30 includes:
the first acquisition unit is used for acquiring a first image corresponding to the first object from the acquired images.
Optionally, the acquisition module 20 includes:
the first determining unit is used for acquiring gyroscope data and determining first angle information of rotation of the electronic equipment according to the gyroscope data;
the second determining unit is used for determining a first camera of the electronic equipment, which corresponds to the first position information, according to the first angle information and the first position information;
wherein the first orientation information is used to indicate a fixed orientation in the environment.
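Because the first orientation is fixed in the environment while the cameras rotate with the device, the gyroscope-derived rotation angle can convert the fixed world bearing into a device-relative bearing before selecting a camera. The four-camera layout, 90-degree sectors, and counterclockwise bearing convention below are illustrative assumptions, not details given by the application.

```python
# Illustrative sketch (assumed conventions): bearings increase
# counterclockwise, each camera covers a 90-degree sector, and subtracting
# the device yaw maps a fixed environment bearing onto the device frame.
CAMERAS = ["front", "left", "rear", "right"]

def select_camera(world_bearing_deg, device_yaw_deg):
    """Map a fixed environment bearing to the camera currently facing it."""
    relative = (world_bearing_deg - device_yaw_deg) % 360
    sector = int(((relative + 45) % 360) // 90)
    return CAMERAS[sector]
```

For example, a source fixed at 90° is covered by the left camera while the device faces forward, but by the front camera after the device has rotated 90° toward it.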
Optionally, the apparatus further comprises:
the second acquisition module is configured to acquire a second area of the first interface corresponding to the line-of-sight range of the user;
the determining module is configured to determine, according to the first orientation information, a first area in the first interface other than the second area;
wherein, in a case where the first orientation information indicates a first relative direction of the user, the first area is located in the first relative direction of the second area.
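The placement logic above — show the alert image in the direction of the sound, but outside the area the user is focused on — can be sketched as follows. The named screen regions and the fallback rule are illustrative assumptions.

```python
# Illustrative sketch (assumed region layout): the interface is divided into
# named candidate regions; eye tracking supplies the focused region (the
# second area), and the alert is placed in the region matching the sound
# direction unless the user is already looking there.
REGIONS = {"left", "right", "top", "bottom", "center"}

def pick_alert_region(focus_region, sound_direction):
    """Return the region for the first image, avoiding the focus region."""
    if sound_direction in REGIONS and sound_direction != focus_region:
        return sound_direction
    # Fallback: any deterministic non-focused region.
    return next(r for r in sorted(REGIONS) if r != focus_region)
```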
Optionally, the apparatus further comprises:
the output module is configured to output first prompt information in a first manner according to the first orientation information;
wherein the first prompt information is used to prompt the user to adjust the line-of-sight range to correspond to the first area, the first prompt information corresponds to a first level, and the first level is associated with the content of the acquired image.
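The mapping from acquired image content to a prompt level, and from the level to how intrusively the prompt is presented, could be sketched as below. The event categories, level values, and presentation styles are all hypothetical; the application does not specify them.

```python
# Illustrative sketch (hypothetical events and styles): the recognized
# external event determines the prompt level, and the level determines the
# presentation used in the virtual picture.
PROMPT_LEVELS = {
    "person_approaching": 3,   # strongest: flashing border
    "pet_activity": 2,         # medium: static thumbnail
    "object_fell": 1,          # mild: small icon
}

def prompt_for(event):
    """Return (level, presentation) for a recognized external event."""
    level = PROMPT_LEVELS.get(event, 1)  # unknown events get the mildest prompt
    styles = {1: "icon", 2: "thumbnail", 3: "flashing_border"}
    return level, styles[level]
```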
Optionally, the apparatus further comprises:
the receiving module is used for receiving a first input, and the first input is used for triggering the interrupt display of the first interface;
and the second display module is used for responding to the first input and displaying an image preview interface corresponding to the external environment on the electronic equipment.
Optionally, the first acquisition module 10 includes:
the second acquisition unit is configured to, in a case where at least two microphones of the electronic device each collect the first audio, respectively acquire the decibel values of the first audio collected by the at least two microphones;
the third acquisition unit is configured to acquire the first azimuth information according to the position, on the electronic device, of the microphone corresponding to the maximum decibel value;
wherein the at least two microphones are distributed at different positions of the electronic device, and the different positions indicate different orientations.
The display device in the embodiment of the application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (Mobile Internet Device, MID), an augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, a robot, a wearable device, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, or a personal digital assistant (personal digital assistant, PDA), and may also be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a television (television, TV), a teller machine, a self-service machine, or the like; the embodiments of the present application are not specifically limited in this respect.
The display device in the embodiment of the application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiment of the application.
The display device provided by the embodiment of the application can realize each process realized by the embodiment of the method, and in order to avoid repetition, the description is omitted here.
Optionally, as shown in fig. 12, the embodiment of the present application further provides an electronic device 100, including a processor 101, a memory 102, and a program or an instruction stored in the memory 102 and capable of running on the processor 101, where the program or the instruction implements each step of any one of the above display method embodiments when executed by the processor 101, and the steps can achieve the same technical effects, so that repetition is avoided, and no further description is given here.
The electronic device of the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 13 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, processor 1010, camera 1011, and the like.
Those skilled in the art will appreciate that the electronic device 1000 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1010 through a power management system, so that functions such as charge management, discharge management, and power consumption management are performed through the power management system. The electronic device structure shown in fig. 13 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components, which will not be described in detail herein.
The processor 1010 is configured to obtain first azimuth information corresponding to a first audio source when the first audio in the environment is detected; according to the first azimuth information, controlling a first camera corresponding to the first azimuth information to acquire an image; a display unit 1006, configured to display, when the electronic device displays a first interface, a first image in a first area of the first interface according to an image acquired by the first camera; wherein the first audio is non-noise audio.
Thus, in the embodiment of the application, while a user is using the electronic device, the device continuously detects audio in the environment. When first audio that is non-noise audio is detected in the environment, the first orientation information corresponding to the source of the first audio is acquired, and based on the first orientation information the corresponding first camera of the electronic device is controlled to acquire an image, so that the acquired image includes the scene in which the first audio is produced; a first image is then displayed in the first area of the current first interface according to the acquired image. In this way, when the user is in an immersive experience with the device, an external emergency is displayed in the device in time to remind the user, avoiding unnecessary loss caused by the user failing to perceive the external environment in time.
Optionally, the processor 1010 is further configured to identify, according to the characteristic information of the first audio and after comparison against a sound library, the first object that emits the first audio; and to control, according to the first orientation information, the first camera to acquire an image of the first object performing the first action.
Optionally, the processor 1010 is further configured to acquire a first image corresponding to the first object in the acquired image.
Optionally, the processor 1010 is further configured to acquire gyroscope data, and determine first angle information of rotation of the electronic device according to the gyroscope data; determining a first camera of the electronic equipment corresponding to the first orientation information according to the first angle information and the first orientation information; wherein the first orientation information is used to indicate a fixed orientation in the environment.
Optionally, the processor 1010 is further configured to acquire a second area of the first interface corresponding to the line-of-sight range of the user; and to determine, according to the first orientation information, a first area in the first interface other than the second area; wherein, in a case where the first orientation information indicates a first relative direction of the user, the first area is located in the first relative direction of the second area.
Optionally, the processor 1010 is further configured to output first prompt information in a first manner according to the first orientation information; the first prompt information is used to prompt the user to adjust the line-of-sight range to correspond to the first area, the first prompt information corresponds to a first level, and the first level is associated with the content of the acquired image.
Optionally, a user input unit 1007 is configured to receive a first input, where the first input is used to trigger to interrupt displaying the first interface; the display unit 1006 is further configured to display, in response to the first input, an image preview interface corresponding to an external environment on the electronic device.
Optionally, the processor 1010 is further configured to, in a case where at least two microphones of the electronic device each collect the first audio, respectively acquire the decibel values of the first audio collected by the at least two microphones; and to acquire the first azimuth information according to the position, on the electronic device, of the microphone corresponding to the maximum decibel value; wherein the at least two microphones are distributed at different positions of the electronic device, and the different positions indicate different orientations.
In summary, the application provides, on the basis of VR technology and in combination with VR hardware, a new mechanism for interacting with the external environment. Changes in the external environment where the user is located are judged in real time through a camera and a microphone (MIC), different prompt levels are generated according to the environmental conditions, and a corresponding prompt picture is generated in the user's virtual picture, so that the user can perceive the external environment in time without the experience being forcibly interrupted, and can decide independently whether to exit VR, thereby improving the user experience. The application uses eye-tracking technology to judge in real time the user's visual focus range in the virtual world, uses the MIC and the camera to judge the environmental change and the direction it comes from, and generates an environment-change image in the corresponding direction outside that focus range, further ensuring the continuity of the user experience. In addition, by guiding the movement of the user's field of view, the external picture is displayed only after the user's field of view reaches a certain area, which further reduces the occlusion of the VR picture by the external picture and the interference with the user's VR experience, thereby improving the user's immersion.
It should be appreciated that, in an embodiment of the present application, the input unit 1004 may include a graphics processing unit (Graphics Processing Unit, GPU) 10041 and a microphone 10042; the graphics processor 10041 processes image data of still pictures or videos obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 1010 may integrate an application processor, which primarily handles the operating system, user interfaces, applications, and the like, and a modem processor, which primarily handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 1010.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, and application programs or instructions (such as a sound playing function, an image playing function, etc.) required for at least one function. Further, the memory 1009 may include volatile memory or nonvolatile memory, or both. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (Static RAM, SRAM), a dynamic RAM (Dynamic RAM, DRAM), a synchronous DRAM (Synchronous DRAM, SDRAM), a double data rate SDRAM (Double Data Rate SDRAM, DDR SDRAM), an enhanced SDRAM (Enhanced SDRAM, ESDRAM), a synchlink DRAM (Synchlink DRAM, SLDRAM), or a direct Rambus RAM (Direct Rambus RAM, DRRAM). The memory 1009 in embodiments of the application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The embodiment of the application also provides a readable storage medium, and the readable storage medium stores a program or an instruction, which when executed by a processor, implements each process of the above-mentioned display method embodiment, and can achieve the same technical effects, so that repetition is avoided, and no further description is provided here.
The processor is the processor in the electronic device of the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, the chip comprises a processor and a communication interface, the communication interface is coupled with the processor, the processor is used for running programs or instructions, the processes of the embodiment of the display method can be realized, the same technical effects can be achieved, and the repetition is avoided, and the description is omitted here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, a system-on-chip, or the like.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the display method described above, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in the reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in part in the form of a computer software product stored on a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (12)

1. A display method, the method comprising:
under the condition that first audio in the environment is detected, first azimuth information corresponding to the first audio source is obtained;
according to the first azimuth information, controlling a first camera corresponding to the first azimuth information to acquire an image;
displaying a first image in a first area of a first interface according to the image acquired by the first camera under the condition that the electronic equipment displays the first interface;
wherein the first audio is non-noise audio.
2. The method of claim 1, wherein controlling the first camera corresponding to the first orientation information to acquire the image according to the first orientation information comprises:
identifying, according to the characteristic information of the first audio and after comparison against a sound library, a first object that emits the first audio;
and controlling, according to the first orientation information, the first camera to acquire an image of the first object performing the first action.
3. The method of claim 2, wherein displaying a first image in a first area of the first interface based on the image captured by the first camera comprises:
And acquiring a first image corresponding to the first object from the acquired images.
4. The method of claim 1, wherein controlling the first camera corresponding to the first orientation information to acquire the image according to the first orientation information comprises:
acquiring gyroscope data, and determining first angle information of rotation of the electronic equipment according to the gyroscope data;
determining a first camera of the electronic equipment corresponding to the first orientation information according to the first angle information and the first orientation information;
wherein the first orientation information is used to indicate a fixed orientation in the environment.
5. The method of claim 1, wherein the method further comprises, prior to displaying the first image in the first area of the first interface:
acquiring a second area of the first interface corresponding to a line-of-sight range of a user;
determining a first area in the first interface except for the second area according to the first orientation information;
wherein, in a case where the first orientation information indicates a first relative direction of the user, the first area is located in the first relative direction of the second area.
6. The method of claim 1, wherein after controlling the first camera corresponding to the first orientation information to acquire the image according to the first orientation information, the method further comprises:
outputting first prompt information in a first manner according to the first orientation information;
wherein the first prompt information is used for prompting a user to adjust a line-of-sight range to correspond to the first area, the first prompt information corresponds to a first level, and the first level is associated with content of the acquired image.
7. The method of claim 1, wherein after the first image is displayed in the first area of the first interface, the method further comprises:
receiving a first input, wherein the first input is used for triggering the interrupt display of the first interface;
and responding to the first input, and displaying an image preview interface corresponding to the external environment on the electronic equipment.
8. The method of claim 1, wherein the obtaining the first orientation information corresponding to the first audio source comprises:
in a case where at least two microphones of the electronic device each collect the first audio, respectively acquiring decibel values of the first audio collected by the at least two microphones;
acquiring the first azimuth information according to a position, on the electronic device, of the microphone corresponding to a maximum decibel value;
wherein the at least two microphones are distributed at different positions of the electronic device, and the different positions indicate different orientations.
9. A display device, the device comprising:
the first acquisition module is used for acquiring first orientation information corresponding to a first audio source under the condition that the first audio in the environment is detected;
the acquisition module is used for controlling a first camera corresponding to the first azimuth information to acquire an image according to the first azimuth information;
the first display module is used for displaying a first image in a first area of the first interface according to the image acquired by the first camera under the condition that the electronic equipment displays the first interface;
wherein the first audio is non-noise audio.
10. The apparatus of claim 9, wherein the acquisition module comprises:
the identification unit is configured to identify, according to the characteristic information of the first audio and after comparison against a sound library, the first object that emits the first audio;
and the control unit is configured to control, according to the first orientation information, the first camera to acquire an image of the first object performing the first action.
11. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the display method of any one of claims 1 to 8.
12. A readable storage medium, wherein a program or instructions is stored on the readable storage medium, which when executed by a processor, implement the steps of the display method according to any one of claims 1 to 8.
CN202311143101.XA 2023-09-05 2023-09-05 Display method, display device, electronic equipment and readable storage medium Pending CN117193531A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311143101.XA CN117193531A (en) 2023-09-05 2023-09-05 Display method, display device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN117193531A true CN117193531A (en) 2023-12-08



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination