WO2023216993A1 - Recording data processing method, device, and electronic equipment - Google Patents

Recording data processing method, device, and electronic equipment

Info

Publication number
WO2023216993A1
WO2023216993A1 · PCT/CN2023/092291
Authority
WO
WIPO (PCT)
Prior art keywords
recording
recording data
data
pieces
screen
Application number
PCT/CN2023/092291
Other languages
English (en)
French (fr)
Inventor
高志稳
Original Assignee
维沃移动通信有限公司
Application filed by 维沃移动通信有限公司
Publication of WO2023216993A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/27 Server based end-user applications
    • H04N 21/274 Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N 21/2743 Video hosting of uploaded data from client
    • H04N 21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N 21/25866 Management of end-user data
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N 21/4334 Recording operations

Definitions

  • This application belongs to the field of multimedia technology, and specifically relates to a recording data processing method, device and electronic equipment.
  • The purpose of the embodiments of the present application is to provide a recording data processing method, device, and electronic equipment, which can solve the problem of poor playback effect when multiple pieces of recording data are played synchronously.
  • In a first aspect, embodiments of the present application provide a recording data processing method. The method is applied to a first electronic device and includes:
  • acquiring M pieces of first recording data, and acquiring a first screen splitting mode of the M pieces of first recording data, where the first screen splitting mode represents the manner in which the displayed screen is divided when the pictures of the M pieces of first recording data are displayed simultaneously,
  • each piece of first recording data corresponds to a picture display position, the picture display positions of different pieces of first recording data are at least partially different, and M is an integer greater than 1;
  • determining, based on the first screen splitting mode, M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data, where the spatial audio position information includes a spatial audio phase,
  • and the spatial audio phase matches the picture display position corresponding to the first recording data, so that the sound source position of the picture of each piece of first recording data can be identified when the M pieces of first recording data are played.
  • In a second aspect, embodiments of the present application provide a recording data processing device, which is applied to a first electronic device.
  • The device includes:
  • a first acquisition module, used to acquire M pieces of first recording data, where M is an integer greater than 1;
  • a second acquisition module, used to acquire the first screen splitting mode of the M pieces of first recording data,
  • where the first screen splitting mode represents the manner in which the displayed screen is divided when the pictures of the M pieces of first recording data are displayed simultaneously, each piece of first recording data corresponds to a picture display position, and the picture display positions of different pieces of first recording data are at least partially different;
  • a first determination module, configured to determine, based on the first screen splitting mode, M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data,
  • where the spatial audio position information includes a spatial audio phase,
  • and the spatial audio phase matches the picture display position corresponding to the first recording data, so that the sound source position of the picture of each piece of first recording data can be identified when the M pieces of first recording data are played.
  • In a third aspect, embodiments of the present application provide an electronic device.
  • The terminal includes a processor, a memory, and a program or instruction stored in the memory and executable on the processor.
  • When the program or instruction is executed by the processor, the steps of the recording data processing method described in the first aspect are implemented.
  • In a fourth aspect, embodiments of the present application provide a readable storage medium.
  • A program or instruction is stored on the readable storage medium.
  • When the program or instruction is executed by a processor, the steps of the recording data processing method described in the first aspect are implemented.
  • In a fifth aspect, embodiments of the present application provide a chip.
  • The chip includes a processor and a communication interface.
  • The communication interface is coupled to the processor.
  • The processor is used to run programs or instructions to implement the steps of the recording data processing method described in the first aspect.
  • In the embodiments of the present application, each of the M pieces of spatial audio position information includes a spatial audio phase, and the spatial audio phase matches the picture display position corresponding to the first recording data.
  • In this way, when the M pieces of first recording data are played synchronously according to the spatial audio position information, the picture display of each piece of recording data matches spatial audio of the corresponding phase, so that the user can identify the sound source position of the picture of each piece of recording data.
  • This makes the pictures of the recording data more spatial and three-dimensional, which can improve the playback effect of the recording data.
  • Figure 1 is a flow chart of a recording data processing method provided by an embodiment of the present application.
  • Figures 2 to 5 are schematic diagrams of screen display in split-screen modes;
  • Figure 6 is a schematic diagram of an example of spatial audio phase allocation
  • Figure 7 is a schematic diagram of spatial audio phase allocation in another example;
  • Figure 8 is a schematic structural diagram of multiple second electronic devices authorizing the first electronic device
  • Figure 9 is a schematic structural diagram of multiple electronic devices for collaborative recording
  • Figures 10 and 11 are schematic diagrams of the process of adjusting the third screen splitting mode;
  • Figure 12 is a schematic diagram of the playback of M first recorded data
  • Figure 13 is a schematic diagram of the display when the video screen is paused
  • Figure 14 is a schematic diagram of the audio playback of the M first video pictures when a certain video picture is clicked;
  • Figure 15 is a schematic diagram of the audio playback of the M first video pictures when a certain video picture is double-clicked;
  • Figure 16 is a structural diagram of a recording data processing device provided by an embodiment of the present application.
  • Figure 17 is a structural diagram of an electronic device provided by an embodiment of the present application.
  • Figure 18 is a schematic diagram of the hardware structure of an electronic device that implements an embodiment of the present application.
  • The terms "first", "second", etc. in the description and claims of this application are used to distinguish similar objects and are not used to describe a specific order or sequence. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the present application can be practiced in orders other than those illustrated or described herein. Objects distinguished by "first", "second", etc. are usually of one type, and the number of objects is not limited; for example, the first object can be one or multiple.
  • "And/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates that the related objects are in an "or" relationship.
  • Figure 1 is a flow chart of a recording data processing method provided by an embodiment of the present application. As shown in Figure 1, it includes the following steps:
  • Step 101: Obtain M pieces of first recording data, and obtain a first screen splitting mode of the M pieces of first recording data.
  • The first screen splitting mode represents the manner in which the displayed screen is divided when the pictures of the M pieces of first recording data are displayed simultaneously.
  • each first recording data corresponds to a screen display position, and the screen display positions of different first recording data are at least partially different.
  • M is an integer greater than 1.
  • the recording data processing method provided in this embodiment can be applied to the first electronic device.
  • the first electronic device can be any electronic device.
  • the first electronic device can be an authorized device for collaborative recording. That is, other electronic devices authorize the first electronic device so that the first electronic device can control at least two electronic devices to perform collaborative recording and obtain at least two pieces of recording data.
  • collaborative recording can refer to video recording or audio recording, etc., which is not specifically limited here.
  • In this step, the recording data may be video recording data, that is, video, or audio recording data, that is, audio.
  • In the following embodiments, the recording data will be described using video recording data as an example.
  • the M first recording data may be at least two recording data obtained through collaborative recording, or may be any recording data, which is not specifically limited here. Among them, the M first recorded data can be associated with each other so that they can be played cooperatively during playback.
  • the M first recording data may be obtained in a variety of ways.
  • In an optional implementation, the first electronic device may activate a collaborative recording mode and, in that mode, cooperate with other electronic devices to record and obtain the M pieces of first recording data.
  • In another optional implementation, M pieces of first recording data sent by other electronic devices and obtained through collaborative recording by those devices may be received.
  • In yet another optional implementation, multiple pieces of recording data selected by the user can be associated with each other to obtain the M pieces of first recording data.
  • For example, in a video editing mode, the user can select multiple pieces of recording data in order to edit them; accordingly, the first electronic device can use the multiple pieces of recording data selected by the user as the M pieces of first recording data.
  • The screen splitting mode may refer to the manner in which the pictures of at least two recordings, such as videos, divide the screen when those recordings are played in split-screen form; that is, the pictures of different videos are displayed in different display areas of the screen, so that visually the screen picture is divided into multiple video pictures.
  • As shown in Figures 2 to 5, the entire square area serves as the screen picture, which is divided into multiple sub-display areas, each used to display the video picture of one piece of recording data.
  • the screen can be divided according to the number of recorded data. When the screen is divided, it can be divided evenly, as shown in Figure 2 and Figure 4, or it can be divided unevenly, as shown in Figure 3 and Figure 5.
  • Figure 5 shows the video data recorded by the authorized device during collaborative recording displayed on the screen in the form of picture-in-picture.
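  • For illustration, the following minimal sketch (not taken from the patent; the function and class names are hypothetical) shows one way to compute even grid sub-display areas for M recordings, in the spirit of Figures 2 and 4:

```python
# Minimal sketch (an assumption, not the patent's algorithm): dividing a
# screen of width w and height h into M sub-display areas on an even grid.
import math
from dataclasses import dataclass

@dataclass
class Rect:
    x: float   # left edge, top-left origin
    y: float   # top edge
    w: float   # width
    h: float   # height

def even_split(m: int, screen_w: float, screen_h: float) -> list[Rect]:
    """Divide the screen into m sub-display areas on a near-square grid."""
    cols = math.ceil(math.sqrt(m))
    rows = math.ceil(m / cols)
    cell_w, cell_h = screen_w / cols, screen_h / rows
    return [Rect(c * cell_w, r * cell_h, cell_w, cell_h)
            for i in range(m) for r, c in [divmod(i, cols)]]

# Example: four recordings give a 2x2 grid, like Figure 4.
print(even_split(4, 1920, 1080))
```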
  • the method of obtaining the first screen splitting mode may include multiple ways.
  • the user's setting information may be received, and the setting information includes the first screen splitting modes of M pieces of first recording data.
  • In another optional implementation, the first screen splitting mode can be determined based on the number of electronic devices participating in collaborative recording, device attribute information, the position of each electronic device relative to the recording object, and other information. In yet another optional implementation, association information sent by other electronic devices may be received, where the association information includes the first screen splitting mode associated with the M pieces of first recording data.
  • Step 102: Based on the first screen splitting mode, determine M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data.
  • The spatial audio position information includes a spatial audio phase.
  • The spatial audio phase matches the picture display position corresponding to the first recording data, so that the sound source position of the picture of each piece of first recording data can be identified when the M pieces of first recording data are played.
  • the spatial audio position information may include spatial audio phase.
  • the spatial audio phase represents the spatial relative orientation of the sound source position relative to the user, such as the sound source position in front, left, right, or behind the user.
  • Spatial audio location information can also include spatial audio distance, such as the sound source location is 10 meters to the left of the user, 5 meters to the right, etc. In this way, through the spatial audio position information, the sound emitted by the audio device at a fixed position in the space can be simulated.
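  • As an illustration of this idea, the sketch below (an assumption, not the patent's data format) models spatial audio position information as a phase plus an optional distance and converts it to listener-centered coordinates:

```python
# Minimal sketch (hypothetical structure): spatial audio position information
# as a phase (azimuth relative to the listener) plus an optional distance,
# e.g. a sound source "10 meters to the left" of the user.
import math
from dataclasses import dataclass

@dataclass
class SpatialAudioPosition:
    azimuth_deg: float       # spatial audio phase: 0 = front, 90 = right, -90 = left
    distance_m: float = 1.0  # spatial audio distance from the listener

    def to_xy(self) -> tuple[float, float]:
        """Listener-centered coordinates of the simulated fixed sound source."""
        rad = math.radians(self.azimuth_deg)
        return (self.distance_m * math.sin(rad), self.distance_m * math.cos(rad))

left_source = SpatialAudioPosition(azimuth_deg=-90, distance_m=10)  # 10 m to the left
print(left_source.to_xy())  # approximately (-10.0, 0.0)
```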
  • a 360° space is formed with the user as the center, and a spatial audio phase is assigned to the first recording data.
  • the assigned spatial audio position matches the picture display position of the first recorded data.
  • the first recorded data can be played based on the spatial audio technology and according to the spatial audio position information.
  • Spatial audio technology uses the principle by which the human ear distinguishes the direction of sound in order to position sound sources. That is, a sound source point is set, and when the headset can identify the relative spatial position between the human ear and the sound source point, spatial audio technology can play sounds with different time differences and sound level differences in the two ears, and can even simulate the effect of sound reflecting off the auricle. This allows users to feel that sound is coming from the direction of the sound source, thereby simulating the effect of spatial audio.
  • The principle by which the human ear distinguishes the direction of sound is as follows: the ear can determine which direction a sound comes from because the sound produces a time difference and a sound level difference between the two ears, and based on these the user can determine the direction of the sound source.
  • The time difference arises from the slight difference in the distance from the sound source to each ear.
  • The sound level difference arises, on the one hand, from the head blocking the sound, so that the two ears perceive different volumes on the left and right sides; on the other hand, the ability to judge sound direction on the upper and lower sides is attributed to the auricle: sound from above and below produces different reflection effects in the auricle, and the human ear can capture the difference.
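  • For reference, the standard spherical-head approximation for the time-difference cue (a textbook formula, not taken from the patent) can be written as:

```latex
% Woodworth's spherical-head model for the interaural time difference (ITD):
% head radius a, speed of sound c, source azimuth theta relative to straight ahead.
\mathrm{ITD}(\theta) \approx \frac{a}{c}\left(\theta + \sin\theta\right),
\qquad a \approx 0.0875~\text{m}, \quad c \approx 343~\text{m/s}
% Example: a source at theta = pi/2 (directly to one side) gives an ITD of
% roughly 0.66 ms. The level difference (ILD) grows with frequency, because
% the head shadows short wavelengths more strongly than long ones.
```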
  • the allocated spatial audio position information may be associated with a file corresponding to the first recording data, and one first recording data may correspond to one file.
  • Figure 6 is a schematic diagram of an example of spatial audio phase allocation.
  • The left picture can correspond to audio with a left spatial phase.
  • The right picture can correspond to audio with a right spatial phase.
  • Figure 7 is a schematic diagram of another example of spatial audio phase allocation.
  • the spatial audio phases of the four pictures are evenly distributed according to the equally divided pictures, such as upper left, lower left, upper right, lower right, etc.
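  • Continuing the even_split sketch above, the following hypothetical mapping assigns each recording a spatial audio phase from the horizontal center of its sub-display area, so that pictures on the left get left-phased audio, in the spirit of Figures 6 and 7:

```python
# Minimal sketch (an assumed mapping, not the patent's method): deriving the
# spatial audio phase of each recording from the center of its picture display
# position. Reuses Rect and even_split from the sketch above.
def phase_for_rect(rect: Rect, screen_w: float,
                   max_azimuth_deg: float = 90.0) -> float:
    """Map the horizontal center of a picture to an azimuth in degrees:
    leftmost -> -max_azimuth_deg, center -> 0, rightmost -> +max_azimuth_deg."""
    cx = rect.x + rect.w / 2
    return ((cx / screen_w) * 2 - 1) * max_azimuth_deg

# Example: in the 2x2 grid, the two left pictures map to about -45 degrees and
# the two right pictures to about +45 degrees, matching the upper-left /
# lower-left / upper-right / lower-right allocation described for Figure 7.
for rect in even_split(4, 1920, 1080):
    print(round(phase_for_rect(rect, 1920)))
```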
  • In the embodiments of the present application, the M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data are determined, where each piece of spatial audio position information includes a spatial audio phase, and the spatial audio phase matches the picture display position corresponding to the first recording data.
  • In this way, the picture display of each piece of recording data can match spatial audio of the corresponding phase, so that the user can identify the sound source position of the picture of each piece of recording data, thereby making the pictures of the recording data more spatial and three-dimensional, which can improve the playback effect of the recording data.
  • In an optional implementation, the first electronic device can support a wired or wireless headphone function, and can use an audio module to play back the M pieces of first recording data based on spatial audio technology, according to the spatial audio position information, to achieve a better playback effect.
  • the audio module can be a wired headset, a wireless headset, or a speaker.
  • the audio module includes at least two speakers.
  • In an optional implementation, the M pieces of first recording data are obtained by the first electronic device, acting as the authorized device, controlling at least two electronic devices to perform collaborative recording.
  • Optionally, obtaining the M pieces of first recording data includes: when the first electronic device is communicatively connected to N second electronic devices and has the target permission of the N second electronic devices, sending a first recording control instruction to each second electronic device, where the target permission indicates that the first electronic device can control the N second electronic devices to perform collaborative recording, and the first recording control instruction is used to control the second electronic device to start collaborative recording; and receiving N pieces of recording data sent by the N second electronic devices based on the first recording control instruction, to obtain the M pieces of first recording data.
  • N is a positive integer less than or equal to M.
  • When N is less than M, the M pieces of first recording data include N pieces of first recording data obtained based on the N pieces of recording data and the first recording data obtained by the first electronic device during collaborative recording with the N second electronic devices.
  • When the first electronic device does not participate in collaborative recording, N equals M, and the M pieces of first recording data include the N pieces of first recording data obtained based on the N pieces of recording data.
  • In specific implementation, when the first electronic device participates in collaborative recording for video or audio recording, the first electronic device and each second electronic device support network data transmission, and each second electronic device can support video recording and audio recording functions.
  • Alternatively, the first electronic device does not participate in collaborative recording for video or audio recording, but only serves as a control device to control at least two second electronic devices to perform collaborative recording; in this case as well, the first electronic device and each second electronic device support network data transmission.
  • the first electronic device can be used as an authorized device to establish a communication connection with each second electronic device.
  • The communication connection network can be a wireless network, such as Wireless Fidelity (WiFi) or a fifth-generation (5G) high-speed network.
  • The second electronic device can authorize the first electronic device through a protocol, where the permissions include online reading of audio and video data.
  • If the authorization is successful, the first electronic device holds the target permission of each second electronic device, enabling the first electronic device to control the N second electronic devices to perform collaborative shooting.
  • the user can start the collaborative recording mode.
  • the first electronic device can control N second electronic devices for real-time recording and data storage.
  • the electronic devices participating in collaborative recording only need to open the recording interface and enter the collaborative recording mode.
  • the first electronic device controls the start of recording.
  • a first recording control instruction is sent to each second electronic device.
  • Each second electronic device starts recording after receiving the first recording control instruction, and transmits the recording data to the first electronic device in real time through the network connection; alternatively, it may transmit the recording data to the first electronic device upon receiving a control instruction to end recording sent by the first electronic device.
  • The first electronic device can also communicate with the N second electronic devices simultaneously over long distances through a high-speed wireless network connection, so that the pictures recorded by the collaborating electronic devices can be clearly seen even when the first electronic device is not at the scene.
  • the first electronic device can receive N pieces of recording data sent by N second electronic devices based on the first recording control instructions to obtain M pieces of first recording data.
  • the M pieces of first recording data include data recorded by each electronic device participating in collaborative recording, where the first recording data may be data obtained by the electronic devices participating in collaborative recording from the start of recording to the end of recording.
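  • A minimal sketch of this control flow is given below; the message format and the Connection type are assumptions for illustration, not a protocol defined by the patent:

```python
# Minimal sketch (hypothetical message format): the authorized device sends a
# start-recording control instruction to each collaborative device over an
# established connection and collects the returned recording data, keyed by
# each device's identification information.
import json

def make_first_recording_control_instruction(session_id: str) -> bytes:
    """Serialize a hypothetical start-recording instruction."""
    return json.dumps({"type": "START_RECORDING", "session": session_id}).encode()

def collect_recordings(devices: dict[str, "Connection"], session_id: str) -> dict[str, bytes]:
    """Send the instruction to every second electronic device and gather data.

    `Connection` stands in for any transport object with send()/recv(); it is
    an assumed interface, not something the patent specifies.
    """
    instruction = make_first_recording_control_instruction(session_id)
    recordings: dict[str, bytes] = {}
    for device_id, conn in devices.items():
        conn.send(instruction)               # start collaborative recording
        recordings[device_id] = conn.recv()  # recording data streamed back
    return recordings
```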
  • the M pieces of first recording data obtained through collaborative recording can be correlated with each other.
  • multiple authorized electronic devices are controlled through a wireless network connection to perform collaborative recording and generate different recording data, thereby achieving collaborative recording and simultaneous shooting in multiple places.
  • wireless network connection allows electronic devices to reduce time differences and space barriers, and can be operated and controlled in real time during collaborative shooting, improving the operational correlation between multiple devices.
  • the obtained recording data can be previewed and displayed in real time to improve the effect of collaborative shooting and improve user experience.
  • Optionally, the method also includes: determining a second screen splitting mode based on target information;
  • previewing and displaying, according to the second screen splitting mode, the recording data obtained during the collaborative recording process;
  • the target information includes at least one of the following:
  • a pre-stored first mapping relationship which represents the mapping relationship between the number of recorded data and the screen split mode during the collaborative recording process
  • device attribute information corresponding to each piece of recording data during the collaborative recording process, where the device attribute information represents the role of the electronic device participating in the collaborative recording in the collaborative recording process;
  • the recording position corresponding to each piece of recording data, where the recording position represents the position of the electronic device participating in the collaborative recording relative to the recording object.
  • In an optional implementation, the second screen splitting mode for preview display can be determined according to the number of electronic devices participating in collaborative recording, that is, the number of pieces of recording data, and according to the pre-stored first mapping relationship. As shown in Figures 2 to 5, different numbers of electronic devices participating in collaborative recording lead to different screen splitting modes.
  • In another optional implementation, the device attribute information corresponding to each piece of recording data during the collaborative recording process can also be taken into account.
  • The device attribute information includes authorized device, collaborative device, etc. If recording data A is the data recorded by the first electronic device, that is, the authorized device, the picture of recording data A can be displayed on the leftmost side of the screen: as shown in Figures 2 to 4, the leftmost sub-display area of the screen can display recording data A. Alternatively, the data recorded by the authorized device can be displayed in picture-in-picture form, as shown in Figure 5.
  • In yet another optional implementation, the second screen splitting mode can be determined according to the recording position of each electronic device relative to the recording object, which can be scenery, people, animals, objects, etc.
  • For example, suppose that when shooting, the first electronic device is at the upper left of the recording object, one second electronic device (collaborative device A) is at the upper right of the recording object, and another second electronic device (collaborative device B) is behind the recording object.
  • Then the upper-left picture can be the recording data picture captured by the first electronic device, the right picture can be the recording data picture captured by collaborative device A, and the rear picture can be the recording data picture captured by collaborative device B.
  • the screen of the recording data obtained during the collaborative recording process can be previewed and displayed according to the second screen split-screen mode, as shown in Figures 2 to 5.
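  • The sketch below illustrates one hypothetical rule for deriving a layout from the target information, placing the authorized device's picture leftmost as in Figures 2 to 4; the rule itself is an assumption for illustration, not the patent's specification:

```python
# Minimal sketch (hypothetical rule): determining the second screen splitting
# mode from the target information, here the number of recordings and the
# device attribute of each recording.
def second_split_mode(num_recordings: int, attributes: dict[str, str]) -> list[str]:
    """Return device ids ordered by display position, left to right; the
    authorized device, if present, is placed first (leftmost sub-area)."""
    ordered = sorted(attributes, key=lambda d: attributes[d] != "authorized")
    return ordered[:num_recordings]

layout = second_split_mode(3, {"devA": "collaborative",
                               "devB": "authorized",
                               "devC": "collaborative"})
print(layout)  # ['devB', 'devA', 'devC'] -> authorized device leftmost
```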
  • Optionally, the method also includes: upon receiving a first input from the user, separately storing, based on the identified identification information of the N second electronic devices, the recording data sent by each second electronic device based on the first recording control instruction, to obtain the N pieces of first recording data;
  • where the first input is used to control the N second electronic devices to end collaborative recording.
  • the first input may be voice input, gesture input, touch input, etc., which is not specifically limited here.
  • In specific implementation, the first electronic device can identify each second electronic device, that is, each collaborative device, for example by receiving the identification information sent by the second electronic device. Upon receiving the user's operation to end recording (such as a click on a recording control), the first electronic device can, based on the received recording data sent by the second electronic devices, automatically generate the video files of the collaborative devices (comprising the first recording data) and store them according to the identification information of the second electronic devices.
  • For example, the video files of the second electronic devices are respectively named "collaborative device A recording", "collaborative device B recording", "collaborative device C recording", and so on.
  • At the same time, a recording file can be automatically generated based on the data recorded by the first electronic device and named "authorized device recording", for later data retrieval.
  • In addition, the M generated video files can be associated with each other, so that the M pieces of first video data are interrelated and the recording data obtained through collaborative shooting are interconnected, simplifying post-shooting editing operations.
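  • The following sketch illustrates the naming scheme from the example above; the function and device identifiers are hypothetical:

```python
# Minimal sketch (hypothetical naming scheme following the example above):
# naming the video files generated when collaborative recording ends.
def name_recordings(device_ids: list[str], authorized_id: str) -> dict[str, str]:
    """Map each device id to a file name keyed by its role."""
    names = {}
    for i, dev in enumerate(d for d in device_ids if d != authorized_id):
        names[dev] = f"collaborative device {chr(ord('A') + i)} recording"
    names[authorized_id] = "authorized device recording"
    return names

files = name_recordings(["dev1", "dev2", "dev3"], authorized_id="dev1")
print(files)  # dev2/dev3 -> collaborative device A/B, dev1 -> authorized device
```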
  • Optionally, obtaining the first screen splitting mode of the M pieces of first recording data includes any of the following:
  • when the M pieces of first recording data are obtained through collaborative recording, determining a target screen splitting mode as the first screen splitting mode, where the target screen splitting mode is the screen splitting mode used to preview and display the recording data obtained during the collaborative recording process, and the M pieces of first recording data include the recording data obtained during the collaborative recording process;
  • displaying the pictures of the M pieces of first recording data in split-screen form based on a third screen splitting mode, and, upon receiving a second input from the user to adjust the picture display positions corresponding to the M pieces of first recording data, determining the first screen splitting mode based on the second input and the third screen splitting mode.
  • In specific implementation, the first screen splitting mode may be the same as the second screen splitting mode; that is, the first screen splitting mode is the screen splitting mode used for preview display during collaborative recording.
  • the screen split mode displayed in the preview can be associated with the generated video file, and accordingly the target screen split mode associated with the video file can be obtained as the first screen split mode.
  • In another optional implementation, the first screen splitting mode may be determined based on the number of pieces of first recording data.
  • The method of determining the first screen splitting mode and the method of determining the second screen splitting mode may be the same or similar, and will not be described again here.
  • In yet another optional implementation, the user can subsequently adjust the third screen splitting mode according to his or her own preferences.
  • The third screen splitting mode may be the screen splitting mode used for preview display.
  • The user can perform a second input on the third screen splitting mode (such as dragging a video picture), and the first electronic device can determine the adjusted first screen splitting mode based on the second input and the third screen splitting mode, and save the video.
  • As shown in Figures 10 and 11, the left picture shows the third screen splitting mode, and the right picture shows the adjusted first screen splitting mode.
  • the second input may be voice input, gesture input, touch input, etc., which is not specifically limited here.
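  • One way to model this adjustment is sketched below, assuming the second input is a drag that swaps two picture display positions; this behavior is an assumption for illustration:

```python
# Minimal sketch (an assumed drag-to-swap behavior): applying a second input
# that drags one video picture onto another, turning the third screen
# splitting mode into the first screen splitting mode, as in Figures 10-11.
def apply_second_input(split_mode: list[str], dragged: str, target: str) -> list[str]:
    """Swap the picture display positions of two recordings."""
    adjusted = split_mode.copy()
    i, j = adjusted.index(dragged), adjusted.index(target)
    adjusted[i], adjusted[j] = adjusted[j], adjusted[i]
    return adjusted

third_mode = ["devB", "devA", "devC"]  # preview layout, left to right
first_mode = apply_second_input(third_mode, "devA", "devC")
print(first_mode)  # ['devB', 'devC', 'devA'] -> saved with the video
```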
  • the method further includes:
  • Optionally, after the M pieces of spatial audio position information are determined, the method further includes: playing M first video pictures in split-screen form based on the first screen splitting mode, where the M first video pictures correspond one-to-one to the M pieces of first recording data, and each first video picture is the video picture corresponding to the video data in the first recording data, or the sound effect picture corresponding to the audio data in the first recording data; and synchronously playing the M pieces of audio data in the M pieces of first recording data based on the M pieces of spatial audio position information,
  • where the sound source position represented by the audio data matches the picture display position corresponding to the first recording data.
  • In specific implementation, the M first video pictures can be played in split-screen form according to the first screen splitting mode, where, when the first recording data is a video, the first video picture may be a video picture, and when the first recording data is audio, the first video picture may be a sound effect picture.
  • The first electronic device can support a wired or wireless headphone function, and can use an audio module to play back the M pieces of first recording data based on spatial audio technology, according to the spatial audio position information, to achieve a better audio playback effect.
  • the audio module can be a wired headset, a wireless headset, or a speaker.
  • the audio module includes at least two speakers.
  • spatial audio technology can be used to assign sound attributes to different video images, allowing users to better identify sound sources and making the video more three-dimensional and spatial.
  • Optionally, the method further includes:
  • when a third input is received, pausing the playback of the M pieces of first recording data and highlighting the border of each first video picture.
  • the third input may be voice input, gesture input, touch input, etc., which is not specifically limited here.
  • the third input can be used to pause the playback of the M first recorded data.
  • Upon receiving the third input, the first electronic device can synchronously pause the playback of the M pieces of first recording data and highlight the border of each first video picture, in a manner including but not limited to a color change, bolding, brightening, flashing, etc.
  • For example, a border-sized flashing highlight can appear on each video picture to prompt the user to click for subsequent operations.
  • Optionally, the method further includes:
  • when a fourth input to a first target video picture among the M first video pictures is received, playing the audio data in the target recording data corresponding to the first target video picture at a first volume,
  • and playing, at a second volume, the audio data in the first recording data other than the target recording data among the M pieces of first recording data,
  • where the first volume is greater than the second volume.
  • the fourth input may be voice input, gesture input, touch input, etc., which is not specifically limited here.
  • Touch input can be click input, double-click input, drag input, etc. The fourth input will be explained below using click input as an example.
  • In specific implementation, while the playback of the M pieces of first recording data is synchronously paused, or while they are being played synchronously, the user can click one of the video pictures, such as the first target video picture (the first target video picture corresponds to the target recording data).
  • Accordingly, the sound of this video picture will be appropriately amplified, while the sounds of the other video pictures will be appropriately reduced or kept unchanged, so as to highlight the sound of the selected video picture.
  • For example, the sound of the video picture of the first recording data recorded by collaborative device A is amplified, and the sounds of the other video pictures are reduced or remain unchanged.
  • the fifth input may be voice input, gesture input, touch input, etc., which is not specifically limited here.
  • Touch input can be click input, double-click input, drag input, etc. The fifth input will be explained below using double-click input as an example.
  • In specific implementation, while the playback of the M pieces of first recording data is synchronously paused, the user can double-click one of the video pictures, such as the second target video picture.
  • Accordingly, the first electronic device can output the sound of that video picture, while the sounds of the other video pictures are muted.
  • The user can also double-click one of the video pictures, such as the second target video picture, while the M pieces of first recording data are being played synchronously.
  • Accordingly, the first electronic device will prohibit the output of the sounds of the other video pictures while maintaining the sound output of the second target video picture.
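  • The volume behavior for the fourth and fifth inputs can be sketched as follows; the concrete volume values are assumptions, since the text above only requires the first volume to be greater than the second:

```python
# Minimal sketch (hypothetical volume values): the fourth input (click)
# amplifies the selected picture's audio and lowers the rest; the fifth input
# (double-click) keeps only the selected picture audible and mutes the others.
def on_fourth_input(volumes: dict[str, float], target: str,
                    first_volume: float = 1.0, second_volume: float = 0.3) -> None:
    for rec in volumes:
        volumes[rec] = first_volume if rec == target else second_volume

def on_fifth_input(volumes: dict[str, float], target: str) -> None:
    for rec in volumes:
        if rec != target:
            volumes[rec] = 0.0  # playback of the other audio data is prohibited

volumes = {"recA": 0.5, "recB": 0.5, "recC": 0.5}
on_fourth_input(volumes, "recA")  # recA louder than the others
on_fifth_input(volumes, "recA")   # only recA remains audible
print(volumes)
```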
  • It should be noted that, in the recording data processing method provided by the embodiments of the present application, the execution subject may be a recording data processing device, or a control module in the recording data processing device for executing the recording data processing method.
  • In the embodiments of the present application, a recording data processing device executing the recording data processing method is taken as an example to illustrate the recording data processing device provided by the embodiments of the present application.
  • Figure 16 is a structural diagram of a recording data processing device provided by an embodiment of the present application. The device is applied to a first electronic device. As shown in Figure 16, the recording data processing device 1600 includes:
  • the first acquisition module 1601 is used to acquire M pieces of first recording data, where M is an integer greater than 1;
  • the second acquisition module 1602 is used to obtain the first screen splitting mode of the M first recording data.
  • The first screen splitting mode represents the manner in which the displayed screen is divided when the pictures of the M pieces of first recording data are displayed simultaneously, each piece of first recording data corresponds to a picture display position, and the picture display positions of different pieces of first recording data are at least partially different;
  • the first determination module 1603 is used to determine the M pieces of spatial audio position information associated one by one with the M pieces of first recording data based on the first picture splitting method;
  • the spatial audio position information includes a spatial audio phase
  • the spatial audio phase matches the picture display position corresponding to the first recording data, so that the sound source position of the picture of each piece of first recording data can be identified when the M pieces of first recording data are played.
  • the first acquisition module 1601 is specifically used for:
  • when the first electronic device is communicatively connected to N second electronic devices and has the target permission of the N second electronic devices, sending a first recording control instruction to each second electronic device, and receiving N pieces of recording data sent by the N second electronic devices based on the first recording control instruction to obtain the M pieces of first recording data, where N is a positive integer less than or equal to M;
  • when N is less than M, the M pieces of first recording data include N pieces of first recording data obtained based on the N pieces of recording data and the first recording data obtained by the first electronic device during collaborative recording with the N second electronic devices;
  • when the first electronic device does not participate in collaborative recording, N equals M, and the M pieces of first recording data include the N pieces of first recording data obtained based on the N pieces of recording data.
  • the device also includes:
  • a second determination module, used to determine the second screen splitting mode based on target information;
  • a display module configured to preview and display the recording data obtained during the collaborative recording process according to the second picture split-screen mode
  • the target information includes at least one of the following:
  • a pre-stored first mapping relationship which represents the mapping relationship between the number of recorded data and the screen split mode during the collaborative recording process
  • device attribute information corresponding to each piece of recording data during the collaborative recording process, where the device attribute information represents the role of the electronic device participating in the collaborative recording in the collaborative recording process;
  • the recording position corresponding to each piece of recording data, where the recording position represents the position of the electronic device participating in the collaborative recording relative to the recording object.
  • the device also includes:
  • a storage module configured to, upon receiving a first input from the user, separately store, based on the identified identification information of the N second electronic devices, the recording data sent by each second electronic device based on the first recording control instruction, to obtain the N pieces of first recording data;
  • the first input is used to control the N second electronic devices to end collaborative recording.
  • the second acquisition module 1602 is specifically used for any of the following:
  • when the M pieces of first recording data are obtained through collaborative recording, determining a target screen splitting mode as the first screen splitting mode, where the target screen splitting mode is the screen splitting mode used to preview and display the recording data obtained during the collaborative recording process, and the M pieces of first recording data include the recording data obtained during the collaborative recording process;
  • determining the first screen splitting mode based on the number of pieces of first recording data;
  • upon receiving a second input from the user to adjust the picture display positions corresponding to the M pieces of first recording data displayed in split-screen form based on a third screen splitting mode, determining the first screen splitting mode based on the second input and the third screen splitting mode.
  • the device also includes:
  • a first playback module configured to perform split-screen playback of M first video pictures based on the first picture split-screen method, where the M first video pictures correspond to the M first recording data one-to-one,
  • the first video picture is a video picture corresponding to the video data in the first recording data, or the first video picture is a sound effect picture corresponding to the audio data in the first recording data;
  • a second playback module configured to synchronously play M pieces of audio data among the M pieces of first recorded data based on the M pieces of spatial audio position information
  • the sound source position represented by the audio data matches the picture display position corresponding to the first recording data.
  • the device also includes:
  • a pause playback processing module configured to pause the playback of the M pieces of first recording data when a third input is received, and highlight the border of each first video picture.
  • the device also includes:
  • a third playback module configured to, when receiving a fourth input to a first target video picture among the M first video pictures, play the audio data in the target recording data corresponding to the first target video picture at a first volume, and play the audio data in the first recording data other than the target recording data at a second volume, where the first volume is greater than the second volume.
  • the device also includes:
  • a playback prohibition processing module configured to, when receiving a fifth input to a second target video picture among the M first video pictures, prohibit the playback of the audio data in the first recording data other than the first recording data corresponding to the second target video picture among the M pieces of first recording data.
  • In the recording data processing device of the embodiments of the present application, the first acquisition module 1601 acquires the M pieces of first recording data, the second acquisition module 1602 acquires the first screen splitting mode of the M pieces of first recording data, and the first determination module 1603 determines, based on the first screen splitting mode, the M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data, where the spatial audio position information includes a spatial audio phase, and the spatial audio phase matches the picture display position corresponding to the first recording data.
  • In this way, the picture display of each piece of recording data can match spatial audio of the corresponding phase, so that the user can identify the sound source position of the picture of each piece of recording data, thereby making the pictures of the recording data more spatial and three-dimensional, which can improve the playback effect of the recording data.
  • the recording data processing device in the embodiment of the present application may be a device, or may be a component, integrated circuit, or chip in a terminal.
  • the device may be a mobile electronic device or a non-mobile electronic device.
  • The mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a handheld computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), etc.
  • Non-mobile electronic devices can be servers, network attached storage (NAS), personal computers (PC), televisions (TV), automated teller machines, or self-service machines, etc., which are not specifically limited in the embodiments of this application.
  • the recording data processing device in the embodiment of the present application may be a device with an operating system.
  • The operating system can be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of this application.
  • the recording data processing device provided by the embodiment of the present application can implement each process implemented by the method embodiment in Figure 1. To avoid duplication, the details will not be described here.
  • As shown in Figure 17, this embodiment of the present application also provides an electronic device 1700.
  • The electronic device is a first electronic device and includes a processor 1701, a memory 1702, and a program or instruction stored in the memory 1702 and executable on the processor 1701.
  • When the program or instruction is executed by the processor 1701, each process of the above recording data processing method embodiment is implemented, and the same technical effect can be achieved; to avoid repetition, it is not described again here.
  • the electronic devices in the embodiments of the present application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
  • Figure 18 is a schematic diagram of the hardware structure of an electronic device that implements an embodiment of the present application.
  • The electronic device 1800 includes, but is not limited to: a radio frequency unit 1801, a network module 1802, an audio output unit 1803, an input unit 1804, a sensor 1805, a display unit 1806, a user input unit 1807, an interface unit 1808, a memory 1809, a processor 1810, and other components.
  • The electronic device 1800 may also include a power supply (such as a battery) that supplies power to the various components.
  • The power supply may be logically connected to the processor 1810 through a power management system, thereby implementing functions such as charging management, discharging management, and power consumption management through the power management system.
  • The structure of the electronic device shown in Figure 18 does not constitute a limitation on the electronic device.
  • The electronic device may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently; details are not described again here.
  • The processor 1810 is used to obtain M pieces of first recording data and obtain a first screen splitting mode of the M pieces of first recording data, where the first screen splitting mode represents the manner in which the displayed screen is divided when the pictures of the M pieces of first recording data are displayed simultaneously.
  • Each piece of first recording data corresponds to a picture display position.
  • The picture display positions of different pieces of first recording data are at least partially different.
  • M is an integer greater than 1. The processor 1810 is further used to determine, based on the first screen splitting mode, the M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data;
  • where the spatial audio position information includes a spatial audio phase,
  • and the spatial audio phase matches the picture display position corresponding to the first recording data, so that the sound source position of the picture of each piece of first recording data can be identified when the M pieces of first recording data are played.
  • In this way, M pieces of first recording data are obtained through the processor 1810, the first screen splitting mode of the M pieces of first recording data is obtained, and, based on the first screen splitting mode, the M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data are determined, where the spatial audio position information includes a spatial audio phase, and the spatial audio phase matches the picture display position corresponding to the first recording data.
  • As a result, the picture display of each piece of recording data can match spatial audio of the corresponding phase, so that the user can identify the sound source position of the picture of each piece of recording data, thereby making the pictures of the recording data more spatial and three-dimensional, which can improve the playback effect of the recording data.
  • Optionally, the radio frequency unit 1801 is configured to: when the first electronic device is communicatively connected to N second electronic devices and the first electronic device has the target permission of the N second electronic devices, send a first recording control instruction to each second electronic device, where the target permission indicates that the first electronic device can control the N second electronic devices to perform collaborative recording, the first recording control instruction is used to control the second electronic device to start collaborative recording, and N is a positive integer less than or equal to M;
  • the radio frequency unit 1801 is further configured to receive N pieces of recording data sent by the N second electronic devices based on the first recording control instruction, to obtain the M pieces of first recording data;
  • when N is less than M, the M pieces of first recording data include N pieces of first recording data obtained based on the N pieces of recording data and the first recording data obtained by the first electronic device during collaborative recording with the N second electronic devices;
  • when the first electronic device does not participate in collaborative recording, N equals M, and the M pieces of first recording data include the N pieces of first recording data obtained based on the N pieces of recording data.
  • Optionally, the processor 1810 is configured to determine the second screen splitting mode based on target information;
  • the display unit 1806 is configured to preview and display the recording data obtained during the collaborative recording process according to the second picture split-screen mode
  • the target information includes at least one of the following:
  • a pre-stored first mapping relationship which represents the mapping relationship between the number of recorded data and the screen split mode during the collaborative recording process
  • device attribute information corresponding to each piece of recording data during the collaborative recording process, where the device attribute information represents the role of the electronic device participating in the collaborative recording in the collaborative recording process;
  • the recording position corresponding to each piece of recording data, where the recording position represents the position of the electronic device participating in the collaborative recording relative to the recording object.
  • the user input unit 1807 is used to receive the user's first input
  • The processor 1810 is further configured to, upon receiving the first input from the user, separately store, based on the identified identification information of the N second electronic devices, the recording data sent by each second electronic device based on the first recording control instruction, to obtain the N pieces of first recording data;
  • the first input is used to control the N second electronic devices to end collaborative recording.
  • Optionally, the processor 1810 is configured to determine the first screen splitting mode based on the number of pieces of first recording data.
  • Alternatively, the processor 1810 is configured to, when the M pieces of first recording data are obtained through collaborative recording, determine a target screen splitting mode as the first screen splitting mode, where the target screen splitting mode is the screen splitting mode used to preview and display the recording data obtained during the collaborative recording process, and the M pieces of first recording data include the recording data obtained during the collaborative recording process.
  • Alternatively, the display unit 1806 is configured to display the pictures of the M pieces of first recording data in split-screen form based on a third screen splitting mode;
  • the user input unit 1807 is used to receive the second input from the user to adjust the picture display positions corresponding to the M pieces of first recording data;
  • and the processor 1810 is configured to, upon receiving the second input, determine the first screen splitting mode based on the second input and the third screen splitting mode.
  • Optionally, the display unit 1806 is configured to play M first video pictures in split-screen form based on the first screen splitting mode, where the M first video pictures correspond one-to-one to the M pieces of first recording data,
  • the first video picture is a video picture corresponding to the video data in the first recording data, or the first video picture is a sound effect picture corresponding to the audio data in the first recording data;
  • the audio output unit 1803 is configured to synchronously play the M pieces of audio data in the M pieces of first recorded data based on the M pieces of spatial audio position information;
  • the sound source position represented by the audio data matches the picture display position corresponding to the first recording data.
  • the user input unit 1807 is used to receive a third input
  • the processor 1810 is also configured to pause the playback of the M pieces of first recorded data when receiving a third input;
  • the display unit 1806 is configured to highlight the border of each first video picture when the third input is received.
  • the user input unit 1807 is configured to receive a fourth input to the first target video picture among the M first video pictures;
  • the audio output unit 1803 is configured to, when the fourth input to the first target video picture among the M first video pictures is received, play the audio data in the target recording data corresponding to the first target video picture at a first volume, and play the audio data in the first recording data other than the target recording data at a second volume, where the first volume is greater than the second volume.
  • the user input unit 1807 is configured to receive a fifth input to the second target video picture among the M first video pictures;
  • the processor 1810 is configured to, when the fifth input to the second target video picture among the M first video pictures is received, prohibit the playback of the audio data in the first recording data other than the first recording data corresponding to the second target video picture among the M pieces of first recording data.
  • the input unit 1804 may include a graphics processor (Graphics Processing Unit, GPU) 18041 and a microphone 18042.
  • The graphics processor 18041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
  • the display unit 1806 may include a display panel 18061, which may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • The user input unit 1807 includes a touch panel 18071 and other input devices 18072. The touch panel 18071 is also known as a touch screen.
  • the touch panel 18071 may include two parts: a touch detection device and a touch controller.
  • Other input devices 18072 may include but are not limited to physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be described again here.
  • Memory 1809 may be used to store software programs as well as various data, including but not limited to application programs and operating systems.
  • the processor 1810 can integrate an application processor and a modem processor.
  • the application processor mainly processes the operating system, user interface, application programs, etc.
  • the modem processor mainly processes wireless communications. It can be understood that the above modem processor may not be integrated into the processor 1810.
  • Embodiments of the present application also provide a readable storage medium.
  • Programs or instructions are stored on the readable storage medium.
  • When the programs or instructions are executed by a processor, each process of the above embodiments of the recording data processing method is implemented, and the same technical effect can be achieved; to avoid repetition, it is not described again here.
  • the processor is the processor in the electronic device described in the above embodiment.
  • The readable storage medium includes computer-readable storage media, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • An embodiment of the present application further provides a chip.
  • the chip includes a processor and a communication interface.
  • the communication interface is coupled to the processor.
  • The processor is used to run programs or instructions to implement each process of the above recording data processing method embodiment, and the same technical effect can be achieved; to avoid repetition, it is not described again here.
  • The chip mentioned in the embodiments of this application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
  • The essence of the technical solution of this application, or the part that contributes to the existing technology, can be embodied in the form of a computer software product.
  • The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes a number of instructions used to cause a terminal (which can be a mobile phone, a computer, a server, a network device, etc.) to execute the methods described in the various embodiments of this application.

Abstract

This application discloses a recording data processing method, device, and electronic equipment, belonging to the field of multimedia technology. The method includes: acquiring M pieces of first recording data, and acquiring a first screen splitting mode of the M pieces of first recording data, where the first screen splitting mode represents the manner in which the displayed screen is divided when the pictures of the M pieces of first recording data are displayed simultaneously, each piece of first recording data corresponds to a picture display position, and the picture display positions of different pieces of first recording data are at least partially different; and determining, based on the first screen splitting mode, M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data, where the spatial audio position information includes a spatial audio phase, and the spatial audio phase matches the picture display position corresponding to the first recording data, so that the sound source position of the picture of each piece of first recording data can be identified when the M pieces of first recording data are played.

Description

Recording data processing method and apparatus, and electronic device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 202210501027.3, filed in China on May 9, 2022, the entire contents of which are incorporated herein by reference.
Technical field
This application belongs to the field of multimedia technology, and specifically relates to a recording data processing method and apparatus, and an electronic device.
Background
With the rapid development of electronic technology, electronic devices have become widely used. A user can record data with an electronic device, such as video or audio, to capture the beautiful and happy moments around them in real time.
At present, when multiple pieces of recording data are played synchronously, the playback effect is relatively poor.
Summary
The purpose of the embodiments of this application is to provide a recording data processing method and apparatus, and an electronic device, which can solve the problem of poor playback effect when multiple pieces of recording data are played synchronously.
In a first aspect, an embodiment of this application provides a recording data processing method, applied to a first electronic device, the method including:
obtaining M pieces of first recording data, and obtaining a first screen-splitting mode of the M pieces of first recording data, where the first screen-splitting mode characterizes the way the displayed screen is divided when the pictures of the M pieces of first recording data are displayed simultaneously, each piece of first recording data corresponds to one picture display position, the picture display positions of different pieces of first recording data are at least partially different, and M is an integer greater than 1;
determining, based on the first screen-splitting mode, M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data;
where the spatial audio position information includes a spatial audio phase, and the spatial audio phase matches the picture display position corresponding to the first recording data, so that the sound source position of the picture of the first recording data can be identified when the M pieces of first recording data are played.
In a second aspect, an embodiment of this application provides a recording data processing apparatus, applied to a first electronic device, the apparatus including:
a first obtaining module, configured to obtain M pieces of first recording data, where M is an integer greater than 1;
a second obtaining module, configured to obtain a first screen-splitting mode of the M pieces of first recording data, where the first screen-splitting mode characterizes the way the displayed screen is divided when the pictures of the M pieces of first recording data are displayed simultaneously, each piece of first recording data corresponds to one picture display position, and the picture display positions of different pieces of first recording data are at least partially different;
a first determining module, configured to determine, based on the first screen-splitting mode, M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data;
where the spatial audio position information includes a spatial audio phase, and the spatial audio phase matches the picture display position corresponding to the first recording data, so that the sound source position of the picture of the first recording data can be identified when the M pieces of first recording data are played.
In a third aspect, an embodiment of this application provides an electronic device, including a processor, a memory, and a program or instructions stored on the memory and runnable on the processor, where the program or instructions, when executed by the processor, implement the steps of the recording data processing method according to the first aspect.
In a fourth aspect, an embodiment of this application provides a readable storage medium, on which a program or instructions are stored, where the program or instructions, when executed by a processor, implement the steps of the recording data processing method according to the first aspect.
In a fifth aspect, an embodiment of this application provides a chip, the chip including a processor and a communication interface, the communication interface being coupled to the processor, and the processor being configured to run a program or instructions to implement the steps of the recording data processing method according to the first aspect.
In the embodiments of this application, M pieces of first recording data and a first screen-splitting mode of the M pieces of first recording data are obtained; based on the first screen-splitting mode, M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data are determined, where the spatial audio position information includes a spatial audio phase that matches the picture display position corresponding to the first recording data. In this way, when the M pieces of first recording data are played synchronously according to the spatial audio position information, the picture display of different pieces of recording data matches the spatial-phase audio of the corresponding position, so that the user can identify the sound source position of each recording's picture. This gives the pictures of the recording data a stronger sense of space and depth, thereby improving the playback effect of the recording data.
Brief description of the drawings
Figure 1 is a flowchart of the recording data processing method provided by an embodiment of this application;
Figures 2 to 5 are schematic diagrams of screen display under various screen-splitting modes;
Figure 6 is a schematic diagram of spatial audio phase allocation in one example;
Figure 7 is a schematic diagram of spatial audio phase allocation in another example;
Figure 8 is a schematic structural diagram of multiple second electronic devices authorizing a first electronic device;
Figure 9 is a schematic structural diagram of multiple electronic devices performing collaborative recording;
Figures 10 to 11 are schematic diagrams of the process of adjusting a third screen-splitting mode;
Figure 12 is a schematic diagram of playback of M pieces of first recording data;
Figure 13 is a schematic diagram of the display when video playback is paused;
Figure 14 is a schematic diagram of audio playback of the M first video pictures when a video picture is tapped;
Figure 15 is a schematic diagram of audio playback of the M first video pictures when a video picture is double-tapped;
Figure 16 is a structural diagram of the recording data processing apparatus provided by an embodiment of this application;
Figure 17 is a structural diagram of an electronic device provided by an embodiment of this application;
Figure 18 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of this application.
Detailed description
The technical solutions in the embodiments of this application are described clearly below in conjunction with the accompanying drawings. Obviously, the described embodiments are some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application fall within the protection scope of this application.
The terms "first", "second", and the like in the specification and claims of this application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of this application can be implemented in orders other than those illustrated or described here. Objects distinguished by "first", "second", and the like are usually of one class, and the number of objects is not limited; for example, there can be one or more first objects. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The recording data processing method provided by the embodiments of this application is described in detail below through specific embodiments and application scenarios, in conjunction with the accompanying drawings.
Figure 1 is a flowchart of the recording data processing method provided by an embodiment of this application. As shown in Figure 1, the method includes the following steps:
Step 101: obtain M pieces of first recording data, and obtain a first screen-splitting mode of the M pieces of first recording data, where the first screen-splitting mode characterizes the way the displayed screen is divided when the pictures of the M pieces of first recording data are displayed simultaneously, each piece of first recording data corresponds to one picture display position, and the picture display positions of different pieces of first recording data are at least partially different.
Here, M is an integer greater than 1.
The recording data processing method provided by this embodiment can be applied to a first electronic device, which can be any electronic device. In an optional implementation, the first electronic device can be the authorized device for collaborative recording; that is, other electronic devices authorize the first electronic device so that it can control at least two electronic devices to record collaboratively and obtain at least two pieces of recording data. Collaborative recording can refer to video recording or audio recording, which is not specifically limited here.
In this step, the recording data can be video recording data (i.e., video) or audio recording data (i.e., audio). In the following embodiments, video recording data is taken as an example.
The M pieces of first recording data can be at least two pieces of recording data obtained through collaborative video recording, or any recording data, which is not specifically limited here. The M pieces of first recording data can be associated with one another so that they can be played in coordination.
The M pieces of first recording data can be obtained in multiple ways. In an optional implementation, the first electronic device can start a collaborative video recording mode and, in this mode, record together with other electronic devices to obtain the M pieces of first recording data.
In another optional implementation, the first electronic device can receive M pieces of first recording data that were obtained through collaborative recording by other electronic devices and sent by them.
In yet another optional implementation, multiple pieces of recording data selected by the user can be associated with one another to obtain the M pieces of first recording data. For example, in a video editing mode, the user can select multiple pieces of recording data for editing; accordingly, the first electronic device can take the multiple pieces of recording data selected by the user as the M pieces of first recording data.
The screen-splitting mode can refer to the way the pictures of at least two pieces of recording data (such as videos) divide the screen when played in split screen; that is, the pictures of different videos are displayed in different display regions of the screen, so that visually the screen is divided into multiple video pictures.
As shown in Figures 2 to 5, the entire square region serves as the screen, which is divided into multiple sub-display regions, each used to display the video picture of one piece of recording data. The screen can be split according to the number of pieces of recording data; the split can be even, as shown in Figures 2 and 4, or uneven, as shown in Figures 3 and 5. In Figure 5, the video data recorded by the authorized device in the collaborative recording is displayed on the screen in picture-in-picture form.
The first screen-splitting mode can be obtained in multiple ways. In an optional implementation, setting information from the user can be received, the setting information including the first screen-splitting mode of the M pieces of first recording data. In another optional implementation, the first screen-splitting mode can be determined according to information such as the number of electronic devices participating in the collaborative recording, their device attribute information, and their positions relative to the recorded object. In yet another optional implementation, association information sent by other electronic devices can be received, the association information including the first screen-splitting mode associated with the M pieces of first recording data.
Step 102: determine, based on the first screen-splitting mode, M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data.
The spatial audio position information includes a spatial audio phase, and the spatial audio phase matches the picture display position corresponding to the first recording data, so that the sound source position of the picture of the first recording data can be identified when the M pieces of first recording data are played.
In this step, the spatial audio position information can include a spatial audio phase, which characterizes the spatial direction of the sound source relative to the user, such as in front of, to the left of, to the right of, or behind the user. The spatial audio position information can also include a spatial audio distance, such as 10 meters to the user's left or 5 meters to the user's right. In this way, the spatial audio position information can simulate sound emitted by an audio device at a fixed position in space.
For each piece of first recording data, based on its picture display position in the first screen-splitting mode, a 360° space centered on the user can be formed, and a spatial audio phase can be assigned to that piece of first recording data so that the assigned spatial audio position matches its picture display position.
Afterwards, the first recording data can be played according to the spatial audio position information using spatial audio technology. Spatial audio technology can locate objects by exploiting the principle by which the human ear distinguishes the direction of sound. That is, a sound source point is set, and when the earphones can identify the spatial position of the ear relative to the sound source point, spatial audio technology can play sounds in the two ears with different time differences and level differences, and even simulate the reflection effect of the auricle. This makes the user feel that the sound is coming from the direction of the sound source point, thereby simulating the effect of spatial audio.
The principle by which the human ear distinguishes the direction of sound is as follows: the ear can clearly judge which direction a sound comes from because the sound produces a time difference and a level difference between the two ears, from which the user can judge the direction of the sound source. The time difference arises because the distances from the sound source to the two ears differ slightly. The level difference arises, on one hand, because the head blocks the sound, so the two ears perceive different volumes on the left and right sides; on the other hand, the ability to judge the direction of sounds from above and below is due to the auricle, since sounds from above and below produce different reflection effects in the auricle, and these differences can be captured by the ear.
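As a rough quantitative illustration of the time-difference cue described above (a standard free-field approximation from acoustics, not a formula taken from this application), the interaural time difference for a source at azimuth θ can be sketched as:

```latex
% Interaural time difference (ITD), free-field approximation:
%   d      ear spacing, roughly 0.2 m
%   \theta azimuth of the source relative to straight ahead
%   c      speed of sound, roughly 343 m/s
\Delta t \approx \frac{d \sin\theta}{c}
% A source fully to one side (\theta = 90^{\circ}) gives
% \Delta t \approx 0.2 / 343 \approx 0.58 ms, which is the cue
% the spatial audio renderer reproduces between the two ears.
```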
Since the picture display positions of different pieces of first recording data differ, when the audio of the first recording data is played with spatial audio technology, the direction of the sound source differs accordingly. In this way, the picture display of different pieces of recording data matches the spatial-phase audio of the corresponding position, allowing the user to identify the sound source position of each recording's picture. The assigned spatial audio position information can be associated with the file corresponding to the first recording data, with one piece of first recording data corresponding to one file.
Taking Figure 2 as an example, Figure 6 is a schematic diagram of spatial audio phase allocation in one example: the left picture can correspond to left spatial-phase audio, and the right picture to right spatial-phase audio. Taking Figure 4 as an example, Figure 7 is a schematic diagram of spatial audio phase allocation in another example: the spatial audio phases of the four pictures are allocated evenly according to the evenly split pictures, e.g., upper-left, lower-left, upper-right, and lower-right.
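A minimal sketch of how such an allocation might be computed, assuming normalized screen coordinates and a simple linear mapping from cell centre to azimuth and elevation (the function name, the coordinate convention, and the mapping itself are illustrative assumptions, not taken from this application):

```python
def assign_spatial_phases(cells):
    """Map each split-screen cell to a spatial audio phase.

    Each cell is (x, y, w, h) in normalized [0, 1] screen coordinates.
    The horizontal offset of the cell centre from the screen centre is
    mapped linearly onto an azimuth in [-90, +90] degrees (negative =
    to the listener's left), and the vertical offset onto an elevation
    in [-45, +45] degrees (negative = below).
    """
    phases = []
    for x, y, w, h in cells:
        cx, cy = x + w / 2, y + h / 2
        phases.append({
            "azimuth_deg": (cx - 0.5) * 180.0,
            "elevation_deg": (0.5 - cy) * 90.0,
        })
    return phases

# Figure 6: two-way split -> left and right phases.
print(assign_spatial_phases([(0, 0, 0.5, 1), (0.5, 0, 0.5, 1)]))
# Figure 7: four even quadrants -> upper-left, upper-right,
# lower-left, lower-right phases.
print(assign_spatial_phases([(0, 0, 0.5, 0.5), (0.5, 0, 0.5, 0.5),
                             (0, 0.5, 0.5, 0.5), (0.5, 0.5, 0.5, 0.5)]))
```

On the two-way split of Figure 6 this yields azimuths of -45° and +45° (left and right); on the four-way split of Figure 7 it yields the four quadrant directions.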
In this embodiment, M pieces of first recording data and their first screen-splitting mode are obtained; based on the first screen-splitting mode, M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data are determined, where the spatial audio position information includes a spatial audio phase that matches the picture display position corresponding to the first recording data. In this way, when the M pieces of first recording data are played according to the spatial audio position information, the picture display of different pieces of recording data matches the spatial-phase audio of the corresponding position, so that the user can identify the sound source position of each recording's picture, giving the pictures a stronger sense of space and depth and thereby improving the playback effect.
In addition, so that the M pieces of first recording data present a better sense of space and depth during playback, the first electronic device can support wired or wireless earphones. The first electronic device can use an audio module to play the M pieces of first recording data according to the spatial audio position information based on spatial audio technology, achieving a better playback effect. The audio module can be wired earphones, wireless earphones, or speakers; for example, the audio module can include at least two speakers.
In an optional implementation, the M pieces of first recording data are obtained by the first electronic device, as the authorized device, controlling at least two electronic devices to record collaboratively. The obtaining of the M pieces of first recording data includes:
when the first electronic device is communicatively connected to N second electronic devices and has target permission for the N second electronic devices, sending a first recording control instruction to each second electronic device, where the target permission characterizes that the first electronic device can control the N second electronic devices to record collaboratively, the first recording control instruction is used to control the second electronic devices to start collaborative recording, and N is a positive integer less than or equal to M;
receiving N pieces of recording data sent by the N second electronic devices based on the first recording control instruction, to obtain the M pieces of first recording data;
where, when the first electronic device participates in the collaborative recording, N is less than M, and the M pieces of first recording data include N pieces of first recording data obtained based on the N pieces of recording data and the first recording data obtained by the first electronic device during the collaborative recording with the N second electronic devices; when the first electronic device does not participate in the collaborative recording, N equals M, and the M pieces of first recording data include the N pieces of first recording data obtained based on the N pieces of recording data.
In this implementation, the first electronic device and each second electronic device both support network data transmission. In one scenario, the first electronic device participates in the collaborative recording of video or audio; in this scenario, the first electronic device and each second electronic device can support video recording and audio recording functions. In another scenario, the first electronic device does not participate in the collaborative recording and serves only as the control device controlling at least two second electronic devices to record collaboratively.
The first electronic device can serve as the authorized device and establish a communication connection with each second electronic device. The communication network can be a wireless network, such as Wireless Fidelity (WiFi) or a fifth-generation (5G) high-speed network.
As shown in Figure 8, on the basis of the communication connection, the second electronic devices can authorize the first electronic device through a protocol, with permissions including reading audio and video recording data online. If authorization succeeds, the first electronic device has target permission for each second electronic device, allowing it to control the N second electronic devices to shoot collaboratively.
After authorization, the user can start the collaborative video recording mode. In this mode, the first electronic device can control the N second electronic devices to record video and store data in real time. As shown in Figure 9, the electronic devices participating in the collaborative recording only need to open the video recording interface and enter the collaborative video recording mode; the first electronic device controls the start of recording and, accordingly, sends the first recording control instruction to each second electronic device.
Upon receiving the first recording control instruction, each second electronic device starts recording and transmits the recording data to the first electronic device in real time over the network connection; alternatively, it can transmit the recording data to the first electronic device upon receiving a control instruction sent by the first electronic device for ending the recording.
The first electronic device can also use the high-speed wireless network connection to record the pictures of the N second electronic devices remotely and simultaneously, so that the coordinated, beautiful pictures can be seen clearly even if the first electronic device is not on site.
Accordingly, as shown in Figure 9, the first electronic device can receive the N pieces of recording data sent by the N second electronic devices based on the first recording control instruction, to obtain the M pieces of first recording data. The M pieces of first recording data include the data recorded by each electronic device participating in the collaborative recording, where a piece of first recording data can be the data obtained by a participating electronic device from the start to the end of recording. The M pieces of first recording data obtained through collaborative recording can be associated with one another.
In this implementation, multiple authorized electronic devices are controlled over a wireless network connection to record video collaboratively and generate different video data, thereby achieving collaborative recording and simultaneous shooting in multiple places. The wireless network connection reduces time differences and spatial separation between electronic devices, allows real-time operation and control during collaborative shooting, and improves the operational coordination between multiple devices.
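A minimal sketch of this control flow, assuming a length-prefixed JSON-over-TCP framing and already-authorized socket connections (the message names, framing, and helper functions are illustrative assumptions, not the application's actual protocol):

```python
import json

START_RECORDING = {"cmd": "start_recording"}  # first recording control instruction
STOP_RECORDING = {"cmd": "stop_recording"}    # instruction for ending the recording

def send_instruction(conn, message):
    """Send one length-prefixed JSON control message over a socket."""
    payload = json.dumps(message).encode("utf-8")
    conn.sendall(len(payload).to_bytes(4, "big") + payload)

def recv_exact(conn, size):
    """Read exactly `size` bytes from a socket."""
    buf = bytearray()
    while len(buf) < size:
        chunk = conn.recv(min(65536, size - len(buf)))
        if not chunk:
            raise ConnectionError("second device disconnected")
        buf.extend(chunk)
    return bytes(buf)

def collaborative_record(device_conns):
    """Broadcast the start instruction to every authorized second device,
    then collect one recording per device when the recording ends.

    device_conns: {device_id: already-authorized socket connection}.
    Returns {device_id: recording_bytes}.
    """
    for conn in device_conns.values():
        send_instruction(conn, START_RECORDING)
    # ... recording runs; the first electronic device may also record locally ...
    recordings = {}
    for device_id, conn in device_conns.items():
        send_instruction(conn, STOP_RECORDING)
        size = int.from_bytes(recv_exact(conn, 4), "big")
        recordings[device_id] = recv_exact(conn, size)
    return recordings
```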
During the collaborative recording, the pictures of the obtained recording data can be previewed in real time, to improve the effect of collaborative shooting and the user experience. The method further includes:
determining a second screen-splitting mode based on target information;
previewing, according to the second screen-splitting mode, the recording data obtained during the collaborative recording;
where the target information includes at least one of the following:
a pre-stored first mapping relationship, which characterizes the mapping between the number of pieces of recording data in the collaborative recording and the screen-splitting mode;
device attribute information corresponding to each piece of recording data in the collaborative recording, which characterizes the role of the participating electronic device during the collaborative recording;
a recording position corresponding to each piece of recording data in the collaborative recording, which characterizes the position of the participating electronic device relative to the recorded object.
In this implementation, the second screen-splitting mode used for preview can be determined according to the number of electronic devices participating in the collaborative recording (i.e., the number of pieces of recording data) and the pre-stored first mapping relationship. As shown in Figures 2 to 5, the screen-splitting mode differs with the number of participating electronic devices.
When determining the second screen-splitting mode, the device attribute information corresponding to each piece of recording data in the collaborative recording can be taken into account; the device attribute information includes authorized device, collaborative device, and so on. For example, if recording data A is recorded by the first electronic device (the authorized device), the picture of recording data A can be displayed at the far left of the screen, as shown in Figures 2 to 4, where the leftmost sub-display region can display recording data A; the data recorded by the authorized device can also be displayed in picture-in-picture form, as shown in Figure 5.
In addition, when performing multi-angle collaborative recording of a recorded object (which can be scenery, a person, an animal, an item, etc.), the second screen-splitting mode can be determined with reference to the positions of the participating electronic devices relative to the recorded object. Taking Figure 2 as an example, if the first electronic device is to the left of the recorded object during shooting and the second electronic device is to its right, the left picture can be the recording data picture shot by the first electronic device, and the right picture the recording data picture shot by the second electronic device.
Taking Figure 3 as an example, if the first electronic device is at the upper left of the recorded object during shooting, one second electronic device (collaborative device A) is at the upper right, and another second electronic device (collaborative device B) is behind the recorded object, then the upper-left picture can be the recording data picture shot by the first electronic device, the right picture that shot by collaborative device A, and the rear picture that shot by collaborative device B.
Accordingly, the pictures of the recording data obtained during the collaborative recording can be previewed according to the second screen-splitting mode, as shown in Figures 2 to 5.
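The selection logic described above might be sketched as follows (the count-to-grid table, role names, and position ranking are illustrative assumptions standing in for the pre-stored first mapping relationship and the attribute and position information):

```python
def determine_preview_layout(recordings):
    """Pick a preview split-screen layout from the target information.

    recordings: list of dicts such as
        {"device_id": "collaborative device A",
         "role": "authorized" or "collaborative",
         "position": "left" | "front" | "right" | "rear"}
    Returns the grid shape plus the device order, left-to-right and
    top-to-bottom.
    """
    # Pre-stored first mapping relationship: recording count -> grid shape.
    COUNT_TO_GRID = {2: (1, 2), 3: (1, 3), 4: (2, 2)}
    grid = COUNT_TO_GRID.get(len(recordings), (1, len(recordings)))

    # The authorized device's picture goes first (far left); the
    # collaborative devices follow, ordered by their position around
    # the recorded object.
    POSITION_RANK = {"left": 0, "front": 1, "right": 2, "rear": 3}
    ordered = sorted(
        recordings,
        key=lambda r: (r["role"] != "authorized",
                       POSITION_RANK.get(r.get("position"), 99)),
    )
    return grid, [r["device_id"] for r in ordered]
```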
Optionally, the method further includes:
upon receiving a first input from the user, separately storing, based on the identified identification information of the N second electronic devices, the recording data sent by each second electronic device based on the first recording control instruction, to obtain the N pieces of first recording data;
where the first input is used to control the N second electronic devices to end the collaborative recording.
In this implementation, the first input can be a voice input, a gesture input, a touch input, or the like, which is not specifically limited here.
The first electronic device can identify the second electronic devices, i.e., the collaborative devices, for example by receiving identification information sent by each second electronic device. Upon receiving the user's operation for ending recording (such as a tap on the recording control), it can, based on the recording data received from the second electronic devices, automatically generate the video files of the collaborative devices (each containing a piece of first recording data) and store them according to the second electronic devices' identification information; for example, the video files can be named "collaborative device A recording", "collaborative device B recording", and "collaborative device C recording". At the same time, a video file can be generated automatically from the data recorded by the first electronic device and named "authorized device recording", for later data retrieval. A sketch of this step follows below.
Optionally, once the video files of the electronic devices participating in the collaborative recording are obtained, the M generated video files can be associated with one another. In this way, the M pieces of first video data are associated with each other, so that the recording data obtained from collaborative shooting are interrelated, simplifying post-shoot editing.
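A minimal sketch of this storage-and-association step, assuming the recordings arrive as raw bytes and the association between the M files is kept in a small JSON manifest (the file naming scheme and manifest format are illustrative assumptions):

```python
import json
from pathlib import Path

def store_collaborative_recordings(recordings, layout, out_dir="recordings"):
    """Write one video file per participating device, named after its
    identification information, and record the association between the
    M files (plus the preview layout) in a shared manifest.

    recordings: dict {device_id: recording_bytes}, e.g. keys
    "authorized device", "collaborative device A", "collaborative device B".
    layout: the grid and device order chosen for preview display.
    """
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    files = []
    for device_id, data in recordings.items():
        path = out / f"{device_id} recording.mp4"
        path.write_bytes(data)          # one file per piece of first recording data
        files.append(path.name)
    # Associating the M files with one another: a manifest a player can
    # use to find the sibling recordings and the screen-splitting mode.
    manifest = {"files": files, "layout": layout}
    (out / "session.json").write_text(json.dumps(manifest, indent=2))
    return out / "session.json"
```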
Optionally, the obtaining of the first screen-splitting mode of the M pieces of first recording data includes any one of the following:
determining the first screen-splitting mode based on the number of pieces of first recording data;
when the M pieces of first recording data are obtained through collaborative recording, determining a target screen-splitting mode as the first screen-splitting mode, where the target screen-splitting mode is the screen-splitting mode used for previewing the recording data obtained during the collaborative recording, and the M pieces of first recording data include the recording data obtained during the collaborative recording;
when the pictures of the M pieces of first recording data are displayed in split screen based on a third screen-splitting mode, if a second input by which the user adjusts the picture display positions corresponding to the M pieces of first recording data is received, determining the first screen-splitting mode based on the second input and the third screen-splitting mode.
In an optional implementation, the first screen-splitting mode can be the same as the second screen-splitting mode; that is, the first screen-splitting mode is the screen-splitting mode used for preview during the collaborative recording. When the collaborative shooting ends, the preview screen-splitting mode can be associated with the generated video files, and accordingly the target screen-splitting mode associated with the video files can be obtained as the first screen-splitting mode.
In another optional implementation, the first screen-splitting mode can be determined based on the number of pieces of first recording data. The way of determining the first screen-splitting mode can be the same as or similar to that of the second screen-splitting mode, which is not repeated here.
In yet another optional implementation, if the user is not satisfied with the third screen-splitting mode associated with the video files themselves, the ordering of the third screen-splitting mode can be adjusted according to the user's preference. The third screen-splitting mode can be the preview screen-splitting mode.
For example, the user can perform a second input on the third screen-splitting mode (such as dragging a video picture); accordingly, the first electronic device can determine the adjusted first screen-splitting mode based on the second input and the third screen-splitting mode, and save the recording. As shown in Figures 10 and 11, the left diagram shows the third screen-splitting mode, and the right diagram shows the adjusted first screen-splitting mode. The second input can be a voice input, a gesture input, a touch input, or the like, which is not specifically limited here.
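One way the drag adjustment might be computed is sketched below, assuming the adjusted mode is obtained by swapping the dragged picture's cell with the cell under the drop point (an illustrative assumption; names and coordinate conventions are not taken from this application):

```python
def adjust_layout_by_drag(layout, dragged_id, drop_point):
    """Derive the first screen-splitting mode from a third one plus a
    drag input: the dragged picture swaps cells with the picture under
    the drop point.

    layout: dict {recording_id: (x, y, w, h)} in normalized coordinates.
    drop_point: (x, y) where the user released the dragged picture.
    """
    px, py = drop_point
    target_id = next(
        (rid for rid, (x, y, w, h) in layout.items()
         if x <= px < x + w and y <= py < y + h and rid != dragged_id),
        None,
    )
    if target_id is None:  # dropped on empty space or on itself: no change
        return dict(layout)
    adjusted = dict(layout)
    adjusted[dragged_id], adjusted[target_id] = layout[target_id], layout[dragged_id]
    return adjusted

# Swap the left and right pictures of a two-way split (cf. Figures 10-11).
layout = {"A": (0.0, 0.0, 0.5, 1.0), "B": (0.5, 0.0, 0.5, 1.0)}
print(adjust_layout_by_drag(layout, "A", (0.75, 0.5)))
```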
In this way, the first screen-splitting mode can be obtained.
Optionally, after determining, based on the first screen-splitting mode, the M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data, the method further includes:
playing M first video pictures in split screen based on the first screen-splitting mode, where the M first video pictures correspond one-to-one to the M pieces of first recording data, and a first video picture is the video picture corresponding to the video data in the first recording data, or a sound-effect picture corresponding to the audio data in the first recording data;
synchronously playing the M pieces of audio data in the M pieces of first recording data based on the M pieces of spatial audio position information;
where, when the M pieces of audio data are played, for the audio data in each piece of first recording data, the sound source position characterized by the audio data matches the picture display position corresponding to the first recording data.
In this implementation, once the first screen-splitting mode is determined, the M first video pictures can be played in split screen according to it. When the first recording data is video, the first video picture can be a video picture; when the first recording data is audio, the first video picture can be a sound-effect picture.
As shown in Figure 12, upon entering the video playback interface and tapping play, the pictures of the multiple pieces of first recording data can be played in split screen in chronological order, and the M pieces of audio data in the M pieces of first recording data can be played synchronously using spatial audio technology, based on the M pieces of spatial audio position information associated one-to-one with them. During synchronized audio playback, the direction of the sound source differs according to the display position of each picture.
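A minimal sketch of how the stored layout and spatial-phase information might be handed to a playback engine (the `player` interface is a stand-in assumption; no specific engine is implied by this application):

```python
def play_session(files, grid, player):
    """Start synchronized split-screen playback of M associated recordings.

    files: the M recording files, ordered left-to-right, top-to-bottom.
    grid: (rows, cols) of the first screen-splitting mode.
    player: stand-in for a spatial-audio playback engine exposing
        add_track(file, cell, azimuth_deg) and start().
    """
    rows, cols = grid
    for i, f in enumerate(files):
        r, c = divmod(i, cols)
        cell = (c / cols, r / rows, 1 / cols, 1 / rows)  # (x, y, w, h)
        cx = cell[0] + cell[2] / 2
        azimuth_deg = (cx - 0.5) * 180.0  # same linear mapping as the earlier sketch
        player.add_track(f, cell, azimuth_deg)
    player.start()  # all M tracks start together and stay in sync
```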
In addition, so that the M pieces of first recording data present a better sense of space and depth during playback, the first electronic device can support wired or wireless earphones, and can use an audio module to play the M pieces of first recording data according to the spatial audio position information based on spatial audio technology, achieving a better audio playback effect. The audio module can be wired earphones, wireless earphones, or speakers; for example, the audio module can include at least two speakers.
In this implementation, based on spatial audio technology, the pictures of different videos are given sound attributes, allowing the user to better distinguish the sound sources and making the videos more three-dimensional and spatial.
Optionally, after synchronously playing the M pieces of audio data in the M pieces of first recording data based on the M pieces of spatial audio position information, the method further includes:
upon receiving a third input, pausing the playback of the M pieces of first recording data and highlighting the border of each first video picture.
In this implementation, the third input can be a voice input, a gesture input, a touch input, or the like, which is not specifically limited here.
The third input can be used to pause the playback of the M pieces of first recording data. Accordingly, the first electronic device can synchronously pause the playback of the M pieces of first recording data and highlight the border of each first video picture; the highlighting can include, but is not limited to, a change of color, thickness, or brightness, or blinking. As shown in Figure 13, when video playback is paused, the border of each video picture can blink to remind the user that it can be tapped for further operations.
Optionally, after synchronously playing the M pieces of audio data in the M pieces of first recording data based on the M pieces of spatial audio position information, the method further includes:
upon receiving a fourth input on a first target video picture among the M first video pictures, playing the audio data in the target recording data corresponding to the first target video picture at a first volume, and playing the audio data in the first recording data other than the target recording data among the M pieces of first recording data at a second volume, where the first volume is greater than the second volume.
In this implementation, the fourth input can be a voice input, a gesture input, a touch input, or the like, which is not specifically limited here. A touch input can be a tap, a double tap, a drag, or the like; below, a tap is taken as an example of the fourth input.
While the M pieces of first recording data are synchronously paused, or while they are being played synchronously, the user can tap one of the video pictures, such as the first target video picture (the video picture corresponding to the target recording data). The sound of that video picture is appropriately amplified, while the sound of the other video pictures is appropriately reduced or kept unchanged, so as to highlight the sound of the selected video picture.
As shown in Figure 14, the sound of the video picture of the first recording data recorded by collaborative device A is amplified, while the sound of the other video pictures is reduced or kept unchanged.
In this way, the flexibility of playing the M pieces of first recording data can be improved.
Optionally, after synchronously playing the M pieces of audio data in the M pieces of first recording data based on the M pieces of spatial audio position information, the method further includes:
upon receiving a fifth input on a second target video picture among the M first video pictures, disabling playback of the audio data in the first recording data among the M pieces of first recording data other than the first recording data corresponding to the second target video picture.
In this implementation, the fifth input can be a voice input, a gesture input, a touch input, or the like, which is not specifically limited here. A touch input can be a tap, a double tap, a drag, or the like; below, a double tap is taken as an example of the fifth input.
While the M pieces of first recording data are synchronously paused, the user can double-tap one of the video pictures, such as the second target video picture; accordingly, the first electronic device can output the sound of that video picture while the sound of the other video pictures is muted.
Alternatively, while the M pieces of first recording data are being played synchronously, the user can double-tap one of the video pictures, such as the second target video picture; accordingly, the first electronic device disables the sound output of the other video pictures while keeping the sound output of the second target video picture.
As shown in Figure 15, double-tapping the video picture of the first recording data recorded by collaborative device A outputs only the sound of that video picture and disables the sound output of the other video pictures; at the same time, the first recording data recorded by collaborative device A can be played full-screen.
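The tap and double-tap behaviours described above might be reduced to per-track gain control, as in the following sketch (the gain values and method names are illustrative assumptions, not values specified by this application):

```python
class TrackGains:
    """Per-track gain control for the fourth (tap) and fifth
    (double-tap) inputs described above."""

    BOOSTED, NORMAL, DUCKED, MUTED = 1.5, 1.0, 0.6, 0.0

    def __init__(self, track_ids):
        self.gains = {tid: self.NORMAL for tid in track_ids}

    def tap(self, target):
        """Fourth input: play the tapped picture's audio at the first
        volume and the others at the second, lower volume."""
        for tid in self.gains:
            self.gains[tid] = self.BOOSTED if tid == target else self.DUCKED

    def double_tap(self, target):
        """Fifth input: keep only the double-tapped picture's audio and
        disable playback of every other track's audio."""
        for tid in self.gains:
            self.gains[tid] = self.NORMAL if tid == target else self.MUTED

gains = TrackGains(["authorized device", "collaborative device A",
                    "collaborative device B"])
gains.tap("collaborative device A")         # cf. Figure 14
print(gains.gains)
gains.double_tap("collaborative device A")  # cf. Figure 15
print(gains.gains)
```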
In this way, the flexibility of playing the M pieces of first recording data can be improved.
It should be noted that, for the recording data processing method provided by the embodiments of this application, the executing body can be a recording data processing apparatus, or a control module in the recording data processing apparatus for executing the recording data processing method. In the embodiments of this application, the recording data processing apparatus executing the recording data processing method is taken as an example to describe the recording data processing apparatus provided by the embodiments of this application.
Referring to Figure 16, Figure 16 is a structural diagram of the recording data processing apparatus provided by an embodiment of this application. The apparatus is applied to a first electronic device. As shown in Figure 16, the recording data processing apparatus 1600 includes:
a first obtaining module 1601, configured to obtain M pieces of first recording data, where M is an integer greater than 1;
a second obtaining module 1602, configured to obtain a first screen-splitting mode of the M pieces of first recording data, where the first screen-splitting mode characterizes the way the displayed screen is divided when the pictures of the M pieces of first recording data are displayed simultaneously, each piece of first recording data corresponds to one picture display position, and the picture display positions of different pieces of first recording data are at least partially different;
a first determining module 1603, configured to determine, based on the first screen-splitting mode, M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data;
where the spatial audio position information includes a spatial audio phase, and the spatial audio phase matches the picture display position corresponding to the first recording data, so that the sound source position of the picture of the first recording data can be identified when the M pieces of first recording data are played.
Optionally, the first obtaining module 1601 is specifically configured to:
when the first electronic device is communicatively connected to N second electronic devices and has target permission for the N second electronic devices, send a first recording control instruction to each second electronic device, where the target permission characterizes that the first electronic device can control the N second electronic devices to record collaboratively, the first recording control instruction is used to control the second electronic devices to start collaborative recording, and N is a positive integer less than or equal to M;
receive N pieces of recording data sent by the N second electronic devices based on the first recording control instruction, to obtain the M pieces of first recording data;
where, when the first electronic device participates in the collaborative recording, N is less than M, and the M pieces of first recording data include N pieces of first recording data obtained based on the N pieces of recording data and the first recording data obtained by the first electronic device during the collaborative recording with the N second electronic devices; when the first electronic device does not participate in the collaborative recording, N equals M, and the M pieces of first recording data include the N pieces of first recording data obtained based on the N pieces of recording data.
Optionally, the apparatus further includes:
a second determining module, configured to determine a second screen-splitting mode based on target information;
a display module, configured to preview, according to the second screen-splitting mode, the recording data obtained during the collaborative recording;
where the target information includes at least one of the following:
a pre-stored first mapping relationship, which characterizes the mapping between the number of pieces of recording data in the collaborative recording and the screen-splitting mode;
device attribute information corresponding to each piece of recording data in the collaborative recording, which characterizes the role of the participating electronic device during the collaborative recording;
a recording position corresponding to each piece of recording data in the collaborative recording, which characterizes the position of the participating electronic device relative to the recorded object.
Optionally, the apparatus further includes:
a storage module, configured to, upon receiving a first input from the user, separately store, based on the identified identification information of the N second electronic devices, the recording data sent by each second electronic device based on the first recording control instruction, to obtain the N pieces of first recording data;
where the first input is used to control the N second electronic devices to end the collaborative recording.
Optionally, the second obtaining module 1602 is specifically configured for any one of the following:
determining the first screen-splitting mode based on the number of pieces of first recording data;
when the M pieces of first recording data are obtained through collaborative recording, determining a target screen-splitting mode as the first screen-splitting mode, where the target screen-splitting mode is the screen-splitting mode used for previewing the recording data obtained during the collaborative recording, and the M pieces of first recording data include the recording data obtained during the collaborative recording;
when the pictures of the M pieces of first recording data are displayed in split screen based on a third screen-splitting mode, if a second input by which the user adjusts the picture display positions corresponding to the M pieces of first recording data is received, determining the first screen-splitting mode based on the second input and the third screen-splitting mode.
Optionally, the apparatus further includes:
a first playback module, configured to play M first video pictures in split screen based on the first screen-splitting mode, where the M first video pictures correspond one-to-one to the M pieces of first recording data, and a first video picture is the video picture corresponding to the video data in the first recording data, or a sound-effect picture corresponding to the audio data in the first recording data;
a second playback module, configured to synchronously play the M pieces of audio data in the M pieces of first recording data based on the M pieces of spatial audio position information;
where, when the M pieces of audio data are played, for the audio data in each piece of first recording data, the sound source position characterized by the audio data matches the picture display position corresponding to the first recording data.
Optionally, the apparatus further includes:
a pause processing module, configured to, upon receiving a third input, pause the playback of the M pieces of first recording data and highlight the border of each first video picture.
Optionally, the apparatus further includes:
a third playback module, configured to, upon receiving a fourth input on a first target video picture among the M first video pictures, play the audio data in the target recording data corresponding to the first target video picture at a first volume, and play the audio data in the first recording data other than the target recording data among the M pieces of first recording data at a second volume, where the first volume is greater than the second volume.
Optionally, the apparatus further includes:
a playback-disabling processing module, configured to, upon receiving a fifth input on a second target video picture among the M first video pictures, disable playback of the audio data in the first recording data among the M pieces of first recording data other than the first recording data corresponding to the second target video picture.
In this embodiment, M pieces of first recording data are obtained through the first obtaining module 1601, and the first screen-splitting mode of the M pieces of first recording data is obtained through the second obtaining module 1602; based on the first screen-splitting mode, the first determining module 1603 determines M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data, where the spatial audio position information includes a spatial audio phase that matches the picture display position corresponding to the first recording data. In this way, when the M pieces of first recording data are played synchronously according to the spatial audio position information, the picture display of different pieces of recording data matches the spatial-phase audio of the corresponding position, so that the user can identify the sound source position of each recording's picture, giving the pictures a stronger sense of space and depth and thereby improving the playback effect.
The recording data processing apparatus in the embodiments of this application can be an apparatus, or a component, integrated circuit, or chip in a terminal. The apparatus can be a mobile electronic device or a non-mobile electronic device. For example, the mobile electronic device can be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the non-mobile electronic device can be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine, which is not specifically limited in the embodiments of this application.
The recording data processing apparatus in the embodiments of this application can be an apparatus with an operating system. The operating system can be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of this application.
The recording data processing apparatus provided by the embodiments of this application can implement each process implemented by the method embodiment of Figure 1; to avoid repetition, details are not repeated here.
Optionally, as shown in Figure 17, an embodiment of this application further provides an electronic device 1700, which is a first electronic device, including a processor 1701, a memory 1702, and a program or instructions stored on the memory 1702 and runnable on the processor 1701. When executed by the processor 1701, the program or instructions implement each process of the above recording data processing method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of this application include the mobile electronic devices and non-mobile electronic devices described above.
Figure 18 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of this application.
The electronic device 1800 includes but is not limited to: a radio frequency unit 1801, a network module 1802, an audio output unit 1803, an input unit 1804, a sensor 1805, a display unit 1806, a user input unit 1807, an interface unit 1808, a memory 1809, and a processor 1810.
Those skilled in the art will understand that the electronic device 1800 can further include a power supply (such as a battery) supplying power to the components. The power supply can be logically connected to the processor 1810 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system. The electronic device structure shown in Figure 18 does not constitute a limitation on the electronic device; the electronic device can include more or fewer components than shown, combine certain components, or use a different component arrangement, which is not repeated here.
The processor 1810 is configured to obtain M pieces of first recording data and obtain a first screen-splitting mode of the M pieces of first recording data, where the first screen-splitting mode characterizes the way the displayed screen is divided when the pictures of the M pieces of first recording data are displayed simultaneously, each piece of first recording data corresponds to one picture display position, the picture display positions of different pieces of first recording data are at least partially different, and M is an integer greater than 1; and to determine, based on the first screen-splitting mode, M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data;
where the spatial audio position information includes a spatial audio phase, and the spatial audio phase matches the picture display position corresponding to the first recording data, so that the sound source position of the picture of the first recording data can be identified when the M pieces of first recording data are played.
In this embodiment, the processor 1810 obtains M pieces of first recording data and the first screen-splitting mode of the M pieces of first recording data, and determines, based on the first screen-splitting mode, M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data, where the spatial audio position information includes a spatial audio phase that matches the picture display position corresponding to the first recording data. In this way, when the M pieces of first recording data are played synchronously according to the spatial audio position information, the picture display of different pieces of recording data matches the spatial-phase audio of the corresponding position, so that the user can identify the sound source position of each recording's picture, giving the pictures a stronger sense of space and depth and thereby improving the playback effect.
Optionally, the radio frequency unit 1801 is configured to, when the first electronic device is communicatively connected to N second electronic devices and has target permission for the N second electronic devices, send a first recording control instruction to each second electronic device, where the target permission characterizes that the first electronic device can control the N second electronic devices to record collaboratively, the first recording control instruction is used to control the second electronic devices to start collaborative recording, and N is a positive integer less than or equal to M;
the radio frequency unit 1801 is configured to receive N pieces of recording data sent by the N second electronic devices based on the first recording control instruction, to obtain the M pieces of first recording data;
where, when the first electronic device participates in the collaborative recording, N is less than M, and the M pieces of first recording data include N pieces of first recording data obtained based on the N pieces of recording data and the first recording data obtained by the first electronic device during the collaborative recording with the N second electronic devices; when the first electronic device does not participate in the collaborative recording, N equals M, and the M pieces of first recording data include the N pieces of first recording data obtained based on the N pieces of recording data.
Optionally, the processor 1810 is configured to determine a second screen-splitting mode based on target information;
the display unit 1806 is configured to preview, according to the second screen-splitting mode, the recording data obtained during the collaborative recording;
where the target information includes at least one of the following:
a pre-stored first mapping relationship, which characterizes the mapping between the number of pieces of recording data in the collaborative recording and the screen-splitting mode;
device attribute information corresponding to each piece of recording data in the collaborative recording, which characterizes the role of the participating electronic device during the collaborative recording;
a recording position corresponding to each piece of recording data in the collaborative recording, which characterizes the position of the participating electronic device relative to the recorded object.
Optionally, the user input unit 1807 is configured to receive a first input from the user;
the processor 1810 is further configured to, upon receiving the first input from the user, separately store, based on the identified identification information of the N second electronic devices, the recording data sent by each second electronic device based on the first recording control instruction, to obtain the N pieces of first recording data;
where the first input is used to control the N second electronic devices to end the collaborative recording.
Optionally, the processor 1810 is configured to determine the first screen-splitting mode based on the number of pieces of first recording data;
the processor 1810 is configured to, when the M pieces of first recording data are obtained through collaborative recording, determine a target screen-splitting mode as the first screen-splitting mode, where the target screen-splitting mode is the screen-splitting mode used for previewing the recording data obtained during the collaborative recording, and the M pieces of first recording data include the recording data obtained during the collaborative recording;
the display unit 1806 is configured to display the pictures of the M pieces of first recording data in split screen based on a third screen-splitting mode;
the user input unit 1807 is configured to receive a second input by which the user adjusts the picture display positions corresponding to the M pieces of first recording data;
the processor 1810 is configured to, when the pictures of the M pieces of first recording data are displayed in split screen based on the third screen-splitting mode, if the second input by which the user adjusts the picture display positions corresponding to the M pieces of first recording data is received, determine the first screen-splitting mode based on the second input and the third screen-splitting mode.
Optionally, the display unit 1806 is configured to play M first video pictures in split screen based on the first screen-splitting mode, where the M first video pictures correspond one-to-one to the M pieces of first recording data, and a first video picture is the video picture corresponding to the video data in the first recording data, or a sound-effect picture corresponding to the audio data in the first recording data;
the audio output unit 1803 is configured to synchronously play the M pieces of audio data in the M pieces of first recording data based on the M pieces of spatial audio position information;
where, when the M pieces of audio data are played, for the audio data in each piece of first recording data, the sound source position characterized by the audio data matches the picture display position corresponding to the first recording data.
Optionally, the user input unit 1807 is configured to receive a third input;
the processor 1810 is further configured to pause the playback of the M pieces of first recording data upon receiving the third input;
the display unit 1806 is configured to highlight the border of each first video picture upon receiving the third input.
Optionally, the user input unit 1807 is configured to receive a fourth input on a first target video picture among the M first video pictures;
the audio output unit 1803 is configured to, upon receiving the fourth input on the first target video picture among the M first video pictures, play the audio data in the target recording data corresponding to the first target video picture at a first volume, and play the audio data in the first recording data other than the target recording data among the M pieces of first recording data at a second volume, where the first volume is greater than the second volume.
Optionally, the user input unit 1807 is configured to receive a fifth input on a second target video picture among the M first video pictures;
the processor 1810 is configured to, upon receiving the fifth input on the second target video picture among the M first video pictures, disable playback of the audio data in the first recording data among the M pieces of first recording data other than the first recording data corresponding to the second target video picture.
It should be understood that, in this embodiment of this application, the input unit 1804 can include a graphics processor (Graphics Processing Unit, GPU) 18041 and a microphone 18042; the graphics processor 18041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The display unit 1806 can include a display panel 18061, which can be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1807 includes a touch panel 18071 and other input devices 18072. The touch panel 18071, also known as a touch screen, can include two parts: a touch detection device and a touch controller. The other input devices 18072 can include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described again here. The memory 1809 can be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 1810 can integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1810.
An embodiment of this application further provides a readable storage medium, on which a program or instructions are stored. When executed by a processor, the program or instructions implement each process of the above recording data processing method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
An embodiment of this application further provides a chip, the chip including a processor and a communication interface, the communication interface being coupled to the processor, and the processor being configured to run a program or instructions to implement each process of the above recording data processing method embodiments and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of this application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes the element. In addition, it should be pointed out that the scope of the methods and apparatuses in the implementations of this application is not limited to performing functions in the order shown or discussed; it can also include performing functions in a substantially simultaneous manner or in the reverse order depending on the functions involved. For example, the described methods can be performed in an order different from that described, and various steps can also be added, omitted, or combined. Moreover, features described with reference to certain examples can be combined in other examples.
Through the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the essence of the technical solution of this application, or the part that contributes to the existing technology, can be embodied in the form of a computer software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and includes a number of instructions that cause a terminal (which can be a mobile phone, a computer, a server, or a network device) to execute the methods described in the various embodiments of this application.
The embodiments of this application have been described above with reference to the accompanying drawings, but this application is not limited to the specific implementations described above, which are illustrative rather than restrictive. Inspired by this application, a person of ordinary skill in the art can devise many further forms without departing from the purpose of this application and the scope protected by the claims, all of which fall within the protection of this application.

Claims (20)

  1. A recording data processing method, applied to a first electronic device, the method comprising:
    obtaining M pieces of first recording data, and obtaining a first screen-splitting mode of the M pieces of first recording data, wherein the first screen-splitting mode characterizes the way the displayed screen is divided when the pictures of the M pieces of first recording data are displayed simultaneously, each piece of first recording data corresponds to one picture display position, the picture display positions of different pieces of first recording data are at least partially different, and M is an integer greater than 1;
    determining, based on the first screen-splitting mode, M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data;
    wherein the spatial audio position information comprises a spatial audio phase, and the spatial audio phase matches the picture display position corresponding to the first recording data, so that a sound source position of the picture of the first recording data can be identified when the M pieces of first recording data are played.
  2. The method according to claim 1, wherein the obtaining M pieces of first recording data comprises:
    when the first electronic device is communicatively connected to N second electronic devices and the first electronic device has target permission for the N second electronic devices, sending a first recording control instruction to each of the second electronic devices, wherein the target permission characterizes that the first electronic device can control the N second electronic devices to perform collaborative recording, the first recording control instruction is used to control the second electronic devices to start collaborative recording, and N is a positive integer less than or equal to M;
    receiving N pieces of recording data sent by the N second electronic devices based on the first recording control instruction, to obtain the M pieces of first recording data;
    wherein, when the first electronic device participates in the collaborative recording, N is less than M, and the M pieces of first recording data comprise N pieces of first recording data obtained based on the N pieces of recording data and the first recording data obtained by the first electronic device during the collaborative recording with the N second electronic devices; when the first electronic device does not participate in the collaborative recording, N is equal to M, and the M pieces of first recording data comprise the N pieces of first recording data obtained based on the N pieces of recording data.
  3. The method according to claim 2, wherein the method further comprises:
    determining a second screen-splitting mode based on target information;
    previewing, according to the second screen-splitting mode, the recording data obtained during the collaborative recording;
    wherein the target information comprises at least one of the following:
    a pre-stored first mapping relationship, the first mapping relationship characterizing the mapping between the number of pieces of recording data in the collaborative recording and the screen-splitting mode;
    device attribute information corresponding to each piece of recording data in the collaborative recording, the device attribute information characterizing the role of an electronic device participating in the collaborative recording during the collaborative recording;
    a recording position corresponding to each piece of recording data in the collaborative recording, the recording position characterizing the position of an electronic device participating in the collaborative recording relative to the recorded object.
  4. The method according to claim 2, wherein the method further comprises:
    upon receiving a first input from a user, separately storing, based on identified identification information of the N second electronic devices, the recording data sent by each second electronic device based on the first recording control instruction, to obtain the N pieces of first recording data;
    wherein the first input is used to control the N second electronic devices to end the collaborative recording.
  5. The method according to claim 1, wherein the obtaining a first screen-splitting mode of the M pieces of first recording data comprises any one of the following:
    determining the first screen-splitting mode based on the number of pieces of first recording data;
    when the M pieces of first recording data are obtained through collaborative recording, determining a target screen-splitting mode as the first screen-splitting mode, wherein the target screen-splitting mode is the screen-splitting mode used for previewing the recording data obtained during the collaborative recording, and the M pieces of first recording data comprise the recording data obtained during the collaborative recording;
    when the pictures of the M pieces of first recording data are displayed in split screen based on a third screen-splitting mode, if a second input by which a user adjusts the picture display positions corresponding to the M pieces of first recording data is received, determining the first screen-splitting mode based on the second input and the third screen-splitting mode.
  6. The method according to claim 1, wherein after the determining, based on the first screen-splitting mode, M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data, the method further comprises:
    playing M first video pictures in split screen based on the first screen-splitting mode, wherein the M first video pictures correspond one-to-one to the M pieces of first recording data, and a first video picture is a video picture corresponding to video data in the first recording data, or a sound-effect picture corresponding to audio data in the first recording data;
    synchronously playing M pieces of audio data in the M pieces of first recording data based on the M pieces of spatial audio position information;
    wherein, when the M pieces of audio data are played, for the audio data in each piece of first recording data, the sound source position characterized by the audio data matches the picture display position corresponding to the first recording data.
  7. The method according to claim 6, wherein after the synchronously playing M pieces of audio data in the M pieces of first recording data based on the M pieces of spatial audio position information, the method further comprises:
    upon receiving a third input, pausing the playback of the M pieces of first recording data and highlighting the border of each first video picture.
  8. The method according to claim 6, wherein after the synchronously playing M pieces of audio data in the M pieces of first recording data based on the M pieces of spatial audio position information, the method further comprises:
    upon receiving a fourth input on a first target video picture among the M first video pictures, playing the audio data in the target recording data corresponding to the first target video picture at a first volume, and playing the audio data in the first recording data other than the target recording data among the M pieces of first recording data at a second volume, wherein the first volume is greater than the second volume.
  9. The method according to claim 6, wherein after the synchronously playing M pieces of audio data in the M pieces of first recording data based on the M pieces of spatial audio position information, the method further comprises:
    upon receiving a fifth input on a second target video picture among the M first video pictures, disabling playback of the audio data in the first recording data among the M pieces of first recording data other than the first recording data corresponding to the second target video picture.
  10. A recording data processing apparatus, applied to a first electronic device, the apparatus comprising:
    a first obtaining module, configured to obtain M pieces of first recording data, wherein M is an integer greater than 1;
    a second obtaining module, configured to obtain a first screen-splitting mode of the M pieces of first recording data, wherein the first screen-splitting mode characterizes the way the displayed screen is divided when the pictures of the M pieces of first recording data are displayed simultaneously, each piece of first recording data corresponds to one picture display position, and the picture display positions of different pieces of first recording data are at least partially different;
    a first determining module, configured to determine, based on the first screen-splitting mode, M pieces of spatial audio position information associated one-to-one with the M pieces of first recording data;
    wherein the spatial audio position information comprises a spatial audio phase, and the spatial audio phase matches the picture display position corresponding to the first recording data, so that a sound source position of the picture of the first recording data can be identified when the M pieces of first recording data are played.
  11. The apparatus according to claim 10, wherein the first obtaining module is specifically configured to:
    when the first electronic device is communicatively connected to N second electronic devices and the first electronic device has target permission for the N second electronic devices, send a first recording control instruction to each of the second electronic devices, wherein the target permission characterizes that the first electronic device can control the N second electronic devices to perform collaborative recording, the first recording control instruction is used to control the second electronic devices to start collaborative recording, and N is a positive integer less than or equal to M;
    receive N pieces of recording data sent by the N second electronic devices based on the first recording control instruction, to obtain the M pieces of first recording data;
    wherein, when the first electronic device participates in the collaborative recording, N is less than M, and the M pieces of first recording data comprise N pieces of first recording data obtained based on the N pieces of recording data and the first recording data obtained by the first electronic device during the collaborative recording with the N second electronic devices; when the first electronic device does not participate in the collaborative recording, N is equal to M, and the M pieces of first recording data comprise the N pieces of first recording data obtained based on the N pieces of recording data.
  12. The apparatus according to claim 11, wherein the apparatus further comprises:
    a second determining module, configured to determine a second screen-splitting mode based on target information;
    a display module, configured to preview, according to the second screen-splitting mode, the recording data obtained during the collaborative recording;
    wherein the target information comprises at least one of the following:
    a pre-stored first mapping relationship, the first mapping relationship characterizing the mapping between the number of pieces of recording data in the collaborative recording and the screen-splitting mode;
    device attribute information corresponding to each piece of recording data in the collaborative recording, the device attribute information characterizing the role of an electronic device participating in the collaborative recording during the collaborative recording;
    a recording position corresponding to each piece of recording data in the collaborative recording, the recording position characterizing the position of an electronic device participating in the collaborative recording relative to the recorded object.
  13. The apparatus according to claim 11, wherein the apparatus further comprises:
    a storage module, configured to, upon receiving a first input from a user, separately store, based on identified identification information of the N second electronic devices, the recording data sent by each second electronic device based on the first recording control instruction, to obtain the N pieces of first recording data;
    wherein the first input is used to control the N second electronic devices to end the collaborative recording.
  14. The apparatus according to claim 10, wherein the second obtaining module is specifically configured for any one of the following:
    determining the first screen-splitting mode based on the number of pieces of first recording data;
    when the M pieces of first recording data are obtained through collaborative recording, determining a target screen-splitting mode as the first screen-splitting mode, wherein the target screen-splitting mode is the screen-splitting mode used for previewing the recording data obtained during the collaborative recording, and the M pieces of first recording data comprise the recording data obtained during the collaborative recording;
    when the pictures of the M pieces of first recording data are displayed in split screen based on a third screen-splitting mode, if a second input by which a user adjusts the picture display positions corresponding to the M pieces of first recording data is received, determining the first screen-splitting mode based on the second input and the third screen-splitting mode.
  15. The apparatus according to claim 10, wherein the apparatus further comprises:
    a first playback module, configured to play M first video pictures in split screen based on the first screen-splitting mode, wherein the M first video pictures correspond one-to-one to the M pieces of first recording data, and a first video picture is a video picture corresponding to video data in the first recording data, or a sound-effect picture corresponding to audio data in the first recording data;
    a second playback module, configured to synchronously play M pieces of audio data in the M pieces of first recording data based on the M pieces of spatial audio position information;
    wherein, when the M pieces of audio data are played, for the audio data in each piece of first recording data, the sound source position characterized by the audio data matches the picture display position corresponding to the first recording data.
  16. The apparatus according to claim 15, wherein the apparatus further comprises:
    a pause processing module, configured to, upon receiving a third input, pause the playback of the M pieces of first recording data and highlight the border of each first video picture.
  17. The apparatus according to claim 15, wherein the apparatus further comprises:
    a third playback module, configured to, upon receiving a fourth input on a first target video picture among the M first video pictures, play the audio data in the target recording data corresponding to the first target video picture at a first volume, and play the audio data in the first recording data other than the target recording data among the M pieces of first recording data at a second volume, wherein the first volume is greater than the second volume.
  18. The apparatus according to claim 15, wherein the apparatus further comprises:
    a playback-disabling processing module, configured to, upon receiving a fifth input on a second target video picture among the M first video pictures, disable playback of the audio data in the first recording data among the M pieces of first recording data other than the first recording data corresponding to the second target video picture.
  19. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and runnable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the recording data processing method according to any one of claims 1 to 9.
  20. A readable storage medium, storing a program or instructions thereon, wherein the program or instructions, when executed by a processor, implement the steps of the recording data processing method according to any one of claims 1 to 9.
PCT/CN2023/092291 2022-05-09 2023-05-05 Recording data processing method, apparatus and electronic device WO2023216993A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210501027.3 2022-05-09
CN202210501027.3A CN114827686A (zh) 2022-05-09 2022-05-09 Recording data processing method, apparatus and electronic device

Publications (1)

Publication Number Publication Date
WO2023216993A1 (zh)

Family

ID=82513319

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/092291 2022-05-09 2023-05-05 Recording data processing method, apparatus and electronic device

Country Status (2)

Country Link
CN (1) CN114827686A (zh)
WO (1) WO2023216993A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827686A (zh) * 2022-05-09 2022-07-29 维沃移动通信有限公司 Recording data processing method, apparatus and electronic device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108462892B (zh) * 2018-03-26 2019-08-06 百度在线网络技术（北京）有限公司 Processing method and device for synchronized playback of image and audio
CN109194999B (zh) * 2018-09-07 2021-07-09 深圳创维-Rgb电子有限公司 Method, apparatus, device and medium for realizing co-location of sound and image
CN111131866B (zh) * 2019-11-25 2021-06-15 华为技术有限公司 Screen-casting audio and video playback method and electronic device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110121303A (ko) * 2010-04-30 2011-11-07 주식회사 에스원 Sound source position display apparatus and method thereof
CN107333144A (zh) * 2016-04-28 2017-11-07 深圳锐取信息技术股份有限公司 Multi-channel picture display method and apparatus based on a football match broadcast system
CN111246104A (zh) * 2020-01-22 2020-06-05 维沃移动通信有限公司 Video recording method and electronic device
CN113365013A (zh) * 2020-03-06 2021-09-07 华为技术有限公司 Audio processing method and device
CN113329138A (zh) * 2021-06-03 2021-08-31 维沃移动通信有限公司 Video shooting method, video playback method and electronic device
CN114827686A (zh) * 2022-05-09 2022-07-29 维沃移动通信有限公司 Recording data processing method, apparatus and electronic device

Also Published As

Publication number Publication date
CN114827686A (zh) 2022-07-29

Similar Documents

Publication Publication Date Title
US11895426B2 (en) Method and apparatus for capturing video, electronic device and computer-readable storage medium
WO2017173793A1 (zh) Video screen-casting method and apparatus
WO2020077855A1 (zh) Video shooting method and apparatus, electronic device, and computer-readable storage medium
US20200117353A1 (en) Theming for virtual collaboration
US20210321046A1 (en) Video generating method, apparatus, electronic device and computer storage medium
WO2020083021A1 (zh) Video recording method, video playback method, apparatus, device, and storage medium
WO2016177296A1 (zh) Method and apparatus for generating video
US8391671B2 (en) Information processing device and method, recording medium, and program
CN109660817B (zh) Video live streaming method, apparatus and system
JP2016509380A (ja) Video headphones, systems, platforms, methods, devices, and media
US8099460B2 (en) Information processing device and method, recording medium, and program
WO2015078199A1 (zh) Live broadcast interaction method, apparatus, client, server and system
CN110798622B (zh) Shared shooting method and electronic device
JP2014116797A (ja) Information processing apparatus, information processing method, information processing system, and program
CN109920065A (zh) Information display method, apparatus, device and storage medium
CN111343476A (zh) Video sharing method and apparatus, electronic device, and storage medium
WO2023216993A1 (zh) Recording data processing method, apparatus and electronic device
US20210311699A1 (en) Method and device for playing voice, electronic device, and storage medium
CN112261481B (zh) Interactive video creation method, apparatus, device, and readable storage medium
US20220078221A1 (en) Interactive method and apparatus for multimedia service
CN106412712A (zh) Video playback method and apparatus
JP4572615B2 (ja) Information processing apparatus and method, recording medium, and program
WO2023098011A1 (zh) Video playback method and electronic device
KR20170012109A (ko) Moving image reproduction program, apparatus, and method
US10298525B2 (en) Information processing apparatus and method to exchange messages

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23802765

Country of ref document: EP

Kind code of ref document: A1