WO2018094871A1 - Generation and output control method and apparatus for somatosensory control data - Google Patents


Info

Publication number
WO2018094871A1
WO2018094871A1 · PCT/CN2017/072091
Authority
WO
WIPO (PCT)
Prior art keywords
somatosensory
control data
file
control
somatosensory control
Prior art date
Application number
PCT/CN2017/072091
Other languages
English (en)
French (fr)
Inventor
包磊
Original Assignee
包磊
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 包磊 filed Critical 包磊
Publication of WO2018094871A1 publication Critical patent/WO2018094871A1/zh


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • the invention belongs to the field of multimedia technology, and particularly relates to a method and a device for generating and outputting somatosensory control data.
  • the prior art can simulate bodily sensations beyond sight and hearing, including perceptions of the human body such as touch, force, temperature, humidity, wind, and smell.
  • the realization principle of somatosensory simulation is to send a somatosensory control signal to a somatosensory sensing device attached to the surface of the human body, controlling the device to perform various types of somatosensory feedback on the body. For example, when shaking hands with a virtual character in a VR game, a force feedback control signal controls the magnitude and duration of the force generated by a force sensor located at the hand, producing a force feedback experience that matches the handshake behavior in the game.
  • in implementing the present invention, the inventor found that the prior art has at least the following technical defects:
  • the embodiments of the present invention provide a method and a device for generating and outputting somatosensory control data, so as to solve the problem in the prior art that producing a somatosensory experience matching the virtual environment created by video content, and combining or synchronizing its output, requires a large amount of post-processing and incurs a high time cost.
  • an embodiment of the present invention provides a method for generating a somatosensory control data, where the method includes:
  • for each image frame of the video file, a somatosensory control data packet associated with the image frame is separately generated;
  • all the generated somatosensory control data packets are sequentially arranged according to the playing order of the associated image frames to generate a somatosensory control file;
  • the somatosensory control file is issued, so that the somatosensory control device synchronously outputs the somatosensory control data packet in the somatosensory control file to the somatosensory sensing device based on a playback frame rate of the video file, to implement output control of the somatosensory sensing device.
  • the generating the somatosensory control data packet associated with the image frame separately includes:
  • the M and the N are each an integer greater than or equal to 1.
  • the generating the somatosensory control data packet associated with the image frame separately includes:
  • control information of the jth type of the somatosensory sensing device of the i-th body point associated with the image frame is not obtained, a preset character is written in each data bit corresponding to the control information;
  • the i and the j are integers greater than or equal to 1, and the i is less than or equal to the M, and the j is less than or equal to the N.
  • the preset character is 0.
  • a device for generating somatosensory control data comprising:
  • a first generating unit configured to separately generate, for each image frame of the video file, a somatosensory control data packet associated with the image frame;
  • a second generating unit configured to sequentially arrange all the generated somatosensory control data packets according to a playing sequence of the related image frames to generate a somatosensory control file
  • a publishing unit configured to issue the somatosensory control file, so that the somatosensory control device synchronously outputs the somatosensory control data packet in the somatosensory control file to the somatosensory sensing device based on a playback frame rate of the video file, to implement Output control of the somatosensory sensing device.
  • the first generating unit includes:
  • a first writing subunit configured to write the acquired somatosensory control data of the M body points to the somatosensory control data packet related to the image frame;
  • the M and the N are each an integer greater than or equal to 1.
  • the first generating unit further includes:
  • a second writing subunit configured to: if the control information of the jth type of the somatosensory sensing device of the i-th body point associated with the image frame is not obtained, each data bit corresponding to the control information Write preset characters;
  • the i and the j are integers greater than or equal to 1, and the i is less than or equal to the M, and the j is less than or equal to the N.
  • the preset character is 0.
  • in the embodiments of the present invention, a somatosensory control data packet is generated for each image frame of the video file, and all the generated somatosensory control data packets are sequentially arranged according to the playback order of the associated image frames to obtain a somatosensory control file, which is then released, so that the somatosensory control device synchronously outputs the somatosensory control data packets in the somatosensory control file to the somatosensory sensing device based on the playback frame rate of the video file, thereby implementing output control of the somatosensory sensing device. A somatosensory simulation matching the virtual environment created by the video content is thus realized simply and efficiently, saving the time cost incurred by a large amount of post-processing.
  • an embodiment of the present invention provides a method for output control of somatosensory control data, where the method includes:
  • loading a somatosensory control file associated with the currently played video file, the somatosensory control file being composed of sequentially arranged somatosensory control data packets, each of the somatosensory control data packets being associated in order with one image frame in the video file;
  • the synchronously outputting the somatosensory control data packet in the somatosensory control file to the somatosensory sensing device includes:
  • the M, the N, and the i are all integers greater than or equal to 1, and the i is less than or equal to M.
  • the synchronizing output of the somatosensory control data packet in the somatosensory control file to the somatosensory sensing device further includes:
  • if the control information corresponding to the jth type of somatosensory sensing device of the i-th body point in the somatosensory control data packet is a preset character in each data bit, the output of the control information is stopped while the image frame associated with the somatosensory control data packet is played;
  • the j is an integer greater than or equal to 1, and the j is less than or equal to N.
  • the preset character is 0.
  • the loading of the somatosensory control file associated with the currently played video file includes:
  • a somatosensory control file having the same file name as the currently played video file is loaded.
  • the loading the somatosensory control file associated with the currently played video file includes:
  • an embodiment of the present invention provides a device for generating somatosensory control data, where the device includes:
  • a first generating unit configured to generate, for each image frame of the video file, a somatosensory control data packet associated with the image frame
  • a second generating unit configured to sequentially arrange all the generated somatosensory control data packets according to a playing sequence of the related image frames to generate a somatosensory control file
  • a publishing unit configured to issue the somatosensory control file, so that the somatosensory control device synchronously outputs the somatosensory control data packet in the somatosensory control file to the somatosensory sensing device based on a playback frame rate of the video file, to implement Output control of the somatosensory sensing device.
  • the output unit includes:
  • a first parsing subunit configured to parse the body sense control data of the M body points from the somatosensory control data packet
  • a second parsing subunit configured to parse control information of the N types of the somatosensory sensing devices from the parsed somatosensory control data of the i-th body point;
  • a first output control subunit configured to output the parsed control information to the N types of somatosensory sensing devices on the i-th body point;
  • the M, the N, and the i are all integers greater than or equal to 1, and the i is less than or equal to M.
  • the output unit further includes:
  • a second output control subunit configured to: if the control information corresponding to the jth type of somatosensory sensing device of the i-th body point in the somatosensory control data packet is a preset character in each data bit, stop outputting the control information while the image frame associated with the somatosensory control data packet is played;
  • the j is an integer greater than or equal to 1, and the j is less than or equal to N.
  • the preset character is 0.
  • the loading unit is specifically configured to:
  • a somatosensory control file having the same file name as the currently played video file is loaded.
  • the loading unit is specifically configured to:
  • the somatosensory control device synchronously outputs the somatosensory control data packet related to each image frame in the somatosensory control file to the somatosensory sensing device based on the playback frame rate of the video file, thereby implementing output control of the somatosensory sensing device. A somatosensory simulation matching the virtual environment created by the video content is realized simply and efficiently, saving the time cost caused by a large amount of post-processing.
  • FIG. 1 is a flowchart of an implementation of a method for generating somatosensory control data according to an embodiment of the present invention
  • FIG. 2 is a specific implementation flowchart of a method S101 for generating somatosensory control data according to an embodiment of the present invention
  • FIG. 3 is a structural block diagram of a device for generating body feeling control data according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a computing node according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of an implementation of an output control method for somatosensory control data according to an embodiment of the present invention
  • FIG. 6 is a specific implementation flowchart of step S502 of the output control method for somatosensory control data according to an embodiment of the present invention;
  • FIG. 7 is a structural block diagram of an output control apparatus for somatosensory control data according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of another computing node according to an embodiment of the present invention.
  • the somatosensory control data is used to control the somatosensory sensing device, and the somatosensory control data is sent from the somatosensory control device in the form of a somatosensory control signal and transmitted to the somatosensory sensor electrically connected to the somatosensory control device.
  • the somatosensory control device and one or more somatosensory sensing devices can be combined with the wearable product, for example, attaching the somatosensory control device and the plurality of somatosensory sensing devices to the wearable body that wraps the entire body of the user.
  • the somatosensory control device outputs the somatosensory control data to the plurality of somatosensory sensing devices, so that the plurality of somatosensory sensing devices perform a somatosensory simulation on the user based on the received somatosensory control data.
  • a somatosensory control packet associated with the image frame is generated for each image frame of the video file.
  • the video file refers to a multimedia file that includes audio and video information. Further, the video file may be a VR video file, which is characterized by being capable of presenting a 3D, full-view visual effect.
  • the video file may adopt any video file format in the prior art, such as AVI, WMV, MPEG, or MOV. The original content is captured and acquired one frame at a time; playing the video file is therefore a matter of displaying the captured image frames frame by frame, combined with synchronized playback of the audio to achieve the audiovisual effect. Based on the data characteristic that a video file is substantially composed of image frames, in the embodiment of the present invention a somatosensory control data packet associated with each image frame in the video file is generated.
  • the somatosensory control data packet associated with an image frame may carry the somatosensory control data of one or more body points, and the somatosensory control data of each body point includes one Control information for one or more types of somatosensory sensing devices.
  • the process of generating an image frame related somatosensory control data packet is as follows:
  • S201: Acquire somatosensory control data of M body points related to the image frame, where the somatosensory control data of each body point includes control information of N types of somatosensory sensing devices.
  • M and the N are integers greater than or equal to 1.
  • when both M and N are equal to 1, the somatosensory control data packet associated with the image frame is used to control one somatosensory sensing device on one body point to generate a somatosensory simulation during playback of the image frame. When M is greater than 1 and N is equal to 1, the somatosensory control data packet associated with each image frame is used to control the same type of somatosensory sensing device on multiple body points to generate a somatosensory simulation while the image frame is being played. When both M and N are greater than 1, the somatosensory control data packet associated with each image frame is used to control a plurality of somatosensory sensing devices on a plurality of body points to simultaneously generate a somatosensory simulation during playback of the image frame.
  • At least one or more types of somatosensory sensing devices, as listed in Table 1, may be integrated according to product requirements.
  • a frame format of a somatosensory control data packet is also proposed, and the frame format comprises a start control frame as a frame header and a somatosensory data frame.
  • the start control frame includes 4 bytes of control data: control frame byte 1, control frame byte 2, control frame byte 3, and control frame byte 4.
  • Each control frame byte can be written into the corresponding data content according to the needs of the somatosensory control.
  • for example, a control frame byte can be used to indicate the number of body points covered by the somatosensory control data packet, or to indicate whether the somatosensory control data packet needs to be shielded by the control device.
  • the somatosensory data frame carries the somatosensory control data of several kinds of somatosensory sensing devices written in sequence, and also reserves data bits for writing the somatosensory control data of several other somatosensory sensing devices according to future development needs.
  • the somatosensory control data corresponding to each somatosensory sensing device may be used to indicate the somatosensory control mode of the device, or to describe the implementation of the device's somatosensory function. As shown in Table 3, each somatosensory sensing device corresponds to 4 bytes of somatosensory control data: somatosensory control mode byte 1, somatosensory function byte 1, somatosensory function byte 2, and somatosensory function byte 3.
  • Table 4 shows an example of the frame structure of a complete somatosensory control data packet.
  • the frame header of the somatosensory control data packet is a 4-byte start control frame, followed by a somatosensory data frame into which the somatosensory control data of 13 kinds of somatosensory sensing devices are written in sequence; the somatosensory control data of seven further kinds of somatosensory sensing devices may additionally be written, the somatosensory control data of each somatosensory sensing device occupying 4 bytes. Table 5 shows the somatosensory control data frame structures corresponding to the 13 kinds of somatosensory sensing devices mentioned in Table 1.
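The packet layout described above can be sketched in Python. Since Tables 2 to 5 are not reproduced in this text, the slot count (13 device types plus 7 reserved slots) and the contents of the 4-byte start control frame are assumptions drawn from the surrounding description, and `build_packet` is a hypothetical helper, not part of the patent:

```python
# Sketch of the described frame format: a 4-byte start control frame,
# then 4 bytes (1 mode byte + 3 function bytes) per somatosensory device.
NUM_SLOTS = 20          # 13 device types plus 7 reserved slots (assumed)
BYTES_PER_DEVICE = 4    # somatosensory control mode byte + 3 function bytes
HEADER_LEN = 4          # start control frame

def build_packet(header: bytes, device_data: dict) -> bytes:
    """Build one fixed-length somatosensory control data packet.

    device_data maps a 0-based device slot index to its 4 control bytes;
    unused slots stay filled with the preset character 0, so every packet
    has the same length regardless of how many devices are active.
    """
    if len(header) != HEADER_LEN:
        raise ValueError("start control frame must be 4 bytes")
    body = bytearray(NUM_SLOTS * BYTES_PER_DEVICE)   # preset character 0
    for slot, data in device_data.items():
        if not 0 <= slot < NUM_SLOTS or len(data) != BYTES_PER_DEVICE:
            raise ValueError("bad slot index or control data length")
        body[slot * BYTES_PER_DEVICE:(slot + 1) * BYTES_PER_DEVICE] = data
    return header + bytes(body)
```

Under these assumptions every packet is 4 + 20 * 4 = 84 bytes, whether one device or all thirteen carry live control information.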
  • by generating the somatosensory control data packet in the frame format described above, on the one hand, the somatosensory control data of a plurality of different types of somatosensory sensing devices can be integrated into a unified data format, effectively eliminating the fragmentation of sensing devices caused by different suppliers; on the other hand, writing the somatosensory control data of a plurality of different types of somatosensory sensing devices into one data packet for transmission reduces, to some extent, the occurrence of packet loss during data transmission and improves the reliability of data communication.
  • control information of the jth type of the somatosensory sensing device of the i-th body point associated with the image frame is not obtained, a preset character is written in each data bit corresponding to the control information;
  • the i and the j are integers greater than or equal to 1, and the i is less than or equal to the M, and the j is less than or equal to the N.
  • a character is preset. For any image frame, if a body point is not matched to a certain type of somatosensory simulation while the image frame is being played, each data bit occupied by that type of somatosensory sensing device for that body point in the somatosensory control data packet associated with the image frame is written with the preset character. For example, in a virtual indoor environment no wind-sense simulation is required, so in the somatosensory control data packet associated with the image frame the preset character is written into each data bit occupied by the wind sensing devices on all body points.
  • the frame-writing scheme of the somatosensory control data packet described above ensures that the data lengths of all somatosensory control data packets are consistent, so that the somatosensory control data packets associated with all image frames in the video file have the same length. This is beneficial for estimating the amount of data of the somatosensory control file associated with the entire video file, and helps to quickly detect packet loss during data verification.
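Because every packet shares one fixed length, the file-size estimate and packet-loss check described above reduce to simple arithmetic. The 84-byte packet length below is an assumption (a 4-byte header plus an assumed 20 device slots of 4 bytes each), and the function names are illustrative:

```python
PACKET_LEN = 84  # assumed: 4-byte start control frame + 20 slots * 4 bytes

def estimate_size(frame_count: int) -> int:
    """Estimate the somatosensory control file size for a video with the
    given number of image frames (one fixed-length packet per frame)."""
    return frame_count * PACKET_LEN

def check_control_file(data: bytes, expected_frames: int) -> bool:
    """Return True if the file splits into exactly one fixed-length packet
    per video frame; any size mismatch suggests truncation or packet loss."""
    return len(data) == expected_frames * PACKET_LEN
```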
  • the preset character is 0.
  • the process of playing a video file is actually a process of sequentially displaying the captured image frames frame by frame. Therefore, in the embodiment of the present invention, all the generated somatosensory control data packets are sequentially arranged according to the playback order of the image frames in the video file to generate a somatosensory control file.
  • the somatosensory control file is issued, so that the somatosensory control device synchronously outputs the somatosensory control data packet in the somatosensory control file to the somatosensory sensing device based on the playback frame rate of the video file to implement Output control of the somatosensory sensing device.
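Arranging the per-frame packets into a somatosensory control file can be sketched as a concatenation in playback order; `generate_control_file` is a hypothetical helper, and the equal-length check mirrors the fixed-length requirement described earlier:

```python
def generate_control_file(packets_by_frame: list) -> bytes:
    """Concatenate per-frame packets, in playback order, into one
    somatosensory control file. packets_by_frame[k] is the packet
    associated with image frame k of the video file."""
    lengths = {len(p) for p in packets_by_frame}
    if len(lengths) > 1:
        raise ValueError("all packets must share one fixed length")
    return b"".join(packets_by_frame)
```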
  • the issuer of the somatosensory control file may be a video content provider, a somatosensory device provider, or a third party somatosensory control data provider.
  • the issuer of the somatosensory control file may package the somatosensory control file together with the video content for publication, publish it together with the somatosensory device supplier's products, or publish it on an independent third-party somatosensory control data service platform.
  • the somatosensory control file is issued so that the user can obtain it through the relevant channel and import it into the somatosensory control device of a somatosensory product; the somatosensory control device can then synchronously output the somatosensory control data in the file to the somatosensory sensing device based on the playback frame rate of the video file, to effect output control of the somatosensory sensing device.
  • FIG. 3 is a block diagram showing the structure of the apparatus for generating the somatosensory control data according to the embodiment of the present invention. For the convenience of explanation, only the parts related to the present embodiment are shown.
  • the apparatus includes:
  • the first generating unit 31 is configured to generate, for each image frame of the video file, a somatosensory control data packet related to the image frame;
  • the second generating unit 32 is configured to sequentially arrange all the generated somatosensory control data packets according to the playing order of the related image frames to generate a somatosensory control file;
  • the issuing unit 33 is configured to issue the somatosensory control file, so that the somatosensory control device synchronously outputs the somatosensory control data packet in the somatosensory control file to the somatosensory sensing device based on the playback frame rate of the video file, to implement output control of the somatosensory sensing device.
  • the first generating unit 31 includes:
  • a first writing subunit configured to write the acquired somatosensory control data of the M body points into the somatosensory control data packet related to the image frame;
  • the M and the N are each an integer greater than or equal to 1.
  • the first generating unit 31 further includes:
  • a second writing subunit configured to: if the control information of the jth type of the somatosensory sensing device of the i-th body point associated with the image frame is not obtained, each data bit corresponding to the control information Write preset characters;
  • the i and the j are integers greater than or equal to 1, and the i is less than or equal to the M, and the j is less than or equal to the N.
  • the preset character is 0.
  • FIG. 4 is a schematic diagram of a computing node 400 according to an embodiment of the present invention. For the convenience of explanation, only the parts related to the present embodiment are shown.
  • the computing node 400 may be a host server with computing power, a personal computer (PC), or a portable computer or terminal.
  • the specific embodiment of the present invention does not limit the specific implementation of the computing node.
  • Compute node 400 includes:
  • a processor 410 a communication interface 420, a memory 430, and a bus 440.
  • the processor 410, the communication interface 420, and the memory 430 complete communication with each other via the bus 440.
  • the communication interface 420 is configured to communicate with a network element, such as a virtual machine management center, shared storage, and the like.
  • the processor 410 is configured to execute a program.
  • the program can include program code, the program code including computer operating instructions.
  • the processor 410 may be a Central Processing Unit (CPU), or an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
  • the memory 430 is configured to store a program.
  • the memory 430 may include a high speed RAM memory and may also include a non-volatile memory such as at least one disk memory.
  • the program may be specifically configured to execute a method for generating somatosensory control data, the method comprising:
  • for each image frame of the video file, a somatosensory control data packet associated with the image frame is separately generated;
  • all the generated somatosensory control data packets are sequentially arranged according to the playing order of the associated image frames to generate a somatosensory control file;
  • the somatosensory control file is issued, so that the somatosensory control device synchronously outputs the somatosensory control data packet in the somatosensory control file to the somatosensory sensing device based on a playback frame rate of the video file, to implement output control of the somatosensory sensing device.
  • the generating the somatosensory control data packet related to the image frame separately includes:
  • somatosensory control data of M body points related to the image frame is acquired, where the somatosensory control data of each body point includes control information of N types of somatosensory sensing devices;
  • the M and the N are each an integer greater than or equal to 1.
  • the generating a somatosensory control data packet related to the image frame separately includes:
  • control information of the jth type of the somatosensory sensing device of the i-th body point associated with the image frame is not obtained, a preset character is written in each data bit corresponding to the control information;
  • the i and the j are integers greater than or equal to 1, and the i is less than or equal to the M, and the j is less than or equal to the N.
  • the preset character is 0.
  • in the embodiments of the present invention, a somatosensory control data packet is generated for each image frame of the video file, and all the generated somatosensory control data packets are sequentially arranged according to the playback order of the associated image frames to obtain a somatosensory control file, which is then released, so that the somatosensory control device synchronously outputs the somatosensory control data packets in the somatosensory control file to the somatosensory sensing device based on the playback frame rate of the video file, thereby implementing output control of the somatosensory sensing device. A somatosensory simulation matching the virtual environment created by the video content is thus realized simply and efficiently, saving the time cost caused by a large amount of post-processing.
  • S501: Load a somatosensory control file associated with the currently played video file, where the somatosensory control file is composed of sequentially arranged somatosensory control data packets, each of which is associated in order with one image frame in the video file.
  • the user can acquire a somatosensory control file matching the video file through the relevant channel and import it into the somatosensory control device of a somatosensory product. While the video file is playing, the somatosensory control file is loaded by the somatosensory control device as an external file accompanying the video file.
  • the generation method, data format, and data content of the somatosensory control file have been elaborated in the above embodiments and are not repeated here.
  • a somatosensory control file having the same file name as the currently played video file is loaded.
  • a video file and a somatosensory control file with the same file name are regarded as associated files. While the video file is being prepared for playback, the file name of the video file is obtained, and the local folder or a network resource is searched for a somatosensory control file whose file name is the same as that of the video file; that is, the somatosensory control file associated with the video file is recognized and read by file name.
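The name-based association described above can be sketched as a lookup over candidate folders. The `.sc` extension and the function name are illustrative assumptions, since this text does not specify a file extension for somatosensory control files:

```python
from pathlib import Path

CONTROL_EXT = ".sc"  # hypothetical extension for somatosensory control files

def find_control_file(video_path: str, search_dirs: list):
    """Return the path of a somatosensory control file whose name (stem)
    matches the video's file name, searching the given folders in order;
    return None if no associated control file is found."""
    stem = Path(video_path).stem
    for d in search_dirs:
        candidate = Path(d) / (stem + CONTROL_EXT)
        if candidate.exists():
            return candidate
    return None
```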
  • since the loaded somatosensory control file consists of the somatosensory control data packets related to the image frames of the currently played video file arranged in sequence, each image frame in the video file is associated with one somatosensory control data packet.
  • when playing a video file, the somatosensory control device can, based on the playback frame rate of the video file, output the somatosensory control data packets one by one at the same frequency to the somatosensory sensing device, thereby realizing synchronized output of the video file and the somatosensory simulation.
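The frequency-matched output described above can be sketched as a timing loop that emits one packet per frame interval. Here `send` stands in for whatever channel pushes a packet to the somatosensory sensing devices, and scheduling against the start time (to avoid cumulative drift) is an implementation choice, not something the text prescribes:

```python
import time

def play_synchronized(packets, frame_rate_hz, send):
    """Emit one somatosensory control packet per frame interval, so that
    packet k goes out while image frame k of the video is displayed."""
    interval = 1.0 / frame_rate_hz
    start = time.monotonic()
    for k, packet in enumerate(packets):
        # wait until frame k's scheduled time relative to the start
        delay = start + k * interval - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        send(packet)
```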
  • the synchronizing output of the somatosensory control data packets in the somatosensory control file to the somatosensory sensing device includes:
  • the somatosensory control data of the M body points are parsed from the somatosensory control data packet.
  • the control information of the N types of somatosensory sensing devices is parsed from the parsed somatosensory control data of the i-th body point.
  • the parsed control information is respectively output to the N types of somatosensory sensing devices on the i-th body point.
  • the M, the N, and the i are all integers greater than or equal to 1, and the i is less than or equal to M.
  • in this way, the somatosensory control data can be accurately output to a specific somatosensory sensing device located at a specific body point; the M*N somatosensory sensing devices in total on the M body points of the whole body can then simultaneously output somatosensory simulation while the related image frames are played, bringing a full-body somatosensory experience.
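Parsing the M body points and N device types out of one packet can be sketched as follows. The flat (body point, device type) layout after a 4-byte header is an assumed encoding, since the exact byte order across body points is not spelled out in this text:

```python
HEADER_LEN = 4          # start control frame (assumed 4 bytes)
BYTES_PER_DEVICE = 4    # control info per device type (assumed 4 bytes)

def parse_packet(packet: bytes, m: int, n: int):
    """Split a packet into per-body-point, per-device control info.

    Returns info such that info[i][j] is the 4 control bytes for device
    type j at body point i (0-based), assuming the data frame stores the
    M body points consecutively, each with N device entries."""
    body = packet[HEADER_LEN:]
    expected = m * n * BYTES_PER_DEVICE
    if len(body) != expected:
        raise ValueError(f"expected {expected} data bytes, got {len(body)}")
    info = []
    for i in range(m):
        point = []
        for j in range(n):
            off = (i * n + j) * BYTES_PER_DEVICE
            point.append(body[off:off + BYTES_PER_DEVICE])
        info.append(point)
    return info
```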
  • a character is preset on the basis of the frame format of the somatosensory control data packet described above. For any image frame, if a certain body point is not matched to a certain type of somatosensory simulation while the image frame is being played, each data bit occupied by that type of somatosensory sensing device for that body point is written with the preset character, for example the preset character 0, ensuring that the data lengths of all somatosensory control data packets are consistent.
  • the synchronizing output of the somatosensory control data packet in the somatosensory control file to the somatosensory sensing device further includes:
  • control information corresponding to the jth type of the somatosensory sensing device of the i-th body point in the somatosensory control data packet is a preset character in each data bit, the somatosensory control data packet is associated with While the image frame is being played, the output of the control information is stopped;
  • the j is an integer greater than or equal to 1, and the j is less than or equal to N.
  • if the somatosensory control device detects that the control information corresponding to a specific type of somatosensory sensing device on a specific body point is the preset character in each data bit, it stops outputting that control information to the somatosensory sensing device on that body point. Thus, although the somatosensory control data packet associated with each image frame has the same data length, the somatosensory control device can still decide, according to the data content on each data bit, whether to output or stop outputting control information, flexibly implementing output control over multiple body points and multiple somatosensory sensing devices.
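The skip-on-preset-character behavior can be sketched on top of the parsed per-point structure; `dispatch` and `send` are hypothetical names, and 0 is used as the preset character as in the text:

```python
PRESET = 0  # the preset character filling inactive entries

def dispatch(info, send):
    """Output control info to each (body point, device type) pair, skipping
    entries whose data bits all hold the preset character, i.e. entries for
    which no somatosensory simulation is required at this frame."""
    for i, point in enumerate(info):
        for j, data in enumerate(point):
            if all(b == PRESET for b in data):
                continue            # stop output for this device now
            send(i, j, data)
```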
  • the somatosensory control device synchronously outputs the somatosensory control data packets related to the image frames in the somatosensory control file to the somatosensory sensing devices based on the playback frame rate of the video file, so as to implement output control of the somatosensory sensing devices; this simply and efficiently realizes somatosensory simulation matching the virtual environment created by the video content, saving the time cost incurred by extensive post-processing.
  • FIG. 7 is a block diagram showing the structure of the output control device for the somatosensory control data provided by the embodiment of the present invention. For the convenience of explanation, only the parts related to the present embodiment are shown.
  • the apparatus includes:
  • the loading unit 71 loads a somatosensory control file associated with the currently played video file, the somatosensory control file being obtained by arranging somatosensory control data packets, each of which is associated in sequence with one image frame in the video file;
  • the output unit 72 synchronously outputs the somatosensory control data packet in the somatosensory control file to the somatosensory sensing device based on the playback frame rate of the video file to implement output control of the somatosensory sensing device.
  • the output unit 72 includes:
  • a first parsing subunit, which parses the somatosensory control data of M body points from the somatosensory control data packet;
  • a second parsing subunit, which parses the control information of N types of somatosensory sensing devices from the parsed somatosensory control data of the i-th body point;
  • a first output control subunit, which outputs the parsed control information respectively to the N types of somatosensory sensing devices at the i-th body point;
  • the M, the N, and the i are all integers greater than or equal to 1, and the i is less than or equal to M.
  • the output unit 72 further includes:
  • a second output control subunit, which, if the control information corresponding to the j-th type of somatosensory sensing device at the i-th body point in the somatosensory control data packet is the preset character in each data bit, stops outputting that control information while the image frame associated with the somatosensory control data packet is played;
  • the j is an integer greater than or equal to 1, and the j is less than or equal to N.
  • the preset character is 0.
  • the loading unit 71 is specifically configured to:
  • a somatosensory control file having the same file name as the currently played video file is loaded.
  • the loading unit 71 is specifically configured to:
  • a somatosensory control file located in the same folder as the currently played video file is loaded.
  • FIG. 8 is a schematic diagram of a computing node 800 according to an embodiment of the present invention. For the convenience of explanation, only the parts related to the present embodiment are shown.
  • the computing node 800 may be a host server with computing capability, a personal computer (PC), or a portable computer or terminal.
  • the specific embodiment of the present invention does not limit the specific implementation of the computing node.
  • Computing node 800 includes:
  • a processor 810, a communications interface 820, a memory 830, and a bus 840.
  • the processor 810, the communication interface 820, and the memory 830 complete communication with each other via the bus 840.
  • the communication interface 820 is configured to communicate with a network element, such as a virtual machine management center, shared storage, and the like.
  • the processor 810 is configured to execute a program.
  • the program can include program code, the program code including computer operating instructions.
  • the processor 810 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
  • the memory 830 is configured to store a program.
  • the memory 830 may include a high speed RAM memory and may also include a non-volatile memory such as at least one disk memory.
  • the program may be specifically configured to perform a transmission control method for the somatosensory control data, the method comprising:
  • loading a somatosensory control file associated with the currently played video file, the somatosensory control file being arranged from somatosensory control data packets, each of the somatosensory control data packets being sequentially associated with one image frame in the video file;
  • the synchronously outputting the somatosensory control data packet in the somatosensory control file to the somatosensory sensing device comprises:
  • parsing the somatosensory control data of M body points from the somatosensory control data packet;
  • parsing the control information of N types of somatosensory sensing devices from the parsed somatosensory control data of the i-th body point;
  • outputting the parsed control information respectively to the N types of somatosensory sensing devices at the i-th body point;
  • the M, the N, and the i are all integers greater than or equal to 1, and the i is less than or equal to M.
  • the synchronously outputting the somatosensory control data packet in the somatosensory control file to the somatosensory sensing device further includes:
  • if the control information corresponding to the j-th type of somatosensory sensing device at the i-th body point in the somatosensory control data packet is the preset character in each data bit, output of the control information is stopped while the image frame associated with the somatosensory control data packet is played;
  • the j is an integer greater than or equal to 1, and the j is less than or equal to N.
  • the preset character is 0.
  • the loading the somatosensory control file associated with the currently played video file includes:
  • a somatosensory control file having the same file name as the currently played video file is loaded.
  • the loading the somatosensory control file associated with the currently played video file includes:
  • a somatosensory control file located in the same folder as the currently played video file is loaded.
  • each functional unit in the embodiment may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit, and the integrated unit may be implemented in the form of hardware. It can also be implemented in the form of a software functional unit.
  • the specific names of the respective functional units are only for the purpose of facilitating mutual differentiation, and are not intended to limit the scope of protection of the present application.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of modules or units is only a logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • An integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, can be stored in a computer readable storage medium.
  • the computer software product stored in the medium includes a number of instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform all or part of the steps of the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

A generation and output control method and apparatus for somatosensory control data, applicable to the field of multimedia technology. The method comprises: for each image frame of a video file, generating a somatosensory control data packet associated with that image frame (S101); arranging all of the generated somatosensory control data packets in sequence according to the playback order of the associated image frames, to generate a somatosensory control file (S102); and publishing the somatosensory control file, so that a somatosensory control device synchronously outputs the somatosensory control data packets in the somatosensory control file to a somatosensory sensing device based on the playback frame rate of the video file, thereby implementing output control of the somatosensory sensing device (S103). The method realizes somatosensory simulation matching the virtual environment created by the video content, and saves the time cost incurred by extensive post-production technical processing.

Description

Generation and output control method and apparatus for somatosensory control data
Technical Field
The present invention belongs to the field of multimedia technology, and in particular relates to a generation and output control method and apparatus for somatosensory control data.
Background Art
The prior art can simulate the other body senses possessed by humans, including simulating all perceptions of the human body such as touch, force, temperature, humidity, wind, and smell. The realization principle of somatosensory sensing is to send somatosensory control signals to somatosensory sensing devices attached to the surface of the human body, so as to control the somatosensory sensing devices to perform various types of somatosensory feedback to the human body. For example, when shaking hands with a virtual character in a VR game, a force feedback control signal controls the magnitude and duration of the force generated by a force sensor located at the hand, thereby bringing a force feedback experience matching the handshake behavior in the game. However, during research and development, the technical personnel found that the prior art has at least the following technical defect:
Existing video content providers and somatosensory device vendors provide their services independently, and the formats of video files and somatosensory sensing control signals also differ. Therefore, to produce a somatosensory experience matching the virtual environment created by video content, extensive post-production technical processing of the combination or synchronized output of the two is required, which consumes a great deal of time.
Summary of the Invention
In view of this, embodiments of the present invention provide a generation and output control method and apparatus for somatosensory control data, to solve the prior-art problem that producing a somatosensory experience matching the virtual environment created by video content requires extensive post-production technical processing of the combination or synchronized output of the two, which consumes a great deal of time.
In a first aspect, an embodiment of the present invention provides a method for generating somatosensory control data, the method comprising:
for each image frame of a video file, generating a somatosensory control data packet associated with that image frame;
arranging all of the generated somatosensory control data packets in sequence according to the playback order of the associated image frames, to generate a somatosensory control file;
publishing the somatosensory control file, so that a somatosensory control device synchronously outputs the somatosensory control data packets in the somatosensory control file to somatosensory sensing devices based on the playback frame rate of the video file, to implement output control of the somatosensory sensing devices.
As a first possible implementation of the first aspect, the generating a somatosensory control data packet associated with that image frame comprises:
acquiring somatosensory control data of M body points associated with the image frame, the somatosensory control data of each body point containing control information of N types of somatosensory sensing devices;
writing the acquired somatosensory control data of the M body points into the somatosensory control data packet associated with the image frame;
where M and N are both integers greater than or equal to 1.
With reference to the first possible implementation of the first aspect, in a second possible implementation, the generating a somatosensory control data packet associated with that image frame further comprises:
if control information of the j-th type of somatosensory sensing device at the i-th body point associated with the image frame is not acquired, writing a preset character into each data bit corresponding to that control information;
where i and j are both integers greater than or equal to 1, i is less than or equal to M, and j is less than or equal to N.
With reference to the second possible implementation of the first aspect, in a third possible implementation, the preset character is 0.
In a second aspect, an apparatus for generating somatosensory control data is provided, the apparatus comprising:
a first generation unit, configured to generate, for each image frame of a video file, a somatosensory control data packet associated with that image frame;
a second generation unit, configured to arrange all of the generated somatosensory control data packets in sequence according to the playback order of the associated image frames, to generate a somatosensory control file;
a publishing unit, configured to publish the somatosensory control file, so that a somatosensory control device synchronously outputs the somatosensory control data packets in the somatosensory control file to somatosensory sensing devices based on the playback frame rate of the video file, to implement output control of the somatosensory sensing devices.
As a first possible implementation of the second aspect, the first generation unit comprises:
an acquisition subunit, configured to acquire somatosensory control data of M body points associated with the image frame, the somatosensory control data of each body point containing control information of N types of somatosensory sensing devices;
a first writing subunit, configured to write the acquired somatosensory control data of the M body points into the somatosensory control data packet associated with the image frame;
where M and N are both integers greater than or equal to 1.
With reference to the first possible implementation of the second aspect, in a second possible implementation, the first generation unit further comprises:
a second writing subunit, configured to, if control information of the j-th type of somatosensory sensing device at the i-th body point associated with the image frame is not acquired, write a preset character into each data bit corresponding to that control information;
where i and j are both integers greater than or equal to 1, i is less than or equal to M, and j is less than or equal to N.
With reference to the second possible implementation of the second aspect, in a third possible implementation, the preset character is 0.
In the embodiments of the present invention, a somatosensory control data packet is generated for each image frame of a video file, and all of the generated somatosensory control data packets are arranged in sequence according to the playback order of the associated image frames to obtain a somatosensory control file, which is then published, so that a somatosensory control device synchronously outputs the somatosensory control data packets in the somatosensory control file to somatosensory sensing devices based on the playback frame rate of the video file, to implement output control of the somatosensory sensing devices; this simply and efficiently realizes somatosensory simulation matching the virtual environment created by the video content, saving the time cost incurred by extensive post-production technical processing.
In a third aspect, an embodiment of the present invention provides an output control method for somatosensory control data, the method comprising:
loading a somatosensory control file associated with a currently played video file, the somatosensory control file being obtained by arranging somatosensory control data packets, each of the somatosensory control data packets being associated in sequence with one image frame in the video file;
synchronously outputting, based on the playback frame rate of the video file, the somatosensory control data packets in the somatosensory control file to somatosensory sensing devices, to implement output control of the somatosensory sensing devices.
As a first possible implementation of the third aspect, the synchronously outputting the somatosensory control data packets in the somatosensory control file to the somatosensory sensing devices comprises:
parsing somatosensory control data of M body points from the somatosensory control data packet;
parsing control information of N types of somatosensory sensing devices from the parsed somatosensory control data of the i-th body point;
outputting the parsed control information respectively to the N types of somatosensory sensing devices at the i-th body point;
where M, N, and i are all integers greater than or equal to 1, and i is less than or equal to M.
With reference to the first possible implementation of the third aspect, in a second possible implementation, the synchronously outputting the somatosensory control data packets in the somatosensory control file to the somatosensory sensing devices further comprises:
if, in the somatosensory control data packet, the control information corresponding to the j-th type of somatosensory sensing device at the i-th body point is the preset character in every data bit, stopping output of that control information while the image frame associated with the somatosensory control data packet is played;
where j is an integer greater than or equal to 1, and j is less than or equal to N.
With reference to the second possible implementation of the third aspect, in a third possible implementation, the preset character is 0.
As a fourth possible implementation of the third aspect, the loading a somatosensory control file associated with the currently played video file comprises:
loading a somatosensory control file having the same file name as the currently played video file.
As a fifth possible implementation of the third aspect, the loading a somatosensory control file associated with the currently played video file comprises:
loading a somatosensory control file located in the same folder as the currently played video file.
In a fourth aspect, an embodiment of the present invention provides an apparatus for generating somatosensory control data, the apparatus comprising:
a first generation unit, configured to generate, for each image frame of a video file, a somatosensory control data packet associated with that image frame;
a second generation unit, configured to arrange all of the generated somatosensory control data packets in sequence according to the playback order of the associated image frames, to generate a somatosensory control file;
a publishing unit, configured to publish the somatosensory control file, so that a somatosensory control device synchronously outputs the somatosensory control data packets in the somatosensory control file to somatosensory sensing devices based on the playback frame rate of the video file, to implement output control of the somatosensory sensing devices.
As a first possible implementation of the fourth aspect, the output unit comprises:
a first parsing subunit, configured to parse somatosensory control data of M body points from the somatosensory control data packet;
a second parsing subunit, configured to parse control information of N types of somatosensory sensing devices from the parsed somatosensory control data of the i-th body point;
a first output control subunit, configured to output the parsed control information respectively to the N types of somatosensory sensing devices at the i-th body point;
where M, N, and i are all integers greater than or equal to 1, and i is less than or equal to M.
With reference to the first possible implementation of the fourth aspect, in a second possible implementation, the output unit further comprises:
a second output control subunit, configured to, if the control information corresponding to the j-th type of somatosensory sensing device at the i-th body point in the somatosensory control data packet is the preset character in every data bit, stop outputting that control information while the image frame associated with the somatosensory control data packet is played;
where j is an integer greater than or equal to 1, and j is less than or equal to N.
With reference to the second possible implementation of the fourth aspect, in a third possible implementation, the preset character is 0.
As a fourth possible implementation of the fourth aspect, the loading unit is specifically configured to:
load a somatosensory control file having the same file name as the currently played video file.
As a fifth possible implementation of the fourth aspect, the loading unit is specifically configured to:
load a somatosensory control file located in the same folder as the currently played video file.
In the embodiments of the present invention, the somatosensory control device synchronously outputs the somatosensory control data packets associated with the image frames in the somatosensory control file to the somatosensory sensing devices based on the playback frame rate of the video file, to implement output control of the somatosensory sensing devices; this simply and efficiently realizes somatosensory simulation matching the virtual environment created by the video content, saving the time cost incurred by extensive post-production technical processing.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of the implementation of the method for generating somatosensory control data provided by an embodiment of the present invention;
FIG. 2 is a flowchart of the specific implementation of S101 of the method for generating somatosensory control data provided by an embodiment of the present invention;
FIG. 3 is a structural block diagram of the apparatus for generating somatosensory control data provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a computing node provided by an embodiment of the present invention;
FIG. 5 is a flowchart of the implementation of the output control method for somatosensory control data provided by an embodiment of the present invention;
FIG. 6 is a flowchart of the specific implementation of S502 of the output control method for somatosensory control data provided by an embodiment of the present invention;
FIG. 7 is a structural block diagram of the output control apparatus for somatosensory control data provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of another computing node provided by an embodiment of the present invention.
Detailed Description
In the following description, for purposes of illustration rather than limitation, specific details such as particular system structures and techniques are set forth to provide a thorough understanding of the embodiments of the present invention. However, it will be clear to those skilled in the art that the present invention may also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the present invention.
In the embodiments of the present invention, the somatosensory control data is used to control somatosensory sensing devices. The somatosensory control data is sent from a somatosensory control device in the form of somatosensory control signals and is transmitted to somatosensory sensing devices electrically connected to the somatosensory control device. In an actual product form, the somatosensory control device and one or more somatosensory sensing devices may be combined with a wearable product; for example, the somatosensory control device and multiple somatosensory sensing devices may be attached to a wearable body that wraps the user's whole body, and the somatosensory control device outputs somatosensory control data to the multiple somatosensory sensing devices respectively, so that these somatosensory sensing devices perform somatosensory simulation on the user according to the somatosensory control data each receives.
First, the method for generating somatosensory control data provided by an embodiment of the present invention is described in detail. Its implementation flow is shown in FIG. 1:
In S101, for each image frame of a video file, a somatosensory control data packet associated with that image frame is generated.
The video file refers to a multimedia file containing audio and video information; further, the video file may also include a VR video file, characterized by presenting 3D and full-view visual effects. The prior art includes a variety of video file formats, such as AVI, WMV, MPEG, and MOV. When the video information in a video file is captured, its original content is shot and acquired frame by frame; therefore, playing a video file is in fact displaying the captured image frames frame by frame, combined with synchronized audio playback to achieve the audiovisual effect. Accordingly, based on the data characteristic that a video file essentially consists of image frames, in the embodiment of the present invention each image frame in the video file serves as a packet-adding node, and a somatosensory control data packet associated with that image frame is generated.
Further, as an embodiment of the present invention, a somatosensory control data packet associated with an image frame may carry somatosensory control data of one or more body points, and the somatosensory control data of each body point contains control information of one or more types of somatosensory sensing devices. Specifically, as shown in FIG. 2, the process of generating a somatosensory control data packet associated with an image frame is as follows:
S201: acquire somatosensory control data of M body points associated with the image frame, the somatosensory control data of each body point containing control information of N types of somatosensory sensing devices.
S202: write the acquired somatosensory control data of the M body points into the somatosensory control data packet associated with the image frame.
Here, M and N are both integers greater than or equal to 1.
Based on the embodiment shown in FIG. 2, when both M and N equal 1, the somatosensory control data packet associated with an image frame is used to control one type of somatosensory sensing device at one body point to produce somatosensory simulation while that image frame is played; when M is greater than 1 and N equals 1, the packet is used to control the same type of somatosensory sensing device at multiple body points to produce somatosensory simulation simultaneously; when both M and N are greater than 1, the packet is used to control multiple types of somatosensory sensing devices at multiple body points to produce somatosensory simulation simultaneously.
In the embodiment of the present invention, for each body point, according to product requirements, at least any one or more of the types of somatosensory sensing devices listed in Table 1 can be integrated:
Table 1
(Table 1 appears as an image in the original publication.)
Preferably, the embodiment of the present invention further proposes a frame format for the somatosensory control data packet, which consists of a start control frame serving as the frame header, and somatosensory data frames.
(1) The start control frame carries several bytes of control data:
Exemplarily, as shown in Table 2, the start control frame includes 4 bytes of control data: control frame byte 1, control frame byte 2, control frame byte 3, and control frame byte 4. Each control frame byte can be written with corresponding data content according to the needs of somatosensory control; for example, a control frame byte can indicate the number of body points covered by the somatosensory control data packet, or indicate whether the packet should be masked by the control device.
Table 2
(Table 2 appears as an image in the original publication.)
(2) The somatosensory data frames carry the somatosensory control data of several kinds of somatosensory sensors written in order, and also reserve data bits into which the somatosensory control data of other somatosensory sensors can be written according to future development needs:
In a somatosensory data frame, the somatosensory control data corresponding to each kind of somatosensory sensor can be used to indicate the somatosensory control mode of that sensor, or to describe the realization of that sensor's somatosensory function. Exemplarily, as shown in Table 3, each kind of somatosensory sensor corresponds to 4 bytes of somatosensory control data: somatosensory control mode byte 1, somatosensory function byte 1, somatosensory function byte 2, and somatosensory function byte 3.
Table 3
(Table 3 appears as an image in the original publication.)
Table 4 shows an example frame structure of a complete somatosensory control data packet. As can be seen from Table 4, the frame header of the packet is a 4-byte start control frame; the somatosensory data frames are written in order with the somatosensory control data of 13 types of somatosensory sensing devices, and also include 7 reserved function slots into which the somatosensory control data of 7 further types of somatosensory sensing devices can be written; the somatosensory control data of each type of somatosensory sensing device is 4 bytes.
Table 4
(Table 4 appears as an image in the original publication.)
Exemplarily, Table 5 shows the somatosensory control data frame structures corresponding to the 13 types of somatosensory sensing devices mentioned in Table 1:
Table 5
(Table 5 appears as an image in the original publication.)
In the communication processing of somatosensory sensing data, if the frame format described above is used to generate the somatosensory control data packet, on the one hand the somatosensory control data of many different types of somatosensory sensing devices can be integrated into a unified data format, effectively eliminating the fragmentation caused by sensing devices coming from different vendors; on the other hand, writing the somatosensory control data of many different types of somatosensory sensing devices into one data packet for transmission can also reduce, to a certain extent, packet loss during data transmission, improving the reliability of data communication.
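The frame layout of Table 4 can be sketched in code. The following is a minimal, illustrative Python model, assuming (as the table suggests) a 4-byte start control frame followed by 20 sensor slots (13 assigned types plus 7 reserved) of 4 bytes each; the function name and header contents are placeholders, not the patent's normative assignment:

```python
# Illustrative sketch of the Table 4 packet layout (not normative):
# 4-byte start control frame + 20 sensor slots x 4 bytes = 84 bytes total.

HEADER_BYTES = 4
SENSOR_SLOTS = 20          # 13 assigned sensor types + 7 reserved slots
BYTES_PER_SLOT = 4         # 1 control-mode byte + 3 function bytes
PACKET_BYTES = HEADER_BYTES + SENSOR_SLOTS * BYTES_PER_SLOT

def build_packet(header, slots):
    """Concatenate the start control frame and the sensor data frames."""
    if len(header) != HEADER_BYTES:
        raise ValueError("start control frame must be 4 bytes")
    if len(slots) != SENSOR_SLOTS or any(len(s) != BYTES_PER_SLOT for s in slots):
        raise ValueError("expected 20 slots of 4 bytes each")
    return header + b"".join(slots)

# Example: a packet whose sensor slots all hold the preset character 0.
packet = build_packet(b"\x01\x00\x00\x00", [b"\x00" * 4] * 20)
```

Fixing every slot at 4 bytes is what makes the packets constant-length, which the later padding and loss-detection discussion relies on.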
In addition, on the basis of the somatosensory control data packet frame format described above, as an embodiment of the present invention, in the process of generating the somatosensory control data packet:
if the control information of the j-th type of somatosensory sensing device at the i-th body point associated with the image frame is not acquired, a preset character is written into each data bit corresponding to that control information;
where i and j are both integers greater than or equal to 1, i is less than or equal to M, and j is less than or equal to N.
That is, a character is preset; for any image frame, if a certain type of somatosensory simulation is not matched to a certain body point while that image frame is played, then in the somatosensory control data packet associated with the image frame, the preset character is written into each data bit occupied by that type of somatosensory sensing device corresponding to that body point. For example, in a virtual indoor environment where wind simulation is not needed, the preset character is written into each data bit occupied by the wind sensing devices at all body points in the packet associated with the image frame.
The above frame writing scheme ensures that the data lengths of all somatosensory control data packets are consistent. In this way, the somatosensory control data packets associated with all image frames in the video file have the same data length, which facilitates estimating the data volume of the somatosensory control file associated with the whole video file, and helps to quickly discover packet loss during data verification.
Optionally, the preset character is 0.
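A minimal sketch of the padding scheme above, with the preset character taken as the byte 0; the helper name and the per-sensor field width are assumptions for illustration:

```python
PRESET = b"\x00"           # the preset character (0) from the embodiment
BYTES_PER_SENSOR = 4       # 4 bytes of control data per sensor type

def body_point_data(control_info, n_types):
    """Serialize one body point's data for sensor types 1..n_types in order.

    Types with no acquired control information (e.g. wind sensing in an
    indoor scene) are padded with the preset character on every data bit,
    so that every packet keeps the same length.
    """
    out = bytearray()
    for j in range(1, n_types + 1):
        info = control_info.get(j, PRESET * BYTES_PER_SENSOR)
        assert len(info) == BYTES_PER_SENSOR
        out += info
    return bytes(out)

# Only sensor type 2 (say, force) has data; types 1 and 3 are padded.
data = body_point_data({2: b"\x01\x02\x03\x04"}, n_types=3)
```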
In S102, all of the generated somatosensory control data packets are arranged in sequence according to the playback order of the associated image frames, to generate a somatosensory control file.
As described above, the process of playing a video file is actually the process of displaying the captured image frames in order, frame by frame. Therefore, in the embodiment of the present invention, all of the generated somatosensory control data packets are arranged in sequence according to the playback order of the image frames in the video file, to generate the somatosensory control file.
In S103, the somatosensory control file is published, so that the somatosensory control device synchronously outputs the somatosensory control data packets in the somatosensory control file to the somatosensory sensing devices based on the playback frame rate of the video file, thereby implementing output control of the somatosensory sensing devices.
In the embodiment of the present invention, the publisher of the somatosensory control file may be a video content provider, a somatosensory device vendor, or a third-party somatosensory control data provider. After the somatosensory control file is generated from the video file, the publisher may package and release it together with the video content, release it in cooperation with a somatosensory device vendor, or release it on an independent third-party somatosensory control data service platform, so that users obtain the somatosensory control file through relevant channels and import it into the somatosensory control device of a somatosensory product; the somatosensory control device can then synchronously output the somatosensory control data packets in the somatosensory control file to the somatosensory sensing devices based on the playback frame rate of the video file, to implement output control of the somatosensory sensing devices.
Corresponding to the method for generating somatosensory control data described in the above embodiments, FIG. 3 shows a structural block diagram of the apparatus for generating somatosensory control data provided by an embodiment of the present invention. For ease of explanation, only the parts related to this embodiment are shown.
Referring to FIG. 3, the apparatus includes:
a first generation unit 31, which generates, for each image frame of a video file, a somatosensory control data packet associated with that image frame;
a second generation unit 32, which arranges all of the generated somatosensory control data packets in sequence according to the playback order of the associated image frames, to generate a somatosensory control file;
a publishing unit 33, which publishes the somatosensory control file, so that a somatosensory control device synchronously outputs the somatosensory control data packets in the somatosensory control file to somatosensory sensing devices based on the playback frame rate of the video file, to implement output control of the somatosensory sensing devices.
Optionally, the first generation unit 31 includes:
an acquisition subunit, which acquires somatosensory control data of M body points associated with the image frame, the somatosensory control data of each body point containing control information of N types of somatosensory sensing devices;
a first writing subunit, which writes the acquired somatosensory control data of the M body points into the somatosensory control data packet associated with the image frame;
where M and N are both integers greater than or equal to 1.
Optionally, the first generation unit 31 further includes:
a second writing subunit, configured to, if control information of the j-th type of somatosensory sensing device at the i-th body point associated with the image frame is not acquired, write a preset character into each data bit corresponding to that control information;
where i and j are both integers greater than or equal to 1, i is less than or equal to M, and j is less than or equal to N.
Optionally, the preset character is 0.
Corresponding to the method for generating somatosensory control data described in the above embodiments, FIG. 4 shows a schematic diagram of a computing node 400 provided by an embodiment of the present invention. For ease of explanation, only the parts related to this embodiment are shown.
The computing node 400 may be a host server with computing capability, a personal computer (PC), or a portable computer or terminal; the specific embodiments of the present invention do not limit the specific implementation of the computing node. The computing node 400 includes:
a processor 410, a communications interface 420, a memory 430, and a bus 440.
The processor 410, the communications interface 420, and the memory 430 communicate with one another via the bus 440.
The communications interface 420 is configured to communicate with network elements, such as a virtual machine management center, shared storage, and the like.
The processor 410 is configured to execute a program.
Specifically, the program may include program code, the program code including computer operating instructions.
The processor 410 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
The memory 430 is configured to store the program. The memory 430 may include a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk memory.
The program may specifically be used to execute a method for generating somatosensory control data, the method comprising:
for each image frame of a video file, generating a somatosensory control data packet associated with that image frame;
arranging all of the generated somatosensory control data packets in sequence according to the playback order of the associated image frames, to generate a somatosensory control file;
publishing the somatosensory control file, so that a somatosensory control device synchronously outputs the somatosensory control data packets in the somatosensory control file to somatosensory sensing devices based on the playback frame rate of the video file, to implement output control of the somatosensory sensing devices.
Optionally, the generating a somatosensory control data packet associated with that image frame comprises:
acquiring somatosensory control data of M body points associated with the image frame, the somatosensory control data of each body point containing control information of N types of somatosensory sensing devices;
writing the acquired somatosensory control data of the M body points into the somatosensory control data packet associated with the image frame;
where M and N are both integers greater than or equal to 1.
Optionally, the generating a somatosensory control data packet associated with that image frame further comprises:
if control information of the j-th type of somatosensory sensing device at the i-th body point associated with the image frame is not acquired, writing a preset character into each data bit corresponding to that control information;
where i and j are both integers greater than or equal to 1, i is less than or equal to M, and j is less than or equal to N.
Optionally, the preset character is 0.
In the embodiments of the present invention, a somatosensory control data packet is generated for each image frame of a video file, and all of the generated packets are arranged in sequence according to the playback order of the associated image frames to obtain a somatosensory control file, which is then published, so that a somatosensory control device synchronously outputs the somatosensory control data packets in the somatosensory control file to somatosensory sensing devices based on the playback frame rate of the video file, to implement output control of the somatosensory sensing devices; this simply and efficiently realizes somatosensory simulation matching the virtual environment created by the video content, saving the time cost incurred by extensive post-production technical processing.
Next, the output control method for somatosensory control data provided by an embodiment of the present invention is described in detail. Its implementation flow is shown in FIG. 5:
S501: load a somatosensory control file associated with the currently played video file, the somatosensory control file being obtained by arranging somatosensory control data packets, each of the somatosensory control data packets being associated in sequence with one image frame in the video file.
As described above, the user can obtain the somatosensory control file matching the video file through relevant channels and import it into the somatosensory control device of a somatosensory product. While the video file is played, the somatosensory control file is loaded by the somatosensory control device in the form of, in effect, an external companion file to the video file. The generation method, data format, and data content of the somatosensory control file have been described in detail in the above embodiments and are not repeated here.
As for the loading method of the somatosensory control file, as an embodiment of the present invention, it can be implemented in the following way:
loading a somatosensory control file having the same file name as the currently played video file.
That is, a video file and a somatosensory control file with the same file name are regarded as associated files. When the video file is ready for playback, the file name of the video file is obtained, and a somatosensory control file with the same file name is found by searching a local folder or network resources; in other words, the somatosensory control file associated with the video file is identified and read by its file name.
As another embodiment of the present invention, the loading of the somatosensory control file can also be implemented in the following way:
loading a somatosensory control file located in the same folder as the currently played video file.
When the video file is ready for playback, a somatosensory control file located in the same folder as the video file is found by searching the local folder or a network resource folder based on the file name suffixes in the folder, and the obtained somatosensory control file is regarded as a file associated with the video file.
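The two association rules above (same file name, or same folder) can be sketched as follows; the helper name and the on-disk extension `.sc` are assumptions for illustration only:

```python
from pathlib import Path
from typing import Optional

SC_EXT = ".sc"  # hypothetical extension for somatosensory control files

def find_control_file(video_path: str, folder_files: list) -> Optional[str]:
    """Return the control file associated with the playing video.

    Rule 1: prefer a control file whose name stem equals the video's stem.
    Rule 2: otherwise, fall back to any control file in the same folder.
    """
    stem = Path(video_path).stem
    candidates = [f for f in folder_files if f.endswith(SC_EXT)]
    for f in candidates:
        if Path(f).stem == stem:
            return f                    # same-name match wins
    return candidates[0] if candidates else None

print(find_control_file("movie.mp4", ["other.sc", "movie.sc"]))  # movie.sc
```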
S502: based on the playback frame rate of the video file, synchronously output the somatosensory control data packets in the somatosensory control file to the somatosensory sensing devices, to implement output control of the somatosensory sensing devices.
In the embodiment of the present invention, since the loaded somatosensory control file is obtained by arranging in sequence the somatosensory control data packets associated with the image frames of the currently played video file, and each image frame in the video file is associated with one somatosensory control data packet, when the video file is played, the somatosensory control device can, based on the playback frame rate of the video file, output the somatosensory control data packets one by one to the somatosensory sensing devices at the same frequency, thereby realizing synchronized output of the video file and the somatosensory simulation.
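The frame-rate-locked output described here can be sketched as a simple pacing loop; the sleep-based timing and the `send` callback are simplifications for illustration (a real player would lock to the decoder's clock rather than sleeping):

```python
import time

def output_synchronously(packets, fps, send):
    """Emit one control packet per video frame at the playback frame rate.

    `send` stands in for the electrical link to the sensing devices.
    Each packet i goes out while image frame i is displayed.
    """
    interval = 1.0 / fps
    for packet in packets:
        start = time.monotonic()
        send(packet)
        # Wait out the remainder of this frame's display interval.
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, interval - elapsed))

sent = []
output_synchronously([b"p0", b"p1", b"p2"], fps=1000, send=sent.append)
```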
As an embodiment of the present invention, when each somatosensory control data packet contains control information of N types of somatosensory sensing devices at M body points, then, as shown in FIG. 6, the synchronously outputting the somatosensory control data packets in the somatosensory control file to the somatosensory sensing devices includes:
S601: parse the somatosensory control data of M body points from the somatosensory control data packet.
S602: parse the control information of N types of somatosensory sensing devices from the parsed somatosensory control data of the i-th body point.
S603: output the parsed control information respectively to the N types of somatosensory sensing devices at the i-th body point.
M, N, and i are all integers greater than or equal to 1, and i is less than or equal to M.
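Steps S601 to S603 can be sketched as nested slicing over the packet body, assuming each of the M body points carries N fixed-size 4-byte control fields laid out point by point (the offsets are illustrative, not the patent's normative layout):

```python
BYTES_PER_SENSOR = 4

def parse_packet(body, m_points, n_types):
    """Split a packet body into control info per body point and sensor type.

    Assumed layout: point 1's N fields, then point 2's, and so on.
    Returns result[i][j] = control info of sensor type j+1 at point i+1.
    """
    point_size = n_types * BYTES_PER_SENSOR
    assert len(body) == m_points * point_size
    result = []
    for i in range(m_points):
        point = body[i * point_size:(i + 1) * point_size]
        result.append([point[j * BYTES_PER_SENSOR:(j + 1) * BYTES_PER_SENSOR]
                       for j in range(n_types)])
    return result

# 2 body points x 2 sensor types x 4 bytes = 16 bytes.
parsed = parse_packet(bytes(range(16)), m_points=2, n_types=2)
```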
Based on the embodiment corresponding to FIG. 6, in the process of outputting a somatosensory control data packet, the somatosensory control device can accurately output the somatosensory control data to a specific somatosensory sensing device located at a specific body point; then, for the total of M*N somatosensory sensing devices on the M body points of the whole body, somatosensory simulation can be output simultaneously while the related image frames are played, bringing an all-around somatosensory experience.
In addition, as described above, a character is preset on the basis of the somatosensory control data packet frame format described above. For any image frame, if a certain type of somatosensory simulation is not matched to a certain body point while the image frame is played, then in the somatosensory control data packet associated with the image frame, the preset character (for example, 0) is written into each data bit occupied by that type of somatosensory sensing device corresponding to that body point, to ensure that the data lengths of all somatosensory control data packets are consistent. Under this scheme, the synchronously outputting the somatosensory control data packets in the somatosensory control file to the somatosensory sensing devices further includes:
if, in the somatosensory control data packet, the control information corresponding to the j-th type of somatosensory sensing device at the i-th body point is the preset character in every data bit, stopping output of that control information while the image frame associated with the somatosensory control data packet is played;
where j is an integer greater than or equal to 1, and j is less than or equal to N.
That is, while any image frame is played, if the somatosensory control device detects that the control information corresponding to a specific type of somatosensory sensing device at a specific body point is the preset character in every data bit, it stops outputting control information to that somatosensory sensing device at that moment. In this way, although the data length of the somatosensory control data packet associated with each image frame is the same, the somatosensory control device can still determine, from the data content on each data bit, whether to output or to stop outputting control information, thereby flexibly implementing output control over multiple body points and multiple somatosensory sensing devices.
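The per-field decision above, forward the control information or suppress it when every data bit equals the preset character, can be sketched as a small filter; `PRESET` and the two callbacks are illustrative names:

```python
PRESET = 0  # the preset character from the embodiment

def dispatch(control_info, send, stop):
    """Forward control info to a sensing device, or stop output entirely
    when every data bit equals the preset character."""
    if all(b == PRESET for b in control_info):
        stop()              # all-preset field: no simulation for this device
    else:
        send(control_info)  # real control data: drive the device

events = []
dispatch(b"\x00\x00\x00\x00",
         send=lambda d: events.append(("send", d)),
         stop=lambda: events.append(("stop",)))
dispatch(b"\x00\x05\x00\x00",
         send=lambda d: events.append(("send", d)),
         stop=lambda: events.append(("stop",)))
```

This is why constant-length packets do not force output to every device: the content of the field, not its presence, decides whether a device is driven.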
In the embodiment of the present invention, the somatosensory control device synchronously outputs the somatosensory control data packets associated with the image frames in the somatosensory control file to the somatosensory sensing devices based on the playback frame rate of the video file, to implement output control of the somatosensory sensing devices; this simply and efficiently realizes somatosensory simulation matching the virtual environment created by the video content, saving the time cost incurred by extensive post-production technical processing.
Corresponding to the output control method for somatosensory control data described in the above embodiments, FIG. 7 shows a structural block diagram of the output control apparatus for somatosensory control data provided by an embodiment of the present invention. For ease of explanation, only the parts related to this embodiment are shown.
Referring to FIG. 7, the apparatus includes:
a loading unit 71, which loads a somatosensory control file associated with the currently played video file, the somatosensory control file being obtained by arranging somatosensory control data packets, each of which is associated in sequence with one image frame in the video file;
an output unit 72, which synchronously outputs, based on the playback frame rate of the video file, the somatosensory control data packets in the somatosensory control file to the somatosensory sensing devices, to implement output control of the somatosensory sensing devices.
Optionally, the output unit 72 includes:
a first parsing subunit, which parses the somatosensory control data of M body points from the somatosensory control data packet;
a second parsing subunit, which parses the control information of N types of somatosensory sensing devices from the parsed somatosensory control data of the i-th body point;
a first output control subunit, which outputs the parsed control information respectively to the N types of somatosensory sensing devices at the i-th body point;
where M, N, and i are all integers greater than or equal to 1, and i is less than or equal to M.
Optionally, the output unit 72 further includes:
a second output control subunit, which, if the control information corresponding to the j-th type of somatosensory sensing device at the i-th body point in the somatosensory control data packet is the preset character in every data bit, stops outputting that control information while the image frame associated with the somatosensory control data packet is played;
where j is an integer greater than or equal to 1, and j is less than or equal to N.
Optionally, the preset character is 0.
Optionally, the loading unit 71 is specifically configured to:
load a somatosensory control file having the same file name as the currently played video file.
Optionally, the loading unit 71 is specifically configured to:
load a somatosensory control file located in the same folder as the currently played video file.
Corresponding to the transmission control method for somatosensory control data described in the above embodiments, FIG. 8 shows a schematic diagram of a computing node 800 provided by an embodiment of the present invention. For ease of explanation, only the parts related to this embodiment are shown.
The computing node 800 may be a host server with computing capability, a personal computer (PC), or a portable computer or terminal; the specific embodiments of the present invention do not limit the specific implementation of the computing node. The computing node 800 includes:
a processor 810, a communications interface 820, a memory 830, and a bus 840.
The processor 810, the communications interface 820, and the memory 830 communicate with one another via the bus 840.
The communications interface 820 is configured to communicate with network elements, such as a virtual machine management center, shared storage, and the like.
The processor 810 is configured to execute a program.
Specifically, the program may include program code, the program code including computer operating instructions.
The processor 810 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
The memory 830 is configured to store the program. The memory 830 may include a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk memory.
The program may specifically be used to execute a transmission control method for somatosensory control data, the method comprising:
loading a somatosensory control file associated with the currently played video file, the somatosensory control file being obtained by arranging somatosensory control data packets, each of the somatosensory control data packets being associated in sequence with one image frame in the video file;
synchronously outputting, based on the playback frame rate of the video file, the somatosensory control data packets in the somatosensory control file to the somatosensory sensing devices, to implement output control of the somatosensory sensing devices.
Further, the synchronously outputting the somatosensory control data packets in the somatosensory control file to the somatosensory sensing devices includes:
parsing the somatosensory control data of M body points from the somatosensory control data packet;
parsing the control information of N types of somatosensory sensing devices from the parsed somatosensory control data of the i-th body point;
outputting the parsed control information respectively to the N types of somatosensory sensing devices at the i-th body point;
where M, N, and i are all integers greater than or equal to 1, and i is less than or equal to M.
Further, the synchronously outputting the somatosensory control data packets in the somatosensory control file to the somatosensory sensing devices further includes:
if, in the somatosensory control data packet, the control information corresponding to the j-th type of somatosensory sensing device at the i-th body point is the preset character in every data bit, stopping output of that control information while the image frame associated with the somatosensory control data packet is played;
where j is an integer greater than or equal to 1, and j is less than or equal to N.
Further, the preset character is 0.
Further, the loading the somatosensory control file associated with the currently played video file includes:
loading a somatosensory control file having the same file name as the currently played video file.
Further, the loading the somatosensory control file associated with the currently played video file includes:
loading a somatosensory control file located in the same folder as the currently played video file.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the above division of functional units is used as an example. In practical applications, the above functions can be allocated to different functional units as needed; that is, the internal structure of the apparatus can be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units are only for the convenience of distinguishing them from one another and are not intended to limit the scope of protection of the present application. For the specific working processes of the units in the above apparatus, reference may be made to the corresponding processes in the foregoing apparatus embodiments, which are not repeated here.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. Professionals may use different means to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of the modules or units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the embodiments of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform all or part of the steps of the various embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.
The above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (20)

  1. A method for generating somatosensory control data, characterized in that the method comprises:
    for each image frame of a video file, generating a somatosensory control data packet associated with that image frame;
    arranging all of the generated somatosensory control data packets in sequence according to the playback order of the associated image frames, to generate a somatosensory control file;
    publishing the somatosensory control file, so that a somatosensory control device synchronously outputs the somatosensory control data packets in the somatosensory control file to a somatosensory sensing device based on the playback frame rate of the video file, to implement output control of the somatosensory sensing device.
  2. The method according to claim 1, characterized in that the generating a somatosensory control data packet associated with that image frame comprises:
    acquiring somatosensory control data of M body points associated with the image frame, the somatosensory control data of each body point containing control information of N types of somatosensory sensing devices;
    writing the acquired somatosensory control data of the M body points into the somatosensory control data packet associated with the image frame;
    wherein M and N are both integers greater than or equal to 1.
  3. The method according to claim 2, characterized in that the generating a somatosensory control data packet associated with that image frame further comprises:
    if control information of the j-th type of somatosensory sensing device at the i-th body point associated with the image frame is not acquired, writing a preset character into each data bit corresponding to that control information;
    wherein i and j are both integers greater than or equal to 1, i is less than or equal to M, and j is less than or equal to N.
  4. The method according to claim 3, characterized in that the preset character is 0.
  5. An output control method for somatosensory control data, characterized in that the method comprises:
    loading a somatosensory control file associated with a currently played video file, the somatosensory control file being obtained by arranging somatosensory control data packets, each of the somatosensory control data packets being associated in sequence with one image frame in the video file;
    synchronously outputting, based on the playback frame rate of the video file, the somatosensory control data packets in the somatosensory control file to a somatosensory sensing device, to implement output control of the somatosensory sensing device.
  6. The method according to claim 5, characterized in that the synchronously outputting the somatosensory control data packets in the somatosensory control file to the somatosensory sensing device comprises:
    parsing somatosensory control data of M body points from the somatosensory control data packet;
    parsing control information of N types of somatosensory sensing devices from the parsed somatosensory control data of the i-th body point;
    outputting the parsed control information respectively to the N types of somatosensory sensing devices at the i-th body point;
    wherein M, N, and i are all integers greater than or equal to 1, and i is less than or equal to M.
  7. The method according to claim 6, characterized in that the synchronously outputting the somatosensory control data packets in the somatosensory control file to the somatosensory sensing device further comprises:
    if, in the somatosensory control data packet, the control information corresponding to the j-th type of somatosensory sensing device at the i-th body point is the preset character in every data bit, stopping output of that control information while the image frame associated with the somatosensory control data packet is played;
    wherein j is an integer greater than or equal to 1, and j is less than or equal to N.
  8. The method according to claim 7, characterized in that the preset character is 0.
  9. The method according to claim 5, characterized in that the loading a somatosensory control file associated with the currently played video file comprises:
    loading a somatosensory control file having the same file name as the currently played video file.
  10. The method according to claim 5, characterized in that the loading a somatosensory control file associated with the currently played video file comprises:
    loading a somatosensory control file located in the same folder as the currently played video file.
  11. An apparatus for generating somatosensory control data, characterized in that the apparatus comprises:
    a first generation unit, configured to generate, for each image frame of a video file, a somatosensory control data packet associated with that image frame;
    a second generation unit, configured to arrange all of the generated somatosensory control data packets in sequence according to the playback order of the associated image frames, to generate a somatosensory control file;
    a publishing unit, configured to publish the somatosensory control file, so that a somatosensory control device synchronously outputs the somatosensory control data packets in the somatosensory control file to a somatosensory sensing device based on the playback frame rate of the video file, to implement output control of the somatosensory sensing device.
  12. The apparatus according to claim 11, characterized in that the first generation unit comprises:
    an acquisition subunit, configured to acquire somatosensory control data of M body points associated with the image frame, the somatosensory control data of each body point containing control information of N types of somatosensory sensing devices;
    a first writing subunit, configured to write the acquired somatosensory control data of the M body points into the somatosensory control data packet associated with the image frame;
    wherein M and N are both integers greater than or equal to 1.
  13. The apparatus according to claim 12, characterized in that the first generation unit further comprises:
    a second writing subunit, configured to, if control information of the j-th type of somatosensory sensing device at the i-th body point associated with the image frame is not acquired, write a preset character into each data bit corresponding to that control information;
    wherein i and j are both integers greater than or equal to 1, i is less than or equal to M, and j is less than or equal to N.
  14. The apparatus according to claim 13, characterized in that the preset character is 0.
  15. An output control apparatus for somatosensory control data, characterized in that the apparatus comprises:
    a loading unit, configured to load a somatosensory control file associated with a currently played video file, the somatosensory control file being obtained by arranging somatosensory control data packets, each of the somatosensory control data packets being associated in sequence with one image frame in the video file;
    an output unit, configured to synchronously output, based on the playback frame rate of the video file, the somatosensory control data packets in the somatosensory control file to a somatosensory sensing device, to implement output control of the somatosensory sensing device.
  16. The apparatus according to claim 15, characterized in that the output unit comprises:
    a first parsing subunit, configured to parse somatosensory control data of M body points from the somatosensory control data packet;
    a second parsing subunit, configured to parse control information of N types of somatosensory sensing devices from the parsed somatosensory control data of the i-th body point;
    a first output control subunit, configured to output the parsed control information respectively to the N types of somatosensory sensing devices at the i-th body point;
    wherein M, N, and i are all integers greater than or equal to 1, and i is less than or equal to M.
  17. The apparatus according to claim 16, characterized in that the output unit further comprises:
    a second output control subunit, configured to, if the control information corresponding to the j-th type of somatosensory sensing device at the i-th body point in the somatosensory control data packet is the preset character in every data bit, stop outputting that control information while the image frame associated with the somatosensory control data packet is played;
    wherein j is an integer greater than or equal to 1, and j is less than or equal to N.
  18. The apparatus according to claim 17, characterized in that the preset character is 0.
  19. The apparatus according to claim 15, characterized in that the loading unit is specifically configured to:
    load a somatosensory control file having the same file name as the currently played video file.
  20. The apparatus according to claim 15, characterized in that the loading unit is specifically configured to:
    load a somatosensory control file located in the same folder as the currently played video file.
PCT/CN2017/072091 2016-11-22 2017-01-22 Generation and output control method and apparatus for somatosensory control data WO2018094871A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611048448.6 2016-11-22
CN201611048448.6A CN106527730B (zh) Generation and output control method and apparatus for somatosensory control data

Publications (1)

Publication Number Publication Date
WO2018094871A1 true WO2018094871A1 (zh) 2018-05-31

Family

ID=58356856

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/072091 WO2018094871A1 (zh) 2016-11-22 2017-01-22 体感控制数据的生成、输出控制方法及装置

Country Status (2)

Country Link
CN (1) CN106527730B (zh)
WO (1) WO2018094871A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106454365B (zh) * 2016-11-22 2018-07-10 Bao Lei Encoding and decoding methods and encoding and decoding apparatuses for multimedia data
CN107491172B (zh) * 2017-08-16 2020-10-09 GoerTek Technology Co., Ltd. Somatosensory data acquisition method and apparatus, and electronic device
CN113058259B (zh) * 2021-04-22 2024-04-19 Hangzhou Dangbei Network Technology Co., Ltd. Somatosensory motion recognition method, system and storage medium based on game content

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103324488A (zh) * 2013-07-12 2013-09-25 Shandong Yichuang Electronics Co., Ltd. Method and apparatus for acquiring special-effect information
CN104093078A (zh) * 2013-11-29 2014-10-08 Tencent Technology (Beijing) Co., Ltd. Method and apparatus for playing a video file
CN104780403A (zh) * 2014-03-28 2015-07-15 Fang Xiaoxiang Script-based somatosensory video providing system and method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9019087B2 (en) * 2007-10-16 2015-04-28 Immersion Corporation Synchronization of haptic effect data in a media stream
KR20150110356A (ko) * 2014-03-21 2015-10-02 Immersion Corporation Systems and methods for converting sensor data into haptic effects
CN104007819B (zh) * 2014-05-06 2017-05-24 Tsinghua University Gesture recognition method and apparatus, and Leap Motion somatosensory control system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103324488A (zh) * 2013-07-12 2013-09-25 Shandong Yichuang Electronics Co., Ltd. Method and apparatus for acquiring special-effect information
CN104093078A (zh) * 2013-11-29 2014-10-08 Tencent Technology (Beijing) Co., Ltd. Method and apparatus for playing a video file
CN104780403A (zh) * 2014-03-28 2015-07-15 Fang Xiaoxiang Script-based somatosensory video providing system and method

Also Published As

Publication number Publication date
CN106527730B (zh) 2018-05-11
CN106527730A (zh) 2017-03-22

Similar Documents

Publication Publication Date Title
CN106507161B (zh) Video live-streaming method and live-streaming apparatus
CN114185829B (zh) Shared resources for multiple communication services
CN108897691A (zh) Data processing method, apparatus, server and medium based on an interface simulation service
WO2018094871A1 (zh) Generation and output control method and apparatus for somatosensory control data
JP6125680B2 (ja) Content providing method, system, and recording medium using a messenger
JP2016505960A5 (zh)
US9282382B2 Hint based media content streaming
US20140365342A1 Resource provisioning for electronic books
JP2008547083A (ja) Serialization of media transfer communications
JP6709697B2 (ja) Content streaming service method and system for reducing communication costs
WO2018095003A1 (zh) Real-time transmission method and apparatus for multimedia data
CN109873735A (zh) Performance testing method and apparatus for H5 pages, and computer device
CN110347349A (zh) Method, apparatus, and computer device for printing specified content in a browser
WO2017101416A1 (zh) Device, system, and method for multi-terminal content publishing
CN108712299A (zh) Method, apparatus, device, and computer storage medium for monitoring live-streaming delay
WO2018095001A1 (zh) Communication processing method and apparatus for somatosensory sensing data
WO2018095002A1 (zh) Encoding and decoding methods and encoding and decoding apparatuses for multimedia data
US10084849B1 System and method for providing and interacting with coordinated presentations
CN106055403B (zh) Method and system for rapid terminal configuration based on a USB storage device
US7925741B2 Black-box host stack latency measurement
CN110262856A (zh) Application data collection method, apparatus, terminal, and storage medium
CN104756502A (zh) Method, device, and system for video and audio sharing between communication devices
CN104079982B (zh) Method and apparatus for processing video seed files
KR20050029266A (ko) File format, playback apparatus, and method for converting presentation files created on a personal computer for use on network terminals, portable storage devices, and portable multimedia players
CN103248662B (zh) Data processing method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17873515

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17873515

Country of ref document: EP

Kind code of ref document: A1