CN111200745A - Viewpoint information acquisition method, apparatus, device and computer storage medium - Google Patents

Viewpoint information acquisition method, apparatus, device and computer storage medium

Info

Publication number
CN111200745A
CN111200745A (application CN201911418595.1A)
Authority
CN
China
Prior art keywords
viewpoint
information
viewpoint information
terminal
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911418595.1A
Other languages
Chinese (zh)
Inventor
尚家乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Inc
Original Assignee
Goertek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Inc filed Critical Goertek Inc
Priority to CN201911418595.1A
Publication of CN111200745A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42201 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] biosensors, e.g. heat sensor for presence detection, EEG sensors or any limb activity sensors worn by the user
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/42206 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
    • H04N21/42221 Transmission circuitry, e.g. infrared [IR] or radio frequency [RF]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213 Monitoring of end-user related data
    • H04N21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/65 Transmission of management data between client and server
    • H04N21/658 Transmission by the client directed to the server
    • H04N21/6587 Control parameters, e.g. trick play commands, viewpoint selection

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Analytical Chemistry (AREA)
  • Biophysics (AREA)
  • Neurosurgery (AREA)
  • Biomedical Technology (AREA)
  • Chemical & Material Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a viewpoint information acquisition method, which comprises the following steps: receiving a data acquisition instruction sent by a terminal; acquiring pose information of a user through a preset sensor; processing the pose information to obtain viewpoint information; and sending the viewpoint information to the terminal so that the terminal can adjust the video image according to the viewpoint information. The invention also discloses a viewpoint information acquisition apparatus, device and computer storage medium. The invention uses a TWS earphone to acquire and process the user's pose information to obtain viewpoint information usable for multi-viewpoint video adjustment, breaking the limitation of conventional viewpoint information acquisition methods, which require complex and expensive external auxiliary equipment.

Description

Viewpoint information acquisition method, apparatus, device and computer storage medium
Technical Field
The present invention relates to the field of wireless networks and video processing, and in particular, to a method, an apparatus, a device, and a computer storage medium for collecting viewpoint information.
Background
The viewpoint following technology can change the real-time viewpoint of the video animation according to the position and posture information of the user, so that the visual angle of the lens in the video animation is kept synchronous with the head position and posture of the user, and the user can have an immersive feeling.
The conventional method for realizing viewpoint information acquisition mostly depends on external auxiliary positioning equipment that is costly and structurally complex: the user pose information acquired by the auxiliary positioning equipment is transmitted to an expensive host computer, which processes the video information according to the viewpoint information and outputs it for display.
Disclosure of Invention
The invention mainly aims to provide a viewpoint information acquisition method, apparatus, device and computer storage medium, so as to solve the technical problems that conventional auxiliary equipment for viewpoint information acquisition is complex to use and expensive.
In order to achieve the above object, the present invention provides a viewpoint information collecting method applied to a TWS (True Wireless Stereo) headphone, the viewpoint information collecting method including the steps of:
receiving a data acquisition instruction sent by a terminal;
acquiring pose information of a user through a preset sensor, and processing the pose information to obtain viewpoint information;
and sending the viewpoint information to a terminal so that the terminal adjusts the video image according to the viewpoint information.
In an embodiment, after the step of receiving the data acquisition instruction sent by the terminal, the method includes:
detecting whether the headset is worn by a user;
and when the headset is worn, executing the step of acquiring the pose information of the user through a preset sensor and processing the pose information to obtain viewpoint information.
In an embodiment, the step of acquiring pose information of a user through a preset sensor, and processing the pose information to obtain viewpoint information includes:
acquiring acceleration information on the preset X, Y and Z axes through a preset sensor;
calculating pose information according to the acceleration information, wherein the pose information comprises: roll angle, pitch angle and yaw angle;
and converting the roll angle, the pitch angle and the yaw angle into a rotation matrix, and taking the rotation matrix as viewpoint information.
In one embodiment, the step of collecting acceleration information on the preset X, Y and Z axes through a preset sensor includes:
and when the acquisition time reaches the preset window time, filtering the data acquired in the preset window time to generate acceleration information.
In addition, to achieve the above object, the present invention provides a viewpoint information collecting method, which is applied to a terminal, and includes the steps of:
when multi-viewpoint video playing is detected, sending a data acquisition instruction to a TWS earphone;
receiving viewpoint information sent by a TWS earphone, and processing the viewpoint information to obtain an actual viewpoint position;
and rendering a video picture according to the actual viewpoint position, and outputting and displaying the rendered video picture.
In an embodiment, the step of processing the viewpoint information to obtain an actual viewpoint position includes:
extracting a rotation matrix in the viewpoint information;
inputting the rotation matrix into a preset formula to obtain an actual viewpoint position, wherein the preset formula is: P_world = R_G-sensor · R_offset · P_user + T_offset, where P_world is the actual viewpoint position, R_G-sensor is the rotation matrix, and P_user, R_offset and T_offset are preset calculation parameters.
In addition, in order to achieve the above object, the present invention further provides a viewpoint information collecting system, where the viewpoint information collecting system includes a TWS headset and a terminal, and the viewpoint information collecting system includes the following steps:
when detecting multi-viewpoint video playing, the terminal sends a data acquisition instruction to a TWS earphone;
the TWS earphone receives a data acquisition instruction sent by a terminal;
the TWS earphone collects the pose information of a user through a preset sensor and processes the pose information to obtain viewpoint information;
the TWS earphone sends the viewpoint information to a terminal so that the terminal adjusts a video image according to the viewpoint information;
the terminal receives viewpoint information sent by a TWS earphone, and processes the viewpoint information to obtain an actual viewpoint position;
and the terminal renders a video picture according to the actual viewpoint position and outputs and displays the rendered video picture.
In addition, to achieve the above object, the present invention further provides a viewpoint information collecting apparatus, including:
a receiving module, configured to receive a data acquisition instruction sent by a terminal;
an acquisition and processing module, configured to acquire pose information of a user through a preset sensor and process the pose information to obtain viewpoint information;
and a sending module, configured to send the viewpoint information to a terminal.
In addition, to achieve the above object, the present invention further provides a viewpoint information collecting device, where the viewpoint information collecting device includes a TWS headset and a terminal:
the TWS headset includes: a first memory, a first processor, and a computer program stored on the first memory and executable on the first processor, wherein the computer program when executed by the first processor implements the steps of the viewpoint information collecting method as described above.
The terminal includes: a second memory, a second processor, and a computer program stored on the second memory and executable on the second processor, wherein the computer program when executed by the second processor implements the steps of the viewpoint information collecting method as described above.
In addition, to achieve the above object, the present invention also provides a computer storage medium;
the computer storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the viewpoint information acquisition method as described above.
According to the viewpoint information acquisition method, apparatus, device and computer storage medium provided by the embodiments of the invention, the TWS earphone acquires the pose information of the user through a preset sensor, processes and calculates the pose information into viewpoint information, and sends the viewpoint information to the terminal, so that the terminal adjusts the animation viewpoint according to the viewpoint information and realizes viewpoint following.
Drawings
FIG. 1 is a schematic diagram of an apparatus in a hardware operating environment according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a viewpoint information collecting method according to a first embodiment of the present invention;
fig. 3 is a schematic flow chart of a viewpoint information collecting method according to a fifth embodiment of the present invention;
fig. 4 is a schematic diagram of device interaction of the viewpoint information collection method of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The conventional method for acquiring viewpoint information mostly depends on external auxiliary positioning equipment that is costly and structurally complex: the pose information of a user is acquired by the auxiliary positioning equipment and transmitted to an expensive host computer for processing. This approach is costly, is generally limited to operation in a fixed place, and cannot be implemented conveniently.
The invention provides a solution: a sensor preset in a TWS earphone acquires the pose information of the user, the pose information is processed and calculated to obtain viewpoint information, and the viewpoint information is sent to a terminal for video picture rendering so as to adjust the viewpoint of the video picture. The method therefore no longer relies on the complex and expensive auxiliary equipment required by conventional methods, and viewpoint following is realized conveniently and at low cost.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a TWS headset (also called a viewpoint information collecting device; the viewpoint information collecting device may be a stand-alone device or may be formed by combining a viewpoint information collecting apparatus with other devices) in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the TWS headset may include: a processor 1001, such as a Central Processing Unit (CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, wherein the communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may comprise an input unit such as a headset key, and optionally a standard wired or wireless interface. The network interface 1004 may optionally include a standard wired interface or a wireless interface (e.g., a Wireless Fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory), and may alternatively be a storage device separate from the processor 1001.
Optionally, the TWS headset may further include sensors (such as a light sensor or a motion sensor), an audio circuit and a Bluetooth module; the input unit may also include a display screen or a touch screen; besides Bluetooth, the wireless network interface may be Wi-Fi or the like. Of course, the TWS headset may also be configured with other sensors such as a gyroscope and an infrared sensor, which are not described herein again.
Those skilled in the art will appreciate that the TWS headset structure shown in fig. 1 does not constitute a limitation of the TWS headset and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
As shown in fig. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module and a computer program. The computer software product is stored in a computer storage medium (also called a computer medium, readable storage medium or the like; the computer storage medium may be a non-volatile readable storage medium, such as a RAM, a magnetic disk or an optical disk) and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device or the like) to execute the method according to the embodiments of the present invention.
In the TWS headset shown in fig. 1, the network interface 1004 is mainly used for connecting the terminal, and performing data communication with the terminal; and the processor 1001 may be configured to call the computer program stored in the memory 1005 and execute the steps of the viewpoint information collecting method provided by the following embodiments of the present invention.
Referring to fig. 2, in a first embodiment of a viewpoint information collecting method of the present invention, the viewpoint information collecting method includes:
and step S110, receiving a data acquisition instruction sent by the terminal.
The method of this embodiment is applied to a TWS headset. The TWS technical principle is that speakers are divided into a master speaker and a slave speaker: the master speaker can receive audio of a specific protocol and streaming-media control signals transmitted by sound source equipment such as a smart phone or a notebook computer, and can forward the audio to other TWS devices; the slave speaker receives audio from the master speaker. The TWS technology enables audio of a specific protocol to be transmitted from the master speaker to the slave speaker and played synchronously in the two separated speakers, thereby achieving a stereo effect.
As the user-information collecting and processing device, the TWS headset receives a data collecting instruction sent by a terminal. It can be understood that information transmission between the TWS headset and the terminal presupposes an established connection; in this embodiment, the connection between the TWS headset and the terminal may be a wireless connection such as Bluetooth or Wi-Fi, which is not specifically limited herein. The terminal is a terminal having functions of communicating with the TWS headset, data processing, data storage, data output and the like, and is not specifically limited herein: it may be a device with a display, such as a smart phone or a tablet computer, or a device without a display, such as a PC host, which can be connected to an external display device by wire or wirelessly to output and display the processed animation data.
After the TWS headset is connected to the terminal, it can be understood that the TWS headset may be either worn or not worn by the user, so the wearing state needs to be determined; the method for determining the wearing state will be introduced in subsequent embodiments and is not described here. The TWS headset receives a data acquisition instruction sent by the terminal, the data acquisition instruction includes a specific instruction frame agreed with the TWS headset, and the TWS headset can parse the specific instruction frame from the data acquisition instruction and perform an operation according to the specific instruction frame.
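As an illustration only, a minimal sketch of how the headset firmware might parse such an agreed instruction frame is given below. The frame layout (header byte, command code, XOR checksum) and the function name are hypothetical assumptions, since the patent does not specify the frame format.

```python
# Hypothetical instruction frame: [0xA5 header][command code][XOR checksum]
ACQ_START_CMD = 0x01  # assumed command code for "start data acquisition"

def parse_instruction_frame(frame: bytes) -> bool:
    """Return True if the frame is a valid 'start data acquisition' instruction."""
    if len(frame) != 3 or frame[0] != 0xA5:
        return False
    header, command, checksum = frame
    if checksum != (header ^ command):      # simple XOR integrity check
        return False
    return command == ACQ_START_CMD

# Example: the terminal sends the agreed frame over the wireless link
assert parse_instruction_frame(bytes([0xA5, 0x01, 0xA5 ^ 0x01]))
```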
And step S120, acquiring pose information of the user through a preset sensor, and processing the pose information to obtain viewpoint information.
After receiving a data acquisition instruction sent by the terminal, the TWS headset parses the specific instruction frame from the data acquisition instruction, executes the data acquisition operation according to the specific instruction frame, and acquires pose information of the TWS headset user (wearer), where the pose information describes the head-movement information of the user.
Specifically: in the TWS headset of this embodiment, the preset sensor is a sensor that can collect the pose information of the user when the user wears the TWS headset, and may be an acceleration sensor (G-sensor) or a gyroscope; it may be a single sensor working alone or a plurality of sensors working in cooperation to obtain more accurate information.
After the TWS headset establishes a communication connection with the terminal and is worn on the user's ear, it receives the data acquisition instruction sent by the terminal, parses it and executes it. The acceleration sensor acquires acceleration information on the X, Y and Z axes during user movement and caches it in a First-In-First-Out (FIFO) channel. The FIFO channel is a storage structure realized by registers or Random Access Memory (RAM), and the data in the FIFO channel is read and written in first-in-first-out order. When the downstream stage cannot process data output from the upstream stage in time, the acquired data are easily lost; because the amount of data acquired by the acceleration sensor is large and the downstream calculation is complex, the FIFO channel is used to temporarily store the data acquired by the acceleration sensor.
When the data acquisition time of the acceleration sensor reaches the window time (the window time refers to the acquisition time length set by the user), the tri-axial acceleration information acquired within the window time is preprocessed. The data acquired by the acceleration sensor often contain a portion that deviates far from the normal values; such outliers act like noise in a signal, and including them in the calculation would cause a large deviation in the result, so the data acquired by the acceleration sensor need to be preprocessed. The preprocessing includes, but is not limited to, filtering, smoothing and other processing. Because viewpoint information acquisition is real-time in nature, the animation needs to be adjusted and output for display promptly when the user's viewpoint moves; it can be understood that if each data acquisition period of the acceleration sensor were long, the subsequent animation adjustment and display would lag noticeably. Therefore, each data acquisition period of the acceleration sensor is limited to a short window time, and the data acquired within the window time are processed to obtain the viewpoint information.
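The buffering-and-window logic described above can be sketched as follows. The 20 ms sample period and 200 ms window time are placeholder values, and the read_accel() callback stands in for whatever driver interface the G-sensor actually exposes; both are assumptions, not details taken from the patent.

```python
from collections import deque
import time

SAMPLE_PERIOD_S = 0.02   # assumed sensor sample period (placeholder)
WINDOW_TIME_S = 0.2      # assumed "window time" set by the user (placeholder)

fifo = deque()           # first-in first-out buffer for raw (ax, ay, az) samples

def collect_window(read_accel):
    """Fill the FIFO for one window time, then hand the buffered samples downstream."""
    fifo.clear()
    start = time.monotonic()
    while time.monotonic() - start < WINDOW_TIME_S:
        fifo.append(read_accel())        # raw X/Y/Z acceleration tuple
        time.sleep(SAMPLE_PERIOD_S)
    return list(fifo)                    # passed to the preprocessing (filtering) stage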
Step S130, sending the viewpoint information to a terminal so that the terminal adjusts the video image according to the viewpoint information.
In this embodiment, the TWS headset is used as the device for collecting user information and performs information collection and part of the data processing; to avoid excessive computational pressure on the TWS headset and to fit daily usage scenarios, the adjustment of the video picture is implemented at the terminal.
In this embodiment, a TWS headset with a built-in acceleration sensor is used: when the user wears the TWS headset, the built-in acceleration sensor acquires the pose information of the user, viewpoint information reflecting the movement of the user's viewpoint is calculated from the pose information, and the viewpoint information is sent to the terminal so that the terminal adjusts the video image according to it.
Further, on the basis of the first embodiment of the present invention, a second embodiment of the viewpoint information collecting method of the present invention is proposed.
This embodiment describes steps following step S110 in the first embodiment; after the step of receiving the data acquisition instruction sent by the terminal, the method includes:
step a1, detecting whether the user is wearing.
The TWS headset, as the user-information collecting and processing device, receives a data collecting instruction sent by a terminal; it can be understood that information transmission between the two devices presupposes an established connection. In this embodiment, the connection mode between the TWS headset and the terminal is not specifically limited and may be a wireless connection such as Bluetooth or Wi-Fi.
After the TWS headset establishes a connection with the terminal, it can be understood that the TWS headset may be either worn or not worn by the user, so wearing-state detection is required. The detection is realized by a sensor, the kind of which is not specifically limited herein; it may be a capacitive proximity sensor, a photoelectric proximity sensor, an infrared sensor or the like. This embodiment takes a capacitive proximity sensor as an example. Even when the proximity sensor indicates that the headset is worn, the TWS headset may actually be held in the user's hand or placed in a pocket, which would cause the headset to collect erroneous user-movement information. Based on the fact that the two TWS earphones have a relatively fixed relative position after being worn, the acceleration sensors arranged in the two TWS earphones are used together with the capacitive proximity sensor for detection, which effectively avoids these special cases.
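One possible way to combine the capacitive proximity reading with a left/right accelerometer consistency check is sketched below. The tolerance value and the idea of comparing the two earbuds' acceleration magnitudes are assumptions for illustration; the patent only states that the two kinds of sensors are used together.

```python
import math

def is_worn(proximity_detected: bool, accel_left, accel_right, tol: float = 0.3) -> bool:
    """Treat the headset as worn only if the proximity sensor fires AND the two
    earbuds report consistent motion, which is unlikely when the earbuds are
    held in a hand or lying in a pocket."""
    if not proximity_detected:
        return False
    mag = lambda a: math.sqrt(a[0] ** 2 + a[1] ** 2 + a[2] ** 2)
    # after wearing, the two earbuds keep a roughly fixed relative position,
    # so their acceleration magnitudes should agree within a small tolerance
    return abs(mag(accel_left) - mag(accel_right)) < tol
```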
Step a2, when the headset is worn, executing the step of acquiring the pose information of the user through a preset sensor and processing the pose information to obtain viewpoint information.
When the TWS earphone detects that it is worn by the user, the step of acquiring the pose information of the user through a preset sensor and processing the pose information to obtain viewpoint information is executed.
In this embodiment, the wearing state of the TWS headset is determined before data collection, which ensures that the TWS headset is correctly worn in the user's ear when the data are collected and improves the accuracy of data collection.
Further, on the basis of the above-described embodiments of the present invention, a third embodiment of the viewpoint information collecting method of the present invention is proposed.
The present embodiment is a refinement of step S120 in the first embodiment, where the step of acquiring pose information of a user by a preset sensor and processing the pose information to obtain viewpoint information includes:
and b1, acquiring acceleration information on preset X, Y and Z three axes through a preset sensor.
When the user moves, the TWS earphone collects acceleration information on the preset X, Y and Z axes through the built-in acceleration sensor; the preset X, Y and Z axes are the three axes of the user viewpoint coordinate system. The acceleration information is the raw, unprocessed data collected by the sensor, and the raw data include some abnormal values that deviate far from the normal data. The acceleration information is cached in a FIFO channel, in which data are read and written in first-in-first-out order. When the downstream stage cannot process the data output from the upstream stage in time, the collected data are easily lost; because the amount of data collected by the acceleration sensor is large and the downstream calculation is complex, the FIFO channel is used to temporarily store the raw data collected by the acceleration sensor.
Step b2, calculating pose information according to the acceleration information, wherein the pose information comprises: roll angle, pitch angle, and yaw angle.
The TWS earphone calculates pose information according to the acceleration information, and the pose information specifically comprises a roll angle, a pitch angle and a yaw angle.
These angles can be obtained from the acceleration information by using mathematical methods such as the quaternion method, a first-order complementary filtering algorithm or a Kalman filtering algorithm; the pitch, yaw and roll angles reflect the relation of the user's viewpoint-center coordinate system relative to the world coordinate system.
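For a static or slowly moving head, roll and pitch can be estimated directly from the gravity component measured on the three axes, as in the sketch below. Yaw is not observable from an accelerometer alone, so a real implementation would obtain it from the sensor-fusion methods mentioned above (e.g. complementary or Kalman filtering with a gyroscope); the axis convention used here is an assumption, not a detail from the patent.

```python
import math

def accel_to_roll_pitch(ax: float, ay: float, az: float):
    """Estimate roll and pitch (radians) from one gravity-dominated accelerometer sample."""
    roll = math.atan2(ay, az)                               # rotation about the X axis
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))   # rotation about the Y axis
    return roll, pitch
```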
Step b3, converting the roll angle, the pitch angle and the yaw angle into a rotation matrix, and taking the rotation matrix as the viewpoint information.
The roll angle, pitch angle and yaw angle are converted into a rotation matrix. For example, let X, Y and Z be the three axes of a preset coordinate system and let P be a point in that coordinate system; P can be regarded as a point obtained by rotating around the X, Y and Z axes by certain angles, for example rotating by α around the X axis, by β around the Y axis, and by θ around the Z axis. The TWS headset uses the resulting rotation matrix as the viewpoint information.
The rotation matrix for rotating point P by α degrees around the X axis is Rx(α):
Rx(α) = [ 1, 0, 0; 0, cos α, -sin α; 0, sin α, cos α ]
The rotation matrix for rotating point P by β degrees around the Y axis is Ry(β):
Ry(β) = [ cos β, 0, sin β; 0, 1, 0; -sin β, 0, cos β ]
The rotation matrix for rotating point P by θ degrees around the Z axis is Rz(θ):
Rz(θ) = [ cos θ, -sin θ, 0; sin θ, cos θ, 0; 0, 0, 1 ]
The overall rotation matrix of point P is then R = Rz(θ)·Ry(β)·Rx(α).
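The conversion from the three angles to the rotation matrix can be written directly from the matrices above. The sketch below composes them in the fixed-axis order X, then Y, then Z (R = Rz·Ry·Rx), which is one common convention and an assumption about the order intended here.

```python
import numpy as np

def euler_to_rotation_matrix(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Build the 3x3 rotation matrix used as viewpoint information (angles in radians)."""
    ca, sa = np.cos(roll), np.sin(roll)      # alpha, rotation about X
    cb, sb = np.cos(pitch), np.sin(pitch)    # beta, rotation about Y
    ct, st = np.cos(yaw), np.sin(yaw)        # theta, rotation about Z
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[ct, -st, 0], [st, ct, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx
```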
In this embodiment, the earphone processes the pose information to obtain the viewpoint information and sends the viewpoint information to the terminal, which reduces the data computation on the terminal and makes the data processing more standardized and convenient.
Further, on the basis of the above-described embodiments of the present invention, a fourth embodiment of the viewpoint information collecting method of the present invention is proposed.
This embodiment is a refinement of step b1 in the third embodiment, and the step of collecting acceleration information on the preset X, Y and Z axes through a preset sensor includes:
and c1, when the acquisition time reaches the preset window time, filtering the data acquired in the preset window time to generate acceleration information.
Because viewpoint information acquisition is real-time in nature, the animation needs to be adjusted and output for display promptly when the user's viewpoint moves; it can be understood that if the data acquisition period of the acceleration sensor were long, the subsequent animation adjustment and display would lag noticeably. The acceleration data are therefore acquired within a short window time. When the data acquisition time of the acceleration sensor reaches the window time, the data acquired within that window are preprocessed. The data acquired by the acceleration sensor usually contain a portion that deviates far from the normal values; such outliers act like noise in a signal, and including them in the calculation would cause a large deviation in the result, so the data acquired by the acceleration sensor need to be preprocessed. The data preprocessing includes at least a filtering operation; digital filtering methods include amplitude-limiting filtering, median filtering, weighted-average filtering, moving-average filtering and the like, and are not specifically limited herein. The acceleration information is generated after the preprocessing operation is performed on the data acquired within the window time.
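A minimal sketch of the window-based preprocessing is given below, using simple outlier rejection followed by averaging as the concrete example; median filtering or amplitude-limiting filtering could be substituted. The window contents are assumed to be the raw FIFO samples from the previous step, and the 3-sigma threshold is a placeholder choice.

```python
import numpy as np

def preprocess_window(samples) -> np.ndarray:
    """Filter one window of raw (ax, ay, az) samples and return a single
    smoothed acceleration vector for that window."""
    data = np.asarray(samples, dtype=float)          # shape (N, 3)
    # reject samples more than 3 sigma away from the window mean (assumed rule)
    mean, std = data.mean(axis=0), data.std(axis=0) + 1e-9
    kept = data[(np.abs(data - mean) < 3 * std).all(axis=1)]
    return kept.mean(axis=0)                         # averaged X/Y/Z acceleration
```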
In this embodiment, collecting data in divided time windows improves the real-time performance of acceleration-information generation, and thus the real-time performance of the terminal's video-picture adjustment; removing noisy data through preprocessing methods such as filtering improves the accuracy of data collection.
Referring to fig. 3, in a fifth embodiment of a viewpoint information collection method according to the present invention, the viewpoint information collection method is applied to a terminal, and the viewpoint information collection method includes:
step S210, when the multi-viewpoint video playing is detected, a data acquisition instruction is sent to the TWS earphone.
When a user watches a multi-viewpoint video (a video whose picture can change according to the user's viewpoint) output and displayed by a terminal, the terminal sends a data acquisition instruction to the TWS earphone. This can be triggered actively, for example by the user clicking a specific function button, or the terminal can automatically detect whether the video being played is a multi-viewpoint video and, if so, send the data acquisition instruction to the TWS earphone, so that the TWS earphone acquires the pose information of the user through a preset sensor.
Step S220, receiving the viewpoint information sent by the TWS earphone, and processing the viewpoint information to obtain the actual viewpoint position.
In practical applications, because the user viewpoint-center coordinate system differs from the world coordinate system, the user viewpoint pose expressed in the user viewpoint-center coordinate system cannot be reflected directly in the world coordinate system; the actual viewpoint position can only be obtained through a corresponding conversion calculation.
And step S230, rendering a video picture according to the actual viewpoint position, and outputting and displaying the rendered video picture.
The terminal renders the video picture according to the calculated actual viewpoint position to generate a rendered video picture, and then outputs and displays it. Taking a terminal without a display function as an example, as shown in fig. 4, the terminal renders the video picture according to the actual viewpoint information to generate the rendered video picture, and then outputs the rendered picture to an external device for display via Wi-Fi, Type-C or other means.
The rendered picture can be displayed on the terminal's own display, or the terminal can be connected to an external display for output.
In this embodiment, the terminal sends a data acquisition instruction to the TWS headset, receives the viewpoint information returned by the TWS headset, processes the viewpoint information to obtain the actual viewpoint, and uses the actual viewpoint to render and display the video picture accordingly. The operations of receiving the TWS headset data and adjusting the video picture with those data are completed at the terminal, so the adjustment and output of a multi-viewpoint video picture are achieved conveniently and at low cost.
Further, on the basis of the fifth embodiment of the present invention, a sixth embodiment of the viewpoint information collecting method of the present invention is proposed.
This embodiment is a refinement of step S220 in the fifth embodiment, where the step of receiving viewpoint information sent by the TWS headset and processing the viewpoint information to obtain an actual viewpoint position includes:
step d1, extracting the rotation matrix in the viewpoint information.
Step d2, inputting the rotation matrix into a preset formula to obtain the actual viewpoint position, wherein the preset formula is: P_world = R_G-sensor · R_offset · P_user + T_offset, where P_world is the actual viewpoint position, R_G-sensor is the rotation matrix, and P_user, R_offset and T_offset are preset calculation parameters.
In practical applications, because the user viewpoint-center coordinate system differs from the world coordinate system, the user viewpoint pose expressed in the user viewpoint-center coordinate system cannot be reflected directly in the world coordinate system; the actual viewpoint position can only be obtained through a conversion calculation. The actual viewpoint position and the rotation matrix satisfy the relationship P_world = R_G-sensor · R_offset · P_user + T_offset, where R_G-sensor is the rotation matrix and P_user, R_offset and T_offset are preset calculation parameters; the actual viewpoint used for adjusting the animation viewpoint is obtained using this formula.
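Applying this relationship on the terminal side can be sketched as follows. The identity and zero defaults for R_offset, T_offset and P_user are placeholders; in practice these preset parameters would come from calibration, which the patent does not detail.

```python
import numpy as np

def actual_viewpoint_position(R_g_sensor: np.ndarray,
                              P_user: np.ndarray = None,
                              R_offset: np.ndarray = None,
                              T_offset: np.ndarray = None) -> np.ndarray:
    """Compute P_world = R_G-sensor @ R_offset @ P_user + T_offset."""
    P_user = np.zeros(3) if P_user is None else P_user
    R_offset = np.eye(3) if R_offset is None else R_offset
    T_offset = np.zeros(3) if T_offset is None else T_offset
    return R_g_sensor @ R_offset @ P_user + T_offset
```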
In this embodiment, the terminal extracts the rotation matrix from the viewpoint information and calculates the actual viewpoint position, which is then applied to adjusting the multi-viewpoint video picture.
In addition, the invention also provides a viewpoint information acquisition system, which comprises a TWS earphone and a terminal, and the viewpoint information acquisition system comprises the following steps:
when detecting multi-viewpoint video playing, the terminal sends a data acquisition instruction to a TWS earphone;
the TWS earphone receives a data acquisition instruction sent by a terminal;
the TWS earphone collects the pose information of a user through a preset sensor and processes the pose information to obtain viewpoint information;
the TWS earphone sends the viewpoint information to a terminal so that the terminal adjusts a video image according to the viewpoint information;
the terminal receives viewpoint information sent by a TWS earphone, and processes the viewpoint information to obtain an actual viewpoint position;
and the terminal renders a video picture according to the actual viewpoint position and outputs and displays the rendered video picture.
In addition, an embodiment of the present invention further provides a viewpoint information collecting device, where the viewpoint information collecting device includes:
a receiving module, configured to receive a data acquisition instruction sent by a terminal;
an acquisition and processing module, configured to acquire pose information of a user through a preset sensor and process the pose information to obtain viewpoint information;
and a sending module, configured to send the viewpoint information to a terminal.
In addition, the invention also provides a viewpoint information acquisition device, which comprises a TWS earphone and a terminal:
the TWS headset includes: a first memory, a first processor, and a computer program stored on the first memory and executable on the first processor, wherein the computer program when executed by the first processor implements the steps of the viewpoint information collecting method as described above.
The terminal includes: a second memory, a second processor, and a computer program stored on the second memory and executable on the second processor, wherein the computer program when executed by the second processor implements the steps of the viewpoint information collecting method as described above.
In addition, the embodiment of the invention also provides a computer storage medium.
The computer storage medium stores thereon a computer program, which when executed by a processor implements the operations in the viewpoint information collecting method provided by the above embodiments.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity/action/object from another entity/action/object without necessarily requiring or implying any actual such relationship or order between such entities/actions/objects; the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
For the apparatus embodiment, since it is substantially similar to the method embodiment, it is described relatively simply, and reference may be made to some descriptions of the method embodiment for relevant points. The above-described apparatus embodiments are merely illustrative, in that elements described as separate components may or may not be physically separate. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a computer storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A viewpoint information collection method is applied to a TWS headset, and the viewpoint information collection method includes the following steps:
receiving a data acquisition instruction sent by a terminal;
acquiring pose information of a user through a preset sensor, and processing the pose information to obtain viewpoint information;
and sending the viewpoint information to a terminal so that the terminal adjusts the video image according to the viewpoint information.
2. The viewpoint information collecting method according to claim 1, wherein, after the step of receiving the data collecting instruction transmitted from the terminal, the method comprises:
detecting whether the headset is worn by a user;
and when the headset is worn, executing the step of acquiring the pose information of the user through a preset sensor and processing the pose information to obtain viewpoint information.
3. The viewpoint information collecting method according to claim 1, wherein the step of collecting the pose information of the user by a preset sensor and processing the pose information to obtain the viewpoint information comprises:
acquiring acceleration information on the preset X, Y and Z axes through a preset sensor;
calculating pose information according to the acceleration information, wherein the pose information comprises: roll angle, pitch angle and yaw angle;
and converting the roll angle, the pitch angle and the yaw angle into a rotation matrix, and taking the rotation matrix as viewpoint information.
4. The viewpoint information collecting method according to claim 3, wherein the step of collecting acceleration information on the preset X, Y and Z axes through a preset sensor comprises:
and when the acquisition time reaches the preset window time, filtering the data acquired in the preset window time to generate acceleration information.
5. A viewpoint information collection method is applied to a terminal, and the viewpoint information collection method comprises the following steps:
when multi-viewpoint video playing is detected, sending a data acquisition instruction to a TWS earphone;
receiving viewpoint information sent by a TWS earphone, and processing the viewpoint information to obtain an actual viewpoint position;
and rendering a video picture according to the actual viewpoint position, and outputting and displaying the rendered video picture.
6. The viewpoint information collecting method according to claim 5, wherein the step of processing the viewpoint information to obtain an actual viewpoint position includes:
extracting a rotation matrix in the viewpoint information;
inputting the rotation matrix into a preset formula to obtain an actual viewpoint position, wherein the preset formula is: P_world = R_G-sensor · R_offset · P_user + T_offset, where P_world is the actual viewpoint position, R_G-sensor is the rotation matrix, and P_user, R_offset and T_offset are preset calculation parameters.
7. A viewpoint information collecting system is characterized in that the viewpoint information collecting system comprises a TWS earphone and a terminal, and the viewpoint information collecting system comprises the following steps:
when detecting multi-viewpoint video playing, the terminal sends a data acquisition instruction to a TWS earphone;
the TWS earphone receives a data acquisition instruction sent by a terminal;
the TWS earphone collects the pose information of a user through a preset sensor and processes the pose information to obtain viewpoint information;
the TWS earphone sends the viewpoint information to a terminal so that the terminal adjusts a video image according to the viewpoint information;
the terminal receives viewpoint information sent by a TWS earphone, and processes the viewpoint information to obtain an actual viewpoint position;
and the terminal renders a video picture according to the actual viewpoint position and outputs and displays the rendered video picture.
8. A viewpoint information collecting apparatus, characterized by comprising:
a receiving module, configured to receive a data acquisition instruction sent by a terminal;
an acquisition and processing module, configured to acquire pose information of a user through a preset sensor and process the pose information to obtain viewpoint information;
and a sending module, configured to send the viewpoint information to a terminal.
9. A viewpoint information collecting apparatus, comprising a TWS headset and a terminal:
the TWS headset includes: a first memory, a first processor, and a computer program stored on the first memory and executable on the first processor, wherein the computer program when executed by the first processor implements the viewpoint information collecting method steps of any one of claims 1 to 4.
The terminal includes: a second memory, a second processor, and a computer program stored on the second memory and executable on the second processor, wherein the computer program, when executed by the second processor, implements the viewpoint information collecting method steps of any one of claims 5 to 6.
10. A computer storage medium, characterized in that the computer storage medium has stored thereon a computer program which, when being executed by a processor, realizes the steps of the viewpoint information collecting method according to any one of claims 1 to 6.
CN201911418595.1A 2019-12-31 2019-12-31 Viewpoint information acquisition method, apparatus, device and computer storage medium Pending CN111200745A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911418595.1A CN111200745A (en) 2019-12-31 2019-12-31 Viewpoint information acquisition method, apparatus, device and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911418595.1A CN111200745A (en) 2019-12-31 2019-12-31 Viewpoint information acquisition method, apparatus, device and computer storage medium

Publications (1)

Publication Number Publication Date
CN111200745A true CN111200745A (en) 2020-05-26

Family

ID=70747499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911418595.1A Pending CN111200745A (en) 2019-12-31 2019-12-31 Viewpoint information acquisition method, apparatus, device and computer storage medium

Country Status (1)

Country Link
CN (1) CN111200745A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022161026A1 (en) * 2021-01-28 2022-08-04 Oppo广东移动通信有限公司 Action recognition method and apparatus, and electronic device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1604015A (en) * 2003-09-30 2005-04-06 佳能株式会社 Data conversion method and apparatus, and orientation measurement apparatus
CN102499616A (en) * 2011-09-28 2012-06-20 天津大学 Acceleration transducer based three-dimensional magnetic field positioning system and method of endoscope probe
CN103000161A (en) * 2012-12-14 2013-03-27 北京小米科技有限责任公司 Image displaying method and device and intelligent handheld terminal
CN107493531A (en) * 2017-08-04 2017-12-19 歌尔科技有限公司 A kind of head pose detection method, device and earphone
CN107533233A (en) * 2015-03-05 2018-01-02 奇跃公司 System and method for augmented reality
US20180124385A1 (en) * 2016-10-28 2018-05-03 Sharp Laboratories Of America, Inc. Visual communication and information display device with multiple view point rendering
CN109478288A (en) * 2016-07-15 2019-03-15 武礼伟仁株式会社 Virtual reality system and information processing system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1604015A (en) * 2003-09-30 2005-04-06 佳能株式会社 Data conversion method and apparatus, and orientation measurement apparatus
CN102499616A (en) * 2011-09-28 2012-06-20 天津大学 Acceleration transducer based three-dimensional magnetic field positioning system and method of endoscope probe
CN103000161A (en) * 2012-12-14 2013-03-27 北京小米科技有限责任公司 Image displaying method and device and intelligent handheld terminal
CN107533233A (en) * 2015-03-05 2018-01-02 奇跃公司 System and method for augmented reality
CN109478288A (en) * 2016-07-15 2019-03-15 武礼伟仁株式会社 Virtual reality system and information processing system
US20180124385A1 (en) * 2016-10-28 2018-05-03 Sharp Laboratories Of America, Inc. Visual communication and information display device with multiple view point rendering
CN107493531A (en) * 2017-08-04 2017-12-19 歌尔科技有限公司 A kind of head pose detection method, device and earphone

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022161026A1 (en) * 2021-01-28 2022-08-04 Oppo广东移动通信有限公司 Action recognition method and apparatus, and electronic device and storage medium

Similar Documents

Publication Publication Date Title
JP2020532914A (en) Virtual audio sweet spot adaptation method
CN111784765B (en) Object measurement method, virtual object processing method, virtual object measurement device, virtual object processing device, medium and electronic equipment
CN109685915B (en) Image processing method and device and mobile terminal
CN111147743B (en) Camera control method and electronic equipment
CN111970625B (en) Recording method and device, terminal and storage medium
WO2014185170A1 (en) Image processing device, image processing method, and program
CN111724412A (en) Method and device for determining motion trail and computer storage medium
CN110555815B (en) Image processing method and electronic equipment
CN115002295A (en) Image data synchronization method and device, terminal equipment and storage medium
CN111970626A (en) Recording method and apparatus, recording system, and storage medium
CN111200745A (en) Viewpoint information acquisition method, apparatus, device and computer storage medium
CN108401194B (en) Time stamp determination method, apparatus and computer-readable storage medium
WO2023124972A1 (en) Display state switching method, apparatus and system, electronic device and storage medium
CN109688064B (en) Data transmission method and device, electronic equipment and storage medium
CN111416948A (en) Image processing method and electronic equipment
CN110839108A (en) Noise reduction method and electronic equipment
WO2023108016A1 (en) Augmented reality using a split architecture
CN112927718B (en) Method, device, terminal and storage medium for sensing surrounding environment
CN115361636A (en) Sound signal adjusting method and device, terminal equipment and storage medium
CN109144461A (en) Sounding control method, device, electronic device and computer-readable medium
CN110830724B (en) Shooting method and electronic equipment
CN110708582B (en) Synchronous playing method, device, electronic equipment and medium
CN109951341B (en) Content acquisition method, device, terminal and storage medium
CN114339294A (en) Network jitter confirmation method, device, equipment and storage medium
CN110443841B (en) Method, device and system for measuring ground depth

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200526