WO2023078133A1 - Video playback method and device - Google Patents

Video playback method and device

Info

Publication number
WO2023078133A1
WO2023078133A1 (PCT/CN2022/127534, CN2022127534W)
Authority
WO
WIPO (PCT)
Prior art keywords
user, information, angle, orientation, playback
Prior art date
Application number
PCT/CN2022/127534
Other languages
French (fr)
Chinese (zh)
Inventor
王亚飞 (Wang Yafei)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023078133A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213 Monitoring of end-user related data
    • H04N21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/65 Transmission of management data between client and server
    • H04N21/658 Transmission by the client directed to the server
    • H04N21/6587 Control parameters, e.g. trick play commands, viewpoint selection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • the invention relates to the technical field of virtual reality, in particular to a video playing method and device.
  • VR technology is a technology that can construct panoramic videos by collecting data from the real world. With the advancement of VR technology, the panoramic videos it can build are getting ever closer to reality, and VR technology is widely used in daily life. Users can watch different areas of a VR video by adjusting its playback viewing angle. For example, in the fields of real estate sales and interior decoration, VR technology can be used to build panoramic videos of houses, so that users can view houses online through these panoramic videos and watch different rooms of a house by adjusting the viewing angle of the panoramic video.
  • However, the existing way for the user to adjust the playback viewing angle of a video lacks a sense of interaction, which reduces the user's sense of immersion and results in a poor user experience.
  • the present invention provides a video playing method and device, which can improve the user's sense of interaction when adjusting the viewing angle of video playing. To achieve the above object, the present invention adopts the following technical solutions:
  • the present invention provides a method for playing a video, the method comprising: collecting target information first, and then adjusting a video playback angle of view according to the target information.
  • the target information includes user face orientation information and user movement information
  • the user face orientation information is used to indicate the direction of the user's face orientation
  • the user movement information is used to indicate the user's movement direction.
  • With this method, when the user adjusts the playback viewing angle of the video, the user does not need to manipulate a mouse or touch the screen, but can directly change the face orientation and move the body, so that the playback viewing angle of the video is adjusted according to the user's actions, thereby improving the sense of interaction when the user adjusts the playback viewing angle, increasing the user's sense of immersion when watching the video, and improving the user experience.
  • the above-mentioned adjustment of the playback viewing angle of the video according to the target information may include: adjusting the orientation of the playback viewing angle according to the user's face orientation information; and adjusting the position of the playback viewing angle according to the user movement information.
  • In this way, when the user adjusts the playback viewing angle of the video, the user can not only adjust the orientation of the playback viewing angle by changing the face orientation, but also adjust the position of the playback viewing angle by moving. For example, if the user wants the orientation of the playback viewing angle to move upward, the user can raise the head. For another example, if the user wants the position of the playback viewing angle to move forward, the user can move forward.
  • the adjusting the orientation of the playback viewing angle according to the user's face orientation information includes: determining an orientation adjustment direction of the playback viewing angle according to the user's face orientation information; and adjusting the orientation of the playback viewing angle according to the orientation adjustment direction and a preset orientation adjustment amount.
  • the video playback device first determines that the orientation adjustment direction of the playback angle of view is downward according to the user's face orientation information, and then adjusts the orientation of the playback angle of view downward by 30 degrees.
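  • As a concrete illustration of this implementation, the following sketch adjusts the orientation of the playback viewing angle by a fixed preset amount in the direction derived from the user's face orientation. The function and constant names are hypothetical, and the sketch only assumes that the face orientation information has been resolved into a direction such as "up", "down", "left" or "right"; it is one possible reading of the implementation, not the disclosed code.

```python
# Hypothetical sketch: adjust the playback orientation by a preset step.
PRESET_ORIENTATION_STEP = 30.0  # degrees per adjustment (a preset orientation adjustment amount)

# direction -> (horizontal sign, vertical sign)
_DIRECTION_SIGNS = {
    "left": (-1, 0), "right": (1, 0),
    "up": (0, 1), "down": (0, -1),
}

def adjust_orientation_by_preset(playback_orientation, face_direction):
    """Shift the playback orientation (yaw, pitch) in the direction the user's face points."""
    h, v = _DIRECTION_SIGNS[face_direction]
    yaw, pitch = playback_orientation
    return (yaw + h * PRESET_ORIENTATION_STEP,
            pitch + v * PRESET_ORIENTATION_STEP)

# Example: the face orientation information indicates "down", so the orientation
# of the playback viewing angle is adjusted downward by 30 degrees.
print(adjust_orientation_by_preset((0.0, 0.0), "down"))  # -> (0.0, -30.0)
```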
  • the user's face orientation information can also be used to characterize the angle of the user's face orientation.
  • the adjusting the orientation of the playback viewing angle according to the user's face orientation information may include: determining adjustment information according to the user's face orientation information; and adjusting the orientation of the playback viewing angle according to the adjustment information. The adjustment information is used to characterize the direction and the angle of the orientation of the playback viewing angle after adjustment.
  • the adjusted orientation of the playing angle of view may be the same as the orientation of the user's face.
  • the adjusted orientation angle of the playback viewing angle and the orientation angle of the user's face may be the same.
  • For example, the video playback device first determines, according to the user's face orientation information, that the adjusted orientation of the playback viewing angle is 45 degrees upward, and then adjusts the orientation of the playback viewing angle upward by 75 degrees, so that the orientation of the playback viewing angle changes from 30 degrees downward to 45 degrees upward. It can be seen that the orientation (direction and angle) of the adjusted playback viewing angle is the same as the orientation of the user's face.
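  • A minimal sketch of this second implementation, assuming the face orientation information has already been resolved into a signed pitch angle (positive upward, negative downward) and using illustrative names only: the playback orientation is set equal to the face orientation, so the adjustment amount is simply the difference between the two.

```python
def adjust_orientation_to_face(playback_pitch_deg, face_pitch_deg):
    """Return the new playback pitch and the amount adjusted.

    The adjusted orientation of the playback viewing angle equals the
    orientation of the user's face (same direction and same angle).
    """
    adjustment = face_pitch_deg - playback_pitch_deg
    return face_pitch_deg, adjustment

# Example from the description: the playback viewing angle currently points
# 30 degrees downward (-30) and the user's face points 45 degrees upward (+45),
# so the orientation is adjusted upward by 75 degrees.
new_pitch, delta = adjust_orientation_to_face(-30.0, 45.0)
print(new_pitch, delta)  # -> 45.0 75.0
```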
  • the user can adjust the orientation of the playback viewing angle by changing the orientation of the face and returning it to the normal position. That is, the user can first change the face orientation, and then return to the previous face orientation (hereinafter referred to as the back-to-orientation process).
  • During the process in which the user changes the face orientation, the video playback device may generate the user's face orientation information according to the user's face orientation, and then adjust the orientation of the playback viewing angle according to the information. During the back-to-orientation process, the video playback device does not generate the user's face orientation information according to the user's face orientation, nor does it adjust the orientation of the playback viewing angle, so the orientation of the playback viewing angle is not adjusted back again.
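  • The behaviour around the back-to-orientation process can be pictured as a small gate that only reports deliberate turns away from the neutral, screen-facing orientation and ignores the return movement. The sketch below is an assumption about how such a filter might look (the class name, threshold, and one-axis simplification are all hypothetical), not the disclosed logic.

```python
class FaceOrientationGate:
    """Generate face orientation updates only for deliberate turns.

    A turn away from the neutral (screen-facing) orientation produces an
    update; the back-to-orientation process, i.e. returning toward neutral,
    is ignored so the playback orientation is not adjusted back again.
    """

    def __init__(self, neutral_deg=0.0, tolerance_deg=5.0):
        self.neutral = neutral_deg
        self.tolerance = tolerance_deg
        self.last = neutral_deg

    def update(self, face_deg):
        moving_away = abs(face_deg - self.neutral) > abs(self.last - self.neutral)
        self.last = face_deg
        if moving_away and abs(face_deg - self.neutral) > self.tolerance:
            return face_deg      # emit user face orientation information
        return None              # back-to-orientation (or noise): no adjustment

gate = FaceOrientationGate()
print(gate.update(45.0))  # deliberate turn -> 45.0 is reported
print(gate.update(0.0))   # returning to the previous orientation -> None
```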
  • adjusting the position of the playback viewing angle according to the user movement information includes: determining a position adjustment direction of the playback viewing angle according to the user movement information; and adjusting the position of the playback viewing angle according to the position adjustment direction and a preset position adjustment amount.
  • the video playback device first determines that the position adjustment direction of the playback angle of view is forward according to the user movement information, and then adjusts the position of the playback angle of view forward by N.
  • the user movement information is also used to characterize the distance traveled by the user.
  • the adjusting the position of the playback viewing angle according to the user movement information includes: determining a position adjustment direction of the playback viewing angle and a position adjustment distance of the playback viewing angle according to the user movement information; and adjusting the position of the playback viewing angle according to the position adjustment direction and the position adjustment distance. The position adjustment distance = user movement distance * (second distance / first distance), where the first distance is the vertical distance between the user and the boundary of the first area in the user's moving direction, the second distance is the vertical distance between the position of the playback viewing angle and the boundary of the second area in the user's moving direction, the second area is the two-dimensional plane area corresponding to the video, * is the multiplication sign, and / is the division sign.
  • For example, the user moves backward by a distance M1. The video playback device first determines, according to the user movement information, that the position adjustment direction of the playback viewing angle is backward and that the position adjustment distance is M2, and then adjusts the position of the playback viewing angle backward by M2, where M2 = M1 * (b / a), a is the first distance, that is, the vertical distance between the user and the boundary of the first area behind the user, and b is the second distance, that is, the vertical distance between the position of the playback viewing angle and the boundary of the second area behind it.
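  • The proportional mapping above can be sketched as follows, for a single movement axis and with illustrative function names and numbers; the only relationship taken from the description is position adjustment distance = user movement distance * (second distance / first distance).

```python
def position_adjustment_distance(user_move_distance, first_distance, second_distance):
    """Position adjustment distance = user movement distance * (second distance / first distance)."""
    if first_distance <= 0:
        return 0.0  # the user is already at the boundary of the first area
    return user_move_distance * (second_distance / first_distance)

# Hypothetical numbers: the user moves backward by M1 = 0.5 m while standing
# a = 1.0 m from the rear boundary of the first area, and the playback position is
# b = 4.0 m from the rear boundary of the second area, so the playback viewing
# angle is moved backward by M2 = 0.5 * (4.0 / 1.0) = 2.0.
M2 = position_adjustment_distance(0.5, first_distance=1.0, second_distance=4.0)
print(M2)  # -> 2.0
```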
  • the second distance corresponding to the same movement direction of the user before and after adjusting the orientation of the playback viewing angle may change. Therefore, after moving to the boundary of the first area, the user can change the orientation of the playback viewing angle by changing the orientation of the face and returning to the normal position, so that the user can watch other positions of the video.
  • the collecting target information may include: collecting the target information by a collection unit, where the collection unit includes at least one of an image collection unit, a sound collection unit, or an infrared collection unit.
  • the collecting target information may include: receiving collection information sent by a collection device, where the collection information includes the target information.
  • the present invention also provides a video playback device, the device includes a processing unit, and the processing unit is used to: collect target information, where the target information includes user face orientation information and user movement information, the user face orientation information is used to represent the direction of the user's face orientation, and the user movement information is used to represent the direction of the user's movement; and adjust the playback viewing angle of the video according to the target information.
  • the processing unit is specifically configured to: adjust the orientation of the playback angle of view according to the user's face orientation information; adjust the position of the playback angle of view according to the user movement information.
  • the processing unit is specifically configured to: determine an orientation adjustment direction of the playback viewing angle according to the user's face orientation information; and adjust the orientation of the playback viewing angle according to the orientation adjustment direction and a preset orientation adjustment amount.
  • the user's face orientation information can also be used to characterize the angle of the user's face orientation.
  • the processing unit is specifically configured to: determine adjustment information according to the user's face orientation information, where the adjustment information is used to represent the direction and the angle of the orientation of the playback viewing angle after adjustment; and adjust the orientation of the playback viewing angle according to the adjustment information.
  • the adjusted orientation of the playing angle of view may be the same as the orientation of the user's face.
  • the adjusted orientation angle of the playback viewing angle and the orientation angle of the user's face may be the same.
  • the processing unit is specifically configured to: determine a position adjustment direction of the playback viewing angle according to the user movement information; and adjust the position of the playback viewing angle according to the position adjustment direction and a preset position adjustment amount.
  • the user movement information is also used to characterize the distance traveled by the user.
  • the processing unit is specifically configured to: determine the position adjustment direction of the playback viewing angle and the position adjustment distance of the playback viewing angle according to the user movement information; and adjust the position of the playback viewing angle according to the position adjustment direction and the position adjustment distance.
  • position adjustment distance = user movement distance * (second distance / first distance).
  • the processing unit is specifically configured to: collect the target information through a collection unit, where the collection unit includes at least one of an image collection unit, a sound collection unit, or an infrared collection unit.
  • the processing unit is specifically configured to: receive collection information sent by a collection device, where the collection information includes the target information.
  • the present invention also provides a video playback device, which includes at least one processor; when the at least one processor executes program codes or instructions, the method described in the above first aspect or any possible implementation thereof is realized.
  • the video playback device may further include at least one memory, and the at least one memory is used to store the program code or instruction.
  • the present invention further provides a chip, including: an input interface, an output interface, and at least one processor.
  • the chip also includes a memory.
  • the at least one processor is used to execute the code in the memory, and when the at least one processor executes the code, the chip implements the method described in the above first aspect or any possible implementation thereof.
  • the aforementioned chip may also be an integrated circuit.
  • the present invention further provides an electronic device, the electronic device comprising the above-mentioned video playback device or the above-mentioned chip.
  • the electronic device may be a smart screen.
  • the present invention further provides a computer-readable storage medium for storing a computer program, where the computer program includes instructions for implementing the method described in the above-mentioned first aspect or any possible implementation thereof.
  • the present invention also provides a computer program product containing instructions, which, when run on a computer, enable the computer to implement the method described in the above first aspect or any possible implementation thereof.
  • the video playback device, electronic device, computer storage medium, computer program product, and chip provided in the embodiments are all used to execute the video playback method provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the video playback method provided above, and details are not repeated here.
  • FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a software structure of an electronic device provided by an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a user interface of an electronic device provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a user interface of another electronic device provided by an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a user interface of another electronic device provided by an embodiment of the present invention.
  • Fig. 6 is a schematic diagram of a viewer's activity area provided by an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of generating a second region provided by an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of mapping between a first area and a second area provided by an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of another mapping between a first area and a second area provided by an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of another mapping between the first area and the second area provided by the embodiment of the present invention.
  • Fig. 11 is a schematic diagram of another mapping between the first area and the second area provided by the embodiment of the present invention.
  • FIG. 12 is a schematic flowchart of a video playback method provided by an embodiment of the present invention.
  • Fig. 13 is a schematic structural diagram of a device provided by an embodiment of the present invention.
  • Fig. 14 is a schematic structural diagram of another device provided by an embodiment of the present invention.
  • FIG. 15 is a schematic structural diagram of a chip provided by an embodiment of the present invention.
  • first and second in the description and drawings of the present invention are used to distinguish different objects, or to distinguish different processes for the same object, rather than to describe a specific sequence of objects.
  • VR technology can be used to build panoramic videos of houses, so that users can view houses online through panoramic videos of houses, and users can watch different rooms of houses by adjusting the viewing angle of the panoramic videos of houses.
  • the embodiment of the present invention provides a video playback method, which can improve the user's sense of interaction when adjusting the viewing angle of the video playback.
  • the video playback method provided by the embodiments of the present invention can be applied to electronic devices such as smart screens, mobile phones, tablet computers, notebook computers, ultra-mobile personal computers (UMPCs), netbooks, and personal digital assistants (PDAs).
  • the embodiment of the present invention does not impose any limitation on the specific type of the electronic device.
  • FIG. 1 is a schematic structural diagram of an electronic device 100 provided by an embodiment of the present invention.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus
  • the processor 110 can couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the electronic device 100 .
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100 .
  • the interface connection relationship between the modules shown in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives the input of the battery 142 and/or the charging management module 140, and supplies power for the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160, etc.
  • the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light emitting diodes (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193 , the touch sensor, the video codec, the GPU, the display screen 194 and the application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
  • the light is transmitted to the photosensitive element of the camera through the lens, and the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. It should be understood that in the description of the embodiments of the present invention, an image in RGB format is used as an example for introduction, and the embodiments of the present invention do not limit the image format.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 .
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device 100.
  • the air pressure sensor 180C is used to measure air pressure.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip leather case.
  • the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes).
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 100 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F for distance measurement to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the ambient light sensor 180L is used for sensing ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect temperature.
  • Touch sensor 180K also known as "touch panel".
  • the touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the audio module 170 can analyze the voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 180M, so as to realize the voice function.
  • the keys 190 include a power key, a volume key and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • the motor 191 can generate a vibrating reminder.
  • the motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback. For example, touch operations applied to different applications (such as taking pictures, playing audio, etc.) may correspond to different vibration feedback effects.
  • the motor 191 may also correspond to different vibration feedback effects for touch operations acting on different areas of the display screen 194 .
  • the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used for connecting a SIM card.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
  • an operating system with a layered architecture is taken as an example to illustrate the software structure of the electronic device 100 .
  • FIG. 2 is a block diagram of the software structure of the electronic device 100 according to the embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • the operating system is divided into four layers, which are application program layer, application program framework layer, operating system runtime (runtime) and system library, and kernel layer from top to bottom.
  • the application layer can consist of a series of application packages. As shown in FIG. 2, the application package may include application programs such as camera, photo album, music, and settings.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions. As shown in Figure 2, the application framework layer can include window managers, content providers, view systems, resource managers, notification managers, etc.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • Said data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebook, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on.
  • the view system can be used to build applications.
  • a display interface can consist of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages.
  • the notification information displayed in the status bar can disappear automatically after a short stay, such as a message reminder to inform the user that the download is complete.
  • the notification manager can also be a notification that appears on the top status bar of the system in the form of a chart or scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, or the notification manager can also emit a prompt sound, such as electronic device vibration, indicator light flashing, and the like.
  • Runtime includes core library and virtual machine. Runtime is responsible for the scheduling and management of the operating system.
  • the core library consists of two parts: one part is the functions that the java language needs to call, and the other part is the core library of the operating system.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application program layer and the application program framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • a system library can include multiple function modules. For example: surface manager (surface manager), media library (media libraries), 3D graphics processing library (for example: open graphics library (open graphics library, OpenGL) embedded systems (embedded systems, ES)), 2D graphics engine (for example : Scene graph library (scene graph library, SGL)), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of various commonly used audio and video formats, as well as still image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: moving picture experts group (MPEG) 4, H.264, MP3, advanced audio coding (AAC), adaptive multi-rate (AMR), joint photographic experts group (JPG), portable network graphics (PNG), and so on.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing, etc.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer may include hardware driver modules, such as display drivers, camera drivers, sensor drivers, etc.
  • the application framework layer may call the hardware driver modules of the kernel layer.
  • Fig. 3 is a schematic diagram of a graphical user interface (graphical user interface, GUI) provided by an embodiment of the present invention.
  • Fig. 3 shows that the screen display system of the smart screen displays the currently output interface content, and the interface content is the main interface of the smart screen.
  • the content of the interface displays multiple application programs (applications, Apps), for example, applications such as VR, clock, calendar, gallery, and memo. It can be understood that the interface content may also include other more application programs, which is not limited in this embodiment of the present invention.
  • Users can instruct the smart screen to open the VR application by touching a specific control on the screen of the smart screen, pressing a specific physical button or button combination, inputting a voice command, or making an air gesture.
  • the smart screen starts the VR application.
  • the user can click the "VR" application icon on the main interface by operating the remote control of the smart screen to instruct the smart screen to start the VR application.
  • the user can instruct the smart screen to start the VR application and display the VR application interface through a voice command (such as "open the VR application").
  • the above-mentioned videos include but are not limited to VR videos, panoramic videos, VR panoramic videos, VR real-scene videos or other 3D panoramic videos.
  • the VR application is a VR house viewing application, and the user can instruct the smart screen to play a 3D panoramic video of the house through the app to view the interior of the house.
  • the VR application can also be other VR applications, and the user can also play other videos through the VR application. Such as videos of attractions, videos of shopping malls, or other videos.
  • the specific method for generating the foregoing video may be processed by any method conceivable by those skilled in the art, which is not specifically limited in this embodiment of the present application.
  • a panoramic camera is used to collect real-world images to generate a video.
  • the user can also instruct the smart screen to play a preview navigation map of the video (also called a VR panoramic video navigation map), and the user can determine the position and orientation of the current playback perspective in the entire video through the preview navigation map.
  • the preview navigation map is a plan view used to indicate the position and orientation of the current playing angle in the entire video.
  • When the user wants to adjust the playback viewing angle of the video, the user can instruct the VR application to adjust the playback viewing angle of the video by moving within the viewer's activity area (hereinafter referred to as the first area).
  • the first area may be an area defined in front of the electronic device (such as a smart screen) within which the user watches the video, and is usually a rectangular area in which a collection device (such as a camera) can clearly capture the user's position and the user's face orientation.
  • the location of the user within the first area may be referred to as the viewer location.
  • the initial position of the playback angle of view may be the center of the second area.
  • the initial orientation of the playback perspective may be forward.
  • the second area is a two-dimensional plane area corresponding to the video.
  • a rectangle may be formed according to the maximum length and maximum width of the graphic corresponding to the video to generate the second area.
  • the left side of FIG. 7 is the graphic corresponding to the video, and the right side of FIG. 7 is the second area.
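  • A rough sketch of how such a bounding rectangle could be derived, under the simplifying assumption that the graphic corresponding to the video is available as a set of two-dimensional outline points; the function name and data layout are hypothetical.

```python
def generate_second_area(outline_points):
    """Form the second area as the axis-aligned rectangle spanning the
    maximum length and maximum width of the graphic corresponding to the video."""
    xs = [p[0] for p in outline_points]
    ys = [p[1] for p in outline_points]
    return (min(xs), min(ys), max(xs), max(ys))  # (x_min, y_min, x_max, y_max)

# Example: an L-shaped floor outline; the second area is its bounding rectangle.
outline = [(0, 0), (6, 0), (6, 3), (3, 3), (3, 5), (0, 5)]
print(generate_second_area(outline))  # -> (0, 0, 6, 5)
```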
  • the smart screen obtains the user's movement information (that is, the information used to characterize the user's moving direction and moving distance) through a collection device (such as a camera), and measures that the vertical distance between the user and the boundary of the first area in front (hereinafter referred to as the first distance) is a; the smart screen also obtains that the distance between the position of the current playback viewing angle and the boundary of the second area in front (hereinafter referred to as the second distance) is b. When the user moves forward by a distance M, the smart screen therefore adjusts the position of the playback viewing angle forward by M * (b / a).
  • the user in the first area is watching the video of the house through the smart screen in front. If the user wants to rotate the orientation of the playback viewing angle of the video 90 degrees to the left, the user can rotate the face orientation 90 degrees to the left within the first area. The smart screen obtains the user's face orientation information (that is, the information representing the direction and angle of the user's face orientation) through the collection device; the smart screen then first determines, according to the user's face orientation information, that the orientation adjustment direction of the playback viewing angle is to the left, next determines, according to the user's face orientation information, that the orientation adjustment angle of the playback viewing angle is 90 degrees, and finally adjusts the orientation of the playback viewing angle of the video 90 degrees to the left.
  • the user can also move the orientation of the viewing angle of the video up or down by turning the face up or down (that is, looking up or down).
  • Similarly, the user in the first area is watching the video of the house through the smart screen in front. If the user wants to move the position of the playback viewing angle of the video to the right, the user can move M5 to the right within the first area. The smart screen obtains the user's movement information through the collection device and measures that the vertical distance between the user and the boundary of the first area on the right is a, and obtains that the distance between the position of the current playback viewing angle and the boundary of the second area on the right is b; the smart screen then adjusts the position of the playback viewing angle of the video to the right by M5 * (b / a).
  • user face orientation information and user movement information can be collected through infrared detection, wearable sensors, and indoor positioning.
  • the smart screen can also simulate events (such as touch gesture events, voice events, etc.) according to the user's face orientation information and user movement information, so as to adjust the playback viewing angle of the video.
  • For example, the VR application supports left-slide, right-slide, up-slide, down-slide, and double-tap touch gesture events to adjust the playback viewing angle of the video: the left-slide touch gesture event corresponds to adjusting the orientation of the playback viewing angle to the right; the right-slide touch gesture event corresponds to adjusting the orientation of the playback viewing angle to the left; the up-slide touch gesture event corresponds to adjusting the orientation of the playback viewing angle downward; the down-slide touch gesture event corresponds to adjusting the orientation of the playback viewing angle upward; and the double-tap touch gesture event corresponds to adjusting the position of the playback viewing angle forward.
  • the smart screen can collect and simulate the above-mentioned touch gesture events to adjust the viewing angle of the video according to the user's face orientation information and user movement information.
  • For example, when the user's face orientation information collected by the smart screen indicates that the direction of the user's face orientation is upward (that is, the head is raised), the smart screen can simulate a down-slide touch gesture event based on this information, thereby adjusting the orientation of the playback viewing angle upward.
  • Likewise, when the user movement information collected by the smart screen indicates that the user's movement direction is forward, the smart screen can simulate a double-tap touch gesture event based on this information, thereby adjusting the position of the playback viewing angle forward.
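  • The event-simulation variant can be sketched as a small translation table from the collected information to the touch gesture events the VR application already understands. The gesture identifiers and the helper below are assumptions for illustration, not an existing API of the smart screen.

```python
# Map collected face orientation / movement information to simulated touch gestures.
FACE_TO_GESTURE = {
    "up": "slide_down",     # down-slide adjusts the playback orientation upward
    "down": "slide_up",     # up-slide adjusts the playback orientation downward
    "left": "slide_right",  # right-slide adjusts the playback orientation to the left
    "right": "slide_left",  # left-slide adjusts the playback orientation to the right
}
MOVE_TO_GESTURE = {
    "forward": "double_tap",  # double-tap moves the playback position forward
}

def simulate_gestures(face_direction=None, move_direction=None):
    """Return the touch gesture events to inject for one collection cycle."""
    events = []
    if face_direction in FACE_TO_GESTURE:
        events.append(FACE_TO_GESTURE[face_direction])
    if move_direction in MOVE_TO_GESTURE:
        events.append(MOVE_TO_GESTURE[move_direction])
    return events

# Example: the user raises the head and walks forward.
print(simulate_gestures(face_direction="up", move_direction="forward"))
# -> ['slide_down', 'double_tap']
```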
  • the embodiment of the present invention provides a video playback method, including:
  • the video playback device collects target information.
  • the target information includes the user's face orientation information and user movement information
  • the user's face orientation information is used to represent the direction of the user's face orientation
  • the user movement information is used to represent the user's movement direction.
  • the video playback device may collect target information through a collection unit, and the collection unit includes at least one of an image collection unit, a sound collection unit, or an infrared collection unit.
  • the video playback device may receive the collection information sent by the collection device.
  • the collected information includes target information.
  • the user's face orientation information can also be used to characterize the angle of the user's face orientation.
  • the user movement information is also used to characterize the distance traveled by the user.
  • the video playing device adjusts the playing angle of the video according to the target information.
  • the video playback device may first adjust the orientation of the playback angle of view according to the user's face orientation information, and then adjust the position of the playback angle of view according to the user's movement information.
  • the above-mentioned adjustment of the orientation of the playback viewing angle according to the user's face orientation information includes: determining the orientation adjustment direction of the playback viewing angle according to the user's face orientation information; and adjusting the orientation of the playback viewing angle according to the orientation adjustment direction and a preset orientation adjustment amount.
  • the video playback device first determines that the orientation adjustment direction of the playback angle of view is downward according to the user's face orientation information, and then adjusts the orientation of the playback angle of view downward by 30 degrees.
  • the above-mentioned adjusting the orientation of the playback angle of view according to the user's face orientation information may include: determining adjustment information according to the user's face orientation information; and adjusting the orientation of the playback angle of view according to the adjustment information.
  • the adjustment information is used to represent the direction of the orientation of the adjusted playback angle of view and the angle of the orientation of the playback angle of view.
  • the direction of the adjusted playback viewing angle and the direction of the user's face may be the same.
  • the angle of the adjusted playing angle of view may be the same as the angle of the user's face.
  • For example, the video playback device first determines, according to the user's face orientation information, that the adjusted orientation of the playback viewing angle is 45 degrees upward, and then adjusts the orientation of the playback viewing angle upward by 75 degrees, so that the orientation of the playback viewing angle changes from 30 degrees downward to 45 degrees upward. It can be seen that the orientation (direction and angle) of the adjusted playback viewing angle is the same as the orientation of the user's face.
  • the user can adjust the orientation of the playback viewing angle by changing the orientation of the face and returning it to the normal position. That is, the user can first change the face orientation, and then return to the previous face orientation (hereinafter referred to as the back-to-orientation process).
  • During the process in which the user changes the face orientation, the video playback device may generate the user's face orientation information according to the user's face orientation, and then adjust the orientation of the playback viewing angle according to the information. During the back-to-orientation process, the video playback device does not generate the user's face orientation information according to the user's face orientation, nor does it adjust the orientation of the playback viewing angle.
  • the above-mentioned adjusting the position of the playback angle of view according to the user movement information includes: determining the position adjustment direction of the playback angle of view according to the user movement information; and adjusting the position of the playback angle of view according to the position adjustment direction and the preset position adjustment amount.
  • the video playback device first determines that the position adjustment direction of the playback angle of view is forward according to the user movement information, and then adjusts the position of the playback angle of view forward by N.
  • adjusting the position of the playback viewing angle according to the user movement information includes: determining the position adjustment direction and the position adjustment distance of the playback viewing angle according to the user movement information; and adjusting the position of the playback viewing angle according to the position adjustment direction and the position adjustment distance. The position adjustment distance = user movement distance * (second distance / first distance), where the first distance is the vertical distance between the user and the boundary of the first area in the user's moving direction, the second distance is the vertical distance between the position of the playback viewing angle and the boundary of the second area in the user's moving direction, the second area is the two-dimensional plane area corresponding to the video, * is the multiplication sign, and / is the division sign.
  • For example, the user moves backward by a distance M1. The video playback device first determines, according to the user movement information, that the position adjustment direction of the playback viewing angle is backward and that the position adjustment distance is M2, and then adjusts the position of the playback viewing angle backward by M2, where M2 = M1 * (b / a), a is the first distance, that is, the vertical distance between the user and the boundary of the first area behind the user, and b is the second distance, that is, the vertical distance between the position of the playback viewing angle and the boundary of the second area behind it.
  • the second distance corresponding to the same movement direction of the user before and after adjusting the orientation of the playback viewing angle may change. Therefore, after moving to the boundary of the first area, the user can change the orientation of the playback viewing angle by changing the orientation of the face and returning to the normal position, so that the user can watch other positions of the video.
  • for example, after such a change of orientation, the second distance used when calculating the position adjustment distance of the playback viewing angle may no longer be the vertical distance between the position of the playback viewing angle and the boundary of the second area behind it, but instead the vertical distance between the position of the playback viewing angle and the boundary of the second area on the left.
  • when the user adjusts the playback viewing angle of the video, the user does not need to manipulate a mouse or touch a screen; instead, the user directly changes the face orientation and moves the body, and the playback viewing angle of the video is adjusted along with the user's actions, thereby improving the sense of interaction when the user adjusts the playback viewing angle, increasing the sense of immersion when the user watches the video, and improving the user experience.
  • an embodiment of the present invention also provides a video playback device. The device includes a processing unit configured to: collect target information, where the target information includes user face orientation information and user movement information, the user face orientation information is used to represent the direction of the user's face orientation, and the user movement information is used to represent the direction of the user's movement; and adjust the playback viewing angle of the video according to the target information.
  • the processing unit is specifically configured to: adjust the orientation of the playback angle of view according to the user's face orientation information; adjust the position of the playback angle of view according to the user movement information.
  • the processing unit is specifically configured to: determine an orientation adjustment direction of the playback viewing angle according to the user's face orientation information; and adjust the orientation of the playback viewing angle according to the orientation adjustment direction and a preset orientation adjustment amount.
  • the user's face orientation information can also be used to characterize the angle of the user's face orientation.
  • the processing unit is specifically configured to: determine adjustment information according to the user's face orientation information, where the adjustment information is used to characterize the direction and the angle of the orientation of the adjusted playback viewing angle; and adjust the orientation of the playback viewing angle according to the adjustment information.
  • the direction of the adjusted playback viewing angle and the direction of the user's face may be the same.
  • the angle of the adjusted playing angle of view may be the same as the angle of the user's face.
  • the processing unit is specifically configured to: determine the position adjustment direction of the playback angle of view according to the user movement information; and adjust the position of the playback angle of view according to the position adjustment direction and the preset position adjustment amount.
  • the user movement information is also used to characterize the distance traveled by the user.
  • the processing unit is specifically configured to: determine the position adjustment direction of the playback angle of view and the position adjustment distance of the playback angle of view according to the user movement information; and adjust the position of the playback angle of view according to the position adjustment direction and the position adjustment distance.
  • position adjustment distance = user movement distance * (second distance / first distance).
  • the processing unit is specifically configured to: collect target information through a collection unit, where the collection unit includes at least one of an image collection unit, a sound collection unit, or an infrared collection unit.
  • the processing unit is specifically configured to: receive collection information sent by the collection device, where the collection information includes target information.
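To make the data flow concrete, the sketch below shows one possible shape of the target information and a stand-in collection unit; the field names and the stubbed capture logic are assumptions for illustration, not part of the claimed device.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TargetInfo:
    face_direction: Tuple[float, float]  # yaw/pitch of the user's face, in degrees
    move_direction: Tuple[float, float]  # unit vector of the user's movement on the ground plane
    move_distance: float                 # distance moved by the user (optional in the embodiment)

class ImageCollectionUnit:
    """Stand-in for an image collection unit (e.g. a camera feeding a pose estimator)."""
    def capture_target_info(self) -> TargetInfo:
        # A real unit would run face/pose detection on camera frames;
        # fixed values are returned here so that the sketch is runnable.
        return TargetInfo(face_direction=(0.0, 45.0), move_direction=(0.0, 1.0), move_distance=0.3)

def collect_target_info(unit: ImageCollectionUnit) -> TargetInfo:
    """Collect target information through a local collection unit.
    The same structure could instead be received from a separate collection device."""
    return unit.capture_target_info()

print(collect_target_info(ImageCollectionUnit()))
```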
  • the electronic device includes hardware and/or software modules corresponding to each function.
  • the present invention can be implemented in the form of hardware, or in the form of a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application in combination with the embodiments, but such implementation should not be regarded as exceeding the scope of the present invention.
  • the functional modules of the electronic device may be divided according to the above method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules may be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic, and is only a logical function division, and there may be other division methods in actual implementation.
  • FIG. 13 shows a possible composition diagram of the electronic device involved in the above embodiment.
  • the apparatus 1300 may include a transceiver unit 1301 and a processing unit 1302; the processing unit 1302 may implement the method executed by the electronic device in the foregoing method embodiments, and/or other processes used in the technologies described herein.
  • the apparatus 1300 may include a processing unit, a storage unit and a communication unit.
  • the processing unit may be used to control and manage the actions of the apparatus 1300, for example, may be used to support the apparatus 1300 to execute the steps performed by the above-mentioned units.
  • the storage unit may be used to store program code and/or data and the like, to support the operation of the apparatus 1300.
  • the communication unit may be used to support communication of the apparatus 1300 with other devices.
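The following sketch is only one possible software arrangement of the processing, storage and communication units described for apparatus 1300; the class and method names are invented for illustration and do not correspond to the actual module division.

```python
class Apparatus1300:
    """Illustrative grouping of the processing, storage and communication units."""

    def __init__(self):
        self.storage = {}   # storage unit: program code / data kept for the apparatus
        self.outbox = []    # communication unit: messages to be sent to other devices

    # Processing unit: controls and manages the actions of the apparatus.
    def process(self, target_info):
        self.storage["last_target_info"] = target_info
        # The playback viewing angle would be adjusted here (omitted in this sketch),
        # after which other devices can be notified through the communication unit.
        self.outbox.append({"event": "view_adjusted", "info": target_info})
        return self.outbox[-1]

print(Apparatus1300().process({"face_direction": (0.0, 45.0)}))
```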
  • the processing unit may be a processor or a controller. It can implement or execute the various illustrative logical blocks, modules and circuits described in connection with the present disclosure.
  • the processor may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor, and the like.
  • the storage unit may be a memory.
  • the communication unit may be a radio frequency circuit, a bluetooth chip, a wireless fidelity (wireless fidelity, Wi-Fi) chip, and other devices that interact with other electronic devices.
  • the electronic device involved in this embodiment of the present invention may be an apparatus 1400 having the structure shown in FIG. 14 , where the apparatus 1400 includes a processor 1401 and a transceiver 1402 .
  • the transceiver unit 1301 and the processing unit 1302 in FIG. 13 may be implemented by the processor 1401 .
  • the apparatus 1400 may further include a memory 1403, and the processor 1401 and the memory 1403 communicate with each other through an internal connection path.
  • the relevant functions implemented by the storage unit in FIG. 13 may be implemented by the memory 1403 .
  • the embodiment of the present invention also provides a computer storage medium. The computer storage medium stores computer instructions, and when the computer instructions are run on the electronic device, the electronic device executes the above related method steps to realize the video playback method in the above embodiments.
  • An embodiment of the present invention also provides a computer program product, which, when running on a computer, causes the computer to execute the above-mentioned related steps, so as to realize the video playing method in the above-mentioned embodiment.
  • the embodiment of the present invention also provides an electronic device, and this device may specifically be a chip, an integrated circuit, a component or a module.
  • the device may include a connected processor and a memory for storing instructions, or the device may include at least one processor for fetching instructions from an external memory.
  • the processor can execute instructions, so that the chip executes the video playing method in the above method embodiments.
  • FIG. 15 shows a schematic structural diagram of a chip 1500 .
  • the chip 1500 includes one or more processors 1501 and an interface circuit 1502 .
  • the above-mentioned chip 1500 may further include a bus 1503 .
  • the processor 1501 may be an integrated circuit chip with signal processing capabilities. During implementation, each step of the above video playing method may be completed by an integrated logic circuit of hardware in the processor 1501 or instructions in the form of software.
  • the above-mentioned processor 1501 may be a general-purpose processor, a digital signal processing (DSP) device, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the interface circuit 1502 can be used for sending or receiving data, instructions or information.
  • the processor 1501 can process the data, instructions or other information received through the interface circuit 1502, and can send the processing result out through the interface circuit 1502, as in the sketch below.
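A toy sketch of that processor/interface-circuit data flow follows; the queue-based interface is an assumption used only to keep the example runnable and is not a description of the actual circuit.

```python
class InterfaceCircuit:
    """Stand-in for interface circuit 1502: queues data in and results out."""
    def __init__(self, incoming):
        self.incoming = list(incoming)
        self.outgoing = []

    def receive(self):
        return self.incoming.pop(0) if self.incoming else None

    def send(self, result):
        self.outgoing.append(result)

def run_processor(circuit: InterfaceCircuit):
    """Processor-1501-style loop: process data received through the interface
    circuit and send the completed result back through the same circuit."""
    while (data := circuit.receive()) is not None:
        circuit.send({"processed": data})

circuit = InterfaceCircuit(incoming=[{"face_pitch_deg": 45.0}])
run_processor(circuit)
print(circuit.outgoing)  # [{'processed': {'face_pitch_deg': 45.0}}]
```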
  • the chip further includes a memory, which may include a read-only memory and a random access memory, and provides operation instructions and data to the processor.
  • a portion of the memory may also include non-volatile random access memory (NVRAM).
  • the memory stores executable software modules or data structures
  • the processor can execute corresponding operations by calling operation instructions stored in the memory (the operation instructions can be stored in the operating system).
  • the chip can be used in the electronic device or apparatus involved in the embodiments of the present invention.
  • the interface circuit 1502 may be used to output an execution result of the processor 1501 .
  • the processor 1501 and the interface circuit 1502 may be implemented through hardware design, software design, or a combination of software and hardware, which is not limited here.
  • the electronic device, computer storage medium, computer program product, and chip provided in this embodiment are all used to execute the corresponding method provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding method provided above, which are not repeated here.
  • An embodiment of the present invention also provides a terminal device, where the terminal device includes the above-mentioned electronic device.
  • the terminal device is a vehicle or an intelligent robot.
  • the sequence numbers of the above-mentioned processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation process of the embodiments of the present invention.
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the above units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • if the above functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods in the various embodiments of the present invention.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.
  • the first application in the embodiment of the present invention is an application related to audio content.
  • the first application may also be extended to an application of video content, or an application of audio and video content.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides a video playback method and device, which relate to the technical field of virtual reality, and which may improve the feeling of interaction for users when adjusting the angle of view of video playback. The method comprises: first collecting target information, and then adjusting the angle of view of video playback according to the target information. The target information comprises user face orientation information and user movement information, the user face orientation information being used for representing the direction in which the face of a user is oriented, and the user movement information being used for representing the direction in which the user moves. In the video playback method provided in the present invention, when a user adjusts the angle of view of video playback, the angle of view of video playback may be directly adjusted along with an action of the user by changing the face orientation and moving the body without operating a mouse or a touch screen, thereby increasing the feeling of interaction for the user when the angle of view of video playback is adjusted.

Description

视频播放方法和装置Video playback method and device
本申请要求于2021年11月05日提交中国专利局、申请号为202111303353.5、申请名称为“视频播放方法和装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims the priority of a Chinese patent application with application number 202111303353.5 and application title "Video Playing Method and Device" filed with the China Patent Office on November 05, 2021, the entire contents of which are incorporated herein by reference.
技术领域technical field
本发明涉及虚拟现实技术领域,尤其涉及视频播放方法和装置。The invention relates to the technical field of virtual reality, in particular to a video playing method and device.
背景技术Background technique
虚拟现实(virtual reality,VR)技术是一种可以通过采集来自真实世界的数据构建全景视频的技术。随着VR技术的进步,VR技术所能构建的全景视频越来越趋近于真实,VR技术也被广泛应用于日常生活中,用户可以通过调整VR视频的播放视角以观看VR视频的不同区域。例如,在房产销售,房产装修领域可以通过VR技术构建房屋的全景视频,以使得用户可以通过房屋的全景视频实现线上看房,用户可以通过调整房屋的全景视频的播放视角观看房屋的不同房间。Virtual reality (VR) technology is a technology that can construct panoramic videos by collecting data from the real world. With the advancement of VR technology, the panoramic video that VR technology can build is getting closer to reality. VR technology is also widely used in daily life. Users can watch different areas of VR video by adjusting the viewing angle of VR video. . For example, in the field of real estate sales and real estate decoration, VR technology can be used to build panoramic videos of houses, so that users can view houses online through panoramic videos of houses, and users can watch different rooms of houses by adjusting the viewing angle of the panoramic videos of houses. .
然而现有VR技术中,用户调整视频播放视角的方式缺乏交互感,从而导致用户沉浸感降低,用户体验较差。However, in the existing VR technology, the way for the user to adjust the viewing angle of the video playback lacks a sense of interaction, which reduces the user's sense of immersion and poor user experience.
发明内容Contents of the invention
本发明提供了视频播放方法和装置,能够提升用户调整视频播放视角时的交互感。为达到上述目的,本发明采用如下技术方案:The present invention provides a video playing method and device, which can improve the user's sense of interaction when adjusting the viewing angle of video playing. To achieve the above object, the present invention adopts the following technical solutions:
第一方面,本发明提供了一种视频播放方法,该方法包括:先采集目标信息,然后根据所述目标信息调整视频的播放视角。其中,所述目标信息包括用户面部朝向信息和用户移动信息,所述用户面部朝向信息用于表征用户面部朝向的方向,所述用户移动信息用于表征用户移动的方向。In a first aspect, the present invention provides a method for playing a video, the method comprising: collecting target information first, and then adjusting a video playback angle of view according to the target information. Wherein, the target information includes user face orientation information and user movement information, the user face orientation information is used to indicate the direction of the user's face orientation, and the user movement information is used to indicate the user's movement direction.
在本发明提供的视频播放方法中,用户在调整视频的播放视角时,无需通过操纵鼠标或触控屏幕,而是可以直接通过改变面部朝向和移动身体,使视频的播放视角随用户动作而调整,从而提升了用户调整视频的播放视角时的交互感,增加了用户观看视频时的沉浸感降低,提升了用户体验。In the video playing method provided by the present invention, when the user adjusts the playing angle of the video, the user does not need to manipulate the mouse or touch the screen, but can directly change the face orientation and move the body, so that the playing angle of the video can be adjusted according to the user's actions , thereby improving the sense of interaction when the user adjusts the viewing angle of the video, increasing the user's sense of immersion when watching the video, and improving the user experience.
在一种可能的实现方式中,上述根据所述目标信息调整视频的播放视角,可以包括:根据所述用户面部朝向信息调整所述播放视角的朝向;根据所述用户移动信息调整所述播放视角的位置。In a possible implementation manner, the above-mentioned adjustment of the playing angle of view of the video according to the target information may include: adjusting the orientation of the playing angle of view according to the user's face orientation information; adjusting the playing angle of view according to the user's movement information s position.
在本发明提供的视频播放方法中,用户在调整视频的播放视角时,用户不仅可以通过改变面部朝向以调整播放视角的朝向,还可以通过移动以调整播放视角的位置。例如,用户想调整视频的播放视角的朝向向上移动,用户则可以通过抬头以调整视频的播放视角的朝向向上移动。用户还可以通过。又例如,用户调整视频的播放视角的位置向前移动,用 户则可以通过向前移动以调整视频的播放视角的位置向前移动。In the video playing method provided by the present invention, when the user adjusts the playing angle of the video, the user can not only adjust the orientation of the playing angle of view by changing the face orientation, but also adjust the position of the playing angle of view by moving. For example, if the user wants to adjust the orientation of the viewing angle of the video to move upward, the user can adjust the orientation of the viewing angle of the video to move upward by raising the head. Users can also pass. For another example, if the user adjusts the position of the playback angle of view of the video to move forward, the user can move forward to adjust the position of the playback angle of view of the video to move forward.
在一种可能的实现方式中,所述根据所述用户面部朝向信息调整所述播放视角的朝向,包括:根据所述用户面部朝向信息确定所述播放视角的朝向调整方向;根据所述朝向调整方向和预设朝向调整量调整所述播放视角的朝向。In a possible implementation manner, the adjusting the orientation of the playback angle of view according to the user's face orientation information includes: determining an orientation adjustment direction of the playback angle of view according to the user's face orientation information; Direction and preset orientation adjustment adjusts the orientation of the playing angle of view.
示例性地,以用户面部朝向的方向为朝下,预设朝向调整量为30度为例。视频播放装置先根据用户面部朝向信息确定播放视角的朝向调整方向为向下,然后将播放视角的朝向向下调整30度。Exemplarily, it is assumed that the user's face is facing downwards and the preset orientation adjustment amount is 30 degrees as an example. The video playback device first determines that the orientation adjustment direction of the playback angle of view is downward according to the user's face orientation information, and then adjusts the orientation of the playback angle of view downward by 30 degrees.
可选地,用户面部朝向信息还可用于表征用户面部朝向的角度。Optionally, the user's face orientation information can also be used to characterize the angle of the user's face orientation.
在另一种可能的实现方式中,所述根据所述用户面部朝向信息调整所述播放视角的朝向,可以包括:根据所述用户面部朝向信息确定调整信息;根据所述调整信息调整所述播放视角的朝向。其中,所述调整信息用于表征调整后的所述播放视角的朝向的方向和所述播放视角的朝向的角度。In another possible implementation manner, the adjusting the orientation of the playback viewing angle according to the user's face orientation information may include: determining adjustment information according to the user's face orientation information; adjusting the playback angle of view according to the adjustment information. The orientation of the viewing angle. Wherein, the adjustment information is used to characterize the direction of the orientation of the playback viewing angle and the angle of the orientation of the playback viewing angle after adjustment.
可选地,调整后的所述播放视角的朝向的方向和所述用户面部朝向的方向可以相同。Optionally, the adjusted orientation of the playing angle of view may be the same as the orientation of the user's face.
可选地,调整后的所述播放视角的朝向的角度和所述用户面部朝向的角度可以相同。Optionally, the adjusted orientation angle of the playback viewing angle and the orientation angle of the user's face may be the same.
示例性地,以用户面部朝向为朝上45度,当前视角的朝向为朝下30度为例。视频播放装置先根据用户面部朝向信息确定调整后的播放视角的朝向为朝上45度,然后将播放视角的朝向向上调整75度以使播放视角的朝向由朝下30度调整为朝上45度。可以看出调整后的播放视角的朝向(方向和角度)与用户面部朝向相同。Exemplarily, it is assumed that the user's face is facing upward at 45 degrees and the current viewing angle is facing downward at 30 degrees. The video playback device first determines that the orientation of the adjusted playback viewing angle is 45 degrees upward according to the user's face orientation information, and then adjusts the orientation of the playback viewing angle upward by 75 degrees so that the orientation of the playback viewing angle is adjusted from downward 30 degrees to upward 45 degrees . It can be seen that the orientation (direction and angle) of the adjusted playback viewing angle is the same as the orientation of the user's face.
可以理解的是,用户在观看电子设备播放的视频时,用户的面部朝向通常是正对电子设备的屏幕。用户在改变面部朝向后,用户的面部朝向可能无法再正对电子设备的屏幕。因此可能会造成用户在改变面部朝向后无法继续观看电子设备播放的视频的情况发生。It can be understood that when a user watches a video played by an electronic device, the user's face is usually facing directly to the screen of the electronic device. After the user changes the face orientation, the user's face orientation may no longer face the screen of the electronic device. Therefore, the situation that the user cannot continue to watch the video played by the electronic device after changing the face orientation may occur.
为此,用户可以采用改变面部朝向并回正的方式调整播放视角的朝向。即用户可以先改变面部朝向,然后再恢复至之前的面部朝向(以下简称为回正过程)。在改变面部朝向过程中,视频播放装置可以根据用户面部朝向生成用户面部朝向信息,然后根据该信息调整播放视角的朝向。在回正过程中视频播放装置不会根据用户面部朝向生成用户面部朝向信息,也不会调整播放视角的朝向。To this end, the user can adjust the orientation of the playback viewing angle by changing the orientation of the face and returning it to the normal position. That is, the user can first change the face orientation, and then return to the previous face orientation (hereinafter referred to as the back-to-orientation process). In the process of changing the face orientation, the video playback device may generate user face orientation information according to the user's face orientation, and then adjust the orientation of the playback viewing angle according to the information. During the back-to-centering process, the video playback device will not generate user's face orientation information according to the user's face orientation, nor will it adjust the orientation of the playback viewing angle.
在一种可能的实现方式中,根据所述用户移动信息调整所述播放视角的位置,包括:根据所述用户移动信息确定所述播放视角的位置调整方向;根据所述位置调整方向和预设位置调整量调整所述播放视角的位置。In a possible implementation manner, adjusting the position of the playback angle of view according to the user movement information includes: determining the position adjustment direction of the playback angle of view according to the user movement information; adjusting the direction according to the position and a preset The position adjustment amount adjusts the position of the playing angle of view.
示例性地,以用户向前移动,预设移动调整量为N为例。视频播放装置先根据用户移动信息确定播放视角的位置调整方向为向前,然后将播放视角的位置向前调整N。Exemplarily, it is assumed that the user moves forward and the preset movement adjustment amount is N as an example. The video playback device first determines that the position adjustment direction of the playback angle of view is forward according to the user movement information, and then adjusts the position of the playback angle of view forward by N.
可选地,用户移动信息还用于表征用户移动的距离。Optionally, the user movement information is also used to characterize the distance traveled by the user.
在另一种可能的实现方式中,所述根据所述用户移动信息调整所述播放视角的位置,包括:根据所述用户移动信息确定所述播放视角的位置调整方向和所述播放视角的位置调整距离;根据所述位置调整方向和所述位置调整距离调整所述播放视角的位置。In another possible implementation manner, the adjusting the position of the playback angle of view according to the user movement information includes: determining the position adjustment direction of the playback angle of view and the position of the playback angle of view according to the user movement information Adjusting the distance; adjusting the position of the playing angle of view according to the position adjustment direction and the position adjustment distance.
可选地,位置调整距离与用户移动距离之间可以存在换算关系,所述换算关系可以为:位置调整距离=用户移动距离*(第二距离/第一距离)。其中,第一距离为用户与用户移动方向上第一目标区域边界之间的垂直距离,第二距离播放视角的位置与用户移动方向上第二目标区域边界之间的垂直距离,第二区域为视频对应的二维平面区域,*为乘号,/为 除号。例如,用户移动距离为30,第一距离为100,第二距离为200,则位置调整距离为30*(200/100)=60。Optionally, there may be a conversion relationship between the position adjustment distance and the user movement distance, and the conversion relationship may be: position adjustment distance=user movement distance*(second distance/first distance). Among them, the first distance is the vertical distance between the user and the boundary of the first target area in the user's moving direction, and the second distance is the vertical distance between the position of the playing angle of view and the boundary of the second target area in the user's moving direction, and the second area is The two-dimensional plane area corresponding to the video, * is the multiplication sign, / is the division sign. For example, if the user's moving distance is 30, the first distance is 100, and the second distance is 200, then the position adjustment distance is 30*(200/100)=60.
示例性地,以用户向后移动M1为例。视频播放装置先根据用户移动信息确定播放视角的位置调整方向为向后调整距离为M2,然后将播放视角的位置向后调整M2。Exemplarily, take the user moving M1 backward as an example. The video playback device first determines the adjustment direction of the playback angle of view according to the user's movement information as a backward adjustment distance of M2, and then adjusts the playback angle of view backward by M2.
其中,M2=M1*(a/b),a为第一距离即用户与后方的第一区域边界之间的垂直距离,b为第二距离即播放视角的位置与后方的第二区域边界之间的垂直距离。Among them, M2=M1*(a/b), a is the first distance, that is, the vertical distance between the user and the boundary of the first area behind, and b is the second distance, that is, the distance between the position of the viewing angle of view and the boundary of the second area behind the vertical distance between them.
可以理解的是,若用户采用改变面部朝向并回正的方式调整播放视角的朝向,用户同一移动方向在调整播放视角的朝向前后所对应的第二距离可能会发生改变。因此,用户可以在移动到第一区域的边界后通过改变面部朝向并回正的方式使播放视角的朝向改变以便用户观看视频的其他位置。It can be understood that if the user adjusts the orientation of the playback viewing angle by changing the orientation of the face and returning it to the normal position, the second distance corresponding to the same movement direction of the user before and after adjusting the orientation of the playback viewing angle may change. Therefore, after moving to the boundary of the first area, the user can change the orientation of the playback viewing angle by changing the orientation of the face and returning to the normal position, so that the user can watch other positions of the video.
在一种可能的方式中,所述采集目标信息,可以包括:通过采集单元采集所述目标信息,所述采集单元包括图像采集单元、声音采集单元或红外采集单元中的至少一项。In a possible manner, the collecting target information may include: collecting the target information by a collection unit, where the collection unit includes at least one of an image collection unit, a sound collection unit, or an infrared collection unit.
在另一种可能的实现方式中,所述采集目标信息,可以包括:接收采集装置发送的采集信息,所述采集信息包括所述目标信息。In another possible implementation manner, the collecting target information may include: receiving collection information sent by a collection device, where the collection information includes the target information.
第二方面,本发明还提供了一种视频播放装置,该装置包括处理单元,所述处理单元用于:采集目标信息,所述目标信息包括用户面部朝向信息和用户移动信息,所述用户面部朝向信息用于表征用户面部朝向的方向,所述用户移动信息用于表征用户移动的方向;根据所述目标信息调整视频的播放视角。In a second aspect, the present invention also provides a video playback device, the device includes a processing unit, the processing unit is used to: collect target information, the target information includes user face orientation information and user movement information, the user face The orientation information is used to represent the direction of the user's face, and the user movement information is used to represent the direction of the user's movement; the playing angle of the video is adjusted according to the target information.
在一种可能的实现方式中,所述处理单元具体用于:根据所述用户面部朝向信息调整所述播放视角的朝向;根据所述用户移动信息调整所述播放视角的位置。In a possible implementation manner, the processing unit is specifically configured to: adjust the orientation of the playback angle of view according to the user's face orientation information; adjust the position of the playback angle of view according to the user movement information.
在一种可能的实现方式中,所述处理单元具体用于:根据所述用户面部朝向信息确定所述播放视角的朝向调整方向;根据所述朝向调整方向和预设朝向调整量调整所述播放视角的朝向。In a possible implementation manner, the processing unit is specifically configured to: determine an orientation adjustment direction of the playback viewing angle according to the user's face orientation information; adjust the playback angle according to the orientation adjustment direction and a preset orientation adjustment amount. The orientation of the viewing angle.
可选地,用户面部朝向信息还可用于表征用户面部朝向的角度。Optionally, the user's face orientation information can also be used to characterize the angle of the user's face orientation.
在另一种可能的实现方式中,所述处理单元具体用于:根据所述用户面部朝向信息确定调整信息,所述调整信息用于表征调整后的所述播放视角的朝向的方向和所述播放视角的朝向的角度;根据所述调整信息调整所述播放视角的朝向。In another possible implementation manner, the processing unit is specifically configured to: determine adjustment information according to the user's face orientation information, where the adjustment information is used to represent the adjusted orientation of the playing angle of view and the orientation of the user's face. The angle of the orientation of the playing angle of view; adjusting the orientation of the playing angle of view according to the adjustment information.
可选地,调整后的所述播放视角的朝向的方向和所述用户面部朝向的方向可以相同。Optionally, the adjusted orientation of the playing angle of view may be the same as the orientation of the user's face.
可选地,调整后的所述播放视角的朝向的角度和所述用户面部朝向的角度可以相同。Optionally, the adjusted orientation angle of the playback viewing angle and the orientation angle of the user's face may be the same.
在一种可能的实现方式中,所述处理单元具体用于:根据所述用户移动信息确定所述播放视角的位置调整方向;根据所述位置调整方向和预设位置调整量调整所述播放视角的位置。In a possible implementation manner, the processing unit is specifically configured to: determine a position adjustment direction of the playback viewing angle according to the user movement information; adjust the playback viewing angle according to the position adjustment direction and a preset position adjustment amount s position.
可选地,用户移动信息还用于表征用户移动的距离。Optionally, the user movement information is also used to characterize the distance traveled by the user.
在另一种可能的实现方式中,所述处理单元具体用于:根据所述用户移动信息确定所述播放视角的位置调整方向和所述播放视角的位置调整距离;根据所述位置调整方向和所述位置调整距离调整所述播放视角的位置。In another possible implementation manner, the processing unit is specifically configured to: determine the position adjustment direction of the playback viewing angle and the position adjustment distance of the playback viewing angle according to the user movement information; The position adjustment distance adjusts the position of the playing angle of view.
可选地,位置调整距离与用户移动距离之间可以存在换算关系,所述换算关系可以为:位置调整距离=用户移动距离*(第二距离/第一距离)。Optionally, there may be a conversion relationship between the position adjustment distance and the user movement distance, and the conversion relationship may be: position adjustment distance=user movement distance*(second distance/first distance).
在一种可能的实现方式中,所述处理单元具体用于:通过采集单元采集所述目标信息, 所述采集单元包括图像采集单元、声音采集单元或红外采集单元中的至少一项。In a possible implementation manner, the processing unit is specifically configured to: collect the target information through a collection unit, where the collection unit includes at least one of an image collection unit, a sound collection unit, or an infrared collection unit.
在另一种可能的实现方式中,所述处理单元具体用于:接收采集装置发送的采集信息,所述采集信息包括所述目标信息。In another possible implementation manner, the processing unit is specifically configured to: receive collection information sent by a collection device, where the collection information includes the target information.
第三方面,本发明还提供了一种视频播放装置,该装置包括:至少一个处理器,当所述至少一个处理器执行程序代码或指令时,实现上述第一方面或其任意可能的实现方式中所述的方法。In a third aspect, the present invention also provides a video playback device, which includes: at least one processor, and when the at least one processor executes program codes or instructions, the above first aspect or any possible implementation thereof can be realized method described in .
可选地,该视频播放装置还可以包括至少一个存储器,该至少一个存储器用于存储该程序代码或指令。Optionally, the video playback device may further include at least one memory, and the at least one memory is used to store the program code or instruction.
第四方面,本发明还提供了一种芯片,包括:输入接口、输出接口、至少一个处理器。可选地,该芯片还包括存储器。该至少一个处理器用于执行该存储器中的代码,当该至少一个处理器执行该代码时,该芯片实现上述第一方面或其任意可能的实现方式中所述的方法。In a fourth aspect, the present invention further provides a chip, including: an input interface, an output interface, and at least one processor. Optionally, the chip also includes a memory. The at least one processor is used to execute the code in the memory, and when the at least one processor executes the code, the chip implements the method described in the above first aspect or any possible implementation thereof.
可选地,上述芯片还可以为集成电路。Optionally, the aforementioned chip may also be an integrated circuit.
第五方面,本发明还提供了一种电子设备,该终端包括上述视频播放装置或上述芯片。In a fifth aspect, the present invention further provides an electronic device, the terminal comprising the above-mentioned video playing device or the above-mentioned chip.
可选地,所述电子设备可以为智慧屏。Optionally, the electronic device may be a smart screen.
第六方面,本发明还提供了一种计算机可读存储介质,用于存储计算机程序,该计算机程序包括用于实现上述第一方面或其任意可能的实现方式中所述的方法。In a sixth aspect, the present invention further provides a computer-readable storage medium for storing a computer program, and the computer program includes a method for implementing the above-mentioned first aspect or any possible implementation thereof.
第七方面,本发明还提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机实现上述第一方面或其任意可能的实现方式中所述的方法。In a seventh aspect, the present invention also provides a computer program product containing instructions, which, when run on a computer, enable the computer to implement the method described in the above first aspect or any possible implementation thereof.
本实施例提供的视频播放装置、电子设备、计算机存储介质、计算机程序产品和芯片均用于执行上文所提供的视频播放方法,因此,其所能达到的有益效果可参考上文所提供的视频播放方法中的有益效果,此处不再赘述。The video playing device, electronic equipment, computer storage medium, computer program product and chip provided in this embodiment are all used to execute the video playing method provided above, therefore, the beneficial effects it can achieve can refer to the above provided The beneficial effects in the video playing method will not be repeated here.
附图说明Description of drawings
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings that need to be used in the description of the embodiments will be briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention. For those skilled in the art, other drawings can also be obtained based on these drawings without creative effort.
图1为本发明实施例提供的一种电子设备的结构示意图;FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention;
图2为本发明实施例提供的一种电子设备的软件结构示意图;FIG. 2 is a schematic diagram of a software structure of an electronic device provided by an embodiment of the present invention;
图3为本发明实施例提供的一种电子设备的用户界面示意图;FIG. 3 is a schematic diagram of a user interface of an electronic device provided by an embodiment of the present invention;
图4为本发明实施例提供的另一种电子设备的用户界面示意图;FIG. 4 is a schematic diagram of a user interface of another electronic device provided by an embodiment of the present invention;
图5为本发明实施例提供的又一种电子设备的用户界面示意图;FIG. 5 is a schematic diagram of a user interface of another electronic device provided by an embodiment of the present invention;
图6为本发明实施例提供的一种观看者活动区域的示意图;Fig. 6 is a schematic diagram of a viewer's activity area provided by an embodiment of the present invention;
图7为本发明实施例提供的一种生成第二区域的示意图;FIG. 7 is a schematic diagram of generating a second region provided by an embodiment of the present invention;
图8为本发明实施例提供的一种第一区域与第二区域的映射示意图;FIG. 8 is a schematic diagram of mapping between a first area and a second area provided by an embodiment of the present invention;
图9为本发明实施例提供的另一种第一区域与第二区域的映射示意图;FIG. 9 is a schematic diagram of another mapping between a first area and a second area provided by an embodiment of the present invention;
图10为本发明实施例提供的又一种第一区域与第二区域的映射示意图;FIG. 10 is a schematic diagram of another mapping between the first area and the second area provided by the embodiment of the present invention;
图11为本发明实施例提供的又一种第一区域与第二区域的映射示意图;Fig. 11 is a schematic diagram of another mapping between the first area and the second area provided by the embodiment of the present invention;
图12为本发明实施例提供的一种视频播放方法的流程示意图;FIG. 12 is a schematic flowchart of a video playback method provided by an embodiment of the present invention;
图13为本发明实施例提供的一种装置的结构示意图;Fig. 13 is a schematic structural diagram of a device provided by an embodiment of the present invention;
图14为本发明实施例提供的另一种装置的结构示意图;Fig. 14 is a schematic structural diagram of another device provided by an embodiment of the present invention;
图15为本发明实施例提供的一种芯片的结构示意图。FIG. 15 is a schematic structural diagram of a chip provided by an embodiment of the present invention.
具体实施方式Detailed ways
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。The technical solutions in the embodiments of the present invention will be described below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without making creative efforts belong to the protection scope of the present invention.
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。The term "and/or" in this article is just an association relationship describing associated objects, which means that there can be three relationships, for example, A and/or B can mean: A exists alone, A and B exist simultaneously, and there exists alone B these three situations.
本发明的说明书以及附图中的术语“第一”和“第二”等是用于区别不同的对象,或者用于区别对同一对象的不同处理,而不是用于描述对象的特定顺序。The terms "first" and "second" in the description and drawings of the present invention are used to distinguish different objects, or to distinguish different processes for the same object, rather than to describe a specific sequence of objects.
此外,本发明的描述中所提到的术语“包括”和“具有”以及它们的任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选的还包括其他没有列出的步骤或单元,或可选的还包括对于这些过程、方法、产品或设备固有的其他步骤或单元。In addition, the terms "including" and "having" and any variations thereof mentioned in the description of the present invention are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or units is not limited to the listed steps or units, but may optionally include other unlisted steps or units, or may optionally also include Other steps or elements inherent to the process, method, product or apparatus are included.
需要说明的是,本发明实施例的描述中,“示例性地”或者“例如”等词用于表示作例子、例证或说明。本发明实施例中被描述为“示例性地”或者“例如”的任何实施例或设计方案不应被解释为比其他实施例或设计方案更优选或更具优势。确切而言,使用“示例性地”或者“例如”等词旨在以具体方式呈现相关概念。It should be noted that, in the description of the embodiments of the present invention, words such as "exemplarily" or "for example" are used as examples, illustrations or illustrations. Any embodiment or design solution described as "exemplary" or "for example" in the embodiments of the present invention shall not be interpreted as being more preferred or more advantageous than other embodiments or design solutions. Rather, the use of words such as "exemplarily" or "for example" is intended to present related concepts in a concrete manner.
在本发明的描述中,除非另有说明,“多个”的含义是指两个或两个以上。In the description of the present invention, unless otherwise specified, the meaning of "plurality" refers to two or more than two.
日常生活中,用户可以通过调整VR视频的播放视角以观看VR视频的不同区域。例如,在房产销售,房产装修领域可以通过VR技术构建房屋的全景视频,以使得用户可以通过房屋的全景视频实现线上看房,用户可以通过调整房屋的全景视频的播放视角观看房屋的不同房间。In daily life, users can watch different areas of the VR video by adjusting the viewing angle of the VR video. For example, in the field of real estate sales and real estate decoration, VR technology can be used to build panoramic videos of houses, so that users can view houses online through panoramic videos of houses, and users can watch different rooms of houses by adjusting the viewing angle of the panoramic videos of houses. .
然而现有VR技术中,用户调整视频播放视角的方式主要为操作鼠标或触控屏幕,这些调整视频播放视角的方式缺乏交互感,从而导致用户沉浸感降低,用户体验较差。However, in the existing VR technology, the way for users to adjust the viewing angle of video playback is mainly by operating the mouse or touching the screen. These methods of adjusting the viewing angle of video playback lack a sense of interaction, resulting in reduced user immersion and poor user experience.
为此本发明实施例提供了一种视频播放方法,能够提升用户调整视频播放视角时的交互感。To this end, the embodiment of the present invention provides a video playback method, which can improve the user's sense of interaction when adjusting the viewing angle of the video playback.
本发明实施例提供的视频播放方法可以应用于智慧屏、手机、平板电脑、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)等具有视频(全景视频)播放功能的电子设备上,本发明实施例对电子设备的具体类型不作任何限制。The video playback method provided by the embodiments of the present invention can be applied to smart screens, mobile phones, tablet computers, notebook computers, ultra-mobile personal computers (ultra-mobile personal computers, UMPCs), netbooks, personal digital assistants (personal digital assistants, PDAs), etc. On an electronic device with a video (panoramic video) playback function, the embodiment of the present invention does not impose any limitation on the specific type of the electronic device.
示例性地,图1是本发明实施例提供的一例电子设备100的结构示意图。电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1, 天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。Exemplarily, FIG. 1 is a schematic structural diagram of an electronic device 100 provided by an embodiment of the present invention. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, and an antenna 2 , mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone jack 170D, sensor module 180, button 190, motor 191, indicator 192, camera 193, display screen 194, and A subscriber identification module (subscriber identification module, SIM) card interface 195 and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本发明另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。It can be understood that, the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 . In other embodiments of the present invention, the electronic device 100 may include more or fewer components than shown, or combine certain components, or separate certain components, or arrange different components. The illustrated components can be realized in hardware, software or a combination of software and hardware.
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。The processor 110 may include one or more processing units, for example: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, memory, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural network processor (neural-network processing unit, NPU) wait. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。Wherein, the controller may be the nerve center and command center of the electronic device 100 . The controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous transmitter (universal asynchronous receiver/transmitter, UART) interface, mobile industry processor interface (mobile industry processor interface, MIPI), general-purpose input and output (general-purpose input/output, GPIO) interface, subscriber identity module (subscriber identity module, SIM) interface, and /or universal serial bus (universal serial bus, USB) interface, etc.
其中,I2C接口是一种双向同步串行总线,处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸传感器180K通过I2C总线接口通信,实现电子设备100的触摸功能。MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。Wherein, the I2C interface is a bidirectional synchronous serial bus, and the processor 110 can couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the electronic device 100 . The MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 . MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc. In some embodiments, the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 . The processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100 .
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本发明另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。It can be understood that the interface connection relationship between the modules shown in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 . In other embodiments of the present invention, the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储 器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。The charging management module 140 is configured to receive a charging input from a charger. Wherein, the charger may be a wireless charger or a wired charger. The power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 . The power management module 141 receives the input of the battery 142 and/or the charging management module 140, and supplies power for the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160, etc.
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。The electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。The display screen 194 is used to display images, videos and the like. The display screen 194 includes a display panel. The display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active matrix organic light emitting diode or an active matrix organic light emitting diode (active-matrix organic light emitting diode, AMOLED), flexible light-emitting diode (flex light-emitting diode, FLED), Miniled, MicroLed, Micro-oLed, quantum dot light emitting diodes (quantum dot light emitting diodes, QLED), etc. In some embodiments, the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
电子设备100可以通过ISP,摄像头193,触摸传感器、视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。The electronic device 100 can realize the shooting function through the ISP, the camera 193 , the touch sensor, the video codec, the GPU, the display screen 194 and the application processor.
其中,ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。Wherein, the ISP is used for processing the data fed back by the camera 193 . For example, when taking a picture, open the shutter, the light is transmitted to the photosensitive element of the camera through the lens, and the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be located in the camera 193 .
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号,应理解,在本发明实施例的描述中,以RGB格式的图像为例进行介绍,本发明实施例对图像格式不做限定。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。 Camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects it to the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. DSP converts digital image signals into standard RGB, image signals in formats such as YUV. It should be understood that in the description of the embodiment of the present invention, the image in RGB format is used as an example for introduction, and the embodiment of the present invention does not limit the image format. . In some embodiments, the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100. The internal memory 121 may be used to store computer-executable program codes including instructions. The processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 . The internal memory 121 may include an area for storing programs and an area for storing data.
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。The electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。陀螺仪传感器 180B可以用于确定电子设备100的运动姿态。气压传感器180C用于测量气压。磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。温度传感器180J用于检测温度。触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。骨传导传感器180M可以获取振动信号。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。The pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal. The gyro sensor 180B can be used to determine the motion posture of the electronic device 100. The air pressure sensor 180C is used to measure air pressure. The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip leather case. The acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes). The distance sensor 180F is used to measure the distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F for distance measurement to achieve fast focusing. Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes. The ambient light sensor 180L is used for sensing ambient light brightness. The electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures. The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like. The temperature sensor 180J is used to detect temperature. Touch sensor 180K, also known as "touch panel". The touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”. The touch sensor 180K is used to detect a touch operation on or near it. The bone conduction sensor 180M can acquire vibration signals. The audio module 170 can analyze the voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 180M, so as to realize the voice function.
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。SIM卡接口195用于连接SIM卡。The keys 190 include a power key, a volume key and the like. The key 190 may be a mechanical key. It can also be a touch button. The electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 . The motor 191 can generate a vibrating reminder. The motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback. For example, touch operations applied to different applications (such as taking pictures, playing audio, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects for touch operations acting on different areas of the display screen 194 . The indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like. The SIM card interface 195 is used for connecting a SIM card.
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本发明实施例以一种分层架构的操作系统为例,示例性说明电子设备100的软件结构。The software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. In the embodiment of the present invention, an operating system with a layered architecture is taken as an example to illustrate the software structure of the electronic device 100 .
图2是本发明实施例的电子设备100的软件结构框图。分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将该操作系统分为四层,从上至下分别为应用程序层,应用程序框架层,操作系统运行时(runtime)和系统库,以及内核层。FIG. 2 is a block diagram of the software structure of the electronic device 100 according to the embodiment of the present invention. The layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces. In some embodiments, the operating system is divided into four layers, which are application program layer, application program framework layer, operating system runtime (runtime) and system library, and kernel layer from top to bottom.
应用程序层可以包括一系列应用程序包。如图2所示,应用程序包可以包括相机、相册、音乐、设置等应用程序。The application layer can consist of a series of application packages. As shown in FIG. 2, the application package may include application programs such as camera, photo album, music, and settings.
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。如图2所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,资源管理器,通知管理器等。The application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer. The application framework layer includes some predefined functions. As shown in Figure 2, the application framework layer can include window managers, content providers, view systems, resource managers, notification managers, etc.
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。A window manager is used to manage window programs. The window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。Content providers are used to store and retrieve data and make it accessible to applications. Said data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebook, etc.
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示 界面,可以包括显示文字的视图以及显示图片的视图。The view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. The view system can be used to build applications. A display interface can consist of one or more views. For example, a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,状态栏中显示通知信息可以短暂停留后自动消失,例如用于告知用户下载完成的消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,或者通知管理器还可以发出提示音,例如电子设备振动,指示灯闪烁等。The notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages. The notification information displayed in the status bar can disappear automatically after a short stay, such as a message reminder to inform the user that the download is complete. The notification manager can also be a notification that appears on the top status bar of the system in the form of a chart or scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, or the notification manager can also emit a prompt sound, such as electronic device vibration, indicator light flashing, and the like.
Runtime包括核心库和虚拟机。Runtime负责该操作系统的调度和管理。核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是该操作系统的核心库。Runtime includes core library and virtual machine. Runtime is responsible for the scheduling and management of the operating system. The core library consists of two parts: one part is the functions that the java language needs to call, and the other part is the core library of the operating system.
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。The application layer and the application framework layer run in virtual machines. The virtual machine executes the java files of the application program layer and the application program framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(media libraries),三维图形处理库(例如:开放图形库(open graphics library,OpenGL)嵌入式系统(embedded systems,ES)),2D图形引擎(例如:场景图库(scene graph library,SGL))等。A system library can include multiple function modules. For example: surface manager (surface manager), media library (media libraries), 3D graphics processing library (for example: open graphics library (open graphics library, OpenGL) embedded systems (embedded systems, ES)), 2D graphics engine (for example : Scene graph library (scene graph library, SGL)), etc.
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。The surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
媒体库支持多种常用的音频，视频格式回放和录制，以及静态图像文件等。媒体库可以支持多种音视频编码格式，例如：动态图像专家组(moving pictures experts group,MPEG)4，H.264，MP3，高级音频编码(advanced audio coding,AAC)，多速率自适应(adaptive multi rate,AMR)，图像专家联合小组(joint photographic experts group,JPG)，便携式网络图形(portable network graphics,PNG)等。The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library can support a variety of audio and video encoding formats, for example: moving pictures experts group (MPEG) 4, H.264, MP3, advanced audio coding (AAC), adaptive multi rate (AMR), joint photographic experts group (JPG), portable network graphics (PNG) and so on.
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。2D图形引擎是2D绘图的绘图引擎。The 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing, etc. 2D graphics engine is a drawing engine for 2D drawing.
内核层是硬件和软件之间的层。其中,内核层可以包含硬件驱动模块,例如显示驱动,摄像头驱动、传感器驱动等,应用程序框架层可以调用内核层的硬件驱动模块。The kernel layer is the layer between hardware and software. Wherein, the kernel layer may include hardware driver modules, such as display drivers, camera drivers, sensor drivers, etc., and the application framework layer may call the hardware driver modules of the kernel layer.
为了便于理解,本发明以下实施例将以具有图1和图2所示结构的电子设备为例,介绍本发明实施例提供的视频播放方法。For ease of understanding, the following embodiments of the present invention will use the electronic device with the structure shown in FIG. 1 and FIG. 2 as an example to introduce the video playing method provided by the embodiment of the present invention.
以下实施例中所涉及的技术方案均可以在具有上述硬件架构和软件架构的电子设备100中实现。The technical solutions involved in the following embodiments can all be implemented in the electronic device 100 having the above-mentioned hardware architecture and software architecture.
本发明实施例以电子设备100是智慧屏为例,结合附图对本发明实施例提供的技术方案进行详细说明。In this embodiment of the present invention, taking the electronic device 100 as an example of a smart screen, the technical solution provided by the embodiment of the present invention will be described in detail with reference to the accompanying drawings.
图3是本发明实施例提供的一种图形用户界面(graphical user interface,GUI)示意图，图3示出了智慧屏的屏幕显示系统显示的当前输出的界面内容，该界面内容为智慧屏的主界面。该界面内容显示了多款应用程序(application,App)，例如，VR、时钟、日历、图库、备忘录等应用程序。可以理解的是，界面内容还可以包括其他更多的应用程序，本发明实施例对此不作限定。Fig. 3 is a schematic diagram of a graphical user interface (GUI) provided by an embodiment of the present invention. Fig. 3 shows the currently output interface content displayed by the screen display system of the smart screen, and the interface content is the main interface of the smart screen. The interface content shows a plurality of application programs (Apps), for example, applications such as VR, clock, calendar, gallery, and memo. It can be understood that the interface content may also include other application programs, which is not limited in this embodiment of the present invention.
用户可以通过触摸智慧屏的屏幕上特定的控件、按压特定的物理按键或按键组合、输入语音、隔空手势等方式,指示智慧屏开启VR应用。响应于接收到用户开启VR应用的指示后,智慧屏启动VR应用。Users can instruct the smart screen to open VR applications by touching specific controls on the screen of the smart screen, pressing specific physical buttons or button combinations, inputting voice, and gestures in the air. In response to receiving an instruction from the user to start the VR application, the smart screen starts the VR application.
示例性地,用户可以通过操作智慧屏的遥控器在主界面上点击“VR”应用图标,指示智慧屏开启VR应用。For example, the user can click the "VR" application icon on the main interface by operating the remote control of the smart screen to instruct the smart screen to start the VR application.
又示例性地,用户可以通过语音指令(如“打开VR应用”)指示智慧屏开启VR应用并显示VR应用界面。As another example, the user can instruct the smart screen to start the VR application and display the VR application interface through a voice command (such as "open the VR application").
用户可以通过VR应用界面播放各类视频。上述视频包括但不限于VR视频、全景视频、VR全景视频、VR实景视频等3D全景视频。The user can play various types of videos through the VR application interface. The above videos include, but are not limited to, 3D panoramic videos such as VR videos, panoramic videos, VR panoramic videos, and VR real-scene videos.
示例性地,如图4所示,该VR应用为VR看房应用,用户可以指示智慧屏通过该应用播放房屋的3D全景视频以观看房屋的内部情况。可以理解的是,VR应用也可以为其他VR应用,用户还可以通过VR应用播放其他视频。如景点的视频、商场的视频或其他视频。Exemplarily, as shown in FIG. 4 , the VR application is a VR house viewing application, and the user can instruct the smart screen to play a 3D panoramic video of the house through the app to view the interior of the house. It can be understood that the VR application can also be other VR applications, and the user can also play other videos through the VR application. Such as videos of attractions, videos of shopping malls, or other videos.
生成上述视频的具体方法可以采用本领域技术人员能够想到的任何一种方法进行处理,本申请实施例对此不作具体限定。例如,通过全景摄像头采集实景图像生成视频。The specific method for generating the foregoing video may be processed by any method conceivable by those skilled in the art, which is not specifically limited in this embodiment of the present application. For example, a panoramic camera is used to collect real-world images to generate a video.
如图5所示,用户还可以指示智慧屏播放视频的预览导航图(也可以称为VR全景视频导航图),用户可以通过预览导航图确定当前播放视角在整个视频内所在的位置和朝向。其中,预览导航图是用于指示当前播放视角在整个视频内所在的位置和朝向的平面图。As shown in Figure 5, the user can also instruct the smart screen to play a preview navigation map of the video (also called a VR panoramic video navigation map), and the user can determine the position and orientation of the current playback perspective in the entire video through the preview navigation map. Wherein, the preview navigation map is a plan view used to indicate the position and orientation of the current playing angle in the entire video.
如图6所示,当用户想要调整视频的播放视角时,用户可以通过在观看者活动区域(以下简称为第一区域)活动指示VR应用视频的播放视角。其中,第一区域可以是在电子设备(如智慧屏)前面划定的用于限定用户观看视频的区域,通常为采集装置(如摄像头)可以清晰捕捉到用户位置和用户面部朝向的矩形区域。用户在第一区域内的位置可称为观看者位置。As shown in FIG. 6 , when the user wants to adjust the viewing angle of the video, the user can indicate the viewing angle of the VR application video by moving in the viewer's active area (hereinafter referred to as the first area). Wherein, the first area may be an area defined in front of an electronic device (such as a smart screen) to limit the user to watch videos, and is usually a rectangular area where a collection device (such as a camera) can clearly capture the user's position and user's face orientation. The location of the user within the first area may be referred to as the viewer location.
可选地,播放视角的初始位置可以为第二区域的中心。播放视角的初始朝向可以为朝前。其中,第二区域为视频对应的二维平面区域。Optionally, the initial position of the playback angle of view may be the center of the second area. The initial orientation of the playback perspective may be forward. Wherein, the second area is a two-dimensional plane area corresponding to the video.
可选地,如图7所示,可以根据视频对应的图形的长度最大值和宽度最大值做矩形以生成第二区域。其中,图7左边为视频对应的图形,图7右边为第二区域。Optionally, as shown in FIG. 7 , a rectangle may be formed according to the maximum length and maximum width of the graphic corresponding to the video to generate the second area. Wherein, the left side of FIG. 7 is the graphic corresponding to the video, and the right side of FIG. 7 is the second area.
示例性地，如图8所示，在第一区域中用户正在通过前方的智慧屏观看房屋的视频，当用户想要使视频的播放视角位置向前移动时，用户可以在第一区域中向前移动M1，智慧屏通过采集装置（如摄像头）得到用户移动信息（即用于表征用户移动方向和移动距离的信息）并测量得到用户与前方第一区域边界之间的距离（以下简称为第一距离）为a，然后智慧屏获取当前视角的位置与前方第二区域边界之间的距离（以下简称为第二距离）为b，之后智慧屏先根据用户移动信息确定播放视角的位置调整方向为向前，再根据用户移动信息、第一距离和第二距离确定播放视角的位置调整距离为M1*(b/a)=M2，最后将视频的播放视角位置向前移动M2。Exemplarily, as shown in Fig. 8, the user in the first area is watching a video of a house on the smart screen in front. When the user wants to move the position of the playback angle of view of the video forward, the user may move forward by M1 within the first area. The smart screen obtains the user movement information (that is, information representing the direction and distance of the user's movement) through the collection device (such as a camera) and measures the distance between the user and the front boundary of the first area (hereinafter referred to as the first distance) as a, and then obtains the distance between the position of the current angle of view and the front boundary of the second area (hereinafter referred to as the second distance) as b. The smart screen first determines, according to the user movement information, that the position adjustment direction of the playback angle of view is forward, then determines, according to the user movement information, the first distance and the second distance, that the position adjustment distance of the playback angle of view is M1*(b/a)=M2, and finally moves the position of the playback angle of view of the video forward by M2.
又示例性地，如图9所示，在第一区域中用户正在通过前方的智慧屏观看房屋的视频，当用户想要使视频的播放视角朝向向左移动90度时，用户可以在第一区域中将面部朝向向左旋转90度，智慧屏通过采集装置得到用户面部朝向信息（即用于表征用户面部朝向的方向和角度的信息），之后智慧屏先根据用户面部朝向信息确定播放视角的朝向调整方向为向左，再根据用户面部朝向信息确定播放视角的朝向调整角度为90度，最后将视频的播放视角的朝向向左调整90度。通过图9可见，在调整前播放视角的朝向为正对第二区域的左方，将视频的播放视角的朝向向左调整90度后，播放视角的朝向为正对第二区域的前方。As another example, as shown in Fig. 9, the user in the first area is watching a video of a house on the smart screen in front. When the user wants to turn the orientation of the playback angle of view of the video 90 degrees to the left, the user may rotate his or her face orientation 90 degrees to the left within the first area. The smart screen obtains the user's face orientation information (that is, information representing the direction and angle of the user's face orientation) through the collection device. The smart screen then first determines, according to the user's face orientation information, that the orientation adjustment direction of the playback angle of view is to the left, next determines, according to the user's face orientation information, that the orientation adjustment angle of the playback angle of view is 90 degrees, and finally turns the orientation of the playback angle of view of the video 90 degrees to the left. As can be seen from Fig. 9, before the adjustment the playback angle of view faces the left side of the second area; after the orientation of the playback angle of view of the video is turned 90 degrees to the left, the playback angle of view faces the front of the second area.
可以理解的是,用户也可以通过将面部朝向向上或向下(即抬头或低头)使视频的播放视角的朝向向上或向下移动。It can be understood that the user can also move the orientation of the viewing angle of the video up or down by turning the face up or down (that is, looking up or down).
又示例性地，如图10所示，第一区域中用户正在通过前方的智慧屏观看房屋的视频，当用户想要使视频的播放视角位置向后移动时，用户可以在第一区域中向后移动M3，智慧屏通过采集装置得到用户移动信息并测量得到用户与后方第一区域边界之间的距离为a，然后智慧屏获取当前视角的位置与后方第二区域边界之间的距离为b，之后智慧屏先根据用户移动信息确定播放视角的位置调整方向为向后，再根据用户移动信息、第一距离和第二距离确定播放视角的位置调整距离为M3*(b/a)=M4，最后将视频的播放视角位置向后移动M4。As another example, as shown in Fig. 10, the user in the first area is watching a video of a house on the smart screen in front. When the user wants to move the position of the playback angle of view of the video backward, the user may move backward by M3 within the first area. The smart screen obtains the user movement information through the collection device and measures the distance between the user and the rear boundary of the first area as a, and then obtains the distance between the position of the current angle of view and the rear boundary of the second area as b. The smart screen first determines, according to the user movement information, that the position adjustment direction of the playback angle of view is backward, then determines, according to the user movement information, the first distance and the second distance, that the position adjustment distance of the playback angle of view is M3*(b/a)=M4, and finally moves the position of the playback angle of view of the video backward by M4.
又示例性地，如图11所示，第一区域中用户正在通过前方的智慧屏观看房屋的视频，当用户想要使视频的播放视角位置向右移动时，用户可以在第一区域中向右移动M5，智慧屏通过采集装置得到用户移动信息并测量得到用户与右方第一区域边界之间的距离为a，然后智慧屏获取当前视角的位置与右方第二区域边界之间的距离为b，之后智慧屏先根据用户移动信息确定播放视角的位置调整方向为向右，再根据用户移动信息、第一距离和第二距离确定播放视角的位置调整距离为M5*(b/a)=M6，最后将视频的播放视角位置向右移动M6。As another example, as shown in Fig. 11, the user in the first area is watching a video of a house on the smart screen in front. When the user wants to move the position of the playback angle of view of the video to the right, the user may move to the right by M5 within the first area. The smart screen obtains the user movement information through the collection device and measures the distance between the user and the right boundary of the first area as a, and then obtains the distance between the position of the current angle of view and the right boundary of the second area as b. The smart screen first determines, according to the user movement information, that the position adjustment direction of the playback angle of view is to the right, then determines, according to the user movement information, the first distance and the second distance, that the position adjustment distance of the playback angle of view is M5*(b/a)=M6, and finally moves the position of the playback angle of view of the video to the right by M6.
上述采集用户面部朝向信息、采集用户移动信息和测量距离的具体方法可以采用本领域技术人员能够想到的任何一种方法进行处理,本申请实施例对此不做具体限定。例如,可以通过红外检测、可穿戴传感器、室内定位等方式采集用户面部朝向信息和用户移动信息。The above specific methods of collecting the user's face orientation information, collecting the user's movement information, and measuring the distance can be processed by any method conceivable by those skilled in the art, which is not specifically limited in this embodiment of the present application. For example, user face orientation information and user movement information can be collected through infrared detection, wearable sensors, and indoor positioning.
智慧屏还可以根据用户面部朝向信息和用户移动信息模拟事件（如触控手势事件、语音事件等）调整视频的播放视角。The smart screen can also simulate events (such as touch gesture events, voice events, etc.) according to the user's face orientation information and the user movement information, so as to adjust the playback angle of view of the video.
示例性地，VR应用支持左滑、右滑、上滑、下滑、双击触控手势事件调整视频的播放视角。其中，左滑触控手势事件对应向右调整播放视角的朝向，右滑触控手势事件对应向左调整播放视角的朝向，上滑触控手势事件对应向下调整播放视角的朝向，下滑触控手势事件对应向上调整播放视角的朝向，双击触控手势事件对应向前调整播放视角的位置。在用户通过VR应用观看视频时，智慧屏可以采集并根据用户面部朝向信息和用户移动信息模拟上述触控手势事件调整视频的播放视角。例如，智慧屏采集的用户面部朝向信息指示用户的面部朝向的方向为向上（即抬头），则智慧屏可以根据该信息模拟下滑触控手势事件，从而向上调整播放视角的朝向。又例如，智慧屏采集的用户移动信息指示用户移动的方向为向前，则智慧屏可以根据该信息模拟双击触控手势事件，从而向前调整播放视角的位置。Exemplarily, the VR application supports slide-left, slide-right, slide-up, slide-down and double-tap touch gesture events for adjusting the playback angle of view of the video. The slide-left touch gesture event corresponds to adjusting the orientation of the playback angle of view to the right, the slide-right touch gesture event corresponds to adjusting the orientation of the playback angle of view to the left, the slide-up touch gesture event corresponds to adjusting the orientation of the playback angle of view downward, the slide-down touch gesture event corresponds to adjusting the orientation of the playback angle of view upward, and the double-tap touch gesture event corresponds to adjusting the position of the playback angle of view forward. When the user watches a video through the VR application, the smart screen may collect the user's face orientation information and the user movement information and simulate the above touch gesture events according to this information, so as to adjust the playback angle of view of the video. For example, if the user's face orientation information collected by the smart screen indicates that the user's face is oriented upward (that is, the user raises his or her head), the smart screen may simulate a slide-down touch gesture event according to this information, thereby adjusting the orientation of the playback angle of view upward. For another example, if the user movement information collected by the smart screen indicates that the user moves forward, the smart screen may simulate a double-tap touch gesture event according to this information, thereby adjusting the position of the playback angle of view forward.
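For illustration only, the correspondence just described between the collected information and the simulated touch gesture events can be sketched in Java as follows. The class, method and value names are assumptions made for this sketch and do not appear in the embodiment; only the mapping itself (a raised head simulating a slide-down event, a forward movement simulating a double tap, and so on) is taken from the description above.

```java
// Illustrative sketch of the gesture-event simulation; all names are hypothetical.
final class GestureSimulator {
    enum TouchGesture { SLIDE_LEFT, SLIDE_RIGHT, SLIDE_UP, SLIDE_DOWN, DOUBLE_TAP }

    /** Face orientation direction -> simulated gesture that turns the view the same way. */
    static TouchGesture fromFaceDirection(String faceDirection) {
        switch (faceDirection) {
            case "up":    return TouchGesture.SLIDE_DOWN;  // slide-down turns the playback orientation upward
            case "down":  return TouchGesture.SLIDE_UP;    // slide-up turns the playback orientation downward
            case "left":  return TouchGesture.SLIDE_RIGHT; // slide-right turns the playback orientation to the left
            case "right": return TouchGesture.SLIDE_LEFT;  // slide-left turns the playback orientation to the right
            default: throw new IllegalArgumentException("unsupported face direction: " + faceDirection);
        }
    }

    /** Movement direction -> simulated gesture; a forward movement maps to a double tap. */
    static TouchGesture fromMoveDirection(String moveDirection) {
        if ("forward".equals(moveDirection)) {
            return TouchGesture.DOUBLE_TAP; // the double tap moves the playback position forward
        }
        throw new IllegalArgumentException("no gesture event illustrated here for: " + moveDirection);
    }
}
```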
下面结合附图对本发明实施例提供的视频播放方法进行详细说明。The video playing method provided by the embodiment of the present invention will be described in detail below in conjunction with the accompanying drawings.
如图12所示,本发明实施例提供视频播放方法,包括:As shown in Figure 12, the embodiment of the present invention provides a video playback method, including:
S1201、视频播放装置采集目标信息。S1201. The video playback device collects target information.
其中,目标信息包括用户面部朝向信息和用户移动信息,用户面部朝向信息用于表征用户面部朝向的方向,用户移动信息用于表征用户移动的方向。Wherein, the target information includes the user's face orientation information and user movement information, the user's face orientation information is used to represent the direction of the user's face orientation, and the user movement information is used to represent the user's movement direction.
在一种可能的方式中,视频播放装置可以通过采集单元采集目标信息,采集单元包括图像采集单元、声音采集单元或红外采集单元中的至少一项。In a possible manner, the video playback device may collect target information through a collection unit, and the collection unit includes at least one of an image collection unit, a sound collection unit, or an infrared collection unit.
在另一种可能的实现方式中,视频播放装置可以接收采集装置发送的采集信息。其中,采集信息包括目标信息。In another possible implementation manner, the video playback device may receive the collection information sent by the collection device. Wherein, the collected information includes target information.
可选地,用户面部朝向信息还可用于表征用户面部朝向的角度。Optionally, the user's face orientation information can also be used to characterize the angle of the user's face orientation.
可选地,用户移动信息还用于表征用户移动的距离。Optionally, the user movement information is also used to characterize the distance traveled by the user.
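For illustration only, the target information collected in S1201, together with the optional angle and distance fields mentioned above, could be represented by a structure of the following form; the class and field names are hypothetical assumptions made for this sketch.

```java
// Illustrative representation of the target information; names are hypothetical.
final class TargetInfo {
    // User face orientation information: the direction of the face orientation,
    // optionally together with its angle.
    String faceDirection;     // e.g. "up", "down", "left", "right"
    Double faceAngleDegrees;  // optional; null when only the direction is available

    // User movement information: the direction of the movement,
    // optionally together with the distance moved.
    String moveDirection;     // e.g. "forward", "backward", "left", "right"
    Double moveDistance;      // optional; null when only the direction is available
}
```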
S1202、视频播放装置根据目标信息调整视频的播放视角。S1202. The video playing device adjusts the playing angle of the video according to the target information.
在一种可能的实现方式中,视频播放装置可以先根据用户面部朝向信息调整播放视角的朝向,然后根据用户移动信息调整播放视角的位置。In a possible implementation manner, the video playback device may first adjust the orientation of the playback angle of view according to the user's face orientation information, and then adjust the position of the playback angle of view according to the user's movement information.
在一种可能的实现方式中,上述根据用户面部朝向信息调整播放视角的朝向,包括:根据用户面部朝向信息确定播放视角的朝向调整方向;根据朝向调整方向和预设朝向调整量调整播放视角的朝向。In a possible implementation manner, the above-mentioned adjustment of the orientation of the playback angle of view according to the user's face orientation information includes: determining the orientation adjustment direction of the playback angle of view according to the user's face orientation information; adjusting the orientation of the playback angle of view according to the orientation adjustment direction and a preset orientation adjustment amount. towards.
示例性地,以用户面部朝向的方向为朝下,预设朝向调整量为30度为例。视频播放装置先根据用户面部朝向信息确定播放视角的朝向调整方向为向下,然后将播放视角的朝向向下调整30度。Exemplarily, it is assumed that the user's face is facing downwards and the preset orientation adjustment amount is 30 degrees as an example. The video playback device first determines that the orientation adjustment direction of the playback angle of view is downward according to the user's face orientation information, and then adjusts the orientation of the playback angle of view downward by 30 degrees.
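For illustration only, this implementation could be sketched as follows, using the preset orientation adjustment amount of 30 degrees from the example above. The class, interface and method names, as well as the sign conventions, are assumptions made for the sketch.

```java
// Sketch of adjusting the playback orientation by a preset adjustment amount; names are hypothetical.
final class PresetOrientationAdjuster {
    static final double PRESET_ADJUSTMENT_DEGREES = 30.0;

    interface PlaybackView {
        void rotatePitch(double degrees); // positive = upward
        void rotateYaw(double degrees);   // positive = to the left
    }

    static void adjust(PlaybackView view, String faceDirection) {
        // The face orientation information supplies only the adjustment direction;
        // the magnitude is always the preset orientation adjustment amount.
        switch (faceDirection) {
            case "up":    view.rotatePitch(+PRESET_ADJUSTMENT_DEGREES); break;
            case "down":  view.rotatePitch(-PRESET_ADJUSTMENT_DEGREES); break;
            case "left":  view.rotateYaw(+PRESET_ADJUSTMENT_DEGREES);   break;
            case "right": view.rotateYaw(-PRESET_ADJUSTMENT_DEGREES);   break;
            default: break; // unrecognized direction: leave the orientation unchanged
        }
    }
}
```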
在另一种可能的实现方式中,上述根据用户面部朝向信息调整播放视角的朝向,可以包括:根据用户面部朝向信息确定调整信息;根据调整信息调整播放视角的朝向。其中,调整信息用于表征调整后的播放视角的朝向的方向和播放视角的朝向的角度。In another possible implementation manner, the above-mentioned adjusting the orientation of the playback angle of view according to the user's face orientation information may include: determining adjustment information according to the user's face orientation information; and adjusting the orientation of the playback angle of view according to the adjustment information. Wherein, the adjustment information is used to represent the direction of the orientation of the adjusted playback angle of view and the angle of the orientation of the playback angle of view.
可选地,调整后的播放视角的朝向的方向和用户面部朝向的方向可以相同。Optionally, the direction of the adjusted playback viewing angle and the direction of the user's face may be the same.
可选地,调整后的播放视角的朝向的角度和用户面部朝向的角度可以相同。Optionally, the angle of the adjusted playing angle of view may be the same as the angle of the user's face.
示例性地,以用户面部朝向为朝上45度,当前视角的朝向为朝下30度为例。视频播放装置先根据用户面部朝向信息确定调整后的播放视角的朝向为朝上45度,然后将播放视角的朝向向上调整75度以使播放视角的朝向由朝下30度调整为朝上45度。可以看出调整后的播放视角的朝向(方向和角度)与用户面部朝向相同。Exemplarily, it is assumed that the user's face is facing upward at 45 degrees and the current viewing angle is facing downward at 30 degrees. The video playback device first determines that the orientation of the adjusted playback viewing angle is 45 degrees upward according to the user's face orientation information, and then adjusts the orientation of the playback viewing angle upward by 75 degrees so that the orientation of the playback viewing angle is adjusted from downward 30 degrees to upward 45 degrees . It can be seen that the orientation (direction and angle) of the adjusted playback viewing angle is the same as the orientation of the user's face.
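For illustration only, the following sketch reproduces the arithmetic of this example; the class and method names and the sign convention (positive angles pointing upward) are assumptions made for the sketch.

```java
// Sketch of the alternative implementation: the adjusted playback orientation equals the face orientation.
final class OrientationMatcher {
    private double viewPitchDegrees; // current orientation of the playback angle of view, signed degrees

    OrientationMatcher(double initialViewPitchDegrees) {
        this.viewPitchDegrees = initialViewPitchDegrees;
    }

    /** Returns the rotation applied so that the view ends up at the face angle. */
    double matchTo(double faceAngleDegrees) {
        double delta = faceAngleDegrees - viewPitchDegrees; // e.g. +45 - (-30) = +75
        viewPitchDegrees = faceAngleDegrees;                // the view now has the same orientation as the face
        return delta;
    }
}

// Example from the text: new OrientationMatcher(-30).matchTo(45) returns 75,
// i.e. the playback orientation is adjusted upward by 75 degrees to face 45 degrees upward.
```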
可以理解的是,用户在观看电子设备播放的视频时,用户的面部朝向通常是正对电子设备的屏幕。用户在改变面部朝向后,用户的面部朝向可能无法再正对电子设备的屏幕。因此可能会造成用户在改变面部朝向后无法继续观看电子设备播放的视频的情况发生。It can be understood that when a user watches a video played by an electronic device, the user's face is usually facing directly to the screen of the electronic device. After the user changes the face orientation, the user's face orientation may no longer face the screen of the electronic device. Therefore, the situation that the user cannot continue to watch the video played by the electronic device after changing the face orientation may occur.
为此,用户可以采用改变面部朝向并回正的方式调整播放视角的朝向。即用户可以先改变面部朝向,然后再恢复至之前的面部朝向(以下简称为回正过程)。在改变面部朝向过程中,视频播放装置可以根据用户面部朝向生成用户面部朝向信息,然后根据该信息调整播放视角的朝向。在回正过程中视频播放装置不会根据用户面部朝向生成用户面部朝向信息,也不会调整播放视角的朝向。To this end, the user can adjust the orientation of the playback viewing angle by changing the orientation of the face and returning it to the normal position. That is, the user can first change the face orientation, and then return to the previous face orientation (hereinafter referred to as the back-to-orientation process). In the process of changing the face orientation, the video playback device may generate user face orientation information according to the user's face orientation, and then adjust the orientation of the playback viewing angle according to the information. During the back-to-centering process, the video playback device will not generate user's face orientation information according to the user's face orientation, nor will it adjust the orientation of the playback viewing angle.
在一种可能的实现方式中,上述根据用户移动信息调整播放视角的位置,包括:根据用户移动信息确定播放视角的位置调整方向;根据位置调整方向和预设位置调整量调整播放视角的位置。In a possible implementation manner, the above-mentioned adjusting the position of the playback angle of view according to the user movement information includes: determining the position adjustment direction of the playback angle of view according to the user movement information; and adjusting the position of the playback angle of view according to the position adjustment direction and the preset position adjustment amount.
示例性地,以用户向前移动,预设移动调整量为N为例。视频播放装置先根据用户移动信息确定播放视角的位置调整方向为向前,然后将播放视角的位置向前调整N。Exemplarily, it is assumed that the user moves forward and the preset movement adjustment amount is N as an example. The video playback device first determines that the position adjustment direction of the playback angle of view is forward according to the user movement information, and then adjusts the position of the playback angle of view forward by N.
在另一种可能的实现方式中,根据用户移动信息调整播放视角的位置,包括:根据用户移动信息确定播放视角的位置调整方向和播放视角的位置调整距离;根据位置调整方向和位置调整距离调整播放视角的位置。In another possible implementation manner, adjusting the position of the playback angle of view according to the user movement information includes: determining the position adjustment direction and the position adjustment distance of the playback angle of view according to the user movement information; adjusting the direction and position adjustment distance according to the position adjustment The position of the playback camera.
可选地，位置调整距离与用户移动距离之间可以存在换算关系，换算关系可以为：位置调整距离=用户移动距离*(第二距离/第一距离)。其中，第一距离为用户与用户移动方向上第一目标区域边界之间的垂直距离，第二距离为播放视角的位置与用户移动方向上第二目标区域边界之间的垂直距离，第二区域为视频对应的二维平面区域，*为乘号，/为除号。例如，用户移动距离为30，第一距离为100，第二距离为200，则位置调整距离为30*(200/100)=60。Optionally, there may be a conversion relation between the position adjustment distance and the user movement distance, and the conversion relation may be: position adjustment distance = user movement distance * (second distance / first distance), where the first distance is the vertical distance between the user and the boundary of the first target area in the user's movement direction, the second distance is the vertical distance between the position of the playback angle of view and the boundary of the second target area in the user's movement direction, the second area is the two-dimensional plane area corresponding to the video, * is the multiplication sign, and / is the division sign. For example, if the user movement distance is 30, the first distance is 100, and the second distance is 200, the position adjustment distance is 30*(200/100)=60.
示例性地，以用户向后移动M1为例。视频播放装置先根据用户移动信息确定播放视角的位置调整方向为向后、位置调整距离为M2，然后将播放视角的位置向后调整M2。Exemplarily, take the case where the user moves backward by M1 as an example. The video playback device first determines, according to the user movement information, that the position adjustment direction of the playback angle of view is backward and that the position adjustment distance is M2, and then adjusts the position of the playback angle of view backward by M2.
其中，M2=M1*(b/a)，a为第一距离即用户与后方的第一区域边界之间的垂直距离，b为第二距离即播放视角的位置与后方的第二区域边界之间的垂直距离。Here, M2=M1*(b/a), where a is the first distance, that is, the vertical distance between the user and the rear boundary of the first area, and b is the second distance, that is, the vertical distance between the position of the playback angle of view and the rear boundary of the second area.
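For illustration only, the conversion relation used in these examples can be written as a small routine; the names are hypothetical, and the worked figures from the optional conversion relation above (a user movement distance of 30 with a first distance of 100 and a second distance of 200 giving a position adjustment distance of 60) are reproduced in the comment.

```java
// Sketch of the conversion relation between the user movement distance and the
// position adjustment distance; names are hypothetical.
final class PositionAdjustment {
    /**
     * firstDistance:  vertical distance from the user to the first-area boundary
     *                 in the movement direction (a in the example above).
     * secondDistance: vertical distance from the playback position to the second-area
     *                 boundary in the same direction (b in the example above).
     */
    static double distance(double userMoveDistance, double firstDistance, double secondDistance) {
        if (firstDistance <= 0) {
            return 0; // the user is already at the first-area boundary: no further movement is mapped
        }
        return userMoveDistance * (secondDistance / firstDistance);
    }
}

// Worked figures from the text: PositionAdjustment.distance(30, 100, 200) == 60.
```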
可以理解的是,若用户采用改变面部朝向并回正的方式调整播放视角的朝向,用户同一移动方向在调整播放视角的朝向前后所对应的第二距离可能会发生改变。因此,用户可以在移动到第一区域的边界后通过改变面部朝向并回正的方式使播放视角的朝向改变以便用户观看视频的其他位置。It can be understood that if the user adjusts the orientation of the playback viewing angle by changing the orientation of the face and returning it to the normal position, the second distance corresponding to the same movement direction of the user before and after adjusting the orientation of the playback viewing angle may change. Therefore, after moving to the boundary of the first area, the user can change the orientation of the playback viewing angle by changing the orientation of the face and returning to the normal position, so that the user can watch other positions of the video.
例如，图9中在用户面部朝向未向左旋转90度时，用户面部和播放视角均朝前，用户若向后移动，则在计算播放视角的位置移动距离时所使用的第二距离为播放视角的位置与后方的第二区域边界之间的垂直距离。而在用户面部朝向向左旋转90度并回正后，用户面部朝前但播放视角朝右，用户若向后移动，则在计算播放视角的位置移动距离时所使用的第二距离不再是播放视角的位置与后方的第二区域边界之间的垂直距离，而是播放视角的位置与左方的第二区域边界之间的垂直距离。For example, in Fig. 9, before the user's face orientation is rotated 90 degrees to the left, both the user's face and the playback angle of view face forward. If the user moves backward, the second distance used when calculating the position movement distance of the playback angle of view is the vertical distance between the position of the playback angle of view and the rear boundary of the second area. After the user's face orientation is rotated 90 degrees to the left and then returned, the user's face faces forward but the playback angle of view faces to the right. If the user then moves backward, the second distance used when calculating the position movement distance of the playback angle of view is no longer the vertical distance between the position of the playback angle of view and the rear boundary of the second area, but the vertical distance between the position of the playback angle of view and the left boundary of the second area.
可以看出，在本发明提供的视频播放方法中，用户在调整视频的播放视角时，无需通过操纵鼠标或触控屏幕，而是可以直接通过改变面部朝向和移动身体，使视频的播放视角随用户动作而调整，从而提升了用户调整视频的播放视角时的交互感，增加了用户观看视频时的沉浸感，提升了用户体验。It can be seen that, in the video playing method provided by the present invention, when adjusting the playback angle of view of a video, the user does not need to operate a mouse or touch a screen; instead, the user can directly change the face orientation and move the body so that the playback angle of view of the video is adjusted along with the user's actions. This improves the sense of interaction when the user adjusts the playback angle of view of the video, enhances the user's sense of immersion when watching the video, and improves the user experience.
本发明实施例还提供了一种视频播放装置,该装置包括处理单元,处理单元用于:采集目标信息,目标信息包括用户面部朝向信息和用户移动信息,用户面部朝向信息用于表征用户面部朝向的方向,用户移动信息用于表征用户移动的方向;根据目标信息调整视频的播放视角。The embodiment of the present invention also provides a video playback device, the device includes a processing unit, the processing unit is used to: collect target information, the target information includes user facial orientation information and user movement information, and the user facial orientation information is used to represent the user's facial orientation The direction of the user's movement information is used to represent the direction of the user's movement; adjust the viewing angle of the video according to the target information.
在一种可能的实现方式中,处理单元具体用于:根据用户面部朝向信息调整播放视角的朝向;根据用户移动信息调整播放视角的位置。In a possible implementation manner, the processing unit is specifically configured to: adjust the orientation of the playback angle of view according to the user's face orientation information; adjust the position of the playback angle of view according to the user movement information.
在一种可能的实现方式中,处理单元具体用于:根据用户面部朝向信息确定播放视角的朝向调整方向;根据朝向调整方向和预设朝向调整量调整播放视角的朝向。In a possible implementation manner, the processing unit is specifically configured to: determine an orientation adjustment direction of the playback viewing angle according to the user's face orientation information; and adjust the orientation of the playback viewing angle according to the orientation adjustment direction and a preset orientation adjustment amount.
可选地,用户面部朝向信息还可用于表征用户面部朝向的角度。Optionally, the user's face orientation information can also be used to characterize the angle of the user's face orientation.
在另一种可能的实现方式中,处理单元具体用于:根据用户面部朝向信息确定调整信息,调整信息用于表征调整后的播放视角的朝向的方向和播放视角的朝向的角度;根据调整信息调整播放视角的朝向。In another possible implementation manner, the processing unit is specifically configured to: determine adjustment information according to the user's face orientation information, where the adjustment information is used to characterize the orientation of the adjusted playback viewing angle and the orientation angle of the playback viewing angle; Adjust the orientation of the playback perspective.
可选地,调整后的播放视角的朝向的方向和用户面部朝向的方向可以相同。Optionally, the direction of the adjusted playback viewing angle and the direction of the user's face may be the same.
可选地,调整后的播放视角的朝向的角度和用户面部朝向的角度可以相同。Optionally, the angle of the adjusted playing angle of view may be the same as the angle of the user's face.
在一种可能的实现方式中,处理单元具体用于:根据用户移动信息确定播放视角的位置调整方向;根据位置调整方向和预设位置调整量调整播放视角的位置。In a possible implementation manner, the processing unit is specifically configured to: determine the position adjustment direction of the playback angle of view according to the user movement information; and adjust the position of the playback angle of view according to the position adjustment direction and the preset position adjustment amount.
可选地,用户移动信息还用于表征用户移动的距离。Optionally, the user movement information is also used to characterize the distance traveled by the user.
在另一种可能的实现方式中,处理单元具体用于:根据用户移动信息确定播放视角的位置调整方向和播放视角的位置调整距离;根据位置调整方向和位置调整距离调整播放视角的位置。In another possible implementation manner, the processing unit is specifically configured to: determine the position adjustment direction of the playback angle of view and the position adjustment distance of the playback angle of view according to the user movement information; and adjust the position of the playback angle of view according to the position adjustment direction and the position adjustment distance.
可选地,位置调整距离与用户移动距离之间可以存在换算关系,换算关系可以为:位置调整距离=用户移动距离*(第二距离/第一距离)。Optionally, there may be a conversion relationship between the position adjustment distance and the user movement distance, and the conversion relationship may be: position adjustment distance=user movement distance*(second distance/first distance).
在一种可能的实现方式中,处理单元具体用于:通过采集单元采集目标信息,采集单元包括图像采集单元、声音采集单元或红外采集单元中的至少一项。In a possible implementation manner, the processing unit is specifically configured to: collect target information through a collection unit, where the collection unit includes at least one of an image collection unit, a sound collection unit, or an infrared collection unit.
在另一种可能的实现方式中,处理单元具体用于:接收采集装置发送的采集信息,采集信息包括目标信息。In another possible implementation manner, the processing unit is specifically configured to: receive collection information sent by the collection device, where the collection information includes target information.
下面将结合图13和图14介绍用于执行上述视频播放方法的电子设备。An electronic device for performing the above video playing method will be introduced below with reference to FIG. 13 and FIG. 14 .
可以理解的是,电子设备为了实现上述功能,其包含了执行各个功能相应的硬件和/或软件模块。结合本文中所公开的实施例描述的各示例的算法步骤,本发明能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以结合实施例对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。It can be understood that, in order to realize the above functions, the electronic device includes hardware and/or software modules corresponding to each function. Combining the algorithm steps of each example described in the embodiments disclosed herein, the present invention can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or computer software drives hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions in combination with the embodiments for each specific application, but such implementation should not be regarded as exceeding the scope of the present invention.
本发明实施例可以根据上述方法示例对电子设备进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块可以采用硬件的形式实现。需要说明的是,本实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。In the embodiment of the present invention, the functional modules of the electronic device may be divided according to the above method examples. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The above integrated modules may be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic, and is only a logical function division, and there may be other division methods in actual implementation.
在采用对应各个功能划分各个功能模块的情况下,图13示出了上述实施例中涉及的电子设备的一种可能的组成示意图,如图13示,该装置1300可以包括:收发单元1301和处理单元1302,该处理单元1302可以实现上述方法实施例中由电子设备所执行的方法,和/或用于本文所描述的技术的其他过程。In the case of dividing each functional module corresponding to each function, FIG. 13 shows a possible composition diagram of the electronic device involved in the above embodiment. As shown in FIG. 13 , the apparatus 1300 may include: a transceiver unit 1301 and a processing A unit 1302, the processing unit 1302 may implement the method executed by the electronic device in the foregoing method embodiments, and/or other processes used in the technologies described herein.
需要说明的是,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。It should be noted that all relevant content of the steps involved in the above method embodiments can be referred to the function description of the corresponding function module, and will not be repeated here.
在采用集成的单元的情况下,装置1300可以包括处理单元、存储单元和通信单元。其中,处理单元可以用于对装置1300的动作进行控制管理,例如,可以用于支持装置1300执行上述各个单元执行的步骤。存储单元可以用于支持装置1300执行存储程序代码、和/或数据等。通信单元可以用于支持装置1300与其他设备的通信。In case of an integrated unit, the apparatus 1300 may include a processing unit, a storage unit and a communication unit. Wherein, the processing unit may be used to control and manage the actions of the apparatus 1300, for example, may be used to support the apparatus 1300 to execute the steps performed by the above-mentioned units. The storage unit may be used to support the device 1300 to execute stored program codes, and/or data, and the like. The communication unit may be used to support communication of the apparatus 1300 with other devices.
其中,处理单元可以是处理器或控制器。其可以实现或执行结合本发明公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,数字信号处理(digital signal processing,DSP)和微处理器的组合等等。存储单元可以是存储器。通信单元具体可以为射频电路、蓝牙芯片、无线保 真(wireless fidelity,Wi-Fi)芯片等与其他电子设备交互的设备。Wherein, the processing unit may be a processor or a controller. It can implement or execute the various illustrative logical blocks, modules and circuits described in connection with the present disclosure. The processor can also be a combination of computing functions, such as a combination of one or more microprocessors, a combination of digital signal processing (digital signal processing, DSP) and a microprocessor, and the like. The storage unit may be a memory. Specifically, the communication unit may be a radio frequency circuit, a bluetooth chip, a wireless fidelity (wireless fidelity, Wi-Fi) chip, and other devices that interact with other electronic devices.
在一种可能的实现方式中,本发明实施例所涉及的电子设备可以为具有图14所示结构的装置1400,该装置1400包括处理器1401和收发器1402。图13中的收发单元1301和处理单元1302所实现的相关功能可以由处理器1401来实现。In a possible implementation manner, the electronic device involved in this embodiment of the present invention may be an apparatus 1400 having the structure shown in FIG. 14 , where the apparatus 1400 includes a processor 1401 and a transceiver 1402 . Related functions implemented by the transceiver unit 1301 and the processing unit 1302 in FIG. 13 may be implemented by the processor 1401 .
可选地,该装置1400还可以包括存储器1403,该处理器1401和该存储器1403通过内部连接通路互相通信。图13中的存储单元所实现的相关功能可以由存储器1403来实现。Optionally, the apparatus 1400 may further include a memory 1403, and the processor 1401 and the memory 1403 communicate with each other through an internal connection path. The relevant functions implemented by the storage unit in FIG. 13 may be implemented by the memory 1403 .
本发明实施例还提供了一种计算机存储介质,该计算机存储介质中存储有计算机指令,当该计算机指令在电子设备上运行时,使得电子设备执行上述相关方法步骤实现上述实施例中的视频播放方法。The embodiment of the present invention also provides a computer storage medium, the computer storage medium stores computer instructions, and when the computer instructions are run on the electronic device, the electronic device executes the above related method steps to realize the video playback in the above embodiment method.
本发明实施例还提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述相关步骤,以实现上述实施例中的视频播放方法。An embodiment of the present invention also provides a computer program product, which, when running on a computer, causes the computer to execute the above-mentioned related steps, so as to realize the video playing method in the above-mentioned embodiment.
本发明实施例还提供了一种电子设备,这个装置具体可以是芯片、集成电路、组件或模块。具体的,该装置可包括相连的处理器和用于存储指令的存储器,或者该装置包括至少一个处理器,用于从外部存储器获取指令。当装置运行时,处理器可执行指令,以使芯片执行上述各方法实施例中的视频播放方法。The embodiment of the present invention also provides an electronic device, and this device may specifically be a chip, an integrated circuit, a component or a module. Specifically, the device may include a connected processor and a memory for storing instructions, or the device may include at least one processor for fetching instructions from an external memory. When the device is running, the processor can execute instructions, so that the chip executes the video playing method in the above method embodiments.
图15示出了一种芯片1500的结构示意图。芯片1500包括一个或多个处理器1501以及接口电路1502。可选的,上述芯片1500还可以包含总线1503。FIG. 15 shows a schematic structural diagram of a chip 1500 . The chip 1500 includes one or more processors 1501 and an interface circuit 1502 . Optionally, the above-mentioned chip 1500 may further include a bus 1503 .
处理器1501可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述视频播放方法的各步骤可以通过处理器1501中的硬件的集成逻辑电路或者软件形式的指令完成。The processor 1501 may be an integrated circuit chip with signal processing capabilities. During implementation, each step of the above video playing method may be completed by an integrated logic circuit of hardware in the processor 1501 or instructions in the form of software.
可选地,上述的处理器1501可以是通用处理器、数字信号处理(digital signal processing,DSP)器、集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本发明实施例中的公开的各方法、步骤。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。Optionally, the above-mentioned processor 1501 may be a general-purpose processor, a digital signal processing (digital signal processing, DSP) device, an integrated circuit (application specific integrated circuit, ASIC), a field-programmable gate array (field-programmable gate array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components. Various methods and steps disclosed in the embodiments of the present invention may be implemented or executed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
接口电路1502可以用于数据、指令或者信息的发送或者接收,处理器1501可以利用接口电路1502接收的数据、指令或者其他信息,进行加工,可以将加工完成信息通过接口电路1502发送出去。The interface circuit 1502 can be used for sending or receiving data, instructions or information. The processor 1501 can use the data, instructions or other information received by the interface circuit 1502 to process, and can send the processing completion information through the interface circuit 1502 .
可选的,芯片还包括存储器,存储器可以包括只读存储器和随机存取存储器,并向处理器提供操作指令和数据。存储器的一部分还可以包括非易失性随机存取存储器(non-volatile random access memory,NVRAM)。Optionally, the chip further includes a memory, which may include a read-only memory and a random access memory, and provides operation instructions and data to the processor. A portion of the memory may also include non-volatile random access memory (NVRAM).
可选的,存储器存储了可执行软件模块或者数据结构,处理器可以通过调用存储器存储的操作指令(该操作指令可存储在操作系统中),执行相应的操作。Optionally, the memory stores executable software modules or data structures, and the processor can execute corresponding operations by calling operation instructions stored in the memory (the operation instructions can be stored in the operating system).
可选的,芯片可以使用在本发明实施例涉及的电子设备或DOP中。可选的,接口电路1502可用于输出处理器1501的执行结果。关于本发明的一个或多个实施例提供的视频播放方法可参考前述各个实施例,这里不再赘述。Optionally, the chip can be used in the electronic device or DOP involved in the embodiment of the present invention. Optionally, the interface circuit 1502 may be used to output an execution result of the processor 1501 . Regarding the video playing method provided by one or more embodiments of the present invention, reference may be made to the foregoing embodiments, and details are not repeated here.
需要说明的,处理器1501、接口电路1502各自对应的功能既可以通过硬件设计实现,也可以通过软件设计来实现,还可以通过软硬件结合的方式来实现,这里不作限制。It should be noted that the corresponding functions of the processor 1501 and the interface circuit 1502 can be realized by hardware design, software design, or a combination of software and hardware, which is not limited here.
其中,本实施例提供的电子设备、计算机存储介质、计算机程序产品或芯片均用于执 行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。Wherein, the electronic device, computer storage medium, computer program product or chip provided in this embodiment is all used to execute the corresponding method provided above, therefore, the beneficial effects it can achieve can refer to the corresponding method provided above The beneficial effects in the method will not be repeated here.
本发明实施例还提供了一种终端设备,所述终端设备包括上述的电子设备。可选地,所述终端设备为车辆或者智能机器人等。An embodiment of the present invention also provides a terminal device, where the terminal device includes the above-mentioned electronic device. Optionally, the terminal device is a vehicle or an intelligent robot.
应理解,在本发明的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本发明实施例的实施过程构成任何限定。It should be understood that in various embodiments of the present invention, the sequence numbers of the above-mentioned processes do not mean the order of execution, and the execution order of each process should be determined by its functions and internal logic, rather than by the embodiment of the present invention. The implementation process constitutes any limitation.
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。Those skilled in the art can appreciate that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be regarded as exceeding the scope of the present invention.
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that for the convenience and brevity of the description, the specific working process of the above-described system, device and unit can refer to the corresponding process in the foregoing method embodiment, which will not be repeated here.
In the several embodiments provided by the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the device embodiments described above are merely illustrative. For example, the division of the units is only a division by logical function; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the above functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
The first application in the embodiments of the present invention is an application related to audio content. Optionally, the first application may also be extended to an application of video content, or an application of audio and video content.
The above descriptions are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and such changes or substitutions shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (21)

1. A video playback method, characterized in that the method comprises:
    collecting target information, wherein the target information includes user face orientation information and user movement information, the user face orientation information is used to characterize the direction in which the user's face is oriented, and the user movement information is used to characterize the direction in which the user moves;
    adjusting a playback viewing angle of a video according to the target information.
2. The method according to claim 1, characterized in that the adjusting the playback viewing angle of the video according to the target information comprises:
    adjusting an orientation of the playback viewing angle according to the user face orientation information;
    adjusting a position of the playback viewing angle according to the user movement information.
3. The method according to claim 2, characterized in that the adjusting the orientation of the playback viewing angle according to the user face orientation information comprises:
    determining an orientation adjustment direction of the playback viewing angle according to the user face orientation information;
    adjusting the orientation of the playback viewing angle according to the orientation adjustment direction and a preset orientation adjustment amount.
4. The method according to claim 2, characterized in that the face orientation information is further used to characterize an angle of the user's face orientation, and the adjusting the orientation of the playback viewing angle according to the user face orientation information comprises:
    determining adjustment information according to the user face orientation information, wherein the adjustment information is used to characterize the direction and the angle of the adjusted orientation of the playback viewing angle;
    adjusting the orientation of the playback viewing angle according to the adjustment information.
5. The method according to any one of claims 2 to 4, characterized in that the adjusting the position of the playback viewing angle according to the user movement information comprises:
    determining a position adjustment direction of the playback viewing angle according to the user movement information;
    adjusting the position of the playback viewing angle according to the position adjustment direction and a preset position adjustment amount.
6. The method according to any one of claims 2 to 4, characterized in that the user movement information is further used to characterize a distance moved by the user, and the adjusting the position of the playback viewing angle according to the user movement information comprises:
    determining a position adjustment direction of the playback viewing angle and a position adjustment distance of the playback viewing angle according to the user movement information;
    adjusting the position of the playback viewing angle according to the position adjustment direction and the position adjustment distance.
7. The method according to any one of claims 1 to 6, characterized in that the collecting target information comprises:
    collecting the target information by a collection unit, wherein the collection unit includes at least one of an image collection unit, a sound collection unit, or an infrared collection unit.
8. The method according to any one of claims 1 to 6, characterized in that the collecting target information comprises:
    receiving collection information sent by a collection device, wherein the collection information includes the target information.
9. A video playback apparatus, characterized in that it comprises a processing unit, wherein the processing unit is configured to:
    collect target information, wherein the target information includes user face orientation information and user movement information, the user face orientation information is used to characterize the direction in which the user's face is oriented, and the user movement information is used to characterize the direction in which the user moves;
    adjust a playback viewing angle of a video according to the target information.
10. The apparatus according to claim 9, characterized in that the processing unit is specifically configured to:
    adjust an orientation of the playback viewing angle according to the user face orientation information;
    adjust a position of the playback viewing angle according to the user movement information.
11. The apparatus according to claim 10, characterized in that the processing unit is specifically configured to:
    determine an orientation adjustment direction of the playback viewing angle according to the user face orientation information;
    adjust the orientation of the playback viewing angle according to the orientation adjustment direction and a preset orientation adjustment amount.
12. The apparatus according to claim 10, characterized in that the face orientation information is further used to characterize an angle of the user's face orientation, and the processing unit is specifically configured to:
    determine adjustment information according to the user face orientation information, wherein the adjustment information is used to characterize the direction and the angle of the adjusted orientation of the playback viewing angle;
    adjust the orientation of the playback viewing angle according to the adjustment information.
13. The apparatus according to any one of claims 10 to 12, characterized in that the processing unit is specifically configured to:
    determine a position adjustment direction of the playback viewing angle according to the user movement information;
    adjust the position of the playback viewing angle according to the position adjustment direction and a preset position adjustment amount.
14. The apparatus according to any one of claims 10 to 12, characterized in that the user movement information is further used to characterize a distance moved by the user, and the processing unit is specifically configured to:
    determine a position adjustment direction of the playback viewing angle and a position adjustment distance of the playback viewing angle according to the user movement information;
    adjust the position of the playback viewing angle according to the position adjustment direction and the position adjustment distance.
15. The apparatus according to any one of claims 9 to 14, characterized in that the processing unit is specifically configured to:
    collect the target information by a collection unit, wherein the collection unit includes at least one of an image collection unit, a sound collection unit, or an infrared collection unit.
16. The apparatus according to any one of claims 9 to 14, characterized in that the processing unit is specifically configured to:
    receive collection information sent by a collection device, wherein the collection information includes the target information.
17. A chip device, comprising a memory and a processor, wherein the memory is coupled to the processor, the memory stores code, and the processor is configured to execute the code; when the code is executed, the method according to any one of claims 1 to 8 is implemented.
18. A computer-readable storage medium for storing a computer program, characterized in that the computer program comprises instructions for implementing the method according to any one of claims 1 to 8.
19. A computer program product containing instructions, characterized in that, when the instructions are run on a computer or a processor, the computer or the processor is caused to implement the method according to any one of claims 1 to 8.
20. An electronic device, characterized by comprising the video playback apparatus according to any one of claims 9 to 16.
21. The electronic device according to claim 20, characterized in that the electronic device is a smart screen.
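For illustration only, the following is a minimal sketch of how the viewing-angle adjustment described in claims 1 to 6 might be implemented. It is not part of the claimed subject matter: the type, function, and parameter names (TargetInfo, PlaybackView, adjust_view, the preset step values) are hypothetical assumptions, and the orientation is simplified to a single yaw angle.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class TargetInfo:
    """Hypothetical container for the collected target information (claim 1)."""
    face_direction: float                # signed horizontal face direction, degrees (left < 0 < right)
    face_angle: float                    # magnitude of the face rotation, degrees (claim 4)
    move_direction: Tuple[float, float]  # unit vector of the user's movement direction (claim 1)
    move_distance: float                 # distance moved by the user, metres (claim 6)


@dataclass
class PlaybackView:
    """Hypothetical state of the playback viewing angle."""
    yaw: float            # orientation of the playback viewing angle, degrees
    position: List[float]  # position of the playback viewing angle, metres


# Preset adjustment amounts used by claims 3 and 5; the values are assumptions.
PRESET_ORIENTATION_STEP = 5.0  # degrees per update
PRESET_POSITION_STEP = 0.1     # metres per update


def adjust_view(view: PlaybackView, info: TargetInfo,
                use_measured_amounts: bool = True) -> PlaybackView:
    """Adjust the playback viewing angle according to the collected target information."""
    if use_measured_amounts:
        # Claims 4 and 6: the adjustment amounts come from the measured
        # face angle and movement distance.
        yaw_step = info.face_angle
        pos_step = info.move_distance
    else:
        # Claims 3 and 5: only the directions come from the collected
        # information; the amounts are preset.
        yaw_step = PRESET_ORIENTATION_STEP
        pos_step = PRESET_POSITION_STEP

    # Orientation of the playback viewing angle follows the face direction.
    view.yaw += math.copysign(yaw_step, info.face_direction)

    # Position of the playback viewing angle follows the movement direction.
    view.position[0] += info.move_direction[0] * pos_step
    view.position[1] += info.move_direction[1] * pos_step
    return view


# Example: the user turns the face 30 degrees to the left and moves 0.5 m forward.
view = PlaybackView(yaw=0.0, position=[0.0, 0.0])
info = TargetInfo(face_direction=-30.0, face_angle=30.0,
                  move_direction=(1.0, 0.0), move_distance=0.5)
adjust_view(view, info)  # yaw becomes -30.0, position becomes [0.5, 0.0]
```

The use_measured_amounts flag switches between the preset-amount behaviour of claims 3 and 5 and the measured-amount behaviour of claims 4 and 6.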
PCT/CN2022/127534 2021-11-05 2022-10-26 Video playback method and device WO2023078133A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111303353.5 2021-11-05
CN202111303353.5A CN116095405A (en) 2021-11-05 2021-11-05 Video playing method and device

Publications (1)

Publication Number Publication Date
WO2023078133A1 (en)

Family

ID=86208769

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/127534 WO2023078133A1 (en) 2021-11-05 2022-10-26 Video playback method and device

Country Status (2)

Country Link
CN (1) CN116095405A (en)
WO (1) WO2023078133A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104866809A (en) * 2014-02-25 2015-08-26 腾讯科技(深圳)有限公司 Picture playing method and device thereof
CN109448549A (en) * 2018-12-29 2019-03-08 河北三川科技有限公司 A method of based on recognition of face adjust automatically advertisement playing device angle
CN110673734A (en) * 2019-09-30 2020-01-10 京东方科技集团股份有限公司 Virtual tourism method, client, server, system and image acquisition equipment
US20210093978A1 (en) * 2019-09-30 2021-04-01 Boe Technology Group Co., Ltd. Virtual Tourism Method, Client, Server, System, Acquisition Device, and Medium
CN113132801A (en) * 2019-12-31 2021-07-16 中移(苏州)软件技术有限公司 Video playing control method, device, terminal and storage medium
CN112489578A (en) * 2020-11-19 2021-03-12 北京沃东天骏信息技术有限公司 Commodity presentation method and device

Also Published As

Publication number Publication date
CN116095405A (en) 2023-05-09

Similar Documents

Publication Publication Date Title
WO2021057830A1 (en) Information processing method and electronic device
WO2022100315A1 (en) Method for generating application interface, and related apparatus
WO2021129253A1 (en) Method for displaying multiple windows, and electronic device and system
WO2021120914A1 (en) Interface element display method and electronic device
WO2021036571A1 (en) Desktop editing method and electronic device
WO2021000881A1 (en) Screen splitting method and electronic device
CN113132526B (en) Page drawing method and related device
CN114816167B (en) Application icon display method, electronic device and readable storage medium
WO2022152024A1 (en) Widget display method and electronic device
WO2021213449A1 (en) Touch operation method and device
WO2021175272A1 (en) Method for displaying application information and related device
WO2021008589A1 (en) Application running mehod and electronic device
WO2024021519A1 (en) Card display method and terminal device
CN113536866A (en) Character tracking display method and electronic equipment
CN112068907A (en) Interface display method and electronic equipment
WO2023093169A1 (en) Photographing method and electronic device
WO2023040666A1 (en) Keyboard display method, foldable screen device, and computer-readable storage medium
CN115756268A (en) Cross-device interaction method and device, screen projection system and terminal
EP4207744A1 (en) Video photographing method and electronic device
WO2022213831A1 (en) Control display method and related device
WO2021254113A1 (en) Control method for three-dimensional interface and terminal
WO2022002213A1 (en) Translation result display method and apparatus, and electronic device
WO2022057384A1 (en) Photographing method and device
WO2021052488A1 (en) Information processing method and electronic device
WO2023040775A1 (en) Preview method, electronic device, and system

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22889158

Country of ref document: EP

Kind code of ref document: A1