WO2020177278A1 - Data processing method and apparatus, storage medium, and electronic device - Google Patents

Data processing method and apparatus, storage medium, and electronic device

Info

Publication number
WO2020177278A1
WO2020177278A1 (PCT/CN2019/102051)
Authority
WO
WIPO (PCT)
Prior art keywords
mapping relationship
image frame
time stamp
mapping
information
Prior art date
Application number
PCT/CN2019/102051
Other languages
English (en)
French (fr)
Inventor
罗创
Original Assignee
网易(杭州)网络有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 网易(杭州)网络有限公司 filed Critical 网易(杭州)网络有限公司
Priority to US17/057,768 (granted as US11265594B2)
Publication of WO2020177278A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/24 Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N 21/2408 Monitoring of the upstream path of the transmission network, e.g. client requests
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/266 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N 21/26603 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel for automatically generating descriptors from content, e.g. when it is not made available by its provider, using content analysis techniques
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/27 Server based end-user applications
    • H04N 21/274 Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N 21/2743 Video hosting of uploaded data from client
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N 21/42204 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N 21/42206 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
    • H04N 21/42224 Touch pad or touch panel provided on the remote control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N 21/44204 Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/4728 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/65 Transmission of management data between client and server
    • H04N 21/658 Transmission by the client directed to the server
    • H04N 21/6587 Control parameters, e.g. trick play commands, viewpoint selection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H04N 21/8547 Content authoring involving timestamps for synchronizing content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/858 Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
    • H04N 21/8583 Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot by creating hot-spots

Definitions

  • the present disclosure relates to the field of computer technology, and in particular to a data processing method, a data processing device, a computer-readable storage medium, and electronic equipment.
  • the purpose of the present disclosure is to provide a data processing method, a data processing device, a computer-readable storage medium, and an electronic device, so as to overcome at least to some extent the problem of insufficient software function perception due to limitations of related technologies.
  • a data processing method is provided, which is applied to a live broadcast terminal of a live video broadcast.
  • the method includes: determining an image frame of the video data, and acquiring position information of a functional area located in the image frame; acquiring encoding information of the functional area, and establishing a first mapping relationship between the encoding information and the position information; and acquiring the time stamp of the image frame, and establishing a second mapping relationship between the time stamp and the first mapping relationship.
  • the method further includes: acquiring a mapping relationship set of the second mapping relationships corresponding to each image frame in the video data; encoding the video data and the mapping relationship set to obtain data mapping encoding information; uploading the data mapping encoding information to a server.
  • the method further includes: acquiring a mapping relationship set of the second mapping relationships corresponding to each image frame in the video data; uploading the video data and the mapping relationship set to the server respectively.
  • the uploading of the video data and the mapping relationship set to the server respectively includes: determining whether the first mapping relationship corresponding to the time stamp of the image frame differs from the first mapping relationship corresponding to the time stamp of the previous image frame; if they differ, uploading the second mapping relationship corresponding to the time stamp of the image frame to the server.
  • the acquiring of position information of the functional area located in the image frame includes: determining a plurality of position points corresponding to the functional area on the image frame; acquiring location information of the plurality of position points.
  • the acquiring of position information of a functional area located in the image frame includes: determining a position area corresponding to the functional area on the image frame; acquiring location information of the position area.
  • a data processing device is provided, comprising: an information acquisition component configured to determine an image frame of video data and acquire position information of a functional area located in the image frame; a first mapping component configured to obtain encoding information of the functional area and establish a first mapping relationship between the encoding information and the position information; and a second mapping component configured to obtain a time stamp of the image frame and establish a second mapping relationship between the time stamp and the first mapping relationship.
  • a data processing method is provided, which is applied to the viewer end of a live video broadcast.
  • the method includes: determining an image frame in the video data, and receiving a touch operation acting on the image frame; obtaining the time stamp of the image frame and the touch position information of the touch operation on the image frame; based on the second mapping relationship, obtaining the first mapping relationship corresponding to the time stamp; and, based on the first mapping relationship, acquiring encoding information corresponding to the touch position information and performing the function corresponding to the functional area indicated by the encoding information, wherein the first mapping relationship includes the mapping relationship between the encoding information of the functional area in the image frame and the position information of the functional area, and the second mapping relationship includes the mapping relationship between the time stamp of the image frame and the first mapping relationship.
  • before the determining of the image frame in the video data and the receiving of the touch operation acting on the image frame, the method further includes: acquiring the second relationship information of the second mapping relationship; saving the second relationship information and the video data to a local database.
  • the obtaining of the first mapping relationship corresponding to the time stamp based on the second mapping relationship includes: determining whether the first mapping relationship corresponding to the time stamp is stored in the local database; if the first mapping relationship corresponding to the time stamp is not stored in the local database, uploading the time stamp to the server to obtain the corresponding first mapping relationship from the server.
  • a data processing device is provided, comprising: an instruction receiving component configured to determine an image frame in video data and receive a touch operation acting on the image frame; an instruction information component configured to obtain the time stamp of the image frame and the touch position information of the touch operation on the image frame; a first acquiring component configured to obtain, based on the second mapping relationship, the first mapping relationship corresponding to the time stamp; and a second acquiring component configured to acquire encoding information corresponding to the touch position information based on the first mapping relationship and execute the function corresponding to the functional area indicated by the encoding information, wherein the first mapping relationship includes the mapping relationship between the encoding information of the functional area in the image frame and the position information of the functional area, and the second mapping relationship includes the mapping relationship between the time stamp of the image frame and the first mapping relationship.
  • a data processing method is provided, comprising: obtaining a first mapping relationship between the time stamp of an image frame in video data and an area on the image frame that can accept instructions; determining whether the first mapping relationship corresponding to the time stamp of the image frame is consistent with the first mapping relationship corresponding to the time stamp of the next image frame; and, if the two are consistent, integrating the two first mapping relationships respectively corresponding to the image frame and the next image frame into one first mapping relationship.
  • a data processing device is provided, comprising: a relationship acquisition component configured to acquire a first mapping relationship between the time stamp of an image frame in video data and an area on the image frame that can accept instructions; a mapping determination component configured to determine whether the first mapping relationship corresponding to the time stamp of the image frame is consistent with the first mapping relationship corresponding to the time stamp of the next image frame; and a relationship integration component configured to, if the two are consistent, integrate the two first mapping relationships respectively corresponding to the image frame and the next image frame into one first mapping relationship.
  • an electronic device including: a processor and a memory; wherein the memory is connected to the processor and is configured to store computer-readable instructions, the computer-readable instructions When executed by the processor, the data processing method in any of the foregoing exemplary embodiments is implemented.
  • a computer-readable storage medium having a computer program stored thereon, and the computer program, when executed by a processor, implements the data processing method in any of the foregoing exemplary embodiments.
  • Fig. 1 schematically shows a flow chart of a data processing method in an exemplary embodiment of the present disclosure
  • Fig. 2 schematically shows a flow chart of a method for obtaining location information of a functional area in one of the exemplary embodiments of the present disclosure
  • FIG. 3 schematically shows a schematic flowchart of another method for obtaining location information of a functional area in an exemplary embodiment of the present disclosure
  • FIG. 4 schematically shows a schematic flowchart of a method for processing a second mapping relationship in one of the exemplary embodiments of the present disclosure
  • FIG. 5 schematically shows a schematic flow chart of another method for obtaining a second mapping relationship in an exemplary embodiment of the present disclosure
  • Fig. 6 schematically shows a schematic flowchart of a method for uploading video data and a second mapping relationship in one of the exemplary embodiments of the present disclosure
  • Fig. 7 schematically shows a structural diagram of a data processing device in one of the exemplary embodiments of the present disclosure
  • FIG. 8 schematically shows a schematic flowchart of another data processing method in one of the exemplary embodiments of the present disclosure
  • FIG. 9 schematically shows a schematic flowchart of a method for obtaining a first mapping relationship corresponding to a timestamp in one of the exemplary embodiments of the present disclosure
  • FIG. 10 schematically shows a flow chart of a method for saving a second mapping relationship to a local database in one of the exemplary embodiments of the present disclosure
  • FIG. 11 schematically shows a schematic structural diagram of another data processing device in one of the exemplary embodiments of the present disclosure.
  • FIG. 12 schematically shows a schematic flowchart of still another data processing method in one of the exemplary embodiments of the present disclosure
  • FIG. 13 schematically shows a schematic structural diagram of still another data processing device in one of the exemplary embodiments of the present disclosure
  • FIG. 14 schematically shows a flow chart of a data processing method corresponding to the anchor end in one of the exemplary embodiments of the present disclosure
  • FIG. 15 schematically shows a schematic diagram of an application interface for capturing a screen to generate a video image in one of the exemplary embodiments of the present disclosure
  • FIG. 16 schematically shows a flow chart of a data processing method corresponding to the audience side in one of the exemplary embodiments of the present disclosure
  • Fig. 17 schematically shows a flow chart of a data processing system in one of the exemplary embodiments of the present disclosure
  • FIG. 18 schematically shows an electronic device for implementing a data processing method in one of the exemplary embodiments of the present disclosure
  • Fig. 19 schematically shows a computer-readable storage medium for implementing a data processing method in one of the exemplary embodiments of the present disclosure.
  • the terms “a”, “an”, “the” and “said” are used to indicate the presence of one or more elements/components/etc.; the terms “including” and “have” are used in an open-ended, inclusive sense and mean that there may be elements/components/etc. other than those listed; the terms “first” and “second” etc. are used only as labels and do not limit the number of their objects.
  • Figure 1 shows a flow chart of the data processing method. As shown in Figure 1, the data processing method at least includes the following steps:
  • in step S101, the image frame of the video data is determined, and the position information of the functional area located in the image frame is acquired.
  • in step S102, the encoding information of the functional area is acquired, and a first mapping relationship is established between the encoding information and the location information.
  • in step S103, the time stamp of the image frame is acquired, and a second mapping relationship is established between the time stamp and the first mapping relationship.
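  • purely as an illustration, the following Python sketch shows one way the two mapping relationships of steps S101-S103 could be represented; the names (Rect, build_first_mapping, add_second_mapping) are assumptions for this sketch and do not come from the patent.

```python
from typing import Dict, Tuple

Rect = Tuple[int, int, int, int]  # (x, y, width, height) in screen pixels
FirstMapping = Dict[Rect, str]    # R: location info -> encoding info

def build_first_mapping(areas: Dict[Rect, str]) -> FirstMapping:
    # Step S102: associate each functional area's location with its code.
    return dict(areas)

def add_second_mapping(second_mapping: Dict[int, FirstMapping],
                       timestamp: int, first_mapping: FirstMapping) -> None:
    # Step S103: F maps the image frame's time stamp to its R.
    second_mapping[timestamp] = first_mapping

# Example: one image frame with two functional areas.
R = build_first_mapping({(1, 2, 3, 4): "code1", (10, 2, 3, 4): "code2"})
F: Dict[int, FirstMapping] = {}
add_second_mapping(F, 1_000, R)  # the frame with time stamp 1000 uses R
```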
  • the viewer can, under the guidance of the host, perform touch operations on the functional area (which may be an area designated by the host during the live broadcast, or set automatically by the live broadcast terminal software or live broadcast server) to achieve the effect of turning on the corresponding functions of the software.
  • on the one hand, this solves the problem of low perception of some software functions, improves the connection and interaction between the audience and the software functions of the live broadcast software, and facilitates the retention of users; on the other hand, it realizes many functional operation areas in the limited display layout space of the live broadcast software while avoiding obstruction of the audience's video screen; on yet another hand, it provides a new interactive method for the live broadcast end and the audience end of the live video broadcast, increasing the interest of the video broadcast and enhancing the audience's viewing experience.
  • in step S101, the image frame of the video data is determined, and the position information of the functional area in the image frame is acquired.
  • the video data itself is composed of successive image frames, and a frame of image is a still picture, and the continuous image frames form a video.
  • the video data can be composed of 60 image frames or 80 image frames, and the two can display the same content, but there will be obvious differences in the smoothness of the video data.
  • the method for determining the image frame of the video data may be to decompose the video data. This exemplary embodiment does not limit the number of image frames included in the video data.
  • the video data can be decomposed to obtain 4 image frames.
  • this exemplary embodiment does not limit the format of the video data. It can be real-time live video data, or video data in formats such as mp4, avi, mkv, dvd, flv, etc., or other formats.
  • the video file can also include single-channel grayscale video data and three-channel color video data.
  • the position of the functional area in the video frame can be determined through the screen coordinate system.
  • the screen coordinate system takes the lower left corner of the screen as the origin and uses pixels as the unit; the coordinate axes extend along the screen but do not exceed its maximum width and maximum height.
  • the position information of the functional area in the image frame can be determined.
  • the functional area may be an area selected by the host during the live broadcast, or it may be automatically set by the live broadcast software or the live broadcast server; this exemplary embodiment places no special restrictions on the specific method of determining the functional area or on attributes such as its shape.
  • in step S201, a number of position points corresponding to the functional area are determined on the image frame.
  • the functional area can be composed of multiple location points, and each location point can be determined in the image frame.
  • in step S202, position information of the multiple position points is acquired. Since these position points are established based on the screen coordinate system, the position information of each position point can be determined corresponding to the pixel value of the screen coordinate system, thereby determining the location information of the functional area.
  • for example, if the position information of the position points is (1,2), (1,3), (2,2) and (2,3), the functional area is a square with a side length of 1 whose four corner coordinates are determined, so the coordinate information of the functional area is unique and determined. This exemplary embodiment does not specifically limit the method of determining the position information of the functional area through position points, which can be determined according to the actual situation.
  • a schematic flow chart of another method for acquiring position information of a functional area in an image frame includes at least the following steps:
  • in step S301, the location area corresponding to the functional area is determined on the image frame.
  • the functional area may be an area with corresponding functions, and the area corresponds to a position area in the screen coordinate system of the image frame.
  • in step S302, the location information of the location area is acquired; through the pixel values of the screen coordinate system, the location information of the location area can be determined.
  • for example, the location information of the location area of the functional area may be (1, 2, 3, 4), indicating that the functional area is a location area whose starting point has coordinates (1, 2), with a width of 3 and a height of 4, the unit of the location area being the pixel value.
  • in this way, the method for determining the location information can be selected to suit different functional areas, making the approach more targeted and the determined location information more accurate.
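  • a minimal Python sketch of the two ways of obtaining the location information described above; rect_from_points and the bounding-box rule are illustrative assumptions, since the patent leaves the derivation method open.

```python
from typing import Iterable, Tuple

Rect = Tuple[int, int, int, int]  # (x, y, width, height) in pixels

def rect_from_points(points: Iterable[Tuple[int, int]]) -> Rect:
    # Steps S201-S202: derive the functional area's location info from
    # its position points (here, the bounding box of the points).
    pts = list(points)
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

# The points (1,2), (1,3), (2,2), (2,3) give a square of side length 1.
assert rect_from_points([(1, 2), (1, 3), (2, 2), (2, 3)]) == (1, 2, 1, 1)

# Steps S301-S302: the location area may instead be given directly,
# e.g. (1, 2, 3, 4): start point (1, 2), width 3, height 4, in pixels.
direct_area: Rect = (1, 2, 3, 4)
```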
  • in step S102, the encoding information of the functional area is acquired, and a first mapping relationship is established between the encoding information and the location information.
  • the functional area is an area that has been functionally coded, and coding information of the functional area can be acquired.
  • a first mapping relationship between the obtained position information and encoding information of the functional area can be established.
  • the first mapping relationship may be a set R of mapping relationships between functional areas and encoding information, and the specific form of R can be: ((x1,y1,width1,height1)->code1, (x2,y2,width2,height2)->code2, ...). It is worth noting that this exemplary embodiment does not specifically limit the specific form of the first mapping relationship.
  • in step S103, the time stamp of the image frame is acquired, and a second mapping relationship is established between the time stamp and the first mapping relationship.
  • the time stamp is usually a character sequence that uniquely identifies the time at a certain moment, and the time stamp of the image frame can uniquely identify the playback time of the image frame at a certain moment.
  • the character sequence refers to a string of characters composed of numbers, letters, and underscores.
  • FIG. 4 shows a schematic flow chart of the processing method after the second mapping relationship is established.
  • the method includes at least the following steps:
  • in step S401, the mapping relationship set of the second mapping relationships corresponding to each image frame in the video data is acquired; the second mapping relationship corresponding to each image frame can be determined.
  • the video data includes multiple image frames, so multiple second mapping relationships can also be determined. Collecting all the second mapping relationships in the video data can determine the mapping relationship set of the video data.
  • in step S402, the video data and the mapping relationship set are encoded to obtain data mapping encoding information.
  • the data mapping encoding information can be obtained according to the encoding result.
  • in step S403, the data mapping encoding information is uploaded to the server.
  • the data mapping and encoding information obtained after encoding may be uploaded to the server.
  • in this exemplary embodiment, the video data and the mapping relationship set are encoded and uploaded to the server at the same time; the two can be encoded into data mapping encoding information, so that there is a unique and definite relationship between the mapping relationship set stored on the server and the video data, which makes queries during decoding processing on the viewer side more convenient.
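  • the patent leaves open how the video data and the mapping relationship set are combined into data mapping encoding information; the following sketch assumes, purely for illustration, a length-prefixed JSON header followed by the video bytes.

```python
import json
from typing import Dict, Tuple

Rect = Tuple[int, int, int, int]
MappingSet = Dict[int, Dict[Rect, str]]  # F: timestamp -> R

def encode_data_mapping(video_bytes: bytes, mapping_set: MappingSet) -> bytes:
    # JSON keys must be strings, so each rect is serialized as "x,y,w,h".
    serializable = {
        str(ts): {",".join(map(str, rect)): code
                  for rect, code in first_mapping.items()}
        for ts, first_mapping in mapping_set.items()
    }
    header = json.dumps(serializable).encode("utf-8")
    # 8-byte length prefix, then the mapping set, then the video data.
    return len(header).to_bytes(8, "big") + header + video_bytes
```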
  • FIG. 5 shows a schematic flowchart of another processing method after the second mapping relationship is established.
  • the method includes at least the following steps: in step S501, the mapping relationship set of the second mapping relationships corresponding to each image frame in the video data is acquired.
  • the second mapping relationship corresponding to each image frame can be determined.
  • the video data includes multiple image frames, so multiple second mapping relationships can also be determined. Collecting all the second mapping relationships in the video data can determine the mapping relationship set of the video data.
  • in step S502, the video data and the mapping relationship set are uploaded to the server respectively. In order to make it convenient for the viewer terminal to obtain the video data and the mapping relationship set, the two may be uploaded separately.
  • FIG. 6 shows a schematic flowchart of a method of uploading video data and the second mapping relationship.
  • the method includes at least the following steps: in step S601, it is determined whether there is a difference between the first mapping relationship corresponding to the time stamp of the image frame and the first mapping relationship corresponding to the time stamp of the previous image frame. Since there is a one-to-one correspondence between the time stamp and the image frame, and each image frame has a corresponding first mapping relationship, there is a one-to-one correspondence between the time stamp and the first mapping relationship.
  • in step S602, if it is determined that there is a difference between the first mapping relationship corresponding to the time stamp of the image frame and the first mapping relationship corresponding to the time stamp of the previous image frame, the second mapping relationship corresponding to the time stamp of the image frame is uploaded to the server.
  • that is, the second mapping relationship corresponding to the time stamp is uploaded only in that case; if it is determined that there is no difference between the first mapping relationship corresponding to the time stamp of the image frame and the first mapping relationship corresponding to the time stamp of the previous image frame, the first mapping relationships associated with consecutive time stamps may be combined. For example, in 100 frames of video data, if the first mapping relationship of frames 1-100 is the same, only one corresponding storage record is saved; that is, only the second mapping relationship corresponding to the first frame is uploaded to the server.
  • in this exemplary embodiment, the video data and the mapping relationship set are uploaded respectively; before uploading, it can be determined whether there is a difference between the first mapping relationship corresponding to the time stamp at this moment and the first mapping relationship corresponding to the time stamp at the previous moment, and the upload to the server occurs only when there is a difference. This not only saves the workload and time of uploading to the server, but also saves server resources, making it a preferable uploading scheme.
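  • the difference-gated upload of steps S601-S602 can be sketched as follows; frames and upload are hypothetical stand-ins for the encoder output and the network call, not names from the patent.

```python
from typing import Callable, Dict, Iterable, Optional, Tuple

Rect = Tuple[int, int, int, int]
FirstMapping = Dict[Rect, str]

def upload_changed_mappings(
        frames: Iterable[Tuple[int, FirstMapping]],
        upload: Callable[[int, FirstMapping], None]) -> None:
    # Steps S601-S602: upload a frame's second mapping relationship only
    # when its first mapping relationship differs from the previous frame's.
    previous: Optional[FirstMapping] = None
    for timestamp, first_mapping in frames:
        if first_mapping != previous:         # difference detected
            upload(timestamp, first_mapping)  # send one F entry: ts -> R
        previous = first_mapping
```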
  • FIG. 7 shows a schematic structural diagram of a data processing device.
  • the data processing device 700 may include: an information acquisition component 701, a first mapping component 702, and a second mapping component 703, wherein:
  • the information acquisition component 701 is configured to determine the image frame of the video data and acquire the position information of the functional area in the image frame;
  • the first mapping component 702 is configured to acquire the coding information of the functional area and establish the coding information and the position information The first mapping relationship;
  • the second mapping component 703 is configured to obtain the timestamp of the image frame, and establish a second mapping relationship between the timestamp and the first mapping relationship.
  • although several modules or units of the data processing apparatus 700 are mentioned in the above detailed description, this division is not mandatory.
  • the features and functions of two or more modules or units described above may be embodied in one module or unit.
  • the features and functions of a module or unit described above can be further divided into multiple modules or units to be embodied.
  • FIG. 8 shows a schematic flowchart of the data processing method. As shown in FIG. 8, the data processing method at least includes the following steps:
  • in step S801, an image frame in the video data is determined, and a touch operation acting on the image frame is received;
  • in step S802, the time stamp of the image frame and the touch position information of the touch operation on the image frame are obtained;
  • in step S803, based on the second mapping relationship, the first mapping relationship corresponding to the time stamp is obtained;
  • in step S804, based on the first mapping relationship, the encoding information corresponding to the touch position information is acquired, and the function corresponding to the functional area indicated by the encoding information is executed, where the first mapping relationship includes the mapping relationship between the encoding information of the functional area in the image frame and the position information of the functional area, and the second mapping relationship includes the mapping relationship between the time stamp of the image frame and the first mapping relationship.
  • in this exemplary embodiment, the viewer can perform touch operations on the functional area under the guidance of the host, to achieve the effect of opening the corresponding function of the software.
  • on the one hand, it can accurately capture the touch position information of the audience's touch operation and execute the software function of the corresponding area, solving the problem of low perception of some software functions and improving the connection and interaction between the audience and the software functions of the live broadcast software, which helps retain users; on the other hand, it realizes many functional operation areas in the limited display layout space of the live broadcast software while avoiding obstruction of the audience's video screen, providing the audience with a better visual effect; on yet another hand, it provides a new interactive method for the live broadcast end and the audience end of the live video broadcast, increasing the interest of the live video broadcast and enhancing the viewer's viewing experience.
  • in step S801, an image frame in the video data is determined, and a touch operation acting on the image frame is received.
  • the video data itself is composed of successive image frames, and a frame of image is a still picture, and the continuous image frames form a video.
  • the video data can be composed of 60 image frames or 80 image frames, and the two can display the same content, but there will be obvious differences in the smoothness of the video data.
  • the method for determining the image frame of the video data may be to decompose the video data. This exemplary embodiment does not limit the number of image frames included in the video data.
  • the video data can be decomposed to obtain 4 image frames.
  • this exemplary embodiment also does not limit the format of the video data. It can be real-time live video data, or video files in mp4, avi, mkv, dvd, flv and other formats, or other formats.
  • the video data can also include single-channel gray-scale video data and three-channel color video data. Determine an image frame in the video data, and receive a touch operation acting on the image frame.
  • the touch operation may be a click operation, a sliding operation, a pressing operation, etc., and this exemplary embodiment does not specifically limit the specific touch operation.
  • in step S802, the time stamp of the image frame and the touch position information of the touch operation on the image frame are acquired.
  • the time stamp is usually a character sequence that uniquely identifies the time at a certain moment, and the time stamp of the image frame can uniquely identify the playback time of the image frame at a certain moment.
  • when the image frame of the video data is determined, the time stamp of the image frame can be uniquely determined; the touch position information of the touch operation in the image frame can be determined through the screen coordinate system.
  • the screen coordinate system takes the lower left corner of the screen as the origin and uses pixels as the unit; the coordinate axes extend along the screen but do not exceed its maximum width and maximum height.
  • the touch position information can be determined according to the corresponding pixels in the screen coordinate system of the touch operation.
  • the touch position information may be position information of multiple touch points.
  • the position information of multiple position points is acquired to determine the touch position information. Since these position points are established based on the screen coordinate system, the position information of each position point can be determined corresponding to the pixel value of the screen coordinate system, thereby determining the touch position information. For example, if the position information of the position points is (1,2), (1,3), (2,2) and (2,3), it can be determined that the active area of the touch operation is a square with a side length of 1 whose four corner coordinates are determined, so the coordinate information of the touch position can be uniquely determined.
  • the touch position information may be the position information of a touch area, and the position information of the touch area may be determined through the pixel value of the screen coordinate system.
  • for example, the position information of the touch area may be (1, 2, 3, 4), indicating that the touch area is a position area whose starting point has coordinates (1, 2), with a width of 3 and a height of 4, the unit of the position area being the pixel value; the touch position information is then determined according to the position information (1, 2, 3, 4) and is uniquely determined in the image frame.
  • in step S803, based on the second mapping relationship, the first mapping relationship corresponding to the time stamp is acquired.
  • FIG. 9 shows a schematic flowchart of a method for obtaining a first mapping relationship corresponding to a time stamp based on a second mapping relationship.
  • the method includes at least the following steps: In step S901, it is determined whether the first mapping relationship corresponding to the time stamp is stored in the local database. When the video data is decoded on the viewer side, after obtaining the time stamp of the video data, the time stamp is used to request the server to obtain the first mapping relationship. In order to reduce the amount of requests, the server will send the second mapping relationship of the video data to the viewer together to facilitate subsequent decoding.
  • the second mapping relationship includes the mapping relationship between the time stamp of the image frame and the first mapping relationship.
  • the form of the second mapping relationship may be F(timestamp->R), that is, a time stamp mapped to the first mapping relationship. Therefore, when decoding, the viewer first queries whether the first mapping relationship corresponding to the time stamp is stored in the local database; if it is, the first mapping relationship can be directly acquired. In step S902, if the first mapping relationship corresponding to the time stamp is not saved in the local database, the time stamp is uploaded to the server to obtain the first mapping relationship corresponding to the time stamp from the server.
  • that is, if the first mapping relationship corresponding to the time stamp is not found in the local database, a request is initiated to the server, the time stamp is uploaded to the server, and the first mapping relationship corresponding to the time stamp is obtained from the server.
  • in this exemplary embodiment, the corresponding first mapping relationship is queried from the server through the uploaded time stamp, which reduces the number of viewer visits to the server and saves concurrency resources on the server side.
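  • the local-database-first lookup of steps S901-S902 can be sketched as follows; local_db and fetch_from_server are hypothetical stand-ins for the viewer's local database and the server request.

```python
from typing import Callable, Dict, Tuple

Rect = Tuple[int, int, int, int]
FirstMapping = Dict[Rect, str]

def lookup_first_mapping(
        timestamp: int,
        local_db: Dict[int, FirstMapping],
        fetch_from_server: Callable[[int], FirstMapping]) -> FirstMapping:
    # Step S901: consult the local database first.
    first_mapping = local_db.get(timestamp)
    if first_mapping is None:
        # Step S902: on a miss, upload the time stamp to the server and
        # obtain the corresponding first mapping relationship from it.
        first_mapping = fetch_from_server(timestamp)
        local_db[timestamp] = first_mapping  # cache for later queries
    return first_mapping
```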
  • in step S804, based on the first mapping relationship, the encoding information corresponding to the touch position information is acquired, and the function corresponding to the functional area indicated by the encoding information is executed, wherein the first mapping relationship includes the mapping relationship between the encoding information of the functional area in the image frame and the position information of the functional area, and the second mapping relationship includes the mapping relationship between the time stamp of the image frame and the first mapping relationship.
  • the first mapping relationship includes the mapping relationship between the encoding information of the functional area and the position information of the functional area in the image frame, and the first mapping relationship may take the form of a mapping set R((x1,y1,width1,height1)->code1, (x2,y2,width2,height2)->code2, ...) between functional areas and encoding information; this exemplary embodiment does not specifically limit the form of the first mapping relationship.
  • the encoding information of the corresponding functional area can be obtained according to the touch position information, and the function indicated by the encoding information can be executed to enable the viewer to realize the corresponding function.
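  • a minimal sketch of step S804's lookup from touch position to encoding information; the rectangle containment test is an assumption consistent with the (x, y, width, height) location areas described above.

```python
from typing import Dict, Optional, Tuple

Rect = Tuple[int, int, int, int]

def code_at(touch: Tuple[int, int],
            first_mapping: Dict[Rect, str]) -> Optional[str]:
    # Find the encoding information whose functional area contains the
    # touch point; the matched code's function would then be executed.
    tx, ty = touch
    for (x, y, width, height), code in first_mapping.items():
        if x <= tx <= x + width and y <= ty <= y + height:
            return code
    return None  # the touch fell outside every functional area

R = {(1, 2, 3, 4): "code1", (10, 2, 3, 4): "code2"}
assert code_at((2, 3), R) == "code1"   # inside the first area
assert code_at((0, 0), R) is None      # no functional area hit
```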
  • the functional area may be designated by the host during the live broadcast, or may be automatically set by the live broadcast terminal software/live server. This exemplary embodiment does not specifically limit the specific determination form of the functional area.
  • FIG. 10 shows a schematic flowchart of a method for saving a second mapping relationship to a local database.
  • the method at least includes the following steps:
  • in step S1001, the second relationship information of the second mapping relationship is obtained; the second mapping relationship of the video data can then be saved to the local database. The second relationship information and the video data are saved to the local database together, so that the viewer can query the second mapping relationship in the local database when decoding.
  • FIG. 11 shows a schematic structural diagram of a data processing device.
  • the data processing device 1100 may include: an instruction receiving component 1101, an instruction information component 1102, a first acquiring component 1103, and a second acquiring component 1104, wherein:
  • the instruction receiving component 1101 is configured to determine the image frame in the video data and receive the touch operation acting on the image frame; the instruction information component 1102 is configured to obtain the time stamp of the image frame and the touch position information of the touch operation on the image frame; the first acquiring component 1103 is configured to obtain the first mapping relationship corresponding to the time stamp based on the second mapping relationship; the second acquiring component 1104 is configured to acquire the encoding information corresponding to the touch position information based on the first mapping relationship and to execute the function corresponding to the functional area indicated by the encoding information, wherein the first mapping relationship includes the mapping relationship between the encoding information of the functional area and the position information of the functional area in the image frame, and the second mapping relationship includes the mapping relationship between the time stamp of the image frame and the first mapping relationship.
  • although several modules or units of the data processing apparatus 1100 are mentioned in the above detailed description, this division is not mandatory.
  • the features and functions of two or more modules or units described above may be embodied in one module or unit.
  • the features and functions of a module or unit described above can be further divided into multiple modules or units to be embodied.
  • Fig. 12 shows a flowchart of the data processing method. As shown in Fig. 12, the data processing method at least includes the following steps:
  • step S1201 obtain a first mapping relationship between the timestamp of the image frame in the video data and the area on the image frame that can accept instructions;
  • step S1202 it is determined whether the first mapping relationship corresponding to the time stamp of the image frame is consistent with the first mapping relationship corresponding to the time stamp of the next frame of image;
  • in step S1203, if the first mapping relationship corresponding to the time stamp of the image frame is consistent with the first mapping relationship corresponding to the time stamp of the next image frame, the two first mapping relationships respectively corresponding to the image frame and the next image frame are integrated into one first mapping relationship.
  • the viewer can perform touch operations on the functional area under the guidance of the host.
  • the server of the live video broadcast can judge whether the first mapping relationship corresponding to the time stamp of the image frame in the received video data is consistent with the first mapping relationship of the received next frame time stamp. When the judgment result is consistent, the first mapping relationship of the two frames is merged.
  • in this exemplary embodiment, the established mapping relationships can be processed again, which not only saves server resources but also, from the perspective of a third party, reduces the query workload during the decoding process on the viewer side; to a certain extent, this makes the interaction between the viewer side and the live broadcast side smoother and more convenient.
  • step S1201 the first mapping relationship between the time stamp of the image frame in the video data and the area on the image frame that can receive the touch operation is acquired.
  • the video data itself is composed of successive image frames, and a frame of image is a still picture, and the continuous image frames form a video.
  • the video data can be composed of 60 image frames or 80 image frames, and the two can display the same content, but there will be obvious differences in the smoothness of the video data.
  • the method for determining the image frame of the video data may be to decompose the video data. This exemplary embodiment does not limit the number of image frames included in the video data.
  • the video data can be decomposed to obtain 4 image frames.
  • this exemplary embodiment also does not limit the format of the video data. It can be real-time live video data, or video files in mp4, avi, mkv, dvd, flv and other formats, or other formats.
  • the video data can also include single-channel gray-scale video data and three-channel color video data.
  • the timestamp is usually a character sequence that uniquely identifies the time at a certain moment, and the timestamp of the image frame can uniquely identify the playback time of the image frame at a certain moment. When determining the image frame of the video data, the timestamp of the image frame can be uniquely determined.
  • the touch operation may be a click operation, a sliding operation, a pressing operation, etc., and this exemplary embodiment does not specifically limit the specific touch operation.
  • the video data can be decomposed to obtain an image frame, and at the same time, the time stamp of the image frame and the first mapping relationship that can receive the touch operation can be obtained.
  • the first mapping relationship includes the mapping relationship between the encoding information of the functional area in the image frame and the location information of the area that can receive the touch operation.
  • the form of the first mapping relationship may be a mapping set R((x1,y1,width1,height1)->code1, (x2,y2,width2,height2)->code2, ...), wherein the position information of the area that can receive the touch operation can be determined through the screen coordinate system.
  • the screen coordinate system takes the lower left corner of the screen as the origin and uses pixels as the unit; the coordinate axes extend along the screen but do not exceed its maximum width and maximum height. Based on this, the position information of the area that can receive the touch operation can be determined.
  • the location information may be the location information of multiple touch points. For example, the location information of multiple location points is acquired to determine the location information of the area. Since these location points are established based on the screen coordinate system, the location information of each location point can be determined corresponding to the pixel value of the screen coordinate system to determine the location information of the area that can receive the touch operation.
  • for example, if the position information of the position points is (1,2), (1,3), (2,2) and (2,3), then the area that can receive touch operations is a square with a side length of 1 whose four corner coordinates are determined, and the position information of the area that can receive touch operations can be uniquely determined.
  • the position information of the area that can receive the touch operation may be the position information of a touch area, and the position information of the touch area can be determined by the pixel value of the screen coordinate system.
  • for example, the position information of the area that can receive touch operations may be (1, 2, 3, 4), indicating that the touch area is a location area whose starting point has coordinates (1, 2), with a width of 3 and a height of 4, the unit of the location area being the pixel value; the location information of the area that can receive the touch operation is then determined according to the position information (1, 2, 3, 4). This exemplary embodiment does not specifically limit the specific form of the location information.
  • the area that can receive touch operations may be directly designated by the host during the live broadcast process, or may be set by the live broadcast terminal software or the live broadcast server itself; this exemplary embodiment places no special restrictions on the specific method of determining the area, nor on attributes such as its shape.
  • step S1202 it is determined whether the first mapping relationship corresponding to the time stamp of the image frame is consistent with the first mapping relationship corresponding to the time stamp of the next frame image.
  • there is a one-to-one correspondence between the time stamp and the first mapping relationship.
  • because the image frames may be divided very finely, there may be no difference between the first mapping relationship corresponding to the time stamp of an image frame and the first mapping relationship corresponding to the time stamp of the previous image frame.
  • therefore, it is determined, according to the time stamp of the image frame, whether the first mapping relationship corresponding to that time stamp is consistent with the first mapping relationship corresponding to the time stamp of the next image frame, that is, whether the encoding information on the areas that can receive touch operations in the two image frames is consistent. If the judgment result is inconsistent, the functions included in the two image frames differ, and both first mapping relationships can be saved.
  • in step S1203, if the first mapping relationship corresponding to the time stamp of the image frame is consistent with the first mapping relationship corresponding to the time stamp of the next image frame, the two first mapping relationships respectively corresponding to the image frame and the next image frame are integrated into one first mapping relationship.
  • if the first mapping relationship corresponding to the time stamp of the image frame is consistent with the first mapping relationship corresponding to the time stamp of the next image frame, it means that only the time stamps of the two image frames change, while the functions included in them have not changed.
  • therefore, the two first mapping relationships corresponding to the image frame and the next frame can be integrated, and only one first mapping relationship is saved. At this time, the two time stamps correspond to the same first mapping relationship, which also facilitates the viewer's query on the server, saving query time and responding faster to the audience's demand to unlock functions.
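  • the integration of steps S1201-S1203 amounts to merging runs of consecutive time stamps whose first mapping relationships are consistent; a sketch follows (the run-length representation is an assumption, since the patent only says the two relationships are integrated into one).

```python
from typing import Dict, List, Tuple

Rect = Tuple[int, int, int, int]
FirstMapping = Dict[Rect, str]

def integrate(entries: List[Tuple[int, FirstMapping]]
              ) -> List[Tuple[List[int], FirstMapping]]:
    # Merge consecutive timestamps whose first mapping relationships are
    # consistent into a single stored record.
    merged: List[Tuple[List[int], FirstMapping]] = []
    for timestamp, first_mapping in entries:
        if merged and merged[-1][1] == first_mapping:
            merged[-1][0].append(timestamp)  # same R: extend the run
        else:
            merged.append(([timestamp], first_mapping))
    return merged

R = {(1, 2, 3, 4): "code1"}
# Frames 1-3 share R, frame 4 has a new R: two records remain.
assert len(integrate([(1, R), (2, R), (3, R), (4, {})])) == 2
```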
  • FIG. 13 shows a schematic structural diagram of a data processing device.
  • the data processing device 1300 may include: a relationship acquisition component 1301, a mapping judgment component 1302, and a relationship integration component 1303, wherein:
  • the relationship acquisition component 1301 is configured to acquire the first mapping relationship between the time stamp of the image frame in the video data and the area on the image frame that can receive touch operations; the mapping judgment component 1302 is configured to judge whether the first mapping relationship corresponding to the time stamp of the image frame is consistent with the first mapping relationship corresponding to the time stamp of the next image frame; the relationship integration component 1303 is configured to, if the first mapping relationship corresponding to the time stamp of the image frame is consistent with the first mapping relationship corresponding to the time stamp of the next image frame, integrate the two first mapping relationships respectively corresponding to the image frame and the next image frame into one first mapping relationship.
  • although several modules or units of the data processing apparatus 1300 are mentioned in the above detailed description, this division is not mandatory.
  • the features and functions of two or more modules or units described above may be embodied in one module or unit.
  • the features and functions of a module or unit described above can be further divided into multiple modules or units to be embodied.
FIG. 14 shows a schematic flow chart of the data processing method provided by the present disclosure on the host side. As shown in FIG. 14, the method includes at least the following steps. In step S1401, a video image is generated from a captured area of the screen; generating the video image in this way and encoding it is a commonly used way of starting a broadcast.
FIG. 15 shows a schematic diagram of an application interface in which the screen is captured to generate the video image. As shown in FIG. 15, the area corresponding to 1501 is the captured screen area used for the live broadcast. Once this area is selected, a rectangular area based on the screen coordinate system is obtained; the rectangular area is placed in the live broadcast area, and the functions contained in the rectangular area are mapped into the live broadcast area.
In step S1402, the live video area is sent to the software in which the rectangular area is located, and the software can return the first mapping relationship R composed of the functional areas located in the live broadcast area together with their position information and coded information: ((x1, y1, width1, height1)->code1, (x2, y2, width2, height2)->code2, ...). Furthermore, the position information of a functional area can be converted from position information based on the screen coordinate system into position information relative to the live broadcast area. For example, if the position information of a functional area is (1, 1, 4, 4), the relative position information within the live broadcast area converted from this coordinate information may be (0, 0, 4, 4). The specific conversion method is not limited in this exemplary embodiment, as long as the position of the functional area within the live broadcast area can be conveniently determined.
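A minimal sketch of this position conversion, assuming the functional area is given as (x, y, width, height) in screen coordinates and the live broadcast area as a rectangle anchored at a screen-coordinate origin; the helper name is hypothetical.

```python
def to_live_area_coords(area, live_origin):
    """Convert (x, y, width, height) from screen coordinates to coordinates
    relative to the live broadcast area; only the anchor point moves."""
    x, y, width, height = area
    ox, oy = live_origin
    return (x - ox, y - oy, width, height)


# Matches the example in the text: the screen position (1, 1, 4, 4) with the
# live broadcast area anchored at (1, 1) becomes the relative position (0, 0, 4, 4).
print(to_live_area_coords((1, 1, 4, 4), (1, 1)))  # (0, 0, 4, 4)
```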
In step S1403, when the video data is encoded, each generated video frame simultaneously generates a corresponding time stamp; the time stamp is a necessary condition for the viewer side to be able to watch the live video normally. In step S1404, a one-to-one correspondence is established between the obtained time stamps and the first mapping relationships to obtain the second mapping relationship.
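As an illustration of steps S1403-S1404, here is a minimal sketch of building the second mapping relationship F (timestamp->R), assuming each encoded frame yields a (timestamp, R) pair; the function name is illustrative.

```python
def build_second_mapping(frames):
    """frames: iterable of (timestamp, R) pairs produced during encoding.
    Returns F, the one-to-one second mapping relationship timestamp -> R."""
    return {timestamp: r for timestamp, r in frames}


R = {(0, 0, 4, 4): "code1"}
F = build_second_mapping([(1, R), (2, R), (3, R)])
print(F[2])  # the first mapping relationship R for the frame with time stamp 2
```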
In step S1405, the video data and the second mapping relationship are uploaded to the server. The types of servers may include personal computer (PC) servers, mainframes, and minicomputers, and may also include cloud servers; this exemplary embodiment does not specifically limit the type of server. Since live video is generally encoded at 16-30 frames per second, while the software in the captured screen area generally does not move quickly, the upload operation can be optimized as follows: only upload the second mapping relationships whose functional areas differ from those of the previous image frame. For example, if the video data has time stamps 1-100, the video pictures are all the same, and the first mapping relationships are also the same, then only the second mapping relationship with time stamp 1 needs to be uploaded, and the second mapping relationships of frames 2-100 are not uploaded. If, at time stamp 101, the first mapping relationship differs from that of frame 1, the second mapping relationship of frame 101 is then uploaded.
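A minimal sketch of this upload optimization, assuming the second mapping relationship is a dict from time stamp to the first mapping relationship R; upload_to_server is a hypothetical stand-in for the real upload call.

```python
def upload_to_server(timestamp, first_mapping):
    print(f"uploaded second mapping for time stamp {timestamp}")


def upload_changed_mappings(second_mappings):
    previous = None
    for timestamp, first_mapping in sorted(second_mappings.items()):
        if first_mapping != previous:  # differs from the previous frame
            upload_to_server(timestamp, first_mapping)
            previous = first_mapping


# Frames 1-100 share one mapping and frame 101 changes it,
# so only time stamps 1 and 101 are uploaded.
mappings = {ts: {(0, 0, 4, 4): "code1"} for ts in range(1, 101)}
mappings[101] = {(0, 0, 4, 4): "code2"}
upload_changed_mappings(mappings)
```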
After the live broadcast side uploads the function-encoded video data to the server, the viewer side must perform the corresponding decoding processing before it can execute software functions by clicking the corresponding positions on the video picture while watching the video.
FIG. 16 shows a schematic flow chart of the data processing method of the present disclosure on the viewer side. As shown in FIG. 16, the method includes at least the following steps. In step S1601, when the viewer side decodes the video data, the video data and the time stamp of each image frame can be obtained. In step S1602, the time stamp of the image frame is used to request the server; if a second mapping relationship corresponding to the time stamp exists on the server, it is delivered to the viewer side. In addition, an optimization is provided: the second mapping relationships corresponding to the time stamps following that time stamp are delivered to the viewer side at the same time, so as to prevent viewers from initiating too many requests and occupying the server's concurrent resources. After the viewer side receives the data returned by the server, it can be saved in the local database.
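A minimal sketch of this viewer-side lookup, assuming the server returns, for a requested time stamp, the matching second mapping together with those of subsequent time stamps in one response; request_server and the batch size are hypothetical stand-ins for the real network call.

```python
local_db = {}  # time stamp -> first mapping relationship R


def request_server(timestamp):
    # Illustrative canned response: this time stamp and the following ones.
    return {ts: {(0, 0, 4, 4): "code1"} for ts in range(timestamp, timestamp + 50)}


def first_mapping_for(timestamp):
    if timestamp not in local_db:
        # One request fetches a batch, so later frames hit the local database
        # instead of occupying the server's concurrent resources.
        local_db.update(request_server(timestamp))
    return local_db.get(timestamp)


print(first_mapping_for(1))   # triggers the request, then caches the batch
print(first_mapping_for(20))  # served directly from the local database
```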
In step S1603, when the viewer sends a touch operation to the live broadcast area, the time stamp of the image frame in which the viewer initiated the touch operation, as well as the touch position information of the touch operation, can be obtained. Based on the second mapping relationship, the first mapping relationship can be determined according to the time stamp, so as to determine the functional area hit by the touch operation. For example, if the first mapping relationship is (xi, yi, widthi, heighti)->codei, the coded information codei can be determined according to the touch position information; if the touch operation does not hit any functional area, a miss prompt can be returned. In step S1604, the viewer side returns the coded information to the software, and after the software receives the corresponding coded information, it executes the corresponding software function. For example, this can be the software function corresponding to a "daily task", such as a daily sign-in, a check-in, or another corresponding software function.
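A minimal sketch of this hit test, assuming the first mapping relationship R is a dict {(x, y, width, height): code}; execute_software_function is a hypothetical stand-in for handing the code back to the software.

```python
def execute_software_function(code):
    print(f"executing software function {code}")


def handle_touch(first_mapping, touch_x, touch_y):
    for (x, y, width, height), code in first_mapping.items():
        if x <= touch_x < x + width and y <= touch_y < y + height:
            execute_software_function(code)  # the touch hit a functional area
            return code
    return None  # miss: no functional area was hit


R = {(0, 0, 4, 4): "code_daily_task"}
handle_touch(R, 2, 3)  # hits the area and runs the corresponding function
```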
To systematize the data processing method provided by the present disclosure, the present disclosure also provides a data processing system. The functions performed by each module in the system and the corresponding operations are described in detail below. FIG. 17 shows the data processing system provided by the present disclosure.
As shown in FIG. 17, the data processing system includes: a software function encoding module 1701; a module 1702 on the host side for mapping the software functions located in the live broadcast area to image frames; a module 1703 on the viewer side for decoding time stamps, obtaining first mapping relationships, and processing viewer clicks; and a module 1704 on the server side for storing coded information and processing requests from the host side and the viewer side. The server-side module 1704 includes a coded-information storage and retrieval module and a request processing module.

The software function encoding module 1701 is used to encode the functional areas of the software that can receive touch operations, to provide an interface for querying software function coding information by points or areas of the screen coordinate system, and, upon receiving a request to execute software function coding information, to execute the corresponding software function.

The host-side module 1702 is used to query module 1701 for the first mapping relationships of the functional areas located in the live broadcast area, to generate the second mapping relationship of the video data when the time stamp is obtained after encoding the video, and to upload the mapping relationship set of second mapping relationships to the server module.

The viewer-side module 1703 is used, after decoding the video data to obtain a time stamp, to query the local database as to whether the corresponding second mapping relationship exists. If it exists, the first mapping relationship is returned directly to the viewer side; if it does not exist, a request to query the first mapping relationship corresponding to the time stamp is sent to the server module. If the second mapping relationship corresponding to the time stamp is found, the corresponding first mapping relationship is returned directly; moreover, the server module returns the second mapping relationships corresponding to the subsequent time stamps starting from that time stamp to the viewer side at the same time, and the viewer side saves them in the local database. If no corresponding second mapping relationship is found, feedback information is returned indicating that no second mapping relationship corresponding to the time stamp was found. When the viewer side sends a touch operation such as a click operation, the time stamp of the click operation can be obtained, and the corresponding first mapping relationship is calculated from the mapping relationship set of second mapping relationships in the local database. When the first mapping relationship is obtained, it is then calculated whether the first mapping relationship contains the software functional area corresponding to the touch position information of the click operation; when it does, the corresponding software function coding information is returned and sent to module 1701 to execute the corresponding function.

The coded-information storage and retrieval module within the server-side module 1704 is used to store, on the server side, the mapping relationship set of second mapping relationships for the time stamps of the video data from the host side, and, given the time stamp of a request sent by the viewer side, to retrieve the corresponding second mapping relationship and the set of subsequent second mapping relationships. The request processing module within the server-side module 1704 resides on the video server side and is used to process data requests from the host side and the viewer side and to return the processing results.
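A minimal sketch of the storage and retrieval side of module 1704, assuming second mappings are stored keyed by time stamp and that a request is answered with the matching mapping plus a batch of subsequent ones; the class and method names are illustrative, not part of the disclosed system.

```python
class MappingStore:
    def __init__(self):
        self.mappings = {}  # time stamp -> first mapping relationship R

    def store(self, second_mappings):
        """Store the mapping relationship set uploaded by the host side."""
        self.mappings.update(second_mappings)

    def retrieve_from(self, timestamp, batch=50):
        """Return the second mappings for this time stamp and the subsequent
        ones in the batch window, so one viewer request covers many frames."""
        return {ts: r for ts, r in self.mappings.items()
                if timestamp <= ts < timestamp + batch}


store = MappingStore()
store.store({1: {(0, 0, 4, 4): "code1"}, 101: {(0, 0, 4, 4): "code2"}})
print(store.retrieve_from(1))  # only time stamp 1 falls inside the 1-50 window
```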
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided. The electronic device 1800 according to such an embodiment of the present disclosure is described below with reference to FIG. 18. The electronic device 1800 shown in FIG. 18 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 18, the electronic device 1800 takes the form of a general-purpose computing device. The components of the electronic device 1800 may include, but are not limited to: the aforementioned at least one processing unit 1810, the aforementioned at least one storage unit 1820, a bus 1830 connecting different system components (including the storage unit 1820 and the processing unit 1810), and a display unit 1840. The storage unit stores program code, and the program code can be executed by the processing unit 1810, so that the processing unit 1810 performs the steps according to the various exemplary embodiments of the present disclosure described in the "Exemplary Methods" section of this specification.
The storage unit 1820 may include a readable medium in the form of a volatile storage unit, such as a random access memory (RAM) unit 1821 and/or a cache storage unit 1822, and may further include a read-only memory (ROM) unit 1823. The storage unit 1820 may also include a program/utility 1824 having a set of (at least one) program modules 1825. Such program modules 1825 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment.
The bus 1830 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus structures.
The electronic device 1800 may also communicate with one or more external devices 2000 (such as a keyboard, a pointing device, or a Bluetooth device), with one or more devices that enable a user to interact with the electronic device 1800, and/or with any device (such as a router or a modem) that enables the electronic device 1800 to communicate with one or more other computing devices. Such communication can be performed through an input/output (I/O) interface 1850. Moreover, the electronic device 1800 may also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 1860. As shown in the figure, the network adapter 1860 communicates with the other modules of the electronic device 1800 through the bus 1830. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 1800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
Through the description of the above embodiments, those skilled in the art will readily understand that the exemplary embodiments described here may be implemented by software, or by software combined with the necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium is also provided, on which a program product capable of implementing the above method of this specification is stored. In some possible embodiments, the various aspects of the present disclosure may also be implemented in the form of a program product, which includes program code; when the program product runs on a terminal device, the program code is used to cause the terminal device to perform the steps according to the various exemplary embodiments of the present disclosure described in the "Exemplary Method" section of this specification.
Referring to FIG. 19, a program product 1900 for implementing the above method according to an embodiment of the present disclosure is described. It may take the form of a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device, for example a personal computer. However, the program product of the present disclosure is not limited thereto.
In this document, a readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by, or in combination with, an instruction execution system, apparatus, or device. The program product may use any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.

A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium, and the readable medium may send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. The program code contained on a readable medium may be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, and the like, or any suitable combination of the above.
The program code used to perform the operations of the present disclosure can be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computing device, partly on the user's device, as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. Where a remote computing device is involved, the remote computing device can be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computing device (for example, through the Internet by means of an Internet service provider).

Abstract

The present disclosure belongs to the field of computer technology, and relates to a data processing method and device, a computer-readable storage medium, and an electronic device. The method includes: determining an image frame of video data, and acquiring position information of a functional area located in the image frame; acquiring coded information of the functional area, and establishing a first mapping relationship between the coded information and the position information; and acquiring a time stamp of the image frame, and establishing a second mapping relationship between the time stamp and the first mapping relationship. (FIG. 1)

Claims (15)

  1. A data processing method, applied to a live-broadcast side of live video streaming, the method comprising:
    determining an image frame of video data, and acquiring position information of a functional area located in the image frame;
    acquiring coded information of the functional area, and establishing a first mapping relationship between the coded information and the position information;
    acquiring a time stamp of the image frame, and establishing a second mapping relationship between the time stamp and the first mapping relationship.
  2. The data processing method according to claim 1, wherein after the establishing of the second mapping relationship between the time stamp and the first mapping relationship, the method further comprises:
    acquiring a mapping relationship set of the second mapping relationships corresponding to respective image frames in the video data;
    encoding the video data and the mapping relationship set to acquire data mapping coded information;
    uploading the data mapping coded information to a server.
  3. The data processing method according to claim 2, wherein after the establishing of the second mapping relationship between the time stamp and the first mapping relationship, the method further comprises:
    acquiring a mapping relationship set of the second mapping relationships corresponding to respective image frames in the video data;
    uploading the video data and the mapping relationship set separately to a server.
  4. The data processing method according to claim 3, wherein the uploading of the video data and the mapping relationship set separately to the server comprises:
    judging whether the first mapping relationship corresponding to the time stamp of the image frame differs from the first mapping relationship corresponding to the time stamp of the previous image frame;
    if it is judged that the first mapping relationship corresponding to the time stamp of the image frame differs from the first mapping relationship corresponding to the time stamp of the previous image frame, uploading the second mapping relationship corresponding to the time stamp of the image frame to the server.
  5. The data processing method according to claim 1, wherein the acquiring of the position information of the functional area located in the image frame comprises:
    determining, on the image frame, multiple position points corresponding to the functional area;
    acquiring position information of the multiple position points.
  6. The data processing method according to claim 1, wherein the acquiring of the position information of the functional area located in the image frame comprises:
    determining, on the image frame, a position region corresponding to the functional area;
    acquiring position information of the position region.
  7. A data processing device, comprising:
    an information acquisition component, configured to determine an image frame of video data and acquire position information of a functional area located in the image frame;
    a first mapping component, configured to acquire coded information of the functional area and establish a first mapping relationship between the coded information and the position information;
    a second mapping component, configured to acquire a time stamp of the image frame and establish a second mapping relationship between the time stamp and the first mapping relationship.
  8. A data processing method, applied to a viewer side of live video streaming, the method comprising:
    determining an image frame in video data, and receiving a touch operation acting on the image frame;
    acquiring a time stamp of the image frame, and touch position information of the touch operation acting on the image frame;
    acquiring, based on a second mapping relationship, a first mapping relationship corresponding to the time stamp;
    acquiring, based on the first mapping relationship, coded information corresponding to the touch position information, and executing a function corresponding to the functional area indicated by the coded information, wherein the first mapping relationship comprises a mapping relationship between the coded information of the functional area in the image frame and the position information of the functional area, and the second mapping relationship comprises a mapping relationship between the time stamp of the image frame and the first mapping relationship.
  9. The data processing method according to claim 8, wherein before the determining of the image frame in the video data and the receiving of the touch operation acting on the image frame, the method further comprises:
    acquiring second relationship information of the second mapping relationship;
    saving the second relationship information and the video data to a local database.
  10. The data processing method according to claim 9, wherein the acquiring, based on the second mapping relationship, of the first mapping relationship corresponding to the trigger time stamp comprises:
    judging whether a first mapping relationship corresponding to the time stamp is stored in the local database;
    if no first mapping relationship corresponding to the time stamp is stored in the local database, uploading the time stamp to a server so as to acquire the first mapping relationship corresponding to the time stamp from the server.
  11. A data processing device, comprising:
    an instruction receiving component, configured to determine an image frame in video data and receive a touch operation acting on the image frame;
    an instruction information component, configured to acquire a time stamp of the image frame and trigger position information of the touch operation acting on the image frame;
    a first acquisition component, configured to acquire, based on a second mapping relationship, a first mapping relationship corresponding to the time stamp;
    a second acquisition component, configured to acquire, based on the first mapping relationship, coded information corresponding to the touch position information, and to execute a function corresponding to the functional area indicated by the coded information, wherein the first mapping relationship comprises a mapping relationship between the coded information of the functional area in the image frame and the position information of the functional area, and the second mapping relationship comprises a mapping relationship between the time stamp of the image frame and the first mapping relationship.
  12. A data processing method, comprising:
    acquiring a first mapping relationship between a time stamp of an image frame in video data and an area on the image frame that can receive touch operations;
    judging whether the first mapping relationship corresponding to the time stamp of the image frame is consistent with the first mapping relationship corresponding to the time stamp of the next image frame;
    if the first mapping relationship corresponding to the time stamp of the image frame is consistent with the first mapping relationship corresponding to the time stamp of the next image frame, integrating the two first mapping relationships respectively corresponding to the image frame and the next image frame into one first mapping relationship.
  13. A data processing device, comprising:
    a relationship acquisition component, configured to acquire a first mapping relationship between a time stamp of an image frame in video data and an area on the image frame that can receive touch operations;
    a mapping judgment component, configured to judge whether the first mapping relationship corresponding to the time stamp of the image frame is consistent with the first mapping relationship corresponding to the time stamp of the next image frame;
    a relationship integration component, configured to integrate, if the first mapping relationship corresponding to the time stamp of the image frame is consistent with the first mapping relationship corresponding to the time stamp of the next image frame, the two first mapping relationships respectively corresponding to the image frame and the next image frame into one first mapping relationship.
  14. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the data processing method according to any one of claims 1-6, 8-10, or 12.
  15. An electronic device, comprising:
    a processor; and
    a memory, connected to the processor and configured to store executable instructions of the processor;
    wherein the processor is configured to implement, upon execution, the data processing method according to any one of claims 1-6, 8-10, or 12.
PCT/CN2019/102051 2019-03-05 2019-08-22 Data processing method and device, storage medium, and electronic device WO2020177278A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/057,768 US11265594B2 (en) 2019-03-05 2019-08-22 Data processing method and device, storage medium, electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910165365.2A 2019-03-05 2019-03-05 Data processing method and device, storage medium, and electronic device
CN201910165365.2 2019-03-05

Publications (1)

Publication Number Publication Date
WO2020177278A1 true WO2020177278A1 (zh) 2020-09-10

Family

ID=66919841

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/102051 WO2020177278A1 (zh) Data processing method and device, storage medium, and electronic device 2019-03-05 2019-08-22

Country Status (3)

Country Link
US (1) US11265594B2 (zh)
CN (1) CN109874026B (zh)
WO (1) WO2020177278A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109874026B (zh) 2019-03-05 2020-07-07 网易(杭州)网络有限公司 Data processing method and device, storage medium, and electronic device
CN114915771A (zh) * 2022-04-26 2022-08-16 深圳市企鹅网络科技有限公司 Online teaching method, system, device and storage medium based on image overlay

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101035279A (zh) * 2007-05-08 2007-09-12 孟智平 Method for using an information set in a video resource
CN105580355A (zh) * 2013-09-11 2016-05-11 辛赛股份有限公司 Dynamic binding of content transactional items
US20160210669A1 (zh) * 2015-01-17 2016-07-21 Alfred Xueliang Xin On-site sales and commercial search method and system
CN108401173A (zh) * 2017-12-21 2018-08-14 平安科技(深圳)有限公司 Interactive terminal and method for mobile live streaming, and computer-readable storage medium
CN108769808A (zh) * 2018-05-24 2018-11-06 安徽质在智能科技有限公司 Interactive video playback method and system
CN109874026A (zh) * 2019-03-05 2019-06-11 网易(杭州)网络有限公司 Data processing method and device, storage medium, and electronic device

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9108107B2 (en) * 2002-12-10 2015-08-18 Sony Computer Entertainment America Llc Hosting and broadcasting virtual events using streaming interactive video
CN101414473B (zh) * 2004-06-18 2013-01-23 松下电器产业株式会社 Reproduction device, program, and reproduction method
US8479107B2 (en) * 2009-12-31 2013-07-02 Nokia Corporation Method and apparatus for fluid graphical user interface
CA2876130A1 (en) * 2012-06-14 2013-12-19 Bally Gaming, Inc. System and method for augmented reality gaming
CN103780942B (zh) * 2013-12-23 2017-07-11 青岛海信电器股份有限公司 Information configuration method and device
US9619105B1 (en) * 2014-01-30 2017-04-11 Aquifi, Inc. Systems and methods for gesture based interaction with viewpoint dependent user interfaces
US10033926B2 (en) * 2015-11-06 2018-07-24 Google Llc Depth camera based image stabilization
CN106254941A (zh) * 2016-10-10 2016-12-21 乐视控股(北京)有限公司 Video processing method and device
US10929494B2 (en) * 2018-04-16 2021-02-23 Stops.com Ltd. Systems and methods for tagging objects for augmented reality
CN109194978A (zh) * 2018-10-15 2019-01-11 广州虎牙信息科技有限公司 Live video clipping method and device, and electronic device
US10593059B1 (en) * 2018-11-13 2020-03-17 Vivotek Inc. Object location estimating method with timestamp alignment function and related object location estimating device
US10937237B1 (en) * 2020-03-11 2021-03-02 Adobe Inc. Reconstructing three-dimensional scenes using multi-view cycle projection


Also Published As

Publication number Publication date
CN109874026A (zh) 2019-06-11
US11265594B2 (en) 2022-03-01
US20210409809A1 (en) 2021-12-30
CN109874026B (zh) 2020-07-07

Similar Documents

Publication Publication Date Title
US10497273B2 (en) Method and system for recording and playback of web-based instructions
CN109981711B (zh) Document dynamic playing method, device and system, and computer-readable storage medium
WO2021179882A1 (zh) Image drawing method and apparatus, readable medium, and electronic device
US20100268694A1 (en) System and method for sharing web applications
CN104539977A (zh) Live preview method and device
US6724918B1 (en) System and method for indexing, accessing and retrieving audio/video with concurrent sketch activity
CN110245304B (zh) Data sharing method, device, and computer-readable medium
WO2020177278A1 (zh) Data processing method and device, storage medium, electronic device
CN109767257B (zh) Advertisement delivery method and system based on big data analysis, and electronic device
CN104185040A (zh) Application synchronization method, application server, and terminal
CN111818383B (zh) Video data generation method, system, apparatus, electronic device, and storage medium
EP4096224A1 (en) Method and apparatus for displaying live clip
US20150029196A1 (en) Distribution management apparatus
JP2023522092A (ja) Interaction record generation method, apparatus, device, and medium
CN102722980A (zh) All-digital real-time multi-signal fusion processing method
US10841544B2 (en) Systems and methods for media projection surface selection
US9992528B2 (en) System and methods thereof for displaying video content
WO2022121778A1 (zh) Sound information processing method and apparatus, computer storage medium, and electronic device
US20140178035A1 (en) Communicating with digital media interaction bundles
US20130198791A1 (en) E-book-based on-line broadcasting study system and method
US20220385970A1 (en) Method for displaying live broadcast clip
WO2024045026A1 (zh) Display method, electronic device, display device, screen transmitter, and medium
US20210352124A1 (en) Custom generated real-time media on demand
CN111726687B (zh) Method and apparatus for generating display data
CN107018451B (zh) Scheduling method, apparatus and system for time-based hypermedia events

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19918445

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19918445

Country of ref document: EP

Kind code of ref document: A1