CN114666621A - Page processing method, device and equipment - Google Patents


Info

Publication number
CN114666621A
CN114666621A
Authority
CN
China
Prior art keywords
video
static
webpage
elements
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210286723.7A
Other languages
Chinese (zh)
Inventor
林啸洋
王鹏
顾文杰
李洪辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Cloud Computing Ltd
Original Assignee
Alibaba Cloud Computing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Cloud Computing Ltd
Priority to CN202210286723.7A
Publication of CN114666621A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/957 Browsing optimisation, e.g. caching or content distillation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2355 Processing of additional data, e.g. scrambling of additional data or processing content descriptors involving reformatting operations of additional data, e.g. HTML pages
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/485 End-user interface for client configuration

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiments of the present application provide a page processing method, apparatus, and device. The method includes: determining a plurality of elements in a first webpage and element information of the elements, where the element information includes a display period and a display position of each element in the first webpage; acquiring the display duration of the first webpage; processing the plurality of elements according to the display duration and the element information to obtain video information corresponding to the first webpage; and sending the video information corresponding to the first webpage to an electronic device. This improves the reliability with which the electronic device displays the webpage.

Description

Page processing method, device and equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a page processing method, apparatus, and device.
Background
At present, multimedia information can be played through electronic devices (large screens, mobile phones, computers, and the like), and the multimedia information can include contents such as characters, images, videos, and the like.
In the related art, multimedia information is usually presented in the form of a web page. For example, a web page in Hypertext Markup Language (HTML) format may be written on a cloud device, where the web page includes content such as text, images, and videos. After the web page is created on the cloud device, a link to the web page is generated and sent to the electronic device. When the electronic device needs to play the multimedia information corresponding to the web page, it downloads the content of the web page from the cloud device according to the link and displays it. However, when the electronic device is not compatible with the format of some content in the web page, playback of the multimedia content fails, resulting in poor reliability of web page display by the electronic device.
Disclosure of Invention
Aspects of the present application provide a page processing method, device and apparatus, so as to improve reliability of displaying a webpage by an electronic device.
In a first aspect, an embodiment of the present application provides a page processing method, including:
determining a plurality of elements and element information of the elements in a first webpage, wherein the element information comprises: a display period and a display position of the element in the first webpage;
acquiring the display duration of the first webpage;
processing the plurality of elements according to the display duration and the element information to obtain video information corresponding to the first webpage;
and sending the video information corresponding to the first webpage to the electronic equipment.
In one possible implementation, the plurality of elements includes a static element and a dynamic element; processing the plurality of elements according to the display duration and the element information to obtain video information corresponding to the first webpage, including:
processing the plurality of static elements according to the display duration and the element information of the static elements to obtain a static video;
and determining video information corresponding to the first webpage according to the static video, the dynamic element and the element information of the dynamic element.
In a possible implementation manner, processing the plurality of static elements according to the display duration and the element information of the static elements to obtain a static video includes:
determining the number N of video frames according to the display duration and a preset frame rate, wherein N is an integer greater than 1;
generating N frames of target images according to the plurality of static elements and the element information of the plurality of static elements;
and splicing the N frames of target images to obtain the static video.
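As a rough sketch of the first step above (the function name and the use of ceiling rounding are illustrative assumptions, not taken from the patent), the frame count N can be derived from the display duration and a preset frame rate like this:

```python
import math

def frame_count(display_duration_s: float, frame_rate: int) -> int:
    """Number of video frames N covering the display duration at a preset frame rate."""
    # Round up so a fractional final interval still gets a frame; N must exceed 1.
    n = math.ceil(display_duration_s * frame_rate)
    if n <= 1:
        raise ValueError("display duration too short for the given frame rate")
    return n

# For example, a 3 s page at a preset rate of 60 frames per second gives N = 180.
```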
In one possible embodiment, generating N frames of target images according to the plurality of static elements and the element information of the plurality of static elements includes:
respectively determining an RGB image corresponding to each static element and a frame identifier corresponding to the RGB image according to the element information of each static element, wherein the frame identifier is an integer which is greater than or equal to 1 and less than or equal to N;
determining N image groups according to the RGB image corresponding to each static element and the frame identification corresponding to the RGB image, wherein the image group comprises at least one RGB image, each RGB image in the ith image group corresponds to a frame identification i, and i is an integer which is greater than or equal to 1 and less than or equal to N;
and respectively carrying out fusion processing on the RGB images in each image group to obtain the N frames of target images.
In one possible implementation, for any one static element, determining the RGB image corresponding to the static element and the frame identifier corresponding to the RGB image according to the element information of the static element includes:
generating an RGB image corresponding to the static element according to the display position of the static element in the first webpage;
and determining a frame identifier corresponding to the RGB image according to the display time period of the static element in the first webpage and the preset frame rate, wherein the RGB image corresponds to at least one frame identifier.
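A minimal sketch of the frame-identifier mapping just described, assuming display periods expressed in seconds (the rounding choices are illustrative, not specified by the patent):

```python
def frame_ids(start_s: float, end_s: float, frame_rate: int, n_frames: int) -> list[int]:
    """1-based frame identifiers during which a static element's RGB image is displayed."""
    first = max(1, int(start_s * frame_rate) + 1)    # first frame covering the period
    last = min(n_frames, round(end_s * frame_rate))  # last frame covering the period
    return list(range(first, last + 1))

# An element shown for the first second of a 3 s page at 10 fps maps to frames 1..10,
# i.e. its single RGB image corresponds to (at least) ten frame identifiers.
```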
In a possible implementation manner, the stitching processing on the N frames of target images to obtain the static video includes:
carrying out format conversion processing on the N frames of target images to obtain N frames of images in the target format;
and splicing the N frames of images in the target format to obtain the static video.
In a possible implementation manner, determining video information corresponding to the first webpage according to the static video, the dynamic element and the element information of the dynamic element includes:
determining the video information comprises: the static video, the dynamic element, and element information of the dynamic element;
alternatively,
and according to the element information of the dynamic element, performing fusion processing on the static video and the dynamic element to obtain a fusion video, and determining that the video information comprises the fusion video.
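The two alternative forms of the video information can be sketched as a toy model, where strings stand in for encoded video and the fusion operator is purely illustrative (the function name and data layout are assumptions):

```python
def make_video_info(static_video: str,
                    dynamic_elements: list[tuple[str, dict]],
                    fuse: bool) -> dict:
    """Return the video information for the first webpage in one of its two forms."""
    if not fuse:
        # Form 1: ship the static video together with the dynamic elements and
        # their element information; the client fuses them at playback time.
        return {"static_video": static_video, "dynamic_elements": dynamic_elements}
    # Form 2: fuse server-side according to each dynamic element's element info
    # (here just a placeholder overlay) and ship a single fused video.
    fused = static_video + "".join(f"+{name}" for name, _info in dynamic_elements)
    return {"fused_video": fused}
```

Form 1 keeps the payload small and defers compositing to the client; Form 2 does all the work on the cloud device, so the client only needs a plain video player.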
In a second aspect, an embodiment of the present application provides a page processing method, including:
receiving video information corresponding to a first webpage, wherein the first webpage comprises a plurality of elements, the video information is determined according to the element information of the elements, and the element information comprises a display time period and a display position of the elements in the first webpage;
and determining a target video according to the video information, and playing the target video.
In one possible embodiment, the plurality of elements includes a static element and a dynamic element, wherein,
the video information includes: the static video, the dynamic elements and the element information of the dynamic elements, wherein the static video is determined according to the element information of the static elements in the first webpage;
alternatively,
the video information comprises a fusion video, and the fusion video is obtained by performing fusion processing on the static video and the dynamic element.
In one possible embodiment, the video information includes the static video, the dynamic element, and element information of the dynamic element; determining a target video according to the video information, comprising:
according to the element information of the dynamic element, the static video and the dynamic element are subjected to fusion processing to obtain the target video;
alternatively,
the video information comprises a fusion video; determining a target video according to the video information, comprising:
determining the fused video as the target video.
In a third aspect, an embodiment of the present application provides a page processing apparatus, including: a determining module, an obtaining module, a processing module and a sending module, wherein,
the determining module is configured to determine a plurality of elements and element information of the elements in a first webpage, where the element information includes: a display period and a display position of the element in the first webpage;
the acquisition module is used for acquiring the display duration of the first webpage;
the processing module is used for processing the elements according to the display duration and the element information to obtain video information corresponding to the first webpage;
the sending module is used for sending the video information corresponding to the first webpage to the electronic equipment.
In a possible implementation, the processing module is specifically configured to:
processing the plurality of static elements according to the display duration and the element information of the static elements to obtain a static video;
and determining video information corresponding to the first webpage according to the static video, the dynamic element and the element information of the dynamic element.
In a possible implementation manner, the processing module is specifically configured to:
determining the number N of video frames according to the display duration and a preset frame rate, wherein N is an integer greater than 1;
generating N frames of target images according to the plurality of static elements and the element information of the plurality of static elements;
and splicing the N frames of target images to obtain the static video.
In a possible implementation, the processing module is specifically configured to:
respectively determining an RGB image corresponding to each static element and a frame identifier corresponding to the RGB image according to the element information of each static element, wherein the frame identifier is an integer which is greater than or equal to 1 and less than or equal to N;
determining N image groups according to the RGB image corresponding to each static element and the frame identification corresponding to the RGB image, wherein the image group comprises at least one RGB image, each RGB image in the ith image group corresponds to a frame identification i, and i is an integer which is greater than or equal to 1 and less than or equal to N;
and respectively carrying out fusion processing on the RGB images in each image group to obtain the N frames of target images.
In a possible implementation, the processing module is specifically configured to:
generating an RGB image corresponding to the static element according to the display position of the static element in the first webpage;
and determining a frame identifier corresponding to the RGB image according to the display time period of the static element in the first webpage and the preset frame rate, wherein the RGB image corresponds to at least one frame identifier.
In a possible implementation, the processing module is specifically configured to:
carrying out format conversion processing on the N frames of target images to obtain N frames of images in the target format;
and splicing the N frames of images in the target format to obtain the static video.
In a possible implementation, the processing module is specifically configured to:
determining the video information comprises: the static video, the dynamic element, and element information of the dynamic element;
alternatively,
and according to the element information of the dynamic element, performing fusion processing on the static video and the dynamic element to obtain a fusion video, and determining that the video information comprises the fusion video.
In a fourth aspect, an embodiment of the present application provides a page processing apparatus, including: a receiving module, a determining module and a playing module, wherein,
the receiving module is used for receiving video information corresponding to a first webpage, the first webpage comprises a plurality of elements, the video information is determined according to element information of the elements, and the element information comprises a display time period and a display position of the elements in the first webpage;
the determining module is used for determining a target video according to the video information;
the playing module is used for playing the target video.
In one possible implementation, the video information includes: the static video, the dynamic elements and the element information of the dynamic elements, wherein the static video is determined according to the element information of the static elements in the first webpage;
alternatively,
the video information comprises a fusion video, and the fusion video is obtained by performing fusion processing on the static video and the dynamic element.
In a possible implementation, the determining module is specifically configured to:
according to the element information of the dynamic element, the static video and the dynamic element are subjected to fusion processing to obtain the target video;
alternatively,
the video information comprises a fusion video; determining a target video according to the video information, comprising:
determining the fused video as the target video.
In a fifth aspect, an embodiment of the present application provides a cloud device, including: a memory and a processor;
the memory stores computer execution instructions;
the processor executes the computer-executable instructions stored in the memory, so that the processor executes the page processing method of any one of the first aspect.
In a sixth aspect, an embodiment of the present application provides an electronic device, including: a memory and a processor;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored by the memory, so that the processor executes the page processing method of any one of the second aspect.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, and when the computer-executable instructions are executed by a processor, the computer-readable storage medium is configured to implement the page processing method according to any one of the first aspects.
In an eighth aspect, an embodiment of the present application provides a computer-readable storage medium, in which computer-executable instructions are stored, and when the computer-executable instructions are executed by a processor, the computer-readable storage medium is configured to implement the page processing method according to any one of the second aspects.
In a ninth aspect, the present application provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the page processing method shown in any one of the first aspect.
In a tenth aspect, an embodiment of the present application provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the page processing method shown in any one of the second aspects.
In this embodiment of the application, the cloud device can acquire the display duration, the plurality of elements and the element information of the first webpage, and can process the plurality of static elements according to the display duration and the element information corresponding to the static elements to obtain the static video. The cloud device can take the static video, the dynamic element and the element information corresponding to the dynamic element as video information, or perform fusion processing on the static video and the dynamic element according to the element information corresponding to the dynamic element to obtain the video information corresponding to the first webpage. The cloud device can send video information corresponding to the first webpage to the electronic device. After the electronic device receives the video information corresponding to the first webpage, the target video can be determined according to the video information, and the target video is played. The cloud device can convert the webpage into corresponding video information, so that the problem that the electronic device cannot be compatible with the format of the content in the webpage is avoided, and the reliability of the electronic device for displaying the webpage can be improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic diagram of an application scenario provided in an exemplary embodiment of the present application;
fig. 2 is a schematic flowchart of a page processing method according to an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a first webpage provided by an exemplary embodiment of the present application;
fig. 4 is a flowchart illustrating a method for determining video information according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of an RGB image provided by an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of determining a group of images provided by an exemplary embodiment of the present application;
FIG. 7 is a schematic diagram of image fusion provided by an exemplary embodiment of the present application;
fig. 8 is a flowchart illustrating a further page processing method according to an exemplary embodiment of the present application;
FIG. 9 is a schematic structural diagram of a page processing apparatus according to an exemplary embodiment of the present application;
fig. 10 is a schematic structural diagram of another page processing apparatus according to an exemplary embodiment of the present application;
fig. 11 is a schematic structural diagram of a cloud device according to an exemplary embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic diagram of an application scenario provided in an exemplary embodiment of the present application. As shown in Fig. 1, the system includes a cloud device and a plurality of electronic devices, for example electronic device 1, electronic device 2, …, electronic device n. The cloud device may communicate with each electronic device. The cloud device may be a computer device, for example a computer. Each electronic device has a display screen; for example, an electronic device may be a large-screen terminal, a mobile phone, a computer, or the like.
A worker can create the webpage on the cloud device; the webpage can include one or more elements such as text, images, and video. After the webpage is created, the cloud device can process the elements in the webpage to convert the webpage into corresponding video information and send the video information to the electronic device, so that the electronic device plays and displays the content of the webpage according to the video information. The webpage can be a static webpage or a dynamic webpage and has a certain display duration. For example, when the webpage is a static webpage, its content may be a promotional poster with a display duration of 5 s; when the webpage is a dynamic webpage, its content may be an advertisement, a trailer, or the like, with a display duration of 10 s.
In the related art, multimedia information is usually presented in the form of a web page, for example, the web page in an HTML format may be written in a cloud device, and the web page includes contents such as text, images, and videos. After the webpage is manufactured in the cloud device, a link of the webpage is generated, and the link is sent to the electronic device. When the electronic device needs to play the multimedia information corresponding to the webpage, the electronic device downloads the content in the webpage from the cloud device according to the link and displays the content in the webpage. However, when the electronic device is not compatible with the format of the content in the web page, the electronic device fails to play the multimedia content, resulting in poor reliability of displaying the web page by the electronic device.
In the embodiments of the present application, the cloud device can determine a plurality of elements in the webpage and their corresponding element information, process the plurality of elements according to the display duration of the webpage and the element information, and thereby convert the webpage into corresponding video information, so that the electronic device can download the video information and display the webpage content according to it. Because the cloud device converts the webpage into video information, the problem of the electronic device being incompatible with the format of content in the webpage is avoided, and the reliability of webpage display by the electronic device is improved.
The technical means shown in the present application will be described in detail below with reference to specific examples. It should be noted that the following embodiments may exist alone or in combination with each other, and description of the same or similar contents is not repeated in different embodiments.
Fig. 2 is a schematic flowchart of a page processing method according to an exemplary embodiment of the present application. Referring to fig. 2, the method may include:
s201, determining a plurality of elements and element information of the elements in the first webpage.
The execution subject of the embodiments of the present application may be a cloud device, or a page processing apparatus provided in the cloud device. The page processing apparatus may be implemented by software, or by a combination of software and hardware.
The first web page may be a web page in HTML format, for example, the first web page may be an H5 web page.
The first webpage can be a static webpage or a dynamic webpage, and the first webpage has a corresponding display duration.
When the first webpage is a static webpage, the content in the first webpage may be static content, and the display content does not change within the display duration, for example, the content in the first webpage may be a static poster. When the first webpage is a dynamic webpage, the content in the webpage may include dynamic content, and the content displayed in the first webpage is different in different display periods. For example, promotional text and promotional videos may be included in the first web page.
The first web page may include a plurality of elements, for example, the elements may be text, images, videos, motion pictures, and the like. The elements can be divided into static elements and dynamic elements, wherein the static elements include static contents such as characters and images, and the dynamic elements can include dynamic contents such as videos and motion pictures.
Each element in the first webpage has corresponding element information, and the element information comprises a display time period and a display position of the element in the first webpage.
The display position may be represented by coordinates of the element in the first webpage. For example, the upper-left corner of the first webpage may be used as the origin of coordinates, and coordinate values may be expressed in pixels. For example, if an element is a rectangular picture of 100 px × 150 px in the first webpage, its display position can be represented by an upper-left vertex (10px, 150px) and a lower-right vertex (110px, 300px), which means that the upper-left vertex of the element is 10px from the left boundary of the first webpage and 150px from its upper boundary, and the lower-right vertex is 110px from the left boundary and 300px from the upper boundary.
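The element information just described (display period plus bounding-box position) can be sketched as a small data structure. This is an illustrative assumption, not a structure defined by the patent; all names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ElementInfo:
    """Element information: display period plus display position in the first webpage.

    The position is the element's bounding box in pixels, with the page's
    upper-left corner as the coordinate origin.
    """
    start: int                      # start of the display period
    end: int                        # end of the display period
    top_left: tuple[int, int]       # upper-left vertex (x, y)
    bottom_right: tuple[int, int]   # lower-right vertex (x, y)

    def size(self) -> tuple[int, int]:
        (x1, y1), (x2, y2) = self.top_left, self.bottom_right
        return x2 - x1, y2 - y1

# The 100 px × 150 px rectangular picture from the example above:
picture = ElementInfo(start=0, end=5000, top_left=(10, 150), bottom_right=(110, 300))
# picture.size() recovers the element's width and height, (100, 150)
```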
Different elements can be displayed in different display periods in the first webpage, and the different elements correspond to different element information. Next, a plurality of elements in the first web page will be described with reference to fig. 3.
Fig. 3 is a schematic diagram of a first webpage according to an exemplary embodiment of the present application. As shown in Fig. 3, if the display duration of the first webpage is 3 s and each second includes 60 display frames, then the 1st second includes the 60 display images corresponding to frames 1 to 60, the 2nd second includes the 60 display images corresponding to frames 61 to 120, and the 3rd second includes the 60 display images corresponding to frames 121 to 180.
As shown in fig. 3, the first web page includes a text a, a text b, a video c, and an image d, where the text a, the text b, and the image d are static elements, and the video c is a dynamic element.
The display time period of the text a in the first web page is 1ms to 120ms, and its display position can be represented by an upper left vertex (10px, 10px) and a lower right vertex (60px, 40px).
The display time period of the text b in the first web page is 121ms to 180ms, and its display position can be represented by an upper left vertex (10px, 10px) and a lower right vertex (60px, 40px).
The display time period of the video c in the first web page is 1ms to 180ms, and its display position can be represented by an upper left vertex (0px, 80px) and a lower right vertex (280px, 180px).
The display time period of the image d in the first web page is 61ms to 180ms, and its display position can be represented by an upper left vertex (300px, 10px) and a lower right vertex (420px, 90px).
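The element information of the four elements above can be modeled as simple records. The following is an illustrative sketch only; the names `Element`, `display_period_ms` and `position_px` are assumptions of this sketch, not terms from this application:

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    kind: str                 # "static" or "dynamic"
    display_period_ms: tuple  # (start_ms, end_ms) on the page timeline
    position_px: tuple        # (left, top, right, bottom) from the page's top-left origin

# The four elements of the fig. 3 example
elements = [
    Element("text a",  "static",  (1, 120),   (10, 10, 60, 40)),
    Element("text b",  "static",  (121, 180), (10, 10, 60, 40)),
    Element("video c", "dynamic", (1, 180),   (0, 80, 280, 180)),
    Element("image d", "static",  (61, 180),  (300, 10, 420, 90)),
]

# Static and dynamic elements are processed differently (see S203 below)
static_elements = [e for e in elements if e.kind == "static"]
print([e.name for e in static_elements])  # ['text a', 'text b', 'image d']
```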
S202, obtaining the display duration of the first webpage.
The display duration of the first webpage refers to the duration for which the electronic device displays the content of the first webpage. The display duration may be preset. If the first webpage includes dynamic elements such as videos, the display duration may be determined according to the playing duration of the dynamic elements.
And S203, processing the plurality of elements according to the display duration and the element information to obtain video information corresponding to the first webpage.
The first web page may include static elements, or the first web page may include dynamic elements, or the first web page may include both static and dynamic elements. When the elements included in the first webpage are different, the process of determining the video information corresponding to the first webpage is also different, which includes the following three cases:
case 1, the first web page includes static elements.
When the first webpage includes static elements but no dynamic elements, the plurality of static elements can be processed according to the display duration of the first webpage and the element information of the static elements to obtain a static video. That is, in this case, the video information corresponding to the first webpage includes the static video.
It should be noted that the process of determining the static video is described in the embodiment shown in fig. 4, and is not repeated here.
Case 2, the first web page includes dynamic elements.
When the first webpage includes the dynamic element but not the static element, the video information corresponding to the first webpage may be determined to be the dynamic element.
Case 3, the first web page includes static elements and dynamic elements.
In this case, the cloud device may process the plurality of static elements according to the display duration and the element information of the static elements to obtain a static video; and determining video information corresponding to the first webpage according to the static video, the dynamic element and the element information of the dynamic element. The static video refers to a video obtained by processing static elements.
The cloud device may determine the video information corresponding to the first webpage in multiple modes, including the following two:
In mode 1, after the cloud device obtains the static video corresponding to the static elements, it may determine the static video, the dynamic element and the element information of the dynamic element as the video information corresponding to the first webpage.
In this mode, the video information includes the static video, the dynamic element, and the element information of the dynamic element.
When the video information in the electronic device needs to be updated, if the dynamic element changes while the static element does not, the cloud device can send only the dynamic element and its element information to the electronic device, without resending the static video. Conversely, if the static element changes while the dynamic element does not, the cloud device can send only the static video, without resending the dynamic element and its element information. This reduces unnecessary data transmission and lightens the workload of the cloud device. In addition, in practice, the static elements or dynamic elements in the video information required by different electronic devices may be the same; in this case, the cloud device can flexibly combine the contents of the video information sent to different electronic devices, so the video information can be sent with high flexibility.
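The delta update of mode 1 can be sketched as follows; the dictionary keys and the function name are assumptions of this sketch, not terms from this application:

```python
def delta_update(previous_info: dict, current_info: dict) -> dict:
    """Return only the parts of the video information that changed (mode 1)."""
    payload = {}
    if current_info["static_video"] != previous_info["static_video"]:
        payload["static_video"] = current_info["static_video"]
    if current_info["dynamic_element"] != previous_info["dynamic_element"]:
        # the dynamic element travels together with its element information
        payload["dynamic_element"] = current_info["dynamic_element"]
        payload["element_info"] = current_info["element_info"]
    return payload

old = {"static_video": "sv1", "dynamic_element": "v1", "element_info": "info1"}
new = {"static_video": "sv1", "dynamic_element": "v2", "element_info": "info2"}
print(delta_update(old, new))  # only the changed dynamic part is sent
```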
In mode 2, the cloud device may fuse the static video and the dynamic element through a media synthesis algorithm according to the element information of the dynamic element to obtain a fused video.
In this mode, the video information corresponding to the first webpage may be the fused video. After the electronic device receives the fused video, it can play it directly, which makes playing the video information more convenient for the electronic device.
And S204, sending the video information corresponding to the first webpage to the electronic equipment.
After the cloud device determines the video information corresponding to the first webpage, the video information can be sent to the electronic device through a streaming media technology, so that the electronic device can download the video information and process the video information at the same time to display the content of the first webpage.
The streaming media technology is a technology for continuously playing multimedia files in real time on a network by adopting a streaming transmission technology. By adopting the streaming media technology, the electronic equipment can process while downloading, and does not need to wait for processing after completely downloading the multimedia file.
In this application embodiment, the cloud device can acquire the display duration, the multiple elements and the element information of the first webpage, and can process the multiple static elements according to the display duration and the element information corresponding to the static elements to obtain the static video. The cloud device can take the static video, the dynamic element and the element information corresponding to the dynamic element as video information, or perform fusion processing on the static video and the dynamic element according to the element information corresponding to the dynamic element to obtain the video information corresponding to the first webpage. The cloud device can send video information corresponding to the first webpage to the electronic device. The cloud device can convert the webpage into corresponding video information, so that the problem that the electronic device cannot be compatible with the format of the content in the webpage is avoided, and the reliability of the electronic device for displaying the webpage can be improved.
On the basis of the embodiment shown in fig. 2, when the elements included in the first web page are different, the process of determining the video information corresponding to the first web page is different. Next, with reference to fig. 4, a process of determining video information corresponding to a first web page is described by taking a case where the first web page includes a static element and a dynamic element as an example (S203 in the embodiment of fig. 2).
Fig. 4 is a flowchart illustrating a method for determining video information according to an exemplary embodiment of the present application. Referring to fig. 4, the method may include:
s401, determining the number N of the video frames according to the display duration and the preset frame rate.
The preset frame rate refers to the frequency at which images, in units of frames, appear continuously. It may be set in advance. For example, a preset frame rate of 24 frames/second means that 24 images are continuously displayed in the first web page within 1 second.
The number N of video frames refers to the number of image frames corresponding to the first web page in the display duration. N is an integer greater than 1. For example, if the display duration of the first webpage is 10s and the preset frame rate is 24 frames/s, the number N of video frames may be determined to be 240.
The cloud device may determine a product of a display duration of the first webpage and a preset frame rate as the number N of video frames.
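The computation of S401 can be sketched in one line; the function name is an assumption of this sketch:

```python
def video_frame_count(display_duration_s: float, preset_frame_rate: int = 24) -> int:
    # Number of video frames N = display duration x preset frame rate
    return int(display_duration_s * preset_frame_rate)

print(video_frame_count(10))  # 240 frames for a 10 s page at 24 frames/s
print(video_frame_count(3))   # 72 frames for the 3 s page of fig. 3
```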
S402, determining a Red, Green and Blue (RGB) image corresponding to each static element and a frame identifier corresponding to the RGB image according to the element information of each static element.
The RGB image corresponding to the static element comprises the static element, and the size of the RGB image is the same as that of the first webpage. The image format of the RGB image is the RGB format.
The RGB image corresponding to the static element may be generated according to the display position of the static element in the first webpage. Next, an RGB image corresponding to a static element will be described with reference to fig. 5.
Fig. 5 is a schematic diagram of an RGB image provided in an exemplary embodiment of the present application. Fig. 5 includes a first web page 501, an RGB image 502 and an RGB image 503. At a certain time, the first web page includes the static elements text 1 and image 1. According to the positions of text 1 and image 1 in the first webpage, the RGB image corresponding to text 1 can be determined to be the RGB image 502, and the RGB image corresponding to image 1 can be determined to be the RGB image 503.
The frame identifier corresponding to an RGB image identifies the frames in which the RGB image is displayed; that is, the frame identifier indicates in which frames the RGB image appears. For example, frame identifiers 1, 2, and 3 corresponding to an RGB image indicate that the RGB image is displayed in the 1st, 2nd, and 3rd frames.
The frame identifier corresponding to the RGB image may be determined according to the display period of the static element in the first webpage and the preset frame rate, and the RGB image corresponds to at least one frame identifier. For example, the frame identifier of the image frame displayed in the display period may be calculated according to the display period and a preset frame rate, and the frame identifier of the image frame displayed in the display period may be determined as at least one frame identifier corresponding to the RGB image.
For example, assuming that the preset frame rate is 24 frames/s, and the static element 1 is 61ms to 120ms in the display period of the first web page, it may be determined that the frames of the image frames displayed in the display period are identified as 25 th to 48 th frames, and at least one frame corresponding to the static element 1 is identified as 25 th to 48 th frames.
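Using the example's timeline convention (each second of display maps to 60 ms of the timeline, so one frame covers 2.5 ms at 24 frames/s), the frame identifiers of a display period can be computed as follows; the constant names and the function name are assumptions of this sketch:

```python
import math

MS_PER_SECOND = 60          # the example timeline maps each second of display to 60 ms
FRAME_RATE = 24             # preset frame rate, frames per second
MS_PER_FRAME = MS_PER_SECOND / FRAME_RATE   # 2.5 ms of timeline per frame

def frame_ids(start_ms: int, end_ms: int) -> range:
    # Frame identifiers of the image frames displayed within a display period
    first = math.ceil(start_ms / MS_PER_FRAME)
    last = int(end_ms / MS_PER_FRAME)
    return range(first, last + 1)

print(frame_ids(61, 120))   # frames 25 to 48, as in the static element 1 example
```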
S403, determining N image groups according to the RGB image corresponding to each static element and the frame identification corresponding to the RGB image.
If the number of the video frames corresponding to the first webpage is N, the N image groups may be determined according to the RGB images corresponding to the static elements in the first webpage and the frame identifiers corresponding to the RGB images.
For any image group, the image group comprises at least one RGB image, and each RGB image in the ith image group corresponds to a frame identifier i.
Fig. 6 is a schematic diagram of determining a group of images according to an exemplary embodiment of the present application. Referring to fig. 6, it is assumed that the first web page includes static element text 1 and image 1, where text 1 corresponds to RGB image 1, and the corresponding frames are labeled as frame 1 to frame 48. Image 1 corresponds to RGB image 2, and its corresponding frames are identified as frame 24 through frame 72. If the number of the video frames corresponding to the first webpage is 72, the number of the image groups is 72.
Since only RGB image 1 corresponds to the frame identifiers 1-23, it can be determined that the 1st to 23rd image groups each include: RGB image 1.
Since both RGB image 1 and RGB image 2 correspond to the frame identifiers 24-48, it can be determined that the 24th to 48th image groups each include: RGB image 1 and RGB image 2.
Since only RGB image 2 corresponds to the frame identifiers 49-72, it can be determined that the 49th to 72nd image groups each include: RGB image 2.
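The grouping of S403 can be sketched as an inverted index from frame identifier to the RGB images displayed in that frame; the function name is an assumption, and the image names and frame ranges follow the fig. 6 example:

```python
def build_image_groups(n: int, images_with_frames: dict) -> list:
    # images_with_frames maps an RGB image name to its 1-based frame identifiers;
    # group i-1 collects every RGB image whose identifiers include frame i
    groups = [[] for _ in range(n)]
    for name, frames in images_with_frames.items():
        for f in frames:
            groups[f - 1].append(name)
    return groups

groups = build_image_groups(72, {
    "RGB image 1": range(1, 49),    # text 1, frames 1-48
    "RGB image 2": range(24, 73),   # image 1, frames 24-72
})
print(groups[0], groups[23], groups[48])
```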
S404, respectively carrying out fusion processing on the RGB images in each image group to obtain N frames of target images.
After the cloud device determines the RGB images in each image group, the RGB images may be superimposed to obtain N frames of target images.
The process of fusing RGB images in each image group is the same, and the process of fusing RGB images in any one image group will be described below with reference to fig. 7.
Fig. 7 is a schematic diagram of image fusion provided in an exemplary embodiment of the present application. Referring to fig. 7, assuming that a certain image group includes an RGB image 1 corresponding to a text 1 and an RGB image 2 corresponding to an image 1, the cloud device may superimpose the RGB image 1 and the RGB image 2 to obtain an RGB image 3, and the RGB image 3 is a target image.
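The superposition of S404 can be sketched with a toy model in which each RGB image is a 2D grid and `None` marks pixels outside the element; the white background and all names are simplifying assumptions of this sketch:

```python
WHITE = (255, 255, 255)

def fuse_group(group):
    # Superimpose the RGB images of one image group onto a white page background;
    # later images in the group overwrite earlier ones where they overlap
    h, w = len(group[0]), len(group[0][0])
    target = [[WHITE] * w for _ in range(h)]
    for img in group:
        for y in range(h):
            for x in range(w):
                if img[y][x] is not None:
                    target[y][x] = img[y][x]
    return target

RED, BLUE = (255, 0, 0), (0, 0, 255)
rgb1 = [[RED, None], [None, None]]   # e.g. text 1 occupies the top-left pixel
rgb2 = [[None, None], [None, BLUE]]  # e.g. image 1 occupies the bottom-right pixel
target = fuse_group([rgb1, rgb2])    # one frame of target image
```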
S405, splicing the N frames of target images to obtain a static video.
Because the N frames of target images are obtained by superimposing RGB images, each frame of target image is still in the RGB format. The cloud device can perform format conversion processing on the N frames of target images to obtain N frames of target images in a target format. For example, the cloud device may convert a target image in RGB format into a video sequence frame image.
After the cloud device obtains the N frames of target images in the target format, it can splice them at the preset frame rate through a media synthesis algorithm to obtain the static video. For example, if the cloud device obtains 240 frames of target images in the target format, it may splice them at 24 frames per second to obtain a static video whose display duration is 10s.
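In practice the splice of S405 would be performed by a real media synthesis tool; this sketch only models the timing relationship between frame count, frame rate and duration, and the names are assumptions:

```python
def splice(target_frames, preset_frame_rate=24):
    # Record the splice result: N frames played back at the preset frame rate
    duration_s = len(target_frames) / preset_frame_rate
    return {"frames": list(target_frames), "fps": preset_frame_rate,
            "duration_s": duration_s}

static_video = splice([f"target_frame_{i}" for i in range(1, 241)])
print(static_video["duration_s"])   # 10.0: 240 frames at 24 frames/s
```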
S406, determining video information corresponding to the first webpage according to the static video, the dynamic element and the element information of the dynamic element.
It should be noted that the execution process of S406 may refer to case 3 in S203, and is not described herein again.
In this application embodiment, the cloud device may determine the number N of video frames corresponding to the first web page according to the display duration and the preset frame rate of the first web page, and generate the corresponding RGB image according to the display time period and the display position of each static element in the first web page. The cloud device can determine the N image groups according to the RGB image corresponding to each static element and the frame identifier corresponding to the RGB image, can superpose the RGB images in each image group to obtain N frame target images, and can further perform format conversion processing on the N frame target images to obtain target images in N frame target formats. The cloud device can splice target images in N frames of target formats to obtain a static video, and determines video information corresponding to the first webpage according to the static video, the dynamic elements and the element information of the dynamic elements. The cloud device can convert the webpage into corresponding video information, so that the problem that the electronic device cannot be compatible with the format of the content in the webpage is avoided, and the reliability of the electronic device for displaying the webpage can be improved.
On the basis of any one of the above embodiments, after the electronic device receives the video information corresponding to the first webpage, the electronic device may perform video playing according to the video information. Next, with reference to fig. 8, a process of receiving and playing video information by the electronic device will be described.
Fig. 8 is a flowchart illustrating a further page processing method according to an exemplary embodiment of the present application, please refer to fig. 8, where the method may include:
s801, receiving video information corresponding to the first webpage.
The execution subject of this embodiment may be an electronic device, or a page processing apparatus provided in the electronic device. The page processing apparatus may be implemented by software, or by a combination of software and hardware.
The first webpage comprises a plurality of elements, and the plurality of elements comprise static elements and dynamic elements. For example, a static element may include text 1 and image 1, and a dynamic element may include video 1. Each element has corresponding element information, and the element information includes a display period and a display position of the element in the first webpage.
The video information is determined from element information of a plurality of elements in the first web page. The electronic device can receive video information corresponding to the first webpage sent by the cloud device.
S802, determining a target video according to the video information.
After the electronic device receives the video information, it can identify the video information to determine the content it includes. When the content included in the video information differs, the manner in which the electronic device determines the target video also differs; the following 4 cases may arise:
case 1, the video information includes still video.
The static video is determined according to the element information of the static elements in the first webpage.
For example, if the first webpage includes the text 1 and the image 1, the still video 1 can be obtained from the text 1 and the image 1, and correspondingly, the video information includes the still video 1.
In this case, the electronic device may determine that the still video is the target video.
Case 2, the video information includes dynamic elements.
For example, if the first webpage includes video 1, the video information includes video 1 correspondingly.
In this case, the electronic device may determine that the dynamic element is the target video.
Case 3, the video information includes static video, dynamic elements, and element information of the dynamic elements.
For example, if the first webpage includes text 1, image 1, and video 1, where static video 1 can be generated from text 1 and image 1, then the video information correspondingly includes static video 1, video 1, and the display time period and display position of video 1.
In this case, the electronic device may perform fusion processing on the static video and the dynamic element according to the element information of the dynamic element to obtain the target video.
Case 4, the video information includes a fused video.
The fused video is obtained by fusing a static video and a dynamic element.
For example, if the first webpage includes text 1, image 1, and video 1, static video 1 may be generated from text 1 and image 1, and static video 1 and video 1 may be fused to obtain a fused video; correspondingly, the video information includes the fused video.
In this case, the electronic device may determine the fused video as the target video.
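The four cases of S802 amount to a dispatch on what the received video information contains. A hedged sketch follows; the key names are assumptions, and `fuse_locally` is a stand-in for the device-side fusion of case 3:

```python
def fuse_locally(static_video, dynamic_element, element_info):
    # Placeholder for the device-side fusion of case 3
    return f"fused({static_video}+{dynamic_element}@{element_info})"

def resolve_target_video(info: dict):
    if "fused_video" in info:                                   # case 4
        return info["fused_video"]
    if "static_video" in info and "dynamic_element" in info:    # case 3
        return fuse_locally(info["static_video"], info["dynamic_element"],
                            info["element_info"])
    if "static_video" in info:                                  # case 1
        return info["static_video"]
    return info["dynamic_element"]                              # case 2

print(resolve_target_video({"static_video": "sv1"}))  # case 1 -> "sv1"
```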
And S803, playing the target video.
After the electronic device determines the target video, the target video may be played to display the content in the first webpage.
In this embodiment of the application, the electronic device may receive video information corresponding to the first webpage, and identify and process the video information. If the video information comprises any one of a static video, a dynamic element or a fusion video, the electronic equipment can determine the video information as a target video and play the target video; if the video information includes the static video, the dynamic element and the element information of the dynamic element, the electronic device may perform fusion processing on the static video and the dynamic element according to the element information of the dynamic element to obtain a target video, and play the target video to display the content corresponding to the first webpage. The electronic equipment receives the video information corresponding to the webpage instead of directly downloading the content in the webpage, so that the problem that the electronic equipment cannot be compatible with the format of the content in the webpage is solved, and the reliability of the electronic equipment for displaying the webpage can be improved.
Fig. 9 is a schematic structural diagram of a page processing apparatus according to an exemplary embodiment of the present application. Referring to fig. 9, the page processing apparatus 10 includes: a determination module 11, an acquisition module 12, a processing module 13 and a sending module 14, wherein,
the determining module 11 is configured to determine a plurality of elements and element information of the elements in a first webpage, where the element information includes: a display time period and a display position of the element in the first webpage;
the obtaining module 12 is configured to obtain a display duration of the first webpage;
the processing module 13 is configured to process the multiple elements according to the display duration and the element information to obtain video information corresponding to the first webpage;
the sending module 14 is configured to send video information corresponding to the first webpage to an electronic device.
The page processing apparatus provided in the embodiment of the present application may execute the technical solutions shown in the above method embodiments, and the implementation principles and beneficial effects thereof are similar, and are not described herein again.
In a possible implementation, the processing module 13 is specifically configured to:
processing the plurality of static elements according to the display duration and the element information of the static elements to obtain a static video;
and determining video information corresponding to the first webpage according to the static video, the dynamic element and the element information of the dynamic element.
In a possible implementation, the processing module 13 is specifically configured to:
determining the number N of video frames according to the display duration and a preset frame rate, wherein N is an integer greater than 1;
generating N frames of target images according to the plurality of static elements and the element information of the plurality of static elements;
and splicing the N frames of target images to obtain the static video.
In a possible implementation, the processing module 13 is specifically configured to:
respectively determining an RGB image corresponding to each static element and a frame identifier corresponding to the RGB image according to the element information of each static element, wherein the frame identifier is an integer which is greater than or equal to 1 and less than or equal to N;
determining N image groups according to the RGB image corresponding to each static element and the frame identification corresponding to the RGB image, wherein the image group comprises at least one RGB image, each RGB image in the ith image group corresponds to a frame identification i, and i is an integer which is greater than or equal to 1 and less than or equal to N;
and respectively carrying out fusion processing on the RGB images in each image group to obtain the N frames of target images.
In a possible implementation, the processing module 13 is specifically configured to:
generating an RGB image corresponding to the static element according to the display position of the static element in the first webpage;
and determining a frame identifier corresponding to the RGB image according to the display time period of the static element in the first webpage and the preset frame rate, wherein the RGB image corresponds to at least one frame identifier.
In a possible implementation, the processing module 13 is specifically configured to:
carrying out format conversion processing on the N frames of target images to obtain N frames of images in a target format;
and splicing the images in the N frames of target formats to obtain the static video.
In a possible implementation, the processing module 13 is specifically configured to:
determining the video information comprises: the static video, the dynamic element, and element information of the dynamic element;
alternatively,
and according to the element information of the dynamic element, performing fusion processing on the static video and the dynamic element to obtain a fusion video, and determining that the video information comprises the fusion video.
The page processing apparatus provided in the embodiment of the present application may execute the technical solutions shown in the above method embodiments, and the implementation principles and beneficial effects thereof are similar, and are not described herein again.
Fig. 10 is a schematic structural diagram of another page processing apparatus according to an exemplary embodiment of the present application. Referring to fig. 10, the page processing apparatus 20 includes: a receiving module 21, a determining module 22 and a playing module 23, wherein,
the receiving module 21 is configured to receive video information corresponding to a first webpage, where the first webpage includes a plurality of elements, the video information is determined according to element information of the plurality of elements, and the element information includes a display time period and a display position of the element in the first webpage;
the determining module 22 is configured to determine a target video according to the video information;
the playing module 23 is configured to play the target video.
The page processing apparatus provided in the embodiment of the present application may execute the technical solutions shown in the above method embodiments, and the implementation principles and beneficial effects thereof are similar, and are not described herein again.
In one possible implementation, the video information includes: the static video, the dynamic elements and the element information of the dynamic elements, wherein the static video is determined according to the element information of the static elements in the first webpage;
alternatively,
the video information comprises a fusion video, and the fusion video is obtained by performing fusion processing on the static video and the dynamic element.
In a possible implementation, the determining module 22 is specifically configured to:
according to the element information of the dynamic element, the static video and the dynamic element are subjected to fusion processing to obtain the target video;
alternatively,
the video information comprises a fusion video; determining a target video according to the video information, comprising:
determining the fused video as the target video.
The page processing apparatus provided in the embodiment of the present application may execute the technical solutions shown in the above method embodiments, and the implementation principles and beneficial effects thereof are similar, and are not described herein again.
Fig. 11 is a schematic structural diagram of a cloud device according to an exemplary embodiment of the present application. Referring to fig. 11, the cloud device 30 may include a processor 31 and a memory 32. Illustratively, the processor 31 and the memory 32 are interconnected by a bus 33.
The memory 32 stores computer-executable instructions;
the processor 31 executes the computer-executable instructions stored in the memory 32, so that the processor 31 executes the page processing method as shown in the above-mentioned method embodiments.
Fig. 12 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application. Referring to fig. 12, the electronic device 40 may include a processor 41 and a memory 42. Illustratively, the processor 41 and the memory 42 are interconnected by a bus 43.
The memory 42 stores computer-executable instructions;
the processor 41 executes computer-executable instructions stored by the memory 42, causing the processor 41 to perform the page processing method as shown in the above-described method embodiments.
Accordingly, an embodiment of the present application provides a computer-readable storage medium, in which computer-executable instructions are stored, and when the computer-executable instructions are executed by a processor, the computer-readable storage medium is configured to implement the page processing method according to the foregoing method embodiment.
Accordingly, the present application may also provide a computer program product, which includes a computer program, and when the computer program is executed by a processor, the page processing method shown in the foregoing method embodiment may be implemented.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory, random access memory (RAM), and/or non-volatile memory in a computer-readable medium, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both volatile and non-volatile, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, a computer-readable medium does not include a transitory computer-readable medium such as a modulated data signal or a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (16)

1. A page processing method is characterized by comprising the following steps:
determining a plurality of elements and element information of the elements in a first webpage, wherein the element information comprises: a display time period and a display position of the element in the first webpage;
acquiring the display duration of the first webpage;
processing the plurality of elements according to the display duration and the element information to obtain video information corresponding to the first webpage;
and sending the video information corresponding to the first webpage to the electronic equipment.
2. The method of claim 1, wherein the plurality of elements comprises static elements and dynamic elements; processing the plurality of elements according to the display duration and the element information to obtain video information corresponding to the first webpage comprises:
processing the plurality of static elements according to the display duration and the element information of the static elements to obtain a static video;
and determining video information corresponding to the first webpage according to the static video, the dynamic element and the element information of the dynamic element.
3. The method of claim 2, wherein processing the plurality of static elements according to the display duration and the element information of the static elements to obtain a static video comprises:
determining the number N of video frames according to the display duration and a preset frame rate, wherein N is an integer greater than 1;
generating N frames of target images according to the plurality of static elements and the element information of the plurality of static elements;
and splicing the N frames of target images to obtain the static video.
4. The method of claim 3, wherein generating the N frames of target images according to the plurality of static elements and the element information of the plurality of static elements comprises:
respectively determining a red, green and blue (RGB) image corresponding to each static element and a frame identifier corresponding to the RGB image according to the element information of each static element, wherein the frame identifier is an integer which is greater than or equal to 1 and less than or equal to N;
determining N image groups according to the RGB image corresponding to each static element and the frame identifier corresponding to the RGB image, wherein each image group comprises at least one RGB image, each RGB image in the ith image group corresponds to frame identifier i, and i is an integer greater than or equal to 1 and less than or equal to N;
and respectively carrying out fusion processing on the RGB images in each image group to obtain the N frames of target images.
5. The method of claim 4, wherein, for any static element, determining the RGB image corresponding to the static element and the frame identifier corresponding to the RGB image according to the element information of the static element comprises:
generating an RGB image corresponding to the static element according to the display position of the static element in the first webpage;
and determining a frame identifier corresponding to the RGB image according to the display time period of the static element in the first webpage and the preset frame rate, wherein the RGB image corresponds to at least one frame identifier.
6. The method according to any one of claims 3 to 5, wherein splicing the N frames of target images to obtain the static video comprises:
carrying out format conversion processing on the N frames of target images to obtain N frames of images in the target format;
and splicing the N frames of images in the target format to obtain the static video.
7. The method according to any one of claims 2 to 5, wherein determining the video information corresponding to the first webpage according to the static video, the dynamic element and the element information of the dynamic element comprises:
determining that the video information comprises: the static video, the dynamic element, and element information of the dynamic element;
or,
and according to the element information of the dynamic element, performing fusion processing on the static video and the dynamic element to obtain a fusion video, and determining that the video information comprises the fusion video.
8. A page processing method is characterized by comprising the following steps:
receiving video information corresponding to a first webpage, wherein the first webpage comprises a plurality of elements, the video information is determined according to the element information of the elements, and the element information comprises a display time period and a display position of the elements in the first webpage;
and determining a target video according to the video information, and playing the target video.
9. The method of claim 8, wherein the plurality of elements includes static elements and dynamic elements, and wherein,
the video information includes: a static video, the dynamic elements, and the element information of the dynamic elements, wherein the static video is determined according to the element information of the static elements in the first webpage;
or,
the video information comprises a fusion video, and the fusion video is obtained by fusing the static video and the dynamic element.
10. The method of claim 9, wherein the video information comprises the static video, the dynamic element, and element information of the dynamic element; determining a target video according to the video information, comprising:
according to the element information of the dynamic element, the static video and the dynamic element are subjected to fusion processing to obtain the target video;
or,
the video information comprises a fusion video; determining a target video according to the video information, comprising:
and determining the fused video as the target video.
11. A page processing apparatus, comprising: the device comprises a determining module, an obtaining module, a processing module and a sending module, wherein,
the determining module is configured to determine a plurality of elements and element information of the elements in a first webpage, wherein the element information comprises: a display time period and a display position of the element in the first webpage;
the acquisition module is used for acquiring the display duration of the first webpage;
the processing module is used for processing the elements according to the display duration and the element information to obtain video information corresponding to the first webpage;
the sending module is used for sending the video information corresponding to the first webpage to the electronic equipment.
12. A page processing apparatus, comprising: a receiving module, a determining module and a playing module, wherein,
the receiving module is used for receiving video information corresponding to a first webpage, the first webpage comprises a plurality of elements, the video information is determined according to element information of the elements, and the element information comprises a display time period and a display position of the elements in the first webpage;
the determining module is used for determining a target video according to the video information;
the playing module is used for playing the target video.
13. A cloud device, comprising: a memory and a processor;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the page processing method of any one of claims 1 to 7.
14. An electronic device, comprising: a memory and a processor;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the page processing method of any one of claims 8 to 10.
15. A computer-readable storage medium having stored thereon computer-executable instructions for implementing the page processing method of any one of claims 1 to 7, or the page processing method of any one of claims 8 to 10, when the computer-executable instructions are executed by a processor.
16. A computer program product comprising a computer program which, when executed by a processor, implements the page processing method of any one of claims 1 to 7, or the page processing method of any one of claims 8 to 10.
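The static-video generation of claims 1 to 5 (deriving the frame count N from the display duration and a preset frame rate, mapping each static element's display time period to frame identifiers, grouping per frame, then fusing each group into a target image) can be sketched as follows. Every name here (`StaticElement`, `frame_ids`, `build_static_video`, `fuse`) is an illustrative assumption, not an identifier from the patent, and `fuse` is a stand-in for real RGB compositing.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class StaticElement:
    """A static page element with its display time period and position (claim 1)."""
    start: float               # display time period start, in seconds
    end: float                 # display time period end, in seconds
    position: Tuple[int, int]  # (x, y) display position in the first webpage


def frame_ids(elem: StaticElement, fps: int) -> range:
    """Claim 5: map an element's display time period to 1-based frame identifiers."""
    first = int(elem.start * fps) + 1
    last = int(elem.end * fps)
    return range(first, last + 1)


def build_static_video(elements: List[StaticElement],
                       duration: float, fps: int) -> List[list]:
    # Claim 3: number of frames N from the display duration and a preset frame rate.
    n = int(duration * fps)
    # Claim 4: group each element's (pre-rendered) RGB image under every frame
    # identifier it corresponds to.
    groups = {i: [] for i in range(1, n + 1)}
    for elem in elements:
        for i in frame_ids(elem, fps):
            if 1 <= i <= n:
                groups[i].append(elem)
    # Fuse each group into one target image, then splice the N frames in order.
    return [fuse(groups[i]) for i in range(1, n + 1)]


def fuse(elems: List[StaticElement]) -> list:
    # Placeholder fusion: a real implementation would composite the RGB images
    # at their display positions; here we only record which elements are visible.
    return [e.position for e in elems]
```

For a page displayed for 2 seconds at 10 frames per second, N = 20; an element shown from 0.5 s to 2 s contributes its image to frames 6 through 20, so early frames contain only the elements whose display time periods have begun.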
CN202210286723.7A 2022-03-22 2022-03-22 Page processing method, device and equipment Pending CN114666621A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210286723.7A CN114666621A (en) 2022-03-22 2022-03-22 Page processing method, device and equipment


Publications (1)

Publication Number Publication Date
CN114666621A true CN114666621A (en) 2022-06-24

Family

ID=82031485

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210286723.7A Pending CN114666621A (en) 2022-03-22 2022-03-22 Page processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN114666621A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105847870A * 2016-04-20 2016-08-10 Le Holdings (Beijing) Co., Ltd. Server, static video playing page generation method, device and system
US20170004646A1 * 2015-07-02 2017-01-05 Kelly Phillipps System, method and computer program product for video output from dynamic content
CN106649830A * 2016-12-29 2017-05-10 Beijing Qihoo Technology Co., Ltd. Information display method and device
WO2017148211A1 * 2016-02-29 2017-09-08 Nubia Technology Co., Ltd. Mobile terminal and webpage screenshot capturing method
CN107480245A * 2017-08-10 2017-12-15 Tencent Technology (Shenzhen) Co., Ltd. Video file generation method, device and storage medium
CN109684565A * 2018-12-11 2019-04-26 Beijing ByteDance Network Technology Co., Ltd. Webpage-related video generation and display method, device, system and electronic device
CN110324671A * 2018-03-30 2019-10-11 ZTE Corporation Video webpage playback method and device, electronic device and storage medium
CN110457624A * 2019-06-26 2019-11-15 Wangsu Science & Technology Co., Ltd. Video generation method, device, server and storage medium
US20200410034A1 * 2019-06-26 2020-12-31 Wangsu Science & Technology Co., Ltd. Video generating method, apparatus, server, and storage medium
CN113010825A * 2021-03-09 2021-06-22 Tencent Technology (Shenzhen) Co., Ltd. Data processing method and related device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHEN, Fengqi: "Research on the Function and Aesthetics of Visual Elements in Web Page Design", Computer Programming Skills & Maintenance, no. 17, 3 September 2017 (2017-09-03) *
ZHENG, Haineng; YE, Azhen: "Research on a Web Page Conversion Technology", Computer Knowledge and Technology, no. 14, 15 May 2018 (2018-05-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115396696A * 2022-08-22 2022-11-25 NetEase (Hangzhou) Network Co., Ltd. Video data transmission method, system, processing device and storage medium
CN115396696B * 2022-08-22 2024-04-12 NetEase (Hangzhou) Network Co., Ltd. Video data transmission method, system, processing device and storage medium

Similar Documents

Publication Publication Date Title
CN109460233B (en) Method, device, terminal equipment and medium for updating native interface display of page
US11785195B2 (en) Method and apparatus for processing three-dimensional video, readable storage medium and electronic device
CN110070593B (en) Method, device, equipment and medium for displaying picture preview information
CN112651475B (en) Two-dimensional code display method, device, equipment and medium
US11893770B2 (en) Method for converting a picture into a video, device, and storage medium
CN114666621A (en) Page processing method, device and equipment
US20240048665A1 (en) Video generation method, video playing method, video generation device, video playing device, electronic apparatus and computer-readable storage medium
CN109871465B (en) Time axis calculation method and device, electronic equipment and storage medium
CN110971955B (en) Page processing method and device, electronic equipment and storage medium
CN114222185B (en) Video playing method, terminal equipment and storage medium
CN113961280B (en) View display method and device, electronic equipment and computer readable storage medium
CN112004049B (en) Double-screen different display method and device and electronic equipment
CN114035787A (en) Webpage construction method and device, electronic equipment, storage medium and product
US20230022105A1 (en) Video mask layer display method, apparatus, device and medium
CN115150653B (en) Media content display method and device, electronic equipment and storage medium
CN109242814A (en) Commodity image processing method, device and electronic equipment
CN112306339B (en) Method and apparatus for displaying image
US12020347B2 (en) Method and apparatus for text effect processing
US11983911B2 (en) Method and system for transmitting information, storage medium and electronic device
US11651529B2 (en) Image processing method, apparatus, electronic device and computer readable storage medium
WO2024131621A1 (en) Special effect generation method and apparatus, electronic device, and storage medium
US20220292731A1 (en) Method and apparatus for text effect processing
US20220308821A1 (en) Dividing method, distribution method, medium, server, system
CN113362419A (en) Drawing method and related equipment
CN116302268A (en) Media content display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination