CN117692672A - Snapshot-based video information sending method and device, electronic equipment and medium - Google Patents


Info

Publication number
CN117692672A (application number CN202311658329.2A; granted as CN117692672B)
Authority
CN (China)
Prior art keywords
video, information, snapshot, target, determining
Legal status
Granted; active
Other languages
Chinese (zh)
Inventors
Zhang Yan (张燕), Wang Jie (王杰)
Original and current assignee
Park Road Credit Information Co ltd
Application filed by Park Road Credit Information Co ltd; priority to CN202311658329.2A

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiments of the invention disclose a snapshot-based video information sending method and device, an electronic device, and a medium. One embodiment of the method comprises the following steps: receiving target user request information sent by a user terminal; sending the target user request information to a server side so as to receive video information sent by the server side; in response to determining that the video information satisfies a preset normal condition, performing a determining step: determining video list information included in the video information as target video list information, and adding the target user request information and the target video list information to a video snapshot table; in response to determining that the video information does not satisfy the preset normal condition, selecting target video list information from the video snapshot table and performing alarm processing on the user terminal; and sending the obtained target video list information to the user terminal. This embodiment can reduce the waste of storage resources.

Description

Snapshot-based video information sending method and device, electronic equipment and medium
Technical Field
The embodiments of the present disclosure relate to the field of computer technology, and in particular to a snapshot-based video information sending method and device, an electronic device, and a medium.
Background
Even when the server side is abnormal, video information must continue to be sent to the user terminal so that user requirements can be met. Currently, video information is transmitted in the following manner: the video information stored at the server side is backed up, or the video information queried by the user terminal is stored into a snapshot; when the server side is abnormal, the video information corresponding to the user request is queried from the backed-up video information or from the snapshot, and the queried video information is then sent to the user terminal.
However, the above manner generally suffers from the following technical problems:
first, backing up video information requires double the storage resources to store the video information, which wastes storage resources;
second, storing the video information queried by the user terminal into a snapshot stores the same video information into the snapshot repeatedly, so storage resources are consumed storing duplicate video information, which wastes storage resources;
third, querying the snapshot for the video information corresponding to the user request consumes computing resources traversing the snapshot many times to find the video information that best matches the user's needs, which wastes computing resources.
The information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not constitute prior art already known to a person of ordinary skill in the art in this country.
Disclosure of Invention
This summary is provided to introduce concepts in a simplified form that are described further in the detailed description below. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose snapshot-based video information transmission methods, apparatuses, electronic devices, and computer-readable media to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a snapshot-based video information transmission method, the method including: receiving target user request information sent by a user terminal; transmitting the target user request information to a server to receive video information transmitted by the server, wherein the video information comprises: server status code, video list information; in response to determining that the video information satisfies a preset normal condition, performing the following determining step: determining video list information included in the video information as target video list information, wherein the preset normal condition is as follows: the server status code included in the video information is a preset status code, and the video list information included in the video information is not empty; adding the target user request information and the target video list information into a video snapshot table, wherein the video snapshot table is initially empty; selecting target video list information from a video snapshot table and carrying out alarm processing on the user terminal in response to determining that the video information does not meet the preset normal condition; and sending the obtained target video list information to the user terminal.
In a second aspect, some embodiments of the present disclosure provide a snapshot-based video information transmitting apparatus, the apparatus including: a receiving unit configured to receive target user request information sent by a user terminal; a first sending unit configured to send the target user request information to a server, so as to receive video information sent by the server, where the video information includes: server status code, video list information; a determining unit configured to perform the following determining step in response to determining that the video information satisfies a preset normal condition: determining video list information included in the video information as target video list information, wherein the preset normal condition is as follows: the server status code included in the video information is a preset status code, and the video list information included in the video information is not empty; adding the target user request information and the target video list information into a video snapshot table, wherein the video snapshot table is initially empty; a selecting unit configured to select target video list information from a video snapshot table and perform alarm processing on the user terminal in response to determining that the video information does not satisfy the preset normal condition; and a second transmitting unit configured to transmit the obtained target video list information to the user terminal.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect above.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following advantageous effects: by the snapshot-based video information sending method of some embodiments of the present disclosure, waste of storage resources can be reduced. Specifically, the reason why the storage resources are wasted is that: by adopting a mode of backing up video information, double storage resources are consumed to store the video information, so that the storage resources are wasted. Based on this, the snapshot-based video information transmission method of some embodiments of the present disclosure first receives target user request information transmitted by a user terminal. Thus, the target user request information of the user terminal can be received, so that the video information can be acquired from the server side later. And secondly, the target user request information is sent to a server side so as to receive the video information sent by the server side. Wherein, the video information includes: server status code, video list information. Thus, the video information can be acquired from the server side so as to send the video list information to the user terminal later. Then, in response to determining that the video information satisfies a preset normal condition, performing the following determining step: first, video list information included in the video information is determined as target video list information. Wherein, the preset normal conditions are as follows: the server status code included in the video information is a preset status code, and the video list information included in the video information is not null. Second, the target user request information and the target video list information are added to a video snapshot table. Wherein the video snapshot table is initially empty. 
Therefore, the video list information acquired while the server side is in a normal state can be stored in the video snapshot table, so that when the server side is subsequently abnormal, video list information can be acquired from the video snapshot table. Then, in response to determining that the video information does not meet the preset normal condition, target video list information is selected from the video snapshot table, and alarm processing is performed on the user terminal. Therefore, when the server side is abnormal, the target video list information can be selected from the video snapshot table. The obtained target video list information is then sent to the user terminal, so that the user's needs can be met. In this way, the user request information and the corresponding video list information are stored in the video snapshot table instead of backing up all video data, so that the waste of storage resources can be reduced.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of some embodiments of a snapshot-based video information transmission method according to the present disclosure;
FIG. 2 is a schematic diagram of the structure of some embodiments of a snapshot-based video information transmitting device according to the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Referring to fig. 1, a flow 100 of some embodiments of a snapshot-based video information transmission method according to the present disclosure is shown. The snapshot-based video information sending method comprises the following steps:
And step 101, receiving target user request information sent by a user terminal.
In some embodiments, an execution subject (e.g., a computing device) of the snapshot-based video information transmission method may receive target user request information transmitted by a user terminal through a wired connection or a wireless connection. The user terminal may be a terminal that wants to view video. The target user request information may characterize that the user terminal wants to view the video.
Step 102, the target user request information is sent to the server to receive the video information sent by the server.
In some embodiments, the executing entity may send the target user request information to a server side so as to receive the video information sent by the server side. The video information may include, but is not limited to, at least one of the following: a server status code and video list information. The server status code may characterize the HTTP status code with which the server responds after receiving the user request. The video list information may characterize the video list corresponding to the target user request. The video list information may include, but is not limited to, at least one of the following: a server interface, a requesting user identifier, a request longitude and latitude, and a requested video list identifier. Here, the server interface may be the interface through which the user terminal accesses the server side to acquire video information. The requesting user identifier may characterize the publisher of the videos that the user terminal wants to view. The request longitude and latitude may characterize a geohash code (latitude and longitude address code) of the latitude and longitude. The requested video list identifier may correspond to the individual videos that the user terminal wants to view. For example, the precision corresponding to the request longitude and latitude may be, but is not limited to: 5 km, 10 km, or 20 km. For example, the server status code may be, but is not limited to: 200, 400, 404, or 500.
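The video information structure described above can be sketched as follows. This is a hypothetical illustration only: the class and field names are assumptions for readability, not the patent's actual data model.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the video information returned by the server side.
# All names are illustrative assumptions, not the patent's actual schema.

@dataclass
class VideoListInfo:
    server_interface: str    # interface the user terminal calls on the server side
    requesting_user_id: str  # publisher of the videos the user wants to view
    request_geohash: str     # geohash code of the request longitude and latitude
    video_ids: List[str]     # requested video list identifiers

@dataclass
class VideoInfo:
    status_code: int         # HTTP status code, e.g. 200, 400, 404, 500
    video_list: List[VideoListInfo] = field(default_factory=list)
```

A response with status code 200 and a non-empty `video_list` would then satisfy the preset normal condition discussed below.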
Step 103, in response to determining that the video information satisfies a preset normal condition, performing the following determining steps:
step 1031, determining video list information included in the video information as target video list information.
In some embodiments, the executing body may determine video list information included in the video information as target video list information. Wherein, the preset normal condition may be: the server status code included in the video information is a preset status code, and the video list information included in the video information is not null. For example, the preset status code may be 200.
Step 1032, adds the target user request information and the target video list information to the video snapshot table.
In some embodiments, the executing entity may add the target user request information and the target video list information to a video snapshot table. The video snapshot table may initially be empty and may include at least one piece of video snapshot information. The video snapshot information may include, but is not limited to, at least one of the following: user request information, video list information, and a time identifier. The user request information may characterize that the user wants to view a video. The video list information may characterize the videos that the user wants to view. The time identifier may characterize the time at which the video list information was received. For example, the time identifier may be 2023.11.1.
In practice, the executing entity may add the target user request information and the target video list information to the video snapshot table by:
and the first step is to splice the target user request information, the target video list information and the current time mark to generate first target video snapshot information. In practice, the executing body may splice the target user request information, the preset splicing character string, the target video list information, the preset splicing character string, and the current time identifier into the first target video snapshot information sequentially from left to right. Here, the current time identification may characterize the current time. For example, the current time identification may be 2023.11.2. For example, the preset splice string may be, but is not limited to: "|", "#".
A second step of, in response to determining that the video snapshot table does not have the same video snapshot information as the first target video snapshot information, performing the following processing sub-steps:
and a first sub-step of adding the first target video snapshot information into a video snapshot table to obtain the first video snapshot table. Wherein, the first video snapshot table may include: at least one first video snapshot information. The first video snapshot information may include, but is not limited to, at least one of: user request information, video list information, time identification.
And a second sub-step of, in response to determining that first video snapshot information satisfying a preset time condition exists in the first video snapshot table, deleting the at least one piece of first video snapshot information satisfying the preset time condition from the first video snapshot table, so as to obtain a second video snapshot table. The preset time condition may be: the interval between the time represented by the time identifier included in the first video snapshot information and the time represented by the time identifier included in the first target video snapshot information is greater than a preset duration. For example, the preset duration may be 7 days.
And a third sub-step of determining the first video snapshot table as a second video snapshot table in response to determining that the first video snapshot table does not have the first video snapshot information satisfying the preset time condition.
And a fourth sub-step of determining the second video snapshot table as a video snapshot table.
Therefore, the video snapshot information exceeding a certain time length in the video snapshot table can be deleted, so that the storage space is saved.
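The splicing, duplicate check, and expiry deletion described in the steps above can be sketched as follows. The separator "|", the 7-day retention window, and the use of ISO timestamps as the time identifier follow the examples in the text; all function names are assumptions.

```python
from datetime import datetime, timedelta

SEPARATOR = "|"                  # preset splicing string ("|" or "#" in the text)
RETENTION = timedelta(days=7)    # preset duration from the text's example

def splice_snapshot(request_info: str, video_list_info: str, now: datetime) -> str:
    """Splice request info, separator, list info, separator, and the time
    identifier, sequentially from left to right, into one snapshot entry."""
    return SEPARATOR.join([request_info, video_list_info, now.isoformat()])

def snapshot_time(entry: str) -> datetime:
    """Recover the time identifier from the rightmost spliced field."""
    return datetime.fromisoformat(entry.rsplit(SEPARATOR, 1)[1])

def add_to_snapshot_table(table: list, entry: str) -> list:
    """Add the entry if no identical entry exists, then delete entries whose
    time identifier is older than the retention window."""
    if entry not in table:
        table.append(entry)
    cutoff = snapshot_time(entry)
    return [e for e in table if cutoff - snapshot_time(e) <= RETENTION]
```

Note this sketch compares whole spliced strings, matching the text's "same video snapshot information" check; a real implementation would also have to guard against the separator appearing inside a field.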
In some optional implementations of some embodiments, the executing entity may add the target user request information and the target video list information to a video snapshot table by:
And the first step is to splice the target user request information, the target video list information, and the current time identifier to generate second target video snapshot information. In practice, the executing entity may splice the target user request information, a preset splicing string, the target video list information, a preset splicing string, and the current time identifier, sequentially from left to right, into the second target video snapshot information. For example, the preset splicing string may be, but is not limited to: "|" or "#".
And step two, the second target video snapshot information is sent to a preset filter so as to receive the query identifier sent by the preset filter. Here, the preset filter may include a plurality of distributed bloom filters. The video snapshot information in the video snapshot table may be stored in the plurality of distributed bloom filters via a hash map. The query identifier may be a first preset query identifier or a second preset query identifier. For example, the first preset query identifier may be "true" and the second preset query identifier may be "false". Here, the first preset query identifier may characterize that the second target video snapshot information may be in the video snapshot table. The second preset query identifier may characterize that the second target video snapshot information is definitely not in the video snapshot table.
Because the bloom filters are distributed, no single bloom filter occupies a large amount of memory; this can relieve server pressure, balance cluster load, improve the availability of the architecture, and maintain high system performance, thereby improving the efficiency of data query.
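A single-node bloom filter with the "true"/"false" query identifiers described above can be sketched as follows. The distribution across multiple filter nodes is omitted, and the size, hash count, and class name are assumptions.

```python
import hashlib

class SimpleBloomFilter:
    """Minimal single-node bloom filter sketch. The text describes a layer of
    distributed bloom filters; this sketch shows only one node's behavior.
    query() returns the first preset query identifier ("true": the item MAY
    be in the table) or the second ("false": the item is definitely not)."""

    def __init__(self, size: int = 8192, hashes: int = 3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, item: str):
        # Derive `hashes` independent bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = True

    def query(self, item: str) -> str:
        # All bits set -> possibly present (false positives are possible);
        # any bit clear -> definitely absent.
        if all(self.bits[pos] for pos in self._positions(item)):
            return "true"
        return "false"
```

The "false" answer is exact, which is why the flow below can insert without scanning the table in that branch; the "true" answer is only probabilistic, which is why a fallback exact comparison is needed.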
Third, in response to determining that the received query identifier meets a preset abnormal query condition, performing the following first adding sub-step:
and a first sub-step of adding the second target video snapshot information into the video snapshot table to obtain a third video snapshot table. The preset abnormal query condition may be: the query identifier is a second preset query identifier.
And a second sub-step of determining the third video snapshot table as a video snapshot table.
Fourth, in response to determining that the received query identifier does not satisfy the preset abnormal query condition, performing the following second adding sub-step:
and a first sub-step, comparing the second target video snapshot information with the video snapshot table to generate a comparison result. In practice, first, in response to determining that the video snapshot information identical to the second target video snapshot information exists in the video snapshot table, the executing body may determine information characterizing "the second target video snapshot information exists in the video snapshot table" as a comparison result. Then, the executing body may determine, as the comparison result, information characterizing "the second target video snapshot information is not present in the video snapshot table" in response to determining that the same video snapshot information as the second target video snapshot information is not present in the video snapshot table.
And a second sub-step of adding the second target video snapshot information to the video snapshot table to obtain a fourth video snapshot table in response to determining that the comparison result meets a preset comparison condition. The preset comparison condition may be that the comparison result represents that the second target video snapshot information does not exist in the video snapshot table.
And a third sub-step of determining the fourth video snapshot table as a video snapshot table.
The optional technical content in step 1032 is an inventive point of the embodiments of the present disclosure and solves the second technical problem mentioned in the background, namely the "waste of storage resources". The factors that cause the waste of storage resources are often as follows: when the video information queried by the user terminal is stored into a snapshot, the same video information is stored into the snapshot repeatedly, so storage resources are consumed storing duplicate video information. If these factors are addressed, the effect of reducing the waste of storage resources can be achieved. To achieve this, first, the target user request information, the target video list information, and the current time identifier are subjected to splicing processing to generate second target video snapshot information. Thus, second target video snapshot information including the current time identifier can be obtained, so that it can later be stored in the video snapshot table. Second, the second target video snapshot information is sent to a preset filter so as to receive the query identifier sent by the preset filter. Thus, whether the second target video snapshot information is in the video snapshot table can be determined by the filter. Then, in response to determining that the received query identifier meets the preset abnormal query condition, the following adding steps are performed: first, the second target video snapshot information is added to the video snapshot table to obtain a third video snapshot table. Thus, when the bloom filter determines that the second target video snapshot information is not in the video snapshot table, the second target video snapshot information can be stored into the video snapshot table. Second, the third video snapshot table is determined as the video snapshot table.
Then, in response to determining that the received query identity does not satisfy the preset abnormal query condition, performing the following second adding step: firstly, comparing the second target video snapshot information with the video snapshot table to generate a comparison result. Thus, when the bloom filter determines that the second target video snapshot information is in the video snapshot table, to avoid false positives by the bloom filter, the video snapshot table may be traversed to determine whether the second target video snapshot information is in the video snapshot table. And secondly, in response to determining that the comparison result meets a preset comparison condition, adding the second target video snapshot information into the video snapshot table to obtain a fourth video snapshot table. Thus, when the video snapshot information identical to the second target video snapshot information does not exist in the video snapshot table, the second target video snapshot information can be stored in the video snapshot table. Third, the fourth video snapshot table is determined as a video snapshot table. Thus, a video snapshot table including non-duplicate video snapshot information may be obtained. Therefore, by introducing the bloom filter, whether the second target video snapshot information is in the video snapshot table can be determined, so as to determine whether the second target video snapshot information needs to be stored in the video snapshot table, and meanwhile, in order to avoid misjudgment of the bloom filter, when the bloom filter determines that the second target video snapshot information is in the video snapshot table, whether the second target video snapshot information is in the video snapshot table is further determined through traversing. Thus, non-duplicate video snapshot information may be stored. Further, waste of storage resources can be reduced.
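The guarded insertion flow above can be sketched as follows. To keep the sketch runnable and deterministic, a stub filter (an exact set, so it never produces false positives) stands in for the distributed bloom filter; with a real bloom filter the second branch is exactly where false positives are caught by the exact comparison. All names are assumptions.

```python
class StubFilter:
    """Stand-in for the preset filter, exposing the same query()/add()
    protocol. Being exact, it never returns a false "true"."""
    def __init__(self):
        self._seen = set()

    def add(self, item: str) -> None:
        self._seen.add(item)

    def query(self, item: str) -> str:
        return "true" if item in self._seen else "false"

def add_with_filter(table: list, flt, entry: str) -> list:
    """Insert `entry` into the snapshot table, consulting the filter first."""
    if flt.query(entry) == "false":
        # Second preset query identifier: definitely not stored yet,
        # so add without traversing the table (the abnormal-query branch).
        table.append(entry)
        flt.add(entry)
    elif entry not in table:
        # Filter said "maybe present"; the exact comparison against the
        # table shows it is actually absent (a bloom-filter false positive).
        table.append(entry)
        flt.add(entry)
    return table
```

The net effect is that only non-duplicate video snapshot information is stored, and the full table traversal happens only on the (rare) "true" answers.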
And step 104, selecting target video list information from the video snapshot table and carrying out alarm processing on the user terminal in response to the fact that the video information does not meet the preset normal condition.
In some embodiments, the executing entity may select target video list information from the video snapshot table and perform alarm processing on the user terminal in response to determining that the video information does not satisfy the preset normal condition. The alarm processing may be displaying warning text or controlling a speaker to emit a prompt tone.
In practice, in response to determining that the video information does not satisfy the preset normal condition, the executing body may select the target video list information from the video snapshot table by:
The first step is to perform combination processing on the target user request information and each time identifier in a time identifier sequence to generate first time request combination information, obtaining a first time request combination information sequence, where the last time identifier in the time identifier sequence is the current time identifier. The order of the time identifiers in the time identifier sequence may be chronological. The time interval corresponding to every two adjacent time identifiers in the time identifier sequence is one day. The number of time identifiers in the time identifier sequence may be 7.
And a second step of, for each piece of first time request combination information in the first time request combination information sequence, selecting from the video snapshot table each piece of video snapshot information corresponding to that first time request combination information, obtaining a time video snapshot information set. In practice, for each piece of first time request combination information in the first time request combination information sequence, the executing entity may select, based on a preset selection condition, each piece of video snapshot information corresponding to the first time request combination information from the video snapshot table, obtaining a time video snapshot information set. The preset selection condition may be: the time video snapshot information in the time video snapshot information set includes user request information identical to the target user request information included in the first time request combination information, and the time identifier included in the time video snapshot information is identical to the time identifier included in the first time request combination information. For example, the time video snapshot information set may be empty when no video snapshot information corresponding to the first time request combination information exists in the video snapshot table.
And thirdly, determining each obtained time video snapshot information set as a time video snapshot information set sequence.
And step four, carrying out data cleaning processing on the time video snapshot information set sequence so as to generate the time video snapshot cleaning information set sequence. In practice, the executing body may remove the empty time video snapshot information set in the time video snapshot information set sequence, to obtain a time video snapshot cleaning information set sequence.
And fifthly, determining the last time video snapshot cleaning information set in the time video snapshot cleaning information set sequence as a target time video snapshot information set. Wherein, the target time video snapshot information in the target time video snapshot information set may include, but is not limited to, at least one of the following: user request information, video list information, time identification.
In the sixth step, the video list information included in the target time video snapshot information that satisfies a preset longitude and latitude condition within the target time video snapshot information set is determined as the target video list information. The preset longitude and latitude condition may be: the video list information included in the target time video snapshot information has the minimum precision value corresponding to the requested longitude and latitude.
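The six selection steps above can be sketched in Python. This is a minimal illustration of the fallback lookup, not the patented implementation: the snapshot table is modeled as a list of dicts, and the field names (`user_request`, `time_id`, `video_list`, `precision`) are assumptions made for the sketch.

```python
def select_target_video_list(snapshot_table, user_request, time_ids):
    # Step 1: pair the target request with each time identifier
    # (the last identifier in time_ids is the current one).
    combos = [(user_request, tid) for tid in time_ids]
    # Steps 2-3: per combination, collect the matching snapshot entries.
    set_sequence = [
        [s for s in snapshot_table
         if s["user_request"] == req and s["time_id"] == tid]
        for req, tid in combos
    ]
    # Step 4: data cleaning -- drop the empty sets.
    cleaned = [s for s in set_sequence if s]
    if not cleaned:
        return None  # nothing cached for this request
    # Step 5: the last (most recent) non-empty set is the target set.
    target_set = cleaned[-1]
    # Step 6: pick the entry whose longitude/latitude precision value
    # is minimal, per the preset longitude and latitude condition.
    best = min(target_set, key=lambda s: s["precision"])
    return best["video_list"]
```

With a seven-day identifier window, the sketch returns the most recent cached video list for the request, preferring the entry with the smallest precision value.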
In some optional implementations of some embodiments, in response to determining that the video information does not meet the preset normal condition, the executing entity may select the target video list information from the video snapshot table by:
and the first step is to carry out combination processing on the target user request information and the current time mark so as to generate second time request combination information.
And secondly, inputting the second time request combination information and the video snapshot table into a pre-trained target video list information generation model to obtain target video list information. The pre-trained target video list information generation model may be a pre-trained neural network model with the second time request combination information and the video snapshot table as input and the target video list information as output.
Alternatively, the pre-trained target video list information generation model may be trained by:
first, a training sample set is obtained.
In some embodiments, the executing entity may obtain the training sample set from the terminal device through a wired connection or a wireless connection. The training samples in the training sample set include: sample time request combination information, a sample video snapshot table, and sample target video list information. The sample time request combination information may include, but is not limited to, at least one of: user request information and a time identifier. The sample video snapshot table may include, but is not limited to: at least one sample video snapshot information. The sample video snapshot information may include, but is not limited to, at least one of: user request information, video list information, and a time identifier.
And secondly, determining an initial target video list information generation model.
In some embodiments, the executive may determine an initial target video list information generation model. Wherein, the initial target video list information generation model can include, but is not limited to, at least one of the following: an initial attention model, an initial matching model, an initial selection model.
The initial attention model may be an attention model with sample video snapshot information as input and initial snapshot attention information as output. Wherein the initial snapshot attention information may include, but is not limited to, at least one of: the user requests information and a time identifier. For example, the initial Attention model may be a Self-Attention model.
The initial matching model may be a custom model with the sample time request combination information and the initial snapshot attention information set as inputs and the initial matching value set as output. Wherein the initial matching value in the initial matching value set may characterize a degree of matching of the initial snapshot attention information with the sample time request combination information. The custom model can be divided into three layers:
the first layer may be an input layer for passing the sample time request combination information and the initial snapshot attention information set to the second layer.
The second layer may include a first sub-model and a second sub-model. The first sub-model may be a Pearson correlation model with the sample time request combination information and the initial snapshot attention information set as inputs and the first matching value set as output. The second sub-model may be a relevance model with the sample time request combination information and the initial snapshot attention information set as inputs and the second matching value set as output. For example, the second sub-model may be a BM25 (Best Match 25) model. Each first matching value in the first matching value set corresponds to a second matching value in the second matching value set, and both may characterize a degree of match between the initial snapshot attention information in the initial snapshot attention information set and the sample time request combination information.
The third layer may be an output layer that receives the outputs of the first sub-model and the second sub-model and outputs their element-wise average. For example, for each first matching value in the first matching value set, the average of that first matching value and the corresponding second matching value in the second matching value set is determined as an initial matching value. The resulting initial matching values are then determined as the initial matching value set, which is the output of the entire custom model.
The initial selection model may be a model with the sample video snapshot table and the initial matching value set as inputs and the initial target video list information as output. The initial target video list information may characterize the video list information that best matches the user request information included in the sample time request combination information. The initial selection model may work as follows: first, the highest initial matching value is selected from the initial matching value set as the target matching value; then, the video list information included in the sample video snapshot information corresponding to the target matching value in the sample video snapshot table is used as the initial target video list information.
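The custom matching model and the selection model can be sketched together as follows. This is a hedged illustration, assuming the request and each snapshot's attention information are already encoded as equal-length numeric vectors; the BM25 sub-model is represented by precomputed `bm25_scores`, since a real BM25 score requires corpus term statistics the sketch does not model.

```python
import math

def pearson(x, y):
    # Pearson correlation between two equal-length numeric vectors
    # (the first sub-model of the second layer).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def match_and_select(request_vec, attention_vecs, video_lists, bm25_scores):
    # Second layer, first sub-model: one Pearson score per snapshot.
    first = [pearson(request_vec, v) for v in attention_vecs]
    # Third layer: element-wise average of the two sub-models' scores.
    initial = [(a + b) / 2 for a, b in zip(first, bm25_scores)]
    # Selection model: the snapshot with the highest matching value wins.
    best = max(range(len(initial)), key=lambda i: initial[i])
    return video_lists[best], initial
```

The averaging mirrors the output layer described above; the arg-max mirrors the selection model picking the target matching value.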
And thirdly, selecting training samples from the training sample set.
In some embodiments, the executing entity may select a training sample from the training sample set. In practice, the executing entity may randomly select training samples from the training sample set.
In the fourth step, each sample video snapshot information included in the sample video snapshot table included in the selected training sample is input into the initial attention model to generate initial snapshot attention information, yielding an initial snapshot attention information set.
In some embodiments, the executing body may input each sample video snapshot information included in the sample video snapshot table included in the selected training sample into the initial attention model to generate initial snapshot attention information, so as to obtain an initial snapshot attention information set.
And fifthly, inputting sample time request combination information and the initial snapshot attention information set included in the selected training samples into the initial matching model to obtain an initial matching value set.
In some embodiments, the executing entity may input the sample time request combination information included in the selected training sample and the initial snapshot attention information set into the initial matching model to obtain an initial matching value set.
And sixthly, inputting a sample video snapshot table and the initial matching value set included in the selected training sample into the initial selection model to obtain initial target video list information.
In some embodiments, the executing body may input the sample video snapshot table included in the selected training sample and the initial matching value set into the initial selection model, so as to obtain initial target video list information.
In the seventh step, a difference value between the initial target video list information and the sample target video list information included in the training sample is determined based on a preset loss function.
In some embodiments, the execution body may determine a difference value between the initial target video list information and the sample target video list information included in the training sample based on a preset loss function. The preset loss function may be, but is not limited to: a mean square error (MSE) loss function, a hinge loss function, a cross-entropy loss function, a 0-1 loss function, an absolute loss function, a log loss function, a square loss function, an exponential loss function, and the like.
And eighth step, in response to determining that the difference value is greater than or equal to a preset difference value, adjusting network parameters of the initial target video list information generation model.
In some embodiments, the execution body may adjust network parameters of the initial target video list information generation model in response to determining that the difference value is greater than or equal to the preset difference value. For example, the difference value and the preset difference value may be differenced, and the resulting error value may be propagated backward from the last layer of the model using back propagation, stochastic gradient descent, and the like, to adjust the parameters of each layer. Of course, a layer-freezing approach may be used as needed, keeping the network parameters of some layers unchanged and unadjusted; this is not limited here. The setting of the preset difference value is also not limited; for example, the preset difference value may be 0.1.
Optionally, in response to determining that the difference value is smaller than the preset difference value, determining the initial target video list information generation model as a trained target video list information generation model.
In some embodiments, the executing body may determine the initial target video list information generation model as a trained target video list information generation model in response to determining that the difference value is smaller than the preset difference value.
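The third through eighth training steps can be sketched as a loop. The `model` interface here (`attention`, `matching`, `selection`, `adjust`) is purely illustrative, standing in for the three sub-models and the back-propagation update described above; it is not an interface from the source.

```python
import random

def train(model, samples, loss_fn, threshold=0.1, max_iters=1000):
    for _ in range(max_iters):
        sample = random.choice(samples)                                # step 3
        attn = [model.attention(s) for s in sample["snapshot_table"]]  # step 4
        scores = model.matching(sample["time_request"], attn)          # step 5
        predicted = model.selection(sample["snapshot_table"], scores)  # step 6
        loss = loss_fn(predicted, sample["target"])                    # step 7
        if loss < threshold:       # difference below preset value: trained
            return model
        model.adjust(loss)         # step 8: back-propagate and adjust
    return model
```

The loop returns the trained model once the difference value falls below the preset difference value, matching the optional stopping condition.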
The optional technical content in step 104 is taken as an invention point of the embodiments of the present disclosure, and solves the third technical problem mentioned in the background art: "waste of computing resources". The factor that leads to wasted computing resources is often as follows: when video information corresponding to a user request is queried from the snapshot, the snapshot must be traversed multiple times to find the video information that best meets the user's needs. If this factor is addressed, the waste of computing resources can be reduced. To achieve this, first, initial snapshot attention information, including the user request information and time identifier in the sample video snapshot information associated with the sample time request combination information, can be obtained through the initial attention model. Second, a more accurate first matching value set between the initial snapshot attention information set and the sample time request combination information can be obtained through the first sub-model, and a more accurate second matching value set can be obtained through the second sub-model. Then, a more accurate initial matching value set can be obtained through the initial matching model comprising the first sub-model and the second sub-model. Finally, the sample video snapshot information in the sample video snapshot table with the highest degree of matching with the sample time request combination information can be selected through the initial selection model. Therefore, the initial target video list information generation model comprising the initial attention model, the initial matching model, and the initial selection model can be trained to obtain a more accurate target video list information generation model.
Therefore, the video information that best meets the user's needs can be obtained with a single inference of the target video list information generation model. Furthermore, by using the target video list information generation model in place of traversing the snapshot multiple times, the waste of computing resources can be reduced.
And step 105, the obtained target video list information is sent to the user terminal.
In some embodiments, the executing entity may send the obtained target video list information to the user terminal.
Optionally, in response to receiving the abnormal alarm information sent by the target user terminal or in response to determining that the user terminal meets a preset abnormal condition, performing the following updating steps:
and determining video snapshot information meeting preset abnormal snapshot conditions in the video snapshot table as abnormal video snapshot information.
In some embodiments, the executing body may determine the video snapshot information in the video snapshot table that satisfies a preset abnormal snapshot condition as the abnormal video snapshot information. The target user terminal may be a terminal used to maintain the server side. The abnormal alarm information may indicate that the target user terminal has found an abnormality in the video snapshot information in the video snapshot table. The preset abnormal condition may be: the user terminal does not click on the target video list information after receiving it. The preset abnormal snapshot condition may be: the video snapshot information corresponds to the abnormal alarm information. The preset abnormal snapshot condition may also be: the video snapshot information corresponds to the target video list information received by the user terminal. The abnormal video snapshot information may include, but is not limited to, at least one of: user request information, video list information, and a time identifier.
In the second step, the previous video snapshot information corresponding to the abnormal video snapshot information in the video snapshot table is determined as the initial video snapshot information.
In some embodiments, the executing body may determine the previous video snapshot information corresponding to the abnormal video snapshot information in the video snapshot table as the initial video snapshot information. The previous video snapshot information corresponding to the abnormal video snapshot information may be: the video snapshot information in the video snapshot table whose time identifier corresponds to the day before the time identifier included in the abnormal video snapshot information.
And thirdly, updating the abnormal video snapshot information in the video snapshot table into initial video snapshot information so as to update the video snapshot table and obtain a fifth video snapshot table.
In some embodiments, the executing body may update the abnormal video snapshot information in the video snapshot table to the initial video snapshot information, so as to update the video snapshot table to obtain a fifth video snapshot table.
And fourthly, determining the fifth video snapshot table as a video snapshot table.
In some embodiments, the executing entity may determine the fifth video snapshot table as a video snapshot table.
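The four update steps can be sketched as a rollback pass over the table. The dict fields and integer day identifiers are assumptions for illustration; `is_abnormal` stands in for the preset abnormal snapshot condition.

```python
def roll_back_abnormal_snapshots(snapshot_table, is_abnormal):
    updated = []
    for entry in snapshot_table:
        if is_abnormal(entry):
            # First/second steps: find the previous day's snapshot
            # for the same user request.
            previous = next(
                (e for e in snapshot_table
                 if e["user_request"] == entry["user_request"]
                 and e["time_id"] == entry["time_id"] - 1),
                None,
            )
            # Third step: replace the abnormal entry with the initial
            # (previous-day) entry; keep it unchanged if none exists.
            updated.append(previous if previous is not None else entry)
        else:
            updated.append(entry)
    # Fourth step: the updated ("fifth") table becomes the snapshot table.
    return updated
```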
The above embodiments of the present disclosure have the following advantageous effects: by the snapshot-based video information sending method of some embodiments of the present disclosure, waste of storage resources can be reduced. Specifically, the reason why the storage resources are wasted is that: by adopting a mode of backing up video information, double storage resources are consumed to store the video information, so that the storage resources are wasted. Based on this, the snapshot-based video information transmission method of some embodiments of the present disclosure first receives target user request information transmitted by a user terminal. Thus, the target user request information of the user terminal can be received, so that the video information can be acquired from the server side later. And secondly, the target user request information is sent to a server side so as to receive the video information sent by the server side. Wherein, the video information includes: server status code, video list information. Thus, the video information can be acquired from the server side so as to send the video list information to the user terminal later. Then, in response to determining that the video information satisfies a preset normal condition, performing the following determining step: first, video list information included in the video information is determined as target video list information. Wherein, the preset normal conditions are as follows: the server status code included in the video information is a preset status code, and the video list information included in the video information is not null. Second, the target user request information and the target video list information are added to a video snapshot table. Wherein the video snapshot table is initially empty. 
Therefore, the video list information acquired from the server side in the normal state can be stored in the video snapshot table, so that video list information can be acquired from the video snapshot table when the server side subsequently becomes abnormal. Then, in response to determining that the video information does not meet the preset normal condition, target video list information is selected from the video snapshot table, and alarm processing is performed on the user terminal. Therefore, when the server side is abnormal, the target video list information can still be selected from the video snapshot table. Then, the obtained target video list information is sent to the user terminal. Thus, the target video list information can be transmitted to the user terminal to meet the user's needs. In this way, user request information and the corresponding video list information are stored in the video snapshot table instead of backing up all video data, so the waste of storage resources can be reduced.
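The overall flow recapped above can be sketched end to end. The status code 200, the callable parameters, and the dict fields are assumptions for illustration only; the source specifies only "a preset status code" and a non-empty video list.

```python
def handle_request(user_request, query_server, snapshot_table, alert,
                   select_fallback):
    video_info = query_server(user_request)
    # Preset normal condition: expected status code and non-empty list.
    normal = video_info["status"] == 200 and bool(video_info["video_list"])
    if normal:
        target = video_info["video_list"]
        # Cache the request and its video list in the snapshot table.
        snapshot_table.append({"user_request": user_request,
                               "video_list": target})
    else:
        # Server abnormal: fall back to the snapshot table and alarm.
        target = select_fallback(snapshot_table, user_request)
        alert(user_request)
    return target
```

Only (request, video list) pairs are cached, rather than a full backup of the video data, which is the storage saving the embodiments claim.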
With further reference to fig. 2, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of a snapshot-based video information transmitting apparatus, which correspond to those method embodiments shown in fig. 1, and which are particularly applicable to various electronic devices.
As shown in fig. 2, the snapshot-based video information transmitting apparatus 200 of some embodiments includes: a receiving unit 201, a first transmitting unit 202, a determining unit 203, a selecting unit 204, and a second transmitting unit 205. Wherein, the receiving unit 201 is configured to receive target user request information sent by a user terminal; a first sending unit 202 configured to send the target user request information to a server, so as to receive video information sent by the server, where the video information includes: server status code, video list information; a determining unit 203 configured to perform the following determining step in response to determining that the video information satisfies a preset normal condition: determining video list information included in the video information as target video list information, wherein the preset normal condition is as follows: the server status code included in the video information is a preset status code, and the video list information included in the video information is not empty; adding the target user request information and the target video list information into a video snapshot table, wherein the video snapshot table is initially empty; a selecting unit 204 configured to select target video list information from a video snapshot table and perform alarm processing on the user terminal in response to determining that the video information does not satisfy the preset normal condition; the second transmitting unit 205 is configured to transmit the obtained target video list information to the above-mentioned user terminal.
It will be appreciated that the elements described in the snapshot-based video information transmitting apparatus 200 correspond to the respective steps in the method described with reference to fig. 1. Thus, the operations, features and advantages described above with respect to the method are equally applicable to the snapshot-based video information transmitting apparatus 200 and the units contained therein, and are not described herein.
Referring now to FIG. 3, a schematic diagram of an electronic device (e.g., computing device) 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic devices in some embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), car terminals (e.g., car navigation terminals), and the like, as well as stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 3 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various suitable actions and processes in accordance with a program stored in a read-only memory (ROM) 302 or a program loaded from a storage means 308 into a random access memory (RAM) 303. The RAM 303 also stores various programs and data required for the operation of the electronic device 300. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 308 including, for example, magnetic tape, hard disk, etc.; and communication means 309. The communication means 309 may allow the electronic device 300 to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 shows an electronic device 300 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 3 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications device 309, or from storage device 308, or from ROM 302. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing means 301.
It should be noted that, the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving target user request information sent by a user terminal; transmitting the target user request information to a server to receive video information transmitted by the server, wherein the video information comprises: server status code, video list information; in response to determining that the video information satisfies a preset normal condition, performing the following determining step: determining video list information included in the video information as target video list information, wherein the preset normal condition is as follows: the server status code included in the video information is a preset status code, and the video list information included in the video information is not empty; adding the target user request information and the target video list information into a video snapshot table, wherein the video snapshot table is initially empty; selecting target video list information from a video snapshot table and carrying out alarm processing on the user terminal in response to determining that the video information does not meet the preset normal condition; and sending the obtained target video list information to the user terminal.
Computer program code for carrying out operations for some embodiments of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor includes a receiving unit, a first transmitting unit, a determining unit, a selecting unit, and a second transmitting unit. The names of these units do not constitute a limitation on the unit itself in some cases, and for example, the receiving unit may also be described as "a unit that receives target user request information sent by the user terminal".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The foregoing description presents only the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the invention, for example, technical solutions in which the above features are replaced with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (8)

1. A snapshot-based video information sending method, comprising:
receiving target user request information sent by a user terminal;
transmitting the target user request information to a server to receive video information transmitted by the server, wherein the video information comprises: server status code, video list information;
in response to determining that the video information satisfies a preset normal condition, performing the determining step of:
determining video list information included in the video information as target video list information, wherein the preset normal condition is: the server-side status code included in the video information is a preset status code, and the video list information included in the video information is not empty;
adding the target user request information and the target video list information into a video snapshot table, wherein the video snapshot table is initially empty;
selecting target video list information from a video snapshot table and carrying out alarm processing on the user terminal in response to determining that the video information does not meet the preset normal condition;
and sending the obtained target video list information to the user terminal.
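The flow of claim 1 can be sketched in code as follows. This is a minimal illustration only: the data layouts, key names, and the value of the preset status code are assumptions, since the claim does not fix them.

```python
OK_STATUS = 200  # assumed value of the "preset status code"

def handle_request(request_info, video_info, snapshot_table):
    """Sketch of claim 1: serve and cache on success, fall back to a snapshot on failure."""
    status = video_info.get("status_code")
    video_list = video_info.get("video_list")
    if status == OK_STATUS and video_list:
        # Preset normal condition met: record a snapshot and serve the fresh list.
        snapshot_table.append({"request": request_info, "video_list": video_list})
        return video_list, False  # (list to send, alarm flag)
    # Abnormal: select the most recent snapshot for this request and raise an alarm.
    for entry in reversed(snapshot_table):
        if entry["request"] == request_info:
            return entry["video_list"], True
    return [], True
```

A client-facing handler would then send the returned list to the user terminal and, when the alarm flag is set, perform the alarm processing toward that terminal.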
2. The method of claim 1, wherein the video snapshot table comprises: video snapshot information, the video snapshot information including: user request information, video list information, and time identification; and
The adding the target user request information and the target video list information to a video snapshot table comprises the following steps:
performing splicing processing on the target user request information, the target video list information and a current time identifier to generate first target video snapshot information;
in response to determining that the video snapshot information that is the same as the first target video snapshot information does not exist in the video snapshot table, performing the following processing steps:
adding the first target video snapshot information into a video snapshot table to obtain a first video snapshot table, wherein the first video snapshot table comprises first video snapshot information;
in response to determining that first video snapshot information meeting a preset time condition exists in the first video snapshot table, deleting at least one piece of first video snapshot information meeting the preset time condition from the first video snapshot table to obtain a second video snapshot table;
in response to determining that first video snapshot information meeting a preset time condition does not exist in the first video snapshot table, determining the first video snapshot table as a second video snapshot table;
and determining the second video snapshot table as a video snapshot table.
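A sketch of the insert-with-deduplication and expiry logic of claim 2 follows. The splice is represented as a dict rather than a concatenated string, and the expiry window stands in for the unspecified "preset time condition"; both are assumptions.

```python
import time

SNAPSHOT_TTL_SECONDS = 3600  # assumed stand-in for the "preset time condition"

def add_snapshot(snapshot_table, request_info, video_list, now=None):
    """Sketch of claim 2: splice (request, list, time), dedupe, then expire old entries."""
    now = time.time() if now is None else now
    candidate = {"request": request_info, "video_list": video_list, "time": now}
    # Only insert if no identical snapshot information already exists in the table.
    if not any(e == candidate for e in snapshot_table):
        snapshot_table = snapshot_table + [candidate]
    # Delete every entry that satisfies the (assumed) expiry condition.
    return [e for e in snapshot_table if now - e["time"] <= SNAPSHOT_TTL_SECONDS]
```

The returned list corresponds to the "second video snapshot table" that replaces the working table.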
3. The method of claim 1, wherein the selecting target video list information from a video snapshot table in response to determining that the video information does not satisfy the preset normal condition comprises:
combining the target user request information with each time identifier in a time identifier sequence to generate first time request combination information, so as to obtain a first time request combination information sequence, wherein the last time identifier in the time identifier sequence is the current time identifier;
for each first time request combination information in the first time request combination information sequence, selecting each video snapshot information corresponding to the first time request combination information from a video snapshot table to obtain a time video snapshot information set;
determining each obtained time video snapshot information set as a time video snapshot information set sequence;
performing data cleaning processing on the time video snapshot information set sequence to generate a time video snapshot cleaning information set sequence;
determining the last time video snapshot cleaning information set in the time video snapshot cleaning information set sequence as a target time video snapshot information set;
and determining, as target video list information, the video list information included in the target time video snapshot information that satisfies the preset longitude and latitude condition in the target time video snapshot information set.
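The selection path of claim 3 can be sketched as below. The "data cleaning" rule (dropping empty lists) and the final pick are placeholders, since the claim leaves both the cleaning process and the longitude/latitude condition application-specific.

```python
def select_from_snapshots(request_info, time_ids, snapshot_table):
    """Sketch of claim 3: group snapshots by (request, time id), clean, pick from the last set."""
    # Build (request, time identifier) combinations; the last time id is the current one.
    combos = [(request_info, tid) for tid in time_ids]
    sets_seq = []
    for req, tid in combos:
        matching = [e for e in snapshot_table
                    if e["request"] == req and e["time_id"] == tid]
        sets_seq.append(matching)
    # "Data cleaning": here, drop entries with an empty video list (an assumption).
    cleaned_seq = [[e for e in s if e["video_list"]] for s in sets_seq]
    target_set = cleaned_seq[-1] if cleaned_seq else []
    # The longitude/latitude condition is application-specific; as a placeholder,
    # take the video list of the first surviving snapshot in the target set.
    return target_set[0]["video_list"] if target_set else []
```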
4. The method of claim 1, wherein the selecting target video list information from a video snapshot table in response to determining that the video information does not satisfy the preset normal condition comprises:
combining the target user request information and the current time identifier into second time request combination information;
and inputting the second time request combination information and the video snapshot table into a pre-trained target video list information generation model to obtain target video list information.
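The model-based variant of claim 4 reduces to a thin wrapper once a trained model is available. The `predict` interface below is a hypothetical stand-in; the patent does not specify the model architecture or its input encoding.

```python
def select_via_model(request_info, current_time_id, snapshot_table, model):
    """Sketch of claim 4: feed (request, current time) plus the snapshot table to a model."""
    combined = {"request": request_info, "time_id": current_time_id}
    # `model` is any object exposing a predict() method that maps the combined
    # request information and the snapshot table to target video list information.
    return model.predict(combined, snapshot_table)
```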
5. The method of claim 1, wherein the method further comprises:
in response to receiving abnormal alarm information sent by the target user terminal, or in response to determining that the user terminal satisfies a preset abnormal condition, performing the following updating steps:
determining video snapshot information meeting preset abnormal snapshot conditions in a video snapshot table as abnormal video snapshot information;
determining the previous video snapshot information corresponding to the abnormal video snapshot information in the video snapshot table as initial video snapshot information;
updating the abnormal video snapshot information in the video snapshot table to the initial video snapshot information, so as to update the video snapshot table and obtain a fifth video snapshot table;
and determining the fifth video snapshot table as a video snapshot table.
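The rollback of claim 5 can be sketched as follows. The predicate `is_abnormal` stands in for the unspecified "preset abnormal snapshot condition".

```python
def roll_back_abnormal(snapshot_table, is_abnormal):
    """Sketch of claim 5: replace each abnormal entry with the previous snapshot."""
    updated = list(snapshot_table)
    for i, entry in enumerate(updated):
        if is_abnormal(entry) and i > 0:
            # Overwrite the abnormal snapshot information with the entry just before it.
            updated[i] = updated[i - 1]
    return updated
```

The returned list corresponds to the "fifth video snapshot table" that then replaces the working table.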
6. A snapshot-based video information transmitting apparatus, comprising:
a receiving unit configured to receive target user request information sent by a user terminal;
the first sending unit is configured to send the target user request information to a server side so as to receive video information sent by the server side, wherein the video information comprises: server status code, video list information;
a determining unit configured to perform the following determining step in response to determining that the video information satisfies a preset normal condition: determining video list information included in the video information as target video list information, wherein the preset normal condition is: the server-side status code included in the video information is a preset status code, and the video list information included in the video information is not empty; adding the target user request information and the target video list information into a video snapshot table, wherein the video snapshot table is initially empty;
A selecting unit configured to select target video list information from a video snapshot table and perform alarm processing on the user terminal in response to determining that the video information does not satisfy the preset normal condition;
and a second transmitting unit configured to transmit the obtained target video list information to the user terminal.
7. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
8. A computer readable medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-5.
CN202311658329.2A 2023-12-05 2023-12-05 Snapshot-based video information sending method and device, electronic equipment and medium Active CN117692672B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311658329.2A CN117692672B (en) 2023-12-05 2023-12-05 Snapshot-based video information sending method and device, electronic equipment and medium


Publications (2)

Publication Number Publication Date
CN117692672A true CN117692672A (en) 2024-03-12
CN117692672B CN117692672B (en) 2024-06-11

Family

ID=90127719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311658329.2A Active CN117692672B (en) 2023-12-05 2023-12-05 Snapshot-based video information sending method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN117692672B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101253495A (en) * 2005-08-30 2008-08-27 Microsoft Corporation Electronic data snapshot generator
US20150046402A1 (en) * 2013-08-12 2015-02-12 International Business Machines Corporation Data backup across physical and virtualized storage volumes
US20150178167A1 (en) * 2013-12-23 2015-06-25 Symantec Corporation Systems and methods for generating catalogs for snapshots
US20180081572A1 (en) * 2016-09-21 2018-03-22 International Business Machines Corporation Log snapshot control on an automated data storage library
CN108156010A (en) * 2016-12-06 2018-06-12 创盛视联数码科技(北京)有限公司 Video cloud platform system monitoring and method
US20190392079A1 (en) * 2018-06-22 2019-12-26 International Business Machines Corporation Holistic mapping and relocation of social media assets
US20200142971A1 (en) * 2018-11-02 2020-05-07 International Business Machines Corporation Method to write data ahead to snapshot area to avoid copy-on-write
CN114116723A (en) * 2021-11-30 2022-03-01 新华三大数据技术有限公司 Snapshot processing method and device and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
门阳博 (Men Yangbo): "Design and Implementation of a Cloud Platform Monitoring and Management System", Master's Thesis, Xidian University, 15 April 2022 (2022-04-15) *

Also Published As

Publication number Publication date
CN117692672B (en) 2024-06-11

Similar Documents

Publication Publication Date Title
CN110781373B (en) List updating method and device, readable medium and electronic equipment
CN111246228B (en) Method, device, medium and electronic equipment for updating gift resources of live broadcast room
CN111400625B (en) Page processing method and device, electronic equipment and computer readable storage medium
CN112256733A (en) Data caching method and device, electronic equipment and computer readable storage medium
CN111163336A (en) Video resource pushing method and device, electronic equipment and computer readable medium
CN111596992A (en) Navigation bar display method and device and electronic equipment
CN111798251A (en) Verification method and device of house source data and electronic equipment
CN117692672B (en) Snapshot-based video information sending method and device, electronic equipment and medium
CN111625745B (en) Recommendation method, recommendation device, electronic equipment and computer readable medium
CN112507676B (en) Method and device for generating energy report, electronic equipment and computer readable medium
CN110941683B (en) Method, device, medium and electronic equipment for acquiring object attribute information in space
CN111460020B (en) Method, device, electronic equipment and medium for resolving message
CN112115154A (en) Data processing and data query method, device, equipment and computer readable medium
CN111680754A (en) Image classification method and device, electronic equipment and computer-readable storage medium
CN111580890A (en) Method, apparatus, electronic device, and computer-readable medium for processing features
CN115348260B (en) Information processing method, device, equipment and medium based on campus information security
CN115374320B (en) Text matching method and device, electronic equipment and computer medium
CN110633324B (en) Method, apparatus, electronic device and computer readable medium for synchronizing data
CN112948108B (en) Request processing method and device and electronic equipment
CN116881097B (en) User terminal alarm method, device, electronic equipment and computer readable medium
CN116800834B (en) Virtual gift merging method, device, electronic equipment and computer readable medium
CN111857879B (en) Data processing method, device, electronic equipment and computer readable medium
CN115269645A (en) Information query method and device, electronic equipment and computer readable medium
CN116680316A (en) Method, device, medium and electronic equipment for acquiring dimension data
CN114040014A (en) Content pushing method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant