CN108470062B - Communication method and device based on shared video - Google Patents


Info

Publication number
CN108470062B
CN108470062B (application number CN201810251608.XA)
Authority
CN
China
Prior art keywords
video
content
node
user
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810251608.XA
Other languages
Chinese (zh)
Other versions
CN108470062A (en)
Inventor
翁园林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Ainong Yunlian Technology Co., Ltd
Original Assignee
Wuhan Ainong Yunlian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Ainong Yunlian Technology Co Ltd filed Critical Wuhan Ainong Yunlian Technology Co Ltd
Priority to CN201810251608.XA priority Critical patent/CN108470062B/en
Publication of CN108470062A publication Critical patent/CN108470062A/en
Application granted granted Critical
Publication of CN108470062B publication Critical patent/CN108470062B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/71 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/735 Filtering based on additional data, e.g. user or group profiles

Abstract

The invention relates to the technical field of computers, and provides a communication method and device based on a shared video. The method comprises: obtaining an instruction from a user for calling up a communication video, and retrieving, according to the instruction, the text content of the content index nodes of the communication videos stored in a cloud; feeding the text content back, in a preset presentation format, to the intelligent terminal that sent the instruction; receiving the user's operation of selecting the text of a content index node of the corresponding video, and feeding back video browsing content near that content index node to the intelligent terminal; and acquiring an interception instruction comprising a video start node and a video end node, generating a mapping relation with the corresponding communication video, converting the mapping relation into a summary link, and displaying the summary link in the current communication window. When other users browse the communication comments, the summary link lets them quickly establish interaction with the video stored in the cloud, enriching the forms of communication between users and increasing the speed at which useful information spreads.

Description

Communication method and device based on shared video
[ technical field ]
The invention relates to the technical field of computers, in particular to a communication method and device based on a shared video.
[ background of the invention ]
With the explosive growth of information, the share of knowledge any individual can master or understand has become ever smaller relative to what exists in society as a whole. Various professional interaction platforms and communication software have therefore emerged to help people resolve the difficulties and puzzles they encounter in daily life.
However, existing communication platforms, forums in particular, support only the expression of text and pictures. Because the content they cover is very broad and mostly comes from media resources that users obtain through third-party channels, hosting and presenting those media resources on the platform itself would place a heavy burden on the server, so mainstream forum platforms today cannot effectively support video streams. Although the prior art can embed a Youku video as a web-page share, this is essentially a re-posting approach: active indexing and editing are not possible, and the form of communication remains limited.
In the agricultural communication platform to which the invention applies, every expert video is collected, stored and maintained by the platform itself, which gives the platform an inherent advantage in providing video streams as communication material. On the other hand, on an agricultural platform the solution to a problem is rarely reached in one step; the process usually involves several stages and many operations. In such cases, if a video stream can be supplied as material for the work or discussion, the communication achieves twice the result with half the effort.
In view of this, how to find an efficient communication method based on shared video becomes a technical problem to be solved at present.
[ summary of the invention ]
The invention aims to solve the technical problem of how to introduce video streams into various communication processes of a platform, and is particularly suitable for introducing proprietary videos in the communication platform into the communication processes.
The invention further aims to solve the technical problem of how to efficiently locate a target video clip among massive video resources and blend it into the current communication text.
The invention adopts the following technical scheme:
in a first aspect, the present invention provides a shared video-based communication method, where a communication video of each expert recorded by a platform is stored in a cloud, and one or more content index nodes are distributed in the communication video, where the communication method includes:
acquiring an instruction from a user for calling up a communication video, and retrieving, according to the instruction, the text content of the content index nodes of the communication videos stored in the cloud;
feeding the text content back, in a preset presentation format, to the intelligent terminal that sent the instruction;
receiving the user's operation of selecting the text of a content index node of the corresponding video, and feeding back video browsing content near that content index node to the intelligent terminal;
acquiring an interception instruction containing a video start node and a video end node, generating a mapping relation with the corresponding communication video, converting the mapping relation into a summary link, and displaying the summary link in the current communication window, so that when other users click the summary link they trigger playback of the video content between the start node and the end node.
Preferably, the preset presentation format specifically includes:
the author or theme of a communication video is taken as the first level, the content-type division of the content index nodes within the communication video as the second level, and the specific content of each content index node as the third level; the text content corresponding to each communication video stored in the cloud is presented as a tree according to this hierarchy, and the text content at each level can be expanded and collapsed so that the user can complete the corresponding interaction.
Preferably, when the instruction contains one or more keywords, the server screens the text content of the content index nodes of the communication videos stored in the cloud according to the one or more keywords, and feeds the screened text content of the content index nodes back to the intelligent terminal operated by the user.
Preferably, the summary link is composed of a frame of picture A taken from the video content between the start node and the end node, the corresponding time points of the start node and the end node, and the location address where the video content is stored; picture A carries a hyperlink, which behaves as follows: after picture A is clicked, the video content between the start node and the end node is imported according to their time points and the location address where the video content is stored.
Preferably, the selection of the frame of picture A specifically includes at least one of the following ways:
a user manually selects one frame within the video content between the start node and the end node as picture A; or,
the server identifies the current user's communication text, matches the subtitle and/or content index node associated with each frame against that text, and takes the picture with the highest matching similarity as picture A.
Preferably, when detecting that a user opens a webpage containing the summary link, the server caches the corresponding video content according to the location address where the video content is stored and the corresponding time points of the start node and the end node.
Preferably, receiving the user's operation of selecting the text of a content index node of the corresponding video specifically includes:
the user points at an index node and double-clicks it; or,
the user selects an index node and drags it into the browsing window; or,
the user selects an index node and, by long-pressing, chooses the video playing and browsing option from the pop-up operation list.
Preferably, the server is further configured to extract the subtitle content of the video between the start node and the end node and to use that subtitle content, in text form, either as a component of the summary link or as the text content of the user's communication.
Preferably, the user may preset that only text content is displayed when browsing the platform's website content; when the server receives such a user's browsing request, it skips the operation of caching video content for that web content.
In a second aspect, the present invention further provides a shared video-based communication apparatus, configured to implement the shared video-based communication method according to the first aspect, where the apparatus includes:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor and programmed to perform the shared video-based communication method of the first aspect.
In a third aspect, the present invention also provides a non-transitory computer storage medium storing computer-executable instructions for execution by one or more processors for performing the method for shared video based communication according to the first aspect.
The invention exploits the fact that, in a video carrying content index nodes, the index nodes themselves are text: it first matches the content of the index nodes against the index keywords entered by the user, switches to the video segment mapped to the target content index node according to the user's jump operation, and then determines the summary link configured for the current communication window from the start and end time nodes set by the user. When other users browse the communication comments, the summary link lets them quickly establish interaction with the video stored in the cloud, enriching the forms of communication between users and increasing the speed at which useful information spreads.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic flowchart of a shared video-based communication method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an effect of a communication interface based on a shared video according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an effect of a communication interface based on a shared video according to an embodiment of the present invention;
fig. 4 is a schematic diagram illustrating an effect that content index nodes in a communication interface based on a shared video are presented in a tree form according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an effect of a communication interface based on a shared video according to an embodiment of the present invention;
fig. 6 is a schematic flowchart of a method for generating node tags in a recorded expert video according to an embodiment of the present invention;
fig. 7 is a schematic flowchart of an improved method for generating node tags in a recorded expert video according to an embodiment of the present invention;
fig. 8 is a schematic flowchart of an extended method for generating node tags in a recorded expert video according to an embodiment of the present invention;
fig. 9 is a schematic flowchart of another extended method for generating node tags in a recorded expert video according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a communication device based on shared video according to an embodiment of the present invention.
[ detailed description ]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the description of the present invention, the terms "inner", "outer", "longitudinal", "lateral", "upper", "lower", "top", "bottom", and the like indicate orientations or positional relationships based on those shown in the drawings, and are for convenience only to describe the present invention without requiring the present invention to be necessarily constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
In the embodiments of the present invention, the terms "platform" and "server" have the same meaning. The term "cloud" is a description adopted in keeping with the trend of big data; in some small-scale applications the "cloud" may be embodied as the "server" itself, while in large-scale applications the "cloud" refers more to servers with dedicated storage functions that, compared with the platform, mainly support the storage of video streams.
In the embodiments of the present invention, the letter A is used only for convenience of description and to denote an object with a specific meaning; it carries no special limitation in itself, the meaning being limited by the wording that defines it.
In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1:
Embodiment 1 of the present invention provides a shared video-based communication method in which the communication videos of each expert recorded by a platform are stored in a cloud and one or more content index nodes are distributed in each communication video. As shown in fig. 1, the communication method includes:
In step 201, an instruction from a user for calling up a communication video is obtained, and the text content of the content index nodes of the communication videos stored in the cloud is retrieved according to the instruction.
When the instruction includes one or more keywords, the server screens the text content of the content index nodes of the communication videos stored in the cloud according to those keywords and feeds the screened text content back to the intelligent terminal operated by the user. For example, the instruction for calling up a communication video may be "@video": when the server detects that the user has entered "@video", it retrieves the content-index-node information of the communication videos stored in the cloud and presents it on the user's intelligent terminal in a preset presentation format.
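For illustration, the following is a minimal sketch of this keyword screening step, assuming the content index nodes are held server-side as simple records with a text field; the function and field names are assumptions rather than the platform's actual interface.

from typing import Dict, List

def screen_index_nodes(nodes: List[Dict], keywords: List[str]) -> List[Dict]:
    """Keep only the content index nodes whose text contains every keyword."""
    if not keywords:  # bare "@video": return all nodes for the tree presentation
        return nodes
    return [n for n in nodes if all(k in n["text"] for k in keywords)]

# Example: the user types "@video wheat rust"
nodes = [
    {"video_id": "v01", "time_s": 125, "text": "identifying wheat rust on the leaves"},
    {"video_id": "v01", "time_s": 410, "text": "preparing the fungicide sprayer"},
]
print(screen_index_nodes(nodes, ["wheat", "rust"]))  # only the first node remains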
In step 202, the text content is fed back to the intelligent terminal sending the instruction in a preset presentation format.
The preset presentation format is as follows: the author or theme of a communication video is taken as the first level, the content-type division of the content index nodes within the communication video as the second level, and the specific content of each content index node as the third level; the text content corresponding to each communication video stored in the cloud is presented as a tree according to this hierarchy. The text content at each level can be expanded and collapsed so that the user can complete the corresponding interaction; in practice the levels may be subdivided further according to the actual classification, which is not enumerated here. As shown in fig. 2, the user is editing the communication content of a certain discussion topic; as shown in fig. 3, when the user enters "@video" into the platform's editing window, the platform generates a sub-window on the communication interface, which may be placed anywhere on the interface, and the content index nodes in the sub-window are presented as an expandable and collapsible tree. Fig. 4 shows the effect of several such trees of content index nodes, where the expandable and collapsible branches carry arrow buttons, so that the user expands or collapses a branch by clicking the corresponding arrow button.
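As a sketch of the three-level presentation described above (author or theme, then content type, then index-node text), the nodes could be grouped into nested dictionaries that the client renders as an expandable and collapsible tree; the field names here are assumptions.

from collections import defaultdict
from typing import Dict, List

def build_presentation_tree(nodes: List[Dict]) -> Dict:
    """Group index nodes into the first/second/third levels of the preset format."""
    tree = defaultdict(lambda: defaultdict(list))
    for n in nodes:
        tree[n["author"]][n["content_type"]].append(n["text"])
    return {author: dict(types) for author, types in tree.items()}

nodes = [
    {"author": "Expert Zhang", "content_type": "pest control", "text": "aphid inspection"},
    {"author": "Expert Zhang", "content_type": "pest control", "text": "spraying interval"},
    {"author": "Expert Zhang", "content_type": "irrigation", "text": "drip line layout"},
]
print(build_presentation_tree(nodes))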
In step 203, receiving an operation of selecting a text of a content index node corresponding to a video by a user, and feeding back video browsing content near the corresponding content index node to the intelligent terminal.
Receiving the user's operation of selecting the text of a content index node of the corresponding video includes:
the user points at an index node and double-clicks it; or, the user selects an index node and drags it into the browsing window; or, the user selects an index node and, by long-pressing, chooses the video playing and browsing option from the pop-up operation list.
Taking fig. 5 as an example, double-clicking an index node reduces the probability of accidental operation compared with a single click. After the user double-clicks an index node (also called a leaf node) in fig. 5, the border of that index node is highlighted (for example, shown bold in fig. 5) and a sub-window is generated to carry the loaded video segment and to support the user's subsequent operations for determining the content of the summary link. Without special limitation, the sub-window may be placed anywhere in the interface; fig. 5 shows only one possible arrangement.
In step 204, an interception instruction containing a video start node and a video end node is obtained, a mapping relation with the corresponding communication video is generated, and the mapping relation is converted into a summary link and displayed in the current communication window, so that when other users click the summary link they trigger playback of the video content between the start node and the end node.
The summary link is composed of a frame of picture A taken from the video content between the start node and the end node, the corresponding time points of the start node and the end node, and the location address where the video content is stored; picture A carries a hyperlink, which behaves as follows: after picture A is clicked, the video content between the start node and the end node is imported according to their time points and the location address where the video content is stored.
As shown in fig. 5, the start node time and the end node time may be set by selecting the corresponding video frames, or may be entered through the information-editing area of the sub-window, which is not limited here.
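For illustration, a summary link built from the three elements named above (picture A, the start and end time points, and the storage address) might be represented as in the following sketch; the URL scheme and field names are assumptions, not the platform's actual format.

from dataclasses import dataclass

@dataclass
class SummaryLink:
    thumbnail_url: str  # picture A, shown inline in the communication window
    video_url: str      # location address where the video content is stored
    start_s: int        # start node time point, in seconds
    end_s: int          # end node time point, in seconds

    def to_html(self) -> str:
        # Clicking picture A imports the clip between the start node and the end node.
        href = f"{self.video_url}?start={self.start_s}&end={self.end_s}"
        return f'<a href="{href}"><img src="{self.thumbnail_url}" alt="video clip"></a>'

link = SummaryLink("https://cdn.example.com/v01/frame_125.jpg",
                   "https://cdn.example.com/v01.mp4", 125, 180)
print(link.to_html())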
This embodiment exploits the fact that, in a video carrying content index nodes, the index nodes themselves are text: it first matches the content of the index nodes against the index keywords entered by the user, switches to the video segment mapped to the target content index node according to the user's jump operation, and then determines the summary link configured for the current communication window from the start and end time nodes set by the user. When other users browse the communication comments, the summary link lets them quickly establish interaction with the video stored in the cloud, enriching the forms of communication between users and increasing the speed at which useful information spreads.
With reference to this embodiment, in step 204, the selection of the frame of picture A specifically includes at least one of the following modes:
In the first mode, the user manually selects one frame within the video content between the start node and the end node as picture A.
In the second mode, the server identifies the current user's communication text, matches the subtitle and/or content index node associated with each frame against that text, and takes the frame with the highest matching similarity as picture A.
The first mode is the traditional approach. Its advantage is that it is easy to implement, but as the demand for intelligence grows it cannot satisfy users' need for rapid extraction. This embodiment therefore further provides the second mode, which makes effective use of the information contained in the content index nodes and the video subtitles (the generation of the subtitles is further described in embodiment 2) and can quickly and accurately extract, from a segment of video content, the one or more frames that carry the most information or relate most closely to the viewpoint the user is expressing, removing the inconvenience of requiring the user to make a manual choice as in the first mode. In a preferred implementation the two modes are combined, with the first mode taking priority over the second: after the platform selects or recommends a frame for the user, if the user is not satisfied, the user can still select a picture A of their own choosing in the first mode.
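A minimal sketch of the second mode follows, using the standard-library SequenceMatcher as a stand-in for whatever text-similarity measure the platform actually applies between the user's communication text and each frame's subtitle or index-node text.

from difflib import SequenceMatcher
from typing import Dict, List

def pick_picture_a(user_text: str, frames: List[Dict]) -> Dict:
    """Return the frame whose subtitle is most similar to the user's communication text."""
    def score(frame: Dict) -> float:
        return SequenceMatcher(None, user_text, frame["subtitle"]).ratio()
    return max(frames, key=score)

frames = [
    {"time_s": 126, "subtitle": "first check the underside of the leaf for rust spots"},
    {"time_s": 131, "subtitle": "now we prepare the sprayer"},
]
print(pick_picture_a("how to check leaves for rust", frames))  # the frame at 126 s wins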
In this embodiment, by default, when the server detects that a user opens a web page containing the summary link, it caches the corresponding video content according to the location address where the video content is stored and the corresponding time points of the start node and the end node. On the other hand, some users' browsing habit is to read only the text content of a web-page topic and switch to the video browsing interface only when necessary, so this embodiment further provides an extended function: the user may preset that only text content is displayed when browsing the platform's website content, and when the server receives such a user's browsing request it skips the operation of caching video content for that web content. This reduces the resources the server spends on caching video content and enriches the ways in which users browse topics.
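The branch described above could look roughly like the sketch below, where prefetch stands for the server's caching step and all names are assumptions; a text-only user never triggers it.

def handle_browse_request(user_prefs: dict, summary_links: list, prefetch) -> str:
    """prefetch(url, start_s, end_s) caches a clip; it is skipped for text-only users."""
    if user_prefs.get("text_only", False):
        return "text"  # server skips the video-caching operation for this web content
    for link in summary_links:
        prefetch(link["video_url"], link["start_s"], link["end_s"])
    return "full"

calls = []
handle_browse_request({"text_only": True},
                      [{"video_url": "v01.mp4", "start_s": 125, "end_s": 180}],
                      lambda u, s, e: calls.append((u, s, e)))
print(calls)  # [] : nothing was cached for the text-only user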
The extensions above improve the user experience and the server's resource utilization in a specific implementation environment. Since other users may have preset text-only browsing, a further preferred extension improves their preliminary understanding of the video segment being quoted even when they do not click the summary link: the server is further configured to extract the subtitle content of the video between the start node and the end node and to use that subtitle content, in text form, either as a component of the summary link or as the text content of the user's communication.
Example 2:
Embodiment 2 of the present invention provides a method for generating node tags in recorded expert videos, used to support the expert communication videos used in embodiment 1. As shown in fig. 6, the method includes:
In step 301, tag electronic devices are attached to the various tools that experts use in the recording environment; when the expert picks up a tool, its tag electronic device feeds the type of the currently used tool back to the platform.
After the server determines, from the wireless signal sent by the tag electronic device, which type of tool is currently in use, it can further formulate a collection strategy for information about that tool, for example the shooting angle and field of view of the monitoring camera, and the sensitivity and collection period of the wireless-signal acquisition. The sensitivity and collection period of the wireless signals also affect how richly the details of the tool's use are captured, and have a strong bearing on the information about the tool that is finally analyzed.
In step 302, the platform confirms that tool A is in use according to the wireless signal fed back by tag electronic device A on tool A, and adds a node tag at the corresponding position of the recorded video content, where the node tag contains information about the expert's use of tool A.
The information about tool A includes one or more of: the length of time tool A is used, the target object processed with tool A, the number of times tool A is used, and other tools used in conjunction with tool A.
In expert videos it is common that dedicated professional tools are needed to carry out the interaction with the crops, and in the prior art this content is left untracked. By monitoring the information about the tool A used by the expert, the platform determines where nodes should be added in the recorded monitoring video and generates the content of the added node tags, which effectively improves the efficiency of node-tag generation in expert videos.
For the platform to confirm that tool A is in use according to the wireless signal fed back by tag electronic device A on tool A, at least two implementations are provided:
In the first mode, the storage cabinet in which the tools are kept is fitted with a wireless charging device and shields outgoing signals, and the platform side determines that tool A is in use according to the electronic tag numbers of the wireless signals it can currently detect; while the other tools remain in the cabinet, the wireless signals they emit are blocked by the cabinet. The wireless charging device keeps every tag electronic device at full transmit power, so that as soon as the expert takes a tool out, its wireless signal is collected immediately by the detection devices deployed by the platform. The storage cabinet may be a closed shell made of tin or iron, and the wireless charging device may use mature industrial wireless-charging techniques, which are not described further here.
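In this first mode the logic reduces to a set comparison, sketched below with hypothetical tag numbers: any tag ID that is detectable outside the shielded cabinet marks its tool as in use.

def tools_in_use(detected_tag_ids: set, tag_to_tool: dict) -> set:
    """Tags still inside the shielded cabinet are not received, so detection means use."""
    return {tag_to_tool[t] for t in detected_tag_ids if t in tag_to_tool}

tag_to_tool = {"TAG-07": "pruning shears", "TAG-12": "sprayer"}
print(tools_in_use({"TAG-07"}, tag_to_tool))  # {'pruning shears'}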
In the second mode, the wireless-signal receiver on the platform side has a signal-strength detection function and periodically receives the wireless signals returned by the tag electronic devices on each tool; when the camera captures the expert entering the monitoring area and the strength of the wireless signal sent by tag electronic device A on tool A changes, tool A is determined to be in use.
To ensure the accuracy of detecting changes in the tags' wireless-signal strength, it is preferable to arrange at least three wireless-signal collectors in the monitoring area, so that the position coordinates of tool A within the area can be calculated from the known coordinates of the three collectors, on the principle that three points determine a plane.
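Assuming each collector can convert received signal strength into a distance estimate, the position of tool A can be recovered by plain two-dimensional trilateration from the three known collector coordinates, as in the sketch below; this is an illustrative calculation, not the patent's prescribed algorithm.

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Locate a point from three known collector positions and estimated distances."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1 ** 2 - r2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1 ** 2 - r3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Collectors at three corners of the monitoring area; tool A actually at (2, 3).
print(trilaterate((0, 0), (10, 0), (0, 10),
                  (2 ** 2 + 3 ** 2) ** 0.5, (8 ** 2 + 3 ** 2) ** 0.5, (2 ** 2 + 7 ** 2) ** 0.5))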
In embodiment 1 of the present invention, an electronic tag device with a wireless signal transmitting function is mainly added to the tools used by the expert. In an actual implementation, for agricultural videos, besides automatically generating tag content in the expert video with reference to the electronic tag device of embodiment 1, the characteristic features of a crop's growth stages can further be used to generate the content of node tags belonging to those growth stages. As shown in fig. 7, the generating method further includes:
in step 303, the platform matches the photographed crops by using the crop features of the corresponding growth stages when the currently photographed crops enter the growth stage time interval according to the currently recorded growth stage distribution information of the crop varieties and the crop features of each growth stage, and configures the node tags of the growth stages for the correspondingly photographed screens when the similarity of the matching results is greater than a preset threshold.
In the above process of setting node tags according to the crops' growth stages, the same crop may develop differently because the experts cultivating it are in different geographical locations. If a user then browses videos of that crop, it is preferable to return the monitoring video that matches the location information of the user's crop, so that effective information can be obtained from the video more readily. In combination with step 303 of this embodiment, there is therefore a preferred implementation, as shown in fig. 8, in which the generating method further includes:
in step 304, after acquiring the recorded videos of different experts for the same crop type, the platform establishes a distribution relationship of node tags of the recorded videos and corresponding to corresponding growth stages of the same crop type according to different geographical positions of the experts.
In step 305, according to the distribution relationship, when the user queries the recorded video of the corresponding crop, the video content with the same crop type and the highest region position similarity is returned to the user.
As long as an expert video has not yet been published on the platform there is room for adjustment and modification, so if the growth stage obtained by the current matching result skips the growth stage that should theoretically precede it, the generating method further includes:
tracing back through the video content between the previous growth-stage node and the growth-stage node obtained by the current matching result, and performing feature matching again on a second video covering that interval, where the second video comes from a camera at another viewing angle. This implies a requirement for at least two cameras in the monitoring area, which is itself a preferable arrangement for generating growth-stage node tags: unlike the tag electronic device of embodiment 1, the growth node is captured by analyzing the image features of the crops in the pictures taken by the cameras, and capture from a single viewing angle may bias the analysis, so providing at least two cameras in the monitoring area is a preferred implementation.
Traditionally an expert's excerpted notes exist as paper text, which in the internet era amounts to a real loss of useful information. Even where platforms have introduced computer entry, the notes are still not associated with video tags. This embodiment therefore provides a preferred extension: the relevant information in the expert's excerpted notes is obtained, and the time node at which a node tag should be added to the agricultural video is determined from it. After obtaining that information, the server further analyzes the recorded video segments according to its content and, by image recognition, locates within the corresponding segment the video frame where the relevant information appears, thereby confirming the time node at which the node tag is added.
In actual operation, information can also be exchanged between the monitoring videos of different experts, particularly when tagging crop growth nodes, because the growth calendar of the same crop is highly correlated across regions. In a concrete implementation, within a batch of expert videos for the same crop, the region in which a growth-node tag is marked earliest is recorded, and the growth nodes in the videos of other regions are estimated from that regional information, so that the platform can update the analysis range for the growth nodes in those other videos.
To further improve the efficiency of tagging expert videos, this embodiment also considers making use of the activity of ordinary video-browsing users; a preferred implementation, shown in fig. 9, therefore includes:
in step 401, a request for adding a first node tag fed back by a user browsing a video is obtained.
The request for adding a first node tag carries the name/ID of the browsed video, the time-node information of the video frame to which the first node tag is added, the tag content, the user ID and so on. After obtaining the request, the server confirms the user's authority according to the user ID; preferably, it queries the history of that user ID for the total number of times (also called total points) that node-tag requests initiated by the user were finally accepted into the video's regular node-tag list, so that for a user ID with a higher total score, when the tag is placed in the to-be-checked column in step 402 the corresponding first preset threshold is lowered by a preset proportion or a preset difference (for example, to 90% of the first preset threshold, or to the first preset threshold minus 5, where 5 is the preset difference). In this way, the tag requests of user IDs with better historical credit or professionalism are verified and accepted more quickly.
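The credit-based relaxation described above can be sketched as follows; the base threshold, cut-off and adjustment values are illustrative only.

def adjusted_first_threshold(base: int, user_total_points: int,
                             good_cutoff: int = 20, mode: str = "scale") -> int:
    """Lower the first preset threshold for user IDs with a high total of accepted tags."""
    if user_total_points < good_cutoff:
        return base
    return int(base * 0.9) if mode == "scale" else base - 5  # 90% of base, or base minus 5

print(adjusted_first_threshold(30, 25, mode="scale"))  # 27
print(adjusted_first_threshold(30, 25, mode="shift"))  # 25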
In step 402, the first node tag is placed in a to-be-checked column, and when the platform feeds the video back to other users it receives those users' judgments of the correctness of the first node tag.
Preferably, each user's correctness judgment can be weighted appropriately according to the accuracy of that user's historical judgments, i.e. multiplied by a weighting coefficient greater than 1, which further improves the efficiency of verifying the request for adding the first node tag.
In step 403, when the number of correct judgments for the first node tag reaches a first preset threshold and the ratio of correct judgments to incorrect judgments is greater than a second preset threshold, the first node tag is accepted into the video's regular node-tag list.
The second preset threshold prevents the request for adding the first node tag from being accepted on the basis of the first preset threshold alone when the overall base of judgments is so large that the count of correct judgments is reached even though many judgments are actually negative. The robustness of the implementation is thus further improved.
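A minimal sketch of the promotion rule of steps 402 and 403 is given below, with reviewer weights and both thresholds chosen as illustrative values; the tag is accepted only when the weighted count and the correct-to-incorrect ratio both pass.

def promote_tag(judgments, first_threshold=30.0, second_threshold=3.0):
    """judgments: list of (is_correct, weight) pairs; weight > 1 for trusted reviewers."""
    correct = sum(w for ok, w in judgments if ok)
    incorrect = sum(w for ok, w in judgments if not ok)
    ratio = correct / incorrect if incorrect else float("inf")
    return correct >= first_threshold and ratio > second_threshold

judgments = [(True, 1.2)] * 28 + [(False, 1.0)] * 5
print(promote_tag(judgments))  # 33.6 vs 5.0, ratio 6.72: the tag is promoted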
Example 3:
Embodiment 3 of the present invention further provides a shared video-based communication apparatus, as shown in fig. 10, for implementing the shared video-based communication method described in embodiment 1. The apparatus includes:
at least one processor 21; and a memory 22 communicatively coupled to the at least one processor 21; wherein the memory 22 stores instructions executable by the at least one processor 21 and programmed to perform the shared video-based communication method of embodiment 1.
It should be noted that, because the contents of information interaction, execution process, and the like between the modules and units in the device are based on the same concept as the processing method embodiment of the present invention, specific contents may refer to the description in the method embodiment of the present invention, and are not described herein again.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the embodiments may be implemented by associated hardware as instructed by a program, which may be stored on a computer-readable storage medium, which may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (8)

1. The communication method based on the shared video is characterized in that communication videos of experts recorded by a platform are stored in a cloud, one or more content index nodes are distributed in the communication videos, and the communication method comprises the following steps:
acquiring an instruction from a user for calling up a communication video, and retrieving, according to the instruction, the text content of the content index nodes of the communication videos stored in the cloud;
feeding the text content back, in a preset presentation format, to the intelligent terminal that sent the instruction;
receiving the user's operation of selecting the text of a content index node of the corresponding video, and feeding back video browsing content near that content index node to the intelligent terminal;
acquiring an interception instruction comprising a video start node and a video end node, generating a mapping relation with the corresponding communication video, converting the mapping relation into a summary link, and displaying the summary link in the current communication window, so that when other users click the summary link they trigger playback of the video content between the start node and the end node;
the summary link is composed of a frame of picture A taken from the video content between the start node and the end node, the corresponding time points of the start node and the end node, and the location address where the video content is stored; picture A carries a hyperlink, which behaves as follows: after picture A is clicked, the video content between the start node and the end node is imported according to their time points and the location address where the video content is stored;
the method for selecting the frame of picture A specifically includes: the server identifies the current user's communication text, matches the subtitle and/or content index node associated with each frame against that text, and takes the picture with the highest matching similarity as picture A.
2. The shared video based communication method according to claim 1, wherein the preset presentation format is specifically:
the author or theme of a communication video is taken as the first level, the content-type division of the content index nodes within the communication video as the second level, and the specific content of each content index node as the third level; the text content corresponding to each communication video stored in the cloud is presented as a tree according to this hierarchy, and the text content at each level can be expanded and collapsed so that the user can complete the corresponding interaction.
3. The shared video based communication method according to claim 1, wherein when the instruction includes one or more keywords, the server screens text contents of content index nodes of the communication video stored in the cloud according to the one or more keywords, and feeds back the text contents of the screened content index nodes to the intelligent terminal operated by the user.
4. The shared video-based communication method according to claim 1, wherein whenever the server detects that a user opens a web page containing the summary link, it caches the corresponding video content according to the location address where the video content is stored and the corresponding time points of the start node and the end node.
5. The shared video-based communication method according to claim 1, wherein receiving the user's operation of selecting the text of a content index node of the corresponding video specifically includes:
the user points at an index node and double-clicks it; or,
the user selects an index node and drags it into the browsing window; or,
the user selects an index node and, by long-pressing, chooses the video playing and browsing option from the pop-up operation list.
6. The shared video-based communication method as claimed in claim 1, wherein the server is further configured to extract the subtitle content of the video between the start node and the end node and to use that subtitle content, in text form, either as a component of the summary link or as the text content of the user's communication.
7. The shared video-based communication method as claimed in claim 1, wherein the user may preset that only text content is displayed when browsing the platform's website content, and when the server receives such a user's browsing request, the server skips the operation of caching video content for that web content.
8. A shared video based communication device, the device comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor and programmed to perform the shared video-based communication method of any of claims 1-7.
CN201810251608.XA 2018-03-26 2018-03-26 Communication method and device based on shared video Active CN108470062B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810251608.XA CN108470062B (en) 2018-03-26 2018-03-26 Communication method and device based on shared video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810251608.XA CN108470062B (en) 2018-03-26 2018-03-26 Communication method and device based on shared video

Publications (2)

Publication Number Publication Date
CN108470062A CN108470062A (en) 2018-08-31
CN108470062B true CN108470062B (en) 2021-02-09

Family

ID=63264794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810251608.XA Active CN108470062B (en) 2018-03-26 2018-03-26 Communication method and device based on shared video

Country Status (1)

Country Link
CN (1) CN108470062B (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8112702B2 (en) * 2008-02-19 2012-02-07 Google Inc. Annotating video intervals
CN102307156B (en) * 2011-05-16 2015-07-22 北京奇艺世纪科技有限公司 Method and device for sharing video picture and returning to playing
US9836180B2 (en) * 2012-07-19 2017-12-05 Cyberlink Corp. Systems and methods for performing content aware video editing
CN103647991A (en) * 2013-12-23 2014-03-19 乐视致新电子科技(天津)有限公司 Method and system for sharing video in intelligent television
CN103731685A (en) * 2013-12-27 2014-04-16 乐视网信息技术(北京)股份有限公司 Method and system for synchronous communication with video played on client side
CN105979387A (en) * 2015-12-01 2016-09-28 乐视网信息技术(北京)股份有限公司 Video clip display method and system
CN105516348B (en) * 2015-12-31 2019-11-29 北京奇艺世纪科技有限公司 A kind of method and system that information is shared
KR101769071B1 (en) * 2016-05-10 2017-08-18 네이버 주식회사 Method and system for manufacturing and using video tag
CN106096050A (en) * 2016-06-29 2016-11-09 乐视控股(北京)有限公司 A kind of method and apparatus of video contents search

Also Published As

Publication number Publication date
CN108470062A (en) 2018-08-31

Similar Documents

Publication Publication Date Title
CN103942337B (en) It is a kind of based on image recognition and the video searching system that matches
CN113115099B (en) Video recording method and device, electronic equipment and storage medium
US7734654B2 (en) Method and system for linking digital pictures to electronic documents
JP6384474B2 (en) Information processing apparatus and information processing method
CN111343467B (en) Live broadcast data processing method and device, electronic equipment and storage medium
CN111857508B (en) Task management method and device and electronic equipment
CN105009113A (en) Queryless search based on context
CN104769957A (en) Identification and presentation of internet-accessible content associated with currently playing television programs
US20150055017A1 (en) Relational Display of Images
CN108536414A (en) Method of speech processing, device and system, mobile terminal
CN106407358B (en) Image searching method and device and mobile terminal
CN113569037A (en) Message processing method and device and readable storage medium
CN112486385A (en) File sharing method and device, electronic equipment and readable storage medium
CN107071554B (en) Method for recognizing semantics and device
CN107122450A (en) A kind of network picture public sentiment monitoring method
CN105607757A (en) Input method and device and device used for input
WO2019141159A1 (en) Method and terminal for obtaining multimedia file, storage medium, and electronic device
CN108470062B (en) Communication method and device based on shared video
CN106713973A (en) Program searching method and device
CN108551473B (en) Agricultural product communication method and device based on visual agriculture
CN111522992A (en) Method, device and equipment for putting questions into storage and storage medium
CN108681549A (en) The method and apparatus for obtaining multimedia resource
KR20150097250A (en) Sketch retrieval system using tag information, user equipment, service equipment, service method and computer readable medium having computer program recorded therefor
KR102536057B1 (en) Providing Method of summary information for an image searching and service device thereof
CN114245174B (en) Video preview method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210113

Address after: Room 017, building B, block 1, Gaonong biological park headquarters, 888 Gaoxin Avenue, Donghu New Technology Development Zone, Wuhan City, Hubei Province, 430000

Applicant after: Wuhan Ainong Yunlian Technology Co., Ltd

Address before: 430223 floor 9-1, building 6, No.18 huashiyuan North Road, Donghu New Technology Development Zone, Wuhan City, Hubei Province

Applicant before: WUHAN NANBO NETWORK TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant