CN112261428A - Picture display method and device, electronic equipment and computer readable medium


Info

Publication number
CN112261428A
Authority
CN
China
Prior art keywords
target
target area
display
displaying
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011128150.2A
Other languages
Chinese (zh)
Inventor
李宁 (Li Ning)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202011128150.2A
Publication of CN112261428A
Priority to PCT/CN2021/110769 (WO2022083230A1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides a picture display method and apparatus, an electronic device and a computer-readable medium, relating to the technical field of video processing. The method includes: acquiring a target video and displaying the target video on a display interface; when an enlarged-display trigger operation of a user is received, identifying a target object in the target video and extracting a target area corresponding to the target object in the target video; and creating an enlarged display window and displaying the target area through the enlarged display window. By identifying the target object in the target video upon receiving the user's enlarged-display trigger operation and magnifying the corresponding target area in the enlarged display window, a user who is showing an item in a live broadcast and needs to present its details can have those details magnified without moving the item or the camera lens, which makes the operation convenient and improves the user experience.

Description

Picture display method and device, electronic equipment and computer readable medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a picture display method and apparatus, an electronic device, and a computer-readable medium.
Background
With the development of network technology, online shopping has become part of everyday life, and selling goods through live streaming has become a preferred channel for sellers.
In current live-streaming technology, when a user broadcasts to promote items, the camera lens has to be zoomed out so that the person and the items appear on screen at the same time. As a result, item details may not be shown clearly enough, and when the user wants to show a detailed part of an item, the item has to be brought close to the lens, which is troublesome to operate and can block the user from view, degrading the user experience.
Therefore, in existing live-streaming technology, when a user wants to show the details of an item, the only option is to shorten the distance between the item and the lens, which is troublesome to operate, and the item can block the user, affecting the viewing experience of the audience.
Disclosure of Invention
The present disclosure aims to address at least one of the above technical defects, in particular that in existing live-streaming technology, when a user wants to show the details of an item, the only option is to shorten the distance between the item and the lens, which is troublesome to operate, and the item can block the user, affecting the viewing experience of the audience.
In a first aspect, a method for displaying a picture is provided, which includes:
acquiring a target video, and displaying the target video on a display interface;
when an enlarged-display trigger operation of a user is received, identifying a target object in the target video, and extracting a target area corresponding to the target object in the target video;
and creating an enlarged display window, and displaying the target area through the enlarged display window.
In a second aspect, there is provided a picture display apparatus, comprising:
the target video acquisition module is used for acquiring a target video and displaying the target video on a display interface;
the target area identification module is used for identifying a target object in the target video and extracting a target area corresponding to the target object in the target video when an enlarged-display trigger operation of a user is received;
and the picture display module is used for creating an enlarged display window and displaying the target area through the enlarged display window.
In a third aspect, an electronic device is provided, which includes:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors to perform the picture display method described above.
In a fourth aspect, a computer-readable medium is provided, which stores at least one instruction, at least one program, a code set, or an instruction set, loaded and executed by a processor to implement the picture display method described above.
By identifying the target object in the target video upon receiving the user's enlarged-display trigger operation and magnifying the corresponding target area in the enlarged display window, a user who is showing an item in a live broadcast and needs to present its details can have those details magnified without moving the item or the camera lens, which makes the operation convenient and improves the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings used in the description of the embodiments of the present disclosure will be briefly described below.
Fig. 1 is a schematic flow chart of a picture display method according to an embodiment of the present disclosure;
Fig. 2 is a schematic view of an application scenario of a picture display method according to an embodiment of the present disclosure;
Fig. 3 is a schematic view of a target area provided by an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of determining a location of a target area according to an embodiment of the present disclosure;
Fig. 5 is a schematic flowchart of a method for displaying a target area in an enlarged manner according to an embodiment of the present disclosure;
Fig. 6 is a schematic flowchart of a method for displaying a plurality of target areas in an enlarged manner according to an embodiment of the present disclosure;
Fig. 7 is a schematic diagram illustrating an enlarged display of a plurality of target areas according to an embodiment of the present disclosure;
Fig. 8 is a schematic flowchart of a method for adjusting a target area according to an embodiment of the present disclosure;
Fig. 9 is a schematic structural diagram of a picture display apparatus according to an embodiment of the present disclosure;
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used to distinguish different devices, modules or units, and are not intended to limit those devices, modules or units, nor to limit the order of, or interdependence between, the functions they perform.
It is noted that the modifiers "a", "an" and "the" in this disclosure are illustrative rather than restrictive, and those skilled in the art will understand them as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure provides a picture display method, an apparatus, an electronic device and a computer-readable medium, which are intended to solve the above technical problems in the prior art.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems in specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
The embodiment of the present disclosure provides a picture display method. As shown in fig. 1, the method includes:
Step S101, acquiring a target video, and displaying the target video on a display interface;
Step S102, when an enlarged-display trigger operation of a user is received, identifying a target object in the target video, and extracting a target area corresponding to the target object in the target video;
Step S103, creating an enlarged display window, and displaying the target area through the enlarged display window.
The picture display method provided by the embodiment of the present disclosure is applied to a user terminal equipped with a video capture device. As an optional application scenario, a user can live-stream through the user terminal and show items during the broadcast; when the details of an item need to be shown, the picture display method provided by the present disclosure allows the user to magnify and display the item's details without moving the item or the lens, which is convenient to operate.
In the embodiment of the present disclosure, for convenience of description, a specific embodiment is taken as an example. When a user live-streams through a user terminal, the live subject is captured by the video capture device to form a target video, and the target video is displayed on the display interface of the user terminal. The live subject may include items, people, and the like; for example, when a broadcaster shows an item to the viewers of the live stream, the live subject may be the broadcaster and the item the broadcaster wants to show. While the target video is being displayed, when an enlarged-display trigger operation of the user is received, a target object in the target video is identified and a target area corresponding to the target object is extracted from the target video. The enlarged-display trigger operation may be a preset voice operation or action operation by which the magnification of an item can be started, and the target object refers to a pre-specified object appearing in the target video, such as the user's finger, a specific item, or a certain part of an item, for example one end of a rod-shaped object. The target area may be the area near the target object or the area the target object points to: if the target object is the user's finger, then when the finger appears in the target video, the area near the finger is identified as the target area; if the target object is the target end of a rod-shaped object, then when that object appears in the target video, the area near its target end is identified as the target area. After the target area in the target video is determined, an enlarged display window is created in the display interface, and the target area is magnified and displayed through the enlarged display window.
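Purely as an illustration of the flow described above (this sketch is not part of the application; the detector, the 100-pixel half-width of the first preset range, the 3x magnification factor and the window names are all assumed placeholders), the per-frame logic might look roughly like this in Python with OpenCV:

```python
import cv2

HALF_RANGE = 100  # half-width of the "first preset range", example value only

def detect_target_object(frame):
    """Placeholder recognizer: return the (x, y) pixel position of the
    target object (e.g. a fingertip or the end of a pointer stick),
    or None when no target object is present in the frame."""
    return None

def show_frame(frame, zoom_enabled):
    """Display one frame of the target video; if the enlarged display has
    been triggered, also extract and magnify the target area."""
    cv2.imshow("display_interface", frame)
    if not zoom_enabled:
        return
    pos = detect_target_object(frame)
    if pos is None:
        return
    h, w = frame.shape[:2]
    x, y = pos
    # target area = region within the first preset range around the object
    x0, x1 = max(0, x - HALF_RANGE), min(w, x + HALF_RANGE)
    y0, y1 = max(0, y - HALF_RANGE), min(h, y + HALF_RANGE)
    target_area = frame[y0:y1, x0:x1]
    # enlarged display window showing the magnified target area
    zoomed = cv2.resize(target_area, None, fx=3, fy=3,
                        interpolation=cv2.INTER_CUBIC)
    cv2.imshow("enlarged_display_window", zoomed)
```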
As a specific embodiment of the present disclosure, as shown in fig. 2, when a broadcaster live-streams to sell goods, the live picture is captured through a mobile phone, where the live picture 201 includes the broadcaster 202 and an item 203 the broadcaster is promoting. When the broadcaster needs to introduce a detailed part of the item 203, the magnification can be started through an enlarged-display trigger operation. When the broadcaster's enlarged-display trigger operation is received (for example, the broadcaster says "enlarged display"), the target object in the live picture, such as a small stick 204 held by the broadcaster, is identified, the area 205 near the end of the small stick 204 is determined as the target area, an enlarged display window 206 is created, and the target area 205 is displayed through the enlarged display window 206.
By identifying the target object in the target video upon receiving the user's enlarged-display trigger operation and magnifying the corresponding target area in the enlarged display window, a user who is showing an item in a live broadcast and needs to present its details can have those details magnified without moving the item or the camera lens, which makes the operation convenient and improves the user experience.
The present disclosure provides one possible implementation in which the enlarged-display trigger operation includes a first preconfigured voice command and/or a first preconfigured action.
In the embodiment of the present disclosure, the enlarged-display trigger operation is a preconfigured operation and may be triggered by voice, by an action, or by a combination of the two. For example, a preconfigured voice command may be set; when the user is detected speaking it, the enlarged-display trigger operation is considered triggered. If the preconfigured voice command is "enlarged display", the trigger operation can be started when the user says "enlarged display" while live-streaming. Optionally, the trigger may be a preconfigured action that starts the enlarged display when detected; if the preconfigured action is the user's hand opening from a fist, the trigger operation can be started when the user performs that action while live-streaming. Optionally, the preconfigured action may also be the user pressing a certain key; for example, the user terminal may provide an enlarged-display key, and pressing it during the live stream triggers the enlarged display.
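A loose sketch of how such a preconfigured trigger could be checked is shown below; the trigger phrase, gesture label and key are placeholder examples, and the speech and gesture recognizers that produce `transcript` and `gesture` are assumed to exist elsewhere:

```python
TRIGGER_PHRASE = "enlarged display"   # first preconfigured voice command, example only
TRIGGER_GESTURE = "fist_to_open"      # first preconfigured action, example only
TRIGGER_KEY = ord("z")                # example key bound to the enlarged display

def should_trigger_zoom(transcript=None, gesture=None, key=None):
    """Return True when any preconfigured enlarged-display trigger fires.

    transcript: latest speech-recognition result (string), if any
    gesture:    latest recognized gesture label (string), if any
    key:        latest key code pressed on the terminal, if any
    """
    if transcript and TRIGGER_PHRASE in transcript.lower():
        return True
    if gesture == TRIGGER_GESTURE:
        return True
    if key == TRIGGER_KEY:
        return True
    return False
```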
According to the embodiment of the present disclosure, by setting a preconfigured trigger operation, the user can start the enlarged display through that operation, which is convenient to operate.
The embodiment of the present disclosure provides a possible implementation manner, in which extracting the target area pointed to by the target object in the target video includes:
and determining a region in a first preset range corresponding to the target object as a target region.
In the embodiment of the present disclosure, the target area refers to an area that needs to be displayed in an enlarged manner, the determination of the area depends on the position of the target object in the target video, and an area within a first preset range of the target object in the target video is determined as the target area.
In the embodiment of the present disclosure, for convenience of description, as shown in fig. 3, taking the foregoing specific embodiment as an example: if the target object is the end 301 of a small stick held by the broadcaster, then when a detail of an item needs to be magnified, the broadcaster points the end of the stick at the area to be magnified. When the user terminal identifies the end of the stick, the area 302 within a first preset range of the end of the stick is taken as the target area. Optionally, the first preset range may be a circular or rectangular range centered on the end of the stick. After the target area is obtained, it is magnified and displayed through the enlarged display window.
According to the embodiment of the present disclosure, by determining the area near the target object as the target area, the area the user wants magnified can be identified accurately, ensuring the accuracy of the enlarged display.
The embodiment of the present disclosure provides a possible implementation manner, in which determining, as a target area, an area within a first preset range corresponding to a target object includes:
the method comprises the steps of obtaining the position of a target object in a target video, and determining a region in a first preset range with the position as the center as a target region.
In the embodiment of the present disclosure, before the target area is determined, the position of the target object in the target video needs to be determined. Optionally, the coordinate information of the target object in the target video may be obtained, and in a coordinate system, the area within a first preset range around those coordinates is used as the target area. Specifically, as shown in fig. 4, the target video 401 is placed in a rectangular coordinate system 402; the target object is the end of a small stick held by the broadcaster, its coordinates in the rectangular coordinate system are obtained, and the area within the first preset range around those coordinates is used as the target area.
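In code, this step reduces to clamping a rectangle centered on the target object's coordinates to the frame boundaries; the following minimal sketch (the half-width is an assumed example value) shows that arithmetic:

```python
def target_area_rect(x, y, frame_w, frame_h, half_range=100):
    """Rectangle of the first preset range centered at (x, y), clamped so
    it stays inside a frame of size frame_w x frame_h.
    Returns (x0, y0, x1, y1)."""
    x0 = max(0, x - half_range)
    y0 = max(0, y - half_range)
    x1 = min(frame_w, x + half_range)
    y1 = min(frame_h, y + half_range)
    return x0, y0, x1, y1
```

For example, with a 1280x720 frame and the target object detected at (640, 360), target_area_rect(640, 360, 1280, 720) yields the region (540, 260, 740, 460).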
According to the method and the device, the position of the target object in the target video is obtained, and the area in the first preset range of the position is determined as the target area, so that the accuracy of determining the target area is guaranteed.
The embodiment of the present disclosure provides a possible implementation manner in which, as shown in fig. 5, displaying the target area in an enlarged manner through the enlarged display window includes:
Step S501, cropping an image of the target area from a target frame image;
Step S502, magnifying and displaying the image of the target area in the enlarged display window.
In the embodiment of the present disclosure, displaying the target area in an enlarged manner may further include cropping an image of the target area and magnifying and displaying that image through the enlarged display window. For convenience of description, taking the foregoing specific embodiment as an example, when the broadcaster needs to show the details of an item, the enlarged display is started through the enlarged-display trigger operation; after the target area is identified, a target frame of the target video can be captured, the image of the target area is cropped from the captured frame, and that image is magnified and displayed through the enlarged display window. In the target frame image, at least the part of the item to be magnified is not blocked by other content.
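A minimal sketch of this static-crop behaviour under the same assumptions as before (OpenCV images, placeholder window name): the crop is taken once from the captured target frame and then simply redisplayed, so it no longer follows the live video:

```python
import cv2

frozen_crop = None  # static image of the target area, captured once

def freeze_target_area(target_frame, rect):
    """Capture a target frame and crop the target area out of it; a copy
    is kept so the enlarged display no longer follows the live video."""
    global frozen_crop
    x0, y0, x1, y1 = rect
    frozen_crop = target_frame[y0:y1, x0:x1].copy()

def redraw_enlarged_window(scale=3):
    """Called every frame: redisplay the frozen crop, magnified."""
    if frozen_crop is not None:
        zoomed = cv2.resize(frozen_crop, None, fx=scale, fy=scale,
                            interpolation=cv2.INTER_CUBIC)
        cv2.imshow("enlarged_display_window", zoomed)
```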
According to the embodiment of the present disclosure, the image of the target area is magnified and displayed as a static image, so the content of the enlarged display window does not change with the broadcaster's movements, the content is shown more clearly, and the viewing experience of the audience is better.
The embodiment of the present disclosure provides a possible implementation manner in which, as shown in fig. 6, displaying the target area in an enlarged manner through the enlarged display window includes:
Step S601, cropping target area images from a plurality of target frame images to obtain a plurality of target area images;
Step S602, displaying the plurality of target area images in the enlarged display window.
In the embodiment of the present disclosure, when target area images are displayed through the enlarged display window, a plurality of target area images can be displayed at the same time, for example to show the details of an item from different viewing angles.
In the embodiment of the present disclosure, cropping target area images from a plurality of target frame images may be performed in response to a plurality of cropping operations by the user, resulting in a plurality of target area images. For convenience of description, taking the foregoing specific embodiment as an example, when the broadcaster wants to show the details of an item from multiple angles, the target area images of those details are cropped one by one as each angle is shown, and the multiple target area images are displayed simultaneously through the enlarged display window. As shown in fig. 7, when the front of the item is shown and its front details need to be magnified, the enlarged display is started through the enlarged-display trigger operation; after the target area is identified, a target frame of the target video can be captured, the image 701 of the target area is cropped from the captured frame, and the image 701 is magnified and displayed through an enlarged display window in the display interface 704. Optionally, if multiple target area images need to be shown simultaneously, the image 701 can be retained; when the back of the item is shown, the enlarged display is triggered again through the enlarged-display trigger operation, a target frame of the target video is captured, the target area image 702 is cropped from the captured frame, and the target area image 702 is displayed through the enlarged display window 703, so that multiple target area images are displayed simultaneously. According to the embodiment of the present disclosure, magnifying and displaying multiple target area images at the same time makes it easy for viewers to see the details of the item from multiple angles, so the item is shown more comprehensively.
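One way to sketch this multi-view behaviour (the side-by-side layout, the common display height and the window name are assumptions, not the claimed layout) is to keep a list of the crops captured so far and tile them into the enlarged display window:

```python
import cv2

captured_views = []  # target-area images captured from different angles

def add_view(target_frame, rect):
    """Crop another target-area image (e.g. first the front, then the back
    of the item) and keep it alongside the earlier ones."""
    x0, y0, x1, y1 = rect
    captured_views.append(target_frame[y0:y1, x0:x1].copy())

def show_all_views(height=300):
    """Show every captured target-area image side by side in the
    enlarged display window."""
    if not captured_views:
        return
    # scale each crop to a common height so they can be concatenated
    resized = [cv2.resize(img, (int(img.shape[1] * height / img.shape[0]), height))
               for img in captured_views]
    cv2.imshow("enlarged_display_window", cv2.hconcat(resized))
```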
The embodiment of the present disclosure provides a possible implementation manner in which, as shown in fig. 8, displaying the target area in an enlarged manner through the enlarged display window includes:
Step S801, adjusting the size of the display area of the target area to a target size, and adjusting the resolution of the target area to a target resolution;
Step S802, displaying the adjusted target area in the enlarged display window.
In the embodiment of the present disclosure, when the target area is displayed, its size and resolution can be adjusted to meet the viewing needs of the audience.
In the embodiment of the present disclosure, after the target area is determined, its display size is adjusted according to the size of the enlarged display window so that the target area matches the window, and its resolution can also be adjusted to the target resolution so that the details of the item in the target area are shown more clearly. After the resolution and size of the target area are adjusted to the target values, the target area is displayed through the enlarged display window.
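A minimal sketch of this adjustment step; in practice the target size would come from the enlarged display window's actual dimensions, and cubic interpolation is just one reasonable choice for upscaling (it cannot add detail beyond what the source frame contains, only avoid blockiness):

```python
import cv2

def fit_to_window(target_area, window_w, window_h):
    """Resize the target-area image to the enlarged display window's size.
    Upscaling to the window's resolution uses cubic interpolation so the
    magnified details stay as sharp as the source frame allows."""
    return cv2.resize(target_area, (window_w, window_h),
                      interpolation=cv2.INTER_CUBIC)
```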
According to the embodiment of the present disclosure, the display size and resolution of the target area are adjusted to fit the enlarged display window, so the viewing experience of the audience is better.
The embodiment of the present disclosure provides a possible implementation manner, in which the picture display method further includes:
receiving a closing operation of a user, and closing the enlarged display of the target video in the picture display interface based on the closing operation.
In the embodiment of the present disclosure, the user may close the enlarged display of the item details through a closing operation. The closing operation may be a voice operation or an action operation: for example, the user may close the enlarged display window by saying "close the enlarged display", or by triggering a preset action. The preset action may be clicking a close button, which may be a button on the user terminal, in the on-screen display interface, or on a remote control device; the preset action may also be a preset gesture, such as waving a hand in a specific way. When the user wants to close the enlarged display of the item details, any of the above operations can be used.
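The closing path can be sketched symmetrically to the trigger path (the phrase, gesture label, key and window name are again placeholder assumptions):

```python
import cv2

CLOSE_PHRASE = "close the enlarged display"  # preconfigured voice, example only
CLOSE_GESTURE = "wave"                        # preconfigured gesture, example only
CLOSE_KEY = ord("q")                          # example close button / key

def maybe_close_zoom(transcript=None, gesture=None, key=None):
    """Close the enlarged display window when any closing operation is seen.
    Returns True if the window was closed so the caller can clear its state."""
    if ((transcript and CLOSE_PHRASE in transcript.lower())
            or gesture == CLOSE_GESTURE
            or key == CLOSE_KEY):
        cv2.destroyWindow("enlarged_display_window")
        return True
    return False
```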
According to the embodiment of the present disclosure, the enlarged display window is closed based on a closing operation received from the user. The user can switch the enlarged display window on or off at any time, choosing to magnify the item details when needed and closing the window when not, which is convenient for the audience.
By identifying the target object in the target video upon receiving the user's enlarged-display trigger operation and magnifying the corresponding target area in the enlarged display window, a user who is showing an item in a live broadcast and needs to present its details can have those details magnified without moving the item or the camera lens, which makes the operation convenient and improves the user experience.
An embodiment of the present disclosure provides a picture display apparatus. As shown in fig. 9, the picture display apparatus 90 may include: a target video acquisition module 901, a target area identification module 902, and a picture display module 903, wherein:
the target video acquisition module 901 is configured to acquire a target video and display the target video on a display interface;
the target area identification module 902 is configured to identify a target object in the target video and extract a target area corresponding to the target object in the target video when an enlarged-display trigger operation of a user is received;
and the picture display module 903 is configured to create an enlarged display window and display the target area through the enlarged display window.
Optionally, the enlarged-display trigger operation includes a first preconfigured voice command and/or a first preconfigured action.
Optionally, when the target area identification module 902 extracts the target area pointed to by the target object in the target video, it may be configured to:
and determining a region in a first preset range corresponding to the target object as a target region.
Optionally, when determining the region in the first preset range corresponding to the target object as the target region, the target area identification module 902 may be configured to:
the method comprises the steps of obtaining the position of a target object in a target video, and determining a region in a first preset range corresponding to the position as a target region.
Optionally, when the picture display module 903 displays the target area in an enlarged manner through the enlarged display window, it may be configured to:
cropping an image of the target area from a target frame image;
and magnifying and displaying the image of the target area in the enlarged display window.
Optionally, when the picture display module 903 displays the target area in an enlarged manner through the enlarged display window, it may be configured to:
respectively adjusting the size of a display area of a target area to a target size, and adjusting the resolution of the target area to a target resolution;
and displaying the adjusted target area on the enlarged display window.
Optionally, the picture display apparatus provided in the embodiment of the present disclosure further includes a closing module, where the closing module is configured to:
receive a closing operation of a user, and close the enlarged display of the target video in the picture display interface based on the closing operation.
The picture display apparatus of this embodiment can perform the picture display method shown in the foregoing embodiments of the present disclosure; the implementation principles are similar and are not repeated here.
By identifying the target object in the target video upon receiving the user's enlarged-display trigger operation and magnifying the corresponding target area in the enlarged display window, a user who is showing an item in a live broadcast and needs to present its details can have those details magnified without moving the item or the camera lens, which makes the operation convenient and improves the user experience.
Referring now to FIG. 10, a block diagram of an electronic device 1000 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The electronic device includes: a memory and a processor, where the processor may be referred to as the processing device 1001 hereinafter, and the memory may include at least one of a Read Only Memory (ROM) 1002, a Random Access Memory (RAM) 1003 and a storage device 1008 described hereinafter, specifically as follows:
As shown in fig. 10, the electronic device 1000 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 1001 that may perform various appropriate actions and processes according to a program stored in the Read Only Memory (ROM) 1002 or a program loaded from the storage device 1008 into the Random Access Memory (RAM) 1003. The RAM 1003 also stores various programs and data necessary for the operation of the electronic device 1000. The processing device 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
Generally, the following devices may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 1007 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 1008 including, for example, magnetic tape, hard disk, and the like; and a communication device 1009. The communication device 1009 may allow the electronic device 1000 to communicate with other devices wirelessly or by wire to exchange data. While fig. 10 illustrates an electronic device 1000 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 1009, or installed from the storage means 1008, or installed from the ROM 1002. The computer program, when executed by the processing device 1001, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a target video, and display the target video on a display interface; when an enlarged-display trigger operation of a user is received, identify a target object in the target video, and extract a target area corresponding to the target object in the target video; and create an enlarged display window, and display the target area through the enlarged display window.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented by software or hardware.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments provided by the present disclosure, there is provided a picture display method, including:
acquiring a target video, and displaying the target video on a display interface;
when an enlarged-display trigger operation of a user is received, identifying a target object in the target video, and extracting a target area corresponding to the target object in the target video;
and creating an enlarged display window, and displaying the target area through the enlarged display window.
Further, the enlarged-display trigger operation includes a first preconfigured voice command and/or a first preconfigured action.
Further, extracting the target area pointed to by the target object in the target video includes:
and determining a region in a first preset range corresponding to the target object as a target region.
Further, determining a region in a first preset range corresponding to the target object as a target region, including:
the method comprises the steps of obtaining the position of a target object in a target video, and determining a region in a first preset range corresponding to the position as a target region.
Further, displaying the target area in an enlarged manner through the enlarged display window includes:
cropping an image of the target area from a target frame image;
and magnifying and displaying the image of the target area in the enlarged display window.
Further, displaying the target area in an enlarged manner through the enlarged display window includes:
respectively adjusting the size of a display area of a target area to a target size, and adjusting the resolution of the target area to a target resolution;
and displaying the adjusted target area on the enlarged display window.
Further, the method further includes:
receiving a closing operation of a user, and closing the enlarged display of the target video in the picture display interface based on the closing operation.
According to one or more embodiments provided by the present disclosure, there is provided a picture presentation apparatus including:
the target video acquisition module is used for acquiring a target video and displaying the target video on a display interface;
the target area identification module is used for identifying a target object in the target video and extracting a target area corresponding to the target object in the target video when an enlarged-display trigger operation of a user is received;
and the picture display module is used for creating an enlarged display window and displaying the target area through the enlarged display window.
Optionally, the enlarged-display trigger operation includes a first preconfigured voice command and/or a first preconfigured action.
Optionally, when the target area identification module extracts the target area pointed to by the target object in the target video, it may be configured to:
and determining a region in a first preset range corresponding to the target object as a target region.
Optionally, when determining the region in the first preset range corresponding to the target object as the target region, the target area identification module may be configured to:
the method comprises the steps of obtaining the position of a target object in a target video, and determining a region in a first preset range corresponding to the position as a target region.
Optionally, when the picture display module displays the target area in an enlarged manner through the enlarged display window, it may be configured to:
cropping an image of the target area from a target frame image;
and magnifying and displaying the image of the target area in the enlarged display window.
Optionally, when the picture display module displays the target area in an enlarged manner through the enlarged display window, it may be configured to:
respectively adjusting the size of a display area of a target area to a target size, and adjusting the resolution of the target area to a target resolution;
and displaying the adjusted target area on the enlarged display window.
Optionally, the picture display apparatus provided in the embodiment of the present disclosure further includes a closing module, where the closing module is configured to:
receive a closing operation of a user, and close the enlarged display of the target video in the picture display interface based on the closing operation.
According to one or more embodiments provided by the embodiments of the present disclosure, there is provided an electronic device including:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors to perform the picture display method described above.
According to one or more embodiments provided by the present disclosure, a computer-readable medium is provided, which stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the picture display method described above.
The foregoing description is only illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by interchanging the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the figures may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
The foregoing is only a part of the embodiments of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements shall also fall within the protection scope of the present invention.

Claims (11)

1. A method for displaying a picture, comprising:
acquiring a target video, and displaying the target video on a display interface;
when receiving an enlarged display triggering operation from a user, identifying a target object in the target video, and extracting a target area corresponding to the target object in the target video;
and creating an enlarged display window, and displaying the target area through the enlarged display window.
2. The method of claim 1, wherein the enlarged display triggering operation comprises a first preconfigured voice and/or a first preconfigured action.
3. The method according to claim 1, wherein the extracting a target area corresponding to the target object in the target video comprises:
determining a region within a first preset range corresponding to the target object as the target area.
4. The method according to claim 3, wherein the determining a region within a first preset range corresponding to the target object as the target area comprises:
acquiring a position of the target object in the target video, and determining a region within a first preset range centered on the position as the target area.
5. The method of claim 1, wherein the displaying the target area through the enlarged display window comprises:
cropping an image of the target area from a target frame image;
and displaying the image of the target area in an enlarged manner in the enlarged display window.
6. The method of claim 1, wherein the displaying the target area through the enlarged display window comprises:
cropping target area images from target frame images to obtain a plurality of target area images;
and displaying the plurality of target area images in the enlarged display window.
7. The method of claim 1, wherein the displaying the target area through the enlarged display window comprises:
adjusting a size of a display area of the target area to a target size, and adjusting a resolution of the target area to a target resolution;
and displaying the adjusted target area in the enlarged display window.
8. The method of claim 1, further comprising:
receiving a closing operation from a user, and closing the enlarged display of the target video in the picture display interface based on the closing operation.
9. A picture display apparatus, comprising:
the target video acquisition module is used for acquiring a target video and displaying the target video on a display interface;
the target area identification module is used for identifying a target object in the target video when receiving an enlarged display triggering operation from a user, and extracting a target area corresponding to the target object in the target video;
and the picture display module is used for creating an enlarged display window and displaying the target area through the enlarged display window.
10. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the picture display method according to any one of claims 1 to 8.
11. A computer-readable medium having stored thereon at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the picture display method according to any one of claims 1 to 8.
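By way of illustration only, and not as part of the claims, the region determination of claims 3 and 4 and the size and resolution adjustment of claim 7 could be sketched as follows. This is a minimal sketch assuming OpenCV and NumPy; the preset range, target size, and helper names are hypothetical, and the resolution adjustment is realized here simply by resampling the cropped area.

# Non-limiting sketch of claims 3-4 and claim 7, assuming OpenCV/NumPy.
# FIRST_PRESET_RANGE and TARGET_SIZE are placeholder values; the "target
# resolution" is realized here by resampling the cropped area to TARGET_SIZE.
import cv2
import numpy as np

FIRST_PRESET_RANGE = (320, 240)   # (width, height) of the region around the object
TARGET_SIZE = (960, 720)          # display-area size used by the enlarged display window

def determine_target_area(frame, position):
    """Claims 3-4: region within the first preset range centered on the object's position."""
    h, w = frame.shape[:2]
    rw, rh = FIRST_PRESET_RANGE
    cx, cy = position
    x0 = int(max(0, min(w - rw, cx - rw / 2)))
    y0 = int(max(0, min(h - rh, cy - rh / 2)))
    return frame[y0:y0 + rh, x0:x0 + rw]

def adjust_target_area(target_area):
    """Claim 7: adjust the target area's display size and resolution before showing it."""
    return cv2.resize(target_area, TARGET_SIZE, interpolation=cv2.INTER_LANCZOS4)

if __name__ == "__main__":
    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)    # stand-in target frame image
    enlarged = adjust_target_area(determine_target_area(frame, (960, 540)))
    print(enlarged.shape)                                 # (720, 960, 3)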
CN202011128150.2A 2020-10-20 2020-10-20 Picture display method and device, electronic equipment and computer readable medium Pending CN112261428A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011128150.2A CN112261428A (en) 2020-10-20 2020-10-20 Picture display method and device, electronic equipment and computer readable medium
PCT/CN2021/110769 WO2022083230A1 (en) 2020-10-20 2021-08-05 Screen display method, apparatus, electronic device, and computer-readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011128150.2A CN112261428A (en) 2020-10-20 2020-10-20 Picture display method and device, electronic equipment and computer readable medium

Publications (1)

Publication Number Publication Date
CN112261428A (en) 2021-01-22

Family

ID=74245289

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011128150.2A Pending CN112261428A (en) 2020-10-20 2020-10-20 Picture display method and device, electronic equipment and computer readable medium

Country Status (2)

Country Link
CN (1) CN112261428A (en)
WO (1) WO2022083230A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115278275B (en) * 2022-06-21 2024-05-07 北京字跳网络技术有限公司 Information display method, apparatus, device, storage medium, and program product


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8406472B2 (en) * 2010-03-16 2013-03-26 Sony Corporation Method and system for processing image data
WO2017075614A1 (en) * 2015-10-29 2017-05-04 Oy Vulcan Vision Corporation Video imaging an area of interest using networked cameras
CN111541907B (en) * 2020-04-23 2023-09-22 腾讯科技(深圳)有限公司 Article display method, apparatus, device and storage medium
CN112261428A (en) * 2020-10-20 2021-01-22 北京字节跳动网络技术有限公司 Picture display method and device, electronic equipment and computer readable medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2690541A1 (en) * 2012-07-24 2014-01-29 Humax Co., Ltd. Method of displaying status bar
CN106406651A (en) * 2015-08-03 2017-02-15 北京鸿合智能系统股份有限公司 A method and a device for dynamic enlarging display of video
CN106792092A (en) * 2016-12-19 2017-05-31 广州虎牙信息科技有限公司 Live video stream split-mirror display control method and corresponding device
CN107908349A (en) * 2017-12-26 2018-04-13 深圳市金立通信设备有限公司 Display interface amplification method, terminal and computer-readable recording medium
CN110166848A (en) * 2018-05-11 2019-08-23 腾讯科技(深圳)有限公司 Live broadcast interaction method, related apparatus and system
CN110620955A (en) * 2018-06-19 2019-12-27 宏正自动科技股份有限公司 Live broadcasting system and live broadcasting method thereof
CN109121000A (en) * 2018-08-27 2019-01-01 北京优酷科技有限公司 Video processing method and client
CN111353839A (en) * 2018-12-21 2020-06-30 阿里巴巴集团控股有限公司 Commodity information processing method, method and device for live broadcasting of commodities and electronic equipment
CN110085068A (en) * 2019-04-22 2019-08-02 广东小天才科技有限公司 Learning coaching method and device based on image recognition
CN110557649A (en) * 2019-09-12 2019-12-10 广州华多网络科技有限公司 Live broadcast interaction method, live broadcast system, electronic equipment and storage medium
CN111601145A (en) * 2020-05-20 2020-08-28 腾讯科技(深圳)有限公司 Content display method, device and equipment based on live broadcast and storage medium
CN111766947A (en) * 2020-06-30 2020-10-13 歌尔科技有限公司 Display method, display device, wearable device and medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022083230A1 (en) * 2020-10-20 2022-04-28 北京字节跳动网络技术有限公司 Screen display method, apparatus, electronic device, and computer-readable medium
CN113709545A (en) * 2021-04-13 2021-11-26 腾讯科技(深圳)有限公司 Video processing method and device, computer equipment and storage medium
CN113364985A (en) * 2021-06-11 2021-09-07 广州逅艺文化科技有限公司 Live broadcast lens tracking method, device and medium
CN113364985B (en) * 2021-06-11 2022-07-29 广州逅艺文化科技有限公司 Live broadcast lens tracking method, device and medium
CN113810599A (en) * 2021-08-12 2021-12-17 惠州Tcl云创科技有限公司 Method for focusing on designated area by AI action recognition, mobile terminal and storage medium
WO2023016207A1 (en) * 2021-08-12 2023-02-16 惠州Tcl云创科技有限公司 Method for focusing on specified area by means of ai action recognition, and terminal and storage medium
CN114706646A (en) * 2022-04-11 2022-07-05 北京字跳网络技术有限公司 View display method, device, equipment and storage medium

Also Published As

Publication number Publication date
WO2022083230A1 (en) 2022-04-28

Similar Documents

Publication Publication Date Title
CN112261428A (en) Picture display method and device, electronic equipment and computer readable medium
CN111510645B (en) Video processing method and device, computer readable medium and electronic equipment
CN107729522B (en) Multimedia resource fragment intercepting method and device
CN112004032B (en) Video processing method, terminal device and storage medium
CN111277893B (en) Video processing method and device, readable medium and electronic equipment
CN111935544B (en) Interaction method and device and electronic equipment
US20240121349A1 (en) Video shooting method and apparatus, electronic device and storage medium
US20230421857A1 (en) Video-based information displaying method and apparatus, device and medium
US20240119082A1 (en) Method, apparatus, device, readable storage medium and product for media content processing
CN114064593B (en) Document sharing method, device, equipment and medium
CN114860139A (en) Video playing method, video playing device, electronic equipment, storage medium and program product
US20220327580A1 (en) Method and apparatus for interacting with image, and medium and electronic device
CN113986003A (en) Multimedia information playing method and device, electronic equipment and computer storage medium
CN111354444A (en) Pathological section image display method and device, electronic equipment and storage medium
CN110769129B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111221455B (en) Material display method and device, terminal and storage medium
CN110489040B (en) Method and device for displaying feature model, terminal and storage medium
CN113837918A (en) Method and device for realizing rendering isolation by multiple processes
CN112395530A (en) Data content output method and device, electronic equipment and computer readable medium
CN111526408A (en) Information content generating and displaying method and device and computer readable storage medium
EP4328726A1 (en) Video generation method and apparatus, and electronic device and storage medium
CN113068069B (en) Image processing method, system, device, electronic equipment and storage medium
US20240163548A1 (en) Method for displaying capturing interface, electronic device, and non-transitory computer-readable storage medium
WO2023030079A1 (en) Article display method and apparatus, and electronic device and storage medium
CN117234324A (en) Image acquisition method, device, equipment and medium of information input page

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: Tiktok vision (Beijing) Co.,Ltd.

CB02 Change of applicant information