WO2020063095A1 - A screenshot display method and device - Google Patents

A screenshot display method and device

Info

Publication number
WO2020063095A1
WO2020063095A1 · PCT/CN2019/098446 · CN2019098446W
Authority
WO
WIPO (PCT)
Prior art keywords
screenshot
display
identifiable
layer
input command
Prior art date
Application number
PCT/CN2019/098446
Other languages
English (en)
French (fr)
Inventor
付延松
宋虎
鲍姗娟
付友苹
李玉倩
Original Assignee
青岛海信电器股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201811132364.XA (CN109271983B)
Priority claimed from CN201811133159.5A (CN109388461A)
Priority claimed from CN201910199952.3A (CN109922363A)
Application filed by 青岛海信电器股份有限公司
Priority to US16/530,233 (US11039196B2)
Publication of WO2020063095A1
Priority to US17/322,572 (US11812188B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]

Definitions

  • the present application relates to the field of smart televisions, and in particular, to a method and device for displaying screenshots.
  • the smart TV controlled by the remote control automatically recognizes objects in the display interface.
  • the user can select specific identifiable objects.
  • the user's selection of recognizable objects and feedback information about the selected recognizable objects can be presented visually to the user in an interactive manner.
  • the embodiments of the present application provide a screenshot display method and device, which are used to provide a user with a visual focus frame for identifying an object in the screenshot and real-time operation interaction feedback.
  • in response to the input command, obtaining a screenshot of a display screen of the display device; displaying the display screen in a first area of a display interface of the display device, and displaying the screenshot in a second area of the display interface; determining the identifiable objects in the screenshot, and arranging the identifiable objects in order in a third area of the display interface according to the position information of the identifiable objects in the screenshot, with an object recognition frame displayed around each identifiable object;
  • the focus frame is displayed overlaid on an object recognition frame corresponding to the selected identifiable object.
  • the display interface includes multiple layers: a screenshot of the display screen is displayed on a first layer, and an object recognition frame corresponding to the identifiable object is displayed on a second layer; wherein the second layer is located above the first layer.
  • the first object recognition frame is displayed overlaid around the first recognizable object on the display screen, and a magnified image of the first recognizable object is displayed within a predetermined range of the first object recognition frame.
  • the second and third regions are adjacent.
  • the two-dimensional code information associated with the screenshot is presented on the display interface, so that the user can obtain the screenshot by scanning the two-dimensional code information.
  • the two-dimensional code information is drawn on a third layer, and the third layer is located on the second layer.
  • a key event sent by the remote controller is received, and the key event is dispatched to one of the first layer, the second layer, and the third layer to respond.
  • the third layer is gradually removed from the display interface in a gradual manner.
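The layered rendering and key-event dispatch described in the bullets above can be sketched as follows. This is a minimal illustrative model, not the patent's implementation: the layer names, z-order values, and the per-layer key-handling rules are all assumptions made for the example.

```python
# Illustrative three-layer display model: layer 1 holds the screenshot,
# layer 2 the object recognition frames, layer 3 the QR code. A key event
# is offered to layers from top to bottom; removing the top layer
# re-exposes the recognition-frame layer underneath.

class Layer:
    def __init__(self, name, z_order, visible=True):
        self.name = name
        self.z_order = z_order      # higher value is drawn on top
        self.visible = visible

    def handles(self, key_event):
        # Assumed rules: the QR layer consumes "BACK", the frame layer
        # consumes the arrow keys; a real system would inspect each event.
        if self.name == "qr" and key_event == "BACK":
            return True
        if self.name == "frames" and key_event in ("LEFT", "RIGHT", "UP", "DOWN"):
            return True
        return False

def dispatch(layers, key_event):
    """Send the key event to the topmost visible layer that handles it."""
    for layer in sorted(layers, key=lambda l: l.z_order, reverse=True):
        if layer.visible and layer.handles(key_event):
            return layer.name
    return None

layers = [Layer("screenshot", 1), Layer("frames", 2), Layer("qr", 3)]
assert dispatch(layers, "BACK") == "qr"       # QR layer is on top
layers[2].visible = False                     # fade-out / removal of layer 3
assert dispatch(layers, "LEFT") == "frames"   # frames layer now responds
```

Hiding or removing the third layer, as in the bullets above, leaves the first and second layers intact, so the screenshot and recognition frames reappear without redrawing them.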
  • when identifiable objects have the same abscissa or ordinate difference value, the distance between the center coordinates of each such identifiable object and the center coordinates of the selected identifiable object is calculated according to the Pythagorean theorem, and the identifiable object with the smallest distance is taken as the next focus frame drawing position.
  • a memory configured to store computer instructions and image data associated with the display screen
  • a processor in communication with the display screen and memory and configured to execute computer instructions to cause the display device to:
  • in response to the input command, obtaining a screenshot of a display screen of the display device; displaying the display screen in a first area of a display interface of the display device, and displaying the screenshot in a second area of the display interface; determining the identifiable objects in the screenshot, and arranging the identifiable objects in order in a third area of the display interface according to the position information of the identifiable objects in the screenshot;
  • the focus frame is displayed overlaid on an object recognition frame corresponding to the selected identifiable object.
  • Another embodiment of the present application provides a computer-readable non-volatile storage medium, where the computer storage medium stores computer-executable instructions, and the computer-executable instructions implement the above-mentioned method when executed by a processor.
  • determining the currently selected object in the screenshot, overlaying the focus frame of the currently selected object on its recognition frame, and drawing it on the second layer.
  • the object identification information includes position information of the object
  • Drawing the object recognition frame on the second layer according to the object identification information specifically includes:
  • covering the focus frame of the currently selected object over the recognition frame of the currently selected object and drawing it on the second layer specifically includes:
  • the focus frame of the currently selected object is drawn on the second layer
  • the focus frame previously drawn on the second layer is deleted.
  • the method further includes:
  • the screenshot sharing QR code is drawn on a third layer, where the third layer is located above the second layer.
  • the method further includes:
  • the method further includes:
  • An embodiment of the present application further provides a display device.
  • the display device includes:
  • Memory for storing program instructions
  • the processor is configured to call a program instruction stored in the memory and execute the method according to the foregoing embodiment of the present application according to the obtained program.
  • Another embodiment of the present application provides a computer-readable non-volatile storage medium, where the storage medium stores computer-executable instructions, and the computer-executable instructions are used to cause the computer to execute any one of the foregoing methods.
  • An embodiment of the present application provides a display method based on an identification object in a screenshot of the screen, the method includes:
  • an object recognition frame corresponding to the selected recognition object body is displayed, as the focus frame, differently from the other object recognition frames.
  • a screenshot of the current screen is displayed on a first layer on the screen, and an object recognition frame corresponding to the identified object body in the screenshot is displayed on a second layer on the screen; the second layer is above the first layer.
  • the position information of the recognition target object is used to indicate a display position and size of an object recognition frame corresponding to the recognition target object in the screenshot.
  • the position information of the recognition object body includes at least: coordinate information of any corner of the rectangular frame corresponding to the outline of the recognition object body, and the width and height of the rectangular frame.
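The position information described above, one corner of the rectangular frame plus its width and height, is enough to reconstruct the recognition frame and its center. A minimal sketch with hypothetical helper names (the patent does not name these functions):

```python
# Rebuild a recognition frame from the position information described
# above: one corner (here assumed to be the top-left) plus width/height.
# The center coordinate is what the focus-navigation step uses later.

def frame_rect(x, y, w, h):
    """Return the four edges of a recognition frame from its top-left
    corner and its width/height."""
    return {"left": x, "top": y, "right": x + w, "bottom": y + h}

def frame_center(x, y, w, h):
    """Center coordinate of the rectangular frame."""
    return (x + w / 2.0, y + h / 2.0)

r = frame_rect(100, 50, 80, 120)
assert (r["right"], r["bottom"]) == (180, 170)
assert frame_center(100, 50, 80, 120) == (140.0, 110.0)
```

Because the focus frame is drawn from the same position information, it coincides exactly with the recognition frame it overlays.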
  • the method further includes:
  • the identification content and related recommended content of the selected recognition target body are displayed on the screen.
  • An embodiment of the present application provides a display method based on an identified object in a screenshot of the screen.
  • the method includes:
  • the focus frame is overlaid and displayed on an object recognition frame corresponding to the selected recognition object body.
  • a screenshot of the current screen is displayed on a first layer on the screen, and a second layer on the screen displays the object recognition frames corresponding to the recognition object bodies in the screenshot and the focus frame of the selected recognition object; wherein the second layer is located above the first layer.
  • a display position and a size of a focus frame of the selected recognition target body are determined based on position information of the selected recognition target body, so that the focus frame and the object recognition frame completely coincide.
  • the position information of the recognition target body includes at least: coordinate information of any corner of a rectangular frame corresponding to the outline of the recognition target body.
  • the method further includes:
  • An embodiment of the present application provides a display device, including:
  • a processor in communication with the memory and the screen, the processor for performing the method described above.
  • An embodiment of the present application provides a graphical user interface method for displaying screenshots, and the method includes:
  • in response to an input instruction for instructing the selector to move between at least one recognition frame, displaying the selector at the recognition frame of a recognition object, and upon determining that the size of the recognition frame of the selected recognition object is within a preset threshold, displaying the selected recognition object image enlarged.
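The threshold test described above, enlarging the recognized image only when its recognition frame is small, can be sketched as follows. The pixel thresholds and scale factor here are assumptions for illustration, not values from the patent:

```python
# Decide whether the selected recognition frame is too small to view from
# a distance; if so, return the enlarged pop-up size, otherwise None.

MIN_WIDTH, MIN_HEIGHT = 120, 120   # assumed "preset threshold", in pixels

def should_magnify(frame_w, frame_h, scale=2.0):
    """Return the enlarged (width, height) if the frame falls below the
    preset threshold, or None when no pop-up is needed."""
    if frame_w < MIN_WIDTH or frame_h < MIN_HEIGHT:
        return (int(frame_w * scale), int(frame_h * scale))
    return None

assert should_magnify(60, 80) == (120, 160)   # small head -> enlarged pop-up
assert should_magnify(200, 240) is None       # large enough already
```

This matches the behavior described later for FIGS. 3C-3D, where small character avatars are shown enlarged near the selector while large ones are only labeled in place.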
  • An embodiment of the present application provides a display device, where the display device includes:
  • a controller for controlling the display to display a graphical user interface in response to an input command of the user interface specifically performing:
  • in response to an input instruction for instructing the selector to move between at least one recognition frame, displaying the selector at the recognition frame of a recognition object, and upon determining that the size of the recognition frame of the selected recognition object is within a preset threshold, displaying the selected recognition object image enlarged.
  • FIG. 1 is a schematic diagram of a screenshot display method of a display device according to an embodiment of the present application
  • FIG. 2 is a schematic diagram illustrating a display screen provided by a display device by way of example
  • FIG. 3A exemplarily illustrates a schematic diagram of a GUI 400 provided by a display device
  • FIG. 3B exemplarily illustrates a schematic diagram of a GUI 400 provided by a display device
  • FIG. 3C exemplarily illustrates a schematic diagram of a GUI 400 provided by a display device
  • FIG. 3D exemplarily illustrates a schematic diagram of a GUI 400 provided by a display device
  • FIG. 3E exemplarily illustrates a schematic diagram of a GUI 400 provided by a display device
  • FIG. 3F exemplarily shows a schematic diagram of a GUI 400 provided by a display device
  • FIG. 3G exemplarily illustrates a schematic diagram of a GUI 400 provided by a display device
  • FIG. 3H illustrates a schematic diagram of a GUI 400 provided by a display device
  • FIG. 3I illustrates a schematic diagram of a GUI 400 provided by a display device
  • FIG. 3J exemplarily illustrates a schematic diagram of a GUI 400 provided by a display device
  • FIG. 3K exemplarily illustrates a schematic diagram of a GUI 400 provided by a display device
  • FIG. 3L exemplarily illustrates a schematic diagram of a GUI 400 provided by a display device
  • FIG. 4A exemplarily shows a flowchart of a graphical user interface display method for displaying screenshots
  • FIG. 4B exemplarily shows a flowchart of a graphical user interface display method for displaying screenshots
  • FIG. 4C exemplarily shows a flowchart of a graphical user interface display method for displaying screenshots
  • FIG. 5A illustrates a schematic diagram of a layer distribution structure of an identification object display interface
  • FIG. 5B exemplarily illustrates a position information diagram of an identification object
  • FIG. 6 is a schematic diagram of an object recognition interface layout provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a distribution structure of an object recognition interface layer provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a display method for identifying an object in a screenshot of a screen according to an embodiment of the present application
  • FIG. 9 is a schematic diagram of labeling position information of an object recognition frame according to an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of an object recognition interface interaction process according to an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of an example program interaction process according to an embodiment of the present application.
  • FIG. 12 is a schematic flow chart of a solution example provided by an embodiment of the present application.
  • FIG. 13 is a schematic flow chart of a solution example provided by an embodiment of the present application.
  • FIG. 14 is a schematic flowchart of an example program interaction according to an embodiment of the present application.
  • FIG. 15 is a schematic flowchart of an example program interaction process according to an embodiment of the present application.
  • FIG. 16 is a schematic flowchart of an example program interaction according to an embodiment of the present application.
  • FIG. 17 is a schematic diagram of a display device for identifying an object in a screenshot of a screen according to an embodiment of the present application.
  • FIG. 18 is a schematic diagram of a display device according to an embodiment of the present application.
  • the embodiment of the present application provides a display method for identifying an object in a screenshot of a screen, which is used to provide users with a visual focus state and real-time operation interactive feedback to meet the user's interaction needs, while reducing the amount of calculation and memory consumption.
  • a screenshot display method of a display device includes:
  • S101 Receive an input command sent by a remote controller to take a screenshot of a display screen (or display content) of the display device;
  • S103 Receive a selection instruction from a remote controller that indicates that a focus frame is on the identifiable object
  • the identifiable object may be an object in a display screen of the display device, such as a person, an article, or the like.
  • the first area is the area where the current playback picture 41 is located
  • the second area is the area where the screenshot image 420 is located
  • the third area is the graphic elements 421, 422, 423, and 424 area.
  • the upper region in FIG. 12 is a first region
  • the lower region includes a second region and a third region.
  • the display interface includes multiple layers: a screenshot of the display screen is displayed on a first layer, and an object recognition frame corresponding to the identifiable object is displayed on a second layer; wherein the second layer is located above the first layer.
  • the first object recognition frame is displayed overlaid around the first recognizable object on the display screen, and a magnified image of the first recognizable object is displayed within a predetermined range of the first object recognition frame, for example, element 45 in FIG. 3C.
  • the second and third regions are adjacent.
  • the second region where the screenshot image 420 in FIG. 3A is located is adjacent to the third region where the graphic elements 422 and 423 are located.
  • receiving a first operation instruction for indicating a screenshot sent by the remote controller, and in response to the first operation instruction, presenting two-dimensional code information associated with the screenshot on the display interface, so that the user can obtain the screenshot by scanning the QR code information, for example, the two-dimensional code information shown on the left in FIG. 13.
  • the display interface is updated while the QR code information corresponding to the screenshot is presented.
  • the image originally displayed in the first region continues to be displayed, while the images originally displayed in the second region and the third region are hidden.
  • corresponding object recognition frames are displayed around the identifiable objects of the displayed image, for example, the object recognition frame drawn at the head of a person on the screen on the right in FIG. 13.
  • the object recognition frame is located on the second layer above the first layer where the screenshot is located. Through the superposition of the first layer and the second layer, the object recognition frame on the head of the person on the right in FIG. 13 is presented.
  • the QR code information is drawn on a third layer, and the third layer is located on the second layer where the object recognition frame is located.
  • a key event sent by the remote controller is received, and the key event is dispatched to one of the first layer, the second layer, and the third layer to respond.
  • the third layer is hidden.
  • the third layer is gradually removed from the display interface in a gradual manner. That is, the display interface is updated to hide the QR code information on the display interface.
  • the method provided by the present application may further include: traversing the differences in the abscissa or ordinate between the center coordinates of the selected identifiable object and the center coordinates of the other identifiable objects, and taking the identifiable object with the smallest difference among the other identifiable objects as the next focus frame drawing position.
  • the next selected identifiable object may be determined in the manner described above. For example, another identifiable object that is closest to the selected identifiable object currently in focus is determined as the next selected identifiable object.
  • the next selected identifiable object may also be determined in other ways, for example, by operating the remote control's arrow keys or other keys.
  • the distance between the center coordinates of each identifiable object having the same difference value and the center coordinates of the selected identifiable object may be calculated according to the Pythagorean theorem, and the identifiable object with the smallest distance is taken as the next focus frame drawing position.
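The navigation rule above, comparing abscissa (or ordinate) differences between object centers and breaking ties with the Pythagorean distance, can be sketched as a small selection function. The function name and the axis parameter are illustrative, not from the patent:

```python
import math

def next_focus(current, others, axis=0):
    """Pick the identifiable object whose center is closest to `current`
    along `axis` (0 = abscissa, 1 = ordinate); objects with the same
    axis difference fall back to the straight-line (Pythagorean)
    distance between centers."""
    def key(center):
        axis_diff = abs(center[axis] - current[axis])
        euclid = math.hypot(center[0] - current[0], center[1] - current[1])
        return (axis_diff, euclid)   # tuple order encodes the tie-break
    return min(others, key=key)

current = (100, 100)
others = [(160, 100), (160, 180), (400, 100)]
# (160, 100) and (160, 180) tie on the abscissa difference (60), so the
# Pythagorean distance decides in favor of (160, 100).
assert next_focus(current, others) == (160, 100)
```

Sorting by the `(axis_diff, euclid)` tuple implements exactly the two-stage rule: the axis difference is compared first, and the Euclidean distance is consulted only on ties.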
  • FIG. 2 is a schematic diagram illustrating a display screen of a display device by way of example.
  • the display device may provide the display with a current playback picture 41, which may be at least one of a text, an image, and a video.
  • the currently playing picture 41 shown in FIG. 2 is a TV drama segment (or a TV video image sequence).
  • as shown in FIG. 2, when a TV series is played on the display device and the user needs to know information such as character information, clothing information, or channel information in the currently playing screen 41, the user can press a preset key (such as a screenshot key) on the control device.
  • the display device may respond to the screenshot operation instruction corresponding to the preset key, take a screenshot of the current playback screen 41 to obtain a screenshot image, and simultaneously transfer the screenshot image to the image recognition server for content recognition processing, so that the image recognition server returns content information of the identifiable objects contained in the screenshot image and recommended content related to the identifiable objects for users to browse by category.
  • the display device may classify and display the returned recognition data according to a preset rule. For example: if the image recognition server recognizes that the screenshot image contains 5 characters, then the identification data returned to the display device may include: the position information of the 5 characters in the screenshot image, the encyclopedia introduction information of each character, other film and television dramas the character has played in, the characters' clothing information in the screenshot image, etc.
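The returned recognition data described above can be illustrated with a hypothetical structure; every field name here is an assumption for the sketch, not the image recognition server's actual schema:

```python
# Hypothetical shape of the data the image recognition server returns:
# per-character position in the screenshot plus the recommended content
# the display device groups for category-wise browsing.

recognition_result = {
    "objects": [
        {
            "name": "YZ",                       # recognized character
            "position": {"x": 100, "y": 50, "w": 80, "h": 120},
            "encyclopedia": "Short introduction text ...",
            "other_dramas": ["Drama A", "Drama B"],
            "clothing": ["coat", "scarf"],
        },
        # ... one entry per recognized character (five in the example above)
    ]
}

def group_by_category(result):
    """Regroup the per-object data into per-category lists, mirroring the
    'classify and display according to a preset rule' step."""
    grouped = {"positions": [], "encyclopedia": [], "clothing": []}
    for obj in result["objects"]:
        grouped["positions"].append((obj["name"], obj["position"]))
        grouped["encyclopedia"].append((obj["name"], obj["encyclopedia"]))
        grouped["clothing"].append((obj["name"], obj["clothing"]))
    return grouped

grouped = group_by_category(recognition_result)
assert grouped["positions"][0][0] == "YZ"
```

The position entries feed the recognition-frame drawing on the second layer, while the other categories populate the recommended-content panels shown in FIGS. 3H-3L.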
  • FIGS. 3A-3L are schematic diagrams of a GUI 400 provided by a display device by way of example.
  • the display device responds to a screenshot operation instruction triggered by a preset key, and provides a GUI 400 to the display.
  • the GUI 400 includes a current playback screen 41 and a floating display area 42 that is displayed over the current playback screen 41 and located in its lower portion.
  • in the floating display area 42, a screenshot image 420 as well as the identified person and clothing information are displayed.
  • the screenshot image 420 is displayed in the middle area of the floating display area 42, and graphic elements 421 to 424, such as avatars of the characters identified in the screenshot image 420, are displayed on the two sides of the screenshot image 420.
  • the GUI 400 provided by the display device further includes a selector 43 indicating that a graphical element is selected, and the position of the selector 43 in the display interface can be moved by user input through a control device (for example, a remote controller) to select different identifiable objects in the display interface.
  • the selectable objects may be the screenshot image 420 or the graphic elements 421, 422, 423, or 424, and so on.
  • the screenshot image 420 and the character avatars in the graphic elements 421 to 422 are displayed, and the selector 43 indicates that the screenshot image 420 is selected.
  • a plurality of object recognition frames are displayed at a plurality of person heads, and the selector 43 indicates that the object recognition frame 4201 is selected.
  • the display form of the selector may be a focus frame.
  • the graphical element displayed on the display interface may be selected or controlled according to user input through the control device, by controlling the movement of the focus frame on the display device. For example, the user can control the movement of the focus frame through the direction keys on the control device to select and control the graphic elements.
  • the focus frame is presented with a thick line; the focus frame can also be indicated by changing the size, color, transparency, or outline of the focused graphic element.
  • when a user operates a control device to instruct the selector 43 to select the screenshot image 420, such as by pressing a confirmation key on the control device, the display device responds to an input that activates the screenshot image 420.
  • object recognition frames 4201 to 4205 are displayed respectively at the avatars of the characters in the five graphical elements 421 to 425, such as rectangular frames.
  • according to the indication of the object recognition frames in the screenshot image, the user can operate the arrow keys on the control device, and the display device responds accordingly.
  • the position of the selector in the GUI is moved according to the movement instruction corresponding to the direction key to indicate that different object recognition frames are selected, so that content links corresponding to the object recognition frames can be activated to browse content information of the selected recognition objects.
  • the display interface of the display device is updated, and the selector 43 is displayed at the object recognition frame 4201.
  • a graphic element and a corresponding object recognition frame, enlarged in proportion to the person's head, may be displayed in a pop-up near the selector.
  • the user usually views or operates the GUI provided by the display device from a long distance, so when the size of a person image in the screenshot image is small, it is not easy for the user to view, or the image is blurry and difficult to identify; in the embodiment of the present application, when the size of the person image is small, an enlarged person image is displayed so that the user can clearly view the person's head image from a longer distance.
  • according to the name of the person's avatar returned by the image recognition server, the person's name is marked at the enlarged object recognition frame. In this way, while clearly viewing a person's avatar from a long distance, the user can get a preliminary understanding of the name corresponding to the person's avatar.
  • a connecting line may also be displayed between the selector and the enlarged character avatar and its corresponding object recognition frame, to prompt the user that the enlarged character avatar and its corresponding object recognition frame belong to the object recognition frame currently selected by the selector, which further enables the user to view the person's avatar more clearly.
  • the connecting line 44 connects the selector 43 and the enlarged object recognition frame 45.
  • the selector and the object recognition frame corresponding to the character avatar selected by the selector have the same size and shape, but the selector is displayed differently from the object recognition frames corresponding to the other character avatars not selected by it, as in the GUI 400 shown in FIG. 3C.
  • the border line of the selector 43 may be displayed in bold or in another color to distinguish it from the object recognition frame 4201, so as to indicate to the user which character head the selector 43 is currently focused on.
  • the display device may control the selector 43 to move to the object recognition frame 4202 to the right in response to the movement instruction corresponding to the right direction key, and display the human head and the object recognition frame enlarged in equal proportions near the selector 43.
  • the display device marks the person's name, LT, at the enlarged object recognition frame according to the name of the person's avatar returned by the image recognition server. In this way, when the size of the person's image is small, the user can more clearly view the person's avatar from a longer distance and get a preliminary understanding of the name corresponding to the person's avatar.
  • a connecting line may also be displayed between the selector and the enlarged object recognition frame, to remind the user that the enlarged character avatar and its corresponding object recognition frame belong to the object recognition frame currently selected by the selector, further Users can see people's avatars more clearly.
  • the display device may, in response to the move instruction corresponding to the right direction key, control the selector 43 to move right to the object recognition frame 4203.
  • the display device marks the person's name, QX, at the position of the selector 43 according to the name of the person's avatar returned by the image recognition server. In this way, because the size of the person image is large and the person's name is marked at the person's head, the user can clearly view the person's head and understand the corresponding name from a long distance.
  • the display device may respond to the movement instruction corresponding to the right direction key, and control the selector 43 to move to the object recognition frame 4204 to the right.
  • the display device marks the person's name, JX, at the position of the selector 43 according to the name of the person's avatar returned by the image recognition server. In this way, because the size of the person image is large and the person's name is marked at the person's head, the user can clearly view the person's head and understand the corresponding name from a long distance.
  • the display device may control the selector 43 to move to the right of the object recognition frame 4205 in response to a movement instruction corresponding to the right direction key.
  • the display device marks the person's name, WZW, at the position of the selector 43 according to the name of the person's avatar returned by the image recognition server. In this way, because the size of the person image is large and the person's name is marked at the person's head, the user can clearly view the person's head and understand the corresponding name from a long distance.
  • as shown in FIG. 3G, when the user needs to return to view the content information of the previous recognition object, the user can operate the arrow keys on the control device, such as the left arrow key, according to the indication of the object recognition frame corresponding to each recognition object in the screenshot image.
  • the display device may respond to the movement instruction corresponding to the left arrow key and display the GUI in the reverse order of FIGS. 3G-3F-3E-3D-3C.
  • when the user operates the control device to instruct the selector to activate the object recognition frame of the selected recognizable object, for example, by pressing the confirmation key on the control device, the display device responds to the input instruction that activates the object recognition frame of the selected recognizable object and displays recommended content related to the selected recognition object on the display, to provide users with more detailed recognition object information.
  • the display device may, in response to an input instruction to activate the object recognition frame 4201, display the recommended content 4211 associated with the character avatar in the graphic element 421 on the right side of the display, such as YZ's encyclopedia introduction information, other film and television dramas YZ has played in, YZ's clothing information in the current TV drama clips, etc.
  • the display device may, in response to an input instruction to activate the object recognition frame 4202, display recommended content 4221, such as LT's encyclopedia introduction information, on the right side of the display, which is associated with the character's head in the graphical element 422.
  • the display device may, in response to input instructions activating the object recognition frames 4203 to 4205, respectively, display on the right side of the display the recommended content associated with the character avatars in the graphic elements 423 to 425, as shown in FIGS. 3J-3L.
  • similarly, the display device may respond to input instructions activating the graphic elements 421 to 424 and display, on the right side of the display, the recommended content 4211 to 4241 associated with the character avatars in the graphic elements 421 to 424, as shown in FIGS. 3H-3K, so as to provide the user with more detailed character information.
  • the user can operate the control device, such as pressing the return key on the control device or continuously pressing the return key on the control device.
  • the display device quits displaying the screenshot image and the identification content in the screenshot image, and continues to display the TV series clip shown in FIG. 2 on the display for the user to continue watching.
  • the display device may provide the user with a screenshot image, such as a display screen of the TV series, and the content information of the objects identified in the screenshot image.
  • a user can learn actor-related information in the TV series without querying it through other devices (for example, a smart phone), which improves the user experience.
  • by marking the object recognition frame corresponding to each identified object, the display device provides the user with a visual way to browse the identified objects contained in the screenshot image, together with real-time interactive feedback.
  • FIGS. 4A-4C exemplarily show a flowchart of a graphical user interface display method for displaying screenshots.
  • the method includes the following steps S51 to S55.
  • Step S51 The display displays the current screen.
  • the display may show a TV series clip as shown in FIG. 2.
  • Step S52 Receive a screenshot operation instruction that instructs the user to take a screenshot of the current screen through the control device. For example, the user presses a preset key (such as a screenshot key) on the control device.
  • Step S53 In response to the input screenshot operation instruction, a screenshot image of the current screen is displayed on the display, and based on the position information of one or more identifiable objects in the screenshot image, one or more object recognition frames for identifying the one or more identifiable objects are displayed on the display.
  • For example, in the GUI 400 shown in FIG. 3B, the screenshot image 420 of the current screen is displayed in full screen, and the object recognition frames 4201 to 4205 are displayed at the graphic elements 421 to 425 of the five person avatars identified in the screenshot image 420, respectively.
  • Step S54 Receive a movement instruction input by the user through the control device, instructing the selector to move between at least two object recognition frames. For example, the user presses a direction button (such as a right arrow) on the control device.
  • Step S55 In response to the input movement instruction, a selector is displayed at the object recognition frame of an identifiable object; when it is determined that the size of the object recognition frame of the selected identifiable object is within a preset threshold, the enlarged image of the selected recognition object is displayed.
  • For example, in the GUI shown in FIG. 3C, the selector 43 is displayed at the object recognition frame 4201; it is determined that the size of the object recognition frame 4201 is within the preset threshold, and the character avatar in the graphic element 421 and the object recognition frame 4201, enlarged in equal proportion, are displayed near the selector 43.
  • step S53 may include S531 and S532.
  • Step S531 In response to the inputted screenshot operation instruction, take a screenshot of the current screen to obtain a screenshot image, and draw a screenshot image of the current screen on layer B of the display interface.
  • Step S532 Send the screenshot image to the image recognition server, and according to the content information of the identifiable objects in the screenshot image returned by the image recognition server, draw an object recognition frame corresponding to each identifiable object on the layer M of the display interface.
  • the composition of the recognizable-object display interface in the screenshot image 420 shown in FIG. 5A is divided into two parts.
  • the layer B is the lowest view, and the content drawn on this layer is the screenshot image 420 of the current screen.
  • Layer M is a view above layer B, used to draw the object recognition frames 4201 to 4205, the selector 43, the name information of the recognizable object (such as YZ), the enlarged image of the identified object, and the connecting line 44 that visually connects the enlarged recognition object image with the selector 43.
  • the layer M is set to a visible state.
  • the content information of the identifiable object includes but is not limited to: the type of the identifiable object (such as types of people, animals, clothing, station logos, etc.), the position information of the identifiable object in the screenshot image, and the name of the identifiable object (such as Person name or animal name), recommended information about identifiable objects (such as movies played by characters), etc.
  • the position information of the identifiable object is used to indicate the display position and size of the object recognition frame corresponding to the identifiable object in the screenshot image.
  • the object recognition frame corresponding to the recognizable object uses a rectangular frame as an example.
  • the position information of the identifiable object includes, but is not limited to: coordinate information of any corner of the rectangular frame corresponding to the identifiable object outline, and the width and height of the rectangular frame.
  • the position information of the identifiable object includes: the X-axis coordinate X0 and the Y-axis coordinate Y0 of the upper left corner of the rectangular frame corresponding to the avatar outline of the identifiable object, and the width and height of that rectangular frame.
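As a hedged illustration (the field names are assumptions for this sketch, not terms from the patent), the position information returned for each identifiable object can be modeled as a small record from which the recognition frame's bounds follow directly:

```python
from dataclasses import dataclass

@dataclass
class ObjectPosition:
    """Position info for one identifiable object (names are illustrative)."""
    x0: int  # X coordinate of the rectangle's upper-left corner
    y0: int  # Y coordinate of the rectangle's upper-left corner
    w0: int  # width of the rectangle
    h0: int  # height of the rectangle

    def bounds(self):
        """Return (left, top, right, bottom) of the recognition frame."""
        return (self.x0, self.y0, self.x0 + self.w0, self.y0 + self.h0)

# e.g. an avatar outline whose frame starts at (120, 80), 200 wide, 260 tall
pos = ObjectPosition(x0=120, y0=80, w0=200, h0=260)
print(pos.bounds())  # (120, 80, 320, 340)
```

These four values are exactly what the ImageView control described below needs to position and size each object recognition frame.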
  • the position information of each identifiable object is obtained by traversal, and an image view (ImageView) control is created for each identifiable object.
  • the position and size of the ImageView control are controlled by the position information of the identifiable object shown in FIG. 5B.
  • the image of the object recognition frame stored in the display device may be filled into the ImageView control, and the ImageView control filled with the image of the object recognition frame may be drawn on the layer M. Therefore, an object recognition frame corresponding to each recognizable object can be drawn on the layer M of the display according to the position information of the recognizable object returned by the image recognition server.
  • step S55 may include S551-S553.
  • Step S551 In response to the input movement instruction, determine the currently selected recognizable object on the screenshot image, and draw the selector on the layer M, and overlay the selector on the object recognition frame of the selected recognition object.
  • the position and size of the created ImageView control are still controlled with the position information of the selected identifiable object as shown in FIG. 5B.
  • the currently selected identifiable object on the screenshot image is determined, and then the selector picture stored in the display device is filled into the ImageView control, and the ImageView control filled with the selector picture Draw on the layer M, while covering the object recognition frame of the currently selected identifiable object.
  • the size and shape of the two are coincident.
  • drawing an object recognition frame corresponding to each recognizable object on the layer M provides a selection indication; according to the order in which the user operates the direction keys of the control device, the display device determines the object recognition frame of the next selected recognizable object to which the selector will move.
  • the method for determining the object recognition frame of the next selected recognition object to which the selector is to be moved may include:
  • the difference between the X-axis coordinate X0 of the upper left corner of the rectangular frame corresponding to the currently selected identifiable object's avatar outline and the X-axis coordinate X of the upper left corner (X, Y) of the rectangular frame corresponding to each other identifiable object's avatar outline is calculated by traversal.
  • the rectangular frame of the identifiable object with the smallest difference among the other identifiable objects is taken as the object recognition frame of the next selected identifiable object, that is, the position to which the selector moves next.
  • if several identifiable objects have the same difference, the distance between the upper left corner vertex of the rectangular frame of each such object and the upper left corner vertex of the rectangular frame of the currently selected identifiable object, that is, the straight-line distance between the two upper left corner vertices, may be calculated separately.
  • the frame with the smallest distance serves as the object recognition frame of the next selected identifiable object. In other embodiments, other methods may be used to determine the next selected identifiable object.
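The selection rule above — smallest X-coordinate difference, with the straight-line distance between upper-left corner vertices as a tie-breaker — can be sketched as follows (function and variable names are assumptions for illustration):

```python
import math

def next_selected(current, others):
    """Pick the recognition frame the selector moves to next.

    Each frame is the (x0, y0) upper-left corner of its rectangle. The
    frame with the smallest |x - x_current| wins; ties are broken by the
    straight-line distance between upper-left corner vertices.
    """
    cx, cy = current

    def key(frame):
        x, y = frame
        x_diff = abs(x - cx)                 # primary criterion
        dist = math.hypot(x - cx, y - cy)    # tie-breaker
        return (x_diff, dist)

    return min(others, key=key)

current = (100, 100)
candidates = [(300, 100), (160, 400), (160, 120)]
# (160, 400) and (160, 120) tie on the X difference (60); (160, 120) is nearer
print(next_selected(current, candidates))  # (160, 120)
```

A real implementation would also restrict candidates by the direction key pressed; this sketch only shows the ranking rule the patent describes.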
  • Step S552 It is determined whether the size of the object recognition frame of the currently selected identifiable object is within a preset threshold; if yes, step S553 is performed; otherwise, the process ends.
  • Step S553 Display the enlarged currently selected identifiable object and its object recognition frame.
  • the recognition object image within the range of the object recognition frame of the currently selected identifiable object is cropped or re-captured from the screenshot image, and the recognition object image and its object recognition frame picture are enlarged in equal proportion.
  • An ImageView control can be created for the currently selected identifiable object.
  • the size of the control can be as shown in FIG. 5B: width W1 and height H1; the position of the control is also shown in FIG. 5B: the coordinates of its upper left corner are (X1, Y1). After that, the enlarged recognition object image and the object recognition frame picture are filled into the ImageView control, and the filled ImageView control is drawn on the layer M.
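A minimal sketch of steps S552-S553 under assumed names: the frame is enlarged only when its size falls within the preset threshold, that is, when the object is too small to view comfortably (the threshold value and scale factor here are illustrative, not from the patent):

```python
def enlarged_size(w, h, max_side=150, scale=2.0):
    """Return the enlarged (width, height) for a small recognition frame,
    or None when the frame is already large enough (no enlargement).

    `max_side` and `scale` are illustrative values, not from the patent.
    """
    if w <= max_side and h <= max_side:  # size within the preset threshold
        # equal-proportion enlargement keeps the aspect ratio intact
        return (int(w * scale), int(h * scale))
    return None

print(enlarged_size(80, 120))   # (160, 240) -> small avatar gets enlarged
print(enlarged_size(300, 400))  # None       -> large avatar left as-is
```

The returned width and height would correspond to W1 and H1 of the ImageView control described above.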
  • an interpolation algorithm can be used to optimize the enlarged recognition object image.
  • the interpolation algorithm may adopt a method known to those skilled in the art. For example, the edge regions and smooth regions of the original image may be extracted, and sampling pixels may be added to the smooth regions and the edge regions, respectively. In this way, the enlarged image appears clearer to the user.
  • the name information of the selected identifiable object may also be displayed at the enlarged selected recognition target image and its object recognition frame position.
  • the name of the person's avatar, YZ, is displayed in the upper left corner of the enlarged character avatar and object recognition frame 4201 in the graphic element 421.
  • a TextView control can be created for the selected identifiable object, the name of the selected identifiable object, YZ, is filled into the TextView control, and the filled TextView control is drawn on the layer M.
  • a connection object may also be displayed, the connection object visually connecting the selector with the enlarged selected recognition object image and its object recognition frame; the connection object is displayed together with the enlarged selected recognition object image and its object recognition frame.
  • the selector 43 is visually connected to the enlarged character avatar and the object recognition frame 4201 through the connecting line 44.
  • an ImageView control may be created for the selected identifiable object, the connection lines stored in the display device are filled into the ImageView control, and the filled ImageView control is drawn on the layer M.
  • the display device may further: receive an input instruction from the user for activating the object recognition frame of the selected recognizable object; and, in response to the input instruction, display on the display the identification content associated with the selected recognizable object.
  • the recommended content 4211 associated with the portrait of the character is displayed on the right side of the display, such as YZ's encyclopedia introduction, information about other film and television dramas YZ has appeared in, and the same-style clothing information in the current TV drama clip.
  • these associated recommendations 4211 may be displayed on the layer T above the layer M.
  • in summary, an object recognition frame is first used to mark each identifiable object in the screenshot image; the selected identifiable object, indicated by its corresponding object recognition frame, is then highlighted in the form of a selector to give the user feedback. When the size of the selected identifiable object is small, the object is also enlarged and displayed to further provide the user with a clear browsing experience.
  • the screenshot image, the object recognition frame corresponding to the recognizable object, and the selector on the selected recognizable object can be drawn in one layer.
  • the screenshot image occupies relatively large memory on the display device. If the selector were drawn directly in the layer holding the screenshot image, then each time the selector moves to select a recognition object, the graphics processing unit (GPU) would have to refresh the entire layer, causing a large amount of computation on the display device, high memory consumption, and reduced performance.
  • the above functions are therefore implemented by drawing the screenshot image on one layer and drawing the object recognition frames, selector, etc. on another layer, rather than drawing everything directly on the layer holding the screenshot image.
  • because the screenshot image and the selector are drawn in two layers, each time the selector moves only the layer where the selector is located needs to be refreshed, not the layer where the screenshot image is located, which reduces the amount of computation, reduces memory consumption, and improves performance.
  • it also provides users with visualization and real-time operation interactive feedback of the identified objects in the screenshot image.
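The two-layer drawing strategy above can be sketched as two independent buffers: moving the selector mutates only the overlay layer M, so the large screenshot in layer B is never redrawn (a simplified model for illustration, not the actual GPU pipeline):

```python
class TwoLayerView:
    """Simplified model of layer B (screenshot) and layer M (overlay)."""

    def __init__(self, screenshot):
        self.layer_b = screenshot       # drawn once, never refreshed
        self.layer_m = {"selector": None, "frames": []}
        self.b_redraws = 0              # counts refreshes of the heavy layer
        self.m_redraws = 0              # counts refreshes of the light layer

    def draw_frames(self, frames):
        self.layer_m["frames"] = list(frames)
        self.m_redraws += 1             # only the overlay is refreshed

    def move_selector(self, frame_id):
        self.layer_m["selector"] = frame_id
        self.m_redraws += 1             # screenshot layer stays untouched

view = TwoLayerView(screenshot="<large bitmap>")
view.draw_frames(["4201", "4202", "4203"])
for frame in ["4201", "4202", "4203"]:
    view.move_selector(frame)
print(view.b_redraws, view.m_redraws)  # 0 4
```

However many times the selector moves, the redraw count of layer B stays at zero, which is the saving the embodiment claims.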
  • an embodiment of the present application provides a display device, and some components in the display device include:
  • the display may display the GUI 400 shown in FIG. 3A, which includes the current playback screen 41 and the floating display area 42 floating on the current playback screen 41 at its bottom; the screenshot image 420 in the floating display area 42 is selected by the selector 43.
  • the user interface may receive a user's instruction to control the selector to move left / right in the GUI by pressing the left / right direction key on a control device (for example, a remote control) to change the position of the selector in the GUI.
  • a controller for executing: in response to an input instruction for instructing to take a screenshot of the current screen displayed on the display, displaying a screenshot image of the current screen on the display, and based on position information of at least one identifiable object in the screenshot image, A recognition frame for identifying an identifiable object is displayed on the display.
  • the screenshot image 420 of the current screen is displayed in full screen, and the object recognition frames 4201 to 4205 of the five person heads in the graphic elements 421 to 425 respectively identified in the screenshot image 420 are displayed.
  • the controller is further configured to execute:
  • in response to an input instruction for instructing the selector to move between at least one object recognition frame, the selector is displayed at the object recognition frame of an identifiable object; when it is determined that the size of the object recognition frame of the selected identifiable object is within a preset threshold, the enlarged selected recognition object image is displayed. For example, in the GUI 400 shown in FIG. 3C, the selector 43 is displayed at the object recognition frame 4201; it is determined that the size of the object recognition frame 4201 is within the preset threshold, and a proportionally enlarged avatar of the person is displayed near the selector 43.
  • the controller is further configured to perform: displaying an enlarged object recognition frame of the selected identifiable object. For example, in the GUI 400 shown in FIG. 3C, a selector 43 is displayed at the object recognition frame 4201; and a person's avatar and the object recognition frame 4201 which are enlarged in equal proportions are displayed near the selector 43.
  • the controller is further configured to perform: displaying the name information of the selected recognizable object at the enlarged selected recognition object image and the position of the object recognition frame thereof. For example, in the GUI 400 shown in FIG. 3C, the name -YZ of the character avatar is displayed in the upper left corner of the enlarged character avatar and the object recognition frame 4201.
  • the controller is further configured to execute: displaying a connection object, the connection object visually connecting the selector with the enlarged selected recognition object image and its object recognition frame; wherein the connection object is displayed together with the enlarged selected recognition object image and its object recognition frame.
  • the selector 43 is visually connected to the enlarged character avatar and the object recognition frame 4201 through the connecting line 44.
  • the controller is further configured to execute: in response to an input instruction for instructing activation of the selected identifiable object, displaying identification content associated with the selected identifiable object on a display.
  • the recommended content 4211 associated with the portrait of the character is displayed on the right side of the display, such as YZ's encyclopedia introduction, information about other film and television dramas YZ has appeared in, and the same-style clothing information in the current TV drama clip.
  • a schematic diagram of the object recognition interface layout, displayed on the display interface of the display device, is shown in FIG. 6.
  • Figure 7 is a schematic diagram of the layer distribution structure of the object recognition interface.
  • the structure of the object recognition interface is divided into three parts. Layer B is the lowest view, and the content drawn on it is a screenshot of the display device (an OSD/VIDEO mixed screenshot). Layer M is the middle view, used to place the object recognition frames and the focus frame. Layer T is the uppermost view, used to place the QR code for screenshot sharing.
  • Layer B, layer M, and layer T are ViewGroup controls that display the interface, and these three layers are covered in turn.
  • Layer T is the top layer and covers the other two layers. Therefore, at any given time, key presses should be monitored and processed by only one layer. For example, as shown in FIG. 7, keys are monitored and processed by layer T only, not by layer B or layer M.
  • in the description below, the first layer is layer B, the second layer is layer M, and the third layer is layer T.
  • a display method for identifying an object in a screenshot of a screen provided by an embodiment of the present application includes S201-S203.
  • the screen is the screen of a smart TV.
  • the user can take a screenshot of the currently playing video through the remote control. After obtaining the screenshot in the background, the screenshot is drawn on the first layer.
  • S202 Obtain object identification information of the screenshot, and draw an object recognition frame on a second layer according to the object identification information, where the second layer is overlaid on the first layer;
  • the object identification information of the screenshot includes, but is not limited to, the type of the object (such as a person, an animal, a costume, a station logo, etc.) in the screenshot, the position information of the object on the screenshot (the position information of the object for short), the object Names, object-related recommendations, and more.
  • S203 Determine the currently selected object on the screenshot, cover the object recognition frame of the currently selected object with the focus frame of the currently selected object, and draw on the second layer.
  • the object identification information includes position information of an object.
  • drawing the object recognition frame on the second layer according to the object identification information includes:
  • the object recognition frame is filled into an image view ImageView control, and the image view control filled with the recognition frame is drawn on a second layer, wherein the ImageView control is created according to the position information of the object.
  • Covering the focus frame of the currently selected object over the object recognition frame of the currently selected object and drawing it on the second layer specifically includes:
  • the focus frame of the currently selected object is drawn on the second layer
  • the focus frame previously drawn on the second layer is deleted.
  • the method further includes:
  • the screenshot sharing QR code is drawn on the third layer, where the third layer is above the second layer.
  • the method further includes:
  • the method further includes:
  • the processing of the screenshot by the display device includes the following steps.
  • the first layer is directly drawn, and the obtained screenshot is set as the background content of the first layer.
  • the background service is requested to obtain the data of the identified content in the screenshot.
  • the second layer above the first layer is set to the visible state, and at the same time, the recognition frame of each recognized object, that is, the outline frame of the recognized object, is drawn according to the obtained position information of the recognized object.
  • the specific process includes:
  • the position information of each identified object is obtained by traversing, and an image view ImageView control is created for each identified object.
  • the position and size of the ImageView control are controlled according to the position information of each identified object, the recognition frame picture is filled into the ImageView control, the ImageView control filled with the object recognition frame is drawn on the ViewGroup control represented by the second layer, and the second layer is set to a visible state.
  • the visible state here can mean, for example, that no background image is set on the second layer, so it is transparent and the first layer is visible below it.
  • the location information of the recognized object is shown in FIG. 9 and includes, but is not limited to, four pieces of information: the x-axis coordinate X0 of the object's upper-left corner, the y-axis coordinate Y0 of the upper-left corner, the width of the object, such as its length W0 on the x-axis, and the height of the object, such as its length H0 on the y-axis.
  • the currently selected recognition object is determined according to the order in which the user operates the remote control keys, and the focus frame of the selected object is drawn according to the obtained position information of the selected object.
  • the specific process includes:
  • an ImageView control is created for the selected object, the object focus frame picture is filled into the ImageView control, and the ImageView control filled with the object focus frame is drawn over the object recognition frame of the currently selected object on the ViewGroup control of the second layer.
  • the focus frame previously drawn on the second layer will be deleted at the same time, thereby ensuring that only one position currently has a focus frame, so as to achieve the effect of the focus frame moving and changing in real time when the remote control is operated.
  • the method for acquiring the next focus position when the remote control is operated includes:
  • initially, a screenshot-sharing QR code is drawn on the third layer and displayed at the left end of the screen, as shown in FIG. 6.
  • when the user presses the right button while the QR code is on the screen, the third layer slides out of the screen to the left along the X axis in the form of an animation, for example by a distance w.
  • the focus frame is then drawn at the position of the leftmost selected object. When the user presses the left button and the current focus frame is at the position of the leftmost selected object, the third layer is moved right along the X axis by the same distance w through an animation and is displayed on the screen again, and the focus frame on the second layer is cleared.
  • FIG. 10 shows a schematic flowchart of an object recognition interface interaction process according to some embodiments of the present application.
  • the specific implementation steps include:
  • the background server (for example, an image recognition server, etc.) obtains the information of the recognition object.
  • the display device determines whether the predetermined-area coordinate .get() is less than 0 (that is, determines whether the QR code is on the TV screen).
  • when the user presses the right key: if the predetermined-area coordinate .get() is not less than 0 (indicating that the QR code is on the TV screen), the predetermined-area View of the third layer (the QR code) moves left by a certain distance w, the leftmost object information is obtained, and the focus frame View is drawn according to its position information; if the predetermined-area coordinate .get() is less than 0 (indicating that the QR code is not on the TV screen), the device continues to determine whether the current recognition object is the rightmost one. If it is the rightmost, the current focus state is kept with no action feedback. If it is not the rightmost, the information of the recognition object to the right, such as its position information, is obtained, a new focus frame View is drawn on the second layer according to that position information, and the focus frame View of the currently recognized object is destroyed on the second layer.
  • when the user presses the left key: the display device judges whether the predetermined-area coordinate .get() is less than 0 (that is, whether the QR code is on the TV screen). If it is not less than 0 (indicating that the QR code is on the TV screen), the current focus state is maintained and there is no action feedback; if it is less than 0 (indicating that the QR code is not on the TV screen), the device continues to determine whether the current recognition object is the leftmost one. If it is the leftmost, the focus frame View of the currently recognized object is destroyed and the predetermined-area View is moved right by the distance w. If it is not the leftmost, the information of the recognition object to the left and its position information are obtained, a new focus frame View is drawn, and the focus frame View of the currently recognized object is destroyed.
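The left/right key handling above can be condensed into a small state machine (a sketch with assumed names; the boolean `qr_visible` stands in for the `coordinate.get() < 0` check on the predetermined area):

```python
def handle_key(key, qr_visible, focus, count):
    """Return (qr_visible, focus) after a remote-control key press.

    focus is the index of the focused object, or None while the QR code
    still has focus; count is the number of recognized objects.
    """
    if key == "right":
        if qr_visible:                 # QR slides off; leftmost object focused
            return (False, 0)
        if focus == count - 1:         # already rightmost: no action feedback
            return (qr_visible, focus)
        return (qr_visible, focus + 1)
    if key == "left":
        if qr_visible:                 # QR already shown: no action feedback
            return (qr_visible, focus)
        if focus == 0:                 # leftmost: clear focus, QR slides back
            return (True, None)
        return (qr_visible, focus - 1)
    return (qr_visible, focus)

state = (True, None)
for key in ["right", "right", "left", "left"]:
    state = handle_key(key, *state, count=3)
print(state)  # (True, None) -> back to the QR code after moving out and back
```

Drawing and destroying the focus frame Views would happen as side effects of each transition; the sketch keeps only the state logic.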
  • FIG. 16 shows schematic diagrams of an interaction process according to an embodiment of the present application.
  • the smart image function is triggered by the user through a shortcut key; screenshot content is displayed in part of the user interface shown on the screen while the playback screen is still displayed in another area of the user interface. The screenshot is uploaded to the background for related identification and processing, and for a recommended content search;
  • the background server returns data, and the display device displays the data content in categories according to the established rules
  • the third layer T moves to the left and out of the screen through an animation, that is, the QR code fades away from the user interface; the focus frame is then displayed and drawn on the leftmost recognition object; as the user continues to press the right key, following the recognition frame and focus frame movement process described above, the focus frame is redrawn and positioned on the corresponding recognition object;
  • a display device for identifying an object in a screenshot of a screen provided in an embodiment of the present application includes:
  • the first unit 11 is configured to obtain a screenshot of a current display screen of the screen, and draw the screenshot on a first layer;
  • a second unit 12 is configured to obtain object identification information of the screenshot, and draw an object identification frame on a second layer according to the object identification information, wherein the second layer is overlaid on the first layer. on;
  • a third unit 13 is configured to determine a currently selected object on the screenshot, and cover a recognition frame of the currently selected object with a focus frame of the currently selected object, and draw on the second layer .
  • an embodiment of the present application further provides a display device, including:
  • the processor 600 is configured to read a program in the memory 610, so that the display device performs the following processes:
  • Determining the currently selected object on the screenshot, covering the recognition frame of the currently selected object with the focus frame of the currently selected object, and drawing it on the second layer.
  • with the above method, a screenshot of the screen currently displayed is obtained and drawn on the first layer; the object identification information of the screenshot is obtained and the object recognition frames are drawn on the second layer, which is overlaid on the first layer; the currently selected object on the screenshot is determined, and the focus frame of the currently selected object is drawn on the second layer over the recognition frame of the currently selected object. This provides users with a visual focus state and real-time interactive operation feedback, meets user needs, and reduces the amount of computation and memory consumption.
  • the bus architecture may include any number of interconnected buses and bridges, linking together one or more processors represented by the processor 600 and various circuits of the memory represented by the memory 610.
  • the bus architecture may also link various other circuits such as peripherals, voltage regulators, and power-management circuits.
  • An embodiment of the present application provides a display device, which may be a smart TV, a desktop computer, a portable computer, a smart phone, a tablet computer, or the like.
  • the display device may include a central processing unit (CPU), a memory, an input / output device, etc.
  • the input device may include a keyboard, a mouse, a touch screen, etc.
  • the output device may include a display screen, such as a liquid crystal display (Liquid Crystal Display, LCD), cathode ray tube (Cathode Ray Tube, CRT) and so on.
  • the user interface 620 may be an interface for connecting external or internal devices as needed.
  • the connected devices include, but are not limited to, a keypad, a display, a speaker, a microphone, a joystick, and the like.
  • the processor 600 is responsible for managing the bus architecture and general processing, and the memory 610 may store instructions and data used by the processor 600 when performing operations.
  • the processor 600 may be a CPU (central processing unit), an ASIC (application-specific integrated circuit), an FPGA (field-programmable gate array), or a CPLD (complex programmable logic device).
  • the memory 610 may include a read-only memory (ROM) and a random access memory (RAM), and provide the processor 600 with program instructions and data stored in the memory 610.
  • the memory 610 may be used to store a program of any of the methods provided in the embodiments of the present application.
  • the processor 600 calls the program instructions stored in the memory 610 and is configured to execute any of the methods provided in the embodiments of the present application according to the obtained program instructions.
  • the embodiment of the present application provides a computer-readable non-volatile storage medium for storing the computer program instructions used by the foregoing embodiments of the present application, which contains a program for executing any of the methods provided in the foregoing embodiments.
  • the non-volatile storage medium may be any available medium or data storage device accessible to a computer, including but not limited to magnetic storage (such as floppy disks, hard disks, magnetic tape, and magneto-optical (MO) disks), optical storage (such as CDs, DVDs, BDs, and HVDs), and semiconductor memory (such as ROM, EPROM, EEPROM, non-volatile memory (NAND flash), and solid-state drives (SSD)).
  • An embodiment of the present application provides a screenshot display method and display device, which can provide users with a visualized focus state and real-time interactive feedback for the objects identified in the screenshot, so as to meet user needs, improve user experience, reduce the amount of computation, and lower memory consumption.
  • the embodiments of the present application may be provided as a method, a device, a system, or a computer program product.
  • the present application may take the form of a computer program product implemented on one or more non-volatile computer storage media (including but not limited to disk storage, optical storage, and the like) containing computer program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application discloses a screenshot display method and device. The screenshot display method includes: when a display picture is shown on a display screen, receiving a first input command, issued by a remote controller, for taking a screenshot of the display screen of the display device; in response to the first input command, capturing a screenshot of the display screen; automatically determining one or more identifiable objects in the screenshot; receiving a second input command issued by the remote controller; in response to the second input command, performing the following operations: displaying the screenshot on a first display layer of the display screen; displaying one or more object identification frames for identifying the one or more identifiable objects in the screenshot in a second display layer; and, based on a user selection request issued by the remote controller, displaying, in the second display layer located above the first display layer, a focus frame on a first object identification frame, among the one or more object identification frames, corresponding to a first identifiable object among the one or more identifiable objects.

Description

Screenshot display method and device
This application claims priority to the Chinese patent application filed with the China Patent Office on September 27, 2018, with application number 201811133159.5 and the invention title "Display method, apparatus, and display terminal for identified objects in a screen picture screenshot"; to the Chinese patent application filed with the China Patent Office on September 27, 2018, with application number 201811132364.X and the invention title "Display method and display terminal for identified objects in a screen picture screenshot"; and to the Chinese patent application filed with the China Patent Office on March 15, 2019, with application number 201910199952.3 and the invention title "Graphical user interface method for displaying a screenshot of a display picture, and display device", the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the field of smart televisions, and in particular to a screenshot display method and device.
Background
A smart television controlled by a remote controller automatically identifies objects in the display interface. A user can select a particular identifiable object. The selection of identifiable objects, and feedback information about the selected identifiable object, can be presented to the user visually and interactively.
Summary of the invention
Embodiments of the present application provide a screenshot display method and device, for providing users with a visualized focus frame for the objects identified in the screenshot and real-time interactive feedback.
A screenshot display method for a display device provided in an embodiment of the present application includes:
receiving an input command, issued by a remote controller, for taking a screenshot of the display picture of the display device;
in response to the input command, obtaining a screenshot of the display picture of the display device, displaying the display picture in a first region of the display interface of the display device, displaying the screenshot in a second region of the display interface, determining identifiable objects in the screenshot, and arranging the identifiable objects in order in a third region of the display interface according to their corresponding position information in the screenshot, wherein each identifiable object can be surrounded by an object identification frame;
receiving a selection instruction, issued by the remote controller, for directing a focus frame to an identifiable object;
in response to the selection instruction, overlaying the focus frame on the object identification frame corresponding to the selected identifiable object.
In some implementations, the display interface includes multiple layers: the screenshot of the display picture is displayed on a first layer, and the object identification frames corresponding to the identifiable objects are displayed on a second layer, the second layer being located above the first layer.
In some implementations, a first object identification frame is displayed overlapping around a first identifiable object of the display picture, while an enlarged image of the first identifiable object is displayed within a predetermined range of the first object identification frame.
In some implementations, the second region and the third region are adjacent.
In some implementations, a first operation instruction relating to the screenshot, issued by the remote controller, is received; in response to the first operation instruction, QR code information associated with the screenshot is presented on the display interface, so that the user can obtain the screenshot by scanning the QR code information.
In some implementations, the QR code information is drawn on a third layer, the third layer being located above the second layer.
In some implementations, a key event sent by the remote controller is received and dispatched to one of the first layer, the second layer, and the third layer for response.
In some implementations, a second operation instruction relating to the screenshot, issued by the remote controller, is received; in response to the second operation instruction, the third layer is gradually removed from the display interface in a fading manner.
In some implementations, the differences in horizontal coordinate or vertical coordinate between the center coordinates of the selected identifiable object and the center coordinates of the other identifiable objects are traversed; the second identifiable object with the smallest difference among the other identifiable objects is taken as the next focus-frame drawing position.
In some implementations, in response to the existence of identifiable objects with the same horizontal or vertical coordinate difference, the distance between the center coordinates of those identifiable objects and the center coordinates of the selected identifiable object is further calculated according to the Pythagorean theorem, and the identifiable object with the smallest distance is taken as the next focus-frame drawing position.
A display device provided in an embodiment of the present application includes:
a display screen, for presenting images;
a memory, configured to store computer instructions and image data associated with the display screen;
a processor, in communication with the display screen and the memory, configured to run the computer instructions so that the display device:
receives an input command, issued by a remote controller, for taking a screenshot of the display picture of the display device;
in response to the input command, obtains a screenshot of the display picture of the display device, displays the display picture in a first region of the display interface of the display device, displays the screenshot in a second region of the display interface, determines identifiable objects in the screenshot, and arranges the identifiable objects in order in a third region of the display interface according to their corresponding position information in the screenshot;
receives a selection instruction, issued by the remote controller, for directing a focus frame to an identifiable object;
in response to the selection instruction, overlays the focus frame on the object identification frame corresponding to the selected identifiable object.
Another embodiment of the present application provides a computer-readable non-volatile storage medium storing computer-executable instructions which, when run by a processor, implement the method described above.
A display method for identifying objects in a screenshot of a screen picture, provided in an embodiment of the present application, includes:
obtaining a screenshot of the picture currently displayed on the screen, and drawing the screenshot on a first layer;
obtaining object identification information of the screenshot, and drawing object identification frames on a second layer according to the object identification information, wherein the second layer is overlaid on the first layer;
determining the currently selected object in the screenshot, covering the identification frame of the currently selected object with its focus frame, and drawing the focus frame on the second layer.
In some implementations, the object identification information includes position information of the objects;
drawing the object identification frames on the second layer according to the object identification information specifically includes:
filling the identification frame into an ImageView control, and drawing the ImageView control filled with the identification frame on the second layer, wherein the ImageView control is created according to the position information of the object.
In some implementations, covering the identification frame of the currently selected object with its focus frame and drawing it on the second layer specifically includes:
filling the focus frame of the currently selected object into an ImageView control, drawing the ImageView control filled with the focus frame on the second layer, and at the same time covering the identification frame of the currently selected object, wherein the ImageView control is created according to the position information of the currently selected object.
In some implementations, while the focus frame of the currently selected object is drawn on the second layer, the focus frame previously drawn on the second layer is deleted.
In some implementations, the method further includes:
drawing a screenshot-sharing QR code on a third layer, the third layer being located above the second layer.
In some implementations, the method further includes:
controlling the third layer to be shown or hidden according to instructions output by the user through the remote controller.
In some implementations, the method further includes:
traversing and calculating the differences in horizontal coordinate or vertical coordinate between the center coordinates of the currently selected object and the center coordinates of the other identified objects;
taking the identified object with the smallest difference among the other identified objects as the next focus-frame drawing position;
if identified objects with the same difference exist, further calculating the distance between the center coordinates of those identified objects and the center coordinates of the currently selected object according to the Pythagorean theorem, and taking the identified object with the smallest distance as the next focus-frame drawing position.
An embodiment of the present application further provides a display device, including:
a memory, for storing program instructions;
a processor, for calling the program instructions stored in the memory and executing the method described in the above embodiments of the present application according to the obtained program.
Another embodiment of the present application provides a computer-readable non-volatile storage medium storing computer-executable instructions for causing the computer to execute any of the above methods.
An embodiment of the present application provides a display method based on identified objects in a screenshot of a screen picture, the method including:
while displaying the current picture on the screen, receiving an input instruction for taking a screenshot of the current picture;
in response to the input instruction, displaying the screenshot of the current picture on the screen, and, based on position information of at least one identified object in the screenshot, displaying on the screen the object identification frame corresponding to the identified object;
receiving an input instruction for moving a focus frame between the at least one identified object;
in response to the input instruction, displaying the object identification frame corresponding to the selected identified object as the focus frame, distinguished from the other object identification frames.
In some implementations, the screenshot of the current picture is displayed on a first layer of the screen, and the object identification frames corresponding to the identified objects in the screenshot are displayed on a second layer of the screen, the second layer being located above the first layer.
In some implementations, the position information of an identified object is used to indicate the display position and size of the corresponding object identification frame in the screenshot.
In some implementations, when the object identification frame has a rectangular-border shape, the position information of the identified object includes at least: the coordinate information of any corner of the rectangular border corresponding to the outline of the identified object, and the width and height of the rectangular border.
In some implementations, after displaying the object identification frame corresponding to the selected identified object as the focus frame distinguished from the other object identification frames, the method further includes:
receiving an instruction for confirming selection of the selected identified object;
in response to the input instruction, displaying on the screen the identification content and related recommended content of the selected identified object.
An embodiment of the present application provides a display method based on identified objects in a screenshot of a screen picture, the method including:
while displaying the current picture on the screen, receiving an input instruction for taking a screenshot of the current picture;
in response to the input instruction, displaying the screenshot of the current picture on the screen, and, based on position information of at least one identified object in the screenshot, displaying on the screen the object identification frame corresponding to the identified object;
receiving an input instruction for moving a focus frame between the at least one identified object;
in response to the input instruction, overlaying the focus frame on the object identification frame corresponding to the selected identified object.
In some implementations, the screenshot of the current picture is displayed on a first layer of the screen, and the object identification frames corresponding to the identified objects in the screenshot, together with the focus frame of the selected identified object, are displayed on a second layer of the screen, the second layer being located above the first layer.
In some implementations, the display position and size of the focus frame of the selected identified object are determined based on the position information of the selected identified object, so that the focus frame completely coincides with the object identification frame.
In some implementations, the position information of the identified object includes at least the coordinate information of any corner of the rectangular border corresponding to the outline of the identified object.
In some implementations, the method further includes:
receiving an input instruction for moving the focus frame from the currently selected identified object to another identified object;
in response to the input instruction, comparing the differences in horizontal coordinate or vertical coordinate between the coordinate information of any corner of the rectangular border corresponding to the outline of the currently selected identified object and that of the rectangular borders corresponding to the outlines of the other identified objects, and displaying the focus frame on the object identification frame corresponding to the other identified object with the smallest difference.
An embodiment of the present application provides a display device, including:
a display;
a memory;
and a processor in communication with the memory and the screen, the processor being configured to execute the method described above.
An implementation of the present application provides a graphical user interface method for displaying a screenshot of a display picture, the method including:
displaying the current picture on the display;
in response to an input instruction for taking a screenshot of the current picture, displaying the screenshot image of the current picture on the display, and, based on position information of at least one identified object in the screenshot image, displaying on the display an identification frame for marking the identified object;
in response to an input instruction for moving a selector between at least one identification frame, displaying the selector at the identification frame of one identified object, and, when it is determined that the size of the identification frame of the selected identified object is within a preset threshold, displaying an enlarged image of the selected identified object.
An embodiment of the present application provides a display device, the display device including:
a user interface, for receiving user input instructions;
a display, for displaying a graphical user interface, and a selector whose position on the display can be moved based on user input instructions;
a controller, for controlling the display to show the graphical user interface in response to input commands from the user interface, and specifically executing:
in response to an input instruction for taking a screenshot of the current picture shown on the display, displaying the screenshot image of the current picture on the display, and, based on position information of at least one identified object in the screenshot image, displaying on the display an identification frame for marking the identified object;
in response to an input instruction for moving the selector between at least one identification frame, displaying the selector at the identification frame of one identified object, and, when it is determined that the size of the identification frame of the selected identified object is within a preset threshold, displaying an enlarged image of the selected identified object.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of a screenshot display method for a display device provided in an embodiment of the present application;
FIG. 2 exemplarily shows a schematic diagram of a display picture provided by a display device;
FIG. 3A exemplarily shows a schematic diagram of a GUI 400 provided by a display device;
FIG. 3B exemplarily shows a schematic diagram of a GUI 400 provided by a display device;
FIG. 3C exemplarily shows a schematic diagram of a GUI 400 provided by a display device;
FIG. 3D exemplarily shows a schematic diagram of a GUI 400 provided by a display device;
FIG. 3E exemplarily shows a schematic diagram of a GUI 400 provided by a display device;
FIG. 3F exemplarily shows a schematic diagram of a GUI 400 provided by a display device;
FIG. 3G exemplarily shows a schematic diagram of a GUI 400 provided by a display device;
FIG. 3H exemplarily shows a schematic diagram of a GUI 400 provided by a display device;
FIG. 3I exemplarily shows a schematic diagram of a GUI 400 provided by a display device;
FIG. 3J exemplarily shows a schematic diagram of a GUI 400 provided by a display device;
FIG. 3K exemplarily shows a schematic diagram of a GUI 400 provided by a display device;
FIG. 3L exemplarily shows a schematic diagram of a GUI 400 provided by a display device;
FIG. 4A exemplarily shows a flowchart of a graphical user interface display method for a screenshot of a display picture;
FIG. 4B exemplarily shows a flowchart of a graphical user interface display method for a screenshot of a display picture;
FIG. 4C exemplarily shows a flowchart of a graphical user interface display method for a screenshot of a display picture;
FIG. 5A exemplarily shows a schematic diagram of the layer distribution structure of the identified-object display interface;
FIG. 5B exemplarily shows a schematic diagram of the position information of an identified object;
FIG. 6 is a schematic layout diagram of the object identification interface provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of the layer distribution structure of the object identification interface provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of a display method for identifying objects in a screenshot of a screen picture provided in an embodiment of the present application;
FIG. 9 is a schematic diagram of the labeling of object identification frame position information provided in an embodiment of the present application;
FIG. 10 is a schematic diagram of the interaction flow of the object identification interface provided in an embodiment of the present application;
FIG. 11 is a schematic diagram of the interaction flow of a solution example provided in an embodiment of the present application;
FIG. 12 is a schematic diagram of the interaction flow of a solution example provided in an embodiment of the present application;
FIG. 13 is a schematic diagram of the interaction flow of a solution example provided in an embodiment of the present application;
FIG. 14 is a schematic diagram of the interaction flow of a solution example provided in an embodiment of the present application;
FIG. 15 is a schematic diagram of the interaction flow of a solution example provided in an embodiment of the present application;
FIG. 16 is a schematic diagram of the interaction flow of a solution example provided in an embodiment of the present application;
FIG. 17 is a schematic diagram of a display apparatus for identifying objects in a screenshot of a screen picture provided in an embodiment of the present application;
FIG. 18 is a schematic diagram of a display device further provided in an embodiment of the present application.
Detailed description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are merely exemplary. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in the present application without creative effort fall within the protection scope of the present application.
Embodiments of the present application provide a display method for identifying objects in a screenshot of a screen picture, so as to provide users with a visualized focus state and real-time interactive feedback, meet users' interaction needs, and reduce the amount of computation and memory consumption.
The embodiments of the present application are described in detail below with reference to the drawings of the specification. It should be noted that the order in which the embodiments of the present application are presented only represents the sequence of the embodiments, and does not represent the superiority or inferiority of the technical solutions provided by the embodiments.
Referring to FIG. 1, a screenshot display method for a display device provided in an embodiment of the present application includes:
S101: receiving an input command, issued by a remote controller, for taking a screenshot of the display picture (also called the display screen or display content) of the display device;
S102: in response to the input command, obtaining a screenshot of the display picture of the display device, continuing to display the display picture in a first region of the display interface of the display device, displaying the screenshot in a second region of the display interface, determining identifiable objects in the screenshot, and arranging the identifiable objects in order in a third region of the display interface according to their corresponding position information in the screenshot, wherein each identifiable object can be covered with an object identification frame, which can mark, identify, or delimit the identifiable object;
S103: receiving a selection instruction, issued by the remote controller, for directing a focus frame to an identifiable object;
S104: in response to the selection instruction, overlaying the focus frame on the object identification frame corresponding to the selected identifiable object.
An identifiable object may be an object in the display picture of the display device, such as a person or an item.
As shown in FIG. 3A, the first region is the region where the currently playing picture 41 is located, the second region is the region where the screenshot image 420 is located, and the third region is the region where the graphical elements 421, 422, 423, and 424 are located. As another example, in FIG. 12, the upper region is the first region, and the lower region includes the second region and the third region.
In some implementations, the display interface includes multiple layers: the screenshot of the display picture is displayed on a first layer, and the object identification frames corresponding to the identifiable objects are displayed on a second layer, the second layer being located above the first layer.
In some implementations, a first object identification frame is displayed overlapping around a first identifiable object of the display picture, while an enlarged image of the first identifiable object, for example 45 in FIG. 3C, is displayed within a predetermined range of the first object identification frame.
In some implementations, the second region and the third region are adjacent. For example, the second region where the screenshot image 420 in FIG. 3A is located is adjacent to the third region where the graphical elements 422 and 423 are located.
In some implementations, a first operation instruction relating to the screenshot, issued by the remote controller, is received; in response to the first operation instruction, QR code information associated with the screenshot, for example the QR code information shown on the left side of FIG. 13, is presented on the display interface, so that the user can obtain the screenshot by scanning the QR code information.
In some implementations, while the QR code information corresponding to the screenshot is presented, the display interface is updated. In the updated display interface, the image originally shown in the first region is displayed, and the images originally shown in the second and third regions are hidden. At the same time, corresponding object identification frames are displayed around the identifiable objects of the displayed image, for example the object identification frame drawn around the person's head in the picture on the right side of FIG. 13. That object identification frame is located on a second layer above the first layer where the screenshot is located; through the superposition of the first and second layers, the object identification frame around the person's head on the right side of FIG. 13 is presented.
In some implementations, the QR code information is drawn on a third layer, the third layer being located above the second layer where the object identification frames are located.
In some implementations, a key event sent by the remote controller is received and dispatched to one of the first layer, the second layer, and the third layer for response.
In some implementations, a second operation instruction relating to the screenshot, issued by the remote controller, is received;
in response to the second operation instruction, the third layer is hidden; for example, the third layer is gradually removed from the display interface in a fading manner. That is, the display interface is updated, and the QR code information is hidden on it.
In some implementations, the method provided in the present application may further include: traversing the differences in horizontal coordinate or vertical coordinate between the center coordinates of the selected identifiable object and the center coordinates of the other identifiable objects; taking the second identifiable object with the smallest difference among the other identifiable objects as the next focus-frame drawing position.
In some implementations, the next selected identifiable object may be determined in the manner described above, for example by taking the identifiable object closest to the identifiable object currently at the focus position as the next selected identifiable object.
In other implementations, the next selected identifiable object may also be determined in other ways, for example by operating the direction keys, or other keys, of the remote controller.
In some implementations, in response to the existence of identifiable objects with the same horizontal or vertical coordinate difference, that is, two identifiable objects at the same difference from the currently selected identifiable object, the distance between the center coordinates of those identifiable objects and the center coordinates of the selected identifiable object may further be calculated according to the Pythagorean theorem, and the identifiable object with the smallest distance is taken as the next focus-frame drawing position.
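The focus traversal just described can be sketched as a small function. This is a minimal illustration in Python, not the patented implementation; the representation of objects as `(cx, cy)` center tuples and the function name are assumptions for the sketch.

```python
from math import hypot

def next_focus(selected, others, axis="x"):
    """Pick the next focus target among `others` (a list of (cx, cy) centers).

    `selected` is the center (cx, cy) of the currently focused object.
    The candidate with the smallest horizontal (axis="x") or vertical
    (axis="y") center-coordinate difference wins; ties are broken by the
    straight-line (Pythagorean) distance between centers.
    """
    i = 0 if axis == "x" else 1
    return min(
        others,
        key=lambda c: (
            abs(c[i] - selected[i]),                      # primary: axis difference
            hypot(c[0] - selected[0], c[1] - selected[1]) # tie-break: distance
        ),
    )

# (140, 300) and (140, 100) tie on the x-difference (both 40), so the
# Euclidean distance decides in favor of (140, 100).
print(next_focus((100, 100), [(160, 100), (140, 300), (140, 100)]))  # → (140, 100)
```

Because `min` compares the key tuples lexicographically, the Pythagorean distance is only consulted when the axis differences are equal, which matches the two-step rule in the text.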
FIG. 2 exemplarily shows a schematic diagram of a display picture of a display device.
As shown in FIG. 2, the display device may provide a currently playing picture 41 to the display, and the currently playing picture 41 may be at least one of text, image, and video. For example, the currently playing picture 41 shown in FIG. 2 is a TV series clip (also called a TV video image sequence).
In FIG. 2, while a TV series clip is playing on the display device, when the user wants information such as the people, clothing, or channel in the currently playing picture 41, the user may press a preset key (such as a screenshot key) on the control device. In response to the screenshot operation instruction corresponding to the preset key, the display device may take a screenshot of the currently playing picture 41 to obtain a screenshot image, and at the same time pass the screenshot image to an image recognition server for content recognition processing, so that the image recognition server returns the content information of the identifiable objects contained in the screenshot image and recommended content related to those identifiable objects, for the user to browse by category.
In some implementations, when the image recognition server returns data associated with the identified objects, the display device may display the returned identification data by category according to preset rules. For example, if the image recognition server identifies five people in the screenshot image, the identification data returned to the display device may include: the position information of each of the five people in the screenshot image, encyclopedia information about the people, information about other films and TV series the people have appeared in, information about the clothing of the people in the screenshot image, and so on.
FIGS. 3A–3L exemplarily show schematic diagrams of a GUI 400 provided by a display device.
As shown in FIG. 3A, in response to the screenshot operation instruction triggered by the preset key, the display device provides a GUI 400 to the display. The GUI 400 includes the currently playing picture 41 and a floating display area 42 shown over the currently playing picture 41 and located at its lower part. Within the floating display area 42, the screenshot image 420, and the identified people and clothing information, are displayed. For example, in the GUI 400 shown in FIG. 3A, the screenshot image 420 is displayed in the middle of the floating display area 42, and the graphical elements 421–424 of the head portraits of the people identified in the screenshot image 420 are displayed on either side of the screenshot image 420.
In addition, the GUI 400 provided by the display device also includes a selector 43 indicating that a graphical element is selected. The position of the selector 43 in the display interface can be moved through the user's input while operating a control device (for example, a remote controller), to select different identifiable objects in the display interface; an identifiable object may present the screenshot image 420 or the graphical element 421, 422, 423, or 424.
For example, in the GUI 400 shown in FIG. 3A, the screenshot image 420 and the head portraits in the graphical elements 421–422 are displayed in the floating display area 42, and the selector 43 indicates that the screenshot image 420 is selected. As another example, in the GUI 400 shown in FIG. 3C, multiple object identification frames are displayed at the head portraits of multiple people, and the selector 43 indicates that the object identification frame 4201 is selected.
In some implementations, the selector may take the form of a focus frame. Based on the user's input through the control device, the movement of the focus frame displayed in the display device can be controlled to select or control the graphical elements shown on the display interface. For example, the user can control the movement of the focus frame through the direction keys on the control device, so as to select and control graphical elements.
The form in which the focus frame is marked is not limited. As an example, the focus frame is rendered with a bold line in FIG. 3A; the focus frame may also be shown by changing the size, color, transparency, outline, and the like of the focused graphical element.
In some implementations, referring to FIG. 3A and FIG. 3B, when the user operates the control device, for example by pressing the confirmation key on the control device, to indicate that the selector 43 has selected the screenshot image 420, the display device, in response to the input instruction activating the screenshot image 420, displays the screenshot image 420 full-screen on the display, and, based on the position information, returned by the image recognition server, of the identifiable objects contained in the screenshot image 420, displays on the display, at each identifiable object, an object identification frame for marking that identifiable object. For example, in the GUI 400 shown in FIG. 3B, within the screenshot image 420, object identification frames 4201–4205, for example rectangular borders, are displayed at the head portraits of the five identified graphical elements 421–425 respectively. This not only provides the user with an intuitive way of marking identifiable objects, so that the user directly understands the identifiable objects contained in the screenshot image, but also provides browsing hints for the user's subsequent browsing of the content information of the identifiable objects.
In some embodiments, when the user wants to view the content information delineated by an object identification frame presented on the display interface, the user may, following the indication of the object identification frames in the screenshot image, operate the direction keys on the control device, so that the display device, in response to the movement instruction corresponding to the direction key, moves the position of the selector in the GUI to indicate that a different object identification frame is selected, thereby activating the content link corresponding to that object identification frame and browsing the content information of the selected identified object.
For example, referring to FIG. 3B and FIG. 3C, when the selector 43 is indicated to have selected the object identification frame 4201 corresponding to the leftmost graphical element 421 in the screenshot image 420, the display interface of the display device is updated, and the selector 43 is displayed at the object identification frame 4201.
In some embodiments, a graphical element of the head portrait enlarged in equal proportion, together with its corresponding object identification frame, may also pop up near the selector. For example, the user usually views or operates the GUI provided by the display device from a relatively long distance, so when the size of a person's image in the screenshot image is small, it is hard for the user to see, or the image is blurry and hard to recognize. In the embodiments of the present application, when the size of a person's image in the screenshot image is small, an enlarged image of the person can be displayed, enabling the user to view the head portrait clearly even at a long distance. At the same time, according to the name of the head portrait returned by the image recognition server, the person's name is labeled at the enlarged object identification frame. In this way, while viewing the head portrait clearly at a long distance, the user can also get a preliminary idea of the name corresponding to that head portrait.
In some embodiments, a connecting line may also be displayed between the selector and the enlarged head portrait and its corresponding object identification frame, to indicate to the user that the enlarged head portrait and its object identification frame belong to the object identification frame currently selected by the selector, further enabling the user to view the head portrait more clearly. For example, in the GUI 400 shown in FIG. 3C, the connecting line 44 connects the selector 43 and the enlarged object identification frame 45.
Here, the selector has the same size and shape as the object identification frame corresponding to the head portrait selected by the selector, but is displayed differently from the object identification frames corresponding to the other, unselected head portraits. For example, in the GUI 400 shown in FIG. 3C, the border line of the selector 43 may be displayed in bold or in another color, distinguishing it from the object identification frame 4201, to indicate to the user that attention is currently on the head portrait selected by the selector 43.
Referring to FIG. 3C and FIG. 3D, when the user wants to view the content information of the next identifiable object, the user may, following the indication of the object identification frames corresponding to the identifiable objects in the screenshot image 420, operate a direction key on the control device, such as the right direction key. In response to the movement instruction corresponding to the right direction key, the display device may control the selector 43 to move right onto the object identification frame 4202, and display, near the selector 43, the head portrait and object identification frame enlarged in equal proportion. At the same time, according to the name of the head portrait returned by the image recognition server, the display device labels the person's name, LT, at the enlarged object identification frame. In this way, when the size of the person's image is small, the user can view the head portrait more clearly at a long distance and get a preliminary idea of the name corresponding to the head portrait.
Similarly, a connecting line may also be displayed between the selector and the enlarged object identification frame, to indicate to the user that the enlarged head portrait and its object identification frame belong to the object identification frame currently selected by the selector, further enabling the user to view the head portrait more clearly.
Referring also to FIG. 3D and FIG. 3E, when the user wants to continue viewing the content information of the next identifiable object, the user may, following the indication of the object identification frames corresponding to the identifiable objects in the screenshot image, operate a direction key on the control device, such as the right direction key. In response to the movement instruction corresponding to the right direction key, the display device may control the selector 43 to move right onto the object identification frame 4203. At the same time, according to the name of the head portrait returned by the image recognition server, the display device labels the person's name, QX, at the position of the selector 43. In this way, since the size of the person's image is large and the person's name is labeled at the head portrait, the user can view the head portrait clearly at a long distance and get a preliminary idea of the name corresponding to the head portrait.
Referring also to FIG. 3E and FIG. 3F, when the user wants to continue viewing the content information of the next identifiable object, the user may, following the indication of the object identification frames corresponding to the identifiable objects in the screenshot image, operate a direction key on the control device, such as the right direction key. In response to the movement instruction corresponding to the right direction key, the display device may control the selector 43 to move right onto the object identification frame 4204. At the same time, according to the name of the head portrait returned by the image recognition server, the display device labels the person's name, JX, at the position of the selector 43. In this way, since the size of the person's image is large and the person's name is labeled at the head portrait, the user can view the head portrait clearly at a long distance and get a preliminary idea of the name corresponding to the head portrait.
Referring also to FIG. 3F and FIG. 3G, when the user wants to continue viewing the content information of the next identifiable object, the user may, following the indication of the object identification frames corresponding to the identifiable objects in the screenshot image, operate a direction key on the control device, such as the right direction key. In response to the movement instruction corresponding to the right direction key, the display device may control the selector 43 to move right onto the object identification frame 4205. At the same time, according to the name of the head portrait returned by the image recognition server, the display device labels the person's name, WZW, at the position of the selector 43. In this way, since the size of the person's image is large and the person's name is labeled at the head portrait, the user can view the head portrait clearly at a long distance and get a preliminary idea of the name corresponding to the head portrait.
Similarly, in FIG. 3G, when the user wants to go back and view the content information of the previous identified object, the user may, following the indication of the object identification frames corresponding to the identified objects in the screenshot image, operate a direction key on the control device, such as the left direction key. In response to the movement instruction corresponding to the left direction key, the display device may display the GUI in the reverse order of FIG. 3G-3F-3E-3D-3C.
In some embodiments, when the user operates the control device to indicate that the selector activates the object identification frame of the selected identifiable object, for example by pressing the confirmation key on the control device, the display device, in response to the input instruction activating the object identification frame of the selected identified object, displays on the display the recommended content related to the selected identified object, providing the user with more detailed information about the identified object.
For example, referring to FIG. 3C and FIG. 3H, in response to the input instruction activating the object identification frame 4201, the display device may display, on the right side of the display, the recommended content 4211 associated with the head portrait in the graphical element 421, such as encyclopedia information about YZ, information about other films and TV series YZ has appeared in, and information about the same clothing YZ wears in the current TV series clip.
As another example, referring to FIG. 3D and FIG. 3I, in response to the input instruction activating the object identification frame 4202, the display device may display, on the right side of the display, the recommended content 4221 associated with the head portrait in the graphical element 422, such as encyclopedia information about LT, information about other films and TV series LT has appeared in, and information about the same clothing LT wears in the current TV series clip.
Similarly, in FIGS. 3E–3G, in response to the input instructions activating the object identification frames 4203–4205 respectively, the display device may display, on the right side of the display, the recommended content 4231–4251 associated with the head portraits in the graphical elements 423–425, as shown in FIGS. 3J–3L.
In other embodiments, in FIG. 3A, when the user operates the control device, for example by pressing the confirmation key on the control device, to indicate that the selector 43 has selected the graphical elements 421–424, the display device may also, in response to the input instructions activating the graphical elements 421–424, display on the right side of the display the recommended content 4211–4241 associated with the head portraits in the graphical elements 421–424, as shown in FIGS. 3H–3K, providing the user with more detailed information about the people.
In addition, in any of the GUIs shown in FIGS. 3A–3L above, the user may operate the control device, for example by pressing the return key on the control device once or repeatedly, and the display device may, in response to the input instruction corresponding to the return key, exit the display of the screenshot image and of the identified content in the screenshot image, and continue to display the TV series clip shown in FIG. 2 on the display for the user to continue watching.
As described in the above embodiments, when the display device plays a TV series clip, based on user input, the display device can provide the user with a screenshot image of the display picture of the TV series clip, together with the content information of the objects identified in that screenshot image, so that while watching the TV series clip the user can learn about the actors in it, without needing to look up related information about the clip through a device other than the display device (for example, a smartphone), improving user experience.
Further, in the process of presenting the content information of the identified objects in the screenshot image to the user, the labeling of the object identification frames corresponding to the identified objects provides the user with a visualized way of browsing the identified objects contained in the screenshot image, as well as real-time interactive feedback.
Furthermore, in the process of browsing the identified objects contained in the screenshot image, considering that users usually watch the display device from a distance, when the size of an identified object is small, an enlarged image of the identified object and its name information are displayed, enabling the user to browse the image and information of the identified object more clearly at a long distance, so as to meet user needs.
FIGS. 4A–4C exemplarily show flowcharts of a graphical user interface display method for a screenshot of a display picture.
With reference to the method shown in FIG. 4A, the method includes the following steps S51–S55.
Step S51: the display shows the current picture. For example, the display may show the TV series clip shown in FIG. 2.
Step S52: a screenshot operation instruction input by the user through the control device, indicating that a screenshot of the current picture is to be taken, is received. For example, the user presses a preset key (such as a screenshot key) on the control device.
Step S53: in response to the input screenshot operation instruction, the screenshot image of the current picture is displayed on the display, and, based on position information of one or more identifiable objects in the screenshot image, one or more object identification frames for marking the one or more identifiable objects are displayed on the display. For example, in the GUI 400 shown in FIG. 3B, the graphical element of the screenshot image 420 of the current picture is displayed full-screen, and the object identification frames 4201–4205 are displayed at the graphical elements 421–425 of the five head portraits identified in the screenshot image 420.
Step S54: a movement instruction input by the user through the control device, indicating that the selector is to move between at least two object identification frames, is received. For example, the user presses a direction key (such as the right key) on the control device.
Step S55: in response to the input movement instruction, the selector is displayed at the object identification frame of one identifiable object, and, when it is determined that the size of the object identification frame of the selected identifiable object is within a preset threshold, an enlarged image of the selected identified object is displayed. For example, in the GUI 400 shown in FIG. 3C, the selector 43 is displayed at the object identification frame 4201; and, it having been determined that the size of the object identification frame 4201 is within the preset threshold, the head portrait in the graphical element 421 and the object identification frame 4201, enlarged in equal proportion, are displayed near the selector 43.
Specifically, with reference to the method shown in FIG. 4B, the schematic diagram of the layer distribution structure of the identifiable-object display interface shown in FIG. 5A, and the schematic diagram of the position information of identifiable objects shown in FIG. 5B, step S53 may include S531 and S532.
Step S531: in response to the input screenshot operation instruction, a screenshot of the current picture is taken to obtain the screenshot image, and the screenshot image of the current picture is drawn on layer B of the display interface.
Step S532: the screenshot image is sent to the image recognition server, and, according to the content information of the identifiable objects in the screenshot image returned by the image recognition server, the object identification frames corresponding to the identifiable objects are drawn on layer M of the display interface.
It should be noted that the identifiable-object display interface in the screenshot image 420 shown in FIG. 5A consists of two parts. Layer B is the bottom-most View; the content drawn on this layer is the screenshot image 420 of the current picture. Layer M is a View located above layer B, used for drawing the object identification frames 4202–4205, the selector 43, the name information of the identifiable objects (such as YZ), the enlarged identified-object image, and the connecting line 44 connecting the enlarged identified-object image to the selector 43.
When the image recognition server returns the content information of the identifiable objects, layer M is set to the visible state. The content information of an identifiable object includes, but is not limited to: the type of the identifiable object (such as person, animal, clothing, station logo, and other types), the position information of the identifiable object in the screenshot image, the name of the identifiable object (such as a person's name or an animal's name), and recommended information related to the identifiable object (such as films a person has appeared in).
The position information of an identifiable object is used to indicate the display position and size of the corresponding object identification frame in the screenshot image. Here, the object identification frame corresponding to an identifiable object is taken to be a rectangular border, as an example. The position information of an identifiable object here includes, but is not limited to: the coordinate information of any corner of the rectangular border corresponding to the outline of the identifiable object, and the width and height of the rectangular border.
As shown in FIG. 5B, the position information of an identifiable object includes: the X-axis coordinate X0 and Y-axis coordinate Y0 of the upper-left corner of the rectangular border corresponding to the head-portrait outline of the identifiable object, the width (i.e. the length along the X axis) W0 of that rectangular border in the screenshot image, and the height (i.e. the length along the Y axis) H0 of that rectangular border in the screenshot image.
Specifically, the position information of each identifiable object is obtained by traversal, and an ImageView control is created for each identifiable object; the position and size of the ImageView control are controlled by the position information of the identifiable object as shown in FIG. 5B. The object identification frame picture stored in the display device can then be filled into that ImageView control, and the ImageView control filled with the object identification frame picture is drawn on layer M. In this way, the object identification frames corresponding to the identifiable objects can be drawn on layer M of the display according to the position information of the identifiable objects returned by the image recognition server.
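The traversal above — one view per object, positioned and sized from `(X0, Y0, W0, H0)` — can be sketched as follows. This is a minimal, hedged illustration in Python: `FrameView` and `draw_identification_frames` are hypothetical stand-ins for the Android ImageView controls and the layer-M ViewGroup described in the text, not the actual implementation.

```python
class FrameView:
    """Stand-in for an ImageView control holding the identification-frame picture;
    its position and size come directly from the object's position information."""
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h

def draw_identification_frames(layer_m, objects):
    """For each object's (X0, Y0, W0, H0), create a frame view sized to it and
    add it to layer M, the transparent layer that sits above screenshot layer B."""
    for (x0, y0, w0, h0) in objects:
        layer_m.append(FrameView(x0, y0, w0, h0))
    return layer_m

# Two identified objects produce two frame views on layer M.
layer_m = draw_identification_frames([], [(10, 20, 50, 60), (80, 20, 40, 40)])
print(len(layer_m))  # → 2
```

The point of the design is that layer M holds only these lightweight frame views, so redrawing it never touches the memory-heavy screenshot on layer B.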
With reference to the method shown in FIG. 4C, the schematic diagram of the layer distribution structure of the identifiable-object display interface shown in FIG. 5A, and the schematic diagram of the position information of identifiable objects shown in FIG. 5B, step S55 may include S551–S553.
Step S551: in response to the input movement instruction, the currently selected identifiable object in the screenshot image is determined, and the selector is drawn on layer M, covering the object identification frame of the selected identified object.
Here, the position and size of the created ImageView control are still controlled by the position information of the selected identifiable object as shown in FIG. 5B. First, the currently selected identifiable object in the screenshot image is determined according to the order in which the user operates the direction keys of the control device; then the selector picture stored in the display device is filled into an ImageView control, and the ImageView control filled with the selector picture is drawn on layer M, covering the object identification frame of the currently selected identifiable object.
Since the ImageView controls corresponding to the object identification frame of the selected identifiable object and to the selector are both created based on the position information of the selected identifiable object, their sizes and shapes coincide.
Further, drawing the object identification frames corresponding to the identifiable objects on layer M provides a selection indication; from the order in which the user operates the direction keys of the control device, the object identification frame of the next selected identifiable object, to which the selector will move, can be determined.
The method of determining the object identification frame of the next selected identified object to which the selector will move may include:
First, traversing and calculating the differences in horizontal coordinate or vertical coordinate between the upper-left-corner coordinates (X, Y) of the rectangular borders corresponding to the head-portrait outlines of the other identifiable objects and the upper-left-corner coordinates (X0, Y0) of the rectangular border corresponding to the head-portrait outline of the currently selected identifiable object, i.e. computing |X - X0| or |Y - Y0|.
Then, taking the rectangular border corresponding to the head-portrait outline of the identifiable object with the smallest difference among the other identifiable objects as the object identification frame of the next selected identifiable object, i.e. the position to which the selector moves next.
If there are rectangular borders, corresponding to the head-portrait outlines of identifiable objects, with the same difference, the distance between the upper-left-corner vertex of each such rectangular border and the upper-left-corner vertex of the rectangular border corresponding to the head-portrait outline of the currently selected identifiable object (i.e. the straight-line distance between the two upper-left-corner vertices) may further be calculated according to the Pythagorean theorem, and the rectangular border of the identifiable object with the smallest distance is taken as the object identification frame of the next selected identifiable object. In other implementations, other ways of determining the next selected identifiable object may also be used.
Step S552: it is determined whether the size of the object identification frame of the currently selected identifiable object is within a preset threshold; if so, step S553 is performed; otherwise the flow ends.
Continuing with the schematic diagram of the position information of identifiable objects shown in FIG. 5B: when the area W0*H0 of the object identification frame of the selected identifiable object is less than or equal to a preset threshold, or when the width W0 and/or the height H0 of that object identification frame is less than or equal to a preset threshold, it is determined that the size of the object identification frame of the selected identifiable object is within the preset threshold. Otherwise, it is determined that the size is not within the preset threshold, and the flow ends.
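The size test in step S552 admits either criterion from the text, area or side length. A minimal sketch, with illustrative threshold values (the function name and parameters are assumptions, not from the source):

```python
def within_threshold(w0, h0, max_area=None, max_side=None):
    """True when the selected frame is small enough to warrant the zoomed pop-up.

    Either criterion described in the text may be used:
    - area criterion:  W0 * H0 <= max_area
    - side criterion:  W0 <= max_side and/or H0 <= max_side
    """
    if max_area is not None and w0 * h0 <= max_area:
        return True
    if max_side is not None and (w0 <= max_side or h0 <= max_side):
        return True
    return False

print(within_threshold(40, 30, max_area=2000))   # 1200 <= 2000 → True
print(within_threshold(120, 90, max_area=2000))  # 10800 > 2000 → False
```

Only when this returns `True` does the flow proceed to step S553 and draw the enlarged image near the selector.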
Step S553: the enlarged currently selected identifiable object and its object identification frame are displayed.
Specifically, the identified-object image within the range of the object identification frame of the currently selected identifiable object is cut out of the screenshot image, or captured again, and that identified-object image, together with its object identification frame picture, is enlarged in equal proportion. An ImageView control may be created for the currently selected identifiable object; the size of this control may be as shown in FIG. 5B: width W1 and height H1, and its position may be as shown in FIG. 5B: the coordinates of its upper-left corner are (X1, Y1). The enlarged identified-object image and its object identification frame picture are then filled into this ImageView control, and the filled ImageView control is drawn on layer M.
In addition, considering that the enlarged identified-object image may be blurry, the enlarged identified-object image may also be optimized here through an interpolation algorithm. The interpolation algorithm may use methods known to those skilled in the art. For example, the edge region and the smooth region of the original image may be extracted, and sampling pixels may be added to the smooth region and the edge region respectively. This further enables the user to recognize the identified object more clearly.
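To make the upscaling step concrete: the sketch below uses plain nearest-neighbor sampling, which is a deliberately simpler stand-in for the edge/smooth-region interpolation the text alludes to; the pixel-grid representation and function name are assumptions for illustration.

```python
def upscale_nearest(pixels, factor):
    """Enlarge a 2-D grid of pixel values by an integer factor using
    nearest-neighbor sampling. (The text describes a more refined scheme
    that treats edge and smooth regions separately; this is only the
    simplest possible interpolation, shown for orientation.)"""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in pixels
        for _ in range(factor)   # repeat each source row `factor` times
    ]

# A 1x2 image doubled becomes 2x4: each pixel is copied into a 2x2 block.
print(upscale_nearest([[1, 2]], 2))  # → [[1, 1, 2, 2], [1, 1, 2, 2]]
```

A production implementation would instead use bilinear or edge-aware interpolation so the enlarged portrait does not look blocky.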
In some embodiments, the name information of the selected identifiable object may also be displayed at the position of the enlarged selected identified-object image and its object identification frame.
For example, in the GUI 400 shown in FIG. 3C, the name of the head portrait, YZ, is displayed at the upper-left corner of the enlarged head portrait in the graphical element 421 and the object identification frame 4201. Here, as shown in FIG. 5B, a TextView control may be created for the selected identifiable object, the name of the selected identifiable object, YZ, is filled into the TextView control, and the filled TextView control is drawn on layer M.
In some embodiments, a connecting object may also be displayed, which visually connects the selector with the enlarged selected identified-object image and its object identification frame; the connecting object is displayed together with the enlarged selected identified-object image and its object identification frame.
For example, in the GUI 400 shown in FIG. 3C, the selector 43 is visually connected with the enlarged head portrait and the object identification frame 4201 through the connecting line 44. Here, as shown in FIG. 5B, an ImageView control may be created for the selected identifiable object, the connecting line stored in the display device is filled into the ImageView control, and the filled ImageView control is drawn on layer M.
In some embodiments, the display device may also: receive an input instruction, input by the user, for activating the object identification frame of the selected identifiable object; and, in response to that input instruction, display on the display the identification content associated with the selected identifiable object.
For example, in the GUI 400 shown in FIG. 3H, the recommended content 4211 associated with the head portrait, such as encyclopedia information about YZ, information about other films and TV series YZ has appeared in, and information about the same clothing YZ wears in the current TV series clip, is displayed on the right side of the display. Here, this associated recommended content 4211 may be displayed on a layer T above layer M.
As described in the above embodiments, in the process of displaying the screenshot image of the currently playing picture on the display, the identifiable objects in the screenshot image are first labeled with object identification frames; then, following the labeling indication of the object identification frames corresponding to the identifiable objects, the selected identifiable object is highlighted in the form of the selector as feedback to the user. At the same time, when the size of the selected identifiable object is small, the selected identifiable object is displayed enlarged, to further provide the user with a clear browsing experience.
When implementing the above functions, the screenshot image, the object identification frames corresponding to the identifiable objects, the selector on the selected identifiable object, and so on, could all be drawn in a single layer. However, since the clarity of the screenshot image itself needs to be maintained, it occupies a large amount of the display device's memory. If the selector were drawn directly in the layer holding the screenshot image, then, because the screenshot image itself occupies much memory and the selector would be redrawn on it each time it moves to a selected identified object, refreshing the display of that layer through the graphics processing unit (GPU) would result in a large amount of computation, high memory consumption, and degraded performance of the display device.
Therefore, in the embodiments of the present application, the above functions are implemented by drawing the screenshot image on one layer and the object identification frames, selector, and so on on another layer. In particular, compared with drawing the selector directly in the layer holding the screenshot image, drawing the screenshot image and the selector on two separate layers means that the layer holding the screenshot image does not need to be refreshed every time the selector moves; only the layer holding the selector needs to be refreshed. This reduces the amount of computation, lowers memory consumption, and improves performance, while also providing the user with visualization of the objects identified in the screenshot image and real-time interactive feedback.
In addition, an embodiment of the present application provides a display device, some components of which include:
a display, for displaying a GUI, and a selector whose position on the display can be moved based on user input instructions. For example, the display may show the GUI 400 shown in FIG. 3A, which includes the currently playing picture 41, the floating display area 42 shown over the currently playing picture 41 and located at its bottom, and the selector 43 in the floating display area 42 indicating that the screenshot image 420 is selected.
a user interface, for receiving input instructions controlling the selector. For example, the user interface may receive an instruction by which the user, by pressing the left/right direction key on the control device (for example, a remote controller), controls the selector to move left/right in the GUI, so as to change the position of the selector in the GUI.
a controller, for executing: in response to an input instruction for taking a screenshot of the current picture shown on the display, displaying the screenshot image of the current picture on the display, and, based on position information of at least one identifiable object in the screenshot image, displaying on the display identification frames for marking the identifiable objects. For example, in the GUI 400 shown in FIG. 3B, the screenshot image 420 of the current picture is displayed full-screen, together with the object identification frames 4201–4205 at the five head portraits identified in the graphical elements 421–425 in the screenshot image 420.
In some embodiments, the controller is also used to execute:
in response to an input instruction for moving the selector between at least one object identification frame, displaying the selector at the object identification frame of one identifiable object, and, when it is determined that the size of the object identification frame of the selected identifiable object is within a preset threshold, displaying an enlarged image of the selected identified object. For example, in the GUI 400 shown in FIG. 3C, the selector 43 is displayed at the object identification frame 4201; and, it having been determined that the size of the object identification frame 4201 is within the preset threshold, a head portrait enlarged in equal proportion is displayed near the selector 43.
In some embodiments, the controller is also used to execute: displaying the enlarged object identification frame of the selected identifiable object. For example, in the GUI 400 shown in FIG. 3C, the selector 43 is displayed at the object identification frame 4201, and the head portrait and object identification frame 4201 enlarged in equal proportion are displayed near the selector 43.
In some embodiments, the controller is also used to execute: displaying the name information of the selected identifiable object at the position of the enlarged selected identified-object image and its object identification frame. For example, in the GUI 400 shown in FIG. 3C, the name of the head portrait, YZ, is displayed at the upper-left corner of the enlarged head portrait and the object identification frame 4201.
In some embodiments, the controller is also used to execute: displaying a connecting object that visually connects the selector with the enlarged selected identified-object image and its object identification frame; the connecting object is displayed together with the enlarged selected identified-object image and its object identification frame. For example, in the GUI 400 shown in FIG. 3C, the selector 43 is visually connected with the enlarged head portrait and the object identification frame 4201 through the connecting line 44.
In some embodiments, the controller is also used to execute: in response to an input instruction for activating the selected identifiable object, displaying on the display the identification content associated with the selected identifiable object. For example, in the GUI 400 shown in FIG. 3H, the recommended content 4211 associated with the head portrait, such as encyclopedia information about YZ, information about other films and TV series YZ has appeared in, and information about the same clothing YZ wears in the current TV series clip, is displayed on the right side of the display.
A schematic layout of the object identification interface is shown in FIG. 6; this figure is displayed in the display interface of the display device. FIG. 7 is a schematic diagram of the layer distribution structure of the object identification interface. The object identification interface consists of three parts: layer B is the bottom-most View, and the content drawn on it is the display-device screenshot (a mixed OSD/VIDEO screenshot); layer M is the middle View, used for placing the object identification frames and the focus frame; layer T is the top-most View, used for placing the QR code for sharing the screenshot.
Layer B, layer M, and layer T are all ViewGroup controls of the display interface, and the three layers are stacked in order, with layer T on top, covering the other two layers. Therefore, at any one moment, a key press should be monitored and handled by only one layer. For example, in the situation shown in FIG. 7, the key press is monitored and handled only by layer T, and is not monitored by layer B or layer M.
In the following, the first layer is layer B, the second layer is layer M, and the third layer is layer T.
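The rule that a key press is monitored by exactly one layer at a time can be sketched as a topmost-wins dispatcher. This is a hedged illustration, not the actual View hierarchy code; the dictionary layer representation and `dispatch_key` name are assumptions.

```python
def dispatch_key(key_event, layers):
    """Deliver a remote-controller key event to exactly one layer.

    `layers` is ordered bottom-to-top (B, M, T). The topmost layer that is
    visible and whose handler consumes the event receives it, mirroring the
    rule that a key press is monitored and handled by only one layer.
    Returns the name of the consuming layer, or None if nothing consumed it.
    """
    for layer in reversed(layers):  # try T first, then M, then B
        if layer["visible"] and layer["handler"](key_event):
            return layer["name"]
    return None

layers = [
    {"name": "B", "visible": True, "handler": lambda e: True},
    {"name": "M", "visible": True, "handler": lambda e: True},
    {"name": "T", "visible": True, "handler": lambda e: True},
]
print(dispatch_key("KEY_RIGHT", layers))  # → T (topmost visible layer wins)
```

When layer T slides off screen and is marked invisible, the same dispatch falls through to layer M, which is exactly the behavior described for the QR-code animation below.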
Referring to FIG. 8, a display method for identifying objects in a screenshot of a screen picture provided in an embodiment of the present application includes S201–S203.
S201: obtaining a screenshot of the picture currently displayed on the screen, and drawing the screenshot on a first layer.
For example, the screen is the screen of a smart TV; the user can take a screenshot of the currently playing video through the remote controller, and after the backend obtains the screenshot, the screenshot is drawn on the first layer.
S202: obtaining object identification information of the screenshot, and drawing object identification frames on a second layer according to the object identification information, wherein the second layer is overlaid on the first layer.
For example, the object identification information of the screenshot includes, but is not limited to, the types of the objects in the screenshot (such as people, animals, clothing, station logos, etc.), the position information of the objects in the screenshot (position information of the objects for short), the names of the objects, recommended information related to the objects, and so on.
S203: determining the currently selected object in the screenshot, covering the object identification frame of the currently selected object with its focus frame, and drawing the focus frame on the second layer.
The object identification information includes position information of the objects.
In some implementations, drawing the object identification frames on the second layer according to the object identification information includes:
filling the object identification frame into an ImageView control, and drawing the ImageView control filled with the identification frame on the second layer, wherein the ImageView control is created according to the position information of the object.
Covering the object identification frame of the currently selected object with its focus frame and drawing it on the second layer specifically includes:
filling the focus frame of the currently selected object into an ImageView control, drawing the ImageView control filled with the focus frame on the second layer, and at the same time covering the object identification frame of the currently selected object, wherein the ImageView control is created according to the position information of the currently selected object.
In some implementations, while the focus frame of the currently selected object is drawn on the second layer, the focus frame previously drawn on the second layer is deleted.
In some implementations, the method further includes:
drawing the screenshot-sharing QR code on a third layer, the third layer being located above the second layer.
In some implementations, the method further includes:
controlling the third layer to be shown or hidden according to instructions output by the user through the remote controller.
In some implementations, the method further includes:
traversing and calculating the differences in horizontal coordinate or vertical coordinate between the center coordinates of the currently selected object and the center coordinates of the other identified objects;
taking the identified object with the smallest difference among the other identified objects as the next focus-frame drawing position;
if identified objects with the same difference exist, further calculating the distance between the center coordinates of those identified objects and the center coordinates of the currently selected object according to the Pythagorean theorem, and taking the identified object with the smallest distance as the next focus-frame drawing position.
In some embodiments, the display device's processing of the screenshot includes the following steps.
Step one: after the display interface of the display device starts, the first layer is drawn directly, and the obtained screenshot is set as the background content of the first layer.
Step two: while the screenshot function starts, a request is made to the backend service to obtain the data of the identified content in the screenshot. When data is returned, the second layer above the first layer is set to the visible state, and at the same time, according to the obtained position information of the identified objects, the identification frames of the identified objects, i.e. the outline borders of the identified objects, are drawn. The specific process includes:
obtaining the position information of each identified object by traversal, and creating an ImageView control for each identified object, the ImageView control controlling its own position and size according to the position information of each identified object; filling the identification-frame picture into the ImageView control, drawing the ImageView control filled with the object identification frame onto the ViewGroup control represented by the second layer, and setting the second layer to the visible state. Here, the visible state may mean, for example, that no background image is set on the second layer, so that it is transparent and the first layer is visible beneath the second layer.
The position information of an identified object, as shown in FIG. 9, includes but is not limited to these four parts: for example, the x-axis coordinate X0 of the upper-left corner of the identified object, the y-axis coordinate Y0 of the upper-left corner of the identified object, the width of the identified object, i.e. its length W0 along the x axis, and the height of the identified object, i.e. its length H0 along the y axis.
In addition, the currently selected identified object is determined according to the order in which the user operates the remote-controller keys, and at the same time the focus frame of the selected object is drawn according to the obtained position information of the selected object. The specific process includes:
creating an ImageView control for the selected object according to its position information, filling the object focus frame into the ImageView control, and drawing the ImageView control filled with the object focus frame, together with the object identification frame of the currently selected object, onto the ViewGroup control represented by the second layer.
In this way, the first layer holding the screenshot does not have to be refreshed every time the focus moves within the second layer; and since the vast majority of the second layer has no content, refreshing it consumes little memory even when it happens. Overall, this saves a great deal of memory and improves the responsiveness of the display device to the user's selection operations on the user interface.
When the focus frame of the currently selected object is drawn, the focus frame previously drawn on the second layer is deleted at the same time, ensuring that only one position has a focus frame at any time, so as to achieve the effect of the focus frame moving in real time as the remote controller is operated.
Further, the method of obtaining the next adjacent focus position when the remote controller is operated includes:
First, traversing and calculating the differences in horizontal coordinate or vertical coordinate between the center coordinates (X, Y) of the other identified objects and the center coordinates (X0, Y0) of the currently selected identified object, i.e. computing |X - X0| or |Y - Y0|, and taking the identified object with the smallest difference among the other identified objects as the next selected focus position; if two or more identified objects with the same difference exist, the distance between the center of each such identified object and the center of the current identified object is further calculated according to the Pythagorean theorem, and the identified object with the smallest distance is taken as the next selected focus position.
Step three: after the object identification interface starts, the screenshot-sharing QR code is drawn on the third layer and displayed at the left end of the screen, as shown in FIG. 6. In this state, for example, when the user presses the right key, this layer slides out of the screen by moving left along the X axis by a certain distance, for example a distance w, through an animation; at the same time, the focus frame is drawn at the position of the leftmost selected object. When the user presses the left key and the current focus-frame position is on the leftmost selected object, the third layer moves right along the X axis by the same distance w through an animation and is presented on the screen, while the focus frame on the second layer is cleared.
Referring to FIG. 10, which shows a schematic diagram of the interaction flow of the object identification interface provided according to some embodiments of the present application, the specific implementation steps include:
(1) After the screenshot is obtained, the screenshot-sharing QR code is generated and drawn on the third layer;
(2) After the screenshot is obtained, the backend server (for example, an image recognition server) obtains the information of the identified objects. When the user presses the right key of the remote controller and the display device receives the operation instruction issued by the remote controller, the display device determines whether predetermined-region-coordinate.get() is less than 0 (i.e. determines whether the QR code is on the TV screen). If it is not less than 0 (indicating the QR code is on the TV screen), the predetermined-region View of the third layer (the QR code) of the TV screen is moved left by a certain distance w, and at the same time the information of the leftmost object is obtained and the focus-frame View is drawn according to its position information. If predetermined-region-coordinate.get() is less than 0 (indicating the QR code is not on the TV screen), it is further determined whether the position of the currently identified object is the rightmost; if it is the rightmost, the current focus state is maintained with no action feedback; if it is not the rightmost, the information of the identified object to the right, such as its position information, is obtained, a new focus-frame View is drawn in the second layer according to that position information, and the focus-frame View of the currently identified object in the second layer is destroyed;
(3) When the user presses the left key of the remote controller and the display device receives the operation instruction issued by the remote controller, the display device determines whether predetermined-region-coordinate.get() is less than 0 (i.e. determines whether the QR code is on the TV screen). If it is not less than 0 (indicating the QR code is on the TV screen), the current focus state and the QR code are maintained with no action feedback; if it is less than 0 (indicating the QR code is not on the TV screen), it is further determined whether the position of the currently identified object is the leftmost; if it is the leftmost, the focus-frame View of the currently identified object is destroyed and the predetermined-region View is moved right by a certain distance w; if it is not the leftmost, the information of the identified object to the left is obtained, a new focus-frame View is drawn according to its position information, and the focus-frame View of the currently identified object is destroyed.
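The left/right key handling of FIG. 10 amounts to a small state machine over "QR visible" and "which object has focus". The sketch below is a hedged model of that flow, with the class name and the boolean-flag representation as illustrative assumptions; animation and View creation/destruction are abstracted away.

```python
class ObjectIdScreen:
    """Minimal model of the FIG. 10 key handling over n objects, indexed
    left to right. `qr_visible` stands for the QR code being on screen;
    `focus` is None while the QR layer holds the keys."""
    def __init__(self, n_objects):
        self.n = n_objects
        self.qr_visible = True   # QR code starts on screen (third layer)
        self.focus = None        # index of the focused object, or None

    def press_right(self):
        if self.qr_visible:              # QR slides out; focus the leftmost object
            self.qr_visible = False
            self.focus = 0
        elif self.focus < self.n - 1:    # otherwise move focus right if possible;
            self.focus += 1              # at the rightmost object: no action feedback

    def press_left(self):
        if self.qr_visible:              # QR already shown: no action feedback
            return
        if self.focus == 0:              # leftmost: destroy focus, slide QR back in
            self.focus = None
            self.qr_visible = True
        else:                            # otherwise move focus left
            self.focus -= 1

s = ObjectIdScreen(3)
s.press_right(); s.press_right()
print(s.focus, s.qr_visible)  # → 1 False
```

Pressing left from the leftmost object restores the initial state (QR on screen, no focus frame), matching the symmetric animation described in the text.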
Referring to FIGS. 11–16, schematic diagrams of the interaction flow according to implementations of the present application are shown.
(1) As shown in FIG. 11, the smart-image function issued by the user through a shortcut key is received; the screenshot content is displayed in one region of the user interface shown on the screen, and the playing picture is displayed in another region of the user interface. In addition, the screenshot is uploaded to the backend for the relevant recognition processing and recommended-content search;
(2) As shown in FIG. 12, the backend server returns data, and the display device displays the data content by category according to established rules;
(3) As shown in FIG. 13, when the focus frame appears on the screenshot of FIG. 2, the user presses the activation command key of the remote controller to start the full-screen object identification display interface; at this point the third layer (i.e. the interface containing the QR code) is displayed over the first and second layers;
(4) As shown in FIGS. 14 and 15, when the right key of the remote controller is pressed, the third layer T moves to the left and out of the screen through an animation; that is, the QR code gradually fades away from the user interface. At this point the focus frame is displayed and drawn on the leftmost identified object. On further presses of the right key, following the description of the movement and implementation process of the identification frame and focus frame, the focus frame is redrawn and positioned on the corresponding identified object;
(5) As shown in FIG. 16, at the current focus-frame position, when the user presses the activation key on the remote controller, the identification and recommendation information related to that identified object is started and displayed.
Correspondingly, referring to FIG. 17, a display apparatus for identifying objects in a screenshot of a screen picture provided in an embodiment of the present application includes:
a first unit 11, for obtaining a screenshot of the picture currently displayed on the screen and drawing the screenshot on a first layer;
a second unit 12, for obtaining object identification information of the screenshot and drawing object identification frames on a second layer according to the object identification information, wherein the second layer is overlaid on the first layer;
a third unit 13, for determining the currently selected object in the screenshot, covering the identification frame of the currently selected object with its focus frame, and drawing the focus frame on the second layer.
Referring to FIG. 18, an embodiment of the present application further provides a display device, including:
a processor 600, for reading a program in a memory 610, so that the display device performs the following processes:
obtaining a screenshot of the picture currently displayed on the screen, and drawing the screenshot on a first layer;
obtaining object identification information of the screenshot, and drawing object identification frames on a second layer according to the object identification information, wherein the second layer is overlaid on the first layer;
determining the currently selected object in the screenshot, covering the identification frame of the currently selected object with its focus frame, and drawing the focus frame on the second layer.
According to the above method, a screenshot of the picture currently displayed on the screen is obtained and drawn on a first layer; object identification information of the screenshot is obtained, and object identification frames are drawn on a second layer according to the object identification information, the second layer being overlaid on the first layer; the currently selected object in the screenshot is determined, the identification frame of the currently selected object is covered with its focus frame, and the focus frame is drawn on the second layer. This provides the user with a visualized focus state and real-time interactive feedback, meets user needs, and reduces the amount of computation and memory consumption.
In FIG. 18, the bus architecture may include any number of interconnected buses and bridges, linking together one or more processors represented by the processor 600 and various circuits of the memory represented by the memory 610. The bus architecture may also link various other circuits such as peripherals, voltage regulators, and power-management circuits.
An embodiment of the present application provides a display device, which may be a smart TV, a desktop computer, a portable computer, a smartphone, a tablet computer, or the like. The display device may include a central processing unit (CPU), a memory, input/output devices, and so on. The input devices may include a keyboard, a mouse, a touch screen, and the like, and the output devices may include a display screen, such as a liquid crystal display (LCD) or a cathode ray tube (CRT).
For different display devices, in some implementations, the user interface 620 may be an interface for connecting external or internal devices as needed; the connected devices include, but are not limited to, a keypad, a display, a speaker, a microphone, a joystick, and the like.
The processor 600 is responsible for managing the bus architecture and general processing, and the memory 610 may store the instructions and data used by the processor 600 when performing operations.
In some implementations, the processor 600 may be a CPU (central processing unit), an ASIC (application-specific integrated circuit), an FPGA (field-programmable gate array), or a CPLD (complex programmable logic device).
The memory 610 may include read-only memory (ROM) and random access memory (RAM), and provides the processor 600 with the program instructions and data stored in the memory 610. In the embodiments of the present application, the memory 610 may be used to store the program of any of the methods provided in the embodiments of the present application.
The processor 600 calls the program instructions stored in the memory 610 and is configured to execute any of the methods provided in the embodiments of the present application according to the obtained program instructions.
An embodiment of the present application provides a computer-readable non-volatile storage medium for storing the computer program instructions used by the above embodiments of the present application, which contains a program for executing any of the methods provided in the above embodiments of the present application.
The non-volatile storage medium may be any available medium or data storage device accessible to a computer, including but not limited to magnetic storage (such as floppy disks, hard disks, magnetic tape, and magneto-optical (MO) disks), optical storage (such as CDs, DVDs, BDs, and HVDs), and semiconductor memory (such as ROM, EPROM, EEPROM, non-volatile memory (NAND flash), and solid-state drives (SSD)).
The screenshot display method for a display device and the display device provided in the embodiments of the present application can provide the user with a visualized focus state for the objects identified in the screenshot and real-time interactive feedback, so as to meet user needs, improve user experience, reduce the amount of computation, and lower memory consumption.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a device, a system, or a computer program product. Moreover, the present application may take the form of a computer program product implemented on one or more non-volatile computer storage media (including but not limited to disk storage, optical storage, and the like) containing computer program code.
The present application is described with reference to flowcharts and/or block diagrams of the methods, devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and variations to the present application without departing from the spirit and scope of the present application. Thus, if these modifications and variations of the present application fall within the scope of the claims of the present application and their technical equivalents, the present application is also intended to include these changes and variations.

Claims (20)

  1. 一种截图显示方法,该方法包括:
    当显示屏上显示显示画面时,接收遥控器发出的对显示设备的所述显示屏进行截图的第一输入命令;
    响应于所述第一输入命令,捕获所述显示屏的截图;
    自动确定所述截图中的一个或多个可识别对象;
    接收所述遥控器发出的第二输入命令;
    响应于所述第二输入命令以执行如下操作:
    在所述显示屏中的第一显示图层上,显示所述截图;
    显示用于识别第二显示图层中的所述截图中的一个或多个可识别对象的一个或多个对象识别框;
    基于所述遥控器发出的用户选择请求,在位于所述第一显示图层上的所述第二显示图层中,显示所述一个或多个对象识别框中、与所述一个或多个可识别对象中的第一可识别对象对应的第一对象识别框上的焦点框;
    其中,所述第一显示图层和所述第二显示图层为独立刷新。
  2. 如权利要求1所述的方法,在接收所述第二输入命令之前,响应于所述第一输入命令还执行如下操作:
    在所述显示屏的第一区域继续显示所述显示画面;
    在所述显示屏的第二区域显示所述截图;
    根据所述显示屏的第三区域中的所述截图中的所述一个或多个可识别对象的位置信息,按顺序提取和排列所述截图中的所述一个或多个可识别对象;
    其中,所述第二输入命令由所述遥控器在用户激活所述显示屏的所述第二区域之后发出。
  3. 如权利要求2所述的方法,所述第二区域与所述第三区域相邻。
  4. 如权利要求1所述的方法,所述方法还包括:
    在距离所述焦点框的预定范围内,显示所述第一可识别对象的放大图像。
  5. 如权利要求1所述的方法,所述方法还包括:
    接收所述遥控器发出的用于指示截图的第三输入命令;
    响应于所述第三输入命令,在所述显示屏上显示与所述截图关联的二维码信息,以便用户通过其他电子设备扫描所述二维码信息获取所述截图。
  6. 如权利要求5所述的方法,所述二维码信息被显示在位于所述第二显示图层上的第三显示图层中,其中,所述第一显示图层、所述第二显示图层和所述第三显示图层为独立刷新。
  7. 如权利要求6所述的方法,所述方法还包括:
    接收所述遥控器发出的第四输入命令,将所述第四输入命令分派给所述第一显示图层、所述第二显示图层和所述第三显示图层中的一者进行响应。
  8. 如权利要求6所述的方法,所述方法还包括:
    接收所述遥控器发出的用于指示截图的第四输入命令;
    响应于所述第四输入命令,以渐变的方式逐渐在所述显示屏中移走所述第三显示图层中的所述二维码信息。
  9. 如权利要求1所述的方法,该方法还包括:
    遍历所述一个或多个可识别对象中除所述第一可识别对象之外的剩余可识别对象的中心坐标;
    计算所述第一可识别对象的中心坐标与所述剩余可识别对象的中心坐标中的横坐标差值或纵坐标差值;
    当从所述遥控器接收到焦点遍历命令时,取所述剩余可识别对象中,与所述第一可识别对象的中心坐标的横坐标差值或纵坐标差值最小的第二可识别对象,作为所述焦点框将要移动的下一个可识别对象。
  10. 如权利要求9所述的方法,该方法还包括:
    当从所述遥控器接收到所述焦点遍历命令时,响应于与所述横坐标差值或所述纵坐标差值相关联的第二可识别对象和第三可识别对象,根据勾股定理计算所述第二可识别对象和所述第三可识别对象的中心坐标与所述第一可识别对象的中心坐标的距离,从所述第二可识别对象和所述第三可识别对象中,取距离最小的可识别对象,作为所述焦点框将要移动的下一个可识别对象。
  11. 一种显示设备,包括:
    显示屏;
    存储器,配置为存储计算机指令;
    处理器,与所述显示屏和存储器通信,配置为运行所述计算机指令以使得所述显示设备:
    当所述显示屏上显示显示画面时,接收遥控器发出的对显示设备的所述显示屏的截图的第一输入命令;
    响应于所述第一输入命令,捕获所述显示屏的截图;
    自动确定所述截图中的一个或多个可识别对象;
    接收所述遥控器发出的第二输入命令;
    响应于所述第二输入命令以执行如下操作:
    在所述显示屏中的第一显示图层上,显示所述截图;
    显示用于识别第二显示图层中的所述截图中的一个或多个可识别对象的一个或多个对象识别框;
    基于所述遥控器发出的用户选择请求,在所述第一显示图层上的所述第二显示图层中,显示所述一个或多个对象识别框中与所述一个或多个可识别对象中的第一可识别对象对应的第一对象识别框上的焦点框;
    其中,所述第一显示图层和所述第二显示图层为独立刷新。
  12. The display device according to claim 11, wherein before the second input command is received, in response to the first input command, the processor, when executing the computer instructions, is further configured to:
    continue to display the display picture in a first region of the display screen;
    display the screenshot in a second region of the display screen;
    extract and arrange, in order, the one or more identifiable objects in the screenshot in a third region of the display screen according to position information of the one or more identifiable objects in the screenshot;
    wherein the second input command is issued by the remote control after a user activates the second region of the display screen.
  13. The display device according to claim 12, wherein the second region is adjacent to the third region.
  14. The display device according to claim 11, wherein the processor is further configured to execute the computer instructions to:
    display a magnified image of the first identifiable object within a predetermined range from the focus frame.
  15. The display device according to claim 11, wherein the processor is further configured to execute the computer instructions to:
    receive a third input command issued by the remote control for indicating the screenshot;
    in response to the third input command, display QR code information associated with the screenshot on the display screen, so that a user can obtain the screenshot by scanning the QR code information with another electronic device.
  16. The display device according to claim 15, wherein the QR code information is displayed in a third display layer located above the second display layer, and the first display layer, the second display layer and the third display layer are refreshed independently.
  17. The display device according to claim 16, wherein the processor is further configured to execute the computer instructions to:
    receive a fourth input command issued by the remote control, and dispatch the fourth input command to one of the first display layer, the second display layer and the third display layer for response.
  18. The display device according to claim 16, wherein the processor is further configured to execute the computer instructions to:
    receive a fourth input command issued by the remote control for indicating the screenshot;
    in response to the fourth input command, gradually remove the QR code information in the third display layer from the display screen in a fading manner.
  19. The display device according to claim 11, wherein the processor is further configured to execute the computer instructions to:
    traverse center coordinates of the remaining identifiable objects, among the one or more identifiable objects, other than the first identifiable object;
    calculate a horizontal-coordinate difference or a vertical-coordinate difference between the center coordinates of the first identifiable object and the center coordinates of each of the remaining identifiable objects;
    when a focus traversal command is received from the remote control, take, among the remaining identifiable objects, a second identifiable object with the smallest horizontal-coordinate difference or vertical-coordinate difference from the center coordinates of the first identifiable object as the next identifiable object to which the focus frame is to move.
  20. A non-volatile computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the following:
    when a display picture is presented on a display screen, receiving a first input command issued by a remote control for taking a screenshot of the display screen of a display device;
    in response to the first input command, capturing a screenshot of the display screen;
    automatically determining one or more identifiable objects in the screenshot;
    receiving a second input command issued by the remote control;
    in response to the second input command, performing the following operations:
    displaying the screenshot on a first display layer of the display screen;
    displaying, in a second display layer, one or more object recognition boxes for identifying the one or more identifiable objects in the screenshot;
    based on a user selection request issued by the remote control, displaying, in the second display layer located above the first display layer, a focus frame on a first object recognition box corresponding to a first identifiable object among the one or more identifiable objects;
    wherein the first display layer and the second display layer are refreshed independently.
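The claims above repeatedly rely on three mechanisms: display layers that are refreshed independently (claims 1, 6), dispatching a remote-control command to exactly one layer (claim 7), and removing the QR-code layer with a gradual fade (claim 8). The sketch below illustrates how those pieces could fit together; all class and variable names, the topmost-layer-first routing rule, and the five-step opacity ramp are my assumptions, since the claims specify none of these details:

```python
class DisplayLayer:
    """One of the independently refreshed layers from the claims above."""

    def __init__(self, name, accepts):
        self.name = name
        self.accepts = accepts      # predicate: which commands this layer handles
        self.refresh_count = 0      # each layer tracks its own refreshes
        self.opacity = 255

    def refresh(self):
        self.refresh_count += 1     # redrawing one layer leaves the others untouched


def dispatch(command, layers):
    """Hand a remote-control command to exactly one layer (cf. claim 7).

    Routing to the topmost interested layer is an assumption; the claim
    only states that the command is dispatched to one of the layers.
    """
    for layer in layers:            # ordered top (QR) to bottom (screenshot)
        if layer.accepts(command):
            layer.refresh()
            return layer.name
    return None


def fade_out(layer, steps=5):
    """Gradually remove a layer by stepping its opacity to zero (cf. claim 8)."""
    ramp = [round(255 * (steps - i) / steps) for i in range(steps + 1)]
    for value in ramp:
        layer.opacity = value
        layer.refresh()             # only the fading layer is redrawn
    return ramp


screenshot = DisplayLayer("screenshot", accepts=lambda c: True)
boxes = DisplayLayer("recognition", accepts=lambda c: c in ("left", "right", "ok"))
qr = DisplayLayer("qr", accepts=lambda c: c == "hide_qr")

handled_by = dispatch("left", [qr, boxes, screenshot])  # recognition layer responds
ramp = fade_out(qr)                                     # 255 → 0 in equal decrements
```

Note how the per-layer refresh counters make the "independent refresh" property observable: moving the focus frame or fading the QR code never forces the underlying screenshot layer to redraw.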
PCT/CN2019/098446 2018-09-27 2019-07-30 Method and device for displaying a screenshot WO2020063095A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/530,233 US11039196B2 (en) 2018-09-27 2019-08-02 Method and device for displaying a screen shot
US17/322,572 US11812188B2 (en) 2018-09-27 2021-05-17 Method and device for displaying a screen shot

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN201811132364.X 2018-09-27
CN201811132364.XA CN109271983B (zh) 2018-09-27 2018-09-27 Display method and display terminal for objects identified in a screen picture screenshot
CN201811133159.5 2018-09-27
CN201811133159.5A CN109388461A (zh) 2018-09-27 2018-09-27 Display method, apparatus and display terminal for objects identified in a screen picture screenshot
CN201910199952.3A CN109922363A (zh) 2019-03-15 2019-03-15 Graphical user interface method and display device for displaying a screen picture screenshot
CN201910199952.3 2019-03-15

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/530,233 Continuation US11039196B2 (en) 2018-09-27 2019-08-02 Method and device for displaying a screen shot

Publications (1)

Publication Number Publication Date
WO2020063095A1 true WO2020063095A1 (zh) 2020-04-02

Family

ID=69950269

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2019/098446 WO2020063095A1 (zh) 2018-09-27 2019-07-30 Method and device for displaying a screenshot
PCT/CN2019/099631 WO2020063123A1 (zh) 2018-09-27 2019-08-07 Graphical user interface method and display device for displaying a screen picture screenshot

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/099631 WO2020063123A1 (zh) 2018-09-27 2019-08-07 Graphical user interface method and display device for displaying a screen picture screenshot

Country Status (1)

Country Link
WO (2) WO2020063095A1 (zh)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104699800A (zh) * 2015-03-19 2015-06-10 深圳市米家互动网络有限公司 Picture information search method and system, remote control and display terminal
US20170289643A1 (en) * 2016-03-31 2017-10-05 Valeria Kachkova Method of displaying advertising during a video pause
CN108322806A (zh) * 2017-12-20 2018-07-24 青岛海信电器股份有限公司 Smart television and display method for a graphical user interface of a television screen screenshot
CN109271983A (zh) * 2018-09-27 2019-01-25 青岛海信电器股份有限公司 Display method and display terminal for objects identified in a screen picture screenshot
CN109388461A (zh) * 2018-09-27 2019-02-26 青岛海信电器股份有限公司 Display method, apparatus and display terminal for objects identified in a screen picture screenshot
CN109922363A (zh) * 2019-03-15 2019-06-21 青岛海信电器股份有限公司 Graphical user interface method and display device for displaying a screen picture screenshot

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9635195B1 (en) * 2008-12-24 2017-04-25 The Directv Group, Inc. Customizable graphical elements for use in association with a user interface
CN108416018A (zh) * 2018-03-06 2018-08-17 北京百度网讯科技有限公司 Screenshot search method and apparatus, and smart terminal
CN109168069A (zh) * 2018-09-03 2019-01-08 聚好看科技股份有限公司 Method and apparatus for displaying recognition results by region, and smart television


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11102441B2 (en) 2017-12-20 2021-08-24 Hisense Visual Technology Co., Ltd. Smart television and method for displaying graphical user interface of television screen shot
US11558578B2 (en) 2017-12-20 2023-01-17 Hisense Visual Technology Co., Ltd. Smart television and method for displaying graphical user interface of television screen shot
US11601719B2 (en) 2017-12-20 2023-03-07 Juhaokan Technology Co., Ltd. Method for processing television screenshot, smart television, and storage medium
US11812189B2 (en) 2017-12-20 2023-11-07 Hisense Visual Technology Co., Ltd. Smart television and method for displaying graphical user interface of television screen shot

Also Published As

Publication number Publication date
WO2020063123A1 (zh) 2020-04-02

Similar Documents

Publication Publication Date Title
CN109271983B (zh) Display method and display terminal for objects identified in a screen picture screenshot
US11812188B2 (en) Method and device for displaying a screen shot
CN105307000B (zh) Display device and method thereof
US9880727B2 (en) Gesture manipulations for configuring system settings
CN104038807A (zh) OpenGL-based layer blending method and apparatus
CN105191330A (zh) Display apparatus and method for providing a graphical user interface screen thereof
US10855481B2 (en) Live ink presence for real-time collaboration
WO2021088422A1 (zh) Method and apparatus for notifying application messages
CN109388461A (zh) Display method, apparatus and display terminal for objects identified in a screen picture screenshot
CN115134649B (zh) Method and system for presenting interactive elements within video content
WO2020063095A1 (zh) Method and device for displaying a screenshot
CN114385052B (zh) Dynamic display method for a Tab bar and three-dimensional display device
CN110971953B (zh) Video playback method, apparatus, terminal and storage medium
JP2012022632A (ja) Information processing apparatus and control method therefor
US11024257B2 (en) Android platform based display device and image display method thereof
WO2023071861A1 (zh) Data visualization display method, apparatus, computer device and storage medium
CN112541960A (zh) Method and apparatus for rendering a three-dimensional scene, and electronic device
CN101599263B (zh) Mobile terminal and display method for its on-screen display interface
US20170031583A1 (en) Adaptive user interface
CN114741016A (zh) Operation method and apparatus, electronic device and computer-readable storage medium
JP6695402B2 (ja) Display system and display program
CN114302206B (zh) Content display method, display device and server
CN114302206A (zh) Content display method, display device and server
CN116136733A (zh) Multi-screen display method, display control apparatus and device, and multi-screen display system
CN115700450A (zh) Control method and apparatus for a whiteboard application, and smart interactive panel

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19866003

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19866003

Country of ref document: EP

Kind code of ref document: A1