WO2020048447A1 - Method and device for displaying recognition results by region, and smart TV - Google Patents

Method and device for displaying recognition results by region, and smart TV Download PDF

Info

Publication number
WO2020048447A1
WO2020048447A1 · PCT/CN2019/104179 · CN2019104179W
Authority
WO
WIPO (PCT)
Prior art keywords
server
recognition result
area
recognition
display
Prior art date
Application number
PCT/CN2019/104179
Other languages
English (en)
French (fr)
Inventor
高斯太
宋虎
于芝涛
Original Assignee
聚好看科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 聚好看科技股份有限公司
Publication of WO2020048447A1

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs

Definitions

  • the present disclosure relates to the technical field of electronic device display, and in particular, to a method, a device, and a smart TV for displaying the recognition results in different regions.
  • a smart TV has a fully open platform and runs an operating system. While enjoying ordinary TV content, users can engage in two-way human-computer interaction; the smart TV integrates audio and video, entertainment, data, and other functions, meets users' diverse and personalized demands, and brings them a more convenient experience.
  • at present, after a screenshot instruction is triggered, a smart TV display terminal generally lays out the recognition results recognized from the screenshot image as shown in FIG. 1.
  • the smart TV's display screen shows a graphical user interface that includes a first display area presenting the currently playing content and a second display area presenting the recognition results of the screenshot image.
  • the second display area is a collection of multiple option bars.
  • the first display area continues to play the current content, while the option bars in the second display area show a thumbnail of the screenshot image and the recognition results identified from it.
  • the second display area that shows the recognition results of the screenshot image is displayed at the edge of the screen, for example on the right side. As can be seen from FIG. 1, the thumbnail of the screenshot image, the recognition results identified from it, and the user control instruction input interfaces associated with the screenshot function are all compressed and stacked in the second display area, and the recognition results are not classified. In this case, the user still needs to pick the information of interest out of the recognition results stacked together, or all recognition results are displayed in a stack and the user cannot set which information to see or not to see; the user experience is poor.
  • the purpose of the present disclosure is to provide a method, a device, and a smart TV for displaying recognition results by region. Different types of recognition results are displayed in different regions, with obvious visual boundaries between them. This lays the groundwork for users to selectively display the recognition results they need, or to turn off the recognition results they do not want to see, enhancing the convenience and autonomy of operation and improving the user experience.
  • a method for displaying a recognition result by region is provided.
  • the method is applied to a display terminal and includes the following steps:
  • receiving a screenshot instruction, taking a screenshot of the current interface of the display terminal, and obtaining a screenshot image;
  • the server includes a first server and a second server, the second server being a face recognition server, and sending the screenshot image to the server includes: sending the screenshot image to the second server through the first server;
  • receiving the recognition result returned by the server based on the screenshot image search includes: receiving, through the first server, a second recognition result obtained by the second server performing face recognition on the screenshot image, and a first recognition result obtained by the first server searching according to the second recognition result.
  • the server further includes a third server, and the screenshot image and/or the second recognition result is sent to the third server for searching to obtain a third recognition result.
  • before displaying the different types of recognition results in different areas, the method further includes: determining the type of each recognition result according to the server that returned it.
  • displaying the different types of recognition results in different areas specifically includes: extracting attribute tags of the recognition results and determining the type of each recognition result by judging whether its attribute tags include a price tag.
  • the recognition result display area includes a first region, a second region, and a third region, with boundaries set between the first, second, and third regions; the third region is a thumbnail of the screenshot image.
  • the method further includes:
  • when the user selects a recognition result in the first region, the recognition result display area is hidden, and the detail page of the selected recognition result is displayed on the same side as the first region;
  • when the user selects a recognition result in the second region, the recognition result display area is hidden, and the detail page of the selected recognition result is displayed on the same side as the second region.
  • when the user selects the third region, the recognition result display area is hidden, and the thumbnail of the third region is enlarged to cover the current screen of the display terminal.
  • a command receiving and processing unit is configured to receive a screen capture instruction to screen capture a current screen of the display terminal to obtain a screen capture image
  • a sending unit configured to send the screenshot image to a server
  • a receiving unit configured to receive a recognition result returned by the server based on the screenshot image search
  • the display unit is configured to display different types of recognition results in different regions, and a boundary is set between the different regions.
  • the server includes a first server and a second server
  • the second server is a face recognition server
  • the device for displaying the recognition result by region further includes:
  • a second recognition unit configured to perform face recognition on the screenshot image by the second server to obtain a second recognition result
  • a first recognition unit is configured to search the first server according to the second recognition result to obtain a first recognition result.
  • the server further includes a third server
  • the device for displaying the recognition result by region further includes:
  • a third recognition unit is configured to send the screenshot image and / or the keywords of the second recognition result to a third server for searching to obtain a third recognition result.
  • a smart TV including:
  • Memory for storing program instructions
  • the processor is configured to execute a computer program stored in the memory to implement the method steps of displaying the recognition result in different regions.
  • Receiving a screenshot instruction taking a screenshot of the current interface of the display terminal, and obtaining a screenshot image
  • a readable storage medium stores smart-TV-executable instructions, and the instructions are used to cause the smart TV to execute the method of the first aspect of the disclosure.
  • the user triggers a screen capture instruction to obtain a screen shot image
  • the server obtains a full range of information related to the screenshot image, including people, products, knowledge, videos, and so on, and places different types of results in different areas for display, with obvious visual boundaries between the areas. Users can choose to display the information they want to know or close the information they do not want to see, which enhances the convenience and autonomy of operation and improves the user experience.
  • FIG. 1 shows an example diagram of a recognition result display interface searched according to a trigger screenshot instruction in the prior art
  • FIG. 2 shows a schematic diagram of an implementation environment involved in the present disclosure
  • FIG. 3 shows a flowchart of a method for displaying recognition results by region according to an embodiment of the present disclosure
  • FIG. 4 is a diagram showing an example of displaying a recognition result interface by region according to an embodiment of the present disclosure
  • FIGS. 5 (a) -5 (c) are diagrams showing an example of an enlarged screenshot interface provided by an embodiment of the present disclosure
  • FIG. 6 is an exemplary diagram of an interface after selecting a certain recognition result of a first region according to an embodiment of the present disclosure
  • FIGS. 7(a)-7(b) are diagrams showing an example of an interface after selecting a certain recognition result of a second region according to an embodiment of the present disclosure
  • FIG. 8 is a diagram illustrating an example of an interface for setting a display result of a recognition result according to an embodiment of the present disclosure
  • FIG. 9 is a schematic structural diagram of a recognition result subregion display device according to an embodiment of the present disclosure.
  • FIG. 2 is a schematic diagram of an implementation environment according to the present disclosure according to an exemplary embodiment.
  • An implementation environment related to the present disclosure includes a display terminal 100.
  • the display terminal 100 may use the screen shot processing method provided by the present disclosure to obtain a screen shot image of the current display interface.
  • the display terminal 100 includes, but is not limited to, a network device having a screen capture processing function, such as a smart TV, a mobile phone, a tablet computer, a notebook computer, and a desktop computer.
  • the smart TV is taken as an example in the embodiments of the present disclosure.
  • the implementation environment further includes a first server 200 and one or more second servers 300.
  • the implementation environment may further include one or more third servers 400.
  • the first server 200 is a local server, and is configured to receive the screenshot image uploaded by the display terminal 100 and send the screenshot image to the second server 300.
  • the second server 300 refers to a server having a cooperation agreement with the first server 200.
  • in some embodiments, the second server 300 is a server with a face recognition function, that is, a face recognition server.
  • the second server 300 compares the screenshot image sent by the first server 200 with its own big data to obtain information about the person in the screenshot image.
  • the information includes a person's name, and the person's information is fed back to the first server 200.
  • the first server 200 searches the local database for a video related to the person's name by using the person's name, and sends the person's information and the related video to the display terminal 100 as a recognition result.
  • the third server 400 refers to other servers that have reached a cooperation agreement with the first server 200.
  • the third server 400 may be a server with an information search function (such as a Baidu server) or a server with a search-by-image function (such as a Taobao server).
  • the display terminal 100 calls an API (Application Programming Interface) of the third server 400 and sends the person name and/or the screenshot image to the third server 400; the third server 400 compares the received person name and/or screenshot image with its own big data to obtain the corresponding recognition result and returns it to the display terminal 100.
  • the specific implementation process can refer to the specific explanation of the following embodiments.
  • a method for displaying a recognition result by region is shown in FIG. 3 and includes the following steps:
  • Step S301: receive a screenshot instruction, take a screenshot of the current interface of the display terminal, and obtain a screenshot image.
  • the display terminal may be the display terminal 100 in the implementation environment shown in FIG. 2, such as a smart TV, a smart TV set-top box, and the like.
  • the current interface refers to the display interface of a smart TV or smart TV set-top box.
  • the screen capture instruction may be sent by a control device such as a remote control.
  • the display terminal receives the screen capture instruction sent by the control device such as the remote control, and triggers the display terminal to perform subsequent screen capture operations to obtain the currently displayed screen content.
  • the screenshot processing method of the present disclosure is not limited to deploying corresponding processing logic in the display terminal 100, and may also be processing logic deployed in other machines.
  • the processing logic of the screen shot processing method is deployed in the display terminal 100 having computing capability.
  • Step S302: send the screenshot image to a server.
  • the server includes a first server 200 and one or more second servers 300;
  • the first server 200 is a local server, and is configured to receive the screenshot image uploaded by the display terminal 100, and may send the screenshot image to the second server 300.
  • the local server also searches for the identification information related to the screenshot image and returns it to the display terminal 100.
  • the second server 300 is a server with a face recognition function, that is, a face recognition server.
  • the second server 300 performs face recognition on the screenshot image, that is, it compares the received screenshot image with its own big data for face recognition and obtains the information of the person in the screenshot image, namely the second recognition result; the person information includes the person's name.
  • for example, if the second recognition result includes the person name "Song **", the first server 200 uses "Song **" to search a local database (such as Hisense's Juhao database) for video works performed by "Song **" as the first recognition result.
  • the server further includes one or more third servers 400.
  • the third server 400 refers to another server that has reached a cooperation agreement with the first server 200.
  • the third server 400 may be a server with an information search function (such as a Baidu server) or a server with a search-by-image function (such as a Taobao server).
  • the display terminal 100 calls an API (Application Programming Interface) of the third server 400.
  • Step S303: receive the recognition result returned by the server based on the screenshot image search.
  • the recognition result includes the face recognition result searched by the second server 300, that is, the second recognition result, and the first recognition result searched by the first server 200, where the first recognition result is obtained by the first server 200 searching according to keywords of the second recognition result.
  • the first server 200 sends the second recognition result fed back by the second server 300 and the first recognition result searched by the first server 200 itself to the display terminal 100.
  • the recognition result further includes a third recognition result obtained by the third server 400 according to the second recognition result and / or a screenshot image search, and the third server 400 sends the third recognition result to the display terminal 100.
  • Step S304: display different types of recognition results in different regions, with boundaries set between the regions.
  • the type of the recognition result is determined according to the server.
  • through background configuration, the recognition result returned by the Taobao server is placed for display in the first area 201 of the recognition result display area, as shown in FIG. 4.
  • the top left corner of the first area 201 is marked "similar items", and the first area 201 sequentially displays items similar to the screenshot image searched from the Taobao server, including each item's image and price.
  • the recognition results of the servers other than the Taobao server are displayed in the second area 202 of the recognition result display area.
  • the second display region that displays the recognition results includes a first region 201, a second region 202, and a third region 203, with boundaries set between the first region 201, the second region 202, and the third region 203.
  • the third area 203 is a thumbnail of the screenshot image, located in the middle of the second display area.
  • a floating layer is also provided at the lower left corner of the thumbnail. The floating layer displays a QR code together with the prompts "Scan the code to obtain a screenshot" and "Press OK to zoom in".
  • after the user scans the QR code with a smart terminal such as a mobile phone or tablet, the screenshot image is sent to the terminal that scanned the code, and the user can then share the screenshot image received on that device.
  • when the user presses "OK", a screenshot-enlarged interface is obtained. As shown in FIG. 5(a), the second display area is hidden, and the thumbnail of the third area 203 is enlarged to cover the current screen of the display terminal (that is, the first display area showing the currently playing content); there is a floating layer on the left side of the screenshot zoom interface.
  • a QR code is displayed on the floating layer, along with the prompts "Scan the code to obtain a screenshot" and "Push to phone"; recognition area frames are marked on the faces of the people in the screenshot zoom interface.
  • the initial position of the focus in the screenshot zoom interface defaults to the recognition area frame A on the leftmost side of the screen. The user can select the content to be recognized by moving the focus; after pressing the right arrow key, as shown in FIG. 5(c), the focus switches to recognition area frame B.
  • the first region 201 is located on the left side of the third region 203 and the second region 202 is located on the right side of the third region 203.
  • the second area 202 includes content identification, related people, related information, similar pictures, and the like. These contents are provided by the first server, the second server, and third servers other than the Taobao server, searched according to the screenshot image and/or the person name obtained by the second server's face recognition.
  • when the user chooses to view a recognition result in the first area 201, as shown in FIG. 6, the second display area is hidden, and the detail page 2011 of the selected recognition result is displayed on the same side as the first area 201. That is, since the first area 201 is located on the left half of the second display region, the detail page 2011 is displayed in the left-side area of the first display area; the detail page is a semi-transparent layer covering the left area of the first display area.
  • on the detail page, information such as recommended photos, product tags, and the price tag of the product is displayed.
  • the focus moves to “Related Information”.
  • when the user chooses to view a recognition result in the second area 202, the second display area is hidden, and the detail page 2021 of the selected recognition result is displayed on the same side as the second area 202. That is, since the second area 202 is located on the right half of the second display area, the detail page 2021 is displayed on the right side of the first display area; the detail page 2021 is a translucent layer covering the right-side area of the first display area.
  • the recognition result returned by the Taobao server is placed in the first area 201 in the second display area, and the recognition result returned by the server other than the Taobao server is placed in the second area 202 in the second display area.
  • the recognition result returned by the server is a "similar item" related to the product.
  • thumbnails of the third area 203 can also be located at the far left or right of the second display area.
  • the recognition results of other servers are sequentially displayed on the other side of the third area 203.
  • the display positions of the second region 202 and the third region 203 are not limited herein.
  • the user can also choose, via settings, to turn off the display of the recognition results of the third area and/or the second area; details are not repeated here.
  • with the methods of the above embodiments, different types of results are displayed in different areas with obvious visual boundaries between them. Users can choose to display the information they want to know or turn off the information they do not want to see, which enhances the convenience and autonomy of operation and improves the user experience.
  • Embodiment 2: another method for displaying recognition results by region provided by the embodiments of the present disclosure.
  • what is different from Embodiment 1 is that in this embodiment, whether a recognition result is a product result is determined by extracting the attribute tags of the recognition result and judging whether the attribute tags include a price tag.
  • the method includes the following steps:
  • Step S401 receiving a screenshot instruction, taking a screenshot of the current interface of the display terminal, and obtaining a screenshot image;
  • Step S402 sending the screenshot image to the server.
  • Step S403 receiving the recognition result returned by the server based on the screenshot image search;
  • for steps S401, S402, and S403, refer to the content shown in Embodiment 1 of the present disclosure; details are not repeated here.
  • Step S404 Display different types of recognition results in different regions, and set boundaries between the different regions.
  • the attribute tags of the recognition result are extracted, and it is determined whether they include a price tag. If so, the recognition result is judged to be a product-type result and is displayed in the first area; if not, it is judged to be a non-product-type result and is displayed in the second area.
  • the attribute tag of the recognition result includes one or more of the tags: personName, title, location, sale price, quantity, etc.
  • whether the returned recognition result is a product is determined by whether its attribute tags include the price tag salePrice; if the price tag is included, the result is displayed in the first area 201, and if the price tag salePrice is not included, the result is judged to be non-product and is displayed in the second area 202.
  • the user can select the display of the recognition results according to his preferences or needs, which enhances the convenience and autonomy of the operation and improves the user experience.
  • An embodiment of the present disclosure provides a device for displaying regions of recognition results.
  • the apparatus for performing the methods shown in Embodiments 1 and / or 2 of the present disclosure includes:
  • a command receiving and processing unit configured to receive a screenshot instruction, and take a screenshot of the current picture of the display terminal to obtain a screenshot image
  • a sending unit configured to send the screenshot image to a server
  • a receiving unit configured to receive an identification result returned by the server based on the screenshot image search
  • the display unit is configured to display different types of recognition results in different regions, and a boundary is set between the different regions.
  • the server includes a first server and a second server
  • the second server is a face recognition server
  • the recognition result subregion display device further includes:
  • a second recognition unit configured to perform face recognition on the screenshot image by the second server to obtain a second recognition result
  • a first recognition unit is configured to search the first server according to the second recognition result to obtain a first recognition result.
  • the server further includes a third server
  • the device for displaying the recognition result by region further includes:
  • a third recognition unit is configured to send the screenshot image and / or the keywords of the second recognition result to a third server for searching to obtain a third recognition result.
  • the user can select the display of the recognition result according to his preferences or needs, which enhances the convenience and autonomy of the operation and improves the user experience.
  • An embodiment of the present disclosure provides a smart TV, including:
  • Memory for storing program instructions
  • a processor configured to execute a computer program stored on the memory
  • the stored computer program is used for receiving a screenshot instruction, taking a screenshot of the current interface of the display terminal, and obtaining a screenshot image;
  • sending the screenshot image to the server specifically includes: sending the screenshot image to the second server through the first server, where the server includes a first server and a second server, and the second server is a face recognition server;
  • the server further includes a third server, and the display terminal sends the screenshot image and / or the second recognition result to a third server for searching to obtain a third recognition result.
  • displaying the different types of recognition results in different regions specifically includes: determining the type of each recognition result according to the server that returned it.
  • displaying the different types of recognition results in different areas specifically includes: extracting attribute tags of the recognition results and determining the type of each recognition result by judging whether its attribute tags include a price tag.
  • the recognition result display area includes a first region, a second region, and a third region, with boundaries set between the first, second, and third regions; the third region is a thumbnail of the screenshot image.
  • the method further includes:
  • when the user selects a recognition result in the first region, the recognition result display area is hidden, and the detail page of the selected recognition result is displayed on the same side as the first region;
  • when the user selects a recognition result in the second region, the recognition result display area is hidden, and the detail page of the selected recognition result is displayed on the same side as the second region.
  • when the user selects the third region, the recognition result display area is hidden, and the thumbnail of the third region is enlarged to cover the current screen of the display terminal.
  • An embodiment of the present disclosure provides a readable storage medium, where the readable storage medium stores smart-TV-executable instructions, and the instructions are used to cause the smart TV to execute the method disclosed in the foregoing embodiments.
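The device described in the unit claims above can be illustrated with a minimal sketch. All class, method, and field names below are illustrative assumptions (the patent defines no API); only the routing rule follows the disclosure: product-type results go to the first region, other results to the second, and the screenshot thumbnail occupies the third.

```python
class RegionDisplayDevice:
    """Routes recognition results into bounded display regions."""

    def __init__(self):
        # first region: product results; second: non-product results;
        # third: the screenshot thumbnail (a boundary separates regions)
        self.regions = {"first": [], "second": [], "third": None}

    def receive_screenshot_instruction(self, current_frame):
        # command receiving/processing unit: capture the current picture
        return {"image": current_frame}

    def display(self, results, thumbnail):
        # display unit: different result types go to different regions
        for result in results:
            region = "first" if result.get("type") == "product" else "second"
            self.regions[region].append(result)
        self.regions["third"] = thumbnail
        return self.regions
```

The sending and receiving units would wrap the server round-trip and are omitted here.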

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to a method and device for displaying recognition results by region, and a smart TV, including: receiving a screenshot instruction, taking a screenshot of the current interface of the display terminal, and obtaining a screenshot image; sending the screenshot image to a server; receiving the recognition result returned by the server based on the screenshot image search; and displaying different types of recognition results in different regions, with boundaries set between the regions. The method for displaying recognition results by region provided by one or more embodiments of the present disclosure displays the results in different regions according to their type, laying the groundwork for users to subsequently make targeted selections of the recognition results they need to display or to turn off the recognition results they do not want to see, enhancing the convenience and autonomy of operation and improving the user experience.

Description

Method and device for displaying recognition results by region, and smart TV
CROSS-REFERENCE TO RELATED APPLICATIONS
The present disclosure claims priority to the Chinese patent application filed with the China Patent Office on September 3, 2018, with publication number 201811021566.7 and titled "Method and device for displaying recognition results by region, and smart TV", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the technical field of electronic device display, and in particular to a method and device for displaying recognition results by region, and a smart TV.
背景技术
智能电视具有全开放式平台,搭载了操作系统,用户在欣赏普通电视内容的同时,可实现双向人机交互功能,集影音、娱乐、数据等多种功能于一体,满足用户多样化和个性化的需求,给用户带来了更便捷的体验。
目前市场上的智能电视中很多具有图像识别功能。即,用户在看电视时对于电视画面中出现的某些信息感兴趣,如想要了解视频中的演员是谁,演员穿的衣服、配饰,视频当前画面中的家具、汽车、摆设等,这时用户可以在播放该电视画面时通过遥控器发送截屏指令。然后,智能电视就会截屏,然后将截屏图片存储、分享和/或进行图片识别,并在本地数据库或云端或第二服务器的大数据中搜索关于截屏图像中的内容,并将识别结果显示在电视屏幕上。
在现有技术中,智能电视显示终端一般具有如图1所示的触发截屏指令后根据截屏图像识别出的识别结果的界面布局。当视频正在智能电视上播放时,通过点击遥控器、触摸屏幕或者手势等方式触发截屏指令后,智能电视的显示屏上显示图形用户界面,包括显示当前播放内容的第一显示区和显示截屏图像的识别结果的第二显示区。其中,第二显示区为多个选项栏集合,第一 显示区继续播放当前播放内容,第二显示区中选项栏则显示截屏图像的缩略图、以及根据截屏图像识别出的识别结果,或与截图关联功能操作的用户控制指令输入接口。
其中,显示截屏图像的识别结果的第二显示区在屏幕边缘区显示,如:右侧,从图1中我们可以看到,截屏图像的缩略图、以及根据截屏图像识别出的识别结果,或与截图关联功能操作的用户控制指令输入接口等等,全部压缩堆叠显示在第二显示区内。没有对识别结果进行分类,这种情况下,用户还需要从堆叠在一起的识别结果中选择出自己想要了解的信息,又或者所有识别结果都会堆叠显示出来,用户无法设置想要看到的信息,或者不想看到的信息,用户体验较差。
需要说明的是,在上述背景技术部分公开的信息仅用于加强对本公开的背景的理解,因此可以包括不构成对本领域普通技术人员已知的现有技术的信息。
SUMMARY
An object of the present disclosure is to provide a method and apparatus for displaying recognition results by region, and a smart TV, which display different types of recognition results in different areas with clear visual boundaries between the areas, thereby providing a basis for the user to selectively display the recognition results they need, or to close the recognition results they do not want to see, enhancing the convenience and autonomy of operation and improving the user experience.
Other features and advantages of the present disclosure will become apparent from the following detailed description, or may be learned in part through practice of the present disclosure.
According to a first aspect of the present disclosure, a method for displaying recognition results by region is provided. The method is applied to a display terminal and includes the following steps:
receiving a screenshot instruction and taking a screenshot of the current interface of the display terminal to obtain a screenshot image;
sending the screenshot image to a server;
receiving recognition results returned by the server and obtained by searching based on the screenshot image;
displaying different types of recognition results in different areas, with boundaries set between the different areas.
Further, the server includes a first server and a second server, the second server being a face recognition server, and sending the screenshot image to the server includes: sending the screenshot image to the second server through the first server;
receiving the recognition results returned by the server and obtained by searching based on the screenshot image includes:
receiving, through the first server, a second recognition result obtained by the second server performing face recognition on the screenshot image, and a first recognition result obtained by the first server searching according to the second recognition result.
Further, the server also includes a third server; the screenshot image and/or the second recognition result are sent to the third server for searching to obtain a third recognition result.
Further, before displaying the different types of recognition results in different areas, the method further includes: determining the type of the recognition results according to the server.
Further, displaying the different types of recognition results in different areas specifically includes: extracting attribute tags of the recognition results, and determining the type of each recognition result by judging whether its attribute tags include a price tag.
Further, the recognition result display area includes a first region, a second region, and a third region, with boundaries set between the first region, the second region, and the third region, the third region being a thumbnail of the screenshot image.
Further, after displaying the recognition results by region, the method further includes:
when the user selects a recognition result in the first region for viewing, hiding the recognition result display area and displaying a detail page of the selected recognition result on the same side as the first region;
when the user selects a recognition result in the second region for viewing, hiding the recognition result display area and displaying a detail page of the selected recognition result on the same side as the second region.
Further, when the user selects the third region for viewing, the recognition result display area is hidden, and the thumbnail in the third region is enlarged to cover the current picture of the display terminal.
According to a second aspect of the present disclosure, an apparatus for displaying recognition results by region is provided, including: a command receiving and processing unit, configured to receive a screenshot instruction and take a screenshot of the current picture of the display terminal to obtain a screenshot image;
a sending unit, configured to send the screenshot image to a server;
a receiving unit, configured to receive recognition results returned by the server and obtained by searching based on the screenshot image;
a display unit, configured to display different types of recognition results in different areas, with boundaries set between the different areas.
Further, the server includes a first server and a second server, the second server being a face recognition server, and the apparatus for displaying recognition results by region further includes:
a second recognition unit, configured for the second server to perform face recognition on the screenshot image to obtain a second recognition result;
a first recognition unit, configured for the first server to search according to the second recognition result to obtain a first recognition result.
Further, the server also includes a third server, and the apparatus for displaying recognition results by region further includes:
a third recognition unit, configured to send the screenshot image and/or keywords of the second recognition result to the third server for searching to obtain a third recognition result.
According to a third aspect of the present disclosure, a smart TV is provided, including:
a display;
a memory, configured to store program instructions;
a processor, configured to execute the computer program stored in the memory to implement the steps of the method for displaying recognition results by region:
receiving a screenshot instruction and taking a screenshot of the current interface of the display terminal to obtain a screenshot image;
sending the screenshot image to a server, the server searching to obtain recognition results;
receiving the recognition results returned by the server and obtained by searching based on the screenshot image;
displaying different types of recognition results in different areas, with boundaries set between the different areas.
According to a fourth aspect of the present disclosure, a readable storage medium is provided. The readable storage medium stores smart-TV-executable instructions, and the smart-TV-executable instructions are used to cause the smart TV to execute the method disclosed in the first aspect.
It can be seen from the above technical solutions that the method and apparatus for displaying recognition results by region and the smart TV in the exemplary embodiments of the present disclosure have at least the following advantages and positive effects:
First, a screenshot image is obtained when the user triggers a screenshot instruction; comprehensive information related to the screenshot image, including persons, commodities, knowledge and news, and videos, is obtained through the server; different types of results are displayed in different areas with clear visual boundaries between them; and the user can, through settings, show the information they want to know or close the information they do not want to see, which enhances the convenience and autonomy of operation and improves the user experience.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the scope of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure. Obviously, the drawings described below are only some embodiments of the present disclosure, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 shows an example of a prior-art interface displaying the recognition results found in response to a triggered screenshot instruction;
Fig. 2 shows a schematic diagram of an implementation environment involved in the present disclosure;
Fig. 3 shows a flowchart of the method for displaying recognition results by region provided by an embodiment of the present disclosure;
Fig. 4 shows an example of the interface displaying recognition results by region provided by an embodiment of the present disclosure;
Figs. 5(a)-5(c) show examples of the enlarged-screenshot interface provided by an embodiment of the present disclosure;
Fig. 6 shows an example of the interface after a recognition result in the first region is selected, provided by an embodiment of the present disclosure;
Figs. 7(a)-7(b) show examples of the interface after a recognition result in the second region is selected, provided by an embodiment of the present disclosure;
Fig. 8 shows an example of the recognition result display settings interface provided by an embodiment of the present disclosure;
Fig. 9 shows a schematic structural diagram of the apparatus for displaying recognition results by region provided by an embodiment of the present disclosure.
DETAILED DESCRIPTION
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments can be implemented in various forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concepts of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of the embodiments of the present disclosure. However, those skilled in the art will recognize that the technical solutions of the present disclosure may be practiced while omitting one or more of the specific details, or other methods, components, devices, steps, etc. may be employed. In other instances, well-known technical solutions are not shown or described in detail to avoid obscuring aspects of the present disclosure.
In this specification, the terms "a", "an", "the", and "said" are used to indicate the presence of one or more elements/components/etc.; the terms "include" and "have" are used in an open-ended, inclusive sense and mean that additional elements/components/etc. may be present in addition to those listed; the terms "first", "second", etc. are used only as labels and do not limit the number of their objects; the term "multiple" means two or more.
In addition, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and repeated description thereof will be omitted. Some of the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically independent entities.
The present disclosure first provides a method for displaying recognition results by region, the method being applied to a display terminal. Fig. 2 is a schematic diagram of an implementation environment involved in the present disclosure according to an exemplary embodiment. The implementation environment involved in the present disclosure includes a display terminal 100. The display terminal 100 may use the screenshot processing method provided by the present disclosure to obtain a screenshot image of the currently displayed interface.
The display terminal 100 includes, but is not limited to, network devices with screenshot processing capability such as smart TVs, mobile phones, tablets, laptops, and desktop computers. In the embodiments of the present disclosure, a smart TV is taken as an example.
As required, the implementation environment further includes a first server 200 and one or more second servers 300, and may further include one or more third servers 400. The first server 200 is a local server configured to receive the screenshot image uploaded by the display terminal 100 and send the screenshot image to the second server 300. The second server 300 is a server with a face recognition function that has a cooperation agreement with the first server 200, i.e., the second server 300 is a face recognition server. The second server 300 compares the screenshot image sent by the first server 200 with its own big data to obtain information about the person in the screenshot image, the person information including the person's name, and feeds the person information back to the first server 200. The first server 200 searches its local database for videos related to the person's name and sends the person information and the related videos to the display terminal 100 as recognition results.
The third server 400 is another server that has a cooperation agreement with the first server 200. The third server 400 may be a third server with an information search function (such as a Baidu server) or a third server with a search-by-image function (such as a Taobao server). The display terminal 100 calls the API (Application Programming Interface) of the third server 400 and sends the person's name and/or the screenshot image to the third server 400. The third server 400 compares the received person's name and/or screenshot image with its own big data, obtains corresponding recognition results, and returns them to the display terminal 100. The specific implementation process can be found in the detailed explanation of the following embodiments.
Embodiment 1
A method for displaying recognition results by region, applied to a display terminal, as shown in Fig. 3, includes the following steps:
Step S301: receiving a screenshot instruction and taking a screenshot of the current interface of the display terminal to obtain a screenshot image.
The display terminal may be the display terminal 100 in the implementation environment shown in Fig. 2, such as a smart TV or a smart TV set-top box. The current interface refers to the display interface of the smart TV or set-top box. The screenshot instruction may be sent by a control device such as a remote control; the display terminal receives the screenshot instruction sent by the control device, which triggers the display terminal to perform the subsequent screenshot operation and capture the picture content currently being displayed.
It should be noted that the screenshot processing method of the present disclosure is not limited to deploying the corresponding processing logic in the display terminal 100; the processing logic may also be deployed in other machines. For example, the processing logic of the screenshot processing method may be deployed in a display terminal 100 having computing capability.
Step S302: sending the screenshot image to a server.
The server includes a first server 200 and one or more second servers 300.
The first server 200 is a local server configured to receive the screenshot image uploaded by the display terminal 100 and to send the screenshot image to the second server 300; the local server also returns the recognition information found in relation to the screenshot image to the display terminal 100.
The screenshot image is sent to the second server 300 through the first server 200. The second server 300 is a server with a face recognition function, i.e., a face recognition server. The second server 300 performs face recognition on the screenshot image, that is, it compares the received screenshot image with its own big data to perform face recognition and obtains information about the person in the screenshot image, i.e., a second recognition result, the person information including the person's name.
The second recognition result is sent to the first server (the local server) 200, and the first server 200 extracts the person's name from the second recognition result and performs a search. For example, if the second recognition result contains the name "Song **", the first server 200 uses "Song **" to search a local database (such as Hisense's Juhaokan database) for video works featuring "Song **" as the first recognition result.
As a preferred embodiment, the server further includes one or more third servers 400. The third server 400 is another server that has a cooperation agreement with the first server 200; it may be a third server with an information search function (such as a Baidu server) or a third server with a search-by-image function (such as a Taobao server). The display terminal 100 calls the API (Application Programming Interface) of the third server 400, sends the name "Song **" to the Baidu server, which has an information search function, to obtain the Baidu Baike content for "Song **", and sends the screenshot image to the Taobao server, which has a search-by-image function, to identify commodity information such as clothes, accessories, and furniture in the screenshot image. The above results are returned to the display terminal 100 as the third recognition result.
Step S303: receiving the recognition results returned by the server and obtained by searching based on the screenshot image.
The recognition results include the face recognition result found by the second server 300, i.e., the second recognition result, and the first recognition result found by the first server 200, where the first recognition result is obtained by the first server 200 searching according to keywords of the second recognition result. The first server 200 sends the second recognition result fed back by the second server 300 and the first recognition result found by the first server 200 itself to the display terminal 100.
The recognition results also include the third recognition result found by the third server 400 according to the second recognition result and/or the screenshot image; the third server 400 sends the third recognition result to the display terminal 100.
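The two-stage flow in steps S302-S303 — the second server recognizes a face and returns a person's name, and the first server then searches its video database with that name — can be sketched as follows. Every function name and the shape of the result dictionaries are illustrative assumptions for this sketch, not interfaces from the disclosure.

```python
# Hedged sketch of the two-stage recognition flow: the second (face
# recognition) server yields a person's name (second recognition result),
# and the first (local) server searches videos by that name (first
# recognition result). Both "servers" are stand-in functions.

def face_recognize(screenshot: bytes) -> dict:
    # Second server: match the screenshot against a face database.
    return {"personName": "Song **"}

def search_videos_by_name(name: str) -> list:
    # First server: look up videos featuring that person locally.
    return [{"title": f"Drama featuring {name}", "type": "video"}]

def recognize(screenshot: bytes) -> dict:
    second = face_recognize(screenshot)
    first = search_videos_by_name(second["personName"])
    return {"second": second, "first": first}

results = recognize(b"screenshot-bytes")
```

In the actual system each function would be a network call, with the first server orchestrating the round trip; the display terminal only sees the aggregated results.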
Step S304: displaying different types of recognition results in different areas, with boundaries set between the different areas.
As a preferred embodiment, the type of a recognition result is determined according to the server. In this embodiment, through background configuration, the recognition results returned by the Taobao server are displayed in the first region 201 of the recognition result display area. As shown in Fig. 4, the upper left corner of the first region 201 is labeled "Similar items", and items similar to those in the screenshot image found on the Taobao server are displayed in sequence in the first region 201, including the item images and prices. The recognition results from servers other than the Taobao server are displayed in the second region 202 of the recognition result display area.
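The background configuration described here amounts to routing each result to a display region based on the server that produced it. A minimal sketch, where the `source` field and the server labels are assumptions for illustration:

```python
# Route each recognition result to a display region by originating
# server: shopping-server results ("similar items") go to the first
# region 201, all other results to the second region 202.

def assign_region(result: dict) -> str:
    return "region_201" if result["source"] == "taobao" else "region_202"

results = [
    {"source": "taobao", "item": "down jacket", "price": 199},
    {"source": "baidu", "item": "actor biography"},
    {"source": "local", "item": "related video"},
]

regions: dict = {}
for r in results:
    regions.setdefault(assign_region(r), []).append(r)
```

Keeping the routing rule in one place makes the settings feature described later (hiding a whole region) a matter of dropping one key from `regions` before rendering.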
In the embodiments of the present disclosure, the second display area showing the recognition results includes a first region 201, a second region 202, and a third region 203, with clear visual boundaries set between them, so that the user can distinguish the different display regions at a glance when viewing the recognition results. The third region 203 is a thumbnail of the screenshot image, located in the middle of the second display area. A floating layer is also provided at the lower left corner of the thumbnail, showing a QR code and the prompt "Scan the code to get the screenshot; press 'OK' to enlarge". When the user scans the QR code with a smart terminal such as a mobile phone or a tablet, the screenshot image is sent to the scanning terminal; the user can then share the screenshot image received on that device. When the user presses the "OK" key, the enlarged-screenshot interface is shown, as in Fig. 5(a): the second display area is hidden, and the thumbnail of the third region 203 is enlarged to cover the current picture of the display terminal (i.e., the first display area showing the currently playing content). A floating layer is provided on the left side of the enlarged-screenshot interface, showing a QR code and the prompt "Scan the code to get the screenshot; download 'Hisense Juhaokan' to push it directly to your phone". In the enlarged-screenshot interface, recognition frames have already been marked on the persons' faces. As shown in Fig. 5(b), the initial focus position in the enlarged-screenshot interface defaults to recognition frame A at the far left of the screen; the user can move the focus to select the content to be recognized. After pressing the right arrow key, as shown in Fig. 5(c), the focus switches to recognition frame B.
In the embodiments of the present disclosure, the first region 201 is located to the left of the third region 203, and the second region 202 is located to the right of the third region 203. The second region 202 includes content such as content recognition, related persons, related information, and similar pictures; this content is the result of searches performed by the first server, the second server, and third servers other than the Taobao server, based on the screenshot image and/or the person's name obtained by the second server through face recognition.
When the user selects a recognition result in the first region 201 for viewing, as shown in Fig. 6, the second display area is hidden and the detail page 2011 of the selected recognition result is displayed on the same side as the first region 201. That is, since the first region 201 is located in the left half of the second display area, when a recognition result in the first region 201 is selected, the second display area is hidden and the detail page 2011 of the selected recognition result is displayed in the left area of the first display area. The detail page of the selected recognition result is a semi-transparent layer covering the left area of the first display area. The detail page 2011 shows information such as a recommended photo of the commodity, the commodity tag, and the price tag. When the user clicks the detail page 2011, it jumps to the Taobao shopping page for that commodity. This display mode better matches the user's viewing habits.
When the user selects a recognition result in the second region 202 for viewing, as shown in Fig. 7(a), the focus moves to "Related information". After the user selects it, as shown in Fig. 7(b), the second display area is hidden and the detail page of the selected recognition result is displayed on the same side as the second region 202. That is, since the second region 202 is located in the right half of the second display area, when a recognition result in the second region 202 is selected, the second display area is hidden and the detail page 2021 of the selected recognition result is displayed in the right area of the first display area. The detail page 2021 of the selected recognition result is a semi-transparent layer covering the right area of the first display area.
Embodiment 1 places the recognition results returned by the Taobao server in the first region 201 of the second display area, and the recognition results returned by servers other than the Taobao server in the second region 202. The recognition results returned by the Taobao server are commodity-related "similar items". When a user is interested in commodities such as clothing, accessories, or furniture in a frame of the content played on the display terminal 100 and wants to buy the same items, the screenshot image can be sent to the Taobao server for commodity search, and the user can select the commodity they want to learn about or buy from the results returned by the Taobao server. However, some users strongly dislike such commodity recommendations, feeling that they are a disguised form of advertising, and do not want to see them at all. Such a user can close the display of the first region 201 through the settings, as shown in Fig. 8. After the user takes a screenshot, the display interface then shows only the thumbnail of the third region 203 and the recognition results of the second region 202. In this case, the third region 203 remains in the middle of the second display area, and the recognition results of the servers other than the Taobao server are displayed on both sides of the third region 203. Of course, the thumbnail of the third region 203 may also be located at the far left or far right of the second display area, with the recognition results of the other servers shown in sequence on the other side of the third region 203. The display positions of the second region 202 and the third region 203 are not limited here. Similarly, the user can also choose through the settings to close the display of the recognition results of the third region and/or the second region, which will not be described again.
With the method of the above embodiment, different types of results are displayed in different areas with clear visual boundaries between them, and the user can, through settings, show the information they want to know or close the information they do not want to see, which enhances the convenience and autonomy of operation and improves the user experience.
Embodiment 2
An embodiment of the present disclosure provides another method for displaying recognition results by region. Building on Embodiment 1, the difference is that in this embodiment, attribute tags of the recognition results are extracted, and whether a recognition result is a commodity recognition result is determined by judging whether its attribute tags include a price tag. The method includes the following steps:
Step S401: receiving a screenshot instruction and taking a screenshot of the current interface of the display terminal to obtain a screenshot image.
Step S402: sending the screenshot image to a server. Step S403: receiving the recognition results returned by the server and obtained by searching based on the screenshot image.
For the details of steps S401, S402, and S403, refer to Embodiment 1 of the present disclosure, which will not be repeated here.
Step S404: displaying different types of recognition results in different areas, with boundaries set between the different areas.
In this embodiment, the attribute tags of a recognition result are extracted, and it is judged whether the attribute tags include a price tag. If so, the recognition result is judged to be a commodity-type result and is displayed in the first region; if not, the recognition result is judged to be a non-commodity-type result and is displayed in the second region.
Exemplarily, the attribute tags of a recognition result include one or more of the following tags: personName, title, location, saleprice, quantity, etc. Whether a returned recognition result is a commodity is determined by judging whether its attribute tags include the price tag saleprice. When the price tag saleprice is included, the recognition result is determined to be a commodity and is displayed in the first region 201; when it is not included, the recognition result is determined to be a non-commodity and is displayed in the second region 202.
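A minimal sketch of this saleprice check; the tag names follow the examples given above, while the function and the sample results are only an illustration:

```python
# Classify a recognition result as commodity / non-commodity by whether
# its attribute tags contain the price tag "saleprice" (Embodiment 2).

def classify(attribute_tags: dict) -> str:
    if "saleprice" in attribute_tags:
        return "commodity"      # displayed in the first region 201
    return "non-commodity"      # displayed in the second region 202

item = {"title": "down jacket", "saleprice": 299, "quantity": 10}
person = {"personName": "Song **", "title": "actor profile"}
```

Unlike the per-server routing of Embodiment 1, this check works on the result payload itself, so it classifies correctly even when several servers return mixed result types.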
With the above method for displaying recognition results by region, the user can choose how recognition results are displayed according to their preferences or needs, which enhances the convenience and autonomy of operation and improves the user experience.
Embodiment 3
An embodiment of the present disclosure provides an apparatus for displaying recognition results by region, as shown in Fig. 9, configured to perform the method of Embodiment 1 and/or 2 of the present disclosure, including:
a command receiving and processing unit, configured to receive a screenshot instruction and take a screenshot of the current picture of the display terminal to obtain a screenshot image;
a sending unit, configured to send the screenshot image to a server;
a receiving unit, configured to receive recognition results returned by the server and obtained by searching based on the screenshot image;
a display unit, configured to display different types of recognition results in different areas, with boundaries set between the different areas.
As a preferred embodiment, the server includes a first server and a second server, the second server being a face recognition server, and the apparatus for displaying recognition results by region further includes:
a second recognition unit, configured for the second server to perform face recognition on the screenshot image to obtain a second recognition result;
a first recognition unit, configured for the first server to search according to the second recognition result to obtain a first recognition result.
As another preferred embodiment, the server further includes a third server, and the apparatus for displaying recognition results by region further includes:
a third recognition unit, configured to send the screenshot image and/or keywords of the second recognition result to the third server for searching to obtain a third recognition result.
With the above apparatus for displaying recognition results by region, the user can choose how recognition results are displayed according to their preferences or needs, which enhances the convenience and autonomy of operation and improves the user experience.
Embodiment 4
An embodiment of the present disclosure provides a smart TV, including:
a display;
a memory, configured to store program instructions;
a processor, configured to execute the computer program stored in the memory;
the stored computer program is used to receive a screenshot instruction and take a screenshot of the current interface of the display terminal to obtain a screenshot image;
send the screenshot image to a server, the server searching to obtain recognition results;
receive the recognition results returned by the server and obtained by searching based on the screenshot image;
display different types of recognition results in different areas, with boundaries set between the different areas.
Further, sending the screenshot image to the server specifically includes:
the server includes a first server and a second server, the second server being a face recognition server;
the screenshot image is sent to the second server through the first server, the second server performs face recognition on the screenshot image to obtain a second recognition result, and the first server searches according to the second recognition result to obtain a first recognition result.
Further, the server also includes a third server, and the display terminal sends the screenshot image and/or the second recognition result to the third server for searching to obtain a third recognition result.
Further, displaying the different types of recognition results in different areas specifically includes: determining the type of the recognition results according to the server.
Further, displaying the different types of recognition results in different areas specifically includes: extracting attribute tags of the recognition results, and determining the type of each recognition result by judging whether its attribute tags include a price tag.
Further, the recognition result display area includes a first region, a second region, and a third region, with boundaries set between the first region, the second region, and the third region, the third region being a thumbnail of the screenshot image.
Further, after displaying the recognition results by region, the method further includes:
when the user selects a recognition result in the first region for viewing, hiding the recognition result display area and displaying the detail page of the selected recognition result on the same side as the first region;
when the user selects a recognition result in the second region for viewing, hiding the recognition result display area and displaying the detail page of the selected recognition result on the same side as the second region.
Further, when the user selects the third region for viewing, the recognition result display area is hidden, and the thumbnail of the third region is enlarged to cover the current picture of the display terminal.
Embodiment 5
An embodiment of the present disclosure provides a readable storage medium storing smart-TV-executable instructions, the smart-TV-executable instructions being used to cause the smart TV to execute the method disclosed in the foregoing embodiments.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the disclosure disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the technical field not disclosed herein. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the appended claims.
It should be understood that the present disclosure is not limited to the precise structures that have been described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

  1. A method for displaying recognition results by region, characterized in that the method is applied to a display terminal and comprises the following steps:
    receiving a screenshot instruction and taking a screenshot of the current interface of the display terminal to obtain a screenshot image;
    sending the screenshot image to a server;
    receiving recognition results returned by the server and obtained by searching based on the screenshot image;
    displaying different types of recognition results in different areas, with boundaries set between the different areas.
  2. The method according to claim 1, characterized in that the server comprises a first server and a second server, the second server being a face recognition server, and sending the screenshot image to the server comprises:
    sending the screenshot image to the second server through the first server;
    receiving the recognition results returned by the server and obtained by searching based on the screenshot image comprises:
    receiving, through the first server, a second recognition result obtained by the second server performing face recognition on the screenshot image, and a first recognition result obtained by the first server searching according to the second recognition result.
  3. The method according to claim 2, characterized in that the server further comprises a third server, and the screenshot image and/or the second recognition result are sent to the third server for searching to obtain a third recognition result.
  4. The method according to claim 1, characterized in that before displaying the different types of recognition results in different areas, the method further comprises:
    determining the type of the recognition results according to the server.
  5. The method according to claim 1, characterized in that displaying the different types of recognition results in different areas comprises:
    extracting attribute tags of the recognition results, and determining the type of each recognition result by judging whether its attribute tags include a price tag.
  6. The method according to claim 1, characterized in that the recognition result display area comprises a first region, a second region, and a third region, with boundaries set between the first region, the second region, and the third region, the third region being a thumbnail of the screenshot image.
  7. The method according to claim 6, characterized in that after displaying the recognition results by region, the method further comprises:
    when the user selects a recognition result in the first region for viewing, hiding the recognition result display area and displaying a detail page of the selected recognition result on the same side as the first region;
    when the user selects a recognition result in the second region for viewing, hiding the recognition result display area and displaying a detail page of the selected recognition result on the same side as the second region.
  8. The method according to claim 6, characterized in that when the user selects the third region for viewing, the recognition result display area is hidden, and the thumbnail of the third region is enlarged to cover the current picture of the display terminal.
  9. An apparatus for displaying recognition results by region, characterized in that it comprises:
    a command receiving and processing unit, configured to receive a screenshot instruction and take a screenshot of the current picture of the display terminal to obtain a screenshot image;
    a sending unit, configured to send the screenshot image to a server;
    a receiving unit, configured to receive recognition results returned by the server and obtained by searching based on the screenshot image;
    a display unit, configured to display different types of recognition results in different areas, with boundaries set between the different areas.
  10. The apparatus according to claim 9, characterized in that the server comprises a first server and a second server, the second server being a face recognition server, and the sending unit is specifically configured to send the screenshot image to the second server through the first server;
    the receiving unit is specifically configured to receive, through the first server, a second recognition result obtained by the second server performing face recognition on the screenshot image, and a first recognition result obtained by the first server searching according to the second recognition result.
  11. The apparatus according to claim 10, characterized in that the server further comprises a third server, and the apparatus for displaying recognition results by region further comprises:
    a third recognition unit, configured to send the screenshot image and/or keywords of the second recognition result to the third server for searching to obtain a third recognition result.
  12. A smart TV, characterized in that it comprises:
    a display;
    a memory, configured to store program instructions;
    a processor, configured to execute the computer program stored in the memory to implement the steps of the method for displaying recognition results by region according to any one of claims 1-8.
  13. A readable storage medium, characterized in that the readable storage medium stores smart-TV-executable instructions, the smart-TV-executable instructions being used to cause the smart TV to execute the method for displaying recognition results by region according to any one of claims 1-8.
PCT/CN2019/104179 2018-09-03 2019-09-03 一种识别结果分区域显示方法、装置及智能电视 WO2020048447A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811021566.7 2018-09-03
CN201811021566.7A CN109168069A (zh) 2018-09-03 2018-09-03 一种识别结果分区域显示方法、装置及智能电视

Publications (1)

Publication Number Publication Date
WO2020048447A1 true WO2020048447A1 (zh) 2020-03-12

Family

ID=64893884

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/104179 WO2020048447A1 (zh) 2018-09-03 2019-09-03 一种识别结果分区域显示方法、装置及智能电视

Country Status (2)

Country Link
CN (1) CN109168069A (zh)
WO (1) WO2020048447A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108055589B (zh) 2017-12-20 2021-04-06 聚好看科技股份有限公司 智能电视
CN108289236B (zh) 2017-12-20 2020-07-10 海信视像科技股份有限公司 智能电视及电视画面截图的图形用户界面的显示方法
CN109168069A (zh) * 2018-09-03 2019-01-08 聚好看科技股份有限公司 一种识别结果分区域显示方法、装置及智能电视
US11039196B2 (en) 2018-09-27 2021-06-15 Hisense Visual Technology Co., Ltd. Method and device for displaying a screen shot
WO2020063095A1 (zh) * 2018-09-27 2020-04-02 青岛海信电器股份有限公司 一种截图显示方法及设备
CN110110252B (zh) * 2019-05-17 2021-01-15 北京市博汇科技股份有限公司 一种视听节目识别方法、装置及存储介质
CN110245251A (zh) * 2019-06-24 2019-09-17 重庆佳渝测绘有限公司 一种土地情况的对比显示方法
CN110765296A (zh) * 2019-10-23 2020-02-07 京东方科技集团股份有限公司 图像搜索方法、终端设备及存储介质
CN111343512B (zh) * 2020-02-04 2023-01-10 聚好看科技股份有限公司 信息获取方法、显示设备及服务器
WO2021223074A1 (zh) * 2020-05-06 2021-11-11 海信视像科技股份有限公司 显示设备及交互控制方法
CN116325770A (zh) * 2020-05-25 2023-06-23 聚好看科技股份有限公司 显示设备及图像识别结果显示方法
CN111787350B (zh) * 2020-08-03 2023-01-20 聚好看科技股份有限公司 显示设备及视频通话中的截图方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104090762A (zh) * 2014-07-10 2014-10-08 福州瑞芯微电子有限公司 一种截图处理装置和方法
US20170330336A1 (en) * 2016-05-14 2017-11-16 Google Inc. Segmenting content displayed on a computing device into regions based on pixels of a screenshot image that captures the content
CN108111898A (zh) * 2017-12-20 2018-06-01 聚好看科技股份有限公司 电视画面截图的图形用户界面的显示方法以及智能电视
CN108322806A (zh) * 2017-12-20 2018-07-24 青岛海信电器股份有限公司 智能电视及电视画面截图的图形用户界面的显示方法
CN109168069A (zh) * 2018-09-03 2019-01-08 聚好看科技股份有限公司 一种识别结果分区域显示方法、装置及智能电视

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013501976A (ja) * 2009-08-07 2013-01-17 グーグル インコーポレイテッド 視覚クエリの複数の領域についての検索結果を提示するためのユーザインターフェイス
CN103369049B (zh) * 2013-07-22 2016-05-04 王雁林 移动终端和服务器交互方法及其系统
US9633496B2 (en) * 2014-01-09 2017-04-25 Ford Global Technologies, Llc Vehicle contents inventory system
CN106598998B (zh) * 2015-10-20 2020-10-27 北京安云世纪科技有限公司 信息获取方法和信息获取装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104090762A (zh) * 2014-07-10 2014-10-08 福州瑞芯微电子有限公司 一种截图处理装置和方法
US20170330336A1 (en) * 2016-05-14 2017-11-16 Google Inc. Segmenting content displayed on a computing device into regions based on pixels of a screenshot image that captures the content
CN108111898A (zh) * 2017-12-20 2018-06-01 聚好看科技股份有限公司 电视画面截图的图形用户界面的显示方法以及智能电视
CN108322806A (zh) * 2017-12-20 2018-07-24 青岛海信电器股份有限公司 智能电视及电视画面截图的图形用户界面的显示方法
CN109168069A (zh) * 2018-09-03 2019-01-08 聚好看科技股份有限公司 一种识别结果分区域显示方法、装置及智能电视

Also Published As

Publication number Publication date
CN109168069A (zh) 2019-01-08

Similar Documents

Publication Publication Date Title
WO2020048447A1 (zh) 一种识别结果分区域显示方法、装置及智能电视
US11558578B2 (en) Smart television and method for displaying graphical user interface of television screen shot
US11601719B2 (en) Method for processing television screenshot, smart television, and storage medium
CN102722517B (zh) 用于观看者选择的视频对象的增强信息
US20200311126A1 (en) Methods to present search keywords for image-based queries
US20180152767A1 (en) Providing related objects during playback of video data
CN108055590B (zh) 电视画面截图的图形用户界面的显示方法
CN106598998B (zh) 信息获取方法和信息获取装置
CN107341185A (zh) 信息显示的方法及装置
WO2017190471A1 (zh) 电视购物信息处理方法和装置
JP7104242B2 (ja) 個人情報を共有する方法、装置、端末設備及び記憶媒体
CN108111898B (zh) 电视画面截图的图形用户界面的显示方法以及智能电视
US20220254143A1 (en) Method and apparatus for determining item name, computer device, and storage medium
CN105787102A (zh) 搜索方法、装置以及用于搜索的装置
US20210042809A1 (en) System and method for intuitive content browsing
US20190325497A1 (en) Server apparatus, terminal apparatus, and information processing method
KR20170013369A (ko) 검색 정보를 표시하는 방법, 장치 및 컴퓨터 프로그램
WO2022078172A1 (zh) 一种显示设备和内容展示方法
CN108540851A (zh) 基于语音交互的选择推荐位方法、装置及智能电视
US11863829B2 (en) Display apparatus and method for displaying image recognition result
CN115170220A (zh) 商品信息展示方法及电子设备
TWM522418U (zh) 條碼隱藏/浮現的呈現裝置
KR101701952B1 (ko) 검색 정보를 표시하는 방법, 장치 및 컴퓨터 프로그램
US20190095468A1 (en) Method and system for identifying an individual in a digital image displayed on a screen
KR101566222B1 (ko) 스마트 디스플레이를 이용한 광고 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19858438

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 30.06.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19858438

Country of ref document: EP

Kind code of ref document: A1