CN113141532B - Identification method of graphic identification code and display device - Google Patents

Identification method of graphic identification code and display device

Info

Publication number
CN113141532B
CN202010067646.7A CN202010067646A CN113141532B
Authority
CN
China
Prior art keywords
identification code
pattern
screenshot picture
code
user interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010067646.7A
Other languages
Chinese (zh)
Other versions
CN113141532A (en)
Inventor
苑衍梅
付友苹
付延松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd
Priority to CN202010067646.7A
Publication of CN113141532A
Application granted
Publication of CN113141532B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/4104 Peripherals receiving signals from specially adapted client devices
    • H04N 21/4126 The peripheral being portable, e.g. PDAs or mobile phones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/441 Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/4722 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
    • H04N 21/4725 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a method for identifying a graphic identification code and a display device, belonging to display technology. The display device comprises a display configured to display a user interface, wherein the user interface includes an original graphic identification code, and a controller in communication with the display. While the user interface is presented, the controller is configured to: in response to a user input instruction, generate a first graphic identification code based on the original graphic identification code, wherein the pattern of the first graphic identification code differs from that of the original graphic identification code while the information associated with the first graphic identification code is the same as the information associated with the original graphic identification code; and display a floating window on the display, wherein the first graphic identification code is displayed in the floating window.

Description

Identification method of graphic identification code and display device
Technical Field
Embodiments of the present application relate to display technology, and more particularly to a method for identifying a graphic identification code and a display device.
Background
With the development of Internet connection technology, display devices can be seamlessly connected with other devices; in particular, interaction scenarios with mobile devices are increasingly common.
To increase interactivity between a smart TV and its user, a two-dimensional code can be displayed in the picture shown by the smart TV, and the user can scan the two-dimensional code on the smart TV screen with a handheld device such as a smartphone to interact with it, thereby improving the user experience.
Disclosure of Invention
With the display device and the graphic identification code identification method provided by the present application, the graphic identification code in the picture played by the display device can be identified, and a new graphic identification code can be generated and displayed according to the identification result, so that the user can conveniently scan it with a handheld device, improving the recognition rate of the graphic identification code.
According to an aspect of exemplary embodiments, there is provided a display apparatus including:
a display configured to display a user interface, wherein the user interface includes an original graphic identification code; and a controller in communication with the display, the controller configured to, while the user interface is presented:
in response to a user input instruction, generate a first graphic identification code based on the original graphic identification code, wherein the pattern of the first graphic identification code is different from that of the original graphic identification code, and the information associated with the first graphic identification code is the same as the information associated with the original graphic identification code;
and display a floating window on the display, wherein the first graphic identification code is displayed in the floating window.
In the above embodiment of the present application, after the original graphic identification code is identified from the user interface, a first graphic identification code is regenerated and displayed, so that the user can conveniently scan it with a handheld device. Because the information associated with the first graphic identification code is the same as the information associated with the original graphic identification code obtained from the user interface, the result the user obtains by scanning the first graphic identification code with the handheld device is equivalent to the result of scanning the graphic identification code in the displayed picture.
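As a rough illustration of this decode-and-regenerate idea (a sketch under stated assumptions, not the patented implementation), the following Python snippet decodes the original code from a screenshot with OpenCV and re-encodes the same payload as a clean, high-contrast image using the third-party qrcode package; the file names are placeholders.

```python
import cv2      # pip install opencv-python
import qrcode   # pip install qrcode[pil]

# Decode the original (possibly blurry) code found in the screenshot.
screenshot = cv2.imread("screenshot.png")          # placeholder path
data, points, _ = cv2.QRCodeDetector().detectAndDecode(screenshot)

if data:
    # Re-encode the same associated information as a clean code: the pixel
    # pattern differs from the original, but the payload is identical.
    clean = qrcode.make(data)
    clean.save("regenerated_qr.png")               # shown in the floating window
else:
    print("No decodable two-dimensional code found in the screenshot")
```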
In some exemplary embodiments, the controller is further configured to: in response to a user input selecting the first graphic identification code, display the first graphic identification code in enlarged form in the floating window on the display, making it even easier for the user to scan the first graphic identification code with the handheld device and further improving the recognition success rate.
In some exemplary embodiments, the controller is further configured to: in response to a user input instruction, generate a second graphic identification code based on a screenshot picture of the user interface containing the original graphic identification code, wherein the second graphic identification code is associated with the screenshot picture, or with a thumbnail of the screenshot picture, of the user interface containing the original graphic identification code; and display the second graphic identification code in the floating window.
In the above embodiment of the present application, after the original graphic identification code is identified from the user interface, a second graphic identification code is generated and displayed based on the screenshot picture of the user interface containing the original graphic identification code. Because the second graphic identification code is associated with the screenshot picture of the user interface, the user can, by scanning the second graphic identification code with the handheld device, obtain and display the screenshot picture (or its thumbnail) of the user interface containing the original graphic identification code, and thereby see the user interface from which the first graphic identification code was derived.
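One plausible way to realize such an association (an assumption, not something specified by the patent) is to upload the screenshot to a server and encode the resulting link in the second code. The sketch below uses the third-party qrcode package; the URL and file names are hypothetical.

```python
import qrcode   # pip install qrcode[pil]

def make_screenshot_code(screenshot_url: str):
    """Encode a link to the stored screenshot (or its thumbnail) as a QR code."""
    return qrcode.make(screenshot_url)

# Hypothetical URL returned after the screenshot has been uploaded to a server.
second_code = make_screenshot_code("https://example.com/screenshots/abc123.png")
second_code.save("second_qr.png")   # displayed next to the first code in the floating window
```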
In some exemplary embodiments, the controller is further configured to: determine whether the screenshot picture, or the thumbnail of the screenshot picture, of the user interface contains a graphic identification code, and if so, identify the graphic identification code in the user interface.
Before the graphic identification code is identified, it is first determined whether the screenshot picture or the thumbnail of the screenshot picture of the user interface contains a graphic identification code, and the code is identified only when it is determined to be present. Since identifying (decoding) a graphic identification code takes longer than merely determining whether one is present, this embodiment prevents the user from waiting a long time only to learn that the user interface contains no code.
In some exemplary embodiments, the controller is specifically configured to: cut the screenshot picture, or the thumbnail of the screenshot picture, of the user interface into at least two areas, wherein the at least two areas partially overlap; and scan the screenshot picture or the thumbnail within each area separately to obtain the graphic identification code in the user interface.
With this embodiment, the screenshot picture or the thumbnail of the screenshot picture of the user interface can be scanned and recognized segment by segment, which improves recognition efficiency; on the other hand, because the areas partially overlap, recognition failures caused by a graphic identification code being cut apart can be avoided.
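The following Python sketch illustrates such an overlapping-tile scan under the assumption that OpenCV is used for decoding; the number of tiles and the overlap ratio are illustrative values, not figures from the patent.

```python
import cv2

def scan_with_overlap(image, rows=2, cols=2, overlap=0.2):
    """Split the screenshot into partially overlapping regions and scan each one."""
    h, w = image.shape[:2]
    tile_h, tile_w = h // rows, w // cols
    pad_h, pad_w = int(tile_h * overlap), int(tile_w * overlap)
    detector = cv2.QRCodeDetector()
    results = []
    for r in range(rows):
        for c in range(cols):
            # Extend each tile by the overlap so a code straddling a cut line
            # still appears whole in at least one region.
            y0, x0 = max(r * tile_h - pad_h, 0), max(c * tile_w - pad_w, 0)
            y1, x1 = min((r + 1) * tile_h + pad_h, h), min((c + 1) * tile_w + pad_w, w)
            data, _, _ = detector.detectAndDecode(image[y0:y1, x0:x1])
            if data:
                results.append(data)
    return results
```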
In some exemplary embodiments, the controller is further configured to: cut the screenshot picture or the thumbnail of the screenshot picture of the user interface into at least two areas, convert the screenshot picture or the thumbnail within each area into a grayscale image, magnify the grayscale image of each area, and scan the magnified grayscale images to obtain the graphic identification code.
Converting the user interface image to a grayscale image reduces noise in the image and facilitates recognition of the graphic identification code; the magnification step further facilitates recognition.
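A minimal sketch of this grayscale-and-magnify preprocessing, again assuming OpenCV is used for decoding; the 2x magnification factor is only an example.

```python
import cv2

def preprocess_and_scan(region):
    """Convert a cropped region to grayscale, enlarge it, then try to decode."""
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)      # reduces colour noise
    enlarged = cv2.resize(gray, None, fx=2.0, fy=2.0,    # 2x is an illustrative factor
                          interpolation=cv2.INTER_CUBIC)
    data, _, _ = cv2.QRCodeDetector().detectAndDecode(enlarged)
    return data or None
```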
In some exemplary embodiments, the graphic identification code is a two-dimensional code.
According to an aspect of the exemplary embodiment, there is provided a method for identifying a graphic identification code, including:
generating a first graphic identification code based on an original graphic identification code included in a user interface in response to a user input instruction, wherein the pattern of the first graphic identification code is different from that of the original graphic identification code, and information associated with the first graphic identification code is the same as information associated with the original graphic identification code;
and displaying a floating window, wherein the first graphic identification code is displayed in the floating window.
In some exemplary embodiments, further comprising: displaying the first graphic identification code in enlarged form in a floating window of the display in response to a user input selecting the first graphic identification code.
In some exemplary embodiments, further comprising: responding to a user input instruction, generating a second graphic identification code based on a screenshot picture of a user interface containing the original graphic identification code, wherein the second graphic identification code is associated with the screenshot picture or a thumbnail of the screenshot picture of the user interface containing the original graphic identification code; and displaying the second graphic identification code in the floating window.
In some exemplary embodiments, further comprising: cutting a screenshot picture or a thumbnail of the screenshot picture of the user interface to obtain at least two areas, wherein the at least two areas are partially overlapped; and scanning the screenshot pictures or the thumbnails of the screenshot pictures in each area respectively to obtain the graphic identification codes in the user interface.
In some exemplary embodiments, further comprising: cutting the screenshot picture or the thumbnail of the screenshot picture of the user interface into at least two areas, and converting the screenshot picture or the thumbnail within each area into a grayscale image; and magnifying the grayscale image of each area and scanning the magnified grayscale images to obtain the graphic identification code.
According to an aspect of the exemplary embodiments, there is provided a computer storage medium having stored therein computer program instructions which, when run on a computer, cause the computer to perform a method as described above.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, a brief description will be given below of the drawings that are needed in the embodiments or the prior art descriptions, it being obvious that the drawings in the following description are some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort to a person skilled in the art.
A schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment is exemplarily shown in fig. 1;
a hardware configuration block diagram of the display device 200 in accordance with the embodiment is exemplarily shown in fig. 2;
a hardware configuration block diagram of the control device 100 in accordance with the embodiment is exemplarily shown in fig. 3;
a functional configuration diagram of the display device 200 according to the embodiment is exemplarily shown in fig. 4;
a schematic diagram of the software configuration in the display device 200 according to an embodiment is exemplarily shown in fig. 5 a;
a schematic configuration of an application in the display device 200 according to an embodiment is exemplarily shown in fig. 5 b;
a graphic identification code recognition flow according to an embodiment is exemplarily shown in fig. 6;
a two-dimensional code sample illustration in accordance with an embodiment is schematically shown in fig. 7;
an interface schematic diagram of the display device 200 performing two-dimensional code recognition and displaying the generated two-dimensional code according to the embodiment is exemplarily shown in fig. 8;
a schematic diagram of a user interface segmentation in accordance with an embodiment is illustrated in fig. 9;
a schematic diagram of another user interface segmentation in accordance with an embodiment is illustrated in fig. 10.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the exemplary embodiments of the present application more apparent, the technical solutions in the exemplary embodiments of the present application will be clearly and completely described below with reference to the drawings in the exemplary embodiments of the present application, and it is apparent that the described exemplary embodiments are only some embodiments of the present application, but not all embodiments.
All other embodiments obtained by a person of ordinary skill in the art based on the exemplary embodiments shown in the present application without inventive effort are intended to fall within the scope of the present application. Furthermore, while the disclosure is presented in terms of one or more exemplary embodiments, it should be understood that individual aspects of the disclosure may also constitute a complete technical solution on their own.
It should be understood that the terms "first," "second," "third," and the like in the description, the claims, and the above-described figures are used to distinguish between similar objects and are not necessarily intended to describe a particular order or sequence. It is to be understood that terms so used may be interchanged where appropriate, so that the embodiments of the present application described herein can, for example, be implemented in orders other than those illustrated or described herein.
Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to those elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" as used in this application refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the function associated with that element.
The term "remote control" as used in this application refers to a component of an electronic device (such as a display device as disclosed in this application) that can typically be controlled wirelessly over a relatively short distance. Typically, the electronic device is connected to the electronic device using infrared and/or Radio Frequency (RF) signals and/or bluetooth, and may also include functional modules such as WiFi, wireless USB, bluetooth, motion sensors, etc. For example: the hand-held touch remote controller replaces most of the physical built-in hard keys in a general remote control device with a touch screen user interface.
The term "gesture" as used herein refers to a user action by a change in hand shape or hand movement, etc., used to express an intended idea, action, purpose, or result.
A schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment is exemplarily shown in fig. 1. As shown in fig. 1, a user may operate the display apparatus 200 through the control apparatus 100.
The control apparatus 100 may be a remote controller 100A, which communicates with the display apparatus 200 through infrared protocol communication, Bluetooth protocol communication, or other short-range communication, and controls the display apparatus 200 wirelessly or by other wired means. The user may control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, etc. For example, the user can input corresponding control instructions through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input key, menu key, power key, etc. on the remote controller to control the functions of the display device 200.
The control device 100 may also be an intelligent device, such as a mobile terminal 100B, a tablet computer, a notebook computer, or the like. For example, the display device 200 is controlled using an application running on a smart device. The application may provide various controls to the user through an intuitive User Interface (UI) on a screen associated with the smart device.
By way of example, software applications may be installed on both the mobile terminal 100B and the display device 200, so that connection and communication can be implemented through a network communication protocol, achieving one-to-one control operation and data communication. For example, a control instruction protocol may be established between the mobile terminal 100B and the display device 200, the remote control keyboard may be synchronized to the mobile terminal 100B, and the functions of the display device 200 may be controlled through the user interface on the mobile terminal 100B. The audio/video content displayed on the mobile terminal 100B may also be transmitted to the display device 200 to implement a synchronous display function.
As shown in fig. 1, the display device 200 also communicates data with the server 300 through a variety of communication means. The display device 200 may be allowed to make communication connections via a local area network (LAN), a wireless local area network (WLAN), and other networks. The server 300 may provide various contents and interactions to the display device 200. By way of example, the display device 200 receives software program updates, or accesses a remotely stored digital media library, by sending and receiving information and by Electronic Program Guide (EPG) interactions. The servers 300 may be one group or multiple groups, and may be of one or more types. The server 300 provides other web service content such as video on demand and advertising services.
The display device 200 may be a liquid crystal display, an OLED display, or a projection display device. The specific display device type, size, resolution, etc. are not limited, and those skilled in the art will appreciate that the display device 200 may undergo some changes in performance and configuration as desired.
In addition to the broadcast receiving television function, the display device 200 may additionally provide an intelligent network television function with computer support, including, for example, web TV, smart TV, Internet Protocol TV (IPTV), and the like.
A hardware configuration block diagram of the display device 200 according to an exemplary embodiment is illustrated in fig. 2. As shown in fig. 2, a modem 220, a communicator 230, a detector 240, an external device interface 250, a controller 210, a memory 290, a user input interface, a video processor 260-1, an audio processor 260-2, a display 280, an audio input interface 272, a power supply may be included in the display apparatus 200.
The modem 220 receives broadcast television signals in a wired or wireless manner and may perform modulation and demodulation processing such as amplification, mixing, and resonance, in order to demodulate, from among the plurality of wireless or wired broadcast television signals, the audio/video signal carried on the frequency of the television channel selected by the user, as well as additional information (e.g., an EPG data signal).
The modem 220 responds, under the control of the controller 210, to the television channel frequency selected by the user and to the television signal carried on that frequency.
The modem 220 can receive signals in various ways according to the broadcasting system of the television signals, such as terrestrial broadcasting, cable broadcasting, satellite broadcasting, or Internet broadcasting; depending on the modulation type, digital or analog modulation may be used; and depending on the kind of television signal received, both analog and digital signals can be demodulated.
In other exemplary embodiments, the modem 220 may also be in an external device, such as an external set-top box, or the like. In this way, the set-top box outputs the television audio and video signals after modulation and demodulation, and inputs the television audio and video signals to the display device 200 through the input/output interface 250.
Communicator 230 is a component for communicating with external devices or external servers according to various communication protocol types. For example: the communicator 230 may include a WIFI module 231, a bluetooth communication protocol module 232, a wired ethernet communication protocol module 233, and other network communication protocol modules or a near field communication protocol module.
The display device 200 may establish a connection of control signals and data signals with an external control device or a content providing device through the communicator 230. For example, the communicator may receive a control signal of the remote controller 100 according to the control of the controller.
The detector 240 is a component used by the display device 200 to collect signals from the external environment or signals of interaction with the outside. The detector 240 may include a light receiver 242, a sensor for collecting the intensity of ambient light, so that display parameters can be adapted to the ambient light; it may also include an image collector 241, such as a camera or video camera, which can be used to collect external environment scenes, to collect attributes of the user or gestures for interacting with the user, to adaptively change display parameters, and to recognize user gestures so as to realize interaction with the user.
In other exemplary embodiments, the detector 240 may further include a temperature sensor; by sensing the ambient temperature, the display device 200 can adaptively adjust the display color temperature of the image. For example, when the ambient temperature is high, the display device 200 may be adjusted to display images with a cooler color temperature; when the ambient temperature is low, the display device 200 may be adjusted to display images with a warmer color temperature.
In other exemplary embodiments, the detector 240 may further include a sound collector, such as a microphone, which may be used to receive the user's voice, including voice signals of control instructions for controlling the display device 200, or to collect ambient sounds for identifying the type of ambient scene, so that the display device 200 can adapt to ambient noise.
An external device interface 250 provides a component for the controller 210 to control data transmission between the display apparatus 200 and external other apparatuses. The external device interface may be connected to an external device such as a set-top box, a game device, a notebook computer, etc., in a wired/wireless manner, and may receive data such as a video signal (e.g., a moving image), an audio signal (e.g., music), additional information (e.g., an EPG), etc., of the external device.
Among other things, the external device interface 250 may include: any one or more of a High Definition Multimedia Interface (HDMI) terminal 251, a Composite Video Blanking Sync (CVBS) terminal 252, an analog or digital component terminal 253, a Universal Serial Bus (USB) terminal 254, a Red Green Blue (RGB) terminal (not shown), and the like.
The controller 210 controls the operation of the display device 200 and responds to the user's operations by running various software control programs (e.g., an operating system and various application programs) stored on the memory 290.
As shown in fig. 2, the controller 210 includes a random access memory RAM213, a read only memory ROM214, a graphics processor 216, a CPU processor 212, a communication interface 218, and a communication bus. The RAM213 and the ROM214 are connected to the graphics processor 216, the CPU processor 212, and the communication interface 218 via buses.
The ROM 214 is used for storing instructions for various system starts. When a power-on signal is received and the display device 200 starts up, the CPU processor 212 executes the system startup instructions in the ROM 214 and copies the operating system stored in the memory 290 into the RAM 213 to begin running the operating system. After the operating system has started, the CPU processor 212 copies the various applications in the memory 290 into the RAM 213 and then starts running them.
The graphics processor 216 is used for generating various graphical objects, such as icons, operation menus, and graphics displayed in response to user input instructions. It comprises an operator, which performs operations by receiving the various interactive instructions input by the user and displays the various objects according to their display attributes, and a renderer, which generates the various objects based on the operator's results and displays the rendered results on the display 280.
The CPU processor 212 is used to execute the operating system and application program instructions stored in the memory 290, and to execute various applications, data, and contents according to the various interactive instructions received from the outside, so as to finally display and play various audio and video contents.
In some exemplary embodiments, the CPU processor 212 may include multiple processors, which may include one main processor and one or more sub-processors. The main processor performs some operations of the display apparatus 200 in the pre-power-up mode and/or displays pictures in the normal mode. The one or more sub-processors perform operations in standby mode and the like.
The communication interfaces may include first interface 218-1 through nth interface 218-n. These interfaces may be network interfaces that are connected to external devices via a network.
The controller 210 may control the overall operation of the display apparatus 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 280, the controller 210 may perform an operation related to the object selected by the user command.
Wherein the object may be any one of selectable objects, such as a hyperlink or an icon. Operations related to the selected object, such as: operations to connect to a hyperlink page, document, image, etc., or operations to execute a program corresponding to an icon are displayed. The user command for selecting the UI object may be an input command through various input means (e.g., mouse, keyboard, touch pad, etc.) connected to the display device 200 or a voice command corresponding to a voice uttered by the user.
Memory 290 includes memory for storing various software modules for driving and controlling display device 200. Such as: various software modules stored in memory 290, including: basic module, detection module, communication module, display control module, browser module and various service modules.
The base module is a bottom software module for signal communication between the various hardware in the display device 200 and for sending processing and control signals to the upper modules. The detection module is a management module for collecting various information from various sensors or user input interfaces, and performing digital-to-analog conversion and analysis management.
For example: the voice recognition module comprises a voice analysis module and a voice instruction database module. The display control module is a module for controlling the display 280 to display image content, and can be used for playing multimedia image content, UI interface and other information. The communication module is used for controlling and data communication with external equipment. The browser module is a module for performing data communication between the browsing servers. The service module is used for providing various services and various application programs.
Meanwhile, the memory 290 is also used to store received external data and user data, images of various items in various user interfaces, visual effect maps of focus objects, and the like.
A user input interface for transmitting an input signal of a user to the controller 210 or transmitting a signal output from the controller to the user. Illustratively, the control device (e.g., mobile terminal or remote control) may send input signals such as power switch signals, channel selection signals, volume adjustment signals, etc., input by the user to the user input interface, which may then be forwarded to the controller; alternatively, the control device may receive an output signal such as audio, video, or data, which is output from the user input interface via the controller, and display the received output signal or output the received output signal in the form of audio or vibration.
In some embodiments, a user may input a user command through a Graphical User Interface (GUI) displayed on the display 280, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
The term "user interface" in the present specification and claims and in the drawings is a media interface for interaction and exchange of information between an application program or operating system and a user, which enables conversion between an internal form of information and a form acceptable to the user. A commonly used presentation form of the user interface is a graphical user interface (graphic user interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
The video processor 260-1 is configured to receive a video signal, and perform video data processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image composition according to a standard codec protocol of an input signal, so as to obtain a video signal that is directly displayed or played on the display 280.
The video processor 260-1, by way of example, includes a demultiplexing module, a video decoding module, an image compositing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module is used for demultiplexing the input audio/video data stream, such as the input MPEG-2, and demultiplexes the input audio/video data stream into video signals, audio signals and the like.
And the video decoding module is used for processing the demultiplexed video signal, including decoding, scaling and the like.
And an image synthesis module, such as an image synthesizer, for performing superposition mixing processing on the graphic generator and the video image after the scaling processing according to the GUI signal input by the user or generated by the graphic generator, so as to generate an image signal for display.
The frame rate conversion module is configured to convert the frame rate of an input video, for example converting an input frame rate of 24 Hz, 25 Hz, 30 Hz, or 60 Hz to an output frame rate of 60 Hz, 120 Hz, or 240 Hz, where the input frame rate may be related to the source video stream and the output frame rate may be related to the refresh rate of the display. The conversion is usually implemented by frame interpolation or similar methods.
And a display formatting module for converting the signal output by the frame rate conversion module into a signal conforming to a display format such as a display, for example, format converting the signal output by the frame rate conversion module to output an RGB data signal.
The display 280 is used for receiving the image signals from the video processor 260-1 and displaying video content, images, and a menu manipulation interface. The display 280 includes a display screen assembly for presenting pictures and a drive assembly for driving the display of images. The displayed video content may come from the video in the broadcast signal received by the modem 220, or from video input from the communicator or the external device interface. The display 280 also displays the user manipulation interface (UI) generated in the display device 200 and used to control the display device 200.
And, depending on the type of display 280, a drive assembly for driving the display. Alternatively, if the display 280 is a projection display, a projection device and projection screen may be included.
The audio processor 260-2 is configured to receive the audio signal, decompress and decode according to the standard codec protocol of the input signal, and perform audio data processing such as noise reduction, digital-to-analog conversion, and amplification processing, so as to obtain an audio signal that can be played in the speaker 272.
The audio output interface 270 is used for receiving the audio signal output by the audio processor 260-2 under the control of the controller 210. The audio output interface may include a speaker 272, or an external audio output terminal 274 for output to a sound-producing device of an external apparatus, such as an external speaker terminal or an earphone output terminal.
In other exemplary embodiments, video processor 260-1 may include one or more chip components. The audio processor 260-2 may also include one or more chip components.
And, in other exemplary embodiments, the video processor 260-1 and the audio processor 260-2 may be separate chips or integrated with the controller 210 in one or more chips.
And a power supply for providing power supply support for the display device 200 with power inputted from an external power supply under the control of the controller 210. The power supply may include a built-in power circuit installed inside the display apparatus 200, or may be a power supply installed outside the display apparatus 200, such as a power interface providing an external power supply in the display apparatus 200.
A block diagram of the configuration of the control apparatus 100 according to an exemplary embodiment is exemplarily shown in fig. 3. As shown in fig. 3, the control device 100 includes a controller 110, a communicator 130, a user input/output interface 140, a memory 190, and a power supply 180.
The control apparatus 100 is configured to control the display device 200: it receives the user's input operation instructions and converts them into instructions that the display device 200 can recognize and respond to, acting as an intermediary for interaction between the user and the display device 200. For example, when the user operates the channel up/down keys on the control apparatus 100, the display device 200 responds to the channel up/down operation.
In some embodiments, the control apparatus 100 may be a smart device. Such as: the control apparatus 100 may install various applications for controlling the display device 200 according to user's needs.
In some embodiments, as shown in fig. 1, a mobile terminal 100B or other intelligent electronic device may perform a function similar to that of the control apparatus 100 after installing an application for manipulating the display device 200. For example, the user may implement the functions of the physical keys of the control apparatus 100 through the various function keys or virtual buttons of a graphical user interface provided by an application installed on the mobile terminal 100B or other intelligent electronic device.
The controller 110 includes a processor 112, RAM113 and ROM114, a communication interface, and a communication bus. The controller 110 is used to control the operation and operation of the control device 100, as well as the communication collaboration among the internal components and the external and internal data processing functions.
The communicator 130 performs communication of control signals and data signals with the display device 200 under the control of the controller 110. Such as: the received user input signal is transmitted to the display device 200. The communicator 130 may include at least one of a WIFI module 131, a bluetooth module 132, an NFC module 133, and the like.
A user input/output interface 140, wherein the input interface includes at least one of a microphone 141, a touch pad 142, a sensor 143, keys 144, etc. Such as: the user can implement a user instruction input function through actions such as voice, touch, gesture, press, and the like, and the input interface converts a received analog signal into a digital signal and converts the digital signal into a corresponding instruction signal, and sends the corresponding instruction signal to the display device 200.
The output interface includes an interface that transmits the received user instruction to the display device 200. In some embodiments, an infrared interface or a radio frequency interface may be used. For example, when the infrared signal interface is used, the user input instruction needs to be converted into an infrared control signal according to the infrared control protocol and sent to the display device 200 through the infrared sending module. As another example, when the radio frequency signal interface is used, the user input instruction is converted into a digital signal, modulated according to the radio frequency control signal modulation protocol, and then transmitted to the display device 200 through the radio frequency transmitting terminal.
In some embodiments, the control device 100 includes at least one of a communicator 130 and an output interface. When the control device 100 is provided with a communicator 130 including modules such as WiFi, Bluetooth, or NFC, the user input instruction may be sent to the display device 200 encoded via the WiFi protocol, the Bluetooth protocol, or the NFC protocol.
A memory 190 for storing various operation programs, data and applications for driving and controlling the control device 100 under the control of the controller 110. The memory 190 may store various control signal instructions input by a user.
A power supply 180 for providing operating power support for the various elements of the control device 100 under the control of the controller 110. May be a battery and associated control circuitry.
A schematic diagram of the functional configuration of the display device 200 according to an exemplary embodiment is illustrated in fig. 4. As shown in fig. 4, the memory 290 is used to store an operating system, application programs, contents, user data, and the like, and performs system operations for driving the display device 200 and various operations in response to a user under the control of the controller 210. Memory 290 may include volatile and/or nonvolatile memory.
The memory 290 is specifically used for storing an operation program for driving the controller 210 in the display device 200, and storing various application programs built in the display device 200, various application programs downloaded by a user from an external device, various graphic user interfaces related to the application programs, various objects related to the graphic user interfaces, user data information, and various internal data supporting the application programs. The memory 290 is used to store system software such as an Operating System (OS) kernel, middleware and applications, and to store input video data and audio data, as well as other user data.
The memory 290 is specifically configured to store drivers and related data for the video processor 260-1 and the audio processor 260-2, the display 280, the communication interface 230, the modem 220, the detector 240, the input/output interface, and the like.
In some embodiments, memory 290 may store software and/or programs, the software programs used to represent an Operating System (OS) including, for example: a kernel, middleware, an Application Programming Interface (API), and/or an application program. For example, the kernel may control or manage system resources, or functions implemented by other programs (such as the middleware, APIs, or application programs), and the kernel may provide interfaces to allow the middleware and APIs, or applications to access the controller to implement control or management of system resources.
By way of example, the memory 290 includes a broadcast receiving module 2901, a channel control module 2902, a volume control module 2903, an image control module 2904, a display control module 2905, an audio control module 2906, an external instruction recognition module 2907, a communication control module 2908, a light receiving module 2909, a power control module 2910, an operating system 2911, and other applications 2912, a browser module, and the like. The controller 210 executes various software programs in the memory 290 such as: broadcast television signal receiving and demodulating functions, television channel selection control functions, volume selection control functions, image control functions, display control functions, audio control functions, external instruction recognition functions, communication control functions, optical signal receiving functions, power control functions, software control platforms supporting various functions, browser functions and other applications.
A block diagram of the configuration of the software system in the display device 200 according to an exemplary embodiment is illustrated in fig. 5 a.
As shown in FIG. 5a, operating system 2911, which includes executing operating software for handling various basic system services and for performing hardware-related tasks, acts as a medium for completing data processing between application programs and hardware components.
In some embodiments, portions of the operating system kernel may contain a series of software to manage display device hardware resources and to serve other programs or software code.
In other embodiments, portions of the operating system kernel may contain one or more device drivers, which may be a set of software code in the operating system that helps operate or control the devices or hardware associated with the display device. The driver may contain code to operate video, audio and/or other multimedia components. Examples include a display screen, camera, flash, wiFi, and audio drivers.
Wherein, accessibility module 2911-1 is configured to modify or access an application program to realize accessibility of the application program and operability of display content thereof.
The communication module 2911-2 is used for connecting with other peripheral devices via related communication interfaces and communication networks.
User interface module 2911-3 is configured to provide an object for displaying a user interface for access by each application program, so as to implement user operability.
Control applications 2911-4 are used to control process management, including runtime applications, and the like.
The event delivery system 2914 may be implemented within the operating system 2911 or within the application 2912. In some embodiments it is implemented partly within the operating system 2911 and partly within the application 2912. It is used to listen for the various user input events and, according to the recognition results of the various events or sub-events, carry out one or more sets of predefined operations.
The event monitoring module 2914-1 is configured to monitor a user input interface to input an event or a sub-event.
The event recognition module 2914-1 is configured to recognize the various events or sub-events according to the definitions of the various events input through the various user input interfaces, and to transmit them to the processes that execute the corresponding set or sets of handlers.
An event or sub-event refers to an input detected by one or more sensors in the display device 200, or an input from an external control device (such as the control apparatus 100), for example: various sub-events of voice input, gesture input sub-events recognized through gesture recognition, sub-events of remote control key instruction input from a control device, and the like. By way of example, one or more sub-events from the remote control may take various forms, including but not limited to one or a combination of pressing the up/down/left/right keys or the OK key, key presses in general, and operations on non-physical keys such as move, hold, and release.
The interface layout management module 2913 directly or indirectly receives the user input events or sub-events from the event transmission system 2914, and is used for updating the layout of the user interface, including but not limited to the positions of the controls or sub-controls in the interface, and various execution operations related to the interface layout, such as the size or position of the container, the level, and the like.
As shown in fig. 5b, the application layer 2912 contains various applications that may be executed on the display device 200. Applications may include, but are not limited to, one or more applications such as: live television applications, video on demand applications, media center applications, application centers, gaming applications, etc.
Live television applications can provide live television through different signal sources. For example, a live television application may provide television signals using inputs from cable television, radio broadcast, satellite services, or other types of live television services. And, the live television application may display video of the live television signal on the display device 200.
Video on demand applications may provide video from different storage sources. Unlike live television applications, video-on-demand provides video displays from some storage sources. For example, video-on-demand may come from the server side of cloud storage, from a local hard disk storage containing stored video programs.
The media center application may provide various applications for playing multimedia content. For example, a media center may be a different service than live television or video on demand, and a user may access various images or audio through a media center application.
An application center may be provided to store various applications. The application may be a game, an application, or some other application associated with a computer system or other device but which may be run in a smart television. The application center may obtain these applications from different sources, store them in local storage, and then be run on the display device 200.
In order to increase interactivity between the intelligent television and a user, two-dimension codes can be displayed in a picture of the intelligent television, and the user can scan the two-dimension codes displayed on a screen of the intelligent television through handheld equipment such as a smart phone to realize interaction, so that user experience is improved.
However, the two-dimensional code displayed in the picture played by the smart TV is usually not clear enough, which leads to recognition failure; in particular, for pictures in a live channel or video pictures with low definition, the recognition rate of the two-dimensional code is low.
Therefore, how to facilitate the user to scan the two-dimensional code in the picture played by the smart television through the handheld device and improve the recognition rate is a technical problem to be solved at present.
To improve the recognition rate of graphic identification codes in the picture displayed on the screen of a display device, in the embodiments of the present application the original graphic identification code in the displayed picture can be identified, and a graphic identification code can be regenerated and displayed according to the identification result. Compared with the code embedded in the picture, the regenerated graphic identification code can have higher definition, making it easier for the user to scan with a handheld device and improving the recognition rate.
The graphic identification code can comprise a two-dimensional code or other forms or types of graphic identification codes.
The graphic identification code recognition method in the embodiments of the present application is described in detail below with reference to the accompanying drawings.
Fig. 6 illustrates a graphic identification code recognition flow in an embodiment of the present application. The process may be performed by a smart device, such as the display device described above.
As shown, the process may include:
s601: and receiving user input, and acquiring a screenshot picture or a thumbnail of the screenshot picture of a user interface displayed by the display.
Taking the display device as an example: in this step, when the picture played on the display screen of the display device includes a graphic identification code (such as a two-dimensional code) and the user wants to scan the two-dimensional code with a handheld device, the user can send a key command to the display device through the remote controller to trigger a screenshot instruction, or trigger the screenshot instruction by touching the screen, by gesture, or by voice input. In response to the screenshot instruction, the display device intercepts a screenshot picture, or a thumbnail of the screenshot picture, of the picture currently playing on the display screen.
The picture played on the display screen of the display device may be a still image or a dynamic video, such as a video in a live channel or a video-on-demand program being played by the display device. If the display device is currently playing a dynamic video, it intercepts the currently played video frame as the screenshot picture when responding to the user's key instruction.
S602: and identifying the original graphic identification code in the screenshot picture or the screenshot picture thumbnail of the user interface.
Optionally, before identifying the original graphic identification code in the screenshot picture or the screenshot picture thumbnail of the user interface, whether the screenshot picture or the screenshot picture thumbnail of the user interface contains the graphic identification code may be first determined, if it is determined that the screenshot picture or the screenshot picture thumbnail of the user interface contains the graphic identification code, the graphic identification code in the screenshot picture or the screenshot picture thumbnail of the user interface is identified, otherwise, the operation of the graphic identification code may be abandoned, and prompt information may be further output to prompt that the user interface does not contain the graphic identification code.
Whether the screenshot picture or its thumbnail contains a graphic identification code can be judged from the characteristic information of the code. The delay of merely detecting whether a code is present is generally smaller than the delay of fully identifying the code from the picture, so the presence check can be made quickly. When no code is present, the flow can end immediately and prompt information can be output at once, which saves system overhead, lets the user know the situation promptly, and improves the user experience.
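As an illustration of this two-stage approach, the following minimal Python sketch performs a cheap presence check before the full decode. The choice of OpenCV's QRCodeDetector for detection and of pyzbar for decoding is an assumption made for the example; the embodiments do not prescribe particular libraries.

# Sketch of a "detect first, decode only if present" pipeline.
# Library choice (OpenCV + pyzbar) is illustrative, not prescribed by the embodiment.
import cv2
from pyzbar import pyzbar


def contains_qr_code(screenshot_bgr) -> bool:
    """Cheap check: look for QR finder patterns without decoding the payload."""
    detector = cv2.QRCodeDetector()
    found, _points = detector.detect(screenshot_bgr)
    return bool(found)


def decode_qr_codes(screenshot_bgr):
    """Full decode, only run when the cheap check succeeds."""
    return [r.data.decode("utf-8") for r in pyzbar.decode(screenshot_bgr)
            if r.type == "QRCODE"]


def handle_screenshot(screenshot_bgr):
    if not contains_qr_code(screenshot_bgr):
        return None          # end the flow early and prompt the user
    return decode_qr_codes(screenshot_bgr)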
Taking the two-dimensional code as an example: a two-dimensional code (2-dimensional bar code) records data symbol information with black-and-white patterns distributed on a plane (in two dimensions) according to a specific rule. The encoding builds on the "0"/"1" bit-stream concept underlying computer logic: geometric shapes corresponding to binary values represent textual or numeric information, which can be read automatically by an image input device or a photoelectric scanning device to achieve automatic information processing.
Fig. 7 shows an example of a two-dimensional code. Position detection patterns are placed at 3 of the 4 corners of the code (for example the upper-left, upper-right, and lower-left corners). These 3 position detection patterns are characteristic information of the two-dimensional code: together they locate one code. Therefore, if 3 position detection patterns are identified in a picture, it can be determined that the picture contains a two-dimensional code.
The positioning pattern and the correction pattern of the two-dimensional code can also be used to judge whether a code exists: if the positioning pattern and/or the correction pattern are identified in the picture, the picture can be judged to contain a two-dimensional code.
In the embodiments of the present application, when identifying the graphic identification code contained in the picture displayed by the display device, a recognition algorithm with high efficiency may be used, for example the ZBar recognition algorithm. ZBar is an open-source bar-code library implemented in C, and it is fast at two-dimensional code recognition.
Optionally, if the graphic identification code is not read on the first attempt, an affine rotation may be applied to the screenshot picture or its thumbnail, and the code may be read again.
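A minimal sketch of this retry strategy is given below; the rotation angles and helper names are illustrative assumptions, and pyzbar again stands in for the recognition library.

# Retry decoding after affine rotations of the screenshot; the angle set is an
# assumed choice, not a value fixed by the embodiment.
import cv2
from pyzbar import pyzbar


def rotate(image, angle_deg):
    h, w = image.shape[:2]
    matrix = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(image, matrix, (w, h))


def decode_with_rotation_retry(image, angles=(0, 90, 180, 270)):
    for angle in angles:
        results = pyzbar.decode(rotate(image, angle) if angle else image)
        if results:
            return results
    return []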
S603: a first pattern recognition code and a second pattern recognition code are generated.
Wherein the information associated with the first graphical identification code is the same as the information associated with the original graphical identification code included in the user interface.
Specifically, in some embodiments the pattern of the first graphic identification code is identical to the pattern of the original graphic identification code in the user interface, i.e., a first code with the same pattern is generated from the pattern of the identified code. Thus, scanning the first graphic identification code with the handheld device yields the same result as scanning the graphic identification code contained in the user interface.
In other embodiments, the pattern of the first graphic identification code is different from the pattern of the original graphic identification code in the user interface. That is, the display device may regenerate a new graphic identification code from the content address (URL, Uniform Resource Locator) decoded from the original code identified in the screenshot picture or its thumbnail; the regenerated code is associated with the same information as the original code. Thus, scanning the first graphic identification code with the handheld device yields the same result as scanning the original graphic identification code contained in the user interface.
The second graphic identification code is associated with the screenshot picture or the screenshot picture thumbnail of the user interface. Thus, when the user scans the second code with the handheld device, the screenshot picture or its thumbnail can be obtained and displayed on the handheld device, so the user knows which user interface the second code came from.
In the embodiments of the present application, an efficient generation algorithm may be used to generate the graphic identification code, for example the ZXing generation algorithm. ZXing is an open-source bar-code library implemented in Java, and it is fast at two-dimensional code generation.
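The regeneration of the first code can be sketched as follows. The embodiment names ZXing (Java); the Python qrcode package is used here only as a stand-in, and the example URL is an assumption.

# Re-encode the payload decoded from the on-screen code into a clean, sharp code.
# The qrcode package substitutes for ZXing purely for illustration.
import qrcode


def regenerate_first_code(decoded_url: str, out_path: str = "first_code.png") -> str:
    img = qrcode.make(decoded_url)   # defaults produce a high-contrast QR image
    img.save(out_path)
    return out_path


# Example: take the URL decoded earlier from the screenshot and re-encode it.
regenerated = regenerate_first_code("https://example.com/activity")  # assumed URL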
S604: and displaying a floating window, wherein the first graphic identification code and the second graphic identification code generated in the step S603 are displayed in the floating window.
Optionally, the floating window is located at the uppermost layer, and the size of the floating window is smaller than the size of the user interface of the display device, so that only the user interface currently displayed by the display device (such as the currently playing video picture) is partially blocked, so as to minimize the influence on the user's viewing.
Fig. 8 is a schematic diagram of the display device identifying a two-dimensional code in a playing picture, generating new two-dimensional codes, and displaying them, when the embodiment of the present application is applied to the display device. As shown, the display device 200 is currently playing a video program. A screenshot instruction is triggered by pressing a key on the remote controller, touching the screen, or gesture or voice input. After receiving the screenshot instruction, the display device 200 obtains the currently played video frame or a screenshot picture in response, scans it for a two-dimensional code, and, if a code is recognized, generates a first two-dimensional code and a second two-dimensional code and displays the floating window 701. The floating window 701 shows a first two-dimensional code 702 and a second two-dimensional code 703.
The information associated with the first two-dimensional code 702 is the same as the information associated with the two-dimensional code identified from the video frame or screenshot picture. The second two-dimensional code 703 is associated with the video frame or screenshot picture, and the display device 200 may establish a correspondence between the second two-dimensional code 703 and that frame or picture. The correspondence may be expressed as: the correspondence between the code and the video frame or screenshot picture; the correspondence between the code and a thumbnail of the frame or picture (the thumbnail size can be set as required); the correspondence between the code and information about the frame or picture (for example which live channel and which program it came from); or a combination of the above.
If the user scans the first two-dimensional code 702 on the floating window 701 with the handheld device, the effect is the same as scanning the two-dimensional code in the video frame or screenshot picture; for example, the user is taken to the corresponding application according to the URL carried by the code. If the user scans the second two-dimensional code 703 on the floating window 701, the display device 200 may push the video frame or screenshot picture, or its thumbnail, to the handheld device for display on its screen, so that the user knows the source of the picture in which the first two-dimensional code 702 appeared.
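One way to realize the association between the second code and the captured picture is a token-to-screenshot map kept on the display device, sketched below; the URL layout, field names, and the example channel and program values are assumptions for illustration only.

# The display device remembers each captured frame under a token and encodes a URL
# containing that token into the second code; the handheld device later requests it.
import uuid
import qrcode

screenshot_store = {}   # token -> metadata about the captured frame


def register_screenshot(path: str, channel: str, program: str, base_url: str) -> str:
    """Store the captured frame and return the URL to encode in the second code."""
    token = uuid.uuid4().hex
    screenshot_store[token] = {"path": path, "channel": channel, "program": program}
    return f"{base_url}/screenshots/{token}"     # assumed URL layout


def resolve_second_code(token: str):
    """Called when the handheld device opens the second code's URL."""
    return screenshot_store.get(token)


second_code_url = register_screenshot("/cache/frame_001.png", "CCTV-1", "Evening News",
                                      "http://tv.local:8080")        # assumed values
qrcode.make(second_code_url).save("second_code.png")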
In some embodiments of the present application, the user may be allowed to enlarge the display of the first graphic identification code, which makes it easier to recognize with the handheld device and improves the recognition success rate. Specifically, the user may select the first graphic identification code and send an enlarge instruction to the display device through the remote controller, a touch operation, voice input, or the like; in response to this user input, the display device enlarges the first graphic identification code and displays it in a floating window on the display, for example a newly popped-up floating window.
In the above embodiments of the present application, after the original graphic identification code is identified from the user interface, the first and second graphic identification codes are regenerated and displayed so that the user can conveniently scan them with a handheld device. The newly generated codes have higher definition, which makes scanning easier and improves the recognition rate.
Because the information associated with the first graphic identification code is the same as the information associated with the original graphic identification code in the user interface, the result of scanning the first code with the handheld device is equivalent to the result of scanning and identifying the code in the displayed picture.
Because the second graphic identification code is associated with the user interface (for example a video frame or screenshot picture) or its thumbnail, the user can obtain the picture in which the graphic identification code was displayed by scanning the second code with the handheld device.
In other embodiments of the present application, only the first graphic identification code may be generated and displayed, without the second graphic identification code; the other processing operations may be the same as in the previous embodiments.
The picture played by a display device (such as a smart television) is usually a dynamic video picture, and its definition may be low, which affects the recognition success rate of the graphic identification code. To improve the recognition rate, some embodiments of the present application recognize the graphic identification code by scanning the image in partitions.
Specifically, the screenshot picture of the user interface, or its thumbnail (for example a live video picture played by the display device), is cut into at least two regions that partially overlap, with the union of the regions covering the scanning area of the picture; each region is then scanned separately to obtain the original graphic identification code in the user interface.
Because the union of the regions covers the scanning area of the screenshot picture or thumbnail, the whole scanning area is guaranteed to be scanned and the region containing the graphic identification code cannot be missed. Because the regions partially overlap, a code cannot be cut exactly along a region boundary and fail to be recognized.
Fig. 9 illustrates one way of partitioning the user interface. As shown, the screenshot 800 of the user interface is partitioned into an upper-left region 801 (a in the figure), an upper-right region 802 (b), a lower-left region 803 (c), and a lower-right region 804 (d). As shown at e in the figure, the right part of the upper-left region 801 overlaps the left part of the upper-right region 802; the lower part of the upper-left region 801 overlaps the upper part of the lower-left region 803; the left part of the lower-right region 804 overlaps the right part of the lower-left region 803; and the upper part of the lower-right region 804 overlaps the lower part of the upper-right region 802. The overlapping areas are shown with diagonal filling.
Fig. 10 illustrates another way of partitioning a screenshot picture, or a thumbnail of the screenshot picture, of the user interface. As shown, the picture is partitioned into an upper-left region 901, an upper-right region 902, a lower-left region 903, and a lower-right region 904 (a in the figure), plus a middle horizontal region 905 and a middle vertical region 906 (b in the figure). As shown at c in the figure, the middle horizontal region 905 partially overlaps each of regions 901 to 904, and the middle vertical region 906 also partially overlaps each of regions 901 to 904. The overlapping areas are shown with diagonal filling.
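A minimal sketch of the four-region overlapping split in the style of Fig. 9 is given below; the 25% overlap ratio is an assumed value, chosen only so that a code lying on a cut line falls entirely within at least one region.

# Split an image (numpy array) into four quadrants that overlap their neighbours.
def split_with_overlap(image, overlap_ratio=0.25):
    h, w = image.shape[:2]
    ow, oh = int(w * overlap_ratio), int(h * overlap_ratio)
    half_w, half_h = w // 2, h // 2
    return {
        "upper_left":  image[0:half_h + oh, 0:half_w + ow],
        "upper_right": image[0:half_h + oh, half_w - ow:w],
        "lower_left":  image[half_h - oh:h, 0:half_w + ow],
        "lower_right": image[half_h - oh:h, half_w - ow:w],
    }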
After the user interface is cut as shown in Fig. 9 or Fig. 10, recognition can be invoked on the regions in the order lower right, lower left, upper right, upper left. In the smart-TV scenario, the played picture (such as a live TV picture) usually contains only one graphic identification code, typically shown in the lower-right corner, so the scanning of all regions can stop as soon as a code is identified in any region.
Optionally, to ensure recognition efficiency, the scanning and recognition of different regions may be performed in different threads so that they proceed in parallel.
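A parallel, early-stopping scan over the regions produced by the split sketched above might look as follows; the thread-pool layout is an illustrative choice, and the submission order follows the lower-right-first preference described above.

# Scan the overlapping regions in worker threads and return the first decoded payload.
from concurrent.futures import ThreadPoolExecutor, as_completed
from pyzbar import pyzbar

SCAN_ORDER = ["lower_right", "lower_left", "upper_right", "upper_left"]


def scan_regions(regions: dict):
    with ThreadPoolExecutor(max_workers=len(SCAN_ORDER)) as pool:
        # Submission follows the preferred order; results are taken as they complete.
        futures = {pool.submit(pyzbar.decode, regions[name]): name
                   for name in SCAN_ORDER if name in regions}
        for future in as_completed(futures):
            results = future.result()
            if results:                       # a code was found: return its payload
                return results[0].data.decode("utf-8")
    return None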
Figs. 9 and 10 are only examples; other partitioning schemes may be used to scan the screenshot picture or thumbnail of the user interface region by region and identify the graphic identification code it contains.
Optionally, in some embodiments, after the user interface is cut into at least two regions and before the graphic identification code is identified, the image in each region may be converted to a grayscale image. Grayscale conversion reduces image noise and lowers the device's memory consumption during recognition, which helps the scanning and identification of the code.
Optionally, in some embodiments, after the screenshot picture or thumbnail is cut into at least two regions, or after each region is converted to a grayscale image, the image in each region may further be enlarged to facilitate scanning and recognition of the graphic identification code.
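The per-region preprocessing can be sketched as follows; the 2x enlargement factor and the interpolation method are assumptions made for the example.

# Grayscale conversion followed by enlargement before decoding a region.
import cv2
from pyzbar import pyzbar


def preprocess_and_decode(region_bgr, scale=2.0):
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)   # reduce noise and memory
    enlarged = cv2.resize(gray, None, fx=scale, fy=scale,
                          interpolation=cv2.INTER_CUBIC)  # easier for the decoder
    return pyzbar.decode(enlarged)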
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present application. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and such modifications and substitutions do not take the corresponding technical solutions outside the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A display device, characterized by comprising:
a display configured to display a user interface, wherein the user interface includes an original graphic identification code;
a controller in communication with the display, the controller being configured to present the user interface and to:
responding to a user input instruction, and acquiring a screenshot picture of a user interface displayed by the display or a thumbnail of the screenshot picture;
judging whether the screenshot picture or the thumbnail contains an original graphic identification code according to at least one of a position detection pattern, a positioning pattern, and a correction pattern in the screenshot picture or the thumbnail;
if so, identifying the original graphic identification code in the screenshot picture or the thumbnail, and generating at least one of a first graphic identification code and a second graphic identification code based on the original graphic identification code; wherein the pattern of the first graphic identification code is the same as or different from the pattern of the original graphic identification code, the second graphic identification code is associated with the screenshot picture or the thumbnail, and when the pattern of the first graphic identification code is different from the pattern of the original graphic identification code, the information associated with the first graphic identification code is the same as the information associated with the original graphic identification code;
displaying a floating window on the display, wherein at least one of the first graphic identification code and the second graphic identification code is displayed in the floating window.
2. The display device of claim 1, wherein the controller is further configured to:
display the first graphic identification code in an enlarged form in a floating window of the display in response to a user input selecting the first graphic identification code.
3. The display device of claim 1, wherein the controller is specifically configured to:
cutting a screenshot picture or a thumbnail of the screenshot picture of the user interface to obtain at least two areas, wherein the at least two areas are partially overlapped;
and scanning the screenshot pictures or the thumbnails of the screenshot pictures in each area respectively to obtain the graphic identification codes in the user interface.
4. A display device according to claim 3, wherein the controller is specifically configured to:
cutting the screenshot picture or the thumbnail of the screenshot picture of the user interface to obtain at least two areas, and respectively converting the screenshot picture or the thumbnail of the screenshot picture in each area into a grayscale image;
and enlarging the grayscale image in each area respectively, and scanning the enlarged grayscale images to obtain the graphic identification code.
5. The display device of any one of claims 1-4, wherein the graphical identification code is a two-dimensional code.
6. A method for identifying a graphic identification code, comprising:
responding to a user input instruction, and acquiring a screenshot picture or a thumbnail of the screenshot picture of a user interface;
judging whether the screenshot picture or the thumbnail contains an original graphic identification code according to at least one of a position detection pattern, a positioning pattern, and a correction pattern in the screenshot picture or the thumbnail;
if so, identifying the original graphic identification code in the screenshot picture or the thumbnail, and generating at least one of a first graphic identification code and a second graphic identification code based on the original graphic identification code contained in the user interface; wherein the pattern of the first graphic identification code is the same as or different from the pattern of the original graphic identification code, the second graphic identification code is associated with the screenshot picture or the thumbnail, and when the pattern of the first graphic identification code is different from the pattern of the original graphic identification code, the information associated with the first graphic identification code is the same as the information associated with the original graphic identification code; and displaying a floating window, wherein at least one of the first graphic identification code and the second graphic identification code is displayed in the floating window.
7. The method as recited in claim 6, further comprising:
displaying the first graphic identification code in an enlarged form in a floating window of the display in response to a user input selecting the first graphic identification code.
8. The method as recited in claim 6, further comprising:
cutting a screenshot picture or a thumbnail of the screenshot picture of the user interface to obtain at least two areas, wherein the at least two areas are partially overlapped;
and scanning the screenshot pictures or the thumbnails of the screenshot pictures in each area respectively to obtain the graphic identification codes in the user interface.
9. The method as recited in claim 8, further comprising:
cutting the screenshot picture or the thumbnail of the screenshot picture of the user interface to obtain at least two areas, and respectively converting the screenshot picture or the thumbnail of the screenshot picture in each area into a grayscale image;
and enlarging the grayscale image in each area respectively, and scanning the enlarged grayscale images to obtain the graphic identification code.
10. A computer storage medium having stored therein computer program instructions which, when run on a computer, cause the computer to perform the method of any of claims 6-9.
CN202010067646.7A 2020-01-20 2020-01-20 Identification method of graphic identification code and display device Active CN113141532B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010067646.7A CN113141532B (en) 2020-01-20 2020-01-20 Identification method of graphic identification code and display device

Publications (2)

Publication Number Publication Date
CN113141532A CN113141532A (en) 2021-07-20
CN113141532B (en) 2024-04-05

Family

ID=76809292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010067646.7A Active CN113141532B (en) 2020-01-20 2020-01-20 Identification method of graphic identification code and display device

Country Status (1)

Country Link
CN (1) CN113141532B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116984853B (en) * 2023-08-28 2024-05-31 佛山市爱投信息科技有限公司 Display screen assembling system and assembling process thereof

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007034998A (en) * 2005-07-25 2007-02-08 Shinsekai:Kk Two-dimensional code generation system having optional specified image as pattern
CN105138939A (en) * 2015-07-31 2015-12-09 海信集团有限公司 Method, apparatus and system for displaying two-dimensional code
CN105335771A (en) * 2015-11-12 2016-02-17 深圳Tcl数字技术有限公司 Two-dimensional code generating and displaying method and apparatus
CN108259973A (en) * 2017-12-20 2018-07-06 青岛海信电器股份有限公司 The display methods of the graphic user interface of smart television and television image sectional drawing
CN109358930A (en) * 2018-09-27 2019-02-19 深圳点猫科技有限公司 Method, electronic equipment based on linux system intelligent recognition two dimensional code
CN110147864A (en) * 2018-11-14 2019-08-20 腾讯科技(深圳)有限公司 The treating method and apparatus of coding pattern, storage medium, electronic device

Also Published As

Publication number Publication date
CN113141532A (en) 2021-07-20

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant