CN112040291A - Intelligent display method and display system - Google Patents

Intelligent display method and display system

Info

Publication number
CN112040291A
CN112040291A (application CN202011213176.7A)
Authority
CN
China
Prior art keywords
display device
frames
images
fixed visiting
fixed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011213176.7A
Other languages
Chinese (zh)
Other versions
CN112040291B (en)
Inventor
陈孝良
常乐
阮明江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing SoundAI Technology Co Ltd
Original Assignee
Beijing SoundAI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing SoundAI Technology Co Ltd filed Critical Beijing SoundAI Technology Co Ltd
Priority to CN202011213176.7A
Publication of CN112040291A
Application granted
Publication of CN112040291B
Legal status: Active

Classifications

    • H04N21/41415 — Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance, involving a public display, viewable by several users in a public space outside their home, e.g. movie theatre, information kiosk
    • G06V40/166 — Human faces: detection; localisation; normalisation using acquisition arrangements
    • H04N21/431 — Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N21/44213 — Monitoring of end-user related data
    • H04N21/44218 — Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to the technical field of electronics and provides an intelligent display method and display system. The method comprises: acquiring at least two frames of images captured, within a preset time, by an image acquisition device installed at a preset position on a display device while the display device displays current content; identifying visiting users in the at least two frames of images, and determining the number of fixed visiting users in those frames and each fixed visiting user's attention area on the display device; and controlling the display device to update the current content based on the number of fixed visiting users and each fixed visiting user's attention area.

Description

Intelligent display method and display system
Technical Field
The invention belongs to the technical field of electronics, and particularly relates to an intelligent display method and an intelligent display system.
Background
During exhibition-hall displays, a display device is usually required to present information about each exhibited object so that visiting users can better understand it.
At present, a display screen is usually installed in the exhibition hall as the display device, and preset content is played on it repeatedly and continuously. However, different visiting users pay attention to different aspects of the displayed object, while the preset content reflects only the designer's personal focus. Because the displayed content cannot be adjusted for different visiting users, the display device may keep showing content that a visiting user does not care about, or show the content a user does care about so briefly that the user must watch several playbacks to learn it. Both situations degrade the overall display effect.
Disclosure of Invention
In view of this, embodiments of the present invention provide an intelligent display method and a display system, so as to solve the technical problem that the display device in the prior art cannot adjust the display content according to different visiting users, which results in a poor overall display effect.
In a first aspect of the embodiments of the present invention, an intelligent display method is provided, including:
acquiring at least two frames of images acquired by an image acquisition device within a preset time when a display device displays current content, wherein the image acquisition device is installed at a preset position of the display device;
identifying visiting users in the at least two frames of images, and determining the number of fixed visiting users in the at least two frames of images and the attention area of each fixed visiting user to the display device;
and controlling the display device to update the current content based on the number of the fixed visiting users and the attention area of each fixed visiting user to the display device.
In a second aspect of the embodiments of the present invention, an intelligent display system is provided, which at least includes: the device comprises an image acquisition device, a control device and a display device, wherein the image acquisition device is arranged at a preset position of the display device;
the control device includes:
the image acquisition module is used for acquiring at least two frames of images acquired by the image acquisition device within preset time when the display device displays the current content;
the image identification module is used for identifying visiting users in the at least two frames of images and determining the number of fixed visiting users in the at least two frames of images and the attention area of each fixed visiting user to the display device;
and the display control module is used for controlling the display device to update the current content based on the number of the fixed visiting users and the attention area of each fixed visiting user to the display device.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above-described method.
Compared with the prior art, the embodiments of the invention have the following beneficial effects. An image acquisition device is installed at a preset position on the display device and used to capture images. While the display device displays the current content, at least two frames of images captured within a preset time are obtained; the visiting users in those frames are identified; and the number of fixed visiting users, together with each fixed visiting user's attention area on the display device, is determined, where a fixed visiting user is one who appears in all of the at least two frames with the face oriented toward the display device. The display device is then controlled to update the current content according to the number of fixed visiting users and their attention areas. With this scheme, the current content shown on the display device is determined by the visiting users' attention areas, so the display content can be adjusted for, and differ between, different visiting users. This satisfies different users' different concerns about the displayed object, improves the visiting users' viewing experience, and improves the overall display effect.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments or for the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a first schematic structural diagram of an intelligent display system according to an embodiment of the present invention;
fig. 2 is a second schematic structural diagram of an intelligent display system according to an embodiment of the present invention;
fig. 3 is a first schematic flow chart illustrating an implementation of an intelligent display method according to an embodiment of the present invention;
fig. 4 is a first schematic flow chart illustrating an implementation flow of step S22 in the intelligent display method according to the embodiment of the present invention;
fig. 5 is a schematic flow chart illustrating an implementation process of step S223 in the intelligent display method according to the embodiment of the present invention;
fig. 6 is a second schematic flow chart illustrating the implementation of step S22 in the intelligent display method according to the embodiment of the present invention;
fig. 7 is a schematic flow chart illustrating an implementation of step S226 in the intelligent display method according to the embodiment of the present invention;
fig. 8 is a first schematic flow chart illustrating an implementation process of step S23 in the intelligent display method according to the embodiment of the present invention;
fig. 9 is a second schematic flow chart illustrating the implementation of step S23 in the intelligent display method according to the embodiment of the present invention;
fig. 10 is a third schematic flow chart illustrating the implementation of step S23 in the intelligent display method according to the embodiment of the present invention;
fig. 11 is a schematic diagram of a control device in the intelligent display system according to the embodiment of the invention;
fig. 12 is a first schematic diagram of a display control module in a control device in an intelligent display system according to an embodiment of the present invention;
fig. 13 is a second schematic diagram of a display control module in a control device in the intelligent display system according to the embodiment of the present invention;
fig. 14 is a third schematic diagram of a display control module in the control device in the intelligent display system according to the embodiment of the present invention;
fig. 15 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
During exhibition-hall displays, a display device is usually required to present information about each exhibited object so that visiting users can better understand it. At present, a display screen is usually installed in the exhibition hall as the display device, and preset content is played repeatedly and continuously. However, different visiting users pay attention to different aspects of the displayed object, while the preset content reflects only the designer's personal focus, so the displayed content cannot be adjusted for different visiting users. As a result, the display device may keep showing content a visiting user does not care about, or show the content a user does care about so briefly that the user must watch several playbacks to learn it, which affects the overall display effect.
This embodiment fully considers the different concerns that different visiting users have about the displayed object and enables the display device to adjust its display content for different visiting users, thereby satisfying those different concerns, improving the visiting users' viewing experience, and improving the overall display effect.
Referring to fig. 1, a first aspect of the present embodiment provides an intelligent display system 10 that comprises at least an image acquisition device 11, a display device 12, a control device 13 and a sound amplification device 14. The control device 13 is connected to the image acquisition device 11, the display device 12 and the sound amplification device 14, and the image acquisition device 11 is installed at a preset position on the display device 12 so that it can capture images containing the faces of visiting users. In the embodiment shown in fig. 1, the image acquisition device 11, the control device 13 and the sound amplification device 14 may all be mounted on the display device 12; the display device displays the content while the sound amplification device mounted on it plays explanatory speech for the displayed content. Referring to fig. 2, the intelligent display system may instead comprise a mobile robot, an image acquisition device 11 and a display device 12, where the mobile robot carries the sound amplification device 14, the control device 13 and a moving device 15; the display device 12 displays the content while the mobile robot's sound amplification device gives the voice explanation corresponding to the displayed content.
The sound amplification device 14 comprises one or more speakers and performs amplified playback under the control of the control device 13. When there are several sound amplification devices 14, they may be placed at different positions in the exhibition hall; likewise, several speakers may be mounted at different positions on the mobile robot and oriented in different directions so as to project sound in those directions. The sound amplification device 14 may be fixed relative to the mobile robot, or may rotate relative to it to adjust its orientation. The moving device 15 comprises universal wheels installed at the bottom of the mobile robot and can move in any direction under the control of the control device 13.
The image acquisition device 11 may be one or more cameras. When the display device 12 is a display screen, image acquisition devices 11 may be placed at its four corners so that they can capture the face orientation and eye orientation of visiting users. Of course, the image acquisition device 11 may be positioned according to the shape and installation position of the display device 12, as long as it can capture the head information of visiting users; this embodiment places no particular limitation on its position.
Referring to fig. 3, a second aspect of the present embodiment provides an intelligent display method, which can be implemented by the intelligent display system described above or in other manners. The following describes the method by taking the intelligent display system as an example. The intelligent display method may include the following steps:
step S21: the method comprises the steps of obtaining at least two frames of images collected by an image collecting device within a preset time when a display device displays current content, wherein the image collecting device is installed at a preset position of the display device.
The image acquisition device 11 is installed at a preset position on the display device 12, namely a position from which it can capture the face orientation and eye orientation of visiting users, so that whether a visiting user is watching the current content displayed by the display device 12 can be determined from the captured images. The image acquisition device 11 may be a still camera or a video camera: for a still camera, a capture frequency is set; for a video camera, the captured video is split into frames to obtain images. The display device 12 may be a large-screen display with a diagonal size of more than 40 inches, and, to meet the needs of the display scene, multiple displays may be combined into one spliced large screen. The preset position may be on the periphery of the display device 12, for example directly above it, at its four corners, or at two of its corners. Some visiting users move continuously through the exhibition hall, i.e. they merely pass in front of the display device 12 without stopping to watch the displayed content; therefore at least two frames of images are needed to accurately determine which visiting users are paying attention to the current content. Moreover, if the displayed content changed between frames, the visiting users' attention areas could not be determined, so the images must be acquired while the display device 12 displays the same content.
Further, considering that a visiting user who is paying attention to the displayed content will not change behaviour within a short time, at least two frames captured by the image acquisition device 11 within a preset time are acquired. The preset time can be set empirically, for example to 2 seconds, and the capture frequency of the image acquisition device 11 can be set so as to obtain the required images within that time.
It is understood that the method of this embodiment is triggered each time the current content displayed by the display device 12 is updated; therefore step S21 may be: in response to a received content update instruction, acquiring at least two frames of images captured by the image acquisition device 11 within a preset time while the display device 12 displays the current content.
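As an illustrative sketch only (not part of the patent's disclosure), the acquisition logic of step S21 could look like the following Python; the frame source, the 2-second preset time, and the capture frequency are all assumptions:

```python
import time

def collect_frames(frame_source, preset_seconds=2.0, capture_hz=5):
    """Step S21 sketch: sample `frame_source` (a callable returning one
    captured frame) for `preset_seconds` at roughly `capture_hz`, and
    require at least two frames so attention can be judged across frames."""
    frames = []
    deadline = time.monotonic() + preset_seconds
    while time.monotonic() < deadline:
        frames.append(frame_source())
        time.sleep(1.0 / capture_hz)
    if len(frames) < 2:
        raise ValueError("step S21 requires at least two frames")
    return frames
```

In a real system `frame_source` would wrap a camera driver; here it is left abstract so the timing logic stands alone.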
Step S22: and identifying visiting users in the at least two frames of images, and determining the number of fixed visiting users in the at least two frames of images and the attention area of each fixed visiting user to the display device.
The fixed visiting users are determined by recognizing the at least two frames of images: a fixed visiting user is a visiting user who appears in all of the at least two frames with the face oriented toward the display device 12. The number of fixed visiting users is counted, and each fixed visiting user's attention area on the display device 12 is determined from the recognized image information, where the attention area is the area of the display device 12 that the fixed visiting user is watching. Specifically, when the display device 12 is a large-screen display, a fixed visiting user's face rotates in different directions and by different angles when watching different areas of the screen. For example, if the screen is divided into a left area and a right area, a fixed visiting user standing in the middle of the area in front of the screen turns the face by a certain angle toward the direction of the left area to watch the left area, and toward the direction of the right area to watch the right area. A fixed visiting user standing in front of the screen but off-center turns the face in the same direction but by different angles when watching different areas; the watched area can therefore be determined from the rotation angle of the fixed visiting user's face. The watched area can of course also be determined from the rotation direction and angle of the fixed visiting user's eyes.
Furthermore, when the fixed visiting users are several people visiting together, one of them often points at a certain area of the large-screen display while discussing it with companions. The fixed visiting user's gesture can therefore be recognized: the area the gesture points at is determined from the user's position and the direction and angle of the gesture, and that area is taken as the area of the large-screen display the fixed visiting user is watching. In one possible case, the watched area is determined accurately by combining gesture recognition with recognition of the rotation direction and angle of the fixed visiting user's face.
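Purely as a hedged illustration of mapping a recognized face rotation to an attention area (the four-quadrant division, the sign conventions, and the yaw/pitch inputs are assumptions, not the patent's specification):

```python
def attention_region(yaw_deg: float, pitch_deg: float) -> str:
    """Map a fixed visiting user's face rotation to one of four screen
    areas. Assumed conventions: positive yaw means the face is turned
    toward the display's right half, positive pitch toward its upper half."""
    horizontal = "right" if yaw_deg >= 0 else "left"
    vertical = "upper" if pitch_deg >= 0 else "lower"
    return f"{vertical} {horizontal}"
```

For example, a face rotated 45 degrees toward the upper left would map to the "upper left" area under these conventions.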
Step S23: and controlling the display device to update the current content based on the number of the fixed visiting users and the attention area of each fixed visiting user to the display device.
To meet different visiting users' different requirements on the displayed content, the display device 12 is controlled to update the current content according to the number of fixed visiting users and their attention areas on the display device 12. Specifically, the update may highlight the current content displayed in the attention area; when several attention areas exist, it may highlight the content in all of them, or number the attention areas in advance and highlight each in turn in numbering order. Alternatively, an attention value may be determined for each area of the display device from the number of fixed visiting users and each fixed visiting user's attention area, where the attention value represents how many fixed visiting users attend to that area. For example, if area A is the attention area of 5 fixed visiting users, area B of 7, and area C of 15, then the attention values of areas A, B and C are 5, 7 and 15 respectively. The current content displayed in each attention area is then highlighted and/or explained by voice in descending order of attention value: first the content in area C, then the content in area B, and finally the content in area A.
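The attention-value ordering of step S23 can be sketched as follows; this is a minimal illustration, with the region labels and the tally-by-count approach chosen to match the A/B/C example:

```python
from collections import Counter

def highlight_order(user_attention_areas):
    """Step S23 sketch: count one attention value per area (the number of
    fixed visiting users attending to it) and return the areas in
    descending order of attention value, i.e. the order in which their
    content would be highlighted and/or explained by voice."""
    attention_values = Counter(user_attention_areas)
    return [area for area, _ in attention_values.most_common()]
```

With 5 users attending to area A, 7 to B and 15 to C, the function returns C, B, A, matching the order described above.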
The intelligent display method provided by this embodiment has the following beneficial effects. The image acquisition device 11 is installed at a preset position on the display device and used to capture images. While the display device 12 displays the current content, at least two frames captured by the image acquisition device 11 within a preset time are acquired; the visiting users in those frames are identified; and the number of fixed visiting users, together with each fixed visiting user's attention area on the display device, is determined, where a fixed visiting user is one who appears in all of the at least two frames with the face oriented toward the display device. The generation of a content update instruction is then controlled according to the number of fixed visiting users and their attention areas, where the content update instruction is used to make the display device update the current content. With this scheme, the current content shown on the display device 12 is determined by the visiting users' attention areas, so the display content can be adjusted for, and differ between, different visiting users, satisfying their different concerns about the displayed object, improving their viewing experience, and improving the overall display effect.
In one possible implementation, referring to fig. 4, step S22 of identifying the visiting users in the at least two frames of images and determining the number of fixed visiting users and each fixed visiting user's attention area on the display device comprises:
step S221, performing face recognition on the visiting users in the at least two frames of images to determine the number of the fixed visiting users in the at least two frames of images.
To ensure that the intelligently displayed content is determined from the attention areas of visiting users who actually watch it, the visiting users need to be screened. Face recognition is performed on the visiting users in the at least two frames of images to find the visiting users who appear in all of them and whose faces are oriented toward the display device 12, i.e. the fixed visiting users, and their number is then counted.
It should be noted that when two frames are acquired, the faces in each frame may be recognized separately to determine the fixed visiting users who appear in both frames facing the display device 12. When more than two frames are acquired, since the image acquisition device 11 captures them within a short preset time, the visiting users who appear in all frames with faces oriented toward the display device 12 can be determined as fixed visiting users.
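A minimal sketch of the screening in step S221, assuming a hypothetical upstream face recognizer that yields, per frame, each recognized user id together with a flag for whether that face is oriented toward the display:

```python
def fixed_visiting_users(per_frame_detections):
    """A fixed visiting user must appear in every frame with the face
    oriented toward the display device. `per_frame_detections` is a list
    of dicts (one per frame) mapping user id -> facing-the-display flag."""
    if len(per_frame_detections) < 2:
        raise ValueError("need detections from at least two frames")
    facing_sets = [
        {uid for uid, facing in frame.items() if facing}
        for frame in per_frame_detections
    ]
    # Intersection across frames: present in every frame, always facing.
    return set.intersection(*facing_sets)
```

The number of fixed visiting users is then simply the size of the returned set.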
In step S222, the face orientation of each of the fixed visiting users in the at least two frames of images is determined.
When a fixed visiting user focuses on display content in different areas of the display device 12, the user's face is oriented in different directions; that is, the rotation direction and rotation angle of the face differ, and different rotation directions and angles correspond to different regions of interest. The region of interest may be divided into two regions, namely a left region and a right region, or into four regions, namely an upper left region, a lower left region, an upper right region and a lower right region.
Step S223 is to determine a region of interest of each of the fixed visiting users to the display device based on the face orientations of the fixed visiting users in the at least two frames of images.
Different fixed visiting users pay attention to different display contents, and when the face orientations of the fixed visiting users differ, their attention areas on the display device 12 differ. For example, when a fixed visiting user's face is rotated 45 degrees to the upper left, that user's attention area corresponds to the upper left region of the display device 12.
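The mapping from face rotation to region can be sketched as follows. This is an illustrative sketch under an assumed convention (not specified by the patent): the face orientation is expressed as a yaw and a pitch in degrees relative to looking at the display head-on, with positive yaw meaning rotated to the left and positive pitch meaning rotated upward, and the display divided into four quadrants.

```python
# Sketch (assumed convention): yaw > 0 means the face is rotated to the
# left, pitch > 0 means rotated upward; the display is divided into four
# quadrant regions of interest.

def region_of_interest(yaw_deg, pitch_deg):
    horiz = "left" if yaw_deg > 0 else "right"
    vert = "upper" if pitch_deg > 0 else "lower"
    return f"{vert} {horiz}"

# A face rotated 45 degrees to the upper left maps to the upper left region.
print(region_of_interest(45, 30))   # upper left
print(region_of_interest(-10, -5))  # lower right
```

A two-region (left/right) division would use only the yaw component.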
Referring to fig. 5, further, the step S223 of determining the attention area of each of the fixed visiting users to the display device 12 based on the face orientation of each of the fixed visiting users in the at least two frames of images includes:
Step S2231, determining a first difference value corresponding to the face orientation of each of the fixed visiting users in the at least two frames of images based on the face orientation of each of the fixed visiting users in the at least two frames of images.
There are cases where a fixed visiting user merely glances across the current content displayed on the display device 12. Therefore, in order to determine the attention area more accurately, the fixed visiting users need to be screened, and a first difference value corresponding to the face orientation of each fixed visiting user in the at least two frames of images must first be determined.
Specifically, when the at least two frames of images are captured by the same fixedly installed image acquisition device 11, the field of view does not change, so the first difference value can be determined from pixel coordinates in the at least two frames. For example, a head detection box is determined for each fixed visiting user, the pixel coordinates of the center point of the head detection box are taken as the pixel coordinates of that user, and the first difference value is determined from the difference between the center points of the user's detection boxes in the two frames. Alternatively, the spatial coordinates of the fixed visiting user's head may be determined from the pixel coordinates and the parameter information of the image acquisition device 11, and the corresponding first difference value determined from the spatial coordinates in the at least two frames. It should be noted that when more than two frames are acquired, the pixel coordinates of the fixed visiting user in the first frame may be compared with those in the most recently acquired frame to determine the first difference value.
Step S2232, determining whether the first difference is smaller than a first preset threshold, if so, performing step S2233, otherwise, performing step S2234.
Step S2233, the fixed visiting user has an attention area on the display device, and the area of the display device corresponding to the fixed visiting user's face orientation is determined as that user's attention area on the display device.
At this time, the first difference value corresponding to the fixed visiting user's face orientation in the at least two frames of images is smaller than the first preset threshold; that is, the area of the current display content that the user is focusing on has not changed, so the area of the display device 12 corresponding to the user's face orientation is determined as that user's attention area on the display device 12.
At step S2234, the fixed visiting user has no attention area on the display device 12.
At this time, the first difference value corresponding to the fixed visiting user's face orientation in the at least two frames of images is not smaller than the first preset threshold; that is, the user changed their attention area within the time in which the image acquisition device 11 acquired the at least two frames, so the fixed visiting user is regarded as having no attention area on the display device 12.
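Steps S2231–S2234 can be sketched as follows. This is an illustrative sketch, not the patented implementation: it assumes each head detection box is given as pixel coordinates (x_min, y_min, x_max, y_max), takes the first difference value to be the Euclidean distance between box centers in the earliest and latest frames, and compares it with a threshold whose value here is purely illustrative.

```python
# Sketch of steps S2231-S2234 (assumed representation): head detection
# boxes are (x_min, y_min, x_max, y_max) in pixels; the first difference
# is the distance between box centers across frames.
import math

def box_center(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def first_difference(box_first, box_last):
    (xa, ya), (xb, yb) = box_center(box_first), box_center(box_last)
    return math.hypot(xb - xa, yb - ya)

def has_attention_area(box_first, box_last, threshold):
    """True if the user's face orientation stayed stable (S2233);
    False means the user is screened out (S2234)."""
    return first_difference(box_first, box_last) < threshold

d = first_difference((100, 100, 140, 140), (103, 104, 143, 144))
print(round(d, 2))  # 5.0
print(has_attention_area((100, 100, 140, 140), (103, 104, 143, 144), 10.0))  # True
```

The same screening applies when spatial coordinates are used instead of pixel coordinates; only the coordinate frame changes.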
In another possible implementation manner, referring to fig. 6, the step S22 of identifying the visiting users in the at least two frames of images, and determining the number of fixed visiting users in the at least two frames of images and the attention area of each of the fixed visiting users to the display device includes:
step S224, performing face recognition on the visiting users in the at least two frames of images to determine the number of the fixed visiting users in the at least two frames of images.
Step S225, determining the eye orientation corresponding to each of the fixed visiting users in the at least two frames of images.
Step S226, determining a region of interest of each of the fixed visiting users to the display device based on the eye orientations of the fixed visiting users in the at least two frames of images.
In this implementation, the attention area of each fixed visiting user on the display device 12 is determined according to that user's eye orientation in the at least two frames of images.
Referring to fig. 7, further, the step S226 of determining the attention area of each of the fixed visiting users to the display device based on the corresponding eye orientations of the fixed visiting users in the at least two frames of images includes:
step S2261: determining a second difference value corresponding to the eye orientation of each fixed visiting user in the at least two frames of images based on the eye orientation of each fixed visiting user in the at least two frames of images.
When the at least two frames of images are captured by the same fixedly installed image acquisition device 11, the second difference value can be determined from the pixel coordinates of the fixed visiting user's eyes in the at least two frames. Alternatively, the pixel coordinates may be converted into corresponding spatial coordinates according to the parameter information of the image acquisition device 11, and the second difference value of the fixed visiting user's eye orientation determined from the spatial coordinates.
Step S2262: and judging whether the second difference is smaller than a second preset threshold value, if so, executing step S2263, otherwise, executing step S2264.
Step S2263: and the fixed visiting user has an attention area to the display device, and the attention area of the fixed visiting user to the display device is determined according to the eye orientation of the fixed visiting user.
At this time, the second difference value corresponding to the fixed visiting user's eye orientation in the at least two frames of images is smaller than the second preset threshold; that is, the user's attention area on the display device 12 has not changed. The user's viewpoint on the display device 12 is then determined according to the eye orientation, and the area where the viewpoint falls is determined as that user's attention area on the display device 12.
Step S2264: the stationary visiting user has no area of interest to the display device.
At this time, the second difference value corresponding to the fixed visiting user's eye orientation in the at least two frames of images is not smaller than the second preset threshold; that is, the user's attention area changed within the time in which the image acquisition device 11 acquired the images, so the user may merely have been glancing across the screen. To ensure that the content displayed by the display device 12 accurately reflects the attention area of each fixed visiting user, this user is determined to have no attention area on the display device 12; that is, this fixed visiting user is screened out.
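The viewpoint determination of step S2263 can be sketched as follows. This is an illustrative sketch under assumed geometry not specified by the patent: the display lies in the plane z = 0 with its origin at the top-left corner (x rightward, y downward, units in metres), the eye position and gaze direction are expressed in that frame, the viewpoint is where the gaze ray meets the display plane, and the attention area is the quadrant containing the viewpoint.

```python
# Sketch (assumed geometry): display plane is z = 0, origin at top-left,
# x rightward, y downward; eye position and gaze direction share this frame.

def viewpoint_on_display(eye_pos, gaze_dir):
    """Intersect the gaze ray with the plane z = 0; (x, y) or None."""
    ex, ey, ez = eye_pos
    dx, dy, dz = gaze_dir
    if dz == 0:
        return None  # gaze parallel to the display plane
    t = -ez / dz
    if t <= 0:
        return None  # display is behind the user
    return (ex + t * dx, ey + t * dy)

def quadrant(point, width, height):
    x, y = point
    horiz = "left" if x < width / 2 else "right"
    vert = "upper" if y < height / 2 else "lower"  # y grows downward
    return f"{vert} {horiz}"

vp = viewpoint_on_display((0.5, 0.5, 2.0), (0.0, -0.1, -1.0))
print((round(vp[0], 2), round(vp[1], 2)))  # (0.5, 0.3)
print(quadrant(vp, 2.0, 1.2))              # upper left
```

A user whose viewpoint falls off the display, or whose gaze never intersects it, would likewise be screened out.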
Referring to fig. 8, further, the step S23 of controlling the display device to update the current content based on the number of the fixed visiting users and the attention area of each of the fixed visiting users to the display device includes:
step S231, determining attention values respectively corresponding to all areas of the display device based on the number of the fixed visiting users and the attention areas of all the fixed visiting users to the display device;
step S232, regarding the attention area with the highest attention value as the most attention area;
step S233, determining the updated content based on the current content displayed in the region of greatest interest, and controlling the display device to display the updated content.
Different fixed visiting users have different attention areas on the display device 12, and to meet the needs of most fixed visiting users, attention values corresponding to the areas of the display device 12 are determined according to the number of fixed visiting users and their attention areas. Specifically, each time the attention area of one fixed visiting user is determined, the attention value of that area is incremented by one; once the attention areas of all fixed visiting users have been determined, the attention values of all areas of the display device 12 are known. The attention values are then sorted, the area with the highest attention value is determined as the most-attention area according to the sorting result, the updated content is determined according to the current content displayed in the most-attention area, and the display device is controlled to display the updated content. The update may include enlarging the current content of the most-attention area from a partial display to a full-screen display as the updated content, highlighting it as the updated content, or doing both. Of course, other update manners may also exist, such as determining content to be displayed according to the determined updated content, where the content to be displayed is content not present in the current content of the display device 12, i.e., content yet to be displayed on the display device 12.
Different link contents may also be set in advance for the display contents of different areas of the display device 12; when the attention areas of fixed visiting users differ, different content to be displayed is determined according to the link content corresponding to the current content of the attention area. By setting different link contents for the display contents of different areas, the display content can be determined according to the fixed visiting users, and even when all the display content is the same, the display order can be adjusted according to their attention. For example, if there are 15 fixed visiting users, of whom 10 focus on the left area and 5 on the right area, the left area of the display device 12 is the most-attention area, and the content of the left area may be displayed full screen, highlighted, or used to determine the content to be displayed.
In one possible case, at least two areas are tied as the most-attention area, that is, their attention values are equal and are the maximum. In this case, at least two new frames of images may be acquired and the most-attention area determined from them: if a unique most-attention area exists, the display device 12 is controlled to update the current content according to the display content of that area; if at least two areas are still tied, they may be highlighted in turn according to the numbering order of the areas of the display device 12, with the highlighted content played aloud each time. Alternatively, when at least two most-attention areas exist, it may be considered that no most-attention area exists in the current content displayed on the display device 12, and the content to be displayed is determined according to a preset playing order.
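Steps S231–S233, including the tie case just described, can be sketched as follows. This is an illustrative sketch, not the claimed implementation: each fixed visiting user contributes one vote to their attention area, users with no attention area (screened out above) are skipped, and ties are reported to the caller so it can re-acquire images or fall back to the preset playing order.

```python
# Sketch of steps S231-S233 (assumed representation): one vote per fixed
# visiting user; None marks a user with no attention area (screened out).
from collections import Counter

def most_attention_regions(regions_of_interest):
    """Return the region(s) with the highest attention value, sorted;
    more than one entry signals a tie for the caller to resolve."""
    votes = Counter(r for r in regions_of_interest if r is not None)
    if not votes:
        return []
    top = max(votes.values())
    return sorted(r for r, v in votes.items() if v == top)

# The 15-user example from above: 10 focus left, 5 focus right.
users = ["left"] * 10 + ["right"] * 5
print(most_attention_regions(users))            # ['left']
print(most_attention_regions(["left", "right"]))  # ['left', 'right']  (tie)
```

A single-element result drives the content update directly; a multi-element result triggers the tie-handling branch.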
When the display device 12 is used to display content, a sound reinforcement device is often provided for audio playback. Referring to fig. 9, the method further includes: step S234: controlling the sound reinforcement device to play the voice commentary corresponding to the updated content.
In the above embodiment, not only the intelligent display of the display device 12 but also commentary coordinated with the intelligent display is realized. The updated content is determined according to the current content displayed in the most-attention area of the display device 12, and a sound amplification instruction is generated so that the sound reinforcement device performs playback. Specifically, the sound reinforcement device may directly read out the text information of the updated content displayed by the display device 12, or the voice commentary corresponding to the display content may be determined in advance and, after the updated content is determined, the corresponding voice commentary is played. In this way the fixed visiting users can gain a clear and comprehensive understanding of the displayed content, improving the visiting users' experience and the overall display effect.
When the commentary is the responsibility of a human commentator, the commentator can explain according to the display content of the display device 12; the commentary is thus driven by the attention areas of the fixed visiting users rather than by the commentator, which prevents the commentator from explaining only according to his or her own points of interest. Of course, after the current content displayed on the display device 12 is updated, the commentator may give the corresponding commentary; a voice acquisition device collects the commentator's voice information and sends it to the control device 13, and the control device 13 performs recognition, issues a sound amplification instruction, and controls the sound amplification device 14 to play the commentator's voice information.
When the sound reinforcement apparatus is installed on the mobile robot, please refer to fig. 10, further, before the step S234 controls the sound reinforcement apparatus to play the voice commentary corresponding to the updated content, the method further includes:
step S235: determining a real-time distance of the mobile robot from the stationary visiting user.
Using the mobile robot for commentary avoids a poor commentary effect caused by the commentator's condition (such as fatigue or illness). When a mobile robot is used for commentary, the distance between the robot and the fixed visiting users needs to be controlled, so the real-time distance between the mobile robot and the fixed visiting users must be determined.
Specifically, the mobile robot is provided with a binocular camera, that is, a left camera and a right camera, and the real-time distance between the mobile robot and the fixed visiting user is obtained using the triangulation ranging principle. Alternatively, the real-time distance can be determined using infrared or laser ranging; in other embodiments, the real-time distance between the mobile robot and the fixed visiting user may be obtained by other methods, which this embodiment does not limit.
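The binocular triangulation mentioned above can be sketched as follows, under the usual rectified-stereo-pair assumption (the patent does not specify the camera model): depth equals focal length in pixels times baseline divided by disparity, where the disparity is the horizontal pixel offset of the user's head between the left and right camera images. All parameter values below are illustrative.

```python
# Sketch of binocular triangulation for a rectified stereo pair:
# depth = focal_px * baseline / disparity.

def stereo_distance(focal_px, baseline_m, x_left_px, x_right_px):
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_px * baseline_m / disparity

# 700 px focal length, 12 cm baseline, 28 px disparity -> 3.0 m
print(stereo_distance(700.0, 0.12, 420.0, 392.0))  # 3.0
```

A real system would first rectify the images and match the head position across the pair; the formula above is the final depth step.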
Step S236: and controlling the movable robot to adjust the position of the movable robot relative to the fixed visiting user based on the real-time distance and the preset distance.
A suitable commentary distance between the mobile robot and the fixed visiting users, namely the preset distance, is set in advance; within the preset distance the fixed visiting users can hear the voice commentary played by the mobile robot's sound reinforcement device clearly. Specifically, it is judged whether the real-time distance between the mobile robot and the fixed visiting users is greater than the preset distance; if so, the moving device 15 of the mobile robot is controlled to approach the fixed visiting users; otherwise, the mobile robot's position is kept unchanged, or the robot is moved away from the fixed visiting users. By continuously adjusting the relative position of the mobile robot and the fixed visiting users, the voice commentary can be heard clearly. Of course, to make the commentary more vivid, the mobile robot may execute corresponding actions while playing specific commentary, such as "move forward", "move backward", "move left", "move right" or "turn", making the commentary more vivid and interesting while ensuring the amplification effect, improving the commentary effect and the display experience of the fixed visiting users.
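The position adjustment of step S236 can be sketched as follows. This is an illustrative sketch of the decision logic only, with an assumed tolerance band (not in the patent) added so the robot does not oscillate around the preset distance; the actual drive commands would go to the moving device 15.

```python
# Sketch of step S236 (assumed control interface): compare the measured
# real-time distance with the preset commentary distance; the tolerance
# band is illustrative and prevents oscillation near the preset distance.

def position_command(real_time_m, preset_m, tolerance_m=0.2):
    if real_time_m > preset_m + tolerance_m:
        return "approach"  # too far: move toward the fixed visiting user
    if real_time_m < preset_m - tolerance_m:
        return "retreat"   # too close: move away
    return "hold"          # within the comfortable commentary range

print(position_command(3.5, 2.0))  # approach
print(position_command(1.5, 2.0))  # retreat
print(position_command(2.1, 2.0))  # hold
```

Running this decision on every ranging update keeps the robot near the preset commentary distance.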
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Referring to fig. 11, based on the same inventive concept, in the intelligent display system 10 provided in the embodiment of the present invention, the control device 13 includes an image obtaining module 131, an image recognizing module 132, and a display control module 133. The image acquiring module 131 is configured to acquire at least two frames of images acquired by the image acquiring device 11 within a preset time when the display device 12 displays the current content; an image identification module 132, configured to identify visiting users in the at least two frames of images, and determine the number of fixed visiting users in the at least two frames of images and a region of interest of each of the fixed visiting users to the display apparatus 12; a display control module 133, configured to control the display apparatus 12 to update the current content based on the number of the fixed visiting users and the attention area of each of the fixed visiting users to the display apparatus 12.
Referring to fig. 12, further, the display control module 133 includes:
an attention value determining unit 1331, configured to determine attention values respectively corresponding to the regions of the display apparatus based on the number of the fixed visiting users and the attention regions of the fixed visiting users to the display apparatus;
a region selecting unit 1332, configured to use a region of interest with the highest attention value as a most attention region;
a display control unit 1333, configured to determine updated content based on the current content displayed in the region of greatest interest, and control the display device to display the updated content.
Referring to fig. 13, further, the display control module 133 further includes:
and an audio playback processing unit 1334, configured to control the audio playback apparatus to play the voice commentary corresponding to the updated content.
Referring to fig. 14, further, the display control module 133 further includes a distance determination unit 1335 and a position control unit 1336, where the distance determination unit 1335 is configured to determine the real-time distance between the mobile robot and the fixed visiting user, and the position control unit 1336 is configured to control the mobile robot to adjust its position relative to the fixed visiting user based on the real-time distance and the preset distance.
Of course, in other embodiments, each module of the control device 13 may further include one or more units for implementing corresponding functions, which are not described herein again.
Fig. 15 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 15, the terminal device 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62, such as a sound amplification program based on image recognition, stored in said memory 61 and executable on said processor 60. The processor 60, when executing the computer program 62, implements the steps in the various smart display method embodiments described above, such as the steps S21-S23 shown in fig. 3. Alternatively, the processor 60, when executing the computer program 62, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the modules 131 to 133 shown in fig. 11.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 62 in the terminal device 6.
The terminal device 6 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor 60, a memory 61. Those skilled in the art will appreciate that fig. 15 is merely an example of a terminal device 6 and does not constitute a limitation of terminal device 6 and may include more or fewer components than shown, or some components may be combined, or different components, for example, the terminal device may also include input output devices, network access devices, buses, etc.
The Processor 60 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used for storing the computer programs and other programs and data required by the terminal device. The memory 61 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (14)

1. An intelligent display method, comprising:
acquiring at least two frames of images acquired by an image acquisition device within a preset time when a display device displays current content, wherein the image acquisition device is installed at a preset position of the display device;
identifying visiting users in the at least two frames of images, and determining the number of fixed visiting users in the at least two frames of images and the attention area of each fixed visiting user to the display device;
and controlling the display device to update the current content based on the number of the fixed visiting users and the attention area of each fixed visiting user to the display device.
2. The intelligent display method as claimed in claim 1, wherein the step of identifying the visiting users in the at least two frames of images, and determining the number of fixed visiting users in the at least two frames of images and the attention area of each fixed visiting user to the display device comprises:
performing face recognition on visiting users in the at least two frames of images to determine the number of fixed visiting users in the at least two frames of images;
determining the face orientation of each fixed visiting user in the at least two frames of images respectively;
determining the attention area of each fixed visiting user to the display device based on the face orientation of each fixed visiting user in the at least two frames of images.
3. The intelligent display method as claimed in claim 2, wherein the step of determining the attention area of each of the fixed visiting users to the display device based on the face orientation of each of the fixed visiting users in the at least two frames of images comprises:
determining a first difference value corresponding to the face orientation of each fixed visiting user in the at least two frames of images based on the face orientation of each fixed visiting user in the at least two frames of images;
if a first difference value corresponding to the face orientation of the fixed visiting user in the at least two frames of images is not smaller than a first preset threshold value, the fixed visiting user does not have an attention area on the display device;
if the first difference value corresponding to the face orientation of the fixed visiting user in the at least two frames of images is smaller than the first preset threshold value, the fixed visiting user has an attention area to the display device, and the area corresponding to the face orientation of the fixed visiting user in the display device is determined as the attention area of the fixed visiting user to the display device.
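The threshold test of claim 3 can be sketched with a yaw angle standing in for "face orientation": if the orientation changes too much between frames, the visitor has no attention area; otherwise the stable orientation is mapped onto a display region. The angle representation, threshold, and region boundaries are illustrative assumptions.

```python
from typing import List, Optional, Sequence, Tuple

def attention_region(yaw_per_frame: List[float],
                     threshold_deg: float,
                     screen_regions: Sequence[Tuple[Tuple[float, float], str]]) -> Optional[str]:
    """Sketch of claim 3: a visitor has an attention area only if their face
    orientation (here, a yaw angle in degrees) stays stable across frames."""
    diff = max(yaw_per_frame) - min(yaw_per_frame)  # first difference value
    if diff >= threshold_deg:
        return None  # orientation changed too much: no attention area
    mean_yaw = sum(yaw_per_frame) / len(yaw_per_frame)
    # Map the stable orientation onto one of the display's regions.
    for (low, high), region in screen_regions:
        if low <= mean_yaw < high:
            return region
    return None  # facing away from the display entirely

regions = [((-45, -15), "left"), ((-15, 15), "center"), ((15, 45), "right")]
print(attention_region([2.0, 4.0], threshold_deg=10, screen_regions=regions))    # → center
print(attention_region([-30.0, 20.0], threshold_deg=10, screen_regions=regions))  # → None
```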
4. The intelligent display method as claimed in claim 1, wherein the step of identifying the visiting users in the at least two frames of images, and determining the number of fixed visiting users in the at least two frames of images and the attention area of each fixed visiting user to the display device comprises:
performing face recognition on visiting users in the at least two frames of images to determine the number of fixed visiting users in the at least two frames of images;
determining the eye orientation of each fixed visiting user in the at least two frames of images respectively;
determining the attention area of each fixed visiting user to the display device based on the eye orientation of each fixed visiting user in the at least two frames of images.
5. The intelligent display method as claimed in claim 4, wherein the step of determining the attention area of each of the fixed visiting users to the display device based on the eye orientation of each of the fixed visiting users in the at least two frames of images comprises:
determining a second difference value corresponding to the eye orientation of each fixed visiting user in the at least two frames of images based on the eye orientation of each fixed visiting user in the at least two frames of images;
if a second difference value corresponding to the eye orientation of the fixed visiting user in the at least two frames of images is not smaller than a second preset threshold value, the fixed visiting user does not have an attention area on the display device;
if the second difference value corresponding to the eye orientation of the fixed visiting user in the at least two frames of images is smaller than the second preset threshold value, the fixed visiting user has an attention area for the display device, and the attention area of the fixed visiting user for the display device is determined according to the eye orientation of the fixed visiting user.
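Claim 5's eye-orientation check is structurally the same as claim 3's, only with a gaze direction instead of a face direction. One way to sketch the "second difference value" is as the angle between gaze vectors from the two frames; the unit-vector representation and the threshold are assumptions, not part of the claims.

```python
import math
from typing import List, Tuple

def gaze_is_stable(gaze_vectors: List[Tuple[float, float]], threshold_deg: float) -> bool:
    """Sketch of claim 5: the visitor has an attention area only if the angle
    between their eye-orientation (gaze) vectors across the frames stays
    below the second preset threshold. Vectors are assumed unit-length."""
    (x1, y1), (x2, y2) = gaze_vectors
    dot = x1 * x2 + y1 * y2
    # Clamp to guard against floating-point drift before acos.
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle < threshold_deg

print(gaze_is_stable([(1.0, 0.0), (1.0, 0.0)], 5.0))  # → True (steady gaze)
print(gaze_is_stable([(1.0, 0.0), (0.0, 1.0)], 5.0))  # → False (gaze swung 90°)
```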
6. The intelligent display method according to any one of claims 1 to 5, wherein the step of controlling the display device to update the current content based on the number of the fixed visiting users and the attention area of each of the fixed visiting users to the display device comprises:
determining attention values respectively corresponding to all areas of the display device based on the number of the fixed visiting users and attention areas of all the fixed visiting users to the display device;
taking the area with the highest attention value as the region of greatest interest;
and determining updated content based on the current content displayed in the region of greatest interest, and controlling the display device to display the updated content.
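Claim 6 does not define how attention values are computed; one plausible choice, sketched here as an assumption, is a simple head count of fixed visiting users per region, with the highest-scoring region taken as the region of greatest interest.

```python
from collections import Counter
from typing import List, Optional

def region_of_greatest_interest(attention_regions: List[Optional[str]]) -> Optional[str]:
    """Sketch of claim 6: score each display region by how many fixed
    visiting users attend to it, then return the highest-scoring region."""
    scores = Counter(r for r in attention_regions if r is not None)
    if not scores:
        return None  # no region drew any attention
    return scores.most_common(1)[0][0]

# Three visitors look left, one looks right, one has no attention area.
print(region_of_greatest_interest(["left", "left", "right", "left", None]))  # → left
```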
7. The intelligent display method according to claim 6, wherein the step of determining updated content based on the current content displayed in the region of greatest interest, and controlling the display device to display the updated content comprises:
enlarging the current content displayed in the region of greatest interest from local display to full-screen display as the updated content; and/or highlighting the current content displayed in the region of greatest interest as the updated content.
8. The intelligent display method according to claim 6, wherein after the steps of determining updated content based on the current content displayed in the region of greatest interest, and controlling the display device to display the updated content, the method further comprises:
and controlling the sound reinforcement device to play the voice commentary corresponding to the updated content.
9. The intelligent display method according to claim 8, wherein the sound reinforcement device is mounted on a mobile robot; before the step of controlling the sound reinforcement device to play the voice commentary corresponding to the updated content, the method further includes:
determining a real-time distance of the mobile robot from the fixed visiting user;
and controlling the mobile robot to adjust its position relative to the fixed visiting user based on the real-time distance and a preset distance.
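Claim 9's distance adjustment can be sketched as a comparison of the real-time distance against the preset distance, emitting a movement command. The command vocabulary and tolerance are illustrative assumptions.

```python
from typing import Tuple

def adjust_robot(real_time_distance: float,
                 preset_distance: float,
                 tolerance: float = 0.1) -> Tuple[str, float]:
    """Sketch of claim 9: compare the mobile robot's real-time distance from
    the fixed visiting user with a preset distance and return a command."""
    delta = real_time_distance - preset_distance
    if abs(delta) <= tolerance:
        return ("hold", 0.0)        # already within tolerance of the preset distance
    if delta > 0:
        return ("approach", delta)  # too far away: move toward the visitor
    return ("retreat", -delta)      # too close: back away

print(adjust_robot(3.5, 2.0))  # → ('approach', 1.5)
```

In practice this would run before playing the voice commentary, so the sound reinforcement device is at the preset distance when it speaks.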
10. The intelligent display method as claimed in claim 1, wherein the step of controlling the display device to update the current content based on the number of the fixed visiting users and the attention area of each of the fixed visiting users to the display device comprises:
and highlighting the current content displayed in the attention area.
11. The intelligent display method according to any one of claims 1 to 5, wherein the step of controlling the display device to update the current content based on the number of the fixed visiting users and the attention area of each of the fixed visiting users to the display device comprises:
determining attention values respectively corresponding to all areas of the display device based on the number of the fixed visiting users and attention areas of all the fixed visiting users to the display device;
and highlighting, and/or playing voice commentary for, the current content displayed in the attention area corresponding to each attention value, in descending order of the attention values.
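The descending-order presentation of claim 11 reduces to sorting regions by attention value, largest first. Region names and scores here are illustrative.

```python
from typing import Dict, List

def presentation_order(region_scores: Dict[str, int]) -> List[str]:
    """Sketch of claim 11: order display regions by attention value, largest
    first, for sequential highlighting or voice commentary."""
    return [region for region, _ in
            sorted(region_scores.items(), key=lambda kv: kv[1], reverse=True)]

print(presentation_order({"left": 3, "center": 1, "right": 2}))  # → ['left', 'right', 'center']
```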
12. An intelligent display system, comprising at least: the device comprises an image acquisition device, a control device and a display device, wherein the image acquisition device is arranged at a preset position of the display device;
the control device includes:
the image acquisition module is used for acquiring at least two frames of images acquired by the image acquisition device within preset time when the display device displays the current content;
the image identification module is used for identifying visiting users in the at least two frames of images and determining the number of fixed visiting users in the at least two frames of images and the attention area of each fixed visiting user to the display device;
and the display control module is used for controlling the display device to update the current content based on the number of the fixed visiting users and the attention area of each fixed visiting user to the display device.
13. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 11 when executing the computer program.
14. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 11.
CN202011213176.7A 2020-11-04 2020-11-04 Intelligent display method and display system Active CN112040291B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011213176.7A CN112040291B (en) 2020-11-04 2020-11-04 Intelligent display method and display system


Publications (2)

Publication Number Publication Date
CN112040291A (en) 2020-12-04
CN112040291B (en) 2021-03-05

Family

ID=73573616

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011213176.7A Active CN112040291B (en) 2020-11-04 2020-11-04 Intelligent display method and display system

Country Status (1)

Country Link
CN (1) CN112040291B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023231673A1 (en) * 2022-05-31 2023-12-07 京东方科技集团股份有限公司 Information delivery method, apparatus and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070113242A1 (en) * 2005-11-16 2007-05-17 Fetkovich John E Selective post-processing of compressed digital video
CN101893934A (en) * 2010-06-25 2010-11-24 宇龙计算机通信科技(深圳)有限公司 Method and device for intelligently adjusting screen display
CN103310721A (en) * 2012-03-16 2013-09-18 捷达世软件(深圳)有限公司 Display device and method for regulating display contents of display device
CN105446673A (en) * 2014-07-28 2016-03-30 华为技术有限公司 Screen display method and terminal device
CN106227424A (en) * 2016-07-20 2016-12-14 北京小米移动软件有限公司 The display processing method of picture and device
CN106531073A (en) * 2017-01-03 2017-03-22 京东方科技集团股份有限公司 Processing circuit of display screen, display method and display device
CN106557937A (en) * 2015-09-24 2017-04-05 杭州海康威视数字技术股份有限公司 Advertisement sending method and device
CN107992839A (en) * 2017-12-12 2018-05-04 北京小米移动软件有限公司 Person tracking method, device and readable storage medium storing program for executing


Also Published As

Publication number Publication date
CN112040291B (en) 2021-03-05

Similar Documents

Publication Publication Date Title
CN110012209B (en) Panoramic image generation method and device, storage medium and electronic equipment
KR101755412B1 (en) Method and device for processing identification of video file, program and recording medium
CN105100609A (en) Mobile terminal and shooting parameter adjusting method
KR20140141100A (en) Method and apparatus for protecting eyesight
CN108668086B (en) Automatic focusing method and device, storage medium and terminal
US11409794B2 (en) Image deformation control method and device and hardware device
CN110287891B (en) Gesture control method and device based on human body key points and electronic equipment
US20160065791A1 (en) Sound image play method and apparatus
US11812152B2 (en) Method and apparatus for controlling video frame image in live classroom
WO2022017006A1 (en) Video processing method and apparatus, and terminal device and computer-readable storage medium
CN108986117B (en) Video image segmentation method and device
CN112040291B (en) Intelligent display method and display system
CN111432245B (en) Multimedia information playing control method, device, equipment and storage medium
CN110868632B (en) Video processing method and device, storage medium and electronic equipment
CN110597391A (en) Display control method, display control device, computer equipment and storage medium
CN113225550A (en) Offset detection method and device, camera module, terminal equipment and storage medium
US11810336B2 (en) Object display method and apparatus, electronic device, and computer readable storage medium
CN112133319A (en) Audio generation method, device, equipment and storage medium
CN115205164B (en) Training method of image processing model, video processing method, device and equipment
CN109218620B (en) Photographing method and device based on ambient brightness, storage medium and mobile terminal
CN111507142A (en) Facial expression image processing method and device and electronic equipment
CN113505672B (en) Iris acquisition device, iris acquisition method, electronic device, and readable medium
CN111507139A (en) Image effect generation method and device and electronic equipment
CN115601316A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN115082828A (en) Video key frame extraction method and device based on dominating set

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant