CN111083553B - Image processing method and image output equipment


Info

Publication number
CN111083553B
CN111083553B (application CN201911416001.3A)
Authority
CN
China
Prior art keywords
image
local
displayed
partial
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911416001.3A
Other languages
Chinese (zh)
Other versions
CN111083553A (en)
Inventor
董芳菲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201911416001.3A priority Critical patent/CN111083553B/en
Publication of CN111083553A publication Critical patent/CN111083553A/en
Application granted granted Critical
Publication of CN111083553B publication Critical patent/CN111083553B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content

Abstract

The application provides an image processing method and an image output device. In the process of displaying a first image, a first local area in the first image that satisfies a condition is determined; a first local image corresponding to the first local area, and a second local image consisting of the part of the first image other than the first local image, are determined; the first local image is controlled to be displayed in a first display mode, and the second local image is controlled to be displayed in a second display mode. Because the first display mode differs from the second display mode, different local images can be displayed within one image through the two display modes, which enriches the display modes of an image and improves the flexibility of displaying images.

Description

Image processing method and image output equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and an image output device.
Background
Currently, when displaying an image, an image output apparatus can display either a still image or a moving image, such as a short video clip.
However, these display modes apply statically or dynamically to the image as a whole, so only a single display mode is available for any one image.
Disclosure of Invention
In view of the above, the present application provides an image processing method and an image output apparatus to solve the above technical problems.
In order to achieve the above purpose, the present application provides the following technical solutions:
an image processing method comprising:
in the process of showing a first image, determining a first local area meeting a condition in the first image;
determining a first partial image corresponding to the first partial region in the first image and a second partial image other than the first partial image;
controlling the first partial image to be displayed in a first display mode, and controlling the second partial image to be displayed in a second display mode; wherein the first display mode is different from the second display mode.
Preferably, the second partial image corresponds to a second partial region of the first image;
the determining that the first partial image is displayed in a first display mode and controlling the second partial image to be displayed in a second display mode comprises the following steps:
at least the first local image is controlled to be displayed in a dynamic mode, and the second local image is controlled to be displayed in a static mode in the second local area.
Preferably, the controlling at least the first partial image to be displayed in a dynamic manner includes:
controlling the first local image to be displayed in a dynamic mode in the first local area; wherein the dynamic mode presentation is characterized in that the first local image is a plurality of frames of local images which are continuously presented;
the controlling the second local image to be displayed in a static manner in the second local area includes:
and controlling all the second local images to be displayed in a static mode in the second local area.
Preferably, the controlling at least the first partial image to be displayed in a dynamic manner includes:
controlling the first partial image to move in a first direction in the whole area of the first image and to be displayed in a static manner while moving;
the controlling the second local image to be displayed in a static manner in the second local area includes:
controlling all the second local images to be displayed in a static mode in the second local area; wherein the first partial image obscures part of the second partial image when moved to the second partial region.
Preferably, the controlling at least the first partial image to be displayed in a dynamic manner includes:
after the first local image is controlled to be displayed in a dynamic mode in the first local area, sequentially controlling local images corresponding to a plurality of local areas, which are located on the same datum line with the first local area, in the first image to be displayed in a dynamic mode; wherein the control direction on the reference line corresponds to a first direction; the dynamic mode display is characterized in that the first local image is a plurality of frames of local images which are continuously displayed;
the controlling the second local image to be displayed in a static manner in the second local area includes:
and controlling other local images except the currently dynamically displayed local image in the second local image to be displayed in a static mode in the second local area.
Preferably, the method further comprises the following steps:
and performing line-of-sight detection by using a sensing unit, and determining the first direction corresponding to the line-of-sight direction.
Preferably, the method further comprises the following steps:
determining a motion direction or a motion trend of a target object in the first partial image;
a first direction corresponding to the direction or trend of motion is determined.
Preferably, the determining a first local area in the first image that satisfies a condition includes:
receiving a first operation of an operation body on the first image, and determining a first local area corresponding to the first operation;
or, using a sensing unit to perform sight line detection, and determining a first local area positioned by a sight line in the first image;
alternatively, the first image is subjected to image processing to extract a first local region having a preset image feature.
An image output apparatus comprising:
a first display for displaying a first image;
a first processor, configured to determine a first partial area satisfying a condition in the first image during presentation of the first image, determine a first partial image corresponding to the first partial area and a second partial image excluding the first partial image in the first image, control the first partial image to be presented in a first presentation manner in the first display, and control the second partial image to be presented in a second presentation manner in the first display; wherein the first display mode is different from the second display mode.
Preferably, the method further comprises the following steps:
a sensor for monitoring a line of sight;
the first processor is specifically configured to determine a first local region of the gaze location in the first image.
According to the technical scheme, in the process of displaying the first image, the first local area satisfying the condition in the first image is determined; the first local image corresponding to the first local area, and the second local image other than the first local image in the first image, are determined; the first local image is controlled to be displayed in the first display mode, and the second local image is controlled to be displayed in the second display mode.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on the provided drawings without creative efforts.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 2a is a diagram of an image output device showing a first image according to an embodiment of a method of the present application;
FIG. 2b is a diagram illustrating a first local area determined by a touch operation performed on an image output device according to an embodiment of a method of the present application;
FIG. 2c is a schematic diagram of a method embodiment of the present application for determining a first local area by performing gaze monitoring on an image output device;
FIG. 2d is a schematic diagram of a method embodiment of the present application for determining a first local region by feature extraction on an image output device;
FIG. 3 is a schematic flow chart of an image processing method according to another embodiment of the present application;
FIG. 4 is a schematic flow chart of an image processing method according to another embodiment of the present application;
FIG. 5 is a diagram illustrating different local areas and different local images of a first image on an image output device according to another embodiment of the present application;
FIG. 6 is a schematic flow chart of an image processing method according to another embodiment of the present application;
FIG. 7 is a diagram illustrating a first image on an image output device and determining a first local area and a first direction according to an embodiment of a method of the present application;
FIG. 8 is a schematic diagram of different local areas and different local images of a first image shown on an image output device according to another embodiment of the method of the present application;
FIG. 9 is a flowchart illustrating an image processing method according to another embodiment of the present application;
FIG. 10 is a diagram illustrating a first image with a motion effect on an image output device according to another embodiment of the present application;
FIG. 11 is a schematic diagram showing different partial images of a first document on an image output device according to an example in yet another embodiment of the present application;
FIG. 12 is a schematic diagram of an image output apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an image output apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
One embodiment of the present application provides an image processing method, as shown in fig. 1, including the following steps:
step 101: in the process of showing a first image, determining a first local area meeting a condition in the first image;
the image processing method provided by the application can be applied to an image output device, and the image output device determines a first local area meeting a condition in a first image in the process of displaying the first image on a display area.
As an implementation manner of determining the first local area in the first image that satisfies the condition, the method may include:
receiving a first operation of an operation body on the first image, and determining a first local area corresponding to the first operation.
That is, the user may perform a first operation, such as a first touch operation, at the first local area of the first image, so that the first local area corresponding to the first operation can be determined. Specifically, as an example, referring to fig. 2a and 2b, the image output apparatus 100 displays a first image P1, and when the user performs a first touch operation at the first local area E1 of the first image P1, the image output apparatus is able to determine the first local area E1 corresponding to the first touch operation in the first image.
When the first operation is a first touch operation, in one mode, the image output apparatus can determine a touch point of the first touch operation in the first image, and determine a first local area of a preset size with reference to the touch point. For example, a circular first local area with a predetermined radius is determined with the touch point as the center, or a square first local area with a predetermined side length is determined with the touch point as the center.
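As a purely illustrative sketch (not part of the patent's disclosure), the preset-size region around the touch point can be represented as a boolean mask over the pixel grid; the function name, parameters, and circle/square options below are assumptions:

```python
import numpy as np

# Hedged sketch: build a boolean mask for a first local area of preset size
# determined with the touch point as reference. "radius" is the preset
# radius for a circle, or half the preset side length for a square.

def local_area_mask(shape, center, radius, kind="circle"):
    """shape: (H, W) of the first image; center: (row, col) touch point."""
    h, w = shape
    rows, cols = np.ogrid[:h, :w]
    if kind == "circle":
        return (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2
    return (np.abs(rows - center[0]) <= radius) & (np.abs(cols - center[1]) <= radius)
```

The same construction applies unchanged when the reference point is the gaze positioning point described later, so one helper could serve both modes.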
In another mode, the image output apparatus can recognize the object touched by the first touch operation in the first image and determine the area where the object is located as the first local area.
It is noted that the identified objects may include a single object and/or objects associated with the single object. As shown in fig. 2b, if the object identified based on the first touch operation is a horse, the other horses, and the persons traveling in a team with the horse, may be treated as associated objects.
In still another mode, the first touch operation may be a first touch move operation, and the image output apparatus may be capable of determining a touch trajectory formed by the first touch move operation in the first image, thereby determining a first local area within a range of the touch trajectory.
In another mode, the first touch operation may be a first touch moving operation, and the image output device may be configured to determine a touch trajectory formed in the first image by the first touch moving operation, so as to identify an object through which the touch trajectory passes in the first image, and determine that the area where the object is located is the first local area.
As another implementation of determining a first local region in the first image that satisfies a condition, the method may include:
and carrying out sight line monitoring by using a sensing unit, and determining a first local area positioned by a sight line in the first image.
That is, the image output apparatus may be provided with a sensing unit whose monitoring direction coincides with the display direction of the image output apparatus. The line of sight of a user viewing the first image displayed by the image output apparatus is monitored by the sensing unit, for example by eye tracking, so that the first local area at which the line of sight is located can be determined in the first image. Specifically, as an example, referring to fig. 2a and 2c, the image output device 100 displays a first image P1, and when the user focuses his or her sight on the first local area E1 of the first image, the image output device is able to determine the first local area E1 in the first image.
It should be noted that the line of sight of the user may be positioned at a certain position in the first image or the line of sight may be moved in the first image.
In a case where the line of sight of the user is positioned at a certain position in the first image, in one mode, the image output apparatus can determine a positioning point of the line of sight in the first image using the sensing unit, and determine a first local area of a preset size with reference to the positioning point. For example, a circular first local area with a predetermined radius is determined with the positioning point as the center, or a square first local area with a predetermined side length is determined with the positioning point as the center.
In another mode, the image output apparatus can recognize the object at the localization point and determine the region where the object is located as the first local region.
It is noted that the identified objects may include a single object and/or an associated object of the single object. Assuming that the location point corresponds to a horse in the first image, other horses and persons in a team with the horse may be the associated object, as shown in fig. 2 c.
When the user performs the line of sight movement in the first image, in one mode, the image output apparatus is capable of determining a line of sight locus formed in the first image by the line of sight movement using the sensing unit, thereby determining a first local area within the range of the line of sight locus.
In another mode, the image output apparatus can determine a sight line locus formed in the first image by the sight line movement using the sensing unit, recognize an object through which the sight line locus passes in the first image, and determine the area where the object is located as the first partial area.

As another implementation of determining a first local area in the first image that satisfies the condition, the method may include:
and performing image processing on the first image, and extracting a first local area with preset image characteristics.
That is, various image features may be preset in the image output device, for example image features with a motion trend or motion capability, such as human-body features or animal features. A first local area having the preset image features can then be determined by performing image processing on the first image. Specifically, as an example, referring to fig. 2a and 2d, a first image P1 is shown on the image output device 100, and by performing image processing on the first image P1, a first local area E1 having a preset image feature in the first image can be determined.
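The patent leaves the feature detector open, so the following sketch stands in for it with a caller-supplied predicate; the function name, the bounding-box region representation, and the predicate interface are all assumptions for illustration only:

```python
import numpy as np

# Hedged sketch of feature-based region extraction: the preset image
# feature (human body, animal, motion trend, ...) is abstracted as a
# predicate that marks matching pixels; the first local area is taken as
# the bounding box of those pixels.

def extract_feature_region(image, feature_predicate):
    """Return (r0, c0, r1, c1) bounding the pixels where the preset image
    feature holds, or None if no pixel matches."""
    mask = feature_predicate(image)
    if not mask.any():
        return None
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return (int(rows[0]), int(cols[0]), int(rows[-1]), int(cols[-1]))
```

In practice the predicate would be replaced by a real detector (for example a person or animal detector), which the patent does not specify.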
Step 102: determining a first partial image corresponding to the first partial region in the first image and a second partial image other than the first partial image;
the first partial image is an image corresponding to the first partial region, and the second partial image is an image other than the first partial image in the first image, so that the first partial image and the second partial image constitute the first image.
Step 103: and controlling the first partial image to be displayed in a first display mode, and controlling the second partial image to be displayed in a second display mode.
The first display mode and the second display mode are different, so that different parts of the first image are displayed through two different display modes.
Therefore, in this embodiment, in the process of displaying the first image, the first local area satisfying the condition in the first image is determined; the first local image corresponding to the first local area, and the second local image other than the first local image in the first image, are determined; the first local image is controlled to be displayed in the first display mode, and the second local image is controlled to be displayed in the second display mode. In this way, different local images are displayed in one image through two display modes, which enriches the display modes of one image and improves the flexibility of displaying the image.
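The three steps of this embodiment can be sketched as a minimal control flow; everything here (the function names and the callback decomposition) is an illustrative assumption, not the patent's implementation:

```python
# Hedged sketch of steps 101-103; all names are assumptions.

def process_first_image(first_image, determine_area, split, show_dynamic, show_static):
    first_local_area = determine_area(first_image)                        # step 101
    first_partial, second_partial = split(first_image, first_local_area)  # step 102
    show_dynamic(first_partial)   # first display mode                    # step 103
    show_static(second_partial)   # second display mode
    return first_local_area
```

The region-determination callback could be any of the three alternatives above (touch operation, line-of-sight detection, or feature extraction).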
Another embodiment of the present application further provides an image processing method, as shown in fig. 3, the method includes the following steps:
step 301: in the process of showing a first image, determining a first local area meeting a condition in the first image;
step 302: determining a first partial image corresponding to the first partial region in the first image and a second partial image other than the first partial image;
step 303: controlling at least the first local image to be displayed in a dynamic mode, and controlling the second local image to be displayed in a static mode in a second local area;
in this embodiment, the second local image corresponds to the second local region of the first image, and referring to fig. 2a to 2d, the local region of the first image P1 except the first local region E1 is the second local region.
Displaying the second partial image in a static manner is equivalent to the second partial image being a second partial still image.
Step 303 is a specific implementation of controlling the first partial image to be displayed in a first display manner and controlling the second partial image to be displayed in a second display manner.
Therefore, in the embodiment, the first local image can be displayed in the first image in a dynamic mode, and the second local image can be displayed in the first image in a static mode, so that different local images can be displayed in one image in two display modes, the display modes of one image are enriched, and the flexibility of displaying the images is improved.
Further, the image output device in the present application may be a large-sized image output device. If the distance between the user and the image output device is short, the user cannot perceive the whole image, and if the image output device controls the whole first image to be displayed dynamically, resources are wasted. In the present application, only the first local area in the first image is controlled to be displayed in a dynamic mode, which makes the first local area easier for the user to perceive and also saves resources.
In addition, if the image output device continuously plays a moving image, the system needs to process the moving image continuously, which wastes system resources. In the present application, based on the description of the previous embodiment, the first local area and its corresponding first local image may be determined when the first operation is received or the line of sight is monitored, so as to control the dynamic display of the first local image. That is, the present application controls the dynamic display of a local image in the first image only under the trigger of the first operation or line-of-sight monitoring, saving system resources.
In the above embodiments, there are various ways of displaying the first local image in a dynamic manner, and correspondingly, there are various ways of displaying the second local area in a static manner, which are described below by different method embodiments.
A further embodiment of the present application provides an image processing method, which mainly describes a first implementation manner of controlling at least a first local image to be displayed in a dynamic manner and a second local image to be displayed in a static manner in the second local area, as shown in fig. 4, where the method includes the following steps:
step 401: in the process of showing a first image, determining a first local area meeting a condition in the first image;
step 402: determining a first partial image corresponding to the first partial region in the first image and a second partial image other than the first partial image;
step 403: controlling the first local image to be displayed in a dynamic mode in the first local area;
the first partial area corresponds to a first partial image, and in this embodiment, the first partial image is controlled to be dynamically displayed in the first partial area. The dynamic mode presentation may be characterized in that the first partial image is a plurality of frames of partial images presented consecutively. It is to be understood that the first partial image is presented in a dynamic manner, which can correspond to the first partial image being a first partial video image. Vividly, in displaying the first image, if the corresponding position of the first local area is regarded as a small window opened in the first image, the first local video image is displayed in the small window.
Step 404: and controlling all the second local images to be displayed in a static mode in the second local area.
The second local image corresponds to a second local area in the first image, and the second local area is an area except the first local area in the first image.
It should be noted that, in this embodiment, the first image may be a continuous multi-frame image; for example, the number of frames is fixed, so that the continuous multi-frame image represents a video segment whose duration is a preset period of time, which may be set based on the actual situation, for example 10 seconds or 15 seconds.
By this embodiment, the display modes of different local images of the first image differ: in the process of displaying the first image, the first local area of the first image shows a local video image while the second local area shows a local still image. As an example, as shown in fig. 5, the image output apparatus 100 displays a first image P1 (in fig. 5 the first image P1 is drawn blank for clarity; this does not mean the real first image P1 is blank). The first image P1 has a first local area E1 and a second local area E2 other than the first local area E1, wherein the first local area E1 corresponds to the first local image P11; the first local image P11 (corresponding to a local video image) is dynamically displayed at the first local area E1, and the second local image P12 is statically displayed at the second local area E2.
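Steps 403 and 404 amount to per-frame compositing: each displayed frame takes its pixels from the incoming video inside the first local area and from the still image everywhere else. The sketch below assumes NumPy image arrays and a boolean mask; it is an illustration, not the patent's implementation:

```python
import numpy as np

# Hedged sketch: refresh only the first local area from the video frame
# (step 403) while the second local area keeps its static pixels (step 404).

def composite_frame(static_image, video_frame, mask):
    out = static_image.copy()
    out[mask] = video_frame[mask]   # dynamic pixels inside the first local area
    return out                      # static pixels everywhere else
```

Calling this once per incoming video frame yields the described effect of a "small window" of video inside an otherwise still image.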
Therefore, in the embodiment, the first local image can be displayed in a dynamic mode in the first local area of the first image, and all the second local images can be displayed in a static mode in the second local area of the first image, so that different local images can be displayed in one image in two display modes, the display modes of one image are enriched, and the flexibility of displaying the images is improved.
A further embodiment of the method of the present application provides an image processing method, which mainly describes a second implementation manner that at least a first partial image is controlled to be displayed in a dynamic manner, and a second partial image is controlled to be displayed in a static manner in a second partial area, as shown in fig. 6, the method includes the following steps:
step 601: in the process of showing a first image, determining a first local area meeting a condition in the first image;
step 602: determining a first partial image corresponding to the first partial region in the first image and a second partial image other than the first partial image;
step 603: controlling the first partial image to move in a first direction in the whole area of the first image and to be displayed in a static manner while moving;
In this embodiment, the first local image corresponds to the first local area, the part of the first image other than the first local image is the second local image, and the second local image corresponds to the second local area. Therefore, when the first local image is controlled to move in the first direction across the whole area of the first image, it leaves the first local area and enters the second local area during the movement; that is, when the first local image moves into the second local area, it occludes part of the second local image. Note that the moving speed of the first partial image is not limited in the present application and may be preset based on actual conditions.
In order to prevent the first local area from being blank when the first local image starts to move from the first local area, the first local area may be optionally image-supplemented based on the local image of the area around the first local area.
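The movement, the occlusion of the second partial image, and the optional completion of the vacated first local area might be sketched as follows. The patent does not specify a completion algorithm, so the neighbourhood-mean fill here is only a stand-in, and all names are assumptions:

```python
import numpy as np

# Hedged sketch of step 603: translate the first partial image by an offset
# along the first direction. Where the patch lands it occludes the second
# partial image; the vacated first local area is filled with the mean of
# the surrounding pixels (a crude stand-in for real image completion).

def move_partial_image(image, mask, offset):
    dr, dc = offset
    out = image.copy()
    out[mask] = image[~mask].mean(axis=0)        # fill the vacated area
    src = np.argwhere(mask)
    dst = src + (dr, dc)
    h, w = mask.shape
    ok = (dst[:, 0] >= 0) & (dst[:, 0] < h) & (dst[:, 1] >= 0) & (dst[:, 1] < w)
    out[dst[ok, 0], dst[ok, 1]] = image[src[ok, 0], src[ok, 1]]  # occlude
    return out
```

A production system would likely use a proper inpainting method for the vacated area rather than a flat mean fill.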
It should be noted that, in this embodiment, the first image may be a single-frame image, or may correspond to a first file, where the first file is a static file. Alternatively, the first image may be a continuous multi-frame image; for example, the number of frames of the multi-frame image is fixed, so that the continuous multi-frame image can represent a video of a preset duration, and the preset duration may be set based on the actual situation, such as 10 seconds or 15 seconds. However, when the first image is a continuous multi-frame image, this embodiment still shows only one of the frames, which may be any one of the continuous multi-frame images, such as the first frame or a key frame of the continuous multi-frame images.
In this embodiment, the first direction may be a preset direction. In another embodiment of the present disclosure, the first direction may be determined, and specifically, the method may further include: and monitoring the sight line by using a sensing unit, and determining the first direction corresponding to the sight line direction.
The monitoring direction of the sensing unit is consistent with the display direction of the image output device, so the line of sight of a user viewing the first image displayed on the image output device can be monitored; the line-of-sight monitoring may be performed by means of eyeball tracking. The first direction corresponding to the line-of-sight direction can then be determined, and the first local image is controlled to move in the first direction within the whole area of the first image while its content is displayed in a static manner. That is, by determining the first direction corresponding to the direction of the line of sight, the effect is achieved that the first partial image moves along with the movement of the user's line of sight.
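The mapping from a monitored gaze-movement vector to a first direction might be sketched as a quantization to four candidate directions. The function and its screen-coordinate convention below are illustrative assumptions, not an API of the patent's sensing unit:

```python
def gaze_to_first_direction(gaze_dx, gaze_dy):
    """Quantize a gaze-movement vector (screen coordinates: x grows
    rightward, y grows downward) to one of four candidate first
    directions. Illustrative sketch; real eyeball tracking would
    supply the (dx, dy) vector."""
    if abs(gaze_dx) >= abs(gaze_dy):
        return 'right' if gaze_dx >= 0 else 'left'
    return 'down' if gaze_dy >= 0 else 'up'
```

A finer-grained system could quantize to eight directions or keep the raw vector; four directions are used here only to keep the sketch minimal.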
In another embodiment of the present disclosure, the first direction may also be determined, and specifically, the method may further include: determining a motion direction or a motion trend of a target object in the first partial image; a first direction corresponding to the direction or trend of motion is determined.
The target object may be an object having motion capability, and the target object in the first partial image can be determined by performing image processing on the first partial image, so that its motion direction or motion trend can be determined. It should be noted that if the first image is a continuously displayed multi-frame image, analyzing the first image can determine the motion direction of the target object; if the first image is a single-frame image, the motion trend of the target object can be determined by analyzing the target object itself, for example by analyzing its orientation, its inclination, and the like.
As an example, as shown in fig. 7, in the first image P1 presented by the image output apparatus 100, the moving direction or moving trend indicated by the arrow can be determined by analyzing the small person and the horse in the first partial image P11 corresponding to the first partial area E1.
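Determining a motion direction from two consecutive frames, as described above, can be sketched as the displacement of the target object's centroid between its binary masks. This is a minimal illustrative approach with a hypothetical function name, assuming object masks are already available; a production system might use optical flow or an object tracker instead:

```python
import numpy as np

def motion_direction(mask_prev, mask_curr):
    """Estimate the motion direction of a target object from its binary
    masks in two consecutive frames, as the displacement of the mask
    centroid. Returns (dy, dx) in pixels."""
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return ys.mean(), xs.mean()
    y0, x0 = centroid(mask_prev)
    y1, x1 = centroid(mask_curr)
    return y1 - y0, x1 - x0
```

An object whose mask shifts two pixels to the right between frames yields (0, 2), i.e. a rightward first direction.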
Step 604: and controlling all the second local images to be displayed in a static mode in the second local area.
Through this embodiment, different local images of the first image are displayed in different manners. As another example, as shown in fig. 8, the image output apparatus 100 displays a first image P1 (in fig. 8, the first image P1 is shown blank and does not represent the real first image P1). The first image P1 has a first local area E1 and a second local area E2 other than the first local area E1, where the first local area E1 corresponds to the first local image P11. The first local image P11 moves in the first direction, i.e., the arrow direction, within the whole area of the first image; during the movement, the first local image P11 leaves the first local area E1 and enters the second local area E2, blocking a part of the second local image P12, while the second local image P12 is statically displayed in the second local area E2.
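Rendering one displayed frame of this effect — the moving first local image occluding part of the statically shown second local image — can be sketched as pasting a patch onto a base image with bounds clipping. The names and the array-based image model are assumptions for illustration:

```python
import numpy as np

def composite_moving_patch(base, patch, top, left):
    """Render one display frame: paste the (static-content) first local
    image `patch` onto the statically shown first image `base` at its
    current position, occluding the part of the second local image it
    covers. Positions partially off-image are clipped."""
    out = base.copy()
    h, w = patch.shape[:2]
    H, W = base.shape[:2]
    t, l = max(0, top), max(0, left)
    b, r = min(H, top + h), min(W, left + w)
    if t < b and l < r:
        out[t:b, l:r] = patch[t - top:b - top, l - left:r - left]
    return out
```

Calling this once per display refresh with an advancing (top, left) produces the movement in the first direction described above.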
Therefore, in the embodiment, the first local image can move towards the first direction in the whole area of the first image and be displayed in a static mode while moving, and all the second local images can be displayed in the static mode in the second local area of the first image, so that different local images can be displayed in one image through two display modes, the display modes of one image are enriched, and the flexibility of displaying the images is improved.
A further embodiment of the method of the present application provides an image processing method, which mainly describes a third implementation manner that at least a first local image is controlled to be displayed in a dynamic manner, and a second local image is controlled to be displayed in a static manner in a second local area, as shown in fig. 9, the method includes the following steps:
step 901: in the process of showing a first image, determining a first local area meeting a condition in the first image;
step 902: determining a first partial image corresponding to the first partial region in the first image and a second partial image other than the first partial image;
step 903: after the first local image is controlled to be displayed in a dynamic mode in the first local area, sequentially controlling local images corresponding to a plurality of local areas, which are located on the same datum line with the first local area, in the first image to be displayed in a dynamic mode;
wherein the control direction on the reference line corresponds to a first direction; the dynamic mode display is characterized in that the first local image is a plurality of frames of local images displayed continuously.
In this embodiment, the first partial image corresponds to the first partial region, the partial images other than the first partial image in the first image constitute the second partial image, and the second partial image corresponds to the second partial region. It can be understood that the plurality of local areas located on the same reference line as the first local area all belong to the second local area. The first local image is first controlled to be dynamically displayed in the first local area, and then the local images corresponding to the plurality of local areas located on the same reference line as the first local area are sequentially controlled to be dynamically displayed according to the control direction, thereby achieving the effect of sequentially presenting local video images in the plurality of local areas of the first image.
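Selecting and ordering the local areas that lie on the same reference line as the first local area, according to the control direction, might be sketched as follows under a hypothetical grid-of-tiles partition of the first image (the (row, col) coordinates and direction names are assumptions for illustration):

```python
def areas_on_reference_line(first_area, all_areas, direction):
    """Return the local areas lying on the same reference line as
    `first_area`, ordered along the control direction. Areas are
    (row, col) grid coordinates; `direction` is 'right', 'left',
    'down' or 'up'. Illustrative grid model only."""
    r0, c0 = first_area
    if direction in ('right', 'left'):
        # Same row: order by column, reversed for leftward control.
        line = [a for a in all_areas if a[0] == r0 and a != first_area]
        line.sort(key=lambda a: a[1], reverse=(direction == 'left'))
    else:
        # Same column: order by row, reversed for upward control.
        line = [a for a in all_areas if a[1] == c0 and a != first_area]
        line.sort(key=lambda a: a[0], reverse=(direction == 'up'))
    return line
```

The returned order is the sequence in which the areas would be switched to dynamic display, one after another.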
In this embodiment, the first direction may be a preset direction. In another embodiment of the present disclosure, the first direction may be determined, and specifically, the method may further include: and monitoring the sight line by using a sensing unit, and determining the first direction corresponding to the sight line direction.
The monitoring direction of the sensing unit is consistent with the display direction of the image output device, so the line of sight of a user viewing the first image displayed on the image output device can be monitored; the line-of-sight monitoring may specifically be performed by means of eyeball tracking. The first direction corresponding to the line-of-sight direction is then determined, and the first local image is controlled to move in the first direction within the whole area of the first image while its content is displayed in a static manner. That is, determining the first direction corresponding to the line-of-sight direction ultimately achieves the effect that the first partial image moves along with the movement of the user's line of sight.
In another embodiment of the present disclosure, the first direction may also be determined, and specifically, the method may further include: determining a motion direction or a motion trend of a target object in the first partial image; a first direction corresponding to the direction or trend of motion is determined.
The target object may be an object having motion capability, and the target object in the first partial image can be determined by performing image processing on the first partial image, so that its motion direction or motion trend can be determined. It should be noted that if the first image is a continuously displayed multi-frame image, analyzing the first image can determine the motion direction of the target object; if the first image is a single-frame image, the motion trend of the target object can be determined by analyzing the target object itself, for example by analyzing its orientation, its inclination, and the like.
Then the effect that the target object moves in the whole area of the first image can be exhibited by the control manner of the present embodiment.
Step 904: and controlling other local images except the currently dynamically displayed local image in the second local image to be displayed in a static mode in the second local area.
That is to say, in the process of sequentially controlling the local images corresponding to the plurality of local areas on the same reference line as the first local area in the first image to be displayed in a dynamic manner, except for the currently dynamically displayed local image, other local images in the second local image are all displayed in a static manner in the corresponding local areas.
After the first partial image is controlled to be dynamically displayed in the first partial area, the first partial image can also be controlled to be displayed in a static manner in the first partial area.
It should be noted that, in this embodiment, the first image may be a continuous multi-frame image, for example, the number of frames of the multi-frame image is fixed, so that the continuous multi-frame image can represent a video image with a period of time being a preset period of time, and the preset period of time may be set based on an actual situation, for example, 10 seconds, 15 seconds, and the like.
With the image processing method provided by this embodiment, the first image can exhibit an effect of local motion; as an example, as shown in fig. 10, the small person and the horse in the first image P1 on the image output apparatus 100 appear to move forward.
Therefore, in this embodiment, after the first local image is dynamically displayed in the first local area, the local images corresponding to the local areas, which are located on the same reference line as the first local area, in the first image can be sequentially displayed in a dynamic manner, and the other local images except for the currently dynamically displayed local image in the second local image are displayed in a static manner in the second local area, so that different local images are displayed in one image in two display manners, the display manner of one image is enriched, and the flexibility of displaying images is improved.
In another embodiment of the method of the present application, a fourth implementation manner is described that at least the first local image is controlled to be displayed in a dynamic manner, and the second local image is controlled to be displayed in a static manner in the second local area.
The controlling at least the first partial image to be presented in a dynamic manner includes:
controlling the first file to dynamically display a plurality of frames of local images including a first local image in a first local area;
the multi-frame local images correspond to the multi-frame images, and the first local image belongs to one of the multi-frame local images.
The controlling the second local image to be displayed in a static manner in the second local area includes:
and controlling the first file to maintain a second local image showing the first image in a second local area.
It should be noted that the first image may be a first frame image in a multi-frame image in the first file, or a key frame image in the multi-frame image, and the first image includes the first partial image and the second partial image.
In practical applications, the image output device may control playback of the first file. When the first file is played to the first image, the first local area may be determined by automatic triggering or by external triggering (for example, triggering via the line-of-sight monitoring of the sensing unit), so that the first local area is controlled to continue playing the first file while the other areas stay at the first image.
Further, this embodiment may further include:
and when the line of sight is monitored by using the sensing unit to determine that the line of sight leaves, controlling the first local area in the first file to stop displaying the local image and controlling the global display to be the second image. That is, by displaying the entire display area of the image output device as the second image including the partial image that is stopped from being displayed in the first partial area in the first file, it is possible to realize the full-screen update of the moving file without causing a shift in the displayed image when the first partial area is stopped from being displayed (the second partial area displays a part of the first image, and the first partial area displays a part of the image other than the first image) when the user is out of sight.
Alternatively, the present application may further include:
and determining that the multi-frame local image dynamically displayed in the first local area in the first file is the last frame local image, and controlling the overall display to be the second image. That is, the display area of the image output device is entirely displayed as the second image, and the second image is the last frame image in the first file, so that the display image is not displaced when the display of the first partial area is stopped (the second partial area displays the part of the first image, and the first partial area displays the part of the other image except the first image), and the full frame update of the dynamic file is realized.
In another embodiment of the method of the present application, a fifth implementation manner is described that at least the first local image is controlled to be displayed in a dynamic manner, and the second local image is controlled to be displayed in a static manner in the second local area.
The controlling at least the first partial image to be presented in a dynamic manner includes:
and after the first file is controlled to dynamically display the multi-frame local images including the first local image in the first local area, sequentially controlling the first file to dynamically display corresponding continuous multi-frame local images in a plurality of local areas which are positioned on the same datum line with the first local area.
The multi-frame local images correspond to multi-frame images in a first file, the first local image belongs to one of the multi-frame local images, the first local image belongs to the first image, and the first image is one of the multi-frame images.
The controlling the second local image to be displayed in a static manner in the second local area includes:
and controlling the first file to maintain the currently displayed local image of one frame in other local areas except the currently dynamically displayed local area.
It should be noted that each maintained currently displayed local image corresponds to one local area. Among the plurality of local areas located on the same reference line as the first local area, except for the local area currently being dynamically displayed, each of the other local areas maintains display of the last local image at which its dynamic display stopped. Meanwhile, the local areas other than the first local area and the plurality of local areas on the same reference line maintain display of the corresponding local images in the first image.
In this embodiment, the control direction on the reference line corresponds to the first direction. Specifically, this embodiment may further include: and carrying out sight line monitoring by using the sensing unit, and determining a first direction corresponding to the sight line direction.
The monitoring direction of the sensing unit is consistent with the display direction of the image output device, so the line of sight of a user viewing the first file currently played on the image output device can be monitored; the line-of-sight monitoring may be performed by means of eyeball tracking. When it is determined that the user's line of sight is positioned on the first file, local dynamic playback of the first file is triggered. Specifically, the first local area where the user's line of sight is located can be determined, so that the first local area is controlled to continuously play the multi-frame local images corresponding to it in the first file, while the other local areas except the first local area maintain display of the local images shown at the moment the user's line of sight was detected.
In this application, the frame of the first file being played at the moment local dynamic playback is triggered is referred to as the first image. That is, at the start of the triggered local-area dynamic playback, the first local area corresponds to the first local image in the first image and continues to play the corresponding local images in the subsequent frame images of the first file, while the second local area other than the first local area corresponds to the second local image in the first image and maintains display of that second local image.
Subsequently, movement of the user's line of sight is monitored, and the direction of the movement is determined as the first direction. In this application, the plurality of local areas located on the same reference line as the first local area and corresponding to the first direction are sequentially controlled to dynamically display multi-frame local images, while the first file is controlled to maintain the currently displayed frame of the local image in the local areas other than the one currently being dynamically displayed. In this way, during playback of the first file, the effect can be realized that the local areas along the direction of sight movement are dynamically displayed in sequence as the user's line of sight moves.
That is to say, the image output device may control the playing of the first file, and use the sensing unit to perform the sight monitoring to trigger the first file to perform the local dynamic display.
Further, this embodiment may further include:
and when the line of sight is monitored by using the sensing unit to determine that the line of sight leaves, controlling a local area which dynamically displays a plurality of frames of local images at present to stop dynamically displaying the local images in the first file, and globally updating and displaying the local images as a second image. The second image is the image of the local image when the dynamic display is stopped in the first file, so that the dislocation of the displayed image can not occur when the local area stops displaying when the sight of the user leaves, and the full-frame updating of the dynamic file is realized.
For ease of understanding, a specific example is described. As shown in fig. 11, assume that the first file includes 300 frames of images. During playback of the first file, the image output apparatus 100 monitors the user's line of sight by using a sensing unit and determines the first local area E1 where the line of sight is located. Suppose that when the line of sight is positioned, the first file has been played to the 100th frame image (the first image): the first local area E1 continues to play the corresponding local image from the 100th frame onward, while the second local area E2 other than the first local area E1 remains showing the 100th frame. As the user's line of sight moves, the sensing unit detects the first direction, indicated by the arrow, corresponding to the moving direction of the line of sight, and the local area E21 located on the same reference line as the first local area E1 is controlled to start dynamically playing its local image. Suppose the first local area E1 has at this moment played from the 100th to the 150th frame local image: E1 then remains showing the 150th frame local image, and E21 plays from the 151st frame local image onward. As the line of sight continues to move, the local area E22 is then controlled to start dynamically playing its local image. Suppose E21 has at this moment played from the 151st to the 201st frame local image: E21 remains showing the 201st frame local image, and E22 plays from the 202nd frame local image onward. Finally, when the line of sight is monitored to have left, E22 is controlled to stop playing; suppose it stops at the 245th frame local image. Then the local area E21 remains showing the 201st frame, the local area E22 remains showing the 245th frame, the first local area E1 remains showing the 150th frame, and the other local areas remain showing the 100th frame.
Note that, the local region E21 and the local region E22 both belong to the second local region E2.
Further, corresponding to this example, the image output device may perform a global update of the image, that is, globally update the display to the 245th frame image, which corresponds to the 245th frame local image in the first file. Assuming that the user's line of sight follows the automobile in the first file, this embodiment produces the dynamic effect of the automobile moving along with the line of sight on the image output device.
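The frame hand-off in the example above can be sketched as a small simulation. The function below is purely illustrative (its name and schedule format are assumptions), but its output reproduces the held frames of the example: E1 at 150, E21 at 201, E22 at 245, and the remaining areas at 100:

```python
def simulate_gaze_playback(total_frames, trigger_frame, schedule):
    """Simulate sequential per-region playback of a dynamic file.
    Playback is triggered at `trigger_frame`; `schedule` is an ordered
    list of (region_name, frames_to_play) pairs. Each region plays that
    many frames starting where the previous region stopped, then holds
    its last frame; every other region holds the trigger frame.
    Returns {region: held_frame} plus 'others'."""
    held = {'others': trigger_frame}
    current = trigger_frame
    for region, n in schedule:
        current = min(current + n, total_frames)  # cannot pass the last frame
        held[region] = current
    return held
```

Running it with the example's numbers (300 frames, trigger at frame 100, E1 playing 50 frames, E21 playing 51, E22 playing 44) yields exactly the held frames described above.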
Corresponding to the image processing method, the embodiment of the device of the present application provides an image output apparatus, and is described below through several device embodiments.
An apparatus embodiment of the present application provides an image output apparatus, as shown in fig. 11, the image output apparatus 100 includes: a first display 110, a first processor 120. Wherein:
a first display 110 for displaying a first image;
a first processor 120, configured to: during display of the first image, determine a first partial area in the first image that satisfies a condition; determine the first partial image corresponding to the first partial area in the first image and the second partial image other than the first partial image; control the first partial image to be displayed in a first display manner on the first display; and control the second partial image to be displayed in a second display manner on the first display.
Specifically, the determining, by the first processor 120, a first local area in the first image that satisfies a condition may include: receiving a first operation of an operation body on the first image, and determining a first local area corresponding to the first operation.
When the first operation is the first touch operation, in one mode, the first processor 120 can determine a touch point of the first touch operation in the first image, and determine a first local area with a preset size based on the touch point. For example, a circular first partial area with a predetermined radius is determined with the touch point as the center, or a square first partial area with a predetermined side length is determined with the touch point as the center.
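Determining a preset-size first local area around a touch point, clamped to the image bounds, might be sketched as follows for the square case (the function name and the (left, top, width, height) return convention are assumptions; the circular case would instead keep the touch point and a preset radius):

```python
def region_from_touch(x, y, size, img_w, img_h):
    """Determine a square first local area of preset side length `size`
    centered on the touch point (x, y), clamped so the area stays
    inside an img_w x img_h image. Returns (left, top, width, height)."""
    half = size // 2
    left = min(max(x - half, 0), img_w - size)
    top = min(max(y - half, 0), img_h - size)
    return left, top, size, size
```

Touch points near an image edge still yield a full-size area, shifted inward rather than truncated.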
In another mode, the first processor 120 can recognize the object touched by the first touch operation in the first image, and determine the area where the object is located as the first local area.
It is noted that the identified objects may include a single object and/or an associated object of the single object.
In yet another mode, the first touch operation may be a first touch move operation, and the first processor 120 may be capable of determining a touch trajectory formed by the first touch move operation in the first image, thereby determining a first local area within the touch trajectory.
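Determining a first local area from a touch-move trajectory can be approximated by the trajectory's axis-aligned bounding box — a simplification of "the area within the touch trajectory" assumed here for illustration (a real implementation might rasterize the closed path instead):

```python
def region_from_trajectory(points):
    """Approximate the first local area enclosed by a touch-move
    trajectory by its axis-aligned bounding box. `points` is a list of
    (x, y) touch samples; returns (left, top, width, height)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)
```

The same bounding box could then be intersected with detected objects to realize the variant in which objects passed by the trajectory define the area.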
In another mode, the first touch operation may be a first touch moving operation, and the first processor 120 may be configured to determine a touch trajectory formed in the first image of the first touch moving operation, so as to identify an object passing through the touch trajectory in the first image, and determine that the area where the object is located is the first local area.
Alternatively, as shown in fig. 12, the image output apparatus further includes a sensor 130 for monitoring the line of sight. The first processor 120 determining a first local region in the first image that satisfies a condition may include: a first local region of the first image where the line of sight is located is determined.
Wherein the monitoring direction of the sensor 130 is consistent with the display direction of the image output device, the line of sight of the user viewing the first image displayed by the image output device can be monitored by the sensor 130, such as performing eye tracking, so that the first processor can determine the first local area where the line of sight is located in the first image.
It should be noted that the line of sight of the user may be positioned at a certain position in the first image or the line of sight may be moved in the first image.
In one mode, when the line of sight of the user is located at a certain position in the first image, the first processor 120 can determine the positioning point of the line of sight in the first image by using the sensor, and determine a first local area of preset size with the positioning point as a reference. For example, a circular first partial region with a predetermined radius is determined with the positioning point as the center, or a square first partial region with a predetermined side length is determined with the positioning point as the center.
Alternatively, the first processor 120 can identify the object at the location point and determine the area where the object is located as the first local area.
It is noted that the identified objects may include a single object and/or an associated object of the single object.
In one approach, the first processor 120 can determine a line of sight trajectory formed by the line of sight movement in the first image using the sensor to determine a first local area within the line of sight trajectory.
Alternatively, the first processor 120 can determine a line-of-sight locus formed in the first image by the line-of-sight movement using the sensor, identify an object through which the line-of-sight locus passes in the first image, and determine that the area in which the object is located is the first local area.
Alternatively, the determining, by the first processor 120, the first local region in the first image that satisfies the condition may include: and performing image processing on the first image, and extracting a first local area with preset image characteristics.
The first display manner and the second display manner are different, so that different parts of the first image are displayed through the two display manners.
Therefore, in this embodiment, during the display of the first image, the first local area satisfying the condition in the first image is determined, the first local image corresponding to the first local area in the first image and the second local image other than the first local image are determined, the first local image is controlled to be displayed in the first display manner, and the second local image is controlled to be displayed in the second display manner, so that different local images are displayed in one image through two display manners, the display manners of one image are enriched, and the flexibility of displaying images is improved.
In another embodiment of the present application, the determining that the first partial image is shown in a first display manner and the controlling the second partial image to be shown in a second display manner by the first processor comprises: and controlling at least the first local image to be displayed in a dynamic mode, and controlling the second local image to be displayed in a static mode in a second local area.
Wherein the second local image corresponds to a second local region of the first image.
In the above embodiments, there are various ways of displaying the first local image in a dynamic manner, and correspondingly, there are various ways of displaying the second local area in a static manner, which are described below by using different embodiments of the apparatus.
In another embodiment of the present application, the first processor at least controls the first partial image to be displayed in a dynamic manner, including: controlling the first local image to be presented in a dynamic manner in the first local area.
The first partial area corresponds to the first partial image, and in this embodiment the first partial image is controlled to be dynamically displayed in the first partial area. Dynamic display may be characterized in that the first partial image is a plurality of frames of local images presented consecutively; it can be understood that presenting the first partial image in a dynamic manner corresponds to the first partial image being a first local video image. Figuratively, when displaying the first image, if the position corresponding to the first local area is regarded as a small window opened in the first image, the first local video image is played within that window.
The first processor controls the second local image to be displayed in a static manner in the second local area, and the method comprises the following steps: and controlling all the second local images to be displayed in a static mode in the second local area.
The second local image corresponds to a second local area in the first image, and the second local area is an area except the first local area in the first image.
It should be noted that, in this embodiment, the first image may be a continuous multi-frame image, for example, the number of frames of the multi-frame image is fixed, so that the continuous multi-frame image can represent a video image with a period of time being a preset period of time, and the preset period of time may be set based on an actual situation, for example, 10 seconds, 15 seconds, and the like.
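The "small window" behaviour of this device embodiment — the static first image shown everywhere except a dynamically playing first local area — can be sketched per displayed frame as below; the array-based model and names are illustrative assumptions:

```python
import numpy as np

def render_window_frame(first_image, video_frame, top, left, h, w):
    """Render one displayed frame for the 'small window' effect: the
    second local area keeps the static first image, while the first
    local area shows the corresponding region of the current video
    frame. Both images are same-sized arrays."""
    out = first_image.copy()
    out[top:top + h, left:left + w] = video_frame[top:top + h, left:left + w]
    return out
```

Calling this once per frame of the continuous multi-frame image, with the window rectangle fixed, yields a static picture whose first local area plays as video.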
In another embodiment of the present application, the first processor at least controls the first partial image to be displayed in a dynamic manner, including: and controlling the first local image to move towards the first direction in the whole area of the first image and to be displayed in a static mode while moving.
The first processor controls the second local image to be displayed in a static manner in the second local area, and the method comprises the following steps: and controlling all the second local images to be displayed in a static mode in the second local area.
In this embodiment, the first local image corresponds to the first local area, the local images in the first image other than the first local image constitute the second local image, and the second local image corresponds to the second local area. Therefore, when the first local image is controlled to move in the first direction within the whole area of the first image, the first local image will leave the first local area and enter the second local area during the movement; that is, when the first local image moves into the second local area, it will block a part of the second local image.
In order to prevent the first local area from being blank when the first local image is moved from the first local area, the first processor may optionally supplement the first local area with an image based on a local image of an area around the first local area.
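One crude way to supplement the vacated area from its surroundings — a hypothetical stand-in for whatever inpainting the first processor actually applies — is to fill it with the mean of the one-pixel ring of neighbouring pixels:

```python
import numpy as np

def fill_from_surroundings(img, region):
    """Fill the vacated `region` = (r0, r1, c0, c1) with the mean of the
    one-pixel ring of pixels around it. A real implementation might use
    proper inpainting; this only illustrates the idea of supplementing
    the first local area from the surrounding local image."""
    r0, r1, c0, c1 = region
    out = img.copy()
    rr0, rr1 = max(r0 - 1, 0), min(r1 + 1, img.shape[0])
    cc0, cc1 = max(c0 - 1, 0), min(c1 + 1, img.shape[1])
    block = img[rr0:rr1, cc0:cc1].astype(np.float64)
    mask = np.ones(block.shape, dtype=bool)
    mask[r0 - rr0:r1 - rr0, c0 - cc0:c1 - cc0] = False  # exclude the vacated interior
    out[r0:r1, c0:c1] = block[mask].mean()
    return out

img = np.full((6, 6), 7, dtype=np.uint8)
img[2:4, 2:4] = 100                          # the first local image before it moves away
filled = fill_from_surroundings(img, (2, 4, 2, 4))
```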
It should be noted that, in this embodiment, the first image may be a single-frame image or a continuous multi-frame image. For example, the number of frames of the multi-frame image may be fixed, so that the continuous multi-frame image represents a video image spanning a preset period of time, which may be set based on the actual situation, such as 10 seconds or 15 seconds. However, when the first image is a continuous multi-frame image, this embodiment still shows only one of its frames, which may be any frame of the continuous multi-frame image, such as the first frame.
In this embodiment, the first direction may be a preset direction. In another embodiment of the present disclosure, a sensor for monitoring the direction of the line of sight may also be included; the first processor may be further configured to determine the first direction corresponding to a gaze direction.
The monitoring direction of the sensor is consistent with the display direction of the image output device, so that the line of sight of a user watching the first image displayed on the image output device can be monitored; the monitoring may specifically be performed by eye tracking. The first direction corresponding to the gaze direction is then determined, and the first local image is controlled to move in the first direction within the whole area of the first image while being displayed statically as it moves. That is, by determining the first direction corresponding to the gaze direction, the first partial image ultimately moves along with the movement of the user's line of sight.
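The mapping from gaze movement to the first direction can be sketched as follows. This is an illustrative assumption — the gaze coordinates, the dead-zone value, and the function name are all hypothetical, standing in for whatever the eye-tracking sensor actually reports:

```python
import math

def gaze_to_direction(prev_gaze, cur_gaze, dead_zone=2.0):
    """Map the displacement of the monitored gaze point (display-pixel
    coordinates) to a unit direction vector for moving the first local
    image. Returns None inside a small dead zone so that gaze jitter
    does not move the image."""
    dx = cur_gaze[0] - prev_gaze[0]
    dy = cur_gaze[1] - prev_gaze[1]
    norm = math.hypot(dx, dy)
    if norm < dead_zone:
        return None
    return (dx / norm, dy / norm)

direction = gaze_to_direction((100, 100), (160, 100))  # gaze moved to the right
```

The returned unit vector can then drive the per-frame displacement of the first local image, realizing the "image follows the line of sight" effect.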
In another embodiment of the apparatus of the present application, the first direction may also be determined, and specifically, the first processor is further configured to determine a moving direction or a moving trend of the target object in the first local image, and determine the first direction corresponding to the moving direction or the moving trend.
The target object may be an object having a motion capability, and the target object in the first partial image can be determined by performing image processing on the first partial image so as to determine its motion direction or motion trend. It should be noted that if the first image is a continuously displayed multi-frame image, analyzing the first image can determine the movement direction of the target object; if the first image is a single-frame image, the movement trend of the target object can be determined by analyzing the target object itself, such as its orientation or inclination.
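For the multi-frame case, the motion direction can be estimated from two consecutive frames. The sketch below uses a bright-pixel centroid shift — a deliberately minimal, hypothetical stand-in for real object tracking or optical flow:

```python
import numpy as np

def motion_direction(frame_a, frame_b, threshold=0):
    """Estimate the target object's motion between two consecutive
    frames as the shift of the above-threshold pixel centroid."""
    def centroid(f):
        rows, cols = np.nonzero(f > threshold)
        return rows.mean(), cols.mean()
    (ra, ca) = centroid(frame_a)
    (rb, cb) = centroid(frame_b)
    return rb - ra, cb - ca  # (row shift, column shift)

a = np.zeros((5, 5)); a[2, 1] = 1.0
b = np.zeros((5, 5)); b[2, 3] = 1.0
dr, dc = motion_direction(a, b)  # object moved two columns to the right
```

The resulting shift vector plays the role of the first direction; the single-frame "trend" case would instead need a pose or orientation analysis, which is outside this sketch.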
In another embodiment of the present application, the first processor controlling at least the first partial image to be displayed in a dynamic manner includes: after the first local image is controlled to be dynamically displayed in the first local area, sequentially controlling the local images corresponding to a plurality of local areas in the first image that lie on the same reference line as the first local area to be dynamically displayed.
The first processor controlling the second local image to be displayed in a static manner in the second local area includes: controlling the local images in the second local image, other than the one currently being dynamically displayed, to be displayed in a static manner in the second local area.
Wherein the control direction on the reference line corresponds to a first direction; the dynamic mode display is characterized in that the first local image is a plurality of frames of local images displayed continuously.
In this embodiment, the first partial image corresponds to the first partial region, the partial images in the first image other than the first partial image form the second partial image, and the second partial image corresponds to the second partial region. It follows that the plurality of local areas located on the same reference line as the first local area all belong to the second local area. The first processor first controls the dynamic display of the first local image in the first local area and then, following the control direction, sequentially controls the dynamic display of the local images corresponding to the plurality of local areas on that reference line, thereby presenting the local video images in the plurality of local areas of the first image one after another.
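The sequencing itself is simple to express. The sketch below is an assumed scheduling helper (not from the patent): for each output frame it names the single local area, ordered along the reference line in the control direction, that is shown dynamically at that moment, while all others stay static.

```python
def sequential_regions(n_regions, frames_per_region):
    """Yield, for each output frame, the index of the one local area
    (ordered along the reference line) that is dynamically displayed;
    every other area is static during that frame."""
    for region in range(n_regions):
        for _ in range(frames_per_region):
            yield region

schedule = list(sequential_regions(3, 2))  # three areas, two frames each
```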
In this embodiment, the first direction may be a preset direction. In another embodiment of the present application, the first direction may be determined, and specifically, the apparatus may further include: a sensor for monitoring a direction of sight; accordingly, the processor is further configured to determine the first direction corresponding to a gaze direction.
The monitoring direction of the sensor is consistent with the display direction of the image output device, so that the gaze direction of a user watching the first image displayed on the image output device can be monitored; the monitoring may be performed by eye tracking. The first processor can then determine the first direction corresponding to the gaze direction, control the first local image to move in the first direction within the whole area of the first image, and display it statically as it moves. That is, by determining the first direction corresponding to the gaze direction, the first partial image moves along with the movement of the user's line of sight.
In another embodiment of the apparatus of the present application, the first direction may also be determined, and specifically, the first processor is further configured to determine a moving direction or a moving trend of the target object in the first local image, and determine the first direction corresponding to the moving direction or the moving trend.
The target object may be an object having a motion capability, and the target object in the first partial image can be determined by performing image processing on the first partial image so as to determine a motion direction or a motion trend of the target object. It should be noted that if the first image is a multi-frame image that is continuously displayed, analyzing the first image can determine the movement direction of the target object, and if the first image is a single-frame image, analyzing the target object to determine the movement trend of the target object, such as analyzing the orientation of the target object, the inclination of the target object, and the like, can determine the movement trend of the target object.
That is to say, while the first processor sequentially controls the local images corresponding to the local areas on the same reference line as the first local area to be dynamically displayed, all local images in the second local image other than the currently dynamically displayed one are displayed statically in their corresponding local areas.
The first processor may be further configured to control the first partial image to be displayed in a static manner in the first partial area after the first partial image has been dynamically displayed in the first partial area.
It should be noted that, in this embodiment, the first image may be a continuous multi-frame image, for example, the number of frames of the multi-frame image is fixed, so that the continuous multi-frame image can represent a video image with a period of time being a preset period of time, and the preset period of time may be set based on an actual situation, for example, 10 seconds, 15 seconds, and the like.
In another embodiment of the present application, the first image may correspond to a first file, and the first file is a dynamic file, that is, the first file includes a plurality of consecutive frames of images, and the first image belongs to one frame of the plurality of consecutive frames of images.
The first processor controlling at least the first partial image to be presented in a dynamic manner comprises:
controlling the first file to dynamically show a plurality of frames of local images including the first local image in the first local area.
Wherein the plurality of frame partial images correspond to the plurality of frame images, a first partial image belongs to one frame of the plurality of frame partial images, and the first image includes the first partial image and the second partial image.
The first processor controlling the second local image to be displayed in a static manner in the second local area comprises:
controlling the first file to maintain display of the second local image of the first image in the second local area.
It should be noted that the first image may be a first frame image in a multi-frame image in the first file, or a key frame image in the multi-frame image.
In practical applications, the first processor may control playback of the first file; when the first image in the first file is played, determination of the first local area may be triggered automatically or externally (for example, by gaze monitoring with the sensor), so that the first local area is controlled to continue playing the first file while the other areas stay at the first image.
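Composing one display frame during such local playback can be sketched directly: the first local area takes its pixels from the current frame of the file, and everything else stays frozen at the first image. This is an illustrative sketch, not the patent's implementation:

```python
import numpy as np

def local_playback_frame(frames, t, region):
    """Compose display frame t of the dynamic file: the first local area
    `region` = (r0, r1, c0, c1) plays frame t, while every other area
    stays at the first image (frames[0])."""
    r0, r1, c0, c1 = region
    out = frames[0].copy()
    out[r0:r1, c0:c1] = frames[t][r0:r1, c0:c1]
    return out

frames = [np.full((4, 4), k, dtype=np.uint8) for k in range(3)]  # toy 3-frame file
disp = local_playback_frame(frames, 2, (1, 3, 1, 3))
```

Calling this for successive `t` plays the file inside the region while the surroundings remain the first image.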
Furthermore, the first processor may be further configured to, when gaze monitoring with the sensor determines that the line of sight has left, control the first local area to stop displaying the local images of the first file and control the entire display to update to a second image. The second image is the frame of the first file that contains the local image shown in the first local area when its display stopped. In this way, when the user's gaze leaves, the displayed image is not misaligned as the first local area stops playing (otherwise the second local area would show part of the first image while the first local area showed part of a different frame), thereby implementing a full-frame update of the dynamic file.
Alternatively, the first processor may be further configured to determine that the dynamic display in the first local area has reached the last local image frame of the first file, and then control the entire display to show the second image. That is, the display area of the image output device entirely displays the second image, which is the last frame of the first file, so that the displayed image is not misaligned when the first partial area stops playing (the second partial area showing part of the first image while the first partial area shows part of another frame), realizing a full-frame update of the dynamic file.
In another embodiment of the present application, the first image may correspond to a first file, and the first file is a dynamic file, that is, the first file includes consecutive multi-frame images, and the first image belongs to one frame of the consecutive multi-frame images.
The first processor controlling at least the first partial image to be presented in a dynamic manner comprises:
after the first file is controlled to dynamically display the multi-frame local images including the first local image in the first local area, sequentially controlling the first file to dynamically display the corresponding continuous multi-frame local images in a plurality of local areas located on the same reference line as the first local area.
The multi-frame local images correspond to multi-frame images in a first file, the first local image belongs to one of the multi-frame local images, the first local image belongs to the first image, and the first image is one of the multi-frame images.
The first processor controlling the second local image to be displayed in a static manner in the second local area comprises:
controlling the first file to maintain the currently displayed frame of local image in the local areas other than the one currently being dynamically displayed.
It should be noted that each maintained local image corresponds to its own local area: among the local areas on the same reference line as the first local area, every area other than the one currently being dynamically displayed keeps showing the last local image it displayed when its dynamic display stopped. Meanwhile, the local areas that are neither the first local area nor on that reference line keep displaying their corresponding local images from the first image.
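The per-area bookkeeping just described can be sketched as a compositor: one area is live, the areas on the reference line are frozen at the frame index each last displayed, and everything else keeps the first image. All names and the `frozen_at` bookkeeping are illustrative assumptions:

```python
import numpy as np

def compose_line_regions(frames, regions, frozen_at, live_idx, t):
    """Compose one display frame during sequential playback along the
    reference line: region `live_idx` shows current frame t; every other
    listed region stays at the frame index recorded in `frozen_at` (the
    last frame it displayed); areas off the line keep the first image."""
    out = frames[0].copy()
    for i, (r0, r1, c0, c1) in enumerate(regions):
        src = frames[t] if i == live_idx else frames[frozen_at[i]]
        out[r0:r1, c0:c1] = src[r0:r1, c0:c1]
    return out

frames = [np.full((4, 6), k, dtype=np.uint8) for k in range(4)]   # toy 4-frame file
regions = [(1, 3, 0, 2), (1, 3, 2, 4), (1, 3, 4, 6)]              # three areas on one row
disp = compose_line_regions(frames, regions, frozen_at=[3, 0, 0], live_idx=1, t=2)
```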
In this embodiment, the control direction on the reference line corresponds to the first direction. Specifically, in this embodiment, the first processor may be further configured to perform line-of-sight monitoring by using the sensor, and determine a first direction corresponding to the line-of-sight direction.
The monitoring direction of the sensor is consistent with the display direction of the image output device, so that the line of sight of a user currently watching the first file played on the image output device can be monitored; the monitoring may be performed by eye tracking. When the user's line of sight is determined to be located on the first file, local dynamic playback of the first file is triggered. Specifically, the first local area where the user's gaze falls can be determined, so that the first local area is controlled to continuously play the multi-frame local images corresponding to it in the first file, while the local areas other than the first local area maintain the local images displayed at the moment the gaze was detected.
In this application, the frame being played when the first file is triggered into local dynamic playback is referred to as the first image. That is, at the start of the triggered local dynamic playback, the first local area corresponds to the first local image in the first image and continues to play the local images of the subsequent frames of the first file, while the second local area outside the first local area corresponds to the second local image in the first image and keeps displaying that second local image.
Subsequently, movement of the user's line of sight is monitored, and the direction of that movement is determined as the first direction. In this application, the plurality of local areas that correspond to the first direction and lie on the same reference line as the first local area are then sequentially controlled to dynamically display their multi-frame local images, while the first file maintains the currently displayed frame of local image in the other local areas. In this way, during playback of the first file, the local areas along the direction of gaze movement are dynamically displayed one after another as the user's line of sight moves.
That is, the first processor may control playing of the first file, and perform line-of-sight monitoring using the sensor to trigger the first file to perform local dynamic display.
Further, in this embodiment, the first processor may be further configured to, when gaze monitoring by the sensor determines that the line of sight has left, control the local area of the first file in which multiple frames of local images are currently being dynamically displayed to stop the dynamic display, and update the entire display to the second image. The second image is the frame of the first file at the moment the dynamic display stopped, so that the displayed image is not misaligned when the local area stops playing as the user's gaze leaves, thereby implementing a full-frame update of the dynamic file.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. An image processing method comprising:
in the process of showing a first image, determining a first local area meeting a condition in the first image; the first image corresponds to a first file, the first file comprises continuous multi-frame images, and the first image belongs to one frame of the continuous multi-frame images;
determining a first partial image corresponding to the first partial region in the first image and a second partial image other than the first partial image;
controlling the first partial image to be displayed in a first display mode, and controlling the second partial image to be displayed in a second display mode; wherein the first display mode is different from the second display mode; wherein the second local image corresponds to a second local region of the first image;
the controlling the first partial image to be displayed in a first display mode and controlling the second partial image to be displayed in a second display mode comprises:
controlling at least the first partial image to be displayed in a dynamic mode, and controlling the second partial image to be displayed in a static mode in the second partial area;
the controlling at least the first partial image to be presented in a dynamic manner includes: controlling the first file to dynamically display a plurality of frames of local images including a first local image in a first local area; the multi-frame local images correspond to the multi-frame images, and the first local image belongs to one frame of the multi-frame local images;
the controlling the second local image to be displayed in a static manner in the second local area includes: controlling the first file to maintain a second local image showing the first image in a second local area;
alternatively,
the controlling at least the first partial image to be presented in a dynamic manner includes: after the first file is controlled to dynamically display a plurality of frames of local images including a first local image in a first local area, sequentially controlling the first file to dynamically display corresponding continuous multi-frame local images in a plurality of local areas which are positioned on the same datum line with the first local area; the multi-frame local image corresponds to a multi-frame image in a first file, the first local image belongs to one of the multi-frame local images, the first local image belongs to the first image, and the first image is one of the multi-frame images;
the controlling the second local image to be displayed in a static manner in the second local area includes: controlling the first file to maintain the currently displayed frame of local image in other local areas except the currently dynamically displayed local area.
2. The method of claim 1, the controlling at least the first partial image to be presented in a dynamic manner, comprising:
controlling the first local image to be displayed in a dynamic mode in the first local area; wherein the dynamic mode presentation is characterized in that the first local image is a plurality of frames of local images which are continuously presented;
the controlling the second local image to be displayed in a static manner in the second local area includes:
controlling all of the second local image to be displayed in a static manner in the second local area.
3. The method of claim 1, further comprising:
the line of sight detection is performed by the sensing unit, and a first direction corresponding to the line of sight direction is determined.
4. The method of claim 1, further comprising:
determining a motion direction or a motion trend of a target object in the first partial image;
a first direction corresponding to the direction or trend of motion is determined.
5. The method of claim 1, the determining a first local region in the first image that satisfies a condition, comprising:
receiving a first operation of an operation body on the first image, and determining a first local area corresponding to the first operation;
or, using a sensing unit to perform sight line detection, and determining a first local area positioned by a sight line in the first image;
alternatively, the first image is subjected to image processing to extract a first local region having a preset image feature.
6. An image output apparatus comprising:
a first display for displaying a first image; the first image corresponds to a first file, the first file comprises continuous multi-frame images, and the first image belongs to one frame of the continuous multi-frame images;
a first processor, configured to determine a first partial area satisfying a condition in the first image during presentation of the first image, determine a first partial image corresponding to the first partial area and a second partial image excluding the first partial image in the first image, control the first partial image to be presented in a first presentation manner in the first display, and control the second partial image to be presented in a second presentation manner in the first display; wherein the first display mode is different from the second display mode;
wherein the second local image corresponds to a second local region of the first image;
the first processor controlling the first partial image to be displayed in the first display mode and the second partial image to be displayed in the second display mode includes: controlling at least the first partial image to be displayed in a dynamic mode, and controlling the second partial image to be displayed in a static mode in the second partial area;
the first processor controls at least the first partial image to be presented in a dynamic manner, including: controlling the first file to dynamically display a plurality of frames of local images including a first local image in a first local area; the multi-frame local images correspond to the multi-frame images, and the first local image belongs to one frame of the multi-frame local images;
the first processor controlling the second local image to be displayed in a static manner in the second local area includes: controlling the first file to maintain display of the second local image of the first image in the second local area;
alternatively,
the first processor controls at least the first partial image to be presented in a dynamic manner, including: after the first file is controlled to dynamically display a plurality of frames of local images including a first local image in a first local area, sequentially controlling the first file to dynamically display corresponding continuous multi-frame local images in a plurality of local areas which are positioned on the same datum line with the first local area; the multi-frame local image corresponds to a multi-frame image in a first file, the first local image belongs to one of the multi-frame local images, the first local image belongs to the first image, and the first image is one of the multi-frame images;
the first processor controlling the second local image to be displayed in a static manner in the second local area includes: controlling the first file to maintain the currently displayed frame of local image in other local areas except the currently dynamically displayed local area.
7. The image output apparatus according to claim 6, further comprising:
a sensor for monitoring a line of sight;
the first processor is specifically configured to determine a first local region of the gaze location in the first image.
CN201911416001.3A 2019-12-31 2019-12-31 Image processing method and image output equipment Active CN111083553B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911416001.3A CN111083553B (en) 2019-12-31 2019-12-31 Image processing method and image output equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911416001.3A CN111083553B (en) 2019-12-31 2019-12-31 Image processing method and image output equipment

Publications (2)

Publication Number Publication Date
CN111083553A CN111083553A (en) 2020-04-28
CN111083553B true CN111083553B (en) 2021-08-17

Family

ID=70320861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911416001.3A Active CN111083553B (en) 2019-12-31 2019-12-31 Image processing method and image output equipment

Country Status (1)

Country Link
CN (1) CN111083553B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105474213A (en) * 2013-07-30 2016-04-06 柯达阿拉里斯股份有限公司 System and method for creating navigable views of ordered images
CN106851114A (en) * 2017-03-31 2017-06-13 努比亚技术有限公司 A kind of photo shows, photo generating means and method, terminal
CN106959759A (en) * 2017-03-31 2017-07-18 联想(北京)有限公司 A kind of data processing method and device
CN107360381A (en) * 2017-07-03 2017-11-17 联想(北京)有限公司 Data processing method and photographing device
CN108255299A (en) * 2018-01-10 2018-07-06 京东方科技集团股份有限公司 A kind of image processing method and device
CN110460831A (en) * 2019-08-22 2019-11-15 京东方科技集团股份有限公司 Display methods, device, equipment and computer readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7034833B2 (en) * 2002-05-29 2006-04-25 Intel Corporation Animated photographs
CN103533249B (en) * 2013-10-25 2017-12-12 惠州Tcl移动通信有限公司 A kind of method and device for adjusting foreground and background
JP2016118991A (en) * 2014-12-22 2016-06-30 カシオ計算機株式会社 Image generation device, image generation method, and program


Also Published As

Publication number Publication date
CN111083553A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
CN109637518B (en) Virtual anchor implementation method and device
US9997197B2 (en) Method and device for controlling playback
US9674395B2 (en) Methods and apparatuses for generating photograph
US10026381B2 (en) Method and device for adjusting and displaying image
EP3182716A1 (en) Method and device for video display
US20190005415A1 (en) Queuing apparatus, and queuing control method thereof
US9817235B2 (en) Method and apparatus for prompting based on smart glasses
JP2017526078A5 (en)
JP2008269588A (en) Recognition device, recognition method, and recognition program
EP3076664A1 (en) Method and device for controlling playing and electronic equipment
EP3879530A1 (en) Video processing method, video processing device, and storage medium
EP4231625A1 (en) Photographing method and apparatus, and electronic device
CN111770374B (en) Video playing method and device
WO2017035025A1 (en) Engagement analytic system and display system responsive to user's interaction and/or position
CN111656313A (en) Screen display switching method, display device and movable platform
CN115719586A (en) Screen refresh rate adjusting method and device, electronic equipment and storage medium
CN111083553B (en) Image processing method and image output equipment
CN112954486B (en) Vehicle-mounted video trace processing method based on sight attention
CN107105311B (en) Live broadcasting method and device
EP3624443B1 (en) Surveillance device, surveillance method, computer program, and storage medium
CN111340690A (en) Image processing method, image processing device, electronic equipment and storage medium
CN108986803B (en) Scene control method and device, electronic equipment and readable storage medium
CN104244065B (en) A kind of method and device of captions processing
CN106293065A (en) The control method of application program and control system
CN110941344B (en) Method for obtaining gazing point data and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant