CN110740309A - image display method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110740309A
CN110740309A (application CN201910927049.4A; granted as CN110740309B)
Authority
CN
China
Prior art keywords
image
target information
frame
area
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910927049.4A
Other languages
Chinese (zh)
Other versions
CN110740309B (en)
Inventor
潘嘉荔
张树鹏
梁雅涵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201910927049.4A priority Critical patent/CN110740309B/en
Publication of CN110740309A publication Critical patent/CN110740309A/en
Application granted granted Critical
Publication of CN110740309B publication Critical patent/CN110740309B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/207Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/122Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the disclosure disclose an image display method and device, an electronic device, and a storage medium, which can realize a naked-eye stereoscopic special effect when shooting with a monocular camera. The method may comprise: acquiring an original image frame from an acquired image stream while the image is being captured; drawing an auxiliary foreground image on the original image frame to obtain an intermediate image frame with the auxiliary foreground image; if the original image frame contains target information, calculating the proportion of the display screen occupied by the target information, the target information being the image to be displayed with the stereoscopic effect; when the calculated proportion is greater than or equal to a preset threshold, dividing the area occupied by the target information from the intermediate image frame and taking it as a target area; filling the target area with the target information from the original image frame to obtain the image to be displayed with the stereoscopic effect; and displaying the image with the stereoscopic effect.

Description

image display method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of software engineering, and in particular, to an image display method and apparatus, an electronic device, and a storage medium.
Background
At present, when a terminal shoots a three-dimensional (3D, 3 Dimensions) image, a binocular camera is generally required to photograph an object simultaneously from two viewpoints; the images captured by the binocular camera are then synthesized into a 3D image, and the viewer experiences the 3D effect by wearing 3D glasses. The prior art therefore requires a binocular camera for shooting and 3D glasses for viewing in order to experience the 3D effect of an image.
Disclosure of Invention
The embodiments of the disclosure provide an image display method and apparatus, an electronic device, and a storage medium.
In a first aspect, embodiments of the present disclosure provide an image display method, comprising:
acquiring an original image frame from an acquired image stream when acquiring an image;
drawing an auxiliary foreground image on the original image frame to obtain an intermediate image frame with the auxiliary foreground image;
if the original image frame contains target information, calculating the proportion of the target information occupying a display screen, wherein the target information is an image to be subjected to three-dimensional effect display;
when the calculation result of the ratio is greater than or equal to a first preset threshold, dividing an area occupied by the target information from the intermediate image frame, and taking the area occupied by the target information as a target area;
filling the target area with the target information in the original image frame to obtain a stereoscopic image to be displayed;
and displaying the stereoscopic effect image.
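The claimed steps can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the 20% threshold value, the alpha-mask representation of the auxiliary foreground, and the function names are all assumptions for the example.

```python
import numpy as np

# Hypothetical value for the first preset threshold: the target must
# cover at least 20% of the display screen for the effect to trigger.
FIRST_PRESET_THRESHOLD = 0.20

def screen_ratio(target_mask: np.ndarray) -> float:
    """Fraction of display-screen pixels occupied by the target information."""
    return float(target_mask.sum()) / target_mask.size

def display_with_depth_effect(original: np.ndarray,
                              foreground: np.ndarray,
                              fg_alpha: np.ndarray,
                              target_mask: np.ndarray) -> np.ndarray:
    """Sketch of the claimed pipeline: draw the auxiliary foreground over the
    original frame, then re-fill the target region from the original frame so
    the target appears to step out in front of the foreground."""
    # Steps 1-2: draw the auxiliary foreground -> intermediate image frame.
    intermediate = np.where(fg_alpha[..., None] > 0, foreground, original)
    # Step 3: proportion of the screen occupied by the target information.
    if screen_ratio(target_mask) < FIRST_PRESET_THRESHOLD:
        return intermediate  # ratio below first threshold: no stereo effect
    # Steps 4-5: take the target-occupied area from the intermediate frame
    # and fill it with the corresponding pixels of the original frame.
    result = intermediate.copy()
    result[target_mask] = original[target_mask]
    return result
```

With these assumptions, pixels inside the target area show the original (unoccluded) content, while the auxiliary foreground remains drawn everywhere else, producing the occlusion cue described in the claims.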
In the above scheme, after calculating the ratio of the target information occupying the display screen, the method further includes:
when the calculation result of the proportion is greater than or equal to a second preset threshold and smaller than the first preset threshold, performing reduction processing on the target area to obtain a frame body corresponding to the target area;
and according to the position relation between the frame body corresponding to the target area and the auxiliary foreground image, distorting the auxiliary foreground image to obtain the distorted auxiliary foreground image.
In the foregoing solution, the distorting the auxiliary foreground image according to the position relationship between the frame corresponding to the target region and the auxiliary foreground image to obtain the distorted auxiliary foreground image includes:
for each pixel point in the auxiliary foreground image, calculating the product of the length-to-width ratio of the frame body corresponding to the target area and the ordinate of the pixel point, and taking the sum of the square of the product and the square of the abscissa of the pixel point as a first calculation result;
for each pixel point in the auxiliary foreground image, calculating twice the square of the width as a second calculation result;
and when the first calculation result is smaller than the second calculation result, transforming the coordinates of each pixel point in a texture coordinate space to obtain the distorted auxiliary foreground image.
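Read literally, the per-pixel test in this claim is an ellipse check: a foreground pixel is warped only when (aspect · y)² + x² falls below twice the square of the frame width. A sketch follows; placing the coordinate origin at the frame-body centre is an assumption, since the claim does not fix it.

```python
def warp_condition(x: float, y: float, box_w: float, box_h: float) -> bool:
    """Compare the first and second calculation results from the claim.
    x and y are a pixel's abscissa and ordinate, assumed here to be
    measured from the centre of the frame body."""
    aspect = box_h / box_w                # length-to-width ratio of the frame body
    first = (aspect * y) ** 2 + x ** 2    # first calculation result
    second = 2.0 * box_w ** 2             # second calculation result: 2 * width^2
    return first < second                 # only pixels satisfying this are warped
```

Pixels far from the frame body fail the test and keep their original texture coordinates, so the distortion stays local to the target area.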
In the foregoing solution, the transforming the coordinate of each pixel point in a texture coordinate space to obtain the distorted auxiliary foreground image includes:
acquiring a central coordinate of a frame corresponding to the target area, wherein the central coordinate is a coordinate of a central point;
calculating a distortion coefficient of the auxiliary foreground image according to the proportion of the target information occupying a display screen, wherein the distortion coefficient represents the distortion degree of the auxiliary foreground image;
calculating a new distorted coordinate of each pixel point according to the central coordinate, the first calculation result, the second calculation result and the distortion coefficient;
and obtaining the distorted auxiliary foreground image according to the distorted new coordinates of each pixel point.
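The claim names the inputs of the transform (the centre coordinate, both calculation results, and a distortion coefficient derived from the screen ratio) but not the formula itself. The radial-displacement sketch below is therefore purely illustrative: the coefficient `k = 0.5 * ratio` and the falloff term are invented for the example, not taken from the patent.

```python
def warp_pixel(u: float, v: float, center, ratio: float,
               box_w: float, box_h: float):
    """Hypothetical warp in texture coordinate space: displace each pixel
    radially away from the frame-body centre, more strongly the larger
    the target's screen ratio. Only the choice of inputs follows the claim."""
    cu, cv = center
    aspect = box_h / box_w
    dx, dy = u - cu, v - cv
    first = (aspect * dy) ** 2 + dx ** 2   # first calculation result
    second = 2.0 * box_w ** 2              # second calculation result
    if first >= second:                    # outside the ellipse: untouched
        return u, v
    k = 0.5 * ratio                        # distortion coefficient (assumed form)
    scale = 1.0 + k * (1.0 - first / second)  # strongest near the centre
    return cu + dx * scale, cv + dy * scale
```

Applying this to every foreground pixel yields a bulge around the target area, consistent with the "distorted auxiliary foreground image" figures referenced later.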
In the above solution, the dividing the area occupied by the target information from the intermediate image frame, and taking the area occupied by the target information as a target area, includes:
dividing the intermediate image frame by using an image segmentation detection algorithm to obtain a target information occupied area, wherein an image in the target information occupied area comprises the target information and the auxiliary foreground image in the target information occupied area;
and shielding the target information occupied area, reserving images of other areas in the intermediate image frame, and taking the shielded target information occupied area as the target area.
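The shielding step can be shown in a few lines; the boolean mask is assumed to come from some image segmentation detector (the claim names no specific algorithm), and zeroing the region is one possible form of "shielding".

```python
import numpy as np

def shield_target_area(intermediate: np.ndarray, occupied: np.ndarray):
    """Shield the target-information occupied area while retaining the
    images of the other areas in the intermediate frame. 'occupied' is a
    boolean segmentation mask; the shielded region is used as the target
    area in the subsequent filling step."""
    shielded = intermediate.copy()
    shielded[occupied] = 0       # shield (blank out) the occupied area
    return shielded, occupied    # the mask doubles as the target area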
In the foregoing solution, when the auxiliary foreground image is an annular auxiliary foreground image, the filling the target area with the target information in the original image frame to obtain a to-be-displayed stereoscopic image includes:
and after the target area is filled into the image of the corresponding area in the original image frame, drawing the lower half part of the auxiliary foreground image in the ring shape again to obtain the image with the stereoscopic effect to be displayed.
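The annular case is a z-ordering trick: after the target region is filled back from the original frame (covering the whole ring), only the lower half of the ring is drawn again on top, so the upper arc passes behind the target while the lower arc passes in front of it. A NumPy sketch under the same alpha-mask assumptions as before:

```python
import numpy as np

def compose_with_ring(original: np.ndarray,
                      ring: np.ndarray,
                      ring_alpha: np.ndarray,
                      target_mask: np.ndarray) -> np.ndarray:
    """Occlusion trick for an annular auxiliary foreground: target over the
    whole ring, then the lower half of the ring redrawn over the target."""
    h = original.shape[0]
    # Intermediate frame: ring drawn over the original.
    frame = np.where(ring_alpha[..., None] > 0, ring, original)
    # Fill the target area from the original frame (target now hides the ring).
    frame[target_mask] = original[target_mask]
    # Redraw only the lower half of the ring on top of everything.
    lower = np.zeros_like(ring_alpha, dtype=bool)
    lower[h // 2:, :] = ring_alpha[h // 2:, :] > 0
    frame[lower] = ring[lower]
    return frame
```

The resulting depth ordering (behind above, in front below) is what makes the target read as leaning out of the ring.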
In the above solution, after calculating a ratio of the target information occupying a display screen when the target information appears in the intermediate image frame, the method further includes:
and when the calculation result of the proportion is less than the first preset threshold value, not displaying the stereoscopic effect image.
In the above solution, the acquiring an original image frame from an acquired image stream includes:
and when the preset time length is reached, capturing the acquired image stream to obtain the original image frame.
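The interval-based capture can be sketched with a small grabber class; the injectable clock and the interval value are illustrative conveniences, since the claim only says a frame is captured "when the predetermined time length is reached".

```python
import time

class FrameGrabber:
    """Capture one original image frame from the stream each time a
    predetermined interval elapses; frames arriving in between are ignored."""

    def __init__(self, interval_s: float, clock=time.monotonic):
        self.interval = interval_s
        self.clock = clock
        self._last = None

    def maybe_capture(self, frame):
        now = self.clock()
        if self._last is None or now - self._last >= self.interval:
            self._last = now
            return frame   # this frame becomes the original image frame
        return None        # between captures: frame is not taken
```

Injecting a fake clock makes the timing behaviour testable without real delays, which is why `clock` is a parameter rather than a hard-coded call.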
In the above solution, before the obtaining the original image frame from the acquired image stream, the method further includes:
detecting whether a stereoscopic effect switch is turned on, wherein the stereoscopic effect switch is used for controlling whether the stereoscopic effect of the image is turned on;
when the stereoscopic effect switch is turned on, the image display method is performed.
In a second aspect, embodiments of the present disclosure provide an image display device, including an acquisition unit, a drawing unit, a calculation unit, a dividing unit, and a filling unit, wherein,
the acquisition unit is used for acquiring an original image frame from an acquired image stream when an image is acquired;
the drawing unit is used for drawing an auxiliary foreground image on the original image frame to obtain an intermediate image frame with the auxiliary foreground image;
the calculating unit is used for calculating the proportion of the target information occupying a display screen if the original image frame contains the target information, wherein the target information is an image to be subjected to three-dimensional effect display;
the dividing unit is used for dividing an area occupied by the target information from the intermediate image frame when the calculation result of the proportion is greater than or equal to a first preset threshold value, and taking the area occupied by the target information as a target area;
the filling unit is used for filling the target area with the target information in the original image frame to obtain a stereoscopic image to be displayed;
the drawing unit is further used for displaying the stereoscopic effect image.
In the foregoing solution, the calculating unit is specifically configured to, after calculating the ratio of the target information occupying the display screen, perform reduction processing on the target area when the calculation result of the ratio is greater than or equal to a second preset threshold and smaller than the first preset threshold to obtain a frame body corresponding to the target area, and distort the auxiliary foreground image according to the positional relationship between the frame body corresponding to the target area and the auxiliary foreground image to obtain the distorted auxiliary foreground image.
In the above aspect, the image display apparatus further includes a warping unit, wherein,
the distortion unit is used for calculating, for each pixel point in the auxiliary foreground image, the product of the length-to-width ratio of the frame body corresponding to the target area and the ordinate of the pixel point, taking the sum of the square of the product and the square of the abscissa of the pixel point as a first calculation result, calculating twice the square of the width as a second calculation result, and, when the first calculation result is smaller than the second calculation result, transforming the coordinates of each pixel point in a texture coordinate space to obtain the distorted auxiliary foreground image.
In the above scheme, the warping unit is specifically configured to obtain a center coordinate of the frame body corresponding to the target area, where the center coordinate is the coordinate of the center point, calculate a warping coefficient of the auxiliary foreground image according to the ratio of the target information occupying the display screen, where the warping coefficient represents the warping degree of the auxiliary foreground image, calculate a new distorted coordinate of each pixel according to the center coordinate, the first calculation result, the second calculation result, and the warping coefficient, and obtain the warped auxiliary foreground image according to the new distorted coordinates of each pixel.
In the foregoing solution, the dividing unit is specifically configured to divide the intermediate image frame by using an image segmentation detection algorithm to obtain the target information occupied area, where an image in the target information occupied area includes the target information and the auxiliary foreground image in the target information occupied area; and shielding the target information occupied area, reserving images of other areas in the intermediate image frame, and taking the shielded target information occupied area as the target area.
In the foregoing solution, when the auxiliary foreground image is an annular auxiliary foreground image, the filling unit is specifically configured to, after filling the target region with an image of a corresponding region in the original image frame, draw the lower half portion of the annular auxiliary foreground image again to obtain a to-be-displayed stereoscopic image.
In the foregoing solution, the calculating unit is further configured to, after calculating the ratio of the target information occupying the display screen if the original image frame includes the target information, not display the stereoscopic image when the calculation result of the ratio is smaller than a first preset threshold.
In the above scheme, the acquisition unit is specifically configured to capture the acquired image stream to obtain the original image frame when a predetermined time length is reached.
In the above-mentioned aspect, the image display apparatus further includes a detection unit and an execution unit, wherein,
the detection unit is used for detecting whether a stereoscopic effect switch is turned on before an original image frame is acquired from the acquired image stream, wherein the stereoscopic effect switch is used for controlling whether the stereoscopic effect of the image is turned on;
the execution unit is used for executing the image display method when the stereoscopic effect switch is turned on.
In a third aspect, embodiments of the present disclosure provide an electronic device for image display, including:
a memory for storing executable instructions;
and the processor is used for realizing the image display method provided by the embodiment of the disclosure when the executable instruction is executed.
In a fourth aspect, embodiments of the present disclosure provide a storage medium storing executable instructions which, when executed, are used to implement the image display method provided by the embodiments of the present disclosure.
The embodiment of the disclosure has the following beneficial effects:
the spatial position is expressed by changing the shielding relation between the auxiliary foreground image and the target information, 3D spatial sense is simulated, the visual effect that the target information stretches out from the auxiliary foreground image is embodied, and therefore the special effect of naked eye stereoscopic effect is established when the monocular camera shoots.
Drawings
The above and other features, advantages, and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings, in which like or similar reference numbers identify like or similar elements throughout the figures, it being understood that the figures are schematic and that elements are not necessarily drawn to scale.
FIG. 1 is a schematic block diagram of an electronic device 100 implementing an embodiment of the disclosure;
fig. 2 is a schematic view of an alternative structure of an image display device 200 implementing an embodiment of the present disclosure;
FIG. 3 is a first schematic flow diagram of an alternative image display method in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an original image frame in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an intermediate image frame with an auxiliary foreground image in an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a face frame corresponding to a face image in an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a target information footprint in an embodiment of the present disclosure;
FIG. 8 is a schematic view of a target area in an embodiment of the present disclosure;
fig. 9 is a schematic diagram of a process in which the image display apparatus obtains the face image in the original image frame corresponding to the target area in the intermediate image frame according to an embodiment of the present disclosure;
fig. 10 is a schematic diagram of a stereoscopic effect image to be displayed in an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of a stereoscopic image to be displayed corresponding to a human hand in an embodiment of the disclosure;
FIG. 12 is a schematic diagram of an auxiliary foreground image of a circular line in an embodiment of the present disclosure;
FIG. 13 is a second flowchart illustration of an alternative image display method in an embodiment of the disclosure;
FIG. 14 is a first schematic diagram of a warped auxiliary foreground image in an embodiment of the disclosure;
FIG. 15 is a second schematic diagram of a warped auxiliary foreground image in an embodiment of the present disclosure;
fig. 16 is a third schematic diagram of the warped auxiliary foreground image in the embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein is inclusive, i.e., "including but not limited to". The term "based on" is "based at least in part on". The term "one embodiment" means "at least one embodiment". The term "another embodiment" means "at least one additional embodiment". The term "some embodiments" means "at least some embodiments".
It should be noted that the terms "first", "second", etc. mentioned in this disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence of the functions performed by these devices, modules or units.
It is noted that references to "a" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Referring now to fig. 1, fig. 1 is a schematic block diagram of an electronic device 100 implementing an embodiment of the present disclosure. The electronic device may be any of various terminals, including mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a Portable Multimedia Player (PMP), or a car terminal (e.g., a car navigation terminal), and fixed terminals such as a digital Television (TV) or a desktop computer. The electronic device shown in fig. 1 is merely an example and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
As shown in fig. 1, the electronic device 100 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 110, which may perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM)120 or a program loaded from a storage means 180 into a Random Access Memory (RAM) 130. In the RAM 130, various programs and data necessary for the operation of the electronic apparatus 100 are also stored. The processing device 110, the ROM120, and the RAM 130 are connected to each other through a bus 140. An Input/Output (I/O) interface 150 is also connected to bus 140.
Generally, the following devices may be connected to the I/O interface 150: input devices 160 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 170 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; a storage device 180 including, for example, a magnetic tape, a hard disk, or the like; and a communication device 190. The communication device 190 may allow the electronic device 100 to communicate wirelessly or by wire with other devices to exchange data. While fig. 1 illustrates an electronic device 100 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
For example, the disclosed embodiments include a computer program product comprising a computer program embodied on a computer readable storage medium (i.e., storage medium), the computer program containing program code for performing the methods illustrated by the flowcharts.
More specific examples of the computer readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an erasable programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable compact disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer-readable storage medium may be included in the electronic device 100; or may be separate and not incorporated into the electronic device 100.
The computer-readable storage medium carries one or more programs which, when executed by the electronic device 100, cause the electronic device to perform the image display method provided by the embodiments of the present disclosure.
Computer program code for carrying out operations in embodiments of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures, for example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
The name of a unit does not in some cases constitute a limitation of the unit itself; for example, the first acquiring unit may also be described as a "unit acquiring at least two internet protocol addresses".
For example, without limitation, exemplary types of hardware logic components that may be used include Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.
In the context of embodiments of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The following describes the units and/or modules in an image display device in connection with an embodiment of the present disclosure. It is understood that the units or modules in the image display apparatus can be implemented in the electronic device shown in fig. 1 by software (for example, a computer program stored in a computer-readable storage medium) or by the hardware logic components (for example, FPGA, ASIC, ASSP, SOC, and CPLD) described above.
Referring to fig. 2, fig. 2 is a schematic diagram of an optional structure of an image display device 200 implementing an embodiment of the present disclosure, showing an acquisition unit 210, a drawing unit 220, a calculation unit 230, a dividing unit 240 and a filling unit 250, wherein,
the acquiring unit 210 is configured to, when acquiring an image, acquire an original image frame from an acquired image stream;
the drawing unit 220 is configured to draw an auxiliary foreground image on the original image frame to obtain an intermediate image frame with the auxiliary foreground image;
the calculating unit 230 is configured to calculate, if the original image frame includes target information, a ratio of the target information occupying a display screen, where the target information is an image to be displayed with a stereoscopic effect;
the dividing unit 240 is configured to divide an area occupied by the target information from the intermediate image frame, taking the area occupied by the target information as a target area, when the calculation result of the ratio is greater than or equal to a first preset threshold;
the filling unit 250 is configured to fill the target area with the target information in the original image frame to obtain a stereoscopic image to be displayed;
the rendering unit 220 is further configured to display the stereoscopic image.
In the foregoing solution, the calculating unit 230 is specifically configured to, after calculating the ratio of the target information occupying the display screen, perform reduction processing on the target area when the calculation result of the ratio is greater than or equal to a second preset threshold and smaller than the first preset threshold to obtain a frame body corresponding to the target area, and warp the auxiliary foreground image according to the positional relationship between the frame body corresponding to the target area and the auxiliary foreground image to obtain the warped auxiliary foreground image.
In the above-described aspect, the image display apparatus further includes a warping unit 260, wherein,
the warping unit 260 is configured to calculate, for each pixel point in the auxiliary foreground image, the product of the length-to-width ratio of the frame body corresponding to the target region and the ordinate of the pixel point, use the sum of the square of the product and the square of the abscissa of the pixel point as a first calculation result, calculate twice the square of the width as a second calculation result, and transform, when the first calculation result is smaller than the second calculation result, the coordinates of each pixel point in a texture coordinate space to obtain the warped auxiliary foreground image.
In the above scheme, the warping unit 260 is specifically configured to obtain a center coordinate of the frame body corresponding to the target area, where the center coordinate is the coordinate of the center point, calculate a warping coefficient of the auxiliary foreground image according to the ratio of the target information occupying the display screen, where the warping coefficient represents the warping degree of the auxiliary foreground image, calculate a new distorted coordinate of each pixel according to the center coordinate, the first calculation result, the second calculation result, and the warping coefficient, and obtain the warped auxiliary foreground image according to the new distorted coordinates of each pixel.
In the foregoing solution, the dividing unit 240 is specifically configured to divide the intermediate image frame by using an image segmentation detection algorithm to obtain the target information occupied area, where an image in the target information occupied area includes the target information and the auxiliary foreground image in the target information occupied area; and shielding the target information occupied area, reserving images of other areas in the intermediate image frame, and taking the shielded target information occupied area as the target area.
In the foregoing solution, when the auxiliary foreground image is an annular auxiliary foreground image, the filling unit 250 is specifically configured to draw the lower half portion of the annular auxiliary foreground image again after the target area is filled with the image of the corresponding area in the original image frame, so as to obtain the to-be-displayed stereoscopic image.
In the above solution, the calculating unit 230 is further configured to, after calculating the ratio of the target information occupying the display screen if the original image frame includes the target information, not display the stereoscopic image when the calculation result of the ratio is smaller than a first preset threshold.
In the above scheme, the acquisition unit 210 is specifically configured to capture a frame of the acquired image stream when a predetermined time length is reached, so as to obtain the original image frame.
In the above solution, the image display apparatus further comprises a detection unit 270 and an execution unit 280, wherein,
the detecting unit 270 is configured to detect whether a stereoscopic effect switch is turned on before the original image frame is acquired from the acquired image stream, where the stereoscopic effect switch is a switch that controls whether a stereoscopic effect of an image is turned on;
the execution unit 280 is configured to execute the image display method when the stereoscopic effect switch is turned on.
It should be noted that the above classification of units does not constitute a limitation of the electronic device itself; for example, a unit may be split into two or more sub-units, or several units may be combined into a new unit.
It is further noted that the names of the above units do not in some cases constitute a limitation on the units themselves, for example, the above acquisition unit 210 may also be described as a unit for "acquiring an original image frame from an acquired image stream".
For the same reason, units and/or modules that are not described in detail in the electronic device do not imply the absence of the corresponding units and/or modules; all operations performed by the electronic device may be implemented by the corresponding units and/or modules in the electronic device.
With continuing reference to fig. 3, fig. 3 is a flow chart illustrating an alternative implementation of the image display method according to the embodiment of the present disclosure. For example, when the processing device 110 loads the program in the Read Only Memory (ROM) 120 or the program in the storage device 180 into the Random Access Memory (RAM), the image display method shown in fig. 3 can be implemented when the program is executed. The steps shown in fig. 3 are explained below.
In the following description, reference is made to "some embodiments", which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or a different subset of all possible embodiments, and may be combined with each other without conflict.
S101, when an image is collected, obtaining an original image frame from a collected image stream.
The image display method provided by the embodiment of the disclosure is suitable for scenarios in which target objects such as human faces and human hands are recognized during video or image shooting, and a special effect simulating a stereoscopic effect is produced.
In the embodiment of the present disclosure, the process of acquiring the image may be that when the electronic device takes a picture or a video, the image display device captures a frame of the acquired image stream to obtain an original image frame.
In the embodiment of the present disclosure, the original image frames obtained by the image display device are original image frames in the acquired image stream without any image processing.
In some embodiments, the original image frame obtained by the image display device may be as shown in FIG. 4.
In the embodiment of the disclosure, the image display device may capture a frame of the acquired image stream each time a predetermined time period is reached, so as to obtain an original image frame.
In some embodiments, the image display device may set a cycle timer of a predetermined duration to acquire a raw image frame from the acquired image stream each time the predetermined duration is reached.
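As an illustrative sketch (not the patent's actual implementation), the cycle-timer capture can be modeled by tracking the next capture deadline over a list of frame timestamps; the function name and timestamp-list interface are hypothetical:

```python
def select_frames(timestamps, interval):
    """Pick the index of the first frame arriving at or after each capture
    deadline, then schedule the next deadline one interval later."""
    selected = []
    next_due = 0.0
    for i, t in enumerate(timestamps):
        if t >= next_due:
            selected.append(i)       # capture this frame as an original image frame
            next_due = t + interval  # restart the cycle timer
    return selected

# Frames arriving every 0.3 s, captured with a 0.5 s predetermined duration:
print(select_frames([0.0, 0.3, 0.6, 0.9, 1.2], 0.5))  # → [0, 2, 4]
```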
And S102, drawing the auxiliary foreground image on the original image frame to obtain an intermediate image frame with the auxiliary foreground image.
In the embodiment of the present disclosure, after the original image frame is acquired, the image display device draws an auxiliary foreground image on the original image frame, so as to obtain an intermediate image frame with the auxiliary foreground image.
In the embodiment of the disclosure, the auxiliary foreground image can be a foreground sticker for embodying an autostereoscopic effect. The foreground sticker can be a static or dynamic image, displayed on the upper layer of the original image frame in sticker form to accompany the target information serving as the main body; as an additional image, it is combined with the collected image to embody different decoration and depth-of-field effects.
In some embodiments, the image display device may display the auxiliary foreground sticker in the shape of a straight line. As shown in fig. 5, the two parallel vertical lines 500-1 and 500-2 displayed on the original image frame are straight-line auxiliary foreground stickers, and the image display device takes 500-1 and 500-2 as the auxiliary foreground images.
It should be noted that, in the embodiment of the present disclosure, the finally displayed image on the display screen of the electronic device is displayed after synthesizing the contents of the multiple layers, where each layer may include different contents, and the contents included in the multiple layers are synthesized according to a predetermined display area and a predetermined hierarchical order to form the finally displayed image on the screen.
In the embodiment of the disclosure, the image display device draws the auxiliary foreground image on a layer above the original image frame, thereby obtaining two image layers: the auxiliary foreground image on the upper layer and the original image frame on the lower layer. The image display device synthesizes the two image layers to obtain the intermediate image frame with the auxiliary foreground image.
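The two-layer synthesis described above can be sketched as alpha-over compositing. This pure-Python model (pixel tuples, alpha in 0..1) is an assumption for illustration, not the patent's actual rendering pipeline:

```python
def composite_over(foreground, background):
    """Composite an RGBA foreground layer (alpha 0..1) over an RGB background,
    producing the intermediate image frame."""
    out = []
    for fg_row, bg_row in zip(foreground, background):
        row = []
        for (fr, fg, fb, fa), (br, bg, bb) in zip(fg_row, bg_row):
            row.append((round(fr * fa + br * (1 - fa)),
                        round(fg * fa + bg * (1 - fa)),
                        round(fb * fa + bb * (1 - fa))))
        out.append(row)
    return out

# A 1x2 frame: an opaque white "sticker" pixel and a fully transparent one.
sticker = [[(255, 255, 255, 1.0), (0, 0, 0, 0.0)]]
frame = [[(10, 20, 30), (10, 20, 30)]]
print(composite_over(sticker, frame))  # → [[(255, 255, 255), (10, 20, 30)]]
```

Opaque sticker pixels hide the frame beneath; transparent pixels let it show through, which is exactly the occlusion relationship the later steps manipulate.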
S103, if the original image frame contains target information, calculating the proportion of the target information occupying a display screen, wherein the target information is an image to be subjected to three-dimensional effect display.
In the embodiment of the disclosure, after the image display device obtains the original image frame and the intermediate image frame, the image display device first performs target information detection on the original image frame, and if it is detected that the original image frame includes the target information, calculates a ratio of the target information occupying a display screen.
In the embodiment of the present disclosure, the target information is an image to be subjected to stereoscopic effect display. The original image frame contains a great deal of image information; the image display device performs stereoscopic effect display only on the target information, and the other image information is not subjected to stereoscopic display processing.
In some embodiments, the original image frame acquired by the image display device includes images of a human face and a scene. When the image display device needs to display the human face with a stereoscopic effect, the target information is the human face; the image display device displays the human face with the stereoscopic effect, while the scene images in the original image frame are retained as they are and are not displayed with the stereoscopic effect.
In some embodiments, the image display device may determine whether the original image frame includes the target information through a corresponding target information detection algorithm. For example, when the target information is a human face, the image display device may determine whether the original image frame includes a human face through a face detection algorithm; when the target information is a human hand, the image display device may determine whether the original image frame includes a left hand or a right hand through a hand detection algorithm.
In the embodiment of the disclosure, after the image display device detects that the original image frame contains the target information, the image display device calculates the proportion of the target information occupying the display screen of the electronic device.
In some embodiments, the image display device may use the target information detection algorithm used in the target information detection to obtain a rectangular region corresponding to the regularized image boundary of the target information.
In some embodiments, when the target information is a human face, the image display device identifies whether the original image frame includes a human face through a face detection algorithm; at the same time, the face detection algorithm may also be used to calculate a face frame after the boundary of the identified face image is regularized. As shown in fig. 6, the dotted-line rectangular frame 600 in fig. 6 is the rectangular region corresponding to the regularized image boundary of the target information, i.e., the human face.
In some embodiments, after the image display apparatus obtains the rectangular area corresponding to the target information, the image display apparatus calculates the ratio of the rectangular area corresponding to the target information to the display screen. For example, the image display apparatus may calculate the ratio between the maximum of the length and the width of the rectangular area corresponding to the target information and the corresponding length or width of the display screen, so as to obtain the calculation result of the ratio of the target rectangular area to the display screen.
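One plausible reading of this ratio, sketched in Python (the function and parameter names are hypothetical, not from the patent):

```python
def occupancy_ratio(rect_w, rect_h, screen_w, screen_h):
    """Compare the larger side of the target's bounding rectangle
    against the corresponding side of the display screen."""
    if rect_w >= rect_h:
        return rect_w / screen_w
    return rect_h / screen_h

# A 540x400 face frame on a 1080x1920 portrait screen: the width dominates.
print(occupancy_ratio(540, 400, 1080, 1920))  # → 0.5
```

With a first preset threshold of 0.5, as the text later suggests, this example would just trigger the target-area segmentation of S104.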
In the embodiment of the present disclosure, the image display device may also calculate the ratio of the target information occupying the display screen by other methods, which are specifically selected according to the actual situation; the embodiment of the present disclosure is not limited thereto.
And S104, when the calculation result of the proportion is greater than or equal to a first preset threshold value, dividing the area occupied by the target information from the intermediate image frame, and taking the area occupied by the target information as the target area.
In the embodiment of the present disclosure, when the ratio calculated by the image display apparatus is greater than or equal to the first preset threshold, the image display apparatus divides the area occupied by the target information from the intermediate image frame, and takes the area occupied by the target information as the target area.
In the embodiment of the present disclosure, the image display apparatus may infer whether the target information is blocked by the auxiliary foreground image through the first preset threshold. When the calculation result of the proportion of the target information occupying the display screen is greater than or equal to the first preset threshold, it indicates that the target information is already large in the screen and is likely to be blocked by the auxiliary foreground image, so the image display apparatus may perform the subsequent image processing to present the stereoscopic effect.
In some embodiments, the first preset threshold may be 0.5, or may be another value, specifically selected according to the actual situation; the embodiments of the present disclosure are not limited thereto.
In the embodiment of the present disclosure, the image display apparatus divides the area occupied by the target information from the intermediate image frame, and taking the area occupied by the target information as the target area may include S1041 to S1042 as follows:
s1041, dividing the intermediate image frame by using an image segmentation detection algorithm to obtain a target information occupying area, wherein an image in the target information occupying area comprises target information and an auxiliary foreground image in the target information occupying area.
In the embodiment of the disclosure, the image display device divides the intermediate image frame by using an image segmentation detection algorithm to obtain the target information occupying area.
In the embodiment of the present disclosure, the target information occupying region is an image portion within a range included in a boundary of the target information in the intermediate image frame.
In the embodiment of the disclosure, when the target information is a human face, the image display apparatus may use a head segmentation headseg algorithm in the neural network model to segment a region occupied by the head from the intermediate image frame, so as to obtain a region occupied by the target information. When the target information is a human hand, the image display device may also use an image segmentation detection algorithm of the human hand to respectively divide the areas occupied by the left hand and the right hand.
In some embodiments, the original image frame is shown in fig. 4, the intermediate image frame is shown in fig. 5, and the target information is the face information in the original image frame. The image display device divides the area occupied by the face information from the intermediate image frame, and the obtained face-information occupied area can be shown in fig. 7, where the dotted-line part 700 in fig. 7 is the face-information occupied area divided from the intermediate image frame by the image display device. As can be seen from fig. 7, the image display device segments the image, taking the face in fig. 6 as the boundary, into the face occupied area, and the face occupied area 700 divided by the image display device includes both the face and the portion of the straight-line auxiliary foreground image that falls within the face region.
S1042, blocking the occupied area of the target information, retaining the images of the remaining areas in the intermediate image frame, and taking the blocked occupied area of the target information as the target area.
In the embodiment of the present disclosure, after the image display device divides the target information occupied area from the intermediate image frame, the image layer mask corresponding to the target information occupied area is obtained, and the image layer mask blocks the target information occupied area, and performs the transparency processing on other areas.
In the embodiment of the present disclosure, the image display device takes the blocked occupied area of the target information as the target area.
In some embodiments, the original image frame is shown in fig. 4, the intermediate image frame is shown in fig. 5, the target information is the face information in the original image frame, and the target information occupied area is shown in fig. 7. The target area obtained by the image display device blocking the target information occupied area can be shown in fig. 8, where the slashed area 800 in fig. 8 is the blocked target information occupied area, i.e., the target area. As can be seen from fig. 8, the image display device blocks the area 700 of fig. 7, and the image of the remaining part of fig. 7 outside 700 is retained.
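The blocking step can be sketched as applying a boolean layer mask: pixels inside the target information occupied area are blocked (modeled here as None), and the rest of the intermediate frame is kept. The representation is an assumption for illustration:

```python
def block_target_area(intermediate, mask):
    """mask[y][x] is True inside the target information occupied area;
    blocked pixels become None, all other pixels are kept."""
    return [[None if mask[y][x] else px
             for x, px in enumerate(row)]
            for y, row in enumerate(intermediate)]

frame = [[(1, 1, 1), (2, 2, 2)],
         [(3, 3, 3), (4, 4, 4)]]
mask = [[True, False],
        [False, True]]
print(block_target_area(frame, mask))
# → [[None, (2, 2, 2)], [(3, 3, 3), None]]
```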
It will be appreciated that the image display device blocks the target information occupied area, thereby blocking both the target information and the auxiliary foreground image within that area, while the images of the other areas in the intermediate image frame outside the target area can still be displayed normally.
And S105, filling a target area by adopting target information in the original image frame to obtain a stereoscopic effect image to be displayed.
In the embodiment of the disclosure, after the image display device obtains the target area, the target area is filled with the target information in the original image frame, so as to obtain the stereoscopic image to be displayed.
In the embodiment of the disclosure, the original image frame is displayed on the lower layer, the intermediate image frame is displayed on the upper layer of the original image frame, and the coordinates of the same pixel point in the original image frame and the intermediate image frame are aligned. Therefore, after the image display device obtains the target area from the intermediate image frame, the image display device can correspondingly obtain the area of the original image frame that matches the target area, and the image in that area is the target information in the original image frame.
In the embodiment of the present disclosure, the image display device may invoke the layer rendering function of the display system of the electronic device through the general OpenGL ES (OpenGL for Embedded Systems) interface, and extract the target information corresponding to the target area in the original image frame to the top of the rendering chain; that is, the target information image originally displayed in the lower layer is extracted to the uppermost layer for drawing, so that the blocked target area is filled with the image of the target information. The image display device then synthesizes the images of all layers to obtain the stereoscopic image to be displayed.
In the embodiment of the present disclosure, an OpenGL ES (OpenGL for Embedded Systems) interface is a cross-language and cross-platform application programming interface for rendering 2D and 3D vector graphics on an Embedded platform.
In some embodiments, the process of the image display device obtaining the face image portion 900 in the original image frame through the target area 800 in the intermediate image frame may be as shown in fig. 9. Since the intermediate image frame is obtained by drawing the auxiliary foreground image on the basis of the original image frame, the two image layers of the intermediate image frame and the original image frame are the same in size and are displayed in a stacked manner; the image display device may therefore obtain the face image portion 900 in the original image frame by aligning the coordinates of each pixel point in the target area 800 of the intermediate image frame with the coordinates of the pixel points in the original image frame. After the image display device fills the target area 800 in fig. 8 with the face image portion 900 of the original image frame, the stereoscopic image to be displayed shown in fig. 10 is obtained. It can be seen from fig. 10 that the face image 900 of the original image frame occludes the auxiliary foreground images 500-1 and 500-2, while other portions of the body of the human figure are still blocked by the auxiliary foreground images 500-1 and 500-2, thereby showing the stereoscopic effect that the face 900 protrudes from the auxiliary foreground images 500-1 and 500-2.
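Because the two layers are the same size and pixel-aligned, the fill of S105 can be sketched as a per-pixel select: inside the target area, take the original-frame pixel; elsewhere, keep the intermediate frame. This pure-Python model is an assumption for illustration:

```python
def fill_target_area(intermediate, original, mask):
    """mask[y][x] is True inside the (blocked) target area; those pixels
    are filled from the original image frame at the same coordinates."""
    return [[original[y][x] if mask[y][x] else intermediate[y][x]
             for x in range(len(row))]
            for y, row in enumerate(intermediate)]

# "sticker" pixels cover the face in the intermediate frame; the mask
# restores the face from the original frame, leaving the scene untouched.
intermediate = [["sticker", "scene"]]
original = [["face", "scene"]]
mask = [[True, False]]
print(fill_target_area(intermediate, original, mask))  # → [['face', 'scene']]
```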
The image display device can simulate the stereoscopic sense of space through the shielding relation formed by the target information and the auxiliary foreground image in the finally obtained stereoscopic image to be displayed, and realizes the special effect of shooting the image with the naked eye stereoscopic effect by using the monocular camera.
In some embodiments, as shown in fig. 11, when the target information is the human hand 110, the image display apparatus may also obtain, by the above-described image display method, a stereoscopic image in which the human hand 110 stretches out past the auxiliary foreground images 111_1 and 111_2.
In some embodiments of the present disclosure, the image display device may also display an auxiliary foreground image with circular-ring lines.
In this embodiment of the disclosure, when the image display device displays the auxiliary foreground image in the shape of a circular ring, in order to reflect the effect that the target information passes through the ring, that is, the target information passes through only the upper half of the ring and not the lower half, S105 may also be:
and S105, after the target area is filled into the image of the corresponding area in the original image frame, drawing the lower half part of the auxiliary foreground image in the shape of a circular ring again to obtain the image with the stereoscopic effect to be displayed.
In the embodiment of the present disclosure, when the image display device displays the auxiliary foreground image with circular-ring lines, after the image display device fills the target area with the image of the corresponding area in the original image frame, only the lower half of the ring-shaped auxiliary foreground image is drawn again on the uppermost of all the image layers, so as to obtain the stereoscopic image to be displayed; the stereoscopic image to be displayed can thus achieve the stereoscopic effect that the head in the image passes through the upper half of the ring.
In some embodiments, when the target information is a face 1121 and a human hand 1122, and the auxiliary foreground image is a ring 1120, the stereoscopic image to be displayed corresponding to the ring-shaped auxiliary foreground image obtained by the image display device may be as shown in fig. 12. After the image display device fills the target regions corresponding to the face and the human hand, the lower half 1120_1 of the ring-shaped auxiliary foreground image is drawn again; in the obtained stereoscopic image to be displayed, the face 1121 blocks the upper half of the ring, and the human hand is blocked by the redrawn lower half ring 1120_1, thereby showing the stereoscopic effect that the face 1121 protrudes from the auxiliary foreground image ring 1120.
It can be understood that, when the image display device displays the auxiliary foreground image with circular-ring lines, since the lower half of the ring is additionally drawn, the finally obtained stereoscopic image to be displayed can embody the stereoscopic effect that the target information in the upper half of the picture passes through the ring.
And S106, displaying the stereoscopic effect image.
In the embodiment of the disclosure, after the image display device obtains the stereoscopic image to be displayed, the stereoscopic image to be displayed is sent to the display screen of the electronic device for displaying, and the stereoscopic effect of the image is finally presented.
It should be noted that, in the embodiment of the present disclosure, S101 to S106 are a method for processing a single original image frame. When the image display device processes an input video stream, the image display device may perform the processing of S101 to S106 on each original image frame acquired from the video stream, so as to finally obtain a dynamic effect in which the blocking relationship between the target information and the auxiliary foreground image changes over the entire video stream, realizing a naked-eye stereoscopic effect for the video.
In the embodiment of the present disclosure, after S103, S107 may be further included, as follows:
and S107, when the calculation result of the proportion is less than th preset threshold value, not displaying the stereoscopic effect image.
In the embodiment of the present disclosure, when the calculation result of the ratio of the target information occupying the screen is smaller than the first preset threshold, it indicates that the image of the target information in the screen is small and the target information is far away from the auxiliary foreground image, so stereoscopic effect display is not required, and the method of S104 and the following steps is not performed.
In the embodiment of the present disclosure, before S101, S108-S109 may be further included, as follows:
and S108, detecting whether the stereoscopic effect switch is turned on or not, wherein the stereoscopic effect switch is a switch for controlling whether the stereoscopic effect of the image is turned on or not.
And S109, when the stereoscopic effect switch is turned on, executing the image display method.
In the embodiment of the present disclosure, the image display method may be controlled by a general stereoscopic effect switch, and the image display method in the embodiment of the present disclosure is performed when the stereoscopic effect switch is turned on, and is not performed when the stereoscopic effect switch is turned off.
With continuing reference to fig. 13, fig. 13 is a flow chart illustrating an alternative implementation of the image display method of the embodiment of the present disclosure; after S103, a further method is provided, including S201 to S202, as follows:
S201, when the calculation result of the proportion is greater than or equal to a second preset threshold and smaller than the first preset threshold, performing regularization processing on the target area to obtain a frame body corresponding to the target area.
In the embodiment of the present disclosure, when the calculation result of the ratio is greater than or equal to the second preset threshold and less than the first preset threshold, the image display device performs regularization processing on the target region, regularizing the irregular curved boundary of the target region, so as to obtain a regular frame corresponding to the target region.
In the embodiment of the present disclosure, the second preset threshold is a value greater than 0 and less than the first preset threshold. In some embodiments, when the first preset threshold is set to 0.5, the second preset threshold may be set to 0.3, specifically selected according to the actual situation; the embodiment of the present disclosure is not limited thereto.
In some embodiments, the image display apparatus may use the rectangular area corresponding to the target information obtained by the target information detection algorithm in S103 as the frame corresponding to the target area.
In the embodiment of the present disclosure, the frame body corresponding to the target area may also be a frame body with other shapes, which is specifically selected according to the actual situation, and the embodiment of the present disclosure is not limited.
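The regularization of an irregular boundary into a frame can be sketched as taking the axis-aligned bounding rectangle of the region's boundary points (the point-list interface is hypothetical, for illustration only):

```python
def bounding_frame(boundary_points):
    """Regularize an irregular curved boundary into the rectangular frame
    (x_min, y_min, x_max, y_max) that encloses it."""
    xs = [x for x, _ in boundary_points]
    ys = [y for _, y in boundary_points]
    return (min(xs), min(ys), max(xs), max(ys))

print(bounding_frame([(1, 5), (3, 2), (2, 9)]))  # → (1, 2, 3, 9)
```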
S202, according to the position relation between the frame body corresponding to the target area and the auxiliary foreground image, the auxiliary foreground image is distorted, and the distorted auxiliary foreground image is obtained.
In the embodiment of the disclosure, after the image display device obtains the frame corresponding to the target area, the lines of the auxiliary foreground image are distorted in different directions and in different degrees according to the position relationship between the frame corresponding to the target area and the auxiliary foreground image, so as to obtain the distorted auxiliary foreground image, and thus, the interactive effect between the target information and the auxiliary foreground image is reflected.
In the embodiment of the disclosure, in order to embody the interactive special effect between the target information and the auxiliary foreground image, the image display device may distort the auxiliary foreground image in the direction opposite to the position of the target information when the target information is close to the auxiliary foreground image and coincides with the auxiliary foreground image, so as to embody the special effect that the auxiliary foreground image is deformed and distorted under the extrusion of the target information.
In this embodiment of the present disclosure, in order to obtain the distorted auxiliary foreground image, the image display device needs to calculate, according to the position relationship between the frame corresponding to the target region and the auxiliary foreground image, a new coordinate value of each pixel point in the auxiliary foreground image under the influence of the frame corresponding to the target region in the texture coordinate system, and obtain the distorted auxiliary foreground image according to the new coordinate value.
In the embodiment of the present disclosure, the image display device distorts the auxiliary foreground image according to the position relationship between the frame corresponding to the target area and the auxiliary foreground image, and obtaining the distorted auxiliary foreground image may include S2021 to S2023, as follows:
S2021, for each pixel point in the auxiliary foreground image, calculating the product of the length-to-width ratio of the frame corresponding to the target area and the ordinate of the pixel point, and taking the sum of the square of the product and the square of the abscissa of the pixel point as a first calculation result.
In the embodiment of the disclosure, the image display device first obtains the length and the width of the frame body according to the frame body corresponding to the target area, and then, for each pixel point in the auxiliary foreground image, the image display device can obtain the abscissa and the ordinate of each pixel point in the texture space coordinate system.
The image display device multiplies this ratio by the ordinate of each pixel, and takes the sum of the square of the product and the square of the abscissa of each pixel as the first calculation result.
In some embodiments, the image display device may take the calculation result of equation 1 as the first calculation result:
x^2 + (y * rectWidth / rectHeight)^2 (1)
where x is the abscissa of each pixel in the auxiliary foreground image, y is the ordinate of each pixel in the auxiliary foreground image, rectWidth is the width of the frame corresponding to the target area, and rectHeight is the length of the frame corresponding to the target area.
S2022, calculating twice the square of the width of the frame as a second calculation result.
In the embodiment of the present disclosure, the image display device calculates the square of the width of the frame corresponding to the target area, and then multiplies it by two to obtain the second calculation result.
In some embodiments, the image display device may obtain the second calculation result according to equation 2:
2 * (rectWidth^2) (2)
And S2023, when the first calculation result is smaller than the second calculation result, transforming the coordinates of each pixel point in the texture coordinate space to obtain the distorted auxiliary foreground image.
In the embodiment of the present disclosure, when the first calculation result is smaller than the second calculation result, it indicates that the position of the frame corresponding to the target area partially coincides with the position of the auxiliary foreground image; at this time, the auxiliary foreground image may be distorted to show the effect that the target information and the auxiliary foreground image interact with each other.
In some embodiments, when equation 3 is satisfied, that is, when the condition in the if statement is determined to be true, the image display device transforms the coordinates of each pixel point in the texture coordinate space to obtain the warped auxiliary foreground image:
if (x^2 + (y * rectWidth / rectHeight)^2 < 2 * (rectWidth^2)) (3)
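Equations (1) through (3) translate directly into a per-pixel test. This sketch assumes the pixel coordinates are expressed in the texture coordinate space described in the patent; the function name is hypothetical:

```python
def should_warp(x, y, rect_width, rect_height):
    """Return True when this pixel of the auxiliary foreground image
    should be warped by the frame corresponding to the target area."""
    first = x ** 2 + (y * rect_width / rect_height) ** 2  # equation (1)
    second = 2 * rect_width ** 2                          # equation (2)
    return first < second                                 # equation (3)

# A pixel near the frame is warped; a distant pixel is not.
print(should_warp(0.1, 0.1, 1.0, 1.0))  # → True  (0.02 < 2.0)
print(should_warp(2.0, 0.0, 1.0, 1.0))  # → False (4.0 >= 2.0)
```

Only the pixels passing this test are moved to new coordinates (S301 to S304 below); the rest of the auxiliary foreground image stays in place.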
It can be understood that, from the coordinates of each pixel point in the auxiliary foreground image and the length and width of the frame corresponding to the target area, the image display device calculates which parts of the auxiliary foreground image are close enough to the target information to coincide with it, that is, which parts the target information squeezes. The image display device distorts the part of the auxiliary foreground image squeezed by the target information, while the rest of the auxiliary foreground image is not distorted, so that the special effect that the corresponding part of the auxiliary foreground image deforms under the squeezing of the target information can be embodied.
In this embodiment of the present disclosure, the transformation performed by the image display device in S2023, in which the coordinates of each pixel point are transformed in the texture coordinate space to obtain the distorted auxiliary foreground image, may further include S301 to S304, as follows:
S301, obtaining the center coordinate of the frame corresponding to the target area, wherein the center coordinate is the coordinate of the center point.
In the embodiment of the present disclosure, after obtaining the frame corresponding to the target area, the image display device may correspondingly obtain the coordinate of the central point of the frame corresponding to the target area as the central coordinate.
In the embodiment of the present disclosure, since the frame corresponding to the target region obtained by the image display device has a regular shape after normalization, the image display device can obtain the center point of that frame and take the coordinates of the center point in the texture coordinate system as the center coordinates.
In some embodiments, when the frame corresponding to the target area is rectangular, the image display device may obtain the center point of the rectangle and use the coordinates corresponding to the center point of the rectangle as the center coordinates.
S302, calculating a distortion coefficient of the auxiliary foreground image according to the proportion of the target information occupying the display screen, wherein the distortion coefficient represents the distortion degree of the auxiliary foreground image.
In the embodiment of the present disclosure, the image display device may further calculate a distortion coefficient of the auxiliary foreground image according to a ratio of the target information occupying the display screen.
In the embodiment of the present disclosure, the image display apparatus may calculate the distortion coefficient of the auxiliary foreground image according to the proportion of the display screen occupied by the target information obtained in step S103.
In some embodiments, the distortion coefficient of the auxiliary foreground image can be denoted by i. When i is 0, the auxiliary foreground image is not distorted; when the proportion of the screen occupied by the frame corresponding to the target area lies between the second preset threshold and the first preset threshold, the value of i is mapped to the range 0 to 1.
In the embodiment of the present disclosure, for auxiliary foreground images with different shapes, different distortion-coefficient algorithms exist among OpenGL graphics algorithms; the specific algorithm is selected according to the actual situation, and the embodiment of the present disclosure does not limit it.
By calculating the distortion coefficient, the effect can be achieved that the closer the frame corresponding to the target region is to the auxiliary foreground image, the greater the degree of distortion of the auxiliary foreground image.
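The text does not give the mapping formula, so the following is a hypothetical sketch under the assumption of a simple linear mapping: the coefficient i rises from 0 to 1 as the screen proportion of the frame moves from the second preset threshold t2 toward the first preset threshold t1. The function and parameter names are illustrative.

```python
def distortion_coefficient(ratio, t1, t2):
    """Map the screen proportion of the target-information frame to a
    distortion coefficient i in [0, 1].

    ratio : proportion of the display screen occupied by the frame
    t1    : first preset threshold (upper bound of the warp range)
    t2    : second preset threshold (lower bound of the warp range)
    """
    if ratio <= t2:
        return 0.0  # target too small or too far away: no distortion
    if ratio >= t1:
        return 1.0  # at or past the segmentation threshold
    # linear interpolation between the two thresholds (an assumption)
    return (ratio - t2) / (t1 - t2)
```

This reproduces the behavior described above: the larger the frame's share of the screen within the threshold range, the stronger the distortion.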
And S303, calculating a new distorted coordinate of each pixel point according to the central coordinate, the first calculation result, the second calculation result and the distortion coefficient.
In the embodiment of the present disclosure, the image display apparatus may calculate the new coordinates of each pixel point after distortion, according to equation (4), from the central coordinate obtained in S301, the first calculation result obtained in S2021, the second calculation result obtained in S2022, and the distortion coefficient obtained in S302, as follows:
where u and v are the new coordinates of a pixel in the auxiliary foreground image after its original coordinates x and y are distorted, and c is the center coordinate of the rectangular frame of the target information.
In the embodiment of the present disclosure, for each pixel point in the auxiliary foreground image that is determined in S2023 to need distortion, the image display device may calculate, according to equation (4), the new coordinates of that pixel point in the texture coordinate system. The new coordinates are the coordinates of the pixel point after the auxiliary foreground image approaches and overlaps the target information, transformed in different directions and to different degrees according to the relative position of, and proximity to, the target information.
It can be understood that, by calculating the new coordinates of each distorted pixel point, the image display device obtains the position of the auxiliary foreground image after the deformation produced by the target information approaching and squeezing it, which embodies the interaction effect between the auxiliary foreground image and the target information.
And S304, obtaining a distorted auxiliary foreground image according to the distorted new coordinates of each pixel point.
In the embodiment of the present disclosure, after the image display device obtains the new distorted coordinates of each pixel point, the position of the corresponding portion of the auxiliary foreground image is changed accordingly, so as to obtain the distorted auxiliary foreground image.
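Equation (4) is not reproduced in this text, so the following is only a hypothetical sketch of the kind of transform S303 and S304 describe: pixels flagged by the overlap test are pushed away from the frame center c, with a displacement that scales with the distortion coefficient i and vanishes when i is 0. The falloff term and the function name are assumptions, not the patented formula.

```python
def warp_pixel(x, y, cx, cy, first_result, second_result, i):
    """Compute the new texture coordinates (u, v) of one pixel.

    (cx, cy)      : center coordinate c of the target-information frame
    first_result  : equation (1) value for this pixel
    second_result : equation (2) value (the squeeze threshold)
    i             : distortion coefficient; 0 means no distortion
    """
    if i == 0 or first_result >= second_result:
        return x, y  # outside the squeeze region, or effect disabled
    # push the pixel away from the center; the displacement fades to
    # zero as the pixel approaches the edge of the squeeze region
    strength = i * (1.0 - first_result / second_result)
    u = x + (x - cx) * strength
    v = y + (y - cy) * strength
    return u, v
```

Consistent with the description, only squeezed pixels move, and the movement grows as the frame's screen proportion (and hence i) grows.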
In some embodiments, as shown in fig. 14, when the target information is the human face 140 and the auxiliary foreground images are the horizontal-line-shaped auxiliary foreground images 141_1 and 141_2, the position of the human face 140 squeezes the auxiliary foreground image 141_2, so the image display device distorts the auxiliary foreground image at the squeezed position 141_2_1, resulting in the distorted special effect shown in fig. 14.
In some embodiments, as shown in fig. 15, when the target information is the face 150 and the auxiliary foreground images are the oblique-line-shaped auxiliary foreground images 151_1 and 151_2, the position of the face 150 squeezes the auxiliary foreground image 151_1, so the image display device distorts the auxiliary foreground image at the squeezed position 151_1_1, resulting in the distorted special effect shown in fig. 15.
In some embodiments, as shown in fig. 16, when the target information is a human face 160 and the auxiliary foreground image is the circular auxiliary foreground image 161, the position of the human face 160 squeezes the auxiliary foreground image 161, so the image display device distorts the auxiliary foreground image at the squeezed position 161_1, resulting in the distorted special effect shown in fig. 16.
It can be understood that when the proportion of the screen occupied by the target information is greater than or equal to the second preset threshold and less than the first preset threshold, the image display device distorts the auxiliary foreground image according to the position of the target information, thereby presenting the interactive special effect between the target information and the auxiliary foreground image and enhancing the stereoscopic effect of the image.
It should be noted that the image display method in the embodiment of the present disclosure processes one original image frame at a time. For a continuously captured video stream, the embodiment of the present disclosure may apply the same image display method to each acquired original image frame, so as to finally obtain a stereoscopic effect of the target information throughout the dynamic video or image stream.
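The per-frame decision flow of the method (compare claims 1, 2 and 7) can be sketched as follows. The threshold names t1 and t2 and the branch labels are illustrative, not from the original text.

```python
def choose_effect(ratio, t1, t2):
    """Pick the processing branch for one original image frame.

    ratio : proportion of the display screen occupied by the target information
    t1    : first preset threshold
    t2    : second preset threshold (t2 < t1)
    """
    if ratio >= t1:
        # divide the target area and fill it from the original frame
        return "segment-and-fill"
    if ratio >= t2:
        # warp the auxiliary foreground image around the target frame
        return "warp-foreground"
    # below the second threshold: no stereoscopic effect is shown
    return "no-effect"


def process_stream(ratios, t1, t2):
    """Apply the same per-frame decision to every frame of a video stream."""
    return [choose_effect(r, t1, t2) for r in ratios]
```

Applying the same decision to every captured frame yields the continuous stereoscopic effect described above for the whole video or image stream.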
The above description covers only examples of the present disclosure and illustrates the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combinations of features described above, but also encompasses other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features disclosed in this disclosure that have similar functions.
Similarly, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the disclosure.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (12)

1. A method for displaying an image, comprising:
acquiring an original image frame from an acquired image stream when acquiring an image;
drawing an auxiliary foreground image on the original image frame to obtain an intermediate image frame with the auxiliary foreground image;
if the original image frame contains target information, calculating the proportion of the target information occupying a display screen, wherein the target information is an image to be subjected to three-dimensional effect display;
when the calculation result of the proportion is greater than or equal to a first preset threshold, dividing an area occupied by the target information from the intermediate image frame, and taking the area occupied by the target information as a target area;
filling the target area with the target information in the original image frame to obtain a stereoscopic image to be displayed;
and displaying the stereoscopic effect image.
2. The method of claim 1, wherein after calculating the proportion of the target information occupying the display screen, the method further comprises:
when the calculation result of the proportion is greater than or equal to a second preset threshold and smaller than the first preset threshold, performing reduction processing on the target area to obtain a frame corresponding to the target area;
and according to the position relation between the frame body corresponding to the target area and the auxiliary foreground image, distorting the auxiliary foreground image to obtain the distorted auxiliary foreground image.
3. The method according to claim 2, wherein the warping the auxiliary foreground image according to a position relationship between a frame corresponding to the target region and the auxiliary foreground image to obtain the warped auxiliary foreground image comprises:
for each pixel point in the auxiliary foreground image, calculating the product of the length-to-width ratio of the frame corresponding to the target area and the ordinate of the pixel point, and taking the sum of the square of the product and the square of the abscissa of the pixel point as a first calculation result;
calculating a value of twice the square of the width as a second calculation result;
and when the first calculation result is smaller than the second calculation result, transforming the coordinates of each pixel point in a texture coordinate space to obtain the distorted auxiliary foreground image.
4. The method according to claim 3, wherein transforming the coordinates of each pixel point in a texture coordinate space to obtain the warped auxiliary foreground image comprises:
acquiring a central coordinate of a frame corresponding to the target area, wherein the central coordinate is a coordinate of a central point;
calculating a distortion coefficient of the auxiliary foreground image according to the proportion of the target information occupying a display screen, wherein the distortion coefficient represents the distortion degree of the auxiliary foreground image;
calculating a new distorted coordinate of each pixel point according to the central coordinate, the first calculation result, the second calculation result and the distortion coefficient;
and obtaining the distorted auxiliary foreground image according to the distorted new coordinates of each pixel point.
5. The method according to claim 1, wherein said dividing the area occupied by the target information from the intermediate image frame, with the area occupied by the target information as a target area, comprises:
dividing the intermediate image frame by using an image segmentation detection algorithm to obtain a target information occupied area, wherein an image in the target information occupied area comprises the target information and the auxiliary foreground image in the target information occupied area;
and shielding the target information occupied area, reserving images of other areas in the intermediate image frame, and taking the shielded target information occupied area as the target area.
6. The method according to claim 1, wherein when the auxiliary foreground image is a ring-shaped auxiliary foreground image, the filling the target region with the target information in the original image frame to obtain a stereoscopic image to be displayed comprises:
and after the target area is filled into the image of the corresponding area in the original image frame, drawing the lower half part of the auxiliary foreground image in the ring shape again to obtain the image with the stereoscopic effect to be displayed.
7. The method as claimed in claim 1, wherein if the original image frame contains target information, after calculating the ratio of the target information occupying the display screen, the method further comprises:
and when the calculation result of the proportion is less than the first preset threshold, not displaying the stereoscopic effect image.
8. The method of claim 1, wherein said obtaining raw image frames from an acquired image stream comprises:
and when the preset time length is reached, capturing the acquired image stream to obtain the original image frame.
9. The method of claim 1, wherein prior to said obtaining raw image frames from an acquired image stream, the method further comprises:
detecting whether a stereoscopic effect switch is turned on, wherein the stereoscopic effect switch is used for controlling whether the stereoscopic effect of the image is turned on;
when the stereoscopic effect switch is turned on, the image display method is performed.
10. An image display device, comprising an acquisition unit, a drawing unit, a calculation unit, a dividing unit and a filling unit, wherein:
the acquisition unit is used for acquiring an original image frame from an acquired image stream when an image is acquired;
the drawing unit is used for drawing an auxiliary foreground image on the original image frame to obtain an intermediate image frame with the auxiliary foreground image;
the calculating unit is used for calculating the proportion of the target information occupying a display screen if the original image frame contains the target information, wherein the target information is an image to be subjected to three-dimensional effect display;
the dividing unit is used for dividing an area occupied by the target information from the intermediate image frame when the calculation result of the proportion is greater than or equal to a first preset threshold, and taking the area occupied by the target information as a target area;
the filling unit is used for filling the target area with the target information in the original image frame to obtain a stereoscopic image to be displayed;
the drawing unit is further used for displaying the stereoscopic effect image.
11. An electronic device, comprising:
a memory for storing executable instructions;
a processor, configured to execute the executable instructions stored in the memory to implement the image display method of any of claims 1-9.
12. A storage medium, characterized in that executable instructions are stored therein which, when executed, are adapted to implement the image display method of any one of claims 1 to 9.
CN201910927049.4A 2019-09-27 2019-09-27 Image display method and device, electronic equipment and storage medium Active CN110740309B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910927049.4A CN110740309B (en) 2019-09-27 2019-09-27 Image display method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110740309A true CN110740309A (en) 2020-01-31
CN110740309B CN110740309B (en) 2022-05-03

Family

ID=69268279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910927049.4A Active CN110740309B (en) 2019-09-27 2019-09-27 Image display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110740309B (en)


Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0778541A2 (en) * 1995-11-20 1997-06-11 Hamamatsu Photonics K.K. Individual identification apparatus
CN1248002A (en) * 1998-09-14 2000-03-22 罗德修 Camera capable of taking both common and stereo pictures
US7158654B2 (en) * 1993-11-18 2007-01-02 Digimarc Corporation Image processor and image processing method
US20080166022A1 (en) * 2006-12-29 2008-07-10 Gesturetek, Inc. Manipulation Of Virtual Objects Using Enhanced Interactive System
US8154544B1 (en) * 2007-08-03 2012-04-10 Pixar User specified contact deformations for computer graphics
US20120113223A1 (en) * 2010-11-05 2012-05-10 Microsoft Corporation User Interaction in Augmented Reality
CN103995584A (en) * 2014-04-29 2014-08-20 深圳超多维光电子有限公司 Three-dimensional interactive method, display device, handling rod and system
CN104574484A (en) * 2014-12-31 2015-04-29 北京奇虎科技有限公司 Method and device for generating picture dynamic effect on basis of interaction operation
CN106803286A (en) * 2017-01-17 2017-06-06 湖南优象科技有限公司 Mutual occlusion real-time processing method based on multi-view image
CN107168614A (en) * 2012-03-06 2017-09-15 苹果公司 Application for checking image
US20170293158A1 (en) * 2014-09-23 2017-10-12 Ep Global Communications, Inc. Medical device with pre-defined space and related methods
CN108604121A (en) * 2016-05-10 2018-09-28 谷歌有限责任公司 Both hands object manipulation in virtual reality
CN108965715A (en) * 2018-08-01 2018-12-07 Oppo(重庆)智能科技有限公司 A kind of image processing method, mobile terminal and computer readable storage medium
CN109478344A (en) * 2016-04-22 2019-03-15 交互数字Ce专利控股公司 Method and apparatus for composograph
CN109791703A (en) * 2017-08-22 2019-05-21 腾讯科技(深圳)有限公司 Three dimensional user experience is generated based on two-dimensional medium content
CN110109512A (en) * 2009-11-17 2019-08-09 高通股份有限公司 The system and method for three-dimensional virtual object are controlled on portable computing device


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327124A (en) * 2020-09-30 2022-04-12 优派国际股份有限公司 Touch display device and operation method thereof
US11733797B2 (en) 2020-09-30 2023-08-22 Viewsonic International Corporation Touch display apparatus and operating method thereof
TWI813907B (en) * 2020-09-30 2023-09-01 優派國際股份有限公司 Touch display apparatus and operating method thereof
WO2022130876A1 (en) * 2020-12-18 2022-06-23 ソニーグループ株式会社 Image display device, image display method, and program
CN114173109A (en) * 2022-01-12 2022-03-11 纵深视觉科技(南京)有限责任公司 Watching user tracking method and device, electronic equipment and storage medium
WO2023206282A1 (en) * 2022-04-28 2023-11-02 京东方科技集团股份有限公司 Image display method and system, computer readable storage medium, and electronic device

Also Published As

Publication number Publication date
CN110740309B (en) 2022-05-03

Similar Documents

Publication Publication Date Title
KR102295403B1 (en) Depth estimation method and apparatus, electronic device, program and medium
CN110740309B (en) Image display method and device, electronic equipment and storage medium
CN111242881B (en) Method, device, storage medium and electronic equipment for displaying special effects
CN109672886B (en) Image frame prediction method and device and head display equipment
CN112104854A (en) Method and system for robust virtual view generation between camera views
CN110796664B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN109992226A (en) Image display method and device and spliced display screen
CN111833461B (en) Method and device for realizing special effect of image, electronic equipment and storage medium
CN108833877B (en) Image processing method and device, computer device and readable storage medium
CN104010180B (en) Method and device for filtering three-dimensional video
CN108076208B (en) Display processing method and device and terminal
CN114531553B (en) Method, device, electronic equipment and storage medium for generating special effect video
KR20200138349A (en) Image processing method and apparatus, electronic device, and storage medium
JP5911292B2 (en) Image processing apparatus, imaging apparatus, image processing method, and image processing program
WO2023075493A1 (en) Method and apparatus for three-dimensional scene reconstruction and dense depth map
US10650488B2 (en) Apparatus, method, and computer program code for producing composite image
CN108737852A (en) A kind of method for processing video frequency, terminal, the device with store function
CN117319725A (en) Subtitle display method, device, equipment and medium
CN110870304A (en) Method and apparatus for providing information to a user for viewing multi-view content
CN105530505B (en) 3-D view conversion method and device
CN115937291B (en) Binocular image generation method and device, electronic equipment and storage medium
JP2018041201A (en) Display control program, display control method and information processing device
CN115049572A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN108702465A (en) Method and apparatus for handling image in virtual reality system
JP6103942B2 (en) Image data processing apparatus and image data processing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.
