CN111629242B - Image rendering method, device, system, equipment and storage medium - Google Patents


Info

Publication number
CN111629242B
CN111629242B
Authority
CN
China
Prior art keywords
image
information
target
plane
image display
Prior art date
Legal status
Active
Application number
CN202010461670.9A
Other languages
Chinese (zh)
Other versions
CN111629242A (en)
Inventor
邓朔
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010461670.9A
Publication of CN111629242A
Application granted
Publication of CN111629242B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4122Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/122Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g. 3D video

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Social Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application disclose an image rendering method, device, system, equipment and storage medium. The method includes: acquiring reference information corresponding to each of at least two reference surfaces within the image acquisition range of an image acquisition device, where the image acquisition device is used to acquire images of the viewing area of an image display device; acquiring, while the image display device displays a panoramic image, a target image acquired by the image acquisition device; detecting a reference region of a target object in the target image and determining target region information corresponding to the reference region; determining target position information of the target object relative to the image display device according to the target region information and the reference information corresponding to the at least two reference surfaces; determining an image rendering angle according to the target position information; and rendering the picture at the corresponding viewing angle in the panoramic image based on the image rendering angle. The method improves the experience of users viewing panoramic images through large-screen devices and expands the application of panoramic images on the large-screen device side.

Description

Image rendering method, device, system, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image rendering method, apparatus, system, device, and storage medium.
Background
The 360-degree panoramic image is a three-dimensional panoramic image generated by processing image information of the entire scene captured by a professional camera, or a three-dimensional panoramic image constructed using modeling software. When a user watches the 360-degree panoramic image through the terminal equipment, pictures of all angles in the 360-degree panoramic image can be watched by changing the position of the terminal equipment.
At this stage, the above 360-degree panoramic image is generally only suitable for mobile devices equipped with a gyroscope or other position sensor, such as smart phones, Virtual Reality (VR) glasses, VR helmets, and the like. The mobile device can sense the position change of the mobile device through a gyroscope or a position sensor, and then control the three-dimensional rendering model to render a corresponding picture according to the sensed position change information, for example, when the mobile device senses that the mobile device is rotated downwards by 30 degrees, a camera of the three-dimensional rendering model also rotates by a corresponding angle, and the picture effect after the angle rotation is rendered.
For large-screen devices which are inconvenient to move, such as televisions, desktop computers, projection devices and the like, because the large-screen devices do not have the characteristic that the mobile devices are portable and movable, currently, only the user is supported to manually and interactively adjust the display picture in the 360-degree panoramic image, for example, the picture in the 360-degree panoramic image displayed on the television is adjusted through controls of a touch remote controller, such as up, down, left, right and the like. For users, the viewing mode experience is poor, and the application of the 360-degree panoramic image on a large-screen device side is limited.
Disclosure of Invention
The embodiment of the application provides an image rendering method, an image rendering device, an image rendering system, an image rendering device and a storage medium, which can correspondingly render pictures at different viewing angles in a panoramic image along with the movement of a user, improve the experience of the user in watching the panoramic image through large-screen equipment, and effectively expand the application of the panoramic image at a large-screen equipment end.
In view of the above, an aspect of the present application provides an image rendering method, including:
acquiring reference information corresponding to each of at least two reference surfaces within an image acquisition range corresponding to an image acquisition device, the image acquisition device being used to acquire images of a viewing area of an image display device;
acquiring, while the image display device displays a panoramic image, a target image acquired by the image acquisition device;
detecting a reference region of a target object in the target image, and taking information corresponding to the reference region as target region information;
determining position information of the target object relative to the image display device according to the target region information and the reference information corresponding to each of the at least two reference surfaces, and taking the position information as target position information;
determining an image rendering angle according to the target position information;
and rendering the picture under the corresponding view angle in the panoramic image based on the image rendering angle.
Another aspect of the present application provides an image rendering apparatus, the apparatus including:
the reference information acquisition module is used for acquiring reference information corresponding to each of at least two reference surfaces within an image acquisition range corresponding to the image acquisition device, the image acquisition device being used to acquire images of a viewing area of the image display device;
the target image acquisition module is used for acquiring a target image acquired by the image acquisition device while the image display device displays a panoramic image;
the target information determining module is used for detecting a reference region of the target object in the target image and taking the area corresponding to the reference region as the target region area;
the position information determining module is used for determining position information of the target object relative to the image display device according to the target region area and the reference information corresponding to each of the at least two reference surfaces, and taking the position information as target position information;
the rendering angle determining module is used for determining an image rendering angle according to the target position information;
and the image rendering module is used for rendering the picture in the panoramic image under the corresponding view angle based on the image rendering angle.
Another aspect of the present application provides an image rendering system, the system comprising: an image acquisition device, an image display device, and a processing device;
the image acquisition device is used for acquiring images of a viewing area of the image display device and transmitting the acquired images to the processing device;
the image display device is used for displaying the panoramic image under the control of the processing device;
the processing device is configured to perform the steps of the image rendering method according to the first aspect.
Another aspect of the application provides an apparatus comprising a processor and a memory:
the memory is used for storing a computer program;
the processor is adapted to perform the steps of the image rendering method according to the first aspect as described above, according to the computer program.
In another aspect, a computer-readable storage medium is provided, which is used for storing a computer program for executing the steps of the image rendering method according to the first aspect.
Another aspect of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the image rendering method of the first aspect described above.
According to the technical scheme, the embodiment of the application has the following advantages:
the embodiment of the application provides an image rendering method that offers a new way of rendering panoramic images suited to large-screen devices: the rendering angle of the panoramic image is determined according to the relative positional relationship between the user and the large-screen device. Specifically, in the image rendering method provided in the embodiment of the present application, when the image display device displays the panoramic image, the image acquisition device may be used to acquire a target image corresponding to the viewing area of the image display device; target region information corresponding to a reference region of a target object in the target image is then determined; according to the target region information and the previously obtained reference information corresponding to each of at least two reference surfaces in the image acquisition range of the image acquisition device, the relative positional relationship between the target object and the image display device is determined; an image rendering angle is determined from this relative positional relationship; and the picture at the corresponding viewing angle in the panoramic image is rendered based on the image rendering angle. In this way, the position of the target object relative to the image display device is determined from the target image acquired by the image acquisition device and the reference information corresponding to the multiple reference surfaces within its acquisition range, and the rendering angle of the panoramic image is determined from that position. Pictures at different viewing angles in the panoramic image are thus rendered as the user's position changes, which greatly improves the experience of viewing panoramic images through large-screen devices and expands the application of panoramic images on the large-screen device side.
Drawings
Fig. 1 is a schematic diagram illustrating an operating principle of an image rendering system according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of an image rendering method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an exemplary implementation of determining at least two reference planes provided by an embodiment of the present application;
fig. 4 is a schematic diagram illustrating a principle of determining target location information according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of another principle for determining target location information according to an embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of a first image rendering apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a second image rendering apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a third image rendering apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a fourth image rendering apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a fifth image rendering apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the related art, when a user watches a panoramic image through a large-screen device, the display viewing angle of the panoramic image is often adjusted only in a manual interaction mode, and for the user, the watching mode experience is poor, so that the application of the panoramic image at the large-screen device end is severely limited.
Aiming at the problems in the related art, the embodiment of the application provides an image rendering method that breaks new ground and offers a new panoramic image rendering approach for large-screen devices; the method can sense the relative positional relationship between the user and the image display device while the image display device displays a panoramic image, and adjust the display viewing angle of the panoramic image accordingly based on that relative positional relationship.
Specifically, in the image rendering method provided in the embodiment of the present application, reference information corresponding to each of at least two reference surfaces in the image acquisition range corresponding to the image acquisition device may first be obtained, where the image acquisition device is used to acquire images of the viewing area of the image display device. When the image display device displays a panoramic image, a target image acquired by the image acquisition device is obtained; then a reference region of the target object is detected in the target image, and the information corresponding to the reference region is taken as target region information; next, the position information of the target object in the target image relative to the image display device is determined according to the target region information and the reference information corresponding to the at least two reference surfaces, and this position information is taken as the target position information; finally, an image rendering angle is determined according to the target position information, and the picture at the corresponding viewing angle in the panoramic image is rendered based on the image rendering angle.
Compared with the implementation mode that a user manually and interactively adjusts the display view angle of the panoramic image, the image rendering method provided by the embodiment of the application can determine the relative position relationship between a target object in the target image and the image display device based on the target image acquired by the image acquisition device and the reference information corresponding to each of the multiple reference surfaces in the image acquisition range of the image acquisition device when the image display device displays the panoramic image, and convert the relative position relationship into the corresponding image rendering angle, thereby rendering the picture in the panoramic image at the corresponding view angle; therefore, the effect of correspondingly changing the display view angle of the panoramic image along with the position change of the user can be achieved, the experience of the user in watching the panoramic image through the large-screen equipment is greatly improved, and the application of the panoramic image at the large-screen equipment end is expanded.
In order to facilitate understanding of the image rendering method provided in the embodiment of the present application, the image rendering system provided in the embodiment of the present application is first described below.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an operating principle of an image rendering system according to an embodiment of the present disclosure. As shown in fig. 1, the image rendering system includes: image acquisition device 110, image display device 120, and processing device 130. The image acquisition device 110 is configured to acquire an image of a viewing area of the image display device 120 and transmit the acquired image to the processing device 130; the image display device 120 is used to display a panoramic image under the control of the processing device 130; the processing device 130 is configured to execute the image rendering method provided by the embodiment of the application.
It should be understood that the image rendering system shown in fig. 1 takes the processing device 130 as a server by way of example; the server may specifically be an application server or a Web server, and when actually deployed it may be an independent server, a cluster server, or a cloud server. In practical applications, the processing device 130 may also be a terminal device in which an application program supporting the display of panoramic images, such as a video playing client, is installed, for example a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart television, a projection device, or a Personal Digital Assistant (PDA). Alternatively, the processing device 130 may be integrated in the image display device 120; the present application does not limit the processing device 130.
It should be understood that the image rendering system shown in fig. 1 takes the example that the image capturing device 110 is disposed right above the image display device 120, and in practical applications, the image capturing device 110 may be disposed at any position capable of capturing the viewing area of the image display device 120, and the disposed position of the image capturing device 110 is not limited in this application. Furthermore, the image capturing device 110 may also be an image capturing component, such as a camera, integrated in the image display device 120, without additional deployment.
It should be understood that the image rendering system shown in fig. 1 takes the image display device 120 as an example of an intelligent television, and in practical applications, the image display device 120 may be any terminal device, preferably a large-screen terminal device, such as an intelligent television, a desktop computer, a projection device, and the like, and the application is not limited to the image display device 120 specifically.
In a specific application, the processing device 130 needs to obtain reference information corresponding to each of at least two reference surfaces within the image acquisition range corresponding to the image acquisition device 110. The image acquisition range is the effective viewing angle range of the image acquisition device 110, and generally covers the viewing area of the image display device 120. The at least two reference surfaces within the image acquisition range are planes that are parallel to the image display device 120 and located at different distances from it. The reference information corresponding to each reference surface at least comprises reference region information of the target object when located on that reference surface, and length information of the reference surface, within the image acquisition range, in the direction parallel to the image display device 120. The reference region of the target object may be a face region, a body region, or the like; correspondingly, the reference region information of the target object may be the width and height of the face region, the area of the face region, the width and height of the body region, the area of the body region, and so on. The reference region information in each piece of reference information is generally determined in advance through interaction with the target object.
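By way of illustration only (the patent does not prescribe any data structure), the reference information for each reference surface could be held in a structure such as the following; the field names and numeric values are assumptions for the sketch, with the face region stored as an (x, y, w, h) rectangle as in the later examples.

```python
from dataclasses import dataclass

@dataclass
class ReferenceInfo:
    """Reference information recorded for one reference surface (hypothetical layout)."""
    name: str            # e.g. "near", "base", "far"
    face_box: tuple      # (x, y, w, h) of the target's face region in the reference image
    plane_length_m: float  # length of the surface parallel to the display, in metres

    @property
    def face_area(self) -> int:
        # area of the reference face region, used later when interpolating offsets
        _, _, w, h = self.face_box
        return w * h

# Illustrative values only.
references = {
    "near": ReferenceInfo("near", (300, 180, 220, 260), 1.2),
    "base": ReferenceInfo("base", (340, 200, 140, 170), 2.4),
    "far":  ReferenceInfo("far",  (360, 210,  80, 100), 3.6),
}
```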
When the image display device 120 displays the panoramic image, the image capture device 110 may monitor the viewing area of the image display device 120 in real time and capture the target image for transmission to the processing device 130. After receiving the target image transmitted by the image capturing device 110, the processing device 130 may correspondingly detect a reference region of the target object in the target image, and determine information corresponding to the reference region, such as width and height of the reference region, as target region information; it should be understood that if the reference region information included in the above-described reference information is information determined based on a face region, the processing device 130 should accordingly detect the face region of the target object in the target image and take the information corresponding to the face region as the target region information.
Further, the processing device 130 may determine, according to the determined target area information, the position information of the target object in the target image with respect to the image display device 120 in combination with the reference information corresponding to each of the at least two reference surfaces acquired previously, and use the position information as the target position information, that is, determine the relative position relationship between the target object in the target image and the image display device 120. Then, the relative positional relationship between the target object and the image display device 120 is converted into a corresponding image rendering angle, a rendering instruction is generated based on the image rendering angle, and the rendering instruction is transmitted to the image display device 120, so that the image display device 120 renders and displays a screen at the corresponding angle of view in the panoramic image.
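Purely as an illustrative sketch of the final conversion step (the passage above does not give the conversion formula), one simple, assumed mapping turns the horizontal offset into a yaw rotation of the virtual camera and the depth offset into a change of field of view; the gain values are arbitrary.

```python
def to_render_angles(horizontal_offset: float, depth_offset: float,
                     yaw_gain: float = 40.0, fov_base: float = 70.0,
                     fov_gain: float = 15.0) -> dict:
    """Map the target's offsets to virtual-camera parameters (illustrative only).

    horizontal_offset: signed offset parallel to the display, normalised to [-1, 1]
    depth_offset:      signed offset perpendicular to the display, normalised to [-1, 1]
    """
    return {
        "yaw_deg": yaw_gain * horizontal_offset,        # look left/right as the user moves sideways
        "fov_deg": fov_base - fov_gain * depth_offset,  # narrow the view as the user steps closer
    }
```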
It should be understood that the image rendering system shown in fig. 1 is only an example, and in practical applications, the representations of the image capturing device 110, the image display device 120, and the processing device 130 in the image rendering system are not limited to the representation in fig. 1, and the image rendering system is not limited in any way herein.
The image rendering method provided by the present application is described in detail below by embodiments.
Referring to fig. 2, fig. 2 is a schematic flowchart of an image rendering method according to an embodiment of the present disclosure. For convenience of description, the following embodiments describe a processing apparatus as an example of an execution subject. As shown in fig. 2, the image rendering method includes the steps of:
step 201: and acquiring reference information corresponding to at least two reference surfaces in an image acquisition range corresponding to the image acquisition equipment.
In the image rendering method provided by the embodiment of the application, the processing device needs to determine the rendering angle of the panoramic image based on the relative position relationship between the target object and the image display device, and it is often difficult to determine the relative position relationship between the target object and the image display device only according to the target image acquired by the image acquisition device, so the processing device needs to acquire reference information corresponding to at least two reference surfaces in the image acquisition range corresponding to the image acquisition device in advance.
As introduced above, the image capturing device is used for capturing an image of a viewing area of the image display device, and may be a device additionally disposed independently from the image display device, such as an additionally disposed camera or the like, or may be an image capturing component integrated in the image display device, such as a camera integrated in the image display device. The image capture range corresponding to the image capture device is substantially the effective viewing angle range of the image capture device, and generally covers the viewing area of the image display device.
The at least two reference planes are two planes which are parallel to the image display device (which can be understood as being parallel to the display screen of the image display device) within the image acquisition range and have different distances from the image display device. In practical application, the processing device may define at least two specific planes within the image capturing range as reference planes according to practical requirements.
In some embodiments, the at least two reference planes may include a base plane, a near plane, and a far plane parallel to the image display device; wherein the near plane is a plane determined according to an edge point closest to the image display device within the image capturing range, the far plane is a plane determined according to an edge point farthest from the image display device within the image capturing range, and the base plane lies between the near plane and the far plane.
Specifically, considering that the image capturing range is the range in which the image capturing device can capture an object that can be effectively recognized, the processing device may determine the edge point closest to the image display device within the image capturing range and determine the plane passing through that edge point and parallel to the image display device as the near plane; determine the edge point farthest from the image display device within the image capturing range and determine the plane passing through that edge point and parallel to the image display device as the far plane; and determine any plane between the near plane and the far plane as the base plane, for example the plane located at the midpoint between the near plane and the far plane.
Fig. 3 is a schematic diagram of the implementation principle of determining the base plane, the near plane and the far plane. As shown in (a) of fig. 3, the processing device may determine, within the image capturing range, an edge point A closest to the image display device and an edge point B farthest from the image display device; further, as shown in (b) of fig. 3, the plane passing through edge point A and parallel to the image display device is determined as the near plane, the plane passing through edge point B and parallel to the image display device is determined as the far plane, and a plane midway between the near plane and the far plane is determined as the base plane.
In some embodiments, the at least two reference planes may also only include a near plane and a far plane parallel to the image display device, where the near plane is a plane determined according to an edge point within the image capture range closest to the image display device, and the far plane is a plane determined according to an edge point within the image capture range farthest from the image display device.
It should be noted that the case in which the at least two reference planes include the base plane, the near plane and the far plane differs from the case in which they include only the near plane and the far plane in how the zero boundary is chosen when determining the relative positional relationship between the target object in the target image and the image display device. If the at least two reference planes include the base plane, the near plane and the far plane, the base plane may be defined as the zero boundary of the position information in the vertical direction (the direction perpendicular to the image display device): position information behind the base plane (towards the far plane) is negative and position information in front of the base plane (towards the near plane) is positive, or, alternatively, position information behind the base plane is positive and position information in front of it is negative. If the at least two reference planes include only the near plane and the far plane, either of them may be taken as the zero boundary in the vertical direction: with the near plane as the zero boundary, the position information behind it may be defined as all negative or all positive; with the far plane as the zero boundary, the position information in front of it may be defined as all positive or all negative.
It should be understood that, in practical applications, besides defining at least two reference planes based on the above two manners, any multiple planes with different distances from the image display device within the image acquisition range may also be defined as the reference planes, and the application does not make any limitation on the specific position of each reference plane, nor on the number of defined reference planes.
The reference information corresponding to each reference surface may include reference region information of the target object located at the reference surface and length information of the reference surface in a direction parallel to the image display apparatus within the image acquisition range. Specifically, the reference region of the target object may be any region on the target object, such as a face region, a body region, and the like; when the reference region of the target object is a face region, the reference region information of the target object may be the width and height of the face region of the target object, or the area of the face region of the target object, respectively; when the reference region of the target object is a body region, the reference region information of the target object may be the width and height of the body region of the target object, or the area of the body region of the target object, respectively. The processing device may control the image acquisition device to acquire an image of the target object at each reference surface, and then determine reference area information in the reference information corresponding to each reference surface based on the acquired image. The length information of the reference plane may be determined according to relevant configuration parameters of the image capturing device, for example, on the premise that an image capturing view angle of the image capturing device is known, the processing device may determine the length information of the reference plane in a direction parallel to the image displaying device within the image capturing range according to a distance between the reference plane and the image displaying device.
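For example, if the horizontal field of view of the image acquisition device is known, the visible length of a plane at a given distance follows from standard pinhole-camera trigonometry; the following is a generic sketch of that relation, not a formula quoted from the patent.

```python
import math

def plane_length(distance_m: float, horizontal_fov_deg: float) -> float:
    """Visible length, parallel to the display, of a plane at the given distance
    from the camera, for an idealised pinhole camera with the given horizontal FOV."""
    return 2.0 * distance_m * math.tan(math.radians(horizontal_fov_deg) / 2.0)

# A plane 2.4 m from a 60-degree camera spans roughly 2.77 m of the view.
print(round(plane_length(2.4, 60.0), 2))
```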
An implementation of determining the reference area information included in the reference information is described below. When it is determined that the target object is located on the reference surface, the processing device may acquire the image acquired by the image acquisition device as a reference image, and further determine information corresponding to a reference area of the target object in the reference image as reference area information.
Taking the case where the at least two reference planes include the near plane, the base plane and the far plane, and the reference region is a face region, as an example: the processing device may obtain an image I_near acquired by the image acquisition device when the target object is located on the near plane, an image I_base acquired when the target object is located on the base plane, and an image I_far acquired when the target object is located on the far plane. Based on the image I_near, the face position of the target object and the size of its corresponding rectangular frame pos_near_face = [x, y, w, h] are determined as the reference region information corresponding to the near plane; based on the image I_base, pos_base_face = [x, y, w, h] is determined as the reference region information corresponding to the base plane; and based on the image I_far, pos_far_face = [x, y, w, h] is determined as the reference region information corresponding to the far plane.
In a possible implementation manner, after determining the actual position information corresponding to each reference surface based on the configuration parameters of the image acquisition device, the processing device may directly prompt the target object to stand at the position corresponding to each reference surface; when the target object confirms that it is located on the reference surface, it may send confirmation information to the processing device through a related device to notify the processing device that it is currently located on that reference surface.
In another possible implementation manner, the processing device may configure, in advance, a corresponding preset proportion threshold for each reference surface based on the actual position information corresponding to that reference surface. When determining the reference region information included in each piece of reference information, the processing device may control the image acquisition device to acquire images in real time. After receiving an image acquired by the image acquisition device, the processing device may determine the proportion of the reference region of the target object in the entire image, and then determine whether the target object is located on the reference surface at that moment according to the relationship between this proportion and the preset proportion threshold corresponding to the reference surface. When it determines that the target object is located on the reference surface, the processing device may take the image as a reference image and determine the area corresponding to the reference region of the target object in the reference image as the area of the reference region.
Taking the determination of the area of the reference region in the reference information corresponding to the reference plane, where the reference region is a face region as an example, after receiving an image transmitted by the image acquisition device, the processing device may determine the position of a target object therein by using a target tracking algorithm, and identify the face region of the target object by using a face detection algorithm, and then determine whether the proportion of the face region in the whole image meets a preset proportion threshold corresponding to the reference plane, if so, determine that the target object in the image is already in the reference plane, and further determine the width and height corresponding to the face region of the target object as reference region information, or determine the area corresponding to the reference region as reference region information.
It should be understood that the preset proportion threshold values corresponding to different reference surfaces are different, the closer the reference surface is to the image display device, the larger the corresponding preset proportion threshold value is, and the farther the reference surface is from the image display device, the smaller the corresponding preset proportion threshold value is.
Optionally, in this implementation manner, in order to help the target object to quickly reach the reference surface, when the processing device determines that the target object is not located on the reference surface, the processing device may determine a relative position relationship between the target object and the reference surface according to the image acquired by the image acquisition device, and further generate prompt information according to the relative position relationship, where the prompt information may be used to prompt a moving direction of the target object to the reference surface.
Specifically, when the processing device determines that the target object is not located on the reference surface according to the image captured by the image capturing device, that is, when the processing device determines that the occupation ratio of the reference area of the target object in the image does not satisfy the preset ratio threshold corresponding to the reference surface, the processing device may determine the relative positional relationship between the target object and the reference surface according to the size relationship between the occupation ratio and the preset ratio threshold, for example, if the occupation ratio is smaller than the preset ratio threshold, it indicates that the target object is located behind the reference surface (i.e., in a direction away from the image display device), and if the occupation ratio is larger than the preset ratio threshold, it indicates that the target object is located in front of the reference surface (i.e., in a direction close to the image display device). Then, based on the determined relative positional relationship, prompt information for prompting the target object is generated, for example, when it is determined that the target object is located behind the reference surface, prompt information for prompting the target object to move forward may be generated, and when it is determined that the target object is located in front of the reference surface, prompt information for prompting the target object to move backward may be generated. Furthermore, the processing device may control the image display device to give a prompt to the target object based on the prompt information, specifically, the prompt may be given in a form of voice, or the prompt may be given in a form of displaying related content.
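As a hedged sketch of this proportion check and the resulting movement prompt (the threshold values, the tolerance, and the use of the face-box area are assumptions made for illustration; the text above only requires comparing the region's share of the image with a preset threshold):

```python
def check_against_plane(face_box, frame_shape, preset_ratio, tolerance=0.02):
    """Compare the face region's share of the whole frame with a reference
    surface's preset proportion threshold and report where the user stands.

    Returns "on_plane", "behind_plane" (move forward) or "in_front_of_plane"
    (move backward). The tolerance is an illustrative assumption."""
    x, y, w, h = face_box
    frame_h, frame_w = frame_shape[:2]
    ratio = (w * h) / float(frame_w * frame_h)
    if abs(ratio - preset_ratio) <= tolerance:
        return "on_plane"
    # A smaller share means the face looks smaller, i.e. the user is farther away.
    return "behind_plane" if ratio < preset_ratio else "in_front_of_plane"

prompts = {
    "behind_plane": "Please move forward.",
    "in_front_of_plane": "Please move backward.",
}
```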
Step 202: and when the image display equipment displays the panoramic image, acquiring a target image acquired by the image acquisition equipment.
When the processing device detects that the image display device displays the panoramic image under the control of the user, the processing device can control the image acquisition device to start to continuously acquire the images in real time, the image acquisition device correspondingly transmits the acquired images to the processing device, and the processing device takes the images transmitted by the image acquisition device as target images and executes subsequent operations aiming at each target image.
It should be noted that the panoramic image in the embodiment of the present application may be a 360-degree panoramic image or a VR image. When the panoramic image is a 360-degree panoramic image, the target object can directly view the 360-degree panoramic image displayed by the image display device without other peripheral equipment. When the panoramic image is a VR image, the target object may view the VR image displayed by the image display device using a corresponding external device, such as VR glasses, a VR helmet, or the like, so as to obtain a more realistic stereoscopic viewing experience.
It should be noted that the panoramic image in the embodiment of the present application may specifically be a panoramic picture, and may also be a panoramic video, and the specific form of the panoramic image is not limited in this application.
Step 203: and detecting a reference area of the target object in the target image, and taking information corresponding to the reference area as target area information.
After the processing device acquires the target image transmitted by the image acquisition device, the processing device may determine, through a corresponding machine learning model, a reference region of the target object in the target image based on the target image, and further determine information corresponding to the reference region as target region information, for example, determine a width and a height corresponding to the reference region as the target region information, or determine an area corresponding to the reference region as the target region information.
In a possible implementation manner, the processing device may first determine a position of the target object in the target image by using a target tracking algorithm, then further identify a reference region of the target object based on the determined position of the target object by using a corresponding region detection algorithm, and determine information corresponding to the reference region as target region information.
Taking the reference region as the face region as an example, the processing device may determine a position of the target object in the target image by using a target tracking algorithm, further detect the face region of the target object based on the determined position of the target object by using a face detection algorithm, and after detecting the face region of the target object, determine a width and a height corresponding to the face region as target region information, or calculate an area of the face region as the target region information. It should be understood that, when the reference region is a body region, the processing device may employ a corresponding body detection algorithm to detect the body region of the target object based on the position of the target object and determine information corresponding to the detected body region as target region information.
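As one concrete, hedged example of the face-region branch (OpenCV's Haar cascade is merely one possible detector and is not prescribed above; the target-tracking step is omitted for brevity):

```python
import cv2  # OpenCV, assumed available

def detect_target_region(frame_bgr):
    """Detect the largest face in the captured frame and return its width, height
    and area as the target region information (illustrative sketch only)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest detection
    return {"box": (int(x), int(y), int(w), int(h)),
            "width": int(w), "height": int(h), "area": int(w * h)}
```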
It should be understood that, in practical applications, the processing device may use any one of the target tracking algorithms to determine the position of the target object, and may also use any one of the area detection algorithms to detect the reference area of the target object, and the application does not limit the target tracking algorithm and the area detection algorithm used herein.
In another possible implementation manner, the processing device may directly employ a high-performance region detection algorithm to detect a reference region of a target object based on the target image, and then determine information corresponding to the detected reference region as target region information based on the detected reference region.
Still taking the reference region as the face region as an example, the processing device may directly use a face detection algorithm to process the target image, identify the face region of the target object in the target image, and further determine the width and height corresponding to the identified face region as target region information, or calculate the area of the face region as the target region information. It should be understood that when the reference region is a body region, the processing device may also process the target image using a corresponding body region detection algorithm to identify the body region of the target object in the target image and determine target region information based thereon.
It should be understood that, in practical applications, the processing device may employ any region detection algorithm to detect the reference region of the target object based on the target image, and the application does not limit the region detection algorithm used herein.
It should be noted that in some application scenarios, a plurality of objects may exist in a viewing area of the image display device at the same time, and accordingly, a target image acquired by the image acquisition device at this time also includes a plurality of objects, and in order to ensure that the processing device can accurately provide a panoramic image rendering service for the target object, the processing device may perform a target object identification operation based on the target image in this case.
Specifically, the reference information corresponding to at least one reference surface acquired in advance by the processing device may further include target facial feature information corresponding to a target object, and after the processing device acquires the target image, the processing device may determine, for each object in the target image, facial feature information corresponding to the object and determine a similarity between the facial feature information and the target facial feature information as a similarity corresponding to the object; after the respective similarity of each object in the target image is determined, the object with the highest similarity can be determined as the target object according to the respective similarity of each object in the target image.
Take the case where the reference information corresponding to the base plane also includes target facial feature information feature_base as an example. After the processing device acquires the target image, it may detect the position of each face contained in the target image and determine the facial feature corresponding to each face, obtaining a facial feature queue feature_group = {{feature_1, pos_1 = [x, y, w, h]}, {feature_2, pos_2 = [x, y, w, h]}, ..., {feature_i, pos_i = [x, y, w, h]}}; further, for each facial feature, the similarity between it and the target facial feature information feature_base is calculated, and the object corresponding to the facial feature with the highest similarity is finally selected as the target object, i.e. feature_best = most_similarity(feature_group, feature_base).
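A minimal sketch of such a selection step, assuming the facial features are fixed-length embedding vectors and using cosine similarity (the text only requires some similarity measure):

```python
import numpy as np

def pick_target_object(face_features, target_feature):
    """Return the index of the detected face whose feature vector is most similar
    to the stored target facial feature, together with that similarity."""
    target = np.asarray(target_feature, dtype=float)
    best_idx, best_sim = -1, -1.0
    for i, feat in enumerate(face_features):
        feat = np.asarray(feat, dtype=float)
        sim = float(np.dot(feat, target) /
                    (np.linalg.norm(feat) * np.linalg.norm(target) + 1e-12))
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx, best_sim
```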
It should be understood that, if the reference information corresponding to each reference surface includes target facial feature information of the target object at the reference surface, the processing device may select one of the target facial feature information as the target facial feature information according to which the target object is identified, for example, select the target facial feature information with the highest quality from the target facial feature information as the target facial feature information according to which the target object is identified.
Step 204: and determining the position information of the target object relative to the image display equipment according to the target area information and the reference information corresponding to the at least two reference surfaces respectively, and taking the position information as target position information.
When the processing device determines target area information corresponding to the target object based on the target image acquired by the image acquisition device, the processing device may determine, as the target position information, position information of the target object in the target image relative to the image display device according to the target area information and the reference information corresponding to each of the at least two reference surfaces acquired in step 201.
Specifically, the target position information may include first offset information in a vertical direction, which is a direction perpendicular to the image display device, and second offset information in a horizontal direction, which is a direction parallel to the image display device. When the processing device determines the target position information, it may determine first offset information according to the target area information and reference area information in the reference information corresponding to each of the at least two reference surfaces, and further determine second offset information according to length information in the reference information corresponding to each of the first offset information and the at least two reference surfaces.
In some embodiments, the processing device may determine the first offset information according to the area corresponding to the reference region of the target object in the target image and the areas corresponding to the reference region of the target object when it is located on each reference plane. Specifically, if the reference region information in the reference information corresponding to each reference surface includes the width and height of the reference region of the target object located on that reference surface, and the target region information includes the width and height of the reference region of the target object in the target image, the processing device may determine the target region area from the target region information, determine the reference region area corresponding to each of the at least two reference surfaces from the reference region information in the corresponding reference information, and then determine the first offset information from the target region area and the reference region areas corresponding to the at least two reference surfaces. More specifically, after determining the target region area and the reference region areas corresponding to the at least two reference surfaces, the processing device may calculate the difference between the target region area and the reference region area corresponding to one reference surface as a first difference, calculate the difference between the reference region areas corresponding to two different reference surfaces as a second difference, and determine the first offset information according to the proportional relationship between the first difference and the second difference.
It should be understood that, in the case where the reference region information included in the reference information corresponding to the reference surface is the area corresponding to the reference region, and the target region information is the area corresponding to the reference region on the target object, the processing device may directly calculate the first offset information based on the reference region information and the target region information in each piece of reference information in the above manner.
It should be noted that, in practical applications, the processing device may determine the first offset information based on the area relationship, and may also determine the first offset information based on the width and/or height in the reference region information and the width and/or height in the target region information, and the determination manner of the first offset information is not specifically limited in this application.
After determining the first offset information, the processing device may determine a horizontal offset value corresponding to a plane (the plane is parallel to the image display device) where the target object is located according to the first offset information and a difference value between length information in two different pieces of reference information, and determine the second offset information based on the horizontal offset value and the length information in any one of the two pieces of reference information. Specific determination manners of the first offset information and the second offset information in the case where the at least two reference planes include the near plane, the base plane, and the far plane, and in the case where the at least two reference planes include only the near plane and the far plane are described below, respectively.
In a case where the at least two reference planes include a near plane, a base plane, and a far plane, when it is determined from the target area information that the target object is located between the base plane and the far plane, the processing device may determine the first offset information according to the target area information and the reference area information in the reference information corresponding to each of the base plane and the far plane, and determine the second offset information according to the first offset information and the length information in the reference information corresponding to each of the base plane and the far plane. When it is determined from the target area information that the target object is located between the base plane and the near plane, the processing device may determine the first offset information according to the target area information and the reference area information in the reference information corresponding to each of the base plane and the near plane, and determine the second offset information according to the first offset information and the length information in the reference information corresponding to each of the base plane and the near plane.
The above implementation is exemplarily described below with reference to fig. 4. In a case where the at least two reference planes include a near plane, a base plane, and a far plane, the processing device may define the base plane as the zero boundary in the vertical direction: the first offset information corresponding to a target object located behind the base plane (i.e., toward the far plane) is a negative value, and the first offset information corresponding to a target object located in front of the base plane (i.e., toward the near plane) is a positive value. The processing device may define, as the first offset information, the ratio of the distance between the position of the target object and the base plane to the distance between the near plane and the base plane or the distance between the far plane and the base plane; and may define, as the second offset information, half the length of the plane (parallel to the image display device) in which the target object is located.
The processing device may determine a target region area S (pos) from the target region informationbest) Determining a reference region area S (pos) from the reference region information corresponding to the near planenear),Determining a reference region area S (pos) from reference region information corresponding to the reference planebase) Determining a reference region area S (pos) from reference region information corresponding to the far planefar)。
When the target region area S(pos_best) is smaller than the reference region area S(pos_base) in the reference information corresponding to the base plane, it may be determined that the target object is located between the base plane and the far plane, and at this time the first offset information y_offset may be calculated by equation (1):

y_offset = (S(pos_best) - S(pos_base)) / (S(pos_base) - S(pos_far))    (1)

where S(pos_best) is the area of the target region, S(pos_base) is the area of the reference region corresponding to the base plane, and S(pos_far) is the area of the reference region corresponding to the far plane.
Further, the second offset information x_offset can be calculated by equation (2):

x_offset = (x_base - y_offset * (x_far - x_base)) / 2    (2)

where y_offset is the first offset information, x_far is the length information in the reference information corresponding to the far plane, and x_base is the length information in the reference information corresponding to the base plane.
When the target region area S(pos_best) is larger than the reference region area S(pos_base) in the reference information corresponding to the base plane, it may be determined that the target object is located between the base plane and the near plane, and at this time the first offset information y_offset may be calculated by equation (3):

y_offset = (S(pos_best) - S(pos_base)) / (S(pos_near) - S(pos_base))    (3)

where S(pos_best) is the area of the target region, S(pos_base) is the area of the reference region corresponding to the base plane, and S(pos_near) is the area of the reference region corresponding to the near plane.
Further, the second offset information x_offset can be calculated by equation (4):

x_offset = (x_base + y_offset * (x_near - x_base)) / 2    (4)

where y_offset is the first offset information, x_near is the length information in the reference information corresponding to the near plane, and x_base is the length information in the reference information corresponding to the base plane.
It should be understood that the above-mentioned calculation manners of the first offset information and the second offset information are only examples, and in practical applications, the processing device may further give different definitions to the first offset information and the second offset information, and correspondingly calculate the first offset information and the second offset information by using different calculation manners.
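To make the above calculations concrete, the following Python sketch computes the first offset information and the second offset information for the case where the reference surfaces are the near plane, the base plane, and the far plane, following the proportional definitions of equations (1) to (4); the function and variable names are illustrative only and do not appear in the embodiment.

def offsets_with_base_plane(s_best, s_near, s_base, s_far,
                            x_near, x_base, x_far):
    """Illustrative sketch of equations (1)-(4).

    s_best, s_near, s_base, s_far: region areas S(pos_best), S(pos_near),
        S(pos_base), S(pos_far).
    x_near, x_base, x_far: lengths of the near, base, and far planes in the
        direction parallel to the image display device.
    """
    if s_best < s_base:
        # Target object between the base plane and the far plane:
        # y_offset is negative, per equations (1) and (2).
        y_offset = (s_best - s_base) / (s_base - s_far)
        x_offset = (x_base - y_offset * (x_far - x_base)) / 2
    else:
        # Target object between the base plane and the near plane:
        # y_offset is positive, per equations (3) and (4).
        y_offset = (s_best - s_base) / (s_near - s_base)
        x_offset = (x_base + y_offset * (x_near - x_base)) / 2
    return y_offset, x_offset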
In a case where the at least two reference planes include only the near plane and the far plane, the processing device may determine the first offset information according to the target area information and reference area information in reference information corresponding to each of the near plane and the far plane; and determining second offset information according to the first offset information and length information in the reference information corresponding to the near plane and the far plane respectively.
The above implementation is exemplarily described below with reference to fig. 5. In a case where the at least two reference planes include a near plane and a far plane, the processing device may define the near plane as a zero-boundary line in a vertical direction, and the first offset information corresponding to the target object located behind the zero-boundary line (i.e., in a direction close to the far plane) is a negative value. The processing device may define a ratio of a distance between a position where the target object is located and the near plane to a distance between the near plane and the far plane as the first offset information; the processing device may define half the length of a plane (a plane parallel to the image display device) in which the target object is located, as the second offset information.
The processing device may determine the target region area S(pos_best) from the target region information, determine the reference region area S(pos_near) from the reference region information corresponding to the near plane, and determine the reference region area S(pos_far) from the reference region information corresponding to the far plane.
Specifically, when calculating the first offset information y_offset, the processing device may calculate it by equation (5) according to the target region area S(pos_best), the reference region area S(pos_near) corresponding to the near plane, and the reference region area S(pos_far) corresponding to the far plane:

y_offset = (S(pos_best) - S(pos_near)) / (S(pos_near) - S(pos_far))    (5)
Further, the processing device may calculate the second offset information x_offset by equation (6) according to the first offset information y_offset, the length information x_near in the reference information corresponding to the near plane, and the length information x_far in the reference information corresponding to the far plane:

x_offset = (x_near - y_offset * (x_far - x_near)) / 2    (6)
It should be understood that the above-mentioned calculation manners of the first offset information and the second offset information are only examples, and in practical applications, the processing device may further give different definitions to the first offset information and the second offset information, and correspondingly calculate the first offset information and the second offset information by using different calculation manners.
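A corresponding sketch for the two-plane case, again with illustrative names only, might look as follows:

def offsets_near_far_only(s_best, s_near, s_far, x_near, x_far):
    """Illustrative sketch of equations (5)-(6): the near plane is the zero
    boundary, so y_offset is negative behind it (toward the far plane)."""
    y_offset = (s_best - s_near) / (s_near - s_far)         # equation (5)
    x_offset = (x_near - y_offset * (x_far - x_near)) / 2   # equation (6)
    return y_offset, x_offset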
It should be noted that, when the at least two reference surfaces are other planes within the image acquisition range, the processing device may also determine, in a corresponding calculation manner, the first offset information in the vertical direction and the second offset information in the horizontal direction according to the target area information and the reference information corresponding to each of the at least two reference surfaces, and the specific determination manner of the first offset information and the second offset information is not limited herein.
Step 205: determine an image rendering angle according to the target position information.
After the processing device determines the relative positional relationship between the target object in the target image and the image display device, that is, after determining the target position information, the processing device may further determine, according to the target position information, the image rendering angle based on which the panoramic image needs to be rendered.
Specifically, in the case where the target position information includes first offset information in the vertical direction and second offset information in the horizontal direction, the processing device may determine the vertical rotation angle from the first offset information and determine the horizontal rotation angle from the second offset information and a target distance, where the target distance refers to a distance between the target object and a reference baseline in the target image.
Taking the case where the at least two reference planes include a near plane, a base plane, and a far plane as an example, the processing device may define movement toward the image display device as corresponding to a positive rotation angle with a value range of [0, 180], and movement away from the image display device as corresponding to a negative rotation angle with a value range of [-180, 0]. At this time, the vertical rotation angle α_v may be calculated from the first offset information y_offset by equation (7):

α_v = y_offset * 180    (7)
Here, the sign of y_offset already represents the direction corresponding to the vertical rotation angle.
When determining the horizontal rotation angle α_h, the processing device needs to use the second offset information x_offset and a target distance width(I), where the target distance width(I) is the distance between the target object and the reference baseline of the target image. Taking the reference baseline of the target image as the center line of the target image as an example, the target distance width(I) is the distance in the horizontal direction between the target object in the target image and the center line. The processing device may calculate the horizontal rotation angle α_h by equation (8):

α_h = (x_offset - width(I)) * 360    (8)
It should be understood that, in practical applications, the processing device may also use other manners to calculate the image rendering angle based on the relative position relationship between the target object and the image display device, and the present application does not limit the manner of calculating the image rendering angle.
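As a hedged illustration of equations (7) and (8), the conversion from the offsets to the rotation angles could be written as follows; width_i stands for the target distance width(I) and is an assumed parameter name.

def rendering_angles(y_offset, x_offset, width_i):
    """Illustrative sketch of equations (7)-(8): convert the vertical and
    horizontal offsets into the rotation angles of the panoramic model.

    width_i: target distance width(I), i.e. the horizontal distance between
        the target object and the reference baseline of the target image.
    """
    alpha_v = y_offset * 180              # equation (7); sign gives direction
    alpha_h = (x_offset - width_i) * 360  # equation (8)
    return alpha_v, alpha_h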
Step 206: render the picture at the corresponding viewing angle in the panoramic image based on the image rendering angle.
Finally, the processing device may apply the calculated image rendering angle to a three-dimensional image rendering model to render a picture in the panoramic image at the corresponding viewing angle.
The image rendering method provided by the embodiment of the application can determine the relative position relationship between a target object in a target image and image display equipment based on the target image acquired by the image acquisition equipment and the reference information corresponding to each of a plurality of reference surfaces in the image acquisition range of the image acquisition equipment when the image display equipment displays a panoramic image, and convert the relative position relationship into a corresponding image rendering angle, thereby rendering a picture in the panoramic image at a corresponding view angle; therefore, the effect of correspondingly changing the display view angle of the panoramic image along with the position change of the user can be achieved, the experience of the user in watching the panoramic image through the large-screen equipment is greatly improved, and the application of the panoramic image at the large-screen equipment end is expanded.
In order to further facilitate understanding of the image rendering method provided by the embodiment of the present application, the following takes the case where the at least two reference planes include a near plane, a base plane, and a far plane as an example to describe the image rendering method provided by the embodiment of the present application as a whole.
For a fixed large-screen image display device, the background of the image display device is generally fixed. To facilitate the subsequent determination of the relative positional relationship between the target object and the image display device, the processing device may first determine the image acquisition range corresponding to the image acquisition device based on the configuration of the image acquisition device, then determine the near plane based on the edge point closest to the image display device within the image acquisition range, determine the far plane based on the edge point farthest from the image display device within the image acquisition range, determine a plane located between the near plane and the far plane as the base plane, and take the near plane, the base plane, and the far plane as the reference surfaces.
Furthermore, the processing device may acquire reference images of the target object located on the near plane, the base plane, and the far plane, respectively, by interacting with the target object. Specifically, the processing device may control the image display device to prompt the target object, through voice or pictures, to stand on the base plane and to indicate that this plane is the standard plane used as the basis when the panoramic image is rendered; when it is determined that the target object is located on the base plane, the processing device obtains the reference image I_base captured by the image acquisition device at that time. Similarly, the processing device may prompt the target object to stand on the near plane and the far plane in the same manner: when it is determined that the target object is located on the near plane, it obtains the reference image I_near captured by the image acquisition device at that time, and when it is determined that the target object is located on the far plane, it obtains the reference image I_far captured by the image acquisition device at that time.
It should be noted that, when the processing device acquires the reference image, the target object is usually prompted to ensure that no other object exists in the image acquisition range, so as to ensure that only the target object is included in the reference image.
The processing device performs face detection on the reference image I_base, and determines the facial feature information of the target object and the size of the minimum rectangular frame corresponding to the face region in the reference image I_base, which are recorded as:

base_feature = {feature_base, pos_base_face = [x, y, w, h]}
the processing device aims at the reference image InearAnd a reference picture IfarFacial features therein may also be detected accordinglyThe size of the smallest rectangular box corresponding to the information and face region is noted as:
near_feature={featurenear,posnearface=[x,y,w,h]}
far_feature={featurefar,posfarface=[x,y,w,h]}
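One possible way to organize this calibration step is sketched below; detect_face is a placeholder for any face detection routine that returns facial feature information together with the minimum bounding rectangle [x, y, w, h], and is not part of the embodiment.

def build_reference_features(i_near, i_base, i_far, detect_face):
    """Illustrative sketch: build {feature, pos=[x, y, w, h]} records from
    the reference images I_near, I_base, and I_far captured during
    calibration. detect_face(image) is assumed to return a tuple of
    (feature_vector, [x, y, w, h])."""
    references = {}
    for name, image in (("near", i_near), ("base", i_base), ("far", i_far)):
        feature, box = detect_face(image)
        references[name] = {"feature": feature, "pos": box}
    return references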
when the processing device detects that the image display device plays a 360-degree panoramic video, the processing device may control the image capturing device to continuously capture a target image in real time, and when the target image includes a plurality of objects, the processing device may detect the face of each object in the target image by using a face recognition algorithm, and determine the face feature information and the corresponding position information corresponding to each object, which are recorded as:
feature_group={{feature1,pos1=[x,y,w,h]},{feature2,pos2=[x,y,w,h],…… {featurei,posi=[x,y,w,h]}}
determining and referencing image I using image perception algorithmsbaseAnd determining that the object corresponding to the facial feature information is the target object.
{featurebest,posbest}=most_similarty(featuregroup,base_feature)
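The matching step could be sketched as follows, assuming some similarity function (for example, cosine similarity between feature vectors) is available; the names are illustrative.

def most_similarity(feature_group, base_feature, similarity):
    """Illustrative sketch: among all detected objects, select the one whose
    facial features are most similar to the target facial features recorded
    for the base plane.

    feature_group: list of {"feature": ..., "pos": [x, y, w, h]} records.
    base_feature: {"feature": ..., "pos": [x, y, w, h]} extracted from I_base.
    similarity: assumed callable returning a similarity score for two
        feature vectors.
    """
    best = max(feature_group,
               key=lambda obj: similarity(obj["feature"],
                                          base_feature["feature"]))
    return best["feature"], best["pos"]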
After determining the target object, the processing device may determine the position of the target object relative to the image display device based on the target image, and thereby determine the horizontal rotation angle α_h and the vertical rotation angle α_v.
For the vertical angle, according to the imaging principle that near objects appear large and far objects appear small, the closer the target object is to the image display device, the larger the area occupied by the face of the target object in the image. Based on this, the offset y_offset of the target object in the vertical direction can be determined by the following formula:
y_offset = (S(pos_best) - S(pos_base)) / (S(pos_base) - S(pos_far)), when S(pos_best) < S(pos_base)

y_offset = (S(pos_best) - S(pos_base)) / (S(pos_near) - S(pos_base)), when S(pos_best) ≥ S(pos_base)

where S() is the area calculation formula: the processing device may calculate S(pos_base) from the w and h in pos_base_face = [x, y, w, h], calculate S(pos_near) from the w and h in pos_near_face = [x, y, w, h], calculate S(pos_far) from the w and h in pos_far_face = [x, y, w, h], and calculate S(pos_best) from the w and h in pos_best = [x, y, w, h].
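For example, the area S() of a detected region can be obtained directly from the recorded bounding box, as in the following illustrative helper:

def region_area(pos):
    """Illustrative sketch of S(): area of the minimum rectangular frame
    recorded as pos = [x, y, w, h]."""
    _x, _y, w, h = pos
    return w * h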
During rendering, the processing device may define movement toward the image display device as corresponding to a positive rotation angle with a value range of [0, 180], and movement away from the image display device as corresponding to a negative rotation angle with a value range of [-180, 0]. Therefore, the vertical rotation angle α_v can be calculated by the following formula:

α_v = y_offset * 180

The direction of the rotation is already characterized by the sign of y_offset.
For the horizontal angle, the processing device needs to rely on the above vertical offset y_offset to determine the horizontal offset. For example, when the user moves straight forward, the distance between the user and the edge of the image acquisition range decreases, but the rotation of the rendered view in the horizontal direction should remain 0.
Specifically, the processing device may calculate the offset x_offset in the horizontal direction by the following formula:

x_offset = (x_base - y_offset * (x_far - x_base)) / 2, when S(pos_best) < S(pos_base)

x_offset = (x_base + y_offset * (x_near - x_base)) / 2, when S(pos_best) ≥ S(pos_base)

where x_base is the length of the base plane in the direction parallel to the image display device within the image acquisition range, x_near is the length of the near plane in the direction parallel to the image display device within the image acquisition range, and x_far is the length of the far plane in the direction parallel to the image display device within the image acquisition range; the processing device may acquire x_base, x_near, and x_far in advance.
Further, the horizontal rotation angle α_h is calculated by the following formula according to the above x_offset and the distance width(I) between the target object in the target image and the center line of the target image:

α_h = (x_offset - width(I)) * 360
The horizontal rotation angle and the vertical rotation angle are actually rotation angles of the panoramic model, and the rotation angles are applied to the three-dimensional rendering model to render pictures at corresponding visual angles in the 360-degree panoramic video.
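As a final, hedged illustration, the two rotation angles could be packed into a rotation matrix before being handed to whatever three-dimensional rendering model displays the panoramic video; the embodiment does not specify a matrix layout, so the convention below (yaw about the vertical axis, then pitch about the horizontal axis) is an assumption.

import math

def view_rotation_matrix(alpha_h_deg, alpha_v_deg):
    """Illustrative sketch: combined rotation built from the horizontal
    rotation angle (yaw) and the vertical rotation angle (pitch)."""
    yaw, pitch = math.radians(alpha_h_deg), math.radians(alpha_v_deg)
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    # R = R_yaw (about the vertical axis) * R_pitch (about the horizontal
    # axis), applied to camera-space direction vectors.
    return [
        [cy, sy * sp, sy * cp],
        [0.0, cp, -sp],
        [-sy, cy * sp, cy * cp],
    ]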
For the image rendering method described above, the present application also provides a corresponding image rendering apparatus, so that the image rendering method is applied and implemented in practice.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an image rendering apparatus 600 corresponding to the image rendering method shown in fig. 2, the image rendering apparatus including:
a reference information obtaining module 601, configured to obtain reference information corresponding to each of at least two reference surfaces in an image collection range corresponding to an image collection device; the image acquisition equipment is used for acquiring images of a watching area of the image display equipment;
a target image obtaining module 602, configured to obtain a target image collected by the image collecting device when the image display device displays a panoramic image;
a target information determining module 603, configured to detect a reference region of a target object in the target image, and use an area corresponding to the reference region as a target region area;
a position information determining module 604, configured to determine, according to the area of the target region and reference information corresponding to the at least two reference surfaces, position information of the target object relative to the image display device, where the position information is used as target position information;
a rendering angle determining module 605, configured to determine an image rendering angle according to the target position information;
an image rendering module 606, configured to render, based on the image rendering angle, a picture in the panoramic image at a corresponding viewing angle.
Optionally, on the basis of the image rendering apparatus shown in fig. 6, the reference information corresponding to each reference surface includes reference area information of the target object located on the reference surface, and length information of the reference surface in a direction parallel to the image display device within the image acquisition range; the target position information includes first offset information in a vertical direction and second offset information in a horizontal direction; the vertical direction is a direction perpendicular to the image display apparatus, and the horizontal direction is a direction parallel to the image display apparatus.
Referring to fig. 7, fig. 7 is a schematic structural diagram of another image rendering apparatus 700 according to an embodiment of the present disclosure. As shown in fig. 7, the location information determining module 604 includes:
a first offset determining unit 701, configured to determine the first offset information according to the target area information and reference area information in reference information corresponding to each of the at least two reference surfaces;
a second offset determining unit 702, configured to determine the second offset information according to the first offset information and length information in the reference information corresponding to each of the at least two reference surfaces.
Optionally, on the basis of the image rendering apparatus shown in fig. 7, the at least two reference planes include a reference plane, a near plane, and a far plane parallel to the image display device; the near plane is a plane determined according to an edge point which is closest to the image display device in the image acquisition range, the far plane is a plane determined according to an edge point which is farthest from the image display device in the image acquisition range, and the reference plane is located between the near plane and the far plane;
the first offset determining unit 701 is specifically configured to determine, when it is determined that the target object is located between the reference plane and the far plane according to the target area information, the first offset information according to the target area information and reference area information in reference information corresponding to each of the reference plane and the far plane; when the target object is determined to be located between the reference plane and the near plane according to the target area information, determining the first offset information according to the target area information and reference area information in reference information corresponding to the reference plane and the near plane respectively;
the second offset determining unit 702 is specifically configured to determine, when it is determined that the target object is located between the reference plane and the far plane according to the target area information, the second offset information according to the first offset information and length information in reference information corresponding to each of the reference plane and the far plane; and when the target object is determined to be positioned between the reference plane and the near plane according to the target area information, determining the second offset information according to the first offset information and length information in the reference information corresponding to the reference plane and the near plane respectively.
Optionally, on the basis of the image rendering apparatus shown in fig. 7, the at least two reference planes include a near plane and a far plane parallel to the image display device; the near plane is a plane determined according to the edge point which is closest to the image display device in the image acquisition range, and the far plane is a plane determined according to the edge point which is farthest from the image display device in the image acquisition range;
the first offset determining unit 701 is specifically configured to determine the first offset information according to the target area information and reference area information in reference information corresponding to each of the near plane and the far plane;
the second offset determining unit 702 is specifically configured to determine the second offset information according to the first offset information and length information in reference information corresponding to each of the near plane and the far plane.
Optionally, on the basis of the image rendering apparatus shown in fig. 7, the reference area information in the reference information corresponding to each reference surface includes: the width and height of the reference region of the target object located at the reference plane; the target area information includes: a width and a height of the reference region of a target object in the target image; the first offset determining unit 701 is specifically configured to:
determining the area of a target area according to the target area information;
determining the reference area corresponding to the at least two reference surfaces according to the reference area information in the reference information corresponding to the at least two reference surfaces;
and determining the first offset information according to the area of the target region and the area of the reference region corresponding to the at least two reference surfaces.
Optionally, on the basis of the image rendering apparatus shown in fig. 7, the image rendering angle includes a vertical rotation angle and a horizontal rotation angle; the rendering angle determination module 605 is specifically configured to:
determining the vertical rotation angle according to the first offset information;
determining the horizontal rotation angle according to the second offset information and the target distance; the target distance is a distance between the target object and a reference baseline in the target image.
Optionally, on the basis of the image rendering apparatus shown in fig. 6, the reference information corresponding to at least one of the reference surfaces further includes target facial feature information corresponding to the target object; referring to fig. 8, fig. 8 is a schematic structural diagram of another image rendering apparatus 800 according to an embodiment of the present disclosure. As shown in fig. 8, the apparatus further includes:
a target object detection module 801, configured to, in a case where a plurality of objects are included in the target image, determine, for each object in the target image, facial feature information corresponding to the object; taking the similarity between the facial feature information and the target facial feature information as the corresponding similarity of the object; and taking the object with the highest corresponding similarity in the target image as the target object.
Optionally, on the basis of the image rendering apparatus shown in fig. 6, referring to fig. 9, fig. 9 is a schematic structural diagram of another image rendering apparatus 900 provided in the embodiment of the present application. As shown in fig. 9, the apparatus further includes:
a reference information determining module 901, configured to, when it is determined that the target object is located in the reference plane, obtain an image acquired by the image acquisition device as a reference image; and taking information corresponding to the reference area of the target object in the reference image as the reference area information.
Optionally, on the basis of the image rendering apparatus shown in fig. 9, referring to fig. 10, fig. 10 is a schematic structural diagram of another image rendering apparatus 1000 according to an embodiment of the present application. As shown in fig. 10, the apparatus further includes:
a prompt module 1001, configured to determine, according to the image acquired by the image acquisition device, a relative position relationship between the target object and the reference surface when it is determined that the target object is not located on the reference surface, and generate prompt information according to the relative position relationship; the prompt information is used for prompting the moving direction of the target object to the reference surface.
Optionally, on the basis of the image rendering apparatus shown in fig. 6, the reference area includes: a facial region and/or a body region.
Optionally, on the basis of the image rendering apparatus shown in fig. 6, the panoramic image includes at least one of the following: 360 degree panoramic image, virtual reality VR image.
The image rendering device provided by the embodiment of the application can determine the relative position relationship between a target object in a target image and image display equipment based on the target image acquired by the image acquisition equipment and the reference information corresponding to each of a plurality of reference surfaces in the image acquisition range of the image acquisition equipment when the image display equipment displays a panoramic image, and convert the relative position relationship into a corresponding image rendering angle, thereby rendering a picture in the panoramic image at a corresponding view angle; therefore, the effect of correspondingly changing the display view angle of the panoramic image along with the position change of the user can be achieved, the experience of the user in watching the panoramic image through the large-screen equipment is greatly improved, and the application of the panoramic image at the large-screen equipment end is expanded.
The embodiment of the present application further provides a processing device for rendering an image, where the processing device may specifically be a server or a terminal device, and the server and the terminal device provided in the embodiment of the present application will be described below from the perspective of hardware materialization.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a server 1100 according to an embodiment of the present disclosure. The server 1100 may vary widely in configuration or performance and may include one or more Central Processing Units (CPUs) 1122 (e.g., one or more processors) and memory 1132, one or more storage media 1130 (e.g., one or more mass storage devices) storing applications 1142 or data 1144. Memory 1132 and storage media 1130 may be, among other things, transient storage or persistent storage. The program stored on the storage medium 1130 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Still further, the central processor 1122 may be provided in communication with the storage medium 1130 to execute a series of instruction operations in the storage medium 1130 on the server 1100.
The server 1100 may also include one or more power supplies 1126, one or more wired or wireless network interfaces 1150, one or more input-output interfaces 1158, and/or one or more operating systems, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The steps performed by the server in the above embodiment may be based on the server structure shown in fig. 11.
The CPU 1122 is configured to execute the following steps:
acquiring reference information corresponding to at least two reference surfaces in an image acquisition range corresponding to image acquisition equipment; the image acquisition equipment is used for acquiring images of a watching area of the image display equipment;
when the image display equipment displays the panoramic image, acquiring a target image acquired by the image acquisition equipment;
detecting a reference area of a target object in the target image, and taking information corresponding to the reference area as target area information;
determining position information of the target object relative to the image display equipment according to the target area information and the reference information corresponding to the at least two reference surfaces respectively, and taking the position information as target position information;
determining an image rendering angle according to the target position information;
and rendering the picture under the corresponding view angle in the panoramic image based on the image rendering angle.
Optionally, the CPU 1122 may also be configured to execute the steps of any implementation manner of the image rendering method provided in the embodiment of the present application.
Referring to fig. 12, fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present application. For convenience of explanation, only the parts related to the embodiments of the present application are shown, and specific technical details are not disclosed. The terminal device is described below by taking a smart television as an example:
fig. 12 is a block diagram illustrating a partial structure of a smart television related to a terminal provided in an embodiment of the present application. Referring to fig. 12, the smart tv includes: radio Frequency (RF) circuit 1210, memory 1220, input unit 1230, display unit 1240, sensor 1250, audio circuit 1260, wireless fidelity (WiFi) module 1270, processor 1280, and power supply 1290. Those skilled in the art will appreciate that the smart tv architecture shown in fig. 12 does not constitute a limitation of the smart tv, and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
The memory 1220 may be used to store software programs and modules, and the processor 1280 executes various functional applications and data processing of the smart tv by running the software programs and modules stored in the memory 1220. The memory 1220 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the smart tv, and the like. Further, the memory 1220 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The processor 1280 is a control center of the smart tv, connects various parts of the entire smart tv using various interfaces and lines, and performs various functions of the smart tv and processes data by running or executing software programs and/or modules stored in the memory 1220 and calling data stored in the memory 1220, thereby performing overall monitoring of the smart tv. Optionally, processor 1280 may include one or more processing units; preferably, the processor 1280 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into the processor 1280.
In this embodiment, the processor 1280 included in the terminal further has the following functions:
acquiring reference information corresponding to at least two reference surfaces in an image acquisition range corresponding to image acquisition equipment; the image acquisition equipment is used for acquiring images of a watching area of the image display equipment;
when the image display equipment displays the panoramic image, acquiring a target image acquired by the image acquisition equipment;
detecting a reference area of a target object in the target image, and taking information corresponding to the reference area as target area information;
determining position information of the target object relative to the image display equipment according to the target area information and the reference information corresponding to the at least two reference surfaces respectively, and taking the position information as target position information;
determining an image rendering angle according to the target position information;
and rendering the picture under the corresponding view angle in the panoramic image based on the image rendering angle.
Optionally, the processor 1280 is further configured to execute the steps of any implementation manner of the image rendering method provided in the embodiment of the present application.
The embodiment of the present application further provides a computer-readable storage medium for storing a computer program, where the computer program is configured to execute any one implementation manner of the image rendering method described in the foregoing embodiments.
The present application further provides a computer program product including instructions, which when run on a computer, cause the computer to perform any one of the embodiments of an image rendering method described in the foregoing embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other various media capable of storing computer programs.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (13)

1. A method of image rendering, the method comprising:
acquiring reference information corresponding to at least two reference surfaces in an image acquisition range corresponding to image acquisition equipment; the image acquisition equipment is used for acquiring images of a viewing area of the image display equipment, the image display equipment comprises large-screen terminal equipment, and the at least two reference surfaces are planes which are parallel to the image display equipment in the image acquisition range and have different distances from the image display equipment; the at least two reference surfaces are determined at least according to the edge point which is closest to the image display equipment and the edge point which is farthest from the image display equipment in the image acquisition range; the reference information corresponding to each reference surface comprises reference area information of a target object positioned on the reference surface and length information of the reference surface in a direction parallel to the image display device in the image acquisition range;
when the image display equipment displays the panoramic image, acquiring a target image acquired by the image acquisition equipment;
detecting a reference area of a target object in the target image, and taking information corresponding to the reference area as target area information;
determining first offset information in a vertical direction according to the target area information and reference area information in the reference information corresponding to the at least two reference surfaces, wherein the vertical direction is a direction perpendicular to the image display device;
determining second offset information in a horizontal direction according to the first offset information and length information in the reference information corresponding to the at least two reference surfaces, wherein the horizontal direction is parallel to the image display device;
determining a vertical rotation angle according to the first offset information;
determining a horizontal rotation angle according to the second offset information and the target distance; the target distance is a distance between the target object and a reference baseline in the target image;
taking the vertical rotation angle and the horizontal rotation angle as image rendering angles;
and rendering the picture under the corresponding view angle in the panoramic image based on the image rendering angle.
2. The method of claim 1, wherein the at least two reference planes comprise a base plane, a near plane, and a far plane parallel to the image display device; the near plane is a plane determined according to an edge point which is closest to the image display device in the image acquisition range, the far plane is a plane determined according to an edge point which is farthest from the image display device in the image acquisition range, and the reference plane is located between the near plane and the far plane;
determining first offset information in the vertical direction according to the target area information and reference area information in the reference information corresponding to the at least two reference surfaces; determining second offset information in the horizontal direction according to the first offset information and length information in the reference information corresponding to each of the at least two reference surfaces, including:
when the target object is determined to be located between the reference plane and the far plane according to the target area information, determining the first offset information according to the target area information and reference area information in reference information corresponding to the reference plane and the far plane respectively; determining the second offset information according to the first offset information and length information in reference information corresponding to the reference plane and the far plane respectively;
when the target object is determined to be located between the reference plane and the near plane according to the target area information, determining the first offset information according to the target area information and reference area information in reference information corresponding to the reference plane and the near plane respectively; and determining the second offset information according to the first offset information and length information in the reference information corresponding to the reference plane and the near plane respectively.
3. The method of claim 1, wherein the at least two reference planes comprise a near plane and a far plane parallel to the image display device; the near plane is a plane determined according to the edge point which is closest to the image display device in the image acquisition range, and the far plane is a plane determined according to the edge point which is farthest from the image display device in the image acquisition range;
determining first offset information in the vertical direction according to the target area information and reference area information in the reference information corresponding to the at least two reference surfaces; determining second offset information in the horizontal direction according to the first offset information and length information in the reference information corresponding to each of the at least two reference surfaces, including:
determining the first offset information according to the target area information and reference area information in the reference information corresponding to the near plane and the far plane respectively;
and determining the second offset information according to the first offset information and length information in the reference information corresponding to the near plane and the far plane respectively.
4. The method according to claim 1, wherein the reference area information in the reference information corresponding to each of the reference planes comprises: the width and height of the reference region of the target object located at the reference plane; the target area information includes: a width and a height of the reference region of a target object in the target image;
determining first offset information in the vertical direction according to the target area information and reference area information in the reference information corresponding to each of the at least two reference surfaces, including:
determining the area of a target area according to the target area information;
determining the reference area corresponding to the at least two reference surfaces according to the reference area information in the reference information corresponding to the at least two reference surfaces;
and determining the first offset information according to the area of the target region and the area of the reference region corresponding to the at least two reference surfaces.
5. The method according to claim 1, wherein the reference information corresponding to at least one of the reference surfaces includes target facial feature information corresponding to the target object; in a case where a plurality of objects are included in the target image, the method further includes:
for each object in the target image, determining facial feature information corresponding to the object; taking the similarity between the facial feature information and the target facial feature information as the corresponding similarity of the object;
and taking the object with the highest corresponding similarity in the target image as the target object.
6. The method according to claim 1, wherein the reference area information included in the reference information corresponding to each of the reference planes is determined by:
when the target object is located on the reference surface, acquiring an image acquired by the image acquisition equipment as a reference image;
and taking information corresponding to the reference area of the target object in the reference image as the reference area information.
7. The method of claim 6, further comprising:
when the target object is determined not to be located on the reference surface, determining the relative position relation between the target object and the reference surface according to the image acquired by the image acquisition equipment, and generating prompt information according to the relative position relation; the prompt information is used for prompting the moving direction of the target object to the reference surface.
8. The method according to any one of claims 1 to 7, wherein the reference region comprises: a facial region and/or a body region.
9. The method of any of claims 1 to 7, wherein the panoramic image comprises at least one of: 360 degree panoramic image, virtual reality VR image.
10. An image rendering apparatus, characterized in that the apparatus comprises:
the reference information acquisition module is used for acquiring reference information corresponding to at least two reference surfaces in an image acquisition range corresponding to the image acquisition equipment; the image acquisition equipment is used for acquiring images of a viewing area of the image display equipment, the image display equipment comprises large-screen terminal equipment, and the at least two reference surfaces are planes which are parallel to the image display equipment in the image acquisition range and have different distances from the image display equipment; the at least two reference surfaces are determined at least according to the edge point which is closest to the image display equipment and the edge point which is farthest from the image display equipment in the image acquisition range; the reference information corresponding to each reference surface comprises reference area information of a target object positioned on the reference surface and length information of the reference surface in a direction parallel to the image display device in the image acquisition range;
the target image acquisition module is used for acquiring a target image acquired by the image acquisition equipment when the image display equipment displays a panoramic image;
the target information determining module is used for detecting a reference area of a target object in the target image and taking information corresponding to the reference area as target area information;
a position information determining module, configured to determine, according to the target area information and reference area information in reference information corresponding to each of the at least two reference surfaces, first offset information in a vertical direction, where the vertical direction is a direction perpendicular to the image display device; determining second offset information in a horizontal direction according to the first offset information and length information in the reference information corresponding to the at least two reference surfaces, wherein the horizontal direction is parallel to the image display device;
a rendering angle determining module, configured to determine a vertical rotation angle according to the first offset information; determining a horizontal rotation angle according to the second offset information and the target distance; the target distance is a distance between the target object and a reference baseline in the target image; taking the vertical rotation angle and the horizontal rotation angle as image rendering angles;
and the image rendering module is used for rendering the picture in the panoramic image under the corresponding view angle based on the image rendering angle.
11. An image rendering system, the system comprising: the system comprises image acquisition equipment, image display equipment and processing equipment;
the image acquisition equipment is used for acquiring images of a watching area of the image display equipment and transmitting the acquired images to the processing equipment;
the image display device is used for displaying the panoramic image under the control of the processing device;
the processing device for performing the image rendering method of any one of claims 1 to 9.
12. An image rendering apparatus, characterized in that the apparatus comprises a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the image rendering method of any one of claims 1 to 9 in accordance with the computer program.
13. A computer-readable storage medium for storing a computer program for executing the image rendering method according to any one of claims 1 to 9.
CN202010461670.9A 2020-05-27 2020-05-27 Image rendering method, device, system, equipment and storage medium Active CN111629242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010461670.9A CN111629242B (en) 2020-05-27 2020-05-27 Image rendering method, device, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010461670.9A CN111629242B (en) 2020-05-27 2020-05-27 Image rendering method, device, system, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111629242A (en) 2020-09-04
CN111629242B (en) 2022-04-08

Family

ID=72272553

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010461670.9A Active CN111629242B (en) 2020-05-27 2020-05-27 Image rendering method, device, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111629242B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112073640B (en) * 2020-09-15 2022-03-29 贝壳技术有限公司 Panoramic information acquisition pose acquisition method, device and system
CN114500846B (en) * 2022-02-12 2024-04-02 北京蜂巢世纪科技有限公司 Live action viewing angle switching method, device, equipment and readable storage medium
CN117278732A (en) * 2022-04-29 2023-12-22 河北雄安三千科技有限责任公司 Display area control method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105704468A (en) * 2015-08-31 2016-06-22 深圳超多维光电子有限公司 Stereoscopic display method, device and electronic equipment used for virtual and reality scene
CN106454311A (en) * 2016-09-29 2017-02-22 北京利亚德视频技术有限公司 LED three-dimensional imaging system and method
US10397543B2 (en) * 2014-09-03 2019-08-27 Nextvr Inc. Methods and apparatus for capturing, streaming and/or playing back content

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10397543B2 (en) * 2014-09-03 2019-08-27 Nextvr Inc. Methods and apparatus for capturing, streaming and/or playing back content
CN105704468A (en) * 2015-08-31 2016-06-22 深圳超多维光电子有限公司 Stereoscopic display method, device and electronic equipment used for virtual and reality scene
CN106454311A (en) * 2016-09-29 2017-02-22 北京利亚德视频技术有限公司 LED three-dimensional imaging system and method

Also Published As

Publication number Publication date
CN111629242A (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN111629242B (en) Image rendering method, device, system, equipment and storage medium
CN108734736B (en) Camera posture tracking method, device, equipment and storage medium
TWI683259B (en) Method and related device of determining camera posture information
WO2021008456A1 (en) Image processing method and apparatus, electronic device, and storage medium
US11398044B2 (en) Method for face modeling and related products
WO2019237745A1 (en) Facial image processing method and apparatus, electronic device and computer readable storage medium
CN107646109B (en) Managing feature data for environment mapping on an electronic device
US20120127276A1 (en) Image retrieval system and method and computer product thereof
CN110599593B (en) Data synthesis method, device, equipment and storage medium
CN107167077B (en) Stereoscopic vision measuring system and stereoscopic vision measuring method
US8896692B2 (en) Apparatus, system, and method of image processing, and recording medium storing image processing control program
US10986401B2 (en) Image processing apparatus, image processing system, and image processing method
CN111724412A (en) Method and device for determining motion trail and computer storage medium
EP4090000A1 (en) Method and device for image processing, electronic device, and storage medium
CN112995491B (en) Video generation method and device, electronic equipment and computer storage medium
CN110555815B (en) Image processing method and electronic equipment
CN112333458A (en) Live broadcast room display method, device, equipment and storage medium
WO2019000464A1 (en) Image display method and device, storage medium, and terminal
CN111083513A (en) Live broadcast picture processing method and device, terminal and computer readable storage medium
CN112073640B (en) Panoramic information acquisition pose acquisition method, device and system
CN104866809B (en) Picture playing method and device
CN110765926B (en) Picture book identification method, device, electronic equipment and storage medium
CN105608469B (en) The determination method and device of image resolution ratio
CN116567349A (en) Video display method and device based on multiple cameras and storage medium
CN111325674A (en) Image processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40028076
Country of ref document: HK
GR01 Patent grant