CN107330850B - Method and device for controlling display size in VR interaction


Info

    • Publication number: CN107330850B
    • Application number: CN201710458264.5A
    • Authority: CN (China)
    • Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
    • Prior art keywords: portrait, unit, information, image, portrait information
    • Original language: Chinese (zh)
    • Other versions: CN107330850A
    • Inventors: 廖裕民, 朱祖建
    • Current and original assignee: Rockchip Electronics Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
    • Application filed by Rockchip Electronics Co Ltd

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/04 - Context-preserving transformations, e.g. by using an importance map
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science
  • Theoretical Computer Science
  • Physics & Mathematics
  • General Physics & Mathematics
  • General Engineering & Computer Science
  • Human Computer Interaction
  • Architecture
  • Computer Graphics
  • Computer Hardware Design
  • Software Systems
  • Processing Or Creating Images

Abstract

The invention provides a method and a device for controlling display size in VR interaction. The user's portrait information is first captured at a unified preset position, establishing a standard reference proportion. Thereafter, however the user is positioned while using the VR device, the captured portrait information can be rescaled to that standard proportion, composited with the scene information, and finally transmitted to the VR device for display. The invention thus resolves the inconsistency between the true proportions of social participants located in different physical spaces when they are merged into a common virtual space; it is also portable and low-power, and has broad market prospects in the VR device application field.

Description

Method and device for controlling display size in VR interaction
Technical Field
The invention relates to the field of image interaction, in particular to a method and a device for controlling display size in VR interaction.
Background
With the rapid development of virtual reality (VR) technology, socializing in a shared virtual space has become a market hotspot. Several technical difficulties remain in shared-virtual-space social technology, in particular preserving true-to-life proportions when the spatial scenes of social participants are merged and when people located in different physical spaces are placed into a common virtual scene.
Generally, a camera exhibits the usual perspective effect of near objects appearing large and far objects appearing small: when a person or object is close to the camera, it occupies more of the captured frame, and when it is far away, it occupies less. If the captured person or object were placed into a virtual scene directly at its captured proportion, the scene's proportions would become unbalanced and distorted.
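This near-large, far-small behavior is just pinhole-camera projection: apparent size falls off inversely with distance. A minimal numeric sketch, where the focal factor of 10 pixels per centimeter-at-one-meter is a hypothetical value, not from the description:

```python
def apparent_height_px(real_height_cm, distance_m, focal):
    """Pinhole model: projected size is inversely proportional to
    the subject's distance from the camera."""
    return real_height_cm * focal / distance_m

near = apparent_height_px(170, 1, 10)  # a 170 cm person at 1 m
far  = apparent_height_px(170, 2, 10)  # the same person at 2 m: half the height
```

Doubling the distance halves the captured pixel height, which is exactly the scale mismatch the proposed device corrects.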
Disclosure of Invention
Therefore, a technical solution for controlling display size in VR interaction is needed, to resolve the scale imbalance and distortion that arise when portraits captured at inconsistent proportions are merged into a common virtual scene.
To achieve the above object, the inventors provide an apparatus for display size control in VR interaction, the apparatus comprising at least one VR device and a server; the VR device is connected to the server; the VR device comprises an image acquisition unit, a storage unit, a portrait recognition unit, a portrait adjustment unit, a scene acquisition unit, an image fusion unit, a display control unit and a display unit; the storage unit comprises a first storage unit and a second storage unit;
the image acquisition unit is used for acquiring a first image positioned at a preset position, and the first image comprises first portrait information;
the portrait identification unit is used for extracting first portrait information from the first image and storing the first portrait information parameters in the first storage unit;
the image acquisition unit is further used for acquiring a second image, the second image is image information of the position of the current user, and the portrait identification unit is used for extracting second portrait information from the second image and storing second portrait information parameters in a second storage unit;
the portrait adjusting unit is used for determining a scaling ratio according to the proportional relation between the second portrait information parameter and the first portrait information parameter, scaling the second portrait information parameter according to the scaling ratio, and generating third portrait information according to the scaled portrait information parameter;
the scene acquisition unit is used for acquiring scene information from the server, and the image fusion unit is used for synthesizing the scene information and the third portrait information to obtain image synthesis information;
the display control unit is used for transmitting the image synthesis information to the display unit for displaying.
Further, the portrait adjusting unit is further configured to scale the third portrait information to a proportion corresponding to the current scene information according to a corresponding relationship between the scene information and the portrait information, so as to obtain fourth portrait information; and the image fusion unit is used for synthesizing the scene information and the fourth portrait information to obtain image synthesis information.
Further, the number of the VR devices is multiple, and the VR devices comprise a first VR device and a second VR device, and the first VR device and the second VR device are connected; the VR device further includes a communication unit; the first VR device comprises a first communication unit, a first portrait identification unit and a first portrait adjustment unit; the second VR device comprises a second communication unit, a second portrait recognition unit and a second portrait adjusting unit;
the first communication unit is used for sending the second portrait information extracted by the first portrait identification unit to the second communication unit;
the second portrait adjusting unit is used for determining a scaling ratio for each of the two pieces of second portrait information (the one received by the second communication unit from the first portrait recognition unit, and the one extracted by the second portrait recognition unit) according to its proportional relation to the corresponding first portrait information parameter, and for scaling each piece by its ratio to obtain third portrait information containing both scaled portraits;
or the first communication unit is used for receiving the second portrait information extracted by the second portrait recognition unit and sent by the second communication unit;
the first portrait adjusting unit is used for determining a scaling ratio for each of the two pieces of second portrait information (the one extracted by the second portrait recognition unit and sent by the second communication unit, and the one extracted by the first portrait recognition unit) according to its proportional relation to the corresponding first portrait information parameter, and for scaling each piece by its ratio to obtain third portrait information containing both scaled portraits.
The inventor also provides a method for controlling the display size in VR interaction, which is applied to a device for controlling the display size in VR interaction, and the device comprises at least one VR device and a server; the VR equipment is connected with a server; the VR equipment comprises an image acquisition unit, a storage unit, a portrait recognition unit, a portrait adjustment unit, a scene acquisition unit, an image fusion unit, a display control unit and a display unit; the storage unit comprises a first storage unit and a second storage unit; the method comprises the following steps:
the method comprises the steps that an image acquisition unit acquires a first image located at a preset position, wherein the first image comprises first portrait information;
the portrait identification unit extracts first portrait information from the first image and stores the first portrait information parameters in the first storage unit;
the image acquisition unit acquires a second image, the portrait identification unit extracts second portrait information from the second image, and the second portrait information parameters are stored in the second storage unit; the second image is image information of the position of the current user;
the portrait adjusting unit determines a scaling ratio according to the proportional relation between the second portrait information parameter and the first portrait information parameter, scales the second portrait information parameter according to the scaling ratio, and generates third portrait information according to the scaled portrait information parameter;
the scene obtaining unit obtains scene information from the server, and the image fusion unit synthesizes the scene information and the third portrait information to obtain image synthesis information;
the display control unit transmits the image synthesis information to the display unit for display.
Further, the method further comprises:
the portrait adjusting unit scales the third portrait information into a proportion corresponding to the current scene information according to the corresponding relation between the scene information and the portrait information to obtain fourth portrait information;
and the image fusion unit synthesizes the scene information and the fourth portrait information to obtain image synthesis information.
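The optional extra step, rescaling the already-normalized portrait to the proportion the current scene calls for, can be sketched as follows; the single `scene_scale` factor is a hypothetical stand-in for the stored correspondence between scene information and portrait information:

```python
def fit_to_scene(portrait_params, scene_scale):
    """Scale a normalized (third) portrait by the scene's display
    proportion to produce the fourth portrait information."""
    return {k: v * scene_scale for k, v in portrait_params.items()}

third  = {"height_px": 1000.0, "width_px": 400.0}
fourth = fit_to_scene(third, 0.5)  # a scene that renders people at half standard size
```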
Further, the number of the VR devices is multiple, and the VR devices comprise a first VR device and a second VR device, and the first VR device and the second VR device are connected; the VR device further includes a communication unit; the first VR device comprises a first communication unit, a first portrait identification unit and a first portrait adjustment unit; the second VR device comprises a second communication unit, a second portrait recognition unit and a second portrait adjusting unit; the method comprises the following steps:
the first communication unit sends the second portrait information extracted by the first portrait identification unit to the second communication unit;
the second portrait adjusting unit determines a scaling ratio for each of the two pieces of second portrait information (the one received by the second communication unit from the first portrait recognition unit, and the one extracted by the second portrait recognition unit) according to its proportional relation to the corresponding first portrait information parameter, and scales each piece by its ratio to obtain third portrait information containing both scaled portraits;
or the first communication unit receives the second portrait information extracted by the second portrait recognition unit and sent by the second communication unit;
the first portrait adjusting unit determines a scaling ratio for each of the two pieces of second portrait information (the one extracted by the second portrait recognition unit and sent by the second communication unit, and the one extracted by the first portrait recognition unit) according to its proportional relation to the corresponding first portrait information parameter, and scales each piece by its ratio to obtain third portrait information containing both scaled portraits.
The inventors also provide another apparatus for display size control in VR interaction, the apparatus comprising a plurality of VR devices; the VR devices include a third VR device and a fourth VR device; the third VR device comprises a third image acquisition unit, a third storage unit, a third portrait identification unit, a third portrait adjustment unit, a third image fusion unit, a third display control unit, a third display unit, a scene generation unit and a third communication unit; the fourth VR device comprises a fourth image acquisition unit, a fourth storage unit, a fourth portrait identification unit, a fourth portrait adjustment unit, a fourth image fusion unit, a fourth display control unit, a fourth display unit, a fourth scene acquisition unit and a fourth communication unit;
the third image acquisition unit is used for acquiring a fifth image of the first user at a preset position, and the fifth image comprises fifth portrait information;
the third portrait identification unit is used for extracting fifth portrait information from a fifth image and storing fifth portrait information parameters in a third storage unit;
the third image acquisition unit is further used for acquiring a sixth image, the sixth image is image information of the current position of the first user, and the third portrait identification unit is used for extracting sixth portrait information from the sixth image and storing sixth portrait information parameters in the third storage unit;
the third communication unit is used for receiving ninth portrait information of the second user, which is sent by the fourth communication unit, wherein the ninth portrait information comprises a portrait information parameter corresponding to the ninth portrait information;
the third portrait adjusting unit is used for determining a corresponding scaling according to the proportional relation between a fifth portrait information parameter and a sixth portrait information parameter, scaling the sixth portrait information parameter according to the scaling, determining a corresponding scaling according to the proportional relation between the fifth portrait information parameter and a ninth portrait information parameter, scaling the ninth portrait information parameter according to the scaling, and generating first portrait integrated information, wherein the first portrait integrated information comprises the scaled sixth portrait information and the scaled ninth portrait information;
the scene generating unit is used for generating scene information, and the third image fusion unit is used for synthesizing the scene information and the first portrait integrated information to obtain first image synthetic information;
the third display control unit is used for transmitting the first image synthesis information to a third display unit for display;
or the fourth image acquisition unit is used for acquiring an eighth image of the second user at a preset position, wherein the eighth image comprises eighth portrait information;
the fourth portrait identification unit is used for extracting eighth portrait information from the eighth image and storing eighth portrait information parameters in the fourth storage unit;
the fourth image acquisition unit is further used for acquiring a ninth image, the ninth image is image information of the current position of the second user, and the fourth portrait identification unit is used for extracting ninth portrait information from the ninth image and storing ninth portrait information parameters in the fourth storage unit;
the fourth communication unit is used for sending ninth portrait information to the third communication unit and receiving scene information and sixth portrait information sent by the third communication unit, wherein the sixth portrait information comprises a portrait information parameter corresponding to the sixth portrait information;
the fourth portrait adjusting unit is used for determining a corresponding scaling according to the proportional relation between the eighth portrait information parameter and the ninth portrait information parameter, scaling the ninth portrait information parameter according to the corresponding scaling, determining a corresponding scaling according to the proportional relation between the sixth portrait information parameter and the eighth portrait information parameter, scaling the sixth portrait information parameter according to the corresponding scaling, and generating second portrait integrated information, wherein the second portrait integrated information comprises the scaled sixth portrait information and the scaled ninth portrait information;
the fourth scene acquisition unit is used for acquiring the scene information generated by the scene generation unit, and the fourth image fusion unit is used for synthesizing the scene information and the second portrait integrated information to obtain second image synthetic information;
the fourth display control unit is used for transmitting the second image synthesis information to the fourth display unit for display.
Further, the third portrait adjusting unit is further configured to scale the first portrait integrated information to a proportion corresponding to the current scene information according to a corresponding relationship between the scene information and the first portrait integrated information, so as to obtain third portrait integrated information; the third image fusion unit is used for synthesizing the scene information and the third portrait integrated information to obtain first image synthetic information;
the fourth portrait adjusting unit is further configured to scale the second portrait integrated information to a proportion corresponding to the current scene information according to a corresponding relationship between the scene information and the second portrait integrated information, so as to obtain fourth portrait integrated information; and the fourth image fusion unit is used for synthesizing the scene information and the fourth portrait integrated information to obtain second image synthetic information.
The inventor also provides another method for controlling the display size in VR interaction, which is applied to an apparatus for controlling the display size in VR interaction, and the apparatus comprises a plurality of VR devices; the VR devices include a third VR device and a fourth VR device; the VR device further includes a communication unit; the third VR device comprises a third image acquisition unit, a third storage unit, a third portrait identification unit, a third portrait adjustment unit, a third image fusion unit, a third display control unit, a third display unit and a scene generation unit; the fourth VR device comprises a fourth image acquisition unit, a fourth storage unit, a fourth portrait identification unit, a fourth portrait adjustment unit, a fourth image fusion unit, a fourth display control unit, a fourth display unit and a fourth scene acquisition unit; the method comprises the following steps:
a third image acquisition unit acquires a fifth image of the first user at a preset position, wherein the fifth image comprises fifth portrait information;
the third portrait identification unit extracts fifth portrait information from the fifth image and stores the fifth portrait information parameters in the third storage unit;
the third image acquisition unit acquires a sixth image, the third portrait identification unit extracts sixth portrait information from the sixth image, and stores the sixth portrait information parameters in the third storage unit, wherein the sixth image is the image information of the current position of the first user;
the third communication unit receives ninth portrait information of the second user, which is sent by the fourth communication unit, wherein the ninth portrait information comprises a portrait information parameter corresponding to the ninth portrait information;
the third portrait adjusting unit determines a corresponding scaling according to the proportional relation between the fifth portrait information parameter and the sixth portrait information parameter, scales the sixth portrait information parameter according to the scaling, determines a corresponding scaling according to the proportional relation between the fifth portrait information parameter and the ninth portrait information parameter, and scales the ninth portrait information parameter according to the scaling to generate first portrait integrated information, wherein the first portrait integrated information comprises the scaled sixth portrait information and the scaled ninth portrait information;
the scene generating unit generates scene information, and the third image fusion unit is used for synthesizing the scene information and the first portrait integrated information to obtain first image synthetic information;
the third display control unit transmits the first image synthesis information to the third display unit for display;
alternatively, the method comprises:
the fourth image acquisition unit acquires an eighth image of the second user at a preset position, wherein the eighth image comprises eighth portrait information;
the fourth portrait recognition unit extracts eighth portrait information from the eighth image and stores the eighth portrait information parameters in the fourth storage unit;
a fourth image acquisition unit acquires a ninth image, a fourth portrait identification unit extracts ninth portrait information from the ninth image and stores ninth portrait information parameters in a fourth storage unit, and the ninth image is image information of the current position of a second user;
the fourth communication unit sends ninth portrait information to the third communication unit, and receives scene information and sixth portrait information sent by the third communication unit, wherein the sixth portrait information comprises a portrait information parameter corresponding to the sixth portrait information;
the fourth portrait adjusting unit determines a corresponding scaling according to the proportional relation between the eighth portrait information parameter and the ninth portrait information parameter, scales the ninth portrait information parameter according to the corresponding scaling, determines a corresponding scaling according to the proportional relation between the sixth portrait information parameter and the eighth portrait information parameter, scales the sixth portrait information parameter according to the corresponding scaling, and generates second portrait integrated information, wherein the second portrait integrated information comprises the scaled sixth portrait information and the scaled ninth portrait information;
the fourth scene acquisition unit acquires the scene information generated by the scene generation unit, and the fourth image fusion unit synthesizes the scene information and the second portrait integrated information to obtain second image synthetic information;
the fourth display control unit transmits the second image composition information to the fourth display unit for display.
Further, the method comprises:
the third portrait adjusting unit scales the first portrait integrated information into a proportion corresponding to the current scene information according to the corresponding relation between the scene information and the first portrait integrated information to obtain third portrait integrated information;
the third image fusion unit synthesizes the scene information and the third portrait integrated information to obtain first image synthesis information;
the fourth portrait adjusting unit scales the second portrait integrated information into the proportion corresponding to the current scene information according to the corresponding relation between the scene information and the second portrait integrated information to obtain fourth portrait integrated information; and the fourth image fusion unit synthesizes the scene information and the fourth portrait integrated information to obtain second image synthesis information.
The invention has the following advantages: the user's portrait information is captured in advance at a unified preset position, establishing a preset standard size proportion. Thereafter, however the user is positioned while using the VR device, the captured portrait information can be rescaled to the standard proportion, composited with the scene information, and transmitted to the VR device for display. The invention thus resolves the inconsistency between the true proportions of social participants located in different physical spaces when they are merged into a common virtual space; it is also portable and low-power, and has broad market prospects in the VR device application field.
Drawings
FIG. 1 is a diagram illustrating an apparatus for display size control in VR interaction in accordance with an embodiment of the present invention;
FIG. 2 is a diagram illustrating an apparatus for display size control in VR interaction in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of an apparatus for display size control in VR interaction in accordance with another embodiment of the present invention;
FIG. 4 is a flow diagram of a method for display size control in VR interaction in accordance with another embodiment of the present invention;
FIG. 5 is a flow chart of a method for display size control in VR interaction in accordance with another embodiment of the present invention;
FIG. 6 is a flow diagram of a method for display size control in VR interaction in accordance with another embodiment of the present invention;
FIG. 7 is a flow chart of a method for display size control in VR interaction in accordance with another embodiment of the present invention;
description of reference numerals:
101. a VR device; 1011. a first VR device; 1012. a second VR device; 1013. a third VR device; 1014. a fourth VR device;
102. a server;
103. an image acquisition unit; 113. a first image acquisition unit; 123. a second image acquisition unit; 133. a third image acquisition unit; 143. a fourth image acquisition unit;
104. a storage unit; 114. a first storage unit; 124. a second storage unit; 134. a third storage unit; 144. a fourth storage unit;
105. a portrait recognition unit; 115. a first portrait recognition unit; 125. a second portrait recognition unit; 135. a third portrait recognition unit; 145. a fourth portrait recognition unit;
106. a portrait adjusting unit; 116. a first portrait adjusting unit; 126. a second portrait adjusting unit; 136. a third portrait adjusting unit; 146. a fourth portrait adjusting unit;
107. a scene acquisition unit; 117. a first scene acquisition unit; 127. a second scene acquisition unit; 147. a fourth scene acquisition unit;
108. an image fusion unit; 118. a first image fusion unit; 128. a second image fusion unit; 138. a third image fusion unit; 148. a fourth image fusion unit;
109. a display control unit; 119. a first display control unit; 129. a second display control unit; 139. a third display control unit; 149. a fourth display control unit;
110. a display unit; 120. a first display unit; 130. a second display unit; 140. a third display unit; 150. a fourth display unit;
111. a communication unit; 121. a first communication unit; 131. a second communication unit; 141. a third communication unit; 151. a fourth communication unit;
112. a scene generation unit.
Detailed Description
To explain the technical content, structural features, objects and effects of the technical solutions in detail, the following description is given with reference to the accompanying drawings and the embodiments.
Fig. 1 is a schematic diagram of a device for controlling display size in VR interaction according to an embodiment of the present invention. The apparatus comprises at least one VR device 101 and a server 102; the VR device 101 is connected with a server 102; the VR device 101 comprises an image acquisition unit 103, a storage unit 104, a portrait identification unit 105, a portrait adjustment unit 106, a scene acquisition unit 107, an image fusion unit 108, a display control unit 109 and a display unit 110; the memory cell 104 includes a first memory cell 114 and a second memory cell 124.
The image acquisition unit 103 is configured to acquire a first image at a preset position. The image acquisition unit is an electronic device with an image-capture function, such as a camera. The preset position is a position at a preset distance from the image acquisition unit, and the preset distance can be set according to actual needs. In use, the user stands at the preset distance so that the camera can capture the first image. The first image comprises first portrait information, i.e. the captured body image of the user at the preset position.
The portrait identification unit 105 is configured to extract the first portrait information from the first image and store the first portrait information parameters in the first storage unit 114. The portrait identification unit may extract the first portrait information as follows: the human body in the first image is identified through a human body identification algorithm; all parts of the image except the human body are filled with a single solid color (such as pure blue) to facilitate a color_key (color keying) operation; the color keying operation then separates the human body from the rest of the first image, yielding the first portrait information. The first portrait information parameters are the parameters characterizing the first portrait information, including a height parameter, a body width parameter and the like.
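For illustration only, the extraction step described above can be sketched in Python. The human body identification algorithm itself is not shown; `body_mask`, the list-based pixel layout, and all function names are assumptions made for the sketch, not part of the disclosed hardware.

```python
# Minimal sketch of the color-key portrait extraction, assuming an RGB image
# stored as a list of rows of (r, g, b) tuples and a boolean `body_mask`
# produced by some human body identification algorithm (not shown).
KEY_COLOR = (0, 0, 255)  # pure blue, the assumed single solid fill color

def extract_portrait(image, body_mask):
    """Fill non-body pixels with the key color, then key them out.

    Keyed-out background pixels are marked None (transparent), so a later
    fusion step can overlay the portrait onto scene pixels.
    """
    portrait = []
    for img_row, mask_row in zip(image, body_mask):
        out_row = []
        for pixel, is_body in zip(img_row, mask_row):
            # color_key operation: background pixels are dropped
            out_row.append(pixel if is_body else None)
        portrait.append(out_row)
    return portrait

def portrait_parameters(body_mask):
    """Derive the stored portrait parameters: bounding-box height and width."""
    ys = [y for y, row in enumerate(body_mask) if any(row)]
    xs = [x for x in range(len(body_mask[0]))
          if any(row[x] for row in body_mask)]
    return {"height": ys[-1] - ys[0] + 1, "width": xs[-1] - xs[0] + 1}
```

A real implementation would operate on camera frame buffers rather than nested lists, but the control flow (fill, key, measure) is the same.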
The image acquisition unit 103 is further configured to acquire a second image, where the second image is image information of the user's current position, and the portrait identification unit 105 is configured to extract second portrait information from the second image and store the second portrait information parameters in the second storage unit 124. In practice, when the camera acquires the second image the user is not necessarily at the preset position, so the scale of the portrait in the second image does not necessarily match that in the first image and must be adjusted. The portrait identification unit extracts the second portrait information from the second image in a manner similar to the extraction of the first portrait information, which is not repeated here. The second portrait information parameters include the height parameter, body width parameter and the like of the user in the acquired image information.
The portrait adjustment unit 106 is configured to determine a scaling ratio according to the proportional relationship between the second portrait information parameters and the first portrait information parameters, scale the second portrait information parameters accordingly, and generate third portrait information from the scaled parameters. For example, if the height parameter in the extracted first portrait information is 1000 pixels and the height parameter in the currently acquired second portrait information is 800 pixels, the ratio of the second portrait information to the first portrait information with respect to the height parameter is 0.8. The portrait adjustment unit can restore the second portrait information parameters to the size of the first portrait information parameters according to the scaling ratio to generate the third portrait information, i.e., the 800-pixel height is restored to 1000 pixels for display. Alternatively, the scaling ratio corresponding to the height parameter may be determined from a preset correspondence between the height-parameter ratio and the scaling ratio; for example, when the ratio of the second portrait information to the first portrait information with respect to the height parameter is 0.8, the scaling ratio is determined to be 1.5, and the height parameter of the current second portrait information is enlarged by a factor of 1.5 to generate the third portrait information. The body width parameter is scaled in the same way as the height parameter and is not repeated here. Of course, in other embodiments the portrait information parameters may include other parameters, such as the proportion between head and body, and the portrait adjustment unit may likewise scale these parameters to generate the third portrait information.
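The restore-to-reference variant of the adjustment above can be sketched as follows; the parameter-dictionary layout and function name are illustrative assumptions.

```python
# Sketch of the portrait adjustment: the ratio between the freshly captured
# second portrait parameters and the stored first (calibration) portrait
# parameters yields the factor that restores the portrait to its calibrated
# size, e.g. an 800-pixel height is restored to 1000 pixels.
def scale_to_reference(second_params, first_params):
    """Return third-portrait parameters: second scaled back to first's size."""
    ratio = second_params["height"] / first_params["height"]  # e.g. 800/1000 = 0.8
    scale = 1.0 / ratio                                       # restore factor, 1.25
    return {key: round(value * scale) for key, value in second_params.items()}
```

The lookup-table variant mentioned above would simply replace `1.0 / ratio` with a table indexed by `ratio`.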
The scene acquisition unit 107 is configured to acquire scene information from the server, and the image fusion unit 108 is configured to synthesize the scene information and the third portrait information to obtain image synthesis information. In this embodiment, the VR device includes a communication unit 111; the communication unit and the server may be connected in a wired or wireless manner, the scene information may be stored in advance in a storage unit on the server side, and when the server receives a scene information acquisition instruction sent by the VR device, it sends the scene information to the VR device. The scene information is the background data stream of the VR data stream, that is, all data other than the portrait part; it may be, for example, a virtual space scene (school, outdoor, indoor, scenic spot, etc.). The image fusion unit 108 may synthesize the scene information and the third portrait information through an image fusion algorithm, which is widely applied in the VR field at present and is not described here again.
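At its simplest, the fusion step can be sketched as an overlay: portrait pixels that survived color keying replace the scene pixels beneath them. Real VR image-fusion algorithms are considerably more involved; this sketch, including the convention that `None` marks a keyed-out (transparent) pixel, is an illustrative assumption.

```python
# Minimal image-fusion sketch: composite a keyed portrait over a scene.
# Both inputs are lists of rows of pixels; None in the portrait means
# "transparent, keep the scene pixel".
def fuse(scene, portrait):
    """Overlay portrait pixels (None = transparent) onto scene pixels."""
    return [[p if p is not None else s
             for s, p in zip(scene_row, portrait_row)]
            for scene_row, portrait_row in zip(scene, portrait)]
```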
The display control unit 109 is configured to transmit the image synthesis information to the display unit 110 for display. Thus, when wearing the VR device, the user sees image synthesis information containing the scaled portrait, which effectively enhances the user experience.
In practice, because the distance between the user and the camera varies, the portrait information in the acquired second image varies accordingly. To resolve this inconsistency of proportion, the portrait information parameters in the second portrait must be adjusted and scaled so that different portraits presented in the same virtual space scene are consistent in size. For different spatial scenes, a proportional relationship also exists between portrait information and scene information, so the adjusted portrait information parameters (i.e., the third portrait information parameters) need to be adjusted again to meet the requirements of the current scene information. Therefore, in some embodiments, the portrait adjustment unit 106 is further configured to scale the third portrait information to the proportion corresponding to the current scene information according to the correspondence between scene information and portrait information, obtaining fourth portrait information; the image fusion unit 108 is configured to synthesize the scene information and the fourth portrait information to obtain the image synthesis information.
For example, consider a spatial scene of a scenic spot containing a number of tall trees whose pixel height in the scene is 1000 pixels, while the height parameter of the adjusted third portrait information is also 1000 pixels. If the third portrait information were synthesized with the scene information directly, the portrait in the image synthesis information would appear the same size as the trees, giving the user a poor sensory experience (assuming, as is generally the case, that the trees are much taller than a human body). In this case, the portrait adjustment unit 106 is further configured to scale the height parameter of the third portrait information from 1000 pixels down to 100 pixels according to the correspondence (assumed to be 1:0.1) between the tree height in the scene information and the height parameter of the portrait information, obtaining fourth portrait information. The image fusion unit then synthesizes the scene information and the fourth portrait information to obtain the image synthesis information. In the displayed image synthesis information, the proportion between portrait and scene thus better matches real-life scenes, giving the user an immersive feeling and effectively enhancing the user experience.
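The scene-relative rescaling in the tree example can be sketched as follows; the 1:0.1 person-to-tree ratio, the parameter layout, and the function name all follow the example above and are assumptions.

```python
# Sketch of the second adjustment: rescale the third portrait so the person
# has the proportion the current scene expects relative to a scene reference
# object (here, a tree of known pixel height).
def fit_to_scene(third_params, scene_reference_height, person_to_reference=0.1):
    """Return fourth-portrait parameters sized for the current scene."""
    target_height = scene_reference_height * person_to_reference  # 1000 * 0.1
    scale = target_height / third_params["height"]
    return {key: round(value * scale) for key, value in third_params.items()}
```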
Fig. 2 is a schematic diagram of a display size control apparatus in VR interaction according to an embodiment of the present invention. The number of the VR devices 101 is multiple, and the VR devices include a first VR device 1011 and a second VR device 1012, and the first VR device 1011 and the second VR device 1012 are connected; the first VR device 1011 includes a first image capturing unit 113, a first portrait identifying unit 115, a first portrait adjusting unit 116, a first scene acquiring unit 117, a first image fusing unit 118, a first display control unit 119, a first display unit 120, and a first communication unit 121. The second VR device 1012 includes a second image capturing unit 123, a second portrait identifying unit 125, a second portrait adjusting unit 126, a second scene acquiring unit 127, a second image fusing unit 128, a second display control unit 129, a second display unit 130, and a second communication unit 131.
In this embodiment, besides interacting with the server (to acquire scene information), each VR device also needs to interact with other VR devices, so that a given VR device can display both its own user's portrait information and that of other users. For convenience of explanation, the following takes the example in which user A wears the first VR device and user B wears the second VR device to further explain the interaction between VR devices.
For the first VR device, the first image acquisition unit 113 acquires image information of user A at the preset position, and the first portrait identification unit 115 extracts and stores the first portrait information corresponding to user A. During use, the first image acquisition unit 113 acquires image information of user A's current position, and the first portrait identification unit 115 extracts and stores the second portrait information corresponding to user A from the acquired second image. So that the first display control unit 119 can include user B's portrait in the image synthesis information transmitted to the first display unit 120, the first communication unit 121 is further configured to receive the second portrait information extracted by the second portrait identification unit 125 and sent by the second communication unit 131. The first portrait adjustment unit 116 is configured to determine, according to their respective proportional relationships with the first portrait information parameters, the scaling ratios for the second portrait information sent by the second communication unit (i.e., user B's second portrait information) and the second portrait information extracted by the first portrait identification unit (i.e., user A's second portrait information), and to scale each according to its scaling ratio to obtain third portrait information containing the scaled first portrait and second portrait.
Then, the first scene acquisition unit 117 acquires scene information from the server, and the first image fusion unit 118 synthesizes the acquired scene information with the third portrait information containing the adjusted first portrait (user A's portrait) and second portrait (user B's portrait) into image synthesis information, which is sent to the first display unit for display. Through the first VR device, user A can observe not only himself in the virtual space scene but also the other users imaged in the same virtual space scene, with all portraits at the same scale, which effectively enhances the user experience.
For the second VR device, the second image acquisition unit 123 acquires image information of user B at the preset position, and the second portrait identification unit 125 extracts and stores the first portrait information corresponding to user B. During use, the second image acquisition unit 123 acquires image information of user B's current position, and the second portrait identification unit 125 extracts and stores the second portrait information corresponding to user B from the acquired second image. So that the second display control unit 129 can include user A's portrait in the image synthesis information transmitted to the second display unit 130, the second communication unit 131 is further configured to receive the second portrait information extracted by the first portrait identification unit 115 and sent by the first communication unit 121. The second portrait adjustment unit 126 is configured to determine, according to their respective proportional relationships with the first portrait information parameters, the scaling ratios for the second portrait information sent by the first communication unit (i.e., user A's second portrait information) and the second portrait information extracted by the second portrait identification unit (i.e., user B's second portrait information), and to scale each according to its scaling ratio to obtain third portrait information containing the scaled first portrait and second portrait.
Then, the second scene acquisition unit 127 acquires scene information from the server, and the second image fusion unit 128 synthesizes the acquired scene information with the third portrait information containing the adjusted first portrait (user A's portrait) and second portrait (user B's portrait) into image synthesis information, which is sent to the second display unit for display. Through the second VR device, user B can observe not only himself in the virtual space scene but also the other users imaged in the same virtual space scene, with all portraits at the same scale, which effectively enhances the user experience.
In some embodiments, the first VR device is a master device and the second VR device is a slave device, so that the second portrait information corresponding to users A and B is adjusted according to its proportional relationship with the first portrait information corresponding to user A. In this embodiment, the first communication unit sends the first portrait information along with the second portrait information when transmitting to the second communication unit.
In other embodiments, if the second VR device is the master device and the first VR device is the slave device, the second portrait information corresponding to users A and B is adjusted according to its proportional relationship with the first portrait information corresponding to user B. In this embodiment, the second communication unit sends the first portrait information along with the second portrait information when transmitting to the first communication unit.
In other embodiments, the second portrait information corresponding to users A and B may instead each be adjusted according to its proportional relationship with preset standard portrait information to obtain the third portrait information. In short, it is only necessary to ensure that the adjusted portrait information is at the same proportional size.
In some embodiments, when the first VR device is the master device, all other VR devices are slave devices. The first communication unit broadcasts the image synthesis information to the communication units of the other VR devices; after receiving it, each of those communication units can send the image synthesis information directly to its device's display unit for display. Because the received image synthesis information is identical, the virtual space scene and portrait information presented by every VR device are guaranteed to be coordinated and unified, greatly enhancing the user experience. Moreover, in this embodiment the first communication unit does not need to broadcast user A's second portrait information to the other VR devices for adjustment calculation: the final image synthesis information is computed on one VR device (e.g., the first VR device) and sent to the others, effectively saving computation.
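The master/slave arrangement just described can be sketched as follows; the device classes and method names are illustrative assumptions, and the actual transport (wired or wireless) is omitted.

```python
# Sketch of the master/slave broadcast: only the master computes the final
# image synthesis information, then broadcasts it; each slave sends the
# received frame straight to its display unit without recomputation.
class SlaveVRDevice:
    def __init__(self):
        self.displayed = None

    def receive(self, frame):
        # image synthesis information goes directly to the display unit
        self.displayed = frame


class MasterVRDevice:
    def __init__(self, slaves):
        self.slaves = slaves

    def broadcast(self, frame):
        for slave in self.slaves:  # one computation, many identical displays
            slave.receive(frame)
        return frame
```

Because every device displays the same frame, scene and portraits stay coordinated across headsets by construction.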
Fig. 4 is a flowchart of a method for controlling display size in VR interaction according to another embodiment of the present invention. The method is applied to a device for controlling the display size in VR interaction, and the device comprises at least one VR device and a server; the VR equipment is connected with a server; the VR equipment comprises an image acquisition unit, a storage unit, a portrait recognition unit, a portrait adjustment unit, a scene acquisition unit, an image fusion unit, a display control unit and a display unit; the storage unit comprises a first storage unit and a second storage unit; the method comprises the following steps:
First, in step S201, the image acquisition unit acquires a first image at a preset position, where the first image comprises first portrait information. The image acquisition unit is an electronic device with an image acquisition function, such as a camera. The preset position is a position at a preset distance from the image acquisition unit, and the preset distance can be set according to actual needs. In use, the user stands at the preset distance so that the camera can acquire the first image. The first portrait information is the acquired human body image of the user at the preset position.
Then, in step S202, the portrait identification unit extracts the first portrait information from the first image and stores the first portrait information parameters in the first storage unit. The portrait identification unit may extract the first portrait information as follows: the human body in the first image is identified through a human body identification algorithm; all parts of the image except the human body are filled with a single solid color (such as pure blue) to facilitate a color_key (color keying) operation; the color keying operation then separates the human body from the rest of the first image, yielding the first portrait information. The first portrait information parameters are the parameters characterizing the first portrait information, including a height parameter, a body width parameter and the like.
Then, in step S203, the image acquisition unit acquires a second image, and the portrait identification unit extracts second portrait information from the second image and stores the second portrait information parameters in the second storage unit; the second image is image information of the user's current position. In practice, when the camera acquires the second image the user is not necessarily at the preset position, so the scale of the portrait in the second image does not necessarily match that in the first image and must be adjusted. The portrait identification unit extracts the second portrait information from the second image in a manner similar to the extraction of the first portrait information, which is not repeated here. The second portrait information parameters include the height parameter, body width parameter and the like of the user in the acquired image information.
Then, in step S204, the portrait adjustment unit determines a scaling ratio according to the proportional relationship between the second portrait information parameters and the first portrait information parameters, scales the second portrait information parameters accordingly, and generates third portrait information from the scaled parameters. For example, if the height parameter in the extracted first portrait information is 1000 pixels and the height parameter in the currently acquired second portrait information is 800 pixels, the ratio of the second portrait information to the first portrait information with respect to the height parameter is 0.8. The portrait adjustment unit can restore the second portrait information parameters to the size of the first portrait information parameters according to the scaling ratio to generate the third portrait information, i.e., the 800-pixel height is restored to 1000 pixels for display. Alternatively, the scaling ratio corresponding to the height parameter may be determined from a preset correspondence between the height-parameter ratio and the scaling ratio; for example, when the ratio of the second portrait information to the first portrait information with respect to the height parameter is 0.8, the scaling ratio is determined to be 1.5, and the height parameter of the current second portrait information is enlarged by a factor of 1.5 to generate the third portrait information. The body width parameter is scaled in the same way as the height parameter and is not repeated here. Of course, in other embodiments the portrait information parameters may include other parameters, such as the proportion between head and body, and the portrait adjustment unit may likewise scale these parameters to generate the third portrait information.
Then, in step S205, the scene acquisition unit acquires scene information from the server, and the image fusion unit synthesizes the scene information and the third portrait information to obtain image synthesis information. In this embodiment, the VR device includes a communication unit 111; the communication unit and the server may be connected in a wired or wireless manner, the scene information may be stored in advance in a storage unit on the server side, and when the server receives a scene information acquisition instruction sent by the VR device, it sends the scene information to the VR device. The scene information is the background data stream of the VR data stream, that is, all data other than the portrait part; it may be, for example, a virtual space scene (school, outdoor, indoor, scenic spot, etc.). The image fusion unit may synthesize the scene information and the third portrait information through an image fusion algorithm, which is widely applied in the VR field at present and is not described here again.
Then, in step S206, the display control unit transmits the image synthesis information to the display unit for display. Thus, when wearing the VR device, the user sees image synthesis information containing the scaled portrait, which effectively enhances the user experience.
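Compressing each portrait to its height parameter alone, steps S201–S206 can be chained into one end-to-end sketch; all names and the dictionary output format are illustrative assumptions.

```python
# End-to-end sketch of the method: calibrate (S201-S202), capture (S203),
# rescale to the calibrated size (S204), fuse with the scene (S205), and
# return the image synthesis information for display (S206).
def vr_display_pipeline(first_height, second_height, scene):
    scale = first_height / second_height          # S204: determine scaling ratio
    third_height = round(second_height * scale)   # generate third portrait info
    composed = {"scene": scene,                   # S205: synthesize with scene
                "portrait_height": third_height}
    return composed                               # S206: sent to the display unit
```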
In certain embodiments, the method further comprises: the portrait adjusting unit scales the third portrait information into a proportion corresponding to the current scene information according to the corresponding relation between the scene information and the portrait information to obtain fourth portrait information; the image fusion unit synthesizes the scene information and the fourth portrait information to obtain image synthesis information, so that the adjusted portrait can meet the requirements of the scene information, and the user experience is effectively improved.
In some embodiments, the VR devices are multiple in number, including a first VR device and a second VR device, the first VR device and the second VR device being connected; the VR device further includes a communication unit; the first VR device comprises a first communication unit, a first portrait identification unit and a first portrait adjustment unit; the second VR device comprises a second communication unit, a second portrait recognition unit and a second portrait adjusting unit; the method comprises the following steps:
the first communication unit sends the second portrait information extracted by the first portrait identification unit to the second communication unit;
the second portrait adjustment unit determines, according to their respective proportional relationships with the first portrait information parameters, the scaling ratios for the second portrait information received by the second communication unit (extracted by the first portrait identification unit) and the second portrait information extracted by the second portrait identification unit, and scales each according to its scaling ratio to obtain third portrait information containing the scaled first portrait and second portrait;
or the first communication unit receives the second portrait information extracted by the second portrait identification unit and sent by the second communication unit;
the first portrait adjustment unit determines, according to their respective proportional relationships with the first portrait information parameters, the scaling ratios for the second portrait information sent by the second communication unit (extracted by the second portrait identification unit) and the second portrait information extracted by the first portrait identification unit, and scales each according to its scaling ratio to obtain third portrait information containing the scaled first portrait and second portrait.
Fig. 3 is a schematic diagram of a display size control apparatus in VR interaction according to another embodiment of the present invention. The apparatus includes a plurality of VR devices; the VR devices include a third VR device 1013 and a fourth VR device 1014; the third VR device 1013 includes a third image capturing unit 133, a third storage unit 134, a third portrait identifying unit 135, a third portrait adjusting unit 136, a third image fusing unit 138, a third display control unit 139, a third display unit 140, a scene generating unit 112, and a third communication unit 141; the fourth VR device 1014 includes a fourth image capturing unit 143, a fourth storage unit 144, a fourth portrait recognizing unit 145, a fourth portrait adjusting unit 146, a fourth image fusing unit 148, a fourth display control unit 149, a fourth display unit 150, a fourth scene obtaining unit 147, and a fourth communication unit 151.
In this embodiment, the third VR device 1013 is a master device, the fourth VR device 1014 is a slave device, and the third VR device includes a scene generation unit 112 for generating scene information. The fourth VR device includes a fourth scene acquisition unit 147 for acquiring scene information generated by the scene generation unit 112 of the third VR device.
For the third VR device, the third image acquisition unit 133 is configured to acquire a fifth image of the first user at the preset position, where the fifth image comprises fifth portrait information. The image acquisition unit is an electronic device with an image acquisition function, such as a camera. The preset position is a position at a preset distance from the image acquisition unit, and the preset distance can be set according to actual needs. In use, the first user stands at the preset distance so that the camera acquires the fifth image. The fifth portrait information is the acquired human body image of the user at the preset position.
The third portrait identification unit 135 is configured to extract the fifth portrait information from the fifth image and store the fifth portrait information parameters in the third storage unit 134. The third portrait identification unit may extract the fifth portrait information as follows: the human body in the fifth image is identified through a human body identification algorithm; all parts of the image except the human body are filled with a single solid color (such as pure blue) to facilitate a color_key operation; the color keying operation then separates the human body from the rest of the fifth image, yielding the fifth portrait information. The fifth portrait information parameters are the parameters characterizing the fifth portrait information, including a height parameter, a body width parameter and the like.
The third image acquisition unit 133 is further configured to acquire a sixth image, where the sixth image is image information of the first user's current position, and the third portrait identification unit is configured to extract sixth portrait information from the sixth image and store the sixth portrait information parameters in the third storage unit. In practice, when the camera acquires the sixth image the user is not necessarily at the preset position, so the scale of the portrait in the sixth image does not necessarily match that in the fifth image and must be adjusted. The third portrait identification unit extracts the sixth portrait information in a manner similar to the extraction of the fifth portrait information, which is not repeated here. The sixth portrait information parameters include the height parameter, body width parameter and the like of the user in the acquired image information.
The third communication unit 141 is configured to receive ninth portrait information of the second user sent by the fourth communication unit 151, where the ninth portrait information includes a portrait information parameter corresponding to the ninth portrait information.
The third portrait adjusting unit 136 is configured to determine a corresponding scaling according to a proportional relationship between a fifth portrait information parameter and a sixth portrait information parameter, scale the sixth portrait information parameter according to the scaling, determine a corresponding scaling according to a proportional relationship between the fifth portrait information parameter and a ninth portrait information parameter, and scale the ninth portrait information parameter according to the scaling to generate first portrait integrated information, where the first portrait integrated information includes the scaled sixth portrait information and the scaled ninth portrait information.
The scene generation unit 112 is configured to generate scene information, and the third image fusion unit 138 is configured to synthesize the scene information and the first portrait integrated information to obtain first image synthesis information. The third image fusion unit 138 may perform this synthesis through an image fusion algorithm, which is widely applied in the VR field at present and is not described here again.
The third display control unit 139 is configured to transmit the first image synthesis information to the third display unit 140 for display. Through the third VR device, the first user can thus observe not only himself in the virtual space scene but also the other users imaged in the same virtual space scene, with all portraits at the same scale, which effectively enhances the user experience.
For the fourth VR device, the fourth image acquisition unit 143 is configured to acquire an eighth image of the second user at the preset position, where the eighth image comprises eighth portrait information. The fourth image acquisition unit is an electronic device with an image acquisition function, such as a camera. The preset position is a position at a preset distance from the image acquisition unit, and the preset distance can be set according to actual needs. In use, the second user stands at the preset distance so that the camera acquires the eighth image. The eighth portrait information is the acquired human body image of the user at the preset position.
The fourth portrait identification unit 145 is configured to extract the eighth portrait information from the eighth image and store the eighth portrait information parameter in the fourth storage unit. The fourth portrait identification unit 145 may extract the eighth portrait information as follows: identify the human body in the eighth image through a human-body recognition algorithm, fill every part of the image other than the human body with a single solid color (for example, pure blue) to facilitate a color-key operation, separate the human-body part of the eighth image from the remainder through the color-key operation, and thereby extract the eighth portrait information. The eighth portrait information parameter is the set of parameters characterizing the eighth portrait information, and includes a height parameter, a width parameter, and the like.
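The color-key separation described above can be sketched as follows; the key color, the array layout, and the bounding-box parameterization are assumptions made for illustration, not details fixed by the patent.

```python
import numpy as np

KEY = np.array([0, 0, 255], dtype=np.uint8)  # assumed key color: pure blue (RGB)

def extract_portrait_mask(image):
    """True where a pixel is NOT the key color, i.e. on the human body."""
    return ~np.all(image == KEY, axis=-1)

def portrait_params(mask):
    """Height and width of the portrait's bounding box, in pixels."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    if not rows.any():
        return (0, 0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return (int(bottom - top + 1), int(right - left + 1))

# A 6x6 image filled with the key color, with a 4x2 "body" region painted in.
image = np.tile(KEY, (6, 6, 1))
image[1:5, 2:4] = [120, 90, 60]

mask = extract_portrait_mask(image)
params = portrait_params(mask)  # height and width parameters of the portrait
```

The mask plays the role of the color-key separation, and the bounding-box measurements stand in for the stored height and width parameters.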
The fourth image acquisition unit 143 is further configured to acquire a ninth image, where the ninth image is image information of the current position of the second user, and the fourth portrait identification unit is configured to extract ninth portrait information from the ninth image and store the ninth portrait information parameter in the fourth storage unit;
the fourth communication unit 151 is configured to send ninth portrait information to the third communication unit, and is configured to receive scene information and sixth portrait information sent by the third communication unit, where the sixth portrait information includes a portrait information parameter corresponding to the sixth portrait information;
the fourth portrait adjusting unit 146 is configured to determine a scaling ratio from the proportional relationship between the eighth portrait information parameter and the ninth portrait information parameter and scale the ninth portrait information parameter accordingly, and to determine a scaling ratio from the proportional relationship between the sixth portrait information parameter and the eighth portrait information parameter and scale the sixth portrait information parameter accordingly, so as to generate second portrait integrated information, where the second portrait integrated information includes the scaled sixth portrait information and the scaled ninth portrait information;
the fourth scene obtaining unit 147 is configured to obtain the scene information generated by the scene generating unit, and the fourth image fusing unit is configured to synthesize the scene information and the second portrait integrated information to obtain second image synthesized information;
the fourth display control unit 149 is configured to transmit the second image synthesis information to the fourth display unit 150 for display. The second user can thus observe, through the fourth VR device, both himself and the portraits of the other users within the same virtual space scene, with all portraits rendered at the same scale, which effectively enhances the user experience.
In some embodiments, the third portrait adjusting unit 136 is further configured to scale the first portrait integrated information to the proportion corresponding to the current scene information according to the corresponding relationship between the scene information and the first portrait integrated information, so as to obtain third portrait integrated information; the third image fusion unit 138 is configured to synthesize the scene information and the third portrait integrated information to obtain first image synthesis information. The first portrait integrated information is all the portrait information to be adjusted on the third VR device, and includes the portrait information of the current position of the first user extracted by the third portrait identification unit and the portrait information to be merged into the spatial scene sent by other VR devices and received through the third communication unit.
The fourth portrait adjusting unit 146 is further configured to scale the second portrait integrated information to the proportion corresponding to the current scene information according to the corresponding relationship between the scene information and the second portrait integrated information, so as to obtain fourth portrait integrated information; the fourth image fusion unit is configured to synthesize the scene information and the fourth portrait integrated information to obtain second image synthesis information. The second portrait integrated information is all the portrait information to be adjusted on the fourth VR device, and includes the portrait information of the current position of the second user extracted by the fourth portrait identification unit and the portrait information to be merged into the spatial scene sent by other VR devices and received through the fourth communication unit.
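The scene-proportion adjustment in these embodiments can be sketched as below, under the assumption that the correspondence between the scene information and the portrait information reduces to a target on-screen portrait height for the current scene; the patent leaves the exact form of this correspondence open.

```python
# Assumed model: the scene information supplies a target portrait height,
# and every portrait in the integrated information is rescaled to it.

def fit_to_scene(integrated, scene_portrait_height):
    """Rescale every (height, width) portrait entry to the scene's scale."""
    fitted = {}
    for name, (h, w) in integrated.items():
        ratio = scene_portrait_height / h
        fitted[name] = (h * ratio, w * ratio)
    return fitted

# Portraits already normalized to a common reference height of 800 px.
first_integrated = {"sixth": (800.0, 300.0), "ninth": (800.0, 300.0)}
third_integrated = fit_to_scene(first_integrated, scene_portrait_height=400.0)
```

Each portrait keeps its aspect ratio while its absolute size is matched to the scene, which is what makes the composed image "more in line with the habit of human eyes."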
The invention also provides a method for controlling the display size in VR interaction, which is applied to a device for controlling the display size in VR interaction, and the device comprises a plurality of VR devices; the VR devices include a third VR device and a fourth VR device; the VR device further includes a communication unit; the third VR device comprises a third image acquisition unit, a third storage unit, a third portrait identification unit, a third portrait adjustment unit, a third image fusion unit, a third display control unit, a third display unit and a scene generation unit; the fourth VR device comprises a fourth image acquisition unit, a fourth storage unit, a fourth portrait recognition unit, a fourth portrait adjustment unit, a fourth image fusion unit, a fourth display control unit, a fourth display unit and a fourth scene acquisition unit.
In this embodiment, the third VR device is a master device and the fourth VR device is a slave device. The third VR device includes a scene generation unit configured to generate scene information. The fourth VR device includes a fourth scene acquisition unit configured to acquire scene information generated by the scene generation unit of the third VR device. Preferably, the number of master devices is one, and the number of slave devices is one or more.
As shown in fig. 5, for a master device (taking a third VR device as an example), the method includes:
First, in step S401, the third image acquisition unit acquires a fifth image of the first user located at a preset position, where the fifth image includes fifth portrait information;
then, in step S402, the third portrait identification unit extracts the fifth portrait information from the fifth image and stores the fifth portrait information parameter in the third storage unit;
then, in step S403, the third image acquisition unit acquires a sixth image, the third portrait identification unit extracts sixth portrait information from the sixth image, and the sixth portrait information parameter is stored in the third storage unit, where the sixth image is image information of the current position of the first user;
then, in step S404, the third portrait adjusting unit determines a scaling ratio from the proportional relationship between the fifth portrait information parameter and the sixth portrait information parameter and scales the sixth portrait information parameter accordingly, and determines a scaling ratio from the proportional relationship between the fifth portrait information parameter and the ninth portrait information parameter and scales the ninth portrait information parameter accordingly, so as to generate first portrait integrated information, where the first portrait integrated information includes the scaled sixth portrait information and the scaled ninth portrait information;
then, in step S405, the scene generating unit generates scene information, and the third image fusion unit synthesizes the scene information and the first portrait integrated information to obtain first image synthesis information;
finally, in step S406, the third display control unit transmits the first image synthesis information to the third display unit for display.
In certain embodiments, the method further comprises: the third communication unit receives ninth portrait information of the second user sent by the fourth communication unit, where the ninth portrait information includes the portrait information parameter corresponding to the ninth portrait information.
For a slave device (taking the fourth VR device as an example), the method includes:
First, in step S501, the fourth image acquisition unit acquires an eighth image of the second user located at a preset position, where the eighth image includes eighth portrait information;
then, in step S502, the fourth portrait identification unit extracts the eighth portrait information from the eighth image and stores the eighth portrait information parameter in the fourth storage unit;
then, in step S503, the fourth image acquisition unit acquires a ninth image, the fourth portrait identification unit extracts ninth portrait information from the ninth image, and the ninth portrait information parameter is stored in the fourth storage unit, where the ninth image is image information of the current position of the second user;
then, in step S504, the fourth communication unit sends the ninth portrait information to the third communication unit and receives the scene information and sixth portrait information sent by the third communication unit, where the sixth portrait information includes the portrait information parameter corresponding to the sixth portrait information;
then, in step S505, the fourth portrait adjusting unit determines a scaling ratio from the proportional relationship between the eighth portrait information parameter and the ninth portrait information parameter and scales the ninth portrait information parameter accordingly;
then, in step S506, the fourth portrait adjusting unit determines a scaling ratio from the proportional relationship between the sixth portrait information parameter and the eighth portrait information parameter and scales the sixth portrait information parameter accordingly to generate second portrait integrated information, where the second portrait integrated information includes the scaled sixth portrait information and the scaled ninth portrait information;
then, in step S507, the fourth scene obtaining unit obtains the scene information generated by the scene generating unit, and the fourth image fusion unit synthesizes the scene information and the second portrait integrated information to obtain second image synthesis information;
finally, in step S508, the fourth display control unit transmits the second image synthesis information to the fourth display unit for display.
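The slave-side scaling of steps S505 and S506 can be condensed into a toy sketch. The dict structure and numeric values are illustrative stand-ins, with (height, width) pairs again assumed as the parameter format, which the patent does not prescribe.

```python
def slave_round(eighth_ref, ninth, sixth_received):
    """Steps S505-S506: bring both portraits onto the slave's reference scale.

    eighth_ref: reference portrait captured at the preset position;
    ninth: slave user's current portrait; sixth_received: portrait from master.
    """
    ninth_scaled = tuple(v * eighth_ref[0] / ninth[0] for v in ninth)
    sixth_scaled = tuple(v * eighth_ref[0] / sixth_received[0] for v in sixth_received)
    # Second portrait integrated information generated in step S506.
    return {"sixth": sixth_scaled, "ninth": ninth_scaled}

eighth = (600, 220)   # slave reference portrait (preset position)
ninth = (300, 110)    # slave user's current portrait
sixth = (800, 300)    # portrait received from the master in step S504
second_integrated = slave_round(eighth, ninth, sixth)
```

Both portraits come out with the slave device's reference height, so the local user and the remote user appear at the same scale before fusion with the scene in step S507.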
In certain embodiments, the method comprises:
First, in step S601, the third portrait adjusting unit scales the first portrait integrated information to the proportion corresponding to the current scene information according to the corresponding relationship between the scene information and the first portrait integrated information, so as to obtain third portrait integrated information;
then, in step S602, the third image fusion unit synthesizes the scene information and the third portrait integrated information to obtain first image synthesis information;
then, in step S603, the fourth portrait adjusting unit scales the second portrait integrated information to the proportion corresponding to the current scene information according to the corresponding relationship between the scene information and the second portrait integrated information, so as to obtain fourth portrait integrated information;
finally, in step S604, the fourth image fusion unit synthesizes the scene information and the fourth portrait integrated information to obtain second image synthesis information. In the second image synthesis information, the proportion of the fourth portrait integrated information therefore matches the scene information, which better accords with human visual habits, simulates the actual application scene, and improves the user's sensory experience.
The invention provides a method and a device for controlling display size in VR interaction. When a user uses the VR device, the user's portrait information is converted to a standard proportion regardless of where the user stands; the converted portrait information is then synthesized with the scene information and finally transmitted to the VR display unit for display. The invention solves the problem that, when social participants located in different physical spaces are merged into a common virtual space, their displayed proportions are inconsistent with their real proportions. It also has the advantages of portability and low power consumption, and has broad market prospects in the field of VR device applications.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element preceded by the phrase "comprising a" or "including a" does not exclude the presence of additional identical elements in the process, method, article, or terminal that comprises the element. Further, herein, "greater than," "less than," "more than," and the like are understood to exclude the stated number, while "above," "below," "within," and the like are understood to include it.
As will be appreciated by one skilled in the art, the above-described embodiments may be provided as a method, apparatus, or computer program product. These embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. All or part of the steps in the methods according to the embodiments may be implemented by a program instructing associated hardware, where the program may be stored in a storage medium readable by a computer device and used to execute all or part of the steps in the methods according to the embodiments. The computer devices, including but not limited to: personal computers, servers, general-purpose computers, special-purpose computers, network devices, embedded devices, programmable devices, intelligent mobile terminals, intelligent home devices, wearable intelligent devices, vehicle-mounted intelligent devices, and the like; the storage medium includes but is not limited to: RAM, ROM, magnetic disk, magnetic tape, optical disk, flash memory, U disk, removable hard disk, memory card, memory stick, network server storage, network cloud storage, etc.
The various embodiments described above are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a computer apparatus to produce a machine, such that the instructions, which execute via the processor of the computer apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer apparatus to cause a series of operational steps to be performed on the computer apparatus to produce a computer implemented process such that the instructions which execute on the computer apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the embodiments have been described, those skilled in the art, once they grasp the basic inventive concept, can make other variations and modifications to these embodiments. The above embodiments are therefore only examples of the present invention and are not intended to limit its scope; all equivalent structures or equivalent processes derived from the contents of the present specification and drawings, whether applied directly or indirectly in other related technical fields, are included in the scope of the present invention.

Claims (6)

1. An apparatus for display size control in VR interaction, the apparatus comprising at least one VR device and a server; the VR equipment is connected with a server; the VR equipment comprises an image acquisition unit, a storage unit, a portrait recognition unit, a portrait adjustment unit, a scene acquisition unit, an image fusion unit, a display control unit and a display unit; the storage unit comprises a first storage unit and a second storage unit;
the image acquisition unit is used for acquiring a first image positioned at a preset position, and the first image comprises first portrait information;
the portrait identification unit is used for extracting first portrait information from the first image and storing the first portrait information parameters in the first storage unit;
the image acquisition unit is further used for acquiring a second image, the second image is image information of the position of the current user, and the portrait identification unit is used for extracting second portrait information from the second image and storing second portrait information parameters in a second storage unit;
the portrait adjusting unit is used for determining a scaling ratio according to the proportional relation between the second portrait information parameter and the first portrait information parameter, scaling the second portrait information parameter according to the scaling ratio, and generating third portrait information according to the scaled portrait information parameter;
the scene acquisition unit is used for acquiring scene information from the server, and the image fusion unit is used for synthesizing the scene information and the third portrait information to obtain image synthesis information;
the display control unit is used for transmitting the image synthesis information to the display unit for displaying;
the number of the VR devices is multiple, the VR devices comprise a first VR device and a second VR device, and the first VR device and the second VR device are connected; the first VR device comprises a first communication unit, and the second VR device comprises a second communication unit, a second display control unit, a second display unit and a second image fusion unit;
the first communication unit is used for sending the image synthesis information of the first VR device to the second communication unit;
the second communication unit is used for receiving the image synthesis information of the first VR device, and the second image fusion unit is used for synthesizing the third portrait information obtained by the portrait adjusting unit of the second VR device with the received image synthesis information of the first VR device to obtain image synthesis information corresponding to the second VR device;
and the display control unit is used for transmitting the image synthesis information corresponding to the second VR equipment to the second display unit for display.
2. The apparatus for display size control in VR interaction as claimed in claim 1, wherein the portrait adjusting unit is further configured to scale the third portrait information to a scale corresponding to the current scene information according to a correspondence between the scene information and the portrait information to obtain fourth portrait information; and the image fusion unit is used for synthesizing the scene information and the fourth portrait information to obtain image synthesis information.
3. The apparatus of claim 1, in which the VR device is plural in number and includes a first VR device and a second VR device, the first VR device and the second VR device being connected; the VR device further includes a communication unit; the first VR device comprises a first communication unit, a first portrait identification unit and a first portrait adjustment unit; the second VR device comprises a second communication unit, a second portrait recognition unit and a second portrait adjusting unit;
the first communication unit is used for sending the second portrait information extracted by the first portrait identification unit to the second communication unit;
the second portrait adjusting unit is used for determining, according to the proportional relationship of each with the first portrait information parameter, respective scaling ratios for the second portrait information received by the second communication unit from the first portrait identification unit and for the second portrait information extracted by the second portrait identification unit, and for scaling each according to its corresponding scaling ratio to obtain third portrait information containing the scaled first portrait and the scaled second portrait;
or the first communication unit is used for receiving the second portrait information extracted by the second portrait identification unit and sent by the second communication unit;
the first portrait adjusting unit is used for determining, according to the proportional relationship of each with the first portrait information parameter, respective scaling ratios for the second portrait information extracted by the second portrait identification unit and sent by the second communication unit and for the second portrait information extracted by the first portrait identification unit, and for scaling each according to its corresponding scaling ratio to obtain third portrait information containing the scaled first portrait and the scaled second portrait.
4. A method for controlling display size in VR interaction is applied to a device for controlling display size in VR interaction, and the device comprises at least one VR device and a server; the VR equipment is connected with a server; the VR equipment comprises an image acquisition unit, a storage unit, a portrait recognition unit, a portrait adjustment unit, a scene acquisition unit, an image fusion unit, a display control unit and a display unit; the storage unit comprises a first storage unit and a second storage unit; the method comprises the following steps:
the method comprises the steps that an image acquisition unit acquires a first image located at a preset position, wherein the first image comprises first portrait information;
the portrait identification unit extracts first portrait information from the first image and stores the first portrait information parameters in the first storage unit;
the image acquisition unit acquires a second image, the portrait identification unit extracts second portrait information from the second image, and the second portrait information parameters are stored in the second storage unit; the second image is image information of the position of the current user;
the portrait adjusting unit determines a scaling ratio according to the proportional relation between the second portrait information parameter and the first portrait information parameter, scales the second portrait information parameter according to the scaling ratio, and generates third portrait information according to the scaled portrait information parameter;
the scene obtaining unit obtains scene information from the server, and the image fusion unit synthesizes the scene information and the third portrait information to obtain image synthesis information;
the display control unit transmits the image synthesis information to the display unit for display;
the number of the VR devices is multiple, the VR devices comprise a first VR device and a second VR device, and the first VR device and the second VR device are connected; the first VR device comprises a first communication unit, and the second VR device comprises a second communication unit, a second display control unit, a second display unit and a second image fusion unit; the method comprises the following steps:
the first communication unit sends the image synthesis information of the first VR device to the second communication unit;
the second communication unit receives image synthesis information of the first VR device, and the second image fusion unit synthesizes third portrait information obtained by the portrait adjusting unit of the second device with the received image synthesis information of the first VR device to obtain image synthesis information corresponding to the second VR device;
and the display control unit transmits the image synthesis information corresponding to the second VR device to the second display unit for display.
5. The method of display size control in a VR interaction of claim 4, the method further comprising:
the portrait adjusting unit scales the third portrait information into a proportion corresponding to the current scene information according to the corresponding relation between the scene information and the portrait information to obtain fourth portrait information;
and the image fusion unit synthesizes the scene information and the fourth portrait information to obtain image synthesis information.
6. The method of display size control in a VR interaction of claim 4, where the number of VR devices is multiple, including a first VR device and a second VR device, the first VR device and the second VR device connected; the VR device further includes a communication unit; the first VR device comprises a first communication unit, a first portrait identification unit and a first portrait adjustment unit; the second VR device comprises a second communication unit, a second portrait recognition unit and a second portrait adjusting unit; the method comprises the following steps:
the first communication unit sends the second portrait information extracted by the first portrait identification unit to the second communication unit;
the second portrait adjusting unit determines, according to the proportional relationship of each with the first portrait information parameter, respective scaling ratios for the second portrait information received by the second communication unit from the first portrait identification unit and for the second portrait information extracted by the second portrait identification unit, and scales each according to its corresponding scaling ratio to obtain third portrait information containing the scaled first portrait and the scaled second portrait;
alternatively, the method comprises:
the first communication unit receives the second portrait information extracted by the second portrait identification unit and sent by the second communication unit;
the first portrait adjusting unit determines, according to the proportional relationship of each with the first portrait information parameter, respective scaling ratios for the second portrait information extracted by the second portrait identification unit and sent by the second communication unit and for the second portrait information extracted by the first portrait identification unit, and scales each according to its corresponding scaling ratio to obtain third portrait information containing the scaled first portrait and the scaled second portrait.
CN201710458264.5A 2017-06-16 2017-06-16 Method and device for controlling display size in VR interaction Active CN107330850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710458264.5A CN107330850B (en) 2017-06-16 2017-06-16 Method and device for controlling display size in VR interaction

Publications (2)

Publication Number Publication Date
CN107330850A CN107330850A (en) 2017-11-07
CN107330850B true CN107330850B (en) 2021-01-26

Family

ID=60193999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710458264.5A Active CN107330850B (en) 2017-06-16 2017-06-16 Method and device for controlling display size in VR interaction

Country Status (1)

Country Link
CN (1) CN107330850B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697755B (en) * 2018-12-24 2023-07-21 深圳供电局有限公司 Augmented reality display method and device for power transmission tower model and terminal equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104125412A (en) * 2014-06-16 2014-10-29 联想(北京)有限公司 Information processing method and electronic equipment
CN106412558A (en) * 2016-09-08 2017-02-15 深圳超多维科技有限公司 Method, equipment and device for stereo virtual reality live broadcasting

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106293561B (en) * 2015-05-28 2020-02-28 北京智谷睿拓技术服务有限公司 Display control method and device and display equipment


Similar Documents

Publication Publication Date Title
CN108737882B (en) Image display method, image display device, storage medium and electronic device
CN106548517B (en) Method and device for carrying out video conference based on augmented reality technology
CN106683195B (en) AR scene rendering method based on indoor positioning
WO2018095317A1 (en) Data processing method, device, and apparatus
CN106231349B (en) Main broadcaster's class interaction platform server method for changing scenes and its device, server
CN109242940B (en) Method and device for generating three-dimensional dynamic image
CN108114471B (en) AR service processing method and device, server and mobile terminal
JP7342366B2 (en) Avatar generation system, avatar generation method, and program
CN113840049A (en) Image processing method, video flow scene switching method, device, equipment and medium
CN109788359B (en) Video data processing method and related device
CN108829250A (en) A kind of object interaction display method based on augmented reality AR
CN106231411B (en) Main broadcaster's class interaction platform client scene switching, loading method and device, client
CN110324648A (en) Live broadcast display method and system
KR20150105069A (en) Cube effect method for 2D images in a mixed-reality virtual performance system
CN108259806A (en) A kind of video communication method, equipment and terminal
KR101982436B1 (en) Decoding method for video data including stitching information and encoding method for video data including stitching information
CN108205822B (en) Picture pasting method and device
CN113965773A (en) Live broadcast display method and device, storage medium and electronic equipment
KR101408719B1 (en) An apparatus for converting scales in three-dimensional images and the method thereof
CN111147883A (en) Live broadcast method and device, head-mounted display equipment and readable storage medium
CN107330850B (en) Method and device for controlling display size in VR interaction
CN106231397B (en) Main broadcaster's class interaction platform main broadcaster end method for changing scenes and its device, Zhu Boduan
CN108932055B (en) Method and equipment for enhancing reality content
CN106875478A (en) Experience the AR devices of mobile phone 3D effect
CN110958463A (en) Method, device and equipment for detecting and synthesizing virtual gift display position

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 350003 building 18, No.89, software Avenue, Gulou District, Fuzhou City, Fujian Province

Applicant after: Ruixin Microelectronics Co., Ltd

Address before: 350003 building 18, No.89, software Avenue, Gulou District, Fuzhou City, Fujian Province

Applicant before: Fuzhou Rockchips Electronics Co., Ltd.
GR01 Patent grant