CN113489918B - Terminal equipment, server and virtual photo combining method

Info

Publication number: CN113489918B (grant of application CN113489918A)
Application number: CN202011173504.5A
Authority: CN (China)
Prior art keywords: target object, photo, image, position relation, expected
Legal status: Active (granted)
Inventors: 杨雪洁, 曲磊, 刘帅帅, 张振铎, 高雪松, 陈维强
Assignee: Hisense Group Holding Co Ltd
Original assignee: Hisense Group Holding Co Ltd
Application filed by Hisense Group Holding Co Ltd, with priority to CN202011173504.5A
Other languages: Chinese (zh)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; studio devices; studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing
    • H04N 5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the disclosure provide a terminal device, a server and a virtual group photo method, addressing the poor quality of composite images produced by cross-terminal group photos in the related art. In the method, expected positional relationship information represents the expected relative position between a target object participating in the group photo and the image collector. When a terminal device takes a group photo with other terminal devices, it obtains the expected positional relationship information associated with the group photo background image and guides the target object into position accordingly. Each user is therefore guided to a suitable position before image acquisition, so that when the images are combined the proportions of the different users are as consistent as possible. This resolves the related-art problem that portrait proportions are not adjusted intelligently, which leaves the proportions in the composite image disordered and the result unrealistic.

Description

Terminal equipment, server and virtual photo combining method
Technical Field
The disclosure relates to the technical field of image processing, and in particular to a terminal device, a server and a virtual group photo method.
Background
With the widespread use of virtual group photo technology, demand for composite images containing multiple target objects has grown. Existing virtual group photo techniques simply process each participant's image and fuse it into a background image. The resulting composite lacks realism, and the group photo effect needs improvement.
Disclosure of Invention
The purpose of the present disclosure is to provide a terminal device, a server and a virtual group photo method, to address the poor realism of the composite images produced by existing virtual group photo techniques.
In a first aspect, the present disclosure provides a terminal device, including a display, an image collector, a memory and a controller, wherein:
the display is used for displaying information;
the image collector is used for collecting images;
the memory is used for storing a computer program executable by the controller;
the controller is connected with the display, the image collector and the memory respectively and is configured to:
when the terminal device takes a group photo with other terminal devices, obtain expected positional relationship information associated with a group photo background image, the expected positional relationship information representing an expected relative position between a target object participating in the group photo and the image collector; and
guide the target object to move according to the expected positional relationship information, so as to guide the target object through the group photo.
In a possible embodiment, the expected positional relationship information includes an expected distance and/or an expected angle between the target object and the image collector, and the controller is specifically configured to:
when the actual positional relationship between the target object and the image collector does not match the expected positional relationship, generate and output instruction information for guiding the target object to move, based on the expected positional relationship.
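For illustration, this guidance step can be sketched as follows. This is a minimal Python sketch under assumed inputs: it presumes the terminal can already estimate the target object's actual distance and angle (e.g., from a depth sensor or face-size heuristics), and all names, tolerances and sign conventions are invented for the example rather than taken from the patent.

```python
def guide_target(actual_dist_m: float, actual_angle_deg: float,
                 expected_dist_m: float, expected_angle_deg: float,
                 dist_tol_m: float = 0.2, angle_tol_deg: float = 5.0):
    """Return a movement instruction, or None if the target is in place."""
    hints = []
    if actual_dist_m - expected_dist_m > dist_tol_m:
        hints.append(f"move about {actual_dist_m - expected_dist_m:.1f} m closer")
    elif expected_dist_m - actual_dist_m > dist_tol_m:
        hints.append(f"move about {expected_dist_m - actual_dist_m:.1f} m back")
    if actual_angle_deg - expected_angle_deg > angle_tol_deg:
        hints.append("shift to the left")   # sign convention is an assumption
    elif expected_angle_deg - actual_angle_deg > angle_tol_deg:
        hints.append("shift to the right")
    return "Please " + " and ".join(hints) if hints else None

print(guide_target(3.1, 0.0, 2.0, 10.0))
# Please move about 1.1 m closer and shift to the right
```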
In a possible embodiment, the group photo background image is further associated with position information of the target object within it, and the controller is further configured to:
if an image upload instruction is received, upload the image of the target object to a server, so that the server fuses the target object into the group photo background image according to that position information.
In a possible embodiment, the group photo background image is further associated with a recommended group photo pose for the target object, and the controller is further configured to:
output the recommended group photo pose, the pose being determined according to the social relationships of the target object.
In one possible embodiment, before uploading the image of the target object to the server, the controller is further configured to:
obtain, as final positional relationship information, the actual positional relationship between the image collector and the target object at the moment the image is captured; and
send the final positional relationship information to the server, so that the server scales the image of the target object to the size corresponding to the expected positional relationship before fusing it into the group photo background image.
In a second aspect, the present disclosure provides a server comprising a memory and a processor, wherein:
the memory is used for storing a computer program executable by the processor;
the processor, coupled to the memory, is configured to:
when a plurality of terminal devices take a group photo together, execute, for any one of them:
send the expected positional relationship information associated with the preselected group photo background image to the terminal device, the expected positional relationship information representing an expected relative position between a target object participating in the group photo and the image collector.
In a possible embodiment, the group photo background image is further associated with position information of the target object within it, and after sending the expected positional relationship information associated with the preselected group photo background image to the terminal device, the processor is further configured to:
receive the image of the target object sent by the terminal device; and
fuse the image of the target object into the group photo background image according to the position information of the target object in that image.
In a possible embodiment, before sending the expected positional relationship information associated with the preselected group photo background image to the terminal device, the processor is further configured to:
obtain a social relationship graph between the target object and the other objects participating in the group photo; and
determine the position information of the target object in the group photo background image according to the social relationship graph.
In a possible embodiment, at least one anchor point position associated with the group photo background image is pre-stored, and when determining the position information of the target object in the group photo background image according to the social relationship graph, the processor is configured to:
deduce a relative standing order between the target object and the other group photo objects from the social relationship graph; and
determine, from the correspondence between the anchor point positions associated with the group photo background image and the relative standing order, the anchor point position corresponding to the target object as the position information of the target object in the group photo background image.
In a possible embodiment, after obtaining the social relationship graph between the target object and the other objects participating in the group photo, the processor is further configured to:
determine a recommended group photo pose for the target object according to its social relationships in the graph, and push the recommended pose to the terminal device.
In one possible embodiment, when fusing the image of the target object into the group photo background image, the processor is further configured to:
receive the final positional relationship information between the target object and the image collector uploaded by the terminal device, i.e. the actual positional relationship at the moment the terminal device controlled the image collector to capture the image;
when the final positional relationship deviates from the expected positional relationship beyond the allowed tolerance, scale the image of the target object to the size corresponding to the expected positional relationship according to the final positional relationship information; and
fuse the scaled image of the target object into the group photo background image.
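To make the tolerance check concrete: under a simple pinhole-camera assumption, a subject's apparent size is inversely proportional to its distance from the camera, so a portrait captured at a final distance can be brought to the size expected at the expected distance by scaling it by final/expected. The sketch below (Python with Pillow; the 10% tolerance and all names are assumptions, not values from the patent) applies that rule only when the deviation exceeds the allowed tolerance:

```python
from PIL import Image

DIST_TOLERANCE = 0.10  # assumed: allow 10% deviation before rescaling

def normalize_portrait(portrait: Image.Image,
                       final_dist_m: float, expected_dist_m: float) -> Image.Image:
    """Scale a portrait captured at final_dist_m to its expected apparent size."""
    scale = final_dist_m / expected_dist_m  # >1: captured too far away, enlarge
    if abs(scale - 1.0) <= DIST_TOLERANCE:
        return portrait  # within tolerance, fuse as captured
    w, h = portrait.size
    return portrait.resize((max(1, round(w * scale)), max(1, round(h * scale))),
                           Image.LANCZOS)
```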
In a third aspect, the present disclosure provides a virtual group photo method applied to a terminal device, the method including:
when the terminal device takes a group photo with other terminal devices, obtaining expected positional relationship information associated with a group photo background image, the expected positional relationship information representing an expected relative position between a target object participating in the group photo and the image collector; and
guiding the target object to move according to the expected positional relationship information, so as to guide the target object through the group photo.
In a fourth aspect, the present disclosure provides a virtual group photo method applied to a server, the method including:
when a plurality of terminal devices take a group photo together, executing, for any one of them:
sending the expected positional relationship information associated with the preselected group photo background image to the terminal device, the expected positional relationship information representing an expected relative position between a target object participating in the group photo and the image collector.
In the embodiments of the disclosure, expected positional relationship information represents the expected relative position between a target object participating in the group photo and the image collector. When a terminal device takes a group photo with other terminal devices, it obtains the expected positional relationship information associated with the group photo background image, guides the target object to move accordingly, and so guides the target object through the group photo. Users are therefore guided to suitable positions before image acquisition, so that when the images are combined the proportions of the different users are as consistent as possible. This resolves the related-art problem that portrait proportions are not adjusted intelligently, which leaves the proportions in the composite image disordered and the result unrealistic.
In addition, beyond guiding the participants, the embodiments fuse each participant's image accurately into the appropriate position. Because both the guidance and the fusion refer to the actual subject positions and proportions captured in the group photo background image, the quality of the composite is further improved and the result approaches, as closely as possible, a genuinely co-located photograph.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the disclosure. The objectives and other advantages of the disclosure will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings needed for the embodiments are briefly described below. The drawings described below show only some embodiments of the present disclosure; other drawings may be obtained from them by a person of ordinary skill in the art without inventive effort.
Fig. 1A is an application scenario diagram of a virtual group photo method provided by some embodiments of the present disclosure;
Fig. 1B is a rear view of a smart television provided by some embodiments of the present disclosure;
Fig. 2 is a block diagram of the hardware configuration of the control device 100 of Fig. 1A according to some embodiments of the present disclosure;
Fig. 3A is a block diagram of the hardware configuration of the smart television 200 of Fig. 1A provided in some embodiments of the present disclosure;
Fig. 3B is a block diagram of the server 300 of Fig. 1A provided in some embodiments of the present disclosure;
Fig. 4A is a timing diagram of a virtual group photo method provided by some embodiments of the present disclosure;
Fig. 4B is a schematic diagram of anchor points provided by some embodiments of the present disclosure;
Fig. 4C is a schematic diagram of deriving a relative standing order from a social relationship graph, provided by some embodiments of the present disclosure;
Fig. 4D is a diagram illustrating the relative standing order after virtual group photo composition, provided by some embodiments of the present disclosure;
Fig. 4E is a schematic diagram of fusing portraits onto anchor points, provided by some embodiments of the present disclosure;
Fig. 5 is a flow chart of a virtual group photo method according to some embodiments of the disclosure;
Fig. 6 is another flowchart of a virtual group photo method provided by some embodiments of the present disclosure.
Detailed Description
To further explain the technical solutions provided by the embodiments of the present disclosure, details are given below with reference to the accompanying drawings and the detailed description. Although the embodiments provide the method steps shown in the following embodiments or figures, the methods may include more or fewer steps based on routine or non-inventive labor. Where steps have no necessary logical causal relationship, their order of execution is not limited to the order provided here; the methods may be executed sequentially or in parallel, as shown in the embodiments or drawings, during actual processing or when executed by a control device.
The described embodiments are merely some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure, fall within its scope of protection. "First" and "second" in the embodiments of the present disclosure are used for descriptive purposes only and are not to be construed as indicating relative importance or implicitly indicating the number of technical features; a feature defined by "first" or "second" may include one or more such features. In the description of the embodiments, "plurality" means two or more unless otherwise indicated. These and similar words serve only to describe and illustrate the disclosure, not to limit it, and the embodiments and their features may be combined with one another where they do not conflict.
The virtual group photo method provided by the embodiments of the disclosure is applicable to terminal devices including, but not limited to, computers, smart phones, smart watches, smart televisions and smart robots. The method is described below taking a smart television as an example.
The inventors found that group photos in the related art look poor because the proportions of the participants' portraits cannot be adjusted intelligently: the portrait proportions in the composite image end up disordered, so the composite looks unrealistic.
In view of this, the present disclosure provides a virtual group photo method whose inventive concept may be summarized as follows: based on how the group photo background image was captured, relative positional relationship information between the target object and the image collector is established for that background image. When any terminal device takes a group photo with other terminal devices, its user can be guided by this relative positional relationship information, i.e. guided to a suitable position before joining the group photo. The user can then easily move to a position that simulates, as closely as possible, the real shooting position behind the group photo background image. Since the different participating users all pose at suitable positions under this guidance, the final composite reproduces the standing positions of a real group photo as closely as possible, keeps the users' proportions consistent, and improves the quality of the group photo image.
The virtual group photo method of the embodiments of the present disclosure is described in detail below with reference to the accompanying drawings.
Referring to Fig. 1A, an application scenario of the virtual group photo method is shown for some embodiments of the present disclosure. As shown in Fig. 1A, the system includes a server 300, a smart television 200, and a control device 100 (100A and/or 100B in Fig. 1A) for controlling the smart television. The control device 100 and the smart television 200 may communicate with each other in a wired or wireless manner.
The control device 100 is configured to control the smart television 200: it receives an operation instruction input by a target object and converts it into an instruction the smart television 200 can recognize and respond to, acting as an intermediary between the target object and the smart television 200. For example, the smart television responds to channel up/down operations when the target object presses the channel up/down keys on the control device 100.
The control device 100 may be a remote controller 100A, communicating via infrared protocol, Bluetooth protocol or other short-range communication modes, and controlling the smart television 200 wirelessly or by wire. The target object may control the smart television 200 through keys on the remote controller, voice input, control panel input, and so on. For example, the target object can input control instructions through the volume up/down keys, channel control keys, directional keys, voice input key, menu key and power key on the remote controller, including instructions controlling the group photo function of the smart television 200.
The control device 100 may also be an intelligent device, such as a mobile terminal 100B, a tablet computer or a notebook computer. For example, the smart television 200 may be controlled by an application running on the smart device, which can be configured to provide various controls to the target object through an intuitive user interface (UI) on the device's screen.
For example, the mobile terminal 100B may install a software application paired with the smart television 200 and communicate with it through a network communication protocol, enabling one-to-one control operation and data communication. The mobile terminal 100B can establish a control instruction protocol with the smart television 200, so that the functions of the physical keys of the remote controller 100A are realized by operating various function keys or virtual controls of the user interface provided on the mobile terminal 100B. For example, keys on the smart device may initiate a group photo, confirm participation in it, and trigger when to capture an image and upload one's own photo to the server; the smart device can also select a group photo background image and a group photo pose template. The audio and video content displayed on the mobile terminal 100B may also be transmitted to the smart television 200 for synchronous display.
The smart television 200 may provide a broadcast receiving function and the network television functions of a computer. It may be implemented as a digital TV, a web TV, an Internet Protocol TV (IPTV), and so on.
The smart television 200 may use a liquid crystal display, an organic light-emitting display, or a projection device. The specific type, size and resolution of the smart television are not limited.
The smart television 200 also exchanges data with the server 300 through various communication means, and may be allowed to connect through a local area network (LAN), a wireless local area network (WLAN) or other networks. The server 300 may provide various content and interactions to the smart television 200. For example, the smart television 200 may send and receive information, such as receiving Electronic Program Guide (EPG) data, receiving software updates, or accessing a remotely stored digital media library. The server 300 may be one group or multiple groups of servers, of one or more types, and also provides other web service content such as video on demand and advertising services.
In some embodiments, as shown in Fig. 1B, the smart television 200 includes a controller 250, a display 275, a terminal interface 278 extending from a gap in the back plate, and a rotation assembly 276 coupled to the back plate that can rotate the display 275. Viewed from the front of the smart television, the rotation assembly 276 may rotate the display to a portrait state, in which the vertical side of the screen is longer than the horizontal side, or to a landscape state, in which the horizontal side is longer than the vertical side.
A block diagram of the configuration of the control device 100 is exemplarily shown in Fig. 2. As shown in Fig. 2, the control device 100 includes a controller 110, a memory 120, a communicator 130, a user input interface 140, a user output interface 150, and a power supply 160.
The controller 110 includes a random access memory (RAM) 111, a read-only memory (ROM) 112, a processor 113, a communication interface, and a communication bus. The controller 110 controls the running of the control device 100, the communication and collaboration between its internal components, and external and internal data processing.
For example, when the controller 110 detects the target object pressing a key on the remote controller 100A or touching its touch panel, it generates a signal corresponding to the detected interaction and transmits it to the smart television 200.
The memory 120 stores various operation programs, data, and applications for driving and controlling the control device 100 under the control of the controller 110. The memory 120 may store various control signal instructions input by the target object.
The communicator 130 exchanges control signals and data signals with the smart television 200 under the control of the controller 110. For example, the control device 100 sends a control signal (e.g., a touch or key signal) to the smart television 200 via the communicator 130, and may receive signals from the smart television 200 via the communicator 130. The communicator 130 may include an infrared signal interface 131 and a radio frequency signal interface 132. With the infrared signal interface, the target object's input instruction is converted into an infrared control signal according to an infrared control protocol and sent to the smart television 200 through the infrared sending module. With the radio frequency signal interface, the input instruction is converted into a digital signal, modulated according to a radio frequency control signal modulation protocol, and transmitted to the smart television 200 through the radio frequency sending terminal.
The user input interface 140 may include at least one of a microphone 141, a touch pad 142, a sensor 143 and keys 144, so that the target object can issue instructions for controlling the smart television 200 through voice, touch, gesture or key presses. For example, a group photo instruction may be generated and sent to the smart television 200 according to the target object's operation.
The user output interface 150 outputs the instructions received by the user input interface 140 to the smart television 200, or outputs the image or voice signals received from the smart television 200. The user output interface 150 may include an LED interface 151, a vibration interface 152 generating vibrations, a sound output interface 153 outputting sound, a display 154 outputting images, and the like. For example, the remote controller 100A may receive audio, video or data output signals from the user output interface 150 and present them as images on the display 154, as audio at the sound output interface 153, or as vibration at the vibration interface 152.
The power supply 160 provides operating power for the elements of the control device 100 under the control of the controller 110, and may take the form of a battery and associated control circuitry.
A hardware configuration block diagram of the smart television 200 is exemplarily shown in Fig. 3A. As shown in Fig. 3A, the smart television 200 may include a tuner-demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a memory 260, a user interface 265, a video processor 270, a display 275, a rotation assembly 276, an audio processor 280, an audio output interface 285, and a power supply 290.
The rotation assembly 276 may also include other components, such as a transmission component and a detection component. The transmission component can adjust the rotation speed and torque output by the rotation assembly 276 through a specific transmission ratio, for instance using gear transmission. The detection component may consist of sensors on the rotation shaft, such as an angle sensor or an attitude sensor, which detect parameters such as the angle through which the rotation assembly 276 has rotated and send them to the controller 250, enabling the controller 250 to determine or adjust the state of the smart television 200 accordingly. In practice, the rotation assembly 276 may include, but is not limited to, one or more of the components described above.
The tuner-demodulator 210 receives broadcast television signals by wire or wirelessly and may perform modulation and demodulation processing such as amplification, mixing and resonance, demodulating, from among many broadcast television signals, the audio/video signal carried on the frequency of the television channel selected by the target object, together with additional information (e.g., EPG data).
The tuner-demodulator 210, under the control of the controller 250, responds to the television channel frequency selected by the target object and the television signal carried on that frequency.
The tuner-demodulator 210 can receive signals in various ways according to the broadcasting system of the television signal, such as terrestrial, cable, satellite or internet broadcasting; it may use digital or analog modulation depending on the modulation type; and it can demodulate analog or digital signals according to the kind of television signal received.
The communicator 220 is a component for communicating with external devices or external servers according to various communication protocols. For example, the smart television 200 may transmit content data to an external device connected via the communicator 220, or browse and download content data from such a device. The communicator 220 may include network or near-field communication protocol modules such as a WiFi module 221, a Bluetooth communication protocol module 222 and a wired Ethernet communication protocol module 223, so that, under the control of the controller 250, it can receive control signals from the control device 100 as WiFi signals, Bluetooth signals, radio frequency signals, and so on.
The detector 230 is a component of the smart television 200 for collecting signals from the external environment or from interaction with the outside. The detector 230 may include a sound collector 231, such as a microphone, used to receive the target object's sound, for example the voice signal of a control instruction for the smart television 200; it may also collect environmental sounds to identify the type of environmental scene, letting the smart television 200 adapt to environmental noise.
In other exemplary embodiments, the detector 230 may further include an image collector 232, such as a camera or video camera, used to collect the external environment scene so that the display parameters of the smart television 200 can adapt to it, and to capture the target object's attributes or interaction gestures so that the television can interact with the target object. In the present disclosure, the image of the target object can be captured at the target object's instruction and fused with the images of other target objects to obtain a group photo.
The external device interface 240 is a component enabling the controller 250 to control data transmission between the smart television 200 and external devices. It may be connected, by wire or wirelessly, to external devices such as a set-top box, a game device or a notebook computer, and may receive their data, such as video signals (e.g., moving images), audio signals (e.g., music) and additional information (e.g., an EPG).
The controller 250 controls the operation of the smart television 200 and responds to the target object's operations by running the various software control programs (e.g., the operating system and various applications) stored in the memory 260.
The controller 250 includes a random access memory (RAM) 251, a read-only memory (ROM) 252, a graphics processor 253, a CPU processor 254, a communication interface 255, and a communication bus 256. The RAM 251, ROM 252, graphics processor 253 and CPU processor 254 are connected to each other via the communication bus 256.
The ROM 252 stores various system boot instructions. When the smart television 200 is powered on upon receiving a power-on signal, the CPU processor 254 executes the system boot instructions in the ROM 252 and copies the operating system stored in the memory 260 into the RAM 251 to run it. Once the operating system has started, the CPU processor 254 copies the various applications in the memory 260 into the RAM 251 and runs them.
The graphics processor 253 generates various graphic objects, such as icons, operation menus and graphics displaying the target object's input instructions. It includes an operator that performs operations on the various interactive instructions input by the target object, displaying objects according to their display attributes, and a renderer that generates the objects produced by the operator and displays the rendered result on the display 275.
The CPU processor 254 executes the operating system and application instructions stored in the memory 260, and processes applications, data and content according to the received input instructions, so as to display and play the various audio and video content.
Communication interface 255 may include a first interface through an nth interface. These interfaces may be network interfaces that are connected to external devices via a network.
The controller 250 may control the overall operation of the smart television 200. For example, in response to receiving an input command for selecting a GUI object displayed on the display 275, the controller 250 may perform the operation related to the selected object.
The object may be any selectable object, such as a hyperlink or an icon. The operation related to the selected object may be, for example, displaying the linked page, document or image, or running the program corresponding to the object. The input command for selecting the GUI object may come from an input device connected to the smart television 200 (e.g., a mouse, keyboard or touch pad) or be a voice command corresponding to speech uttered by the target object.
The memory 260 stores the various types of data, software programs and applications that drive and control the operation of the smart television 200. It may include volatile and/or nonvolatile memory, and the term "memory" covers the memory 260, the RAM 251 and ROM 252 of the controller 250, and any memory card in the smart television 200.
In the embodiment of the present disclosure, for the target object initiating the group photo, the controller 250 is configured, when taking a group photo with other terminal devices, to respond to the group photo instruction of the initiating target object, obtain the expected positional relationship information associated with the group photo background image, and control the display 275 to display the group photo background image within the group photo guidance information.
The controller 250 is connected to the image collector 232 and configured to send the captured image of the target object to the server 300, so that the server 300 fuses the target object into the group photo background image according to the target object's position information in that image, producing a composite image.
The controller 250 may also receive the composite image sent by the server 300, control the display 275 to present it, and store it in the memory 260.
A hardware configuration block diagram of the server 300 is exemplarily shown in Fig. 3B. As shown in Fig. 3B, the components of the server 300 may include, but are not limited to, at least one processor 331, at least one memory 332, and a bus 333 connecting the different system components (including the memory 332 and the processor 331).
The bus 333 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a processor or local bus using any of a variety of bus architectures.
Memory 332 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 3321 and/or cache memory 3322, and may further include Read Only Memory (ROM) 3323.
Memory 332 may also include a program/utility 3325 having a set (at least one) of program modules 3324, such program modules 3324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The server 300 may also communicate with one or more external devices 334 (e.g., a keyboard or pointing device), with one or more devices that enable the target object to interact with the server 300, and/or with any device (e.g., a router or modem) that enables the server 300 to communicate with one or more other electronic devices. Such communication may occur through an input/output (I/O) interface 335. The server 300 may also communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the internet, via the network adapter 336. As shown, the network adapter 336 communicates with the other modules of the server 300 via the bus 333. Although not shown, other hardware and/or software modules may be used with the server 300, including but not limited to microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives and data backup storage systems.
In some embodiments, aspects of the virtual group photo method provided by the present disclosure may also be implemented in the form of a program product, which includes program code that, when run on a computer device, causes the computer device to perform the steps of the virtual group photo method according to the various exemplary embodiments described above.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium, which can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for virtual group photos of the embodiments of the present disclosure may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on an electronic device. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus or device.
The readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, or any suitable combination of the foregoing. It may be written in any combination of one or more programming languages, including object-oriented languages such as Java or C++ and conventional procedural languages such as the "C" language. The program code may execute entirely on the target object's electronic device, partly on it as a stand-alone software package, partly on the target object's device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the latter case, the remote electronic device may be connected to the target object's device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or to an external electronic device (for example, through the internet using an internet service provider).
Having generally introduced the controller, the smart television and the server, the virtual group photo method provided by the present disclosure is further described below, taking the smart television as the terminal device for ease of understanding. Fig. 4A exemplarily shows a timing diagram of a virtual group photo method involving a smart television 400a, a smart television 400b, and a server 400c. The smart television 400a is the terminal device of target object A, who initiates the group photo, and the smart television 400b is the terminal device of target object B, who participates in it.
A large number of group photo background images are pre-stored in the server 400c; to ensure the realism of the virtual group photo, all images in the background gallery are obtained by live-action shooting. In implementation, a background gallery is established containing multiple group photo background images, each associated with the expected positional relationship information recorded when that background was actually captured; this information expresses the distance and/or angle between each participating target object and the image collector. To make it easy to generate a composite that approaches the real capture conditions, the target object can be guided to move by this expected positional relationship information, guiding the target object through the group photo.
In addition, to bring the composite closer to the real effect, each group photo background image is also associated with position information of the target objects within it. The server 400c can generate the composite according to each target object's actual position in the background image, improving the realism of the group photo. Of course, in implementation, any terminal device participating in the group photo can equally perform the compositing operation.
In implementation, the number of participants each group photo background image can accommodate as a virtual group photo background can be determined experimentally, and corresponding positions arranged in the background image accordingly, yielding the position information of each participating object. In a specific application, the position information may be implemented as an anchor point that locates a certain body part (e.g., the feet) of the corresponding target object in the image, or as a rectangular region delimited in the background image.
The virtual group photo method of the present disclosure is described below with reference to the flow shown in Fig. 4A, taking as an example position information implemented as anchor point positions. The method includes the following steps:
Step 400: the smart television 400a sends a group photo request to the server 400c.
The smart television 400a may, via the server 400c, request one or more smart televisions 400b to participate in the group photo. After receiving an indication that a smart television 400b agrees to participate, the server performs step 401. When several smart televisions 400b are invited, step 401 is executed as soon as at least one agrees to participate; if all invited smart televisions 400b decline, the operation ends.
Alternatively, the smart television 400a may invite the smart television 400b to participate through channels other than the server 400c; step 401 is then executed once the server 400c receives an indication that at least one smart television 400b agrees to the group photo.
Step 401: the server 400c issues the group photo background gallery to the smart television 400a.
In one possible embodiment, the server 400c responds to the group photo request by issuing the pre-stored group photo background gallery to the smart television 400a. Each group photo background image is associated with anchor point positions that determine where a portrait is placed when it is fused into the background. Specifically, the bottom-centre of the portrait is attached to the anchor point, fusing the portrait into the background image; the anchor points are the solid dots shown in Fig. 4B.
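A minimal sketch of this anchoring rule, assuming the portraits arrive as alpha-matted RGBA cut-outs (file names and coordinates are illustrative, not from the patent):

```python
from PIL import Image

def fuse_at_anchor(background: Image.Image, portrait: Image.Image,
                   anchor_xy: tuple[int, int]) -> Image.Image:
    """Paste a portrait so its bottom-centre sits exactly on the anchor point."""
    ax, ay = anchor_xy
    w, h = portrait.size
    top_left = (ax - w // 2, ay - h)  # bottom-centre of the portrait hits (ax, ay)
    out = background.copy()
    out.paste(portrait, top_left, mask=portrait)  # alpha channel as paste mask
    return out

composite = fuse_at_anchor(Image.open("bridge.png").convert("RGBA"),
                           Image.open("person_a.png").convert("RGBA"),
                           anchor_xy=(640, 900))
```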
In some embodiments, at least one of the expected positional relationship, the position information of the user's portrait in the virtual background image, and the recommended group photo pose may serve as group photo guidance information, associated with the group photo background image as a configuration file. Taking a background image that accommodates 5 participants as an example, this can be done in the following ways:
Mode 1: one group photo background image may correspond to several configuration files.
For example, a group photo background image may suit 2-5 people. Each headcount from 2 to 5 then corresponds to its own configuration file: one each for 2, 3, 4 and 5 people. The 2-person file records the two anchor point positions and the expected positional relationships of those 2 people; the 3-person file records 3 anchor point positions and the corresponding expected positional relationships; and likewise for 4 and 5 people.
In implementation, the target object may select the number of participants, or the number may be determined automatically from the terminal devices joining the group photo, and the appropriate configuration file is chosen accordingly: for example, the file recording 5 anchor points when 5 people participate, and the file recording 3 anchor points when 3 people participate. The selection may be performed by the server or by any participating terminal device. The configuration file may be stored on the server, which sends it to the terminal device with each group photo request, or stored on the terminal device so that a local configuration file can be invoked. An illustrative configuration layout and selection is sketched below.
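The patent does not specify a configuration-file format; the following Python sketch shows one plausible per-headcount layout and how a profile might be selected once the number of participants is known (all field names and values are invented):

```python
BACKGROUND_CONFIG = {
    "background": "bridge_over_water.png",
    "profiles": {  # one entry per supported headcount
        2: [{"anchor": (520, 880), "expected": {"dist_m": 2.0, "angle_deg": -5}},
            {"anchor": (760, 880), "expected": {"dist_m": 2.0, "angle_deg": 5}}],
        3: [{"anchor": (400, 880), "expected": {"dist_m": 2.2, "angle_deg": -8}},
            {"anchor": (640, 900), "expected": {"dist_m": 2.0, "angle_deg": 0}},
            {"anchor": (880, 880), "expected": {"dist_m": 2.2, "angle_deg": 8}}],
    },
}

def select_profile(config: dict, headcount: int) -> list[dict]:
    """Pick the anchor/expected-relationship profile matching the headcount."""
    return config["profiles"][headcount]

profile = select_profile(BACKGROUND_CONFIG, 3)  # chosen when 3 people join
```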
Mode 2: the group photo background image has configuration files for only a limited subset of its usage scenarios.
Continuing with the background image supporting at most 5 people: suppose it suits 2-5 people but is associated with only one configuration file, recorded for 3 people. The embodiments of the disclosure can then interpolate according to the actual situation to obtain a configuration matching the actual number of participants.
For example, when fewer than 3 people participate, n anchor points may be selected at random from the 3 recorded ones, where n is the number of participants.
When more than 3 people participate, the region of the background image in which group photo objects can be fused can be detected around the anchor point positions (along the horizontal direction of the background image). For example, if the background is a bridge over water, the bridge area around the anchor positions is the region where group photo objects can be fused. Then n anchor points are sampled at equal proportions within that region as the anchors actually used; the n anchor points may also be derived from the widths of the target objects actually participating. The expected positional relationship in the original configuration file is reused as the expected positional relationship of these n anchors, as sketched below.
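The equal-proportion sampling can be sketched as follows, assuming the fusible region has been detected as a horizontal span with a common baseline (region bounds and baseline values are invented for illustration):

```python
def interpolate_anchors(region_x0: int, region_x1: int, baseline_y: int,
                        n: int) -> list[tuple[int, int]]:
    """Place n anchors at equal spacing inside [region_x0, region_x1]."""
    step = (region_x1 - region_x0) / (n + 1)
    return [(round(region_x0 + step * (i + 1)), baseline_y) for i in range(n)]

print(interpolate_anchors(300, 1000, 900, 4))
# [(440, 900), (580, 900), (720, 900), (860, 900)]
```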
Step 402: the smart television 400a selects a group photo background image.
After receiving the background gallery issued by the server 400c, the smart television 400a selects from it the group photo background image to use as the virtual group photo background.
In another embodiment, when the target object wants to use its own background image, it can upload a local background image to the server 400c, and the configuration file for that image can be obtained semi-automatically or fully automatically.
For example, the target object may wish to take a virtual group photo with friends while travelling. The target object can capture an actual image of a group photo partner at the desired position, record the expected positional relationship at capture time, and mark the anchor point of each partner in the image. Alternatively, the server 400c may run human body detection on the background image to determine each partner's anchor point. A configuration file for the local background image can thus be generated to drive the subsequent group photo process.
In one possible embodiment, the smart television 400a provides a local album, and target object A transfers a custom picture prepared in advance as the virtual group photo background to the smart television 400a, saving it in the local album. When the server 400c issues the background gallery to the smart television 400a, the television reads the URL (Uniform Resource Locator) of the local album in response to target object A clicking the custom background option; target object A then selects the pre-stored custom picture from the local album and uploads it to the server 400c as the group photo background image.
In another embodiment, the group photo background image is associated only with multiple anchor points, i.e. it specifies which locations can hold group photo objects but not which object should correspond to which anchor. To better guide the target objects, the present disclosure provides a standing order of the different target objects to help them take the group photo.
A simple implementation is to select anchor positions for the different group photo objects at random, or to let the target object initiating the group photo arrange everyone's positions manually and notify the server 400c. To give a reasonable standing order, the embodiments of the disclosure propose that the order of the participating target objects can be determined from the social relationships between them.
For example, to obtain the social relationships between different target objects accurately, the present disclosure provides a virtual group photo application in which any target object can register information in advance. After registration, friends can be recommended to the target object based on the address book, and the target object can then refine its social relationships with different friends. Participants in a group photo may also be required to add each other as friends, specifying the social relationship (e.g., father-son, lovers, colleagues, classmates) when doing so.
In implementation, the server 400c reads the social relationship between target object A and target object B, constructs a social relationship graph accordingly, as shown in fig. 4C, and determines the relative standing-position order between target object A and target object B from the graph. For example, in an embodiment of the present disclosure, to enrich the applicable scenarios of the virtual group photo method, the server 400c may by default place the target object A that initiated the group photo at the central position, and determine the standing position of target object B relative to target object A based on the social relationship graph. The anchor point positions of target object A and target object B in the group photo background image are then determined according to this standing-position order.
In one possible embodiment, as shown in fig. 4D, the social relationship between target object A and target object B1 is a parent-child relationship, and the social relationship between target object A and target object B2 is also a parent-child relationship. The standing-position order corresponding to these relations places target object A in the middle of the group photo background image, target object B1 on the left side of target object A, and target object B2 on the right side of target object A. According to this relative order, the server 400c takes the middle anchor point as the portrait position of target object A, the anchor point to its left as the portrait position of target object B1, and the anchor point to its right as the portrait position of target object B2, and when synthesizing the virtual group photo fuses the portraits of target objects A, B1 and B2 into the positions shown in fig. 4E.
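The mapping from social relations to a standing-position order could be sketched as follows; the relation labels and the left/right offset table are illustrative assumptions, not the disclosure's actual rules.

```python
# A minimal sketch of deriving a standing-position order from social
# relationships. Offsets are relative to the initiator (0 = centre);
# negative = left, positive = right. The preference table is invented.
from typing import Dict, List

RELATION_OFFSETS = {
    "parent-child": [-1, 1],   # children flank the initiator
    "lovers":       [1],       # partner stands to the right
    "colleague":    [2, -2],   # colleagues at the outer positions
}

def order_participants(initiator: str, relations: Dict[str, str]) -> List[str]:
    """Map each participant to an anchor slot, returned left-to-right."""
    slots = {0: initiator}
    pools = {rel: list(offs) for rel, offs in RELATION_OFFSETS.items()}
    for person, rel in relations.items():
        candidates = pools.get(rel, [])
        offset = candidates.pop(0) if candidates else max(slots) + 1
        slots[offset] = person
    return [slots[k] for k in sorted(slots)]

print(order_participants("A", {"B1": "parent-child", "B2": "parent-child"}))
# -> ['B1', 'A', 'B2']  (B1 left of A, B2 right of A, as in fig. 4D)
```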
In addition, as described above, a target object may also select its standing position in the group photo according to preference. When sending the photo background gallery to the smart TV 400a, the server 400c may also attach a selection menu for choosing the position of the target object in the virtual group photo, from which the target object A may select its position in the group photo background image.
In the embodiment of the disclosure, in order to better guide the target objects, the server 400c also pre-stores a pose library for group photos. After obtaining the social relationships among all the target objects participating in the group photo, the server 400c can screen recommended poses from the pose library according to these relationships and recommend them to target object A and target object B.
In one possible embodiment, after learning that target object A and target object B are in a parent-child relationship, the server 400c screens the poses applicable to a parent-child group photo from the pose library and recommends them to target object A and target object B.
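A minimal sketch of this screening step, assuming a pose library whose entries are tagged with the social relations they suit (the library contents are invented for illustration):

```python
# A minimal sketch of screening recommended poses by social relationship.
POSE_LIBRARY = [
    {"name": "shoulder-ride", "relations": {"parent-child"}},
    {"name": "hand-in-hand",  "relations": {"parent-child", "lovers"}},
    {"name": "thumbs-up",     "relations": {"colleague", "classmate"}},
]

def recommend_poses(relation: str, top_k: int = 3) -> list[str]:
    matches = [p["name"] for p in POSE_LIBRARY if relation in p["relations"]]
    return matches[:top_k]

print(recommend_poses("parent-child"))   # -> ['shoulder-ride', 'hand-in-hand']
```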
Step 403: the server 400c determines the group photo background image and the group photo guide information to be delivered.
In implementation, based on the photo background image selected by the smart TV 400a, the server 400c determines, as the group photo guide information, the expected positional relationship information of each of the target objects participating in the group photo relative to their respective image acquisition devices, the relative standing-position order among the different target objects, and the recommended group photo poses, and issues this guide information to the smart TV 400b.
In consideration of the privacy of the target objects, in implementation the guide information issued to the smart TV 400b may be configured to contain the information of every target object participating in the group photo, browsable by each participant; or it may be configured to contain only the information of the local target object B of the smart TV 400b, viewable only by target object B.
In practice, one social relationship may correspond to multiple standing-position orders and group photo poses. Therefore, based on the photo background image and the social relationship, several candidate sets of guide information can be determined, and the top few can be recommended to the target object so that it may select the desired one; for example, the guide information may be selected according to user preference or frequency of use.
After determining the group photo guide information, the server 400c may push it to the other target objects participating in the group photo, and the smart TV of each target object guides that target object accordingly. In the guide information issued to different target objects, the expected positional relationship information may be set to include only that of the local target object, or that of all the target objects. Following step 403, the smart TV 400a and the smart TV 400b each guide their respective target objects to take the group photo.
Since the image capture operation performed by the smart TV 400a (at the group photo initiator) on target object A is the same as that performed by the smart TV 400b (at the group photo participant) on target object B, only the former is explained here.
In implementation, after receiving the group photo guide information issued by the server 400c, the smart TV 400a presents a group photo guide interface to target object A. The recommended pose is attached to the interface, and target object A may freely choose whether to adopt it. During image acquisition, the smart TV 400a monitors the actual positional relationship between target object A and the image acquisition device in real time; when the actual positional relationship does not match the expected positional relationship in the guide information, movement instruction information for guiding target object A is output based on the expected positional relationship. The instruction may be delivered by voice broadcast, for example "move forward" or "move backward", or angle hints such as "a little to the left" or "a little to the right" of the image acquisition device. In another embodiment, the movement instruction may be output through a menu bar, e.g., a text prompt telling the user how to move; guidance may also combine animated pictures with voice.
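The matching-and-guidance loop described above might look like the following sketch; the tolerance values, the sign convention for angles, and the message wording are assumptions, since the disclosure only requires that movement instructions be produced on a mismatch.

```python
# A minimal sketch of generating movement instructions from the measured
# distance/angle to the camera. Thresholds and messages are illustrative.
from dataclasses import dataclass

@dataclass
class PositionalRelation:
    distance_m: float   # distance between target object and camera
    angle_deg: float    # horizontal angle relative to the camera axis

def guide(actual: PositionalRelation, expected: PositionalRelation,
          dist_tol: float = 0.2, angle_tol: float = 5.0) -> list[str]:
    msgs = []
    d = actual.distance_m - expected.distance_m
    if d > dist_tol:
        msgs.append("Please move forward")
    elif d < -dist_tol:
        msgs.append("Please move backward")
    a = actual.angle_deg - expected.angle_deg
    if a > angle_tol:
        msgs.append("Please move a little to the left")
    elif a < -angle_tol:
        msgs.append("Please move a little to the right")
    return msgs or ["Hold still, position matched"]

print(guide(PositionalRelation(3.1, -8.0), PositionalRelation(2.5, 0.0)))
# -> ['Please move forward', 'Please move a little to the right']
```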
In addition, in order to ensure that a realistic virtual group photo can be obtained even if the target object does not stay within the range set by the expected positional relationship information during image capture, the user image can be optimized in the present disclosure. For example, when the smart TV 400a uploads the image of a target object, the actual positional relationship between the image acquisition device and target object A at capture time is obtained and reported to the server 400c as the final positional relationship information.
That is, in step 404: the smart TV 400a and the smart TV 400b upload the acquired images of their target objects together with the final positional relationship information to the server 400c.
In one possible embodiment, when performing the upload, each target object participating in the group photo can preview the acquired image before uploading it. In addition, to cover more application scenarios, each target object may issue an upload instruction by clicking while previewing the live image displayed by the image acquisition device; the smart TV 400a and the smart TV 400b respond to the instruction by performing the image acquisition operation and, after acquisition, upload the images to the server 400c.
Step 405: the server 400c receives the images uploaded by the smart TV 400a and the smart TV 400b and fuses the portraits in them.
After receiving the images of target object A and target object B, the server 400c reads the final positional relationship information associated with each image. If the difference between the final and the expected positional relationship information is not within the allowable tolerance range, the server scales the portrait in the acquired target object image to the portrait size corresponding to the expected positional relationship information according to the final positional relationship information, and fuses the scaled image into the group photo background image.
In one possible embodiment, the server 400c pre-stores scaling parameters for adjusting the portrait size, indexed by the difference between the final and the expected positional relationship information; each parameter adjusts the portrait of the target object to the size corresponding to the expected positional relationship information. After acquiring the final positional relationship information associated with the target image, the server 400c computes its difference from the expected positional relationship information. If the absolute value of the difference exceeds a preset threshold, the scaling parameter corresponding to the range containing the difference is looked up in the pre-stored parameters, the portrait is scaled by that parameter through an affine transformation, and the scaled portrait is fused into the group photo background image.
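A sketch of this threshold check, parameter lookup, and affine scaling follows, assuming invented bucket boundaries and scale factors; the disclosure specifies only that a pre-stored parameter selected by the difference is applied via an affine transformation.

```python
# A minimal sketch of the scale-parameter lookup and affine scaling step.
import cv2
import numpy as np

THRESHOLD_M = 0.2
# (lower bound of |difference| in metres, scale factor) - assumed values
SCALE_BUCKETS = [(0.2, 1.1), (0.5, 1.25), (1.0, 1.5)]

def rescale_portrait(portrait: np.ndarray, final_dist: float,
                     expected_dist: float) -> np.ndarray:
    diff = final_dist - expected_dist
    if abs(diff) <= THRESHOLD_M:
        return portrait                   # within tolerance, keep as-is
    scale = 1.0
    for bound, s in SCALE_BUCKETS:
        if abs(diff) >= bound:
            scale = s
    if diff < 0:                          # stood too close: shrink instead
        scale = 1.0 / scale
    h, w = portrait.shape[:2]
    # Pure scaling expressed as a 2x3 affine matrix
    M = np.float32([[scale, 0, 0], [0, scale, 0]])
    return cv2.warpAffine(portrait, M, (int(w * scale), int(h * scale)))

# assumes a cut-out portrait of target object A exists at this path
img = cv2.imread("target_object_a.png")
fused_ready = rescale_portrait(img, final_dist=3.2, expected_dist=2.5)
```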
In addition, to adapt to various application environments, the operations of synthesizing the image of the target object may also be performed independently by the terminal device 400a and/or the terminal device 400b.
In one possible embodiment, a user may select the terminal device with the better network speed to synthesize the portraits of the target objects. Taking terminal device 400a as the synthesizer as an example, terminal device 400b transmits the acquired image of its target object to terminal device 400a; terminal device 400a reads the final positional relationship information associated with the image, and if the difference between the final and the expected positional relationship information is not within the allowable tolerance range, scales the portrait in the image to the size corresponding to the expected positional relationship information and fuses it into the group photo background image. Finally, the fused virtual group photo is displayed.
Step 406: the fused virtual group photo is distributed to the smart TV 400a and the smart TV 400b.
Based on the same inventive concept, fig. 5 shows a virtual group photo method provided by an embodiment of the present disclosure. The method may be applied to a terminal device and includes:
Step 501: when taking a group photo with other terminal devices, acquiring expected positional relationship information associated with the group photo background image; the expected positional relationship information is used for representing the expected relative position between the target object participating in the group photo and the image acquisition device.
Step 502: guiding the target object to move according to the expected positional relationship information.
In a possible embodiment, the expected positional relationship information includes an expected distance and/or an expected angle between the target object and the image acquisition device;
and guiding the target object to move according to the expected positional relationship information includes:
when the actual positional relationship between the target object and the image acquisition device does not match the expected positional relationship, generating and outputting, based on the expected positional relationship, instruction information for guiding the target object to move.
In a possible embodiment, the group photo background image is further associated with position information of the target object in the group photo background image, and the method further includes:
if an image upload instruction is received, uploading the image of the target object to a server, so that the server fuses the target object into the group photo background image according to the position information of the target object in the group photo background image.
In a possible embodiment, the group photo background image is further associated with a recommended group photo pose for recommendation to the target object, and the method further includes:
outputting the recommended group photo pose, wherein the recommended group photo pose is determined according to the social relationship of the target object.
In one possible embodiment, before uploading the image of the target object to the server, the method further includes:
acquiring, as final positional relationship information, the actual positional relationship information between the image acquisition device and the target object at the time the image acquisition device acquires the image;
sending the final positional relationship information to the server, so that the server scales the image of the target object to the size corresponding to the expected positional relationship information according to the final positional relationship information and then fuses the target object into the group photo background image.
Based on the same inventive concept, fig. 6 shows a virtual group photo method provided by another embodiment of the present disclosure, applicable to a server and including:
Step 601: when a plurality of terminal devices take a group photo together, performing the following step for any one of the plurality of terminal devices.
Step 602: sending the expected positional relationship information associated with the preselected group photo background image to the terminal device; the expected positional relationship information is used for representing the expected relative position between the target object participating in the group photo and the image acquisition device.
In a possible embodiment, the group photo background image is further associated with position information of the target object in the group photo background image; after sending the expected positional relationship information associated with the preselected group photo background image to the terminal device, the method further includes:
receiving the image of the target object sent by the terminal device;
fusing the image of the target object into the group photo background image according to the position information of the target object in the group photo background image.
In one possible embodiment, before sending the expected positional relationship information associated with the preselected group photo background image to the terminal device, the method further includes:
acquiring a social relationship graph between the target object and the other photo subjects participating in the group photo;
determining the position information of the target object in the group photo background image according to the social relationship graph.
In a possible embodiment, at least one anchor point position associated with the group photo background image is pre-stored, and determining the position information of the target object in the group photo background image according to the social relationship graph includes:
deducing the relative standing-position order between the target object and the other photo subjects according to the social relationship graph;
determining, according to the correspondence between the at least one anchor point position associated with the group photo background image and the relative standing-position order, the anchor point position corresponding to the target object as the position information of the target object in the group photo background image.
In one possible embodiment, after acquiring the social relationship graph between the target object and the other photo subjects participating in the group photo, the method further includes:
determining a recommended group photo pose for the target object according to the social relations of the target object in the social relationship graph, and pushing the recommended group photo pose to the terminal device.
In one possible embodiment, when fusing the image of the target object into the group photo background image, the method further includes:
receiving the final positional relationship information between the target object and the image acquisition device uploaded by the terminal device, wherein the final positional relationship information is the actual positional relationship information between the image acquisition device and the target object at the time the terminal device controls the image acquisition device to acquire the image;
when the difference between the final positional relationship information and the expected positional relationship information is not within the allowable tolerance range, scaling the image of the target object to the size corresponding to the expected positional relationship information according to the final positional relationship information;
fusing the scaled image of the target object into the group photo background image.

Claims (12)

1. A terminal device, comprising: a display, an image collector, a memory and a controller, wherein:
the display is used for displaying information;
the image collector is used for collecting images;
the memory is used for storing a computer program executable by the controller;
the controller is connected with the display, the image collector and the memory respectively and is configured to:
when the terminal device takes a group photo with other terminal devices, acquire expected positional relationship information associated with a group photo background image; the expected positional relationship information comprises an expected distance and/or an expected angle between a target object and an image acquisition device, and is used for representing the expected relative position between the target object participating in the group photo and the image acquisition device;
when the actual positional relationship between the target object and the image acquisition device does not match the expected positional relationship, generate and output, based on the expected positional relationship, instruction information for guiding the target object to move.
2. The terminal device according to claim 1, wherein the group photo background image is further associated with position information of the target object in the group photo background image; and the controller is further configured to:
if an image upload instruction is received, upload the image of the target object to a server, so that the server fuses the target object into the group photo background image according to the position information of the target object in the group photo background image.
3. The terminal device according to claim 1, wherein the group photo background image is further associated with a recommended group photo pose for recommendation to the target object; and the controller is further configured to:
output the recommended group photo pose, wherein the recommended group photo pose is determined according to the social relationship of the target object.
4. The terminal device according to claim 2, wherein before uploading the image of the target object to the server, the controller is further configured to:
acquire, as final positional relationship information, the actual positional relationship information between the image acquisition device and the target object at the time the image acquisition device acquires the image;
send the final positional relationship information to the server, so that the server scales the image of the target object to the size corresponding to the expected positional relationship information according to the final positional relationship information and then fuses the target object into the group photo background image.
5. A server, comprising a memory and a processor, wherein:
the memory is used for storing a computer program executable by the processor;
the processor, connected to the memory, is configured to:
when a plurality of terminal devices take a group photo together, execute, for any one of the plurality of terminal devices:
sending expected positional relationship information associated with a preselected group photo background image to the terminal device, so that the terminal device, when determining that the actual positional relationship between a target object and an image acquisition device does not match the expected positional relationship, generates and outputs, based on the expected positional relationship, instruction information for guiding the target object to move; the expected positional relationship information comprises an expected distance and/or an expected angle between the target object and the image acquisition device, and is used for representing the expected relative position between the target object participating in the group photo and the image acquisition device.
6. The server according to claim 5, wherein the group photo background image is further associated with position information of the target object in the group photo background image; and after sending the expected positional relationship information associated with the preselected group photo background image to the terminal device, the processor is further configured to:
receive the image of the target object sent by the terminal device;
fuse the image of the target object into the group photo background image according to the position information of the target object in the group photo background image.
7. The server according to claim 6, wherein before sending the expected positional relationship information associated with the preselected group photo background image to the terminal device, the processor is further configured to:
acquire a social relationship graph between the target object and the other photo subjects participating in the group photo;
determine the position information of the target object in the group photo background image according to the social relationship graph.
8. The server according to claim 7, wherein at least one anchor point position associated with the group photo background image is pre-stored, and when determining the position information of the target object in the group photo background image according to the social relationship graph, the processor is configured to:
deduce the relative standing-position order between the target object and the other photo subjects according to the social relationship graph;
determine, according to the correspondence between the at least one anchor point position associated with the group photo background image and the relative standing-position order, the anchor point position corresponding to the target object as the position information of the target object in the group photo background image.
9. The server according to claim 7, wherein after acquiring the social relationship graph between the target object and the other photo subjects participating in the group photo, the processor is further configured to:
determine a recommended group photo pose for the target object according to the social relations of the target object in the social relationship graph, and push the recommended group photo pose to the terminal device.
10. The server according to claim 6, wherein when fusing the image of the target object into the group photo background image, the processor is further configured to:
receive the final positional relationship information between the target object and the image acquisition device uploaded by the terminal device, wherein the final positional relationship information is the actual positional relationship information between the image acquisition device and the target object at the time the terminal device controls the image acquisition device to acquire the image;
when the difference between the final positional relationship information and the expected positional relationship information is not within the allowable tolerance range, scale the image of the target object to the size corresponding to the expected positional relationship information according to the final positional relationship information;
fuse the scaled image of the target object into the group photo background image.
11. A virtual group photo method, comprising:
when a terminal device takes a group photo with other terminal devices, acquiring expected positional relationship information associated with a group photo background image; the expected positional relationship information comprises an expected distance and/or an expected angle between a target object and an image acquisition device, and is used for representing the expected relative position between the target object participating in the group photo and the image acquisition device;
when the actual positional relationship between the target object and the image acquisition device does not match the expected positional relationship, generating and outputting, based on the expected positional relationship, instruction information for guiding the target object to move.
12. A virtual group photo method, comprising:
when a plurality of terminal devices take a group photo together, performing, for any one of the plurality of terminal devices:
sending expected positional relationship information associated with a preselected group photo background image to the terminal device, so that the terminal device, when determining that the actual positional relationship between a target object and an image acquisition device does not match the expected positional relationship, generates and outputs, based on the expected positional relationship, instruction information for guiding the target object to move; the expected positional relationship information comprises an expected distance and/or an expected angle between the target object and the image acquisition device, and is used for representing the expected relative position between the target object participating in the group photo and the image acquisition device.
CN202011173504.5A 2020-10-28 2020-10-28 Terminal equipment, server and virtual photo combining method Active CN113489918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011173504.5A CN113489918B (en) 2020-10-28 2020-10-28 Terminal equipment, server and virtual photo combining method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011173504.5A CN113489918B (en) 2020-10-28 2020-10-28 Terminal equipment, server and virtual photo combining method

Publications (2)

Publication Number Publication Date
CN113489918A CN113489918A (en) 2021-10-08
CN113489918B true CN113489918B (en) 2024-06-21

Family

ID=77932581

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011173504.5A Active CN113489918B (en) 2020-10-28 2020-10-28 Terminal equipment, server and virtual photo combining method

Country Status (1)

Country Link
CN (1) CN113489918B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108632543A (en) * 2018-03-26 2018-10-09 广东欧珀移动通信有限公司 Method for displaying image, device, storage medium and electronic equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017039348A1 (en) * 2015-09-01 2017-03-09 Samsung Electronics Co., Ltd. Image capturing apparatus and operating method thereof
CN105578035B (en) * 2015-12-10 2019-03-29 联想(北京)有限公司 A kind of image processing method and electronic equipment
CN106331529A (en) * 2016-10-27 2017-01-11 广东小天才科技有限公司 Image capturing method and apparatus
CN106657791A (en) * 2017-01-03 2017-05-10 广东欧珀移动通信有限公司 Method and device for generating synthetic image
CN108933891B (en) * 2017-05-26 2021-08-10 腾讯科技(深圳)有限公司 Photographing method, terminal and system
CN107404617A (en) * 2017-07-21 2017-11-28 努比亚技术有限公司 A kind of image pickup method and terminal, computer-readable storage medium
KR20200121078A (en) * 2019-04-15 2020-10-23 남기원 Sharing system of image and method of the same

Also Published As

Publication number Publication date
CN113489918A (en) 2021-10-08

Similar Documents

Publication Publication Date Title
WO2021179359A1 (en) Display device and display picture rotation adaptation method
WO2021031623A1 (en) Display apparatus, file sharing method, and server
CN112073798B (en) Data transmission method and equipment
CN112533037B (en) Method for generating Lian-Mai chorus works and display equipment
CN110798622B (en) Shared shooting method and electronic equipment
CN113590059A (en) Screen projection method and mobile terminal
CN112073762A (en) Information acquisition method based on multi-system display equipment and multi-system display equipment
WO2021212470A1 (en) Display device and projected-screen image display method
CN114285986B (en) Method for shooting image by camera and display equipment
CN113395554B (en) Display device
CN114285985A (en) Method for determining preview direction of camera and display equipment
CN113115092B (en) Display device and detail page display method
CN112351334A (en) File transmission progress display method and display equipment
CN113489918B (en) Terminal equipment, server and virtual photo combining method
CN115250357B (en) Terminal device, video processing method and electronic device
CN113395600B (en) Interface switching method of display equipment and display equipment
CN111726695B (en) Display device and audio synthesis method
CN113115093B (en) Display device and detail page display method
CN112533023B (en) Method for generating Lian-Mai chorus works and display equipment
CN113473239B (en) Intelligent terminal, server and image processing method
CN113497958A (en) Display device and picture display method
CN111314739A (en) Image processing method, server and display device
CN111914511B (en) Remote file browsing method, intelligent terminal and display device
CN113497962B (en) Configuration method of rotary animation and display device
CN113825007B (en) Video playing method and device and display equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 266555, No. 218, Bay Road, Qingdao economic and Technological Development Zone, Shandong

Applicant after: Hisense Group Holding Co.,Ltd.

Address before: 266555, No. 218, Bay Road, Qingdao economic and Technological Development Zone, Shandong

Applicant before: QINGDAO HISENSE ELECTRONIC INDUSTRY HOLDING Co.,Ltd.

Country or region before: China

GR01 Patent grant