CN114302200A - Display device and photographing method triggered by user posture - Google Patents

Display device and photographing method triggered by user posture

Info

Publication number: CN114302200A
Application number: CN202110720108.8A
Authority: CN (China)
Prior art keywords: image, user, target person, target, group photo
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 杨鲁明 (Yang Luming), 程晋 (Cheng Jin), 马乐 (Ma Le), 刘兆磊 (Liu Zhaolei)
Current assignee: Hisense Visual Technology Co., Ltd. (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Hisense Visual Technology Co., Ltd.
Application filed by Hisense Visual Technology Co., Ltd.; priority to CN202110720108.8A; publication of CN114302200A

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to the technical field of display devices, and in particular to a display device and a photographing method triggered by user posture. It can alleviate, to some extent, the problem that when a user triggers photographing on a television with a remote controller, a voice command, or a gesture, the resulting picture often shows extraneous objects or abnormal body postures and fails to meet the user's expectations. The display device includes: a camera; a display; and a controller configured to: when a user is within the detection range of the camera, acquire the target person contained in the image; and when the similarity between the target person's current body posture and a target body posture reaches a preset threshold, control the user interface to display a group photo generated from a pose template image and the target person, where the group photo comprises the target person and a group photo object, and the target person appears in the group photo in the target person's current body posture.

Description

Display device and photographing method triggered by user posture
Technical Field
The present application relates to the technical field of display devices, and in particular to a display device and a photographing method triggered by user posture.
Background
Television photographing means that a smart television takes photos through an image collector, such as a camera, arranged on it, and displays the photos on the large, high-resolution, colorful television screen. The effect of a photo can thus be shown better through television photographing, which gives the smart television a family entertainment function and social value.
In some implementations of television photographing, the television is configured with a front-facing camera supporting the photographing function. When photographing, the user generally needs to operate a remote controller to trigger the television to take a picture; or to send a voice command to trigger it; or to make a fixed gesture to trigger it.
However, when the remote controller is used to trigger photographing, the remote controller appears in the captured picture; when a voice command is used, the user's mouth shape in the captured picture appears abnormal; and when a gesture is used, an extraneous body gesture of the user appears in the captured picture.
Disclosure of Invention
To solve the problem that, in a television photographing scene, pictures taken when the user triggers photographing with a remote controller, a voice command, or a gesture often show extraneous objects or abnormal body postures and fail to meet the user's expectations, the present application provides a display device and a photographing method triggered by user posture.
The embodiment of the application is realized as follows:
a first aspect of an embodiment of the present application provides a display device, including: the camera is used for acquiring images in the detection range of the camera; the display can display a photographic image and a swinging image comprising a first object and a photographic image, wherein the first object is used for showing the target limb gesture simulated by the user; a controller configured to: when the user is in the detection range of the camera, acquiring a target person contained in the image; and when the similarity between the current limb posture of the target person and the target limb posture reaches a preset threshold value, controlling a user interface to display a group photo generated based on the swinging photo and the target person, wherein the group photo comprises the target person and a group photo object, and the target person is displayed as the current limb posture of the target person in the group photo.
A second aspect of the embodiments of the present application provides a photographing method triggered by user posture, the method comprising: when a user is within the detection range, acquiring the target person contained in the user image captured by the camera; and when the similarity between the target person's current body posture and the target body posture of the first object in the pose template image reaches a preset threshold, displaying a group photo generated from the pose template image and the target person, where the group photo comprises the target person and the group photo object, and the target person appears in the group photo in the target person's current body posture.
Beneficial effects of the application: by constructing a pose template image containing the first object, a target body posture can be provided for the user to imitate; further, by acquiring the target person, the user's current body posture can be obtained and the automatic photographing logic triggered; further, whether to trigger automatic photographing can be decided from the similarity between the current body posture and the target body posture; and further, generating the group photo from the pose template image and the target person enables automatic group photo acquisition and reduces unwanted remote controllers, abnormal user mouth shapes, and extraneous body postures in the group photo.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 illustrates a usage scenario of a display device according to some embodiments;
fig. 2 illustrates a hardware configuration block diagram of the control apparatus 100 according to some embodiments;
fig. 3 illustrates a hardware configuration block diagram of the display apparatus 200 according to some embodiments;
FIG. 4 illustrates a software configuration diagram in the display device 200 according to some embodiments;
FIG. 5 illustrates an icon control interface display of an application in display device 200, in accordance with some embodiments;
FIG. 6A is a schematic diagram illustrating a user interface of a display device according to an embodiment of the present application;
FIG. 6B is a schematic diagram illustrating a display device user interface according to another embodiment of the present application;
FIG. 7 is a schematic diagram of a user interface for a pose template image according to an embodiment of the application;
FIG. 8A is a schematic diagram of an automatic group photo user interface implemented on a television according to an embodiment of the present application;
FIG. 8B is a schematic diagram of an automatic group photo user interface implemented on a television according to another embodiment of the present application;
FIG. 8C is a schematic diagram of an automatic group photo user interface implemented on a television according to another embodiment of the present application;
FIG. 8D is a schematic diagram of an automatic group photo user interface implemented on a television according to another embodiment of the present application;
FIG. 8E is a schematic diagram of an automatic group photo user interface implemented on a television according to another embodiment of the present application;
FIG. 9A illustrates a schematic diagram of an automatic group photo countdown user interface according to an embodiment of the present application;
FIG. 9B is a schematic diagram illustrating an automatic group photo countdown user interface according to another embodiment of the present application;
FIG. 10A shows a schematic view of a pose template image configuration user interface of another embodiment of the present application;
FIG. 10B is a schematic diagram of a pose template image configuration user interface of another embodiment of the present application;
FIG. 11A is a schematic diagram illustrating a display device identifying a user's limb according to another embodiment of the present application;
fig. 11B is a schematic diagram illustrating a display device identifying a limb of a user according to another embodiment of the present application.
Detailed Description
To make the purpose and embodiments of the present application clearer, the following will clearly and completely describe the exemplary embodiments of the present application with reference to the attached drawings in the exemplary embodiments of the present application, and it is obvious that the described exemplary embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
It should be noted that the brief descriptions of the terms in the present application are only for the convenience of understanding the embodiments described below, and are not intended to limit the embodiments of the present application. These terms should be understood in their ordinary and customary meaning unless otherwise indicated.
The terms "first," "second," "third," and the like in the description and claims of this application and in the above-described drawings are used for distinguishing between similar or analogous objects or entities and not necessarily for describing a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment. As shown in fig. 1, a user may operate the display apparatus 200 through the smart device 300 or the control device 100.
In some embodiments, the control apparatus 100 may be a remote controller; communication between the remote controller and the display device includes infrared protocol communication, Bluetooth protocol communication, or other short-range communication methods, and the remote controller controls the display device 200 wirelessly or by other wired means. The user may input user instructions through keys on the remote controller, voice input, control panel input, etc., to control the display apparatus 200.
In some embodiments, the smart device 300 (e.g., mobile terminal, tablet, computer, laptop, etc.) may also be used to control the display device 200. For example, the display device 200 is controlled using an application program running on the smart device.
In some embodiments, the display device 200 may also be controlled in a manner other than the control apparatus 100 and the smart device 300, for example, the voice command control of the user may be directly received by a module configured inside the display device 200 to obtain a voice command, or may be received by a voice control device provided outside the display device 200.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may be communicatively connected through a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various content and interactions to the display device 200. The server 400 may be one cluster or a plurality of clusters, and may include one or more types of servers.
Fig. 2 exemplarily shows a block diagram of a configuration of the control apparatus 100 according to an exemplary embodiment. As shown in fig. 2, the control device 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive an input operation instruction from a user and convert the operation instruction into an instruction recognizable and responsive by the display device 200, serving as an interaction intermediary between the user and the display device 200.
Fig. 3 shows a hardware configuration block diagram of the display apparatus 200 according to an exemplary embodiment.
In some embodiments, the display apparatus 200 includes at least one of a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, a user interface.
In some embodiments the controller comprises a processor, a video processor, an audio processor, a graphics processor, a RAM, a ROM, a first interface to an nth interface for input/output.
In some embodiments, the display 260 includes a display screen component for presenting pictures and a driving component for driving image display, and receives image signals output by the controller to display video content, image content, menu manipulation interfaces, and user manipulation UI interfaces.
In some embodiments, the display 260 may be a liquid crystal display, an OLED display, and a projection display, and may also be a projection device and a projection screen.
In some embodiments, communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi module, a bluetooth module, a wired ethernet module, and other network communication protocol chips or near field communication protocol chips, and an infrared receiver. The display apparatus 200 may establish transmission and reception of control signals and data signals with the external control apparatus 100 or the server 400 through the communicator 220.
In some embodiments, the user interface may be configured to receive control signals from the control apparatus 100 (e.g., an infrared remote controller, etc.).
In some embodiments, the detector 230 is used to collect signals of the external environment or interaction with the outside. For example, detector 230 includes a light receiver, a sensor for collecting ambient light intensity; alternatively, the detector 230 includes an image collector, such as a camera, which may be used to collect external environment scenes, attributes of the user, or user interaction gestures, or the detector 230 includes a sound collector, such as a microphone, which is used to receive external sounds.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, and the like. The interface may be a composite input/output interface formed by the plurality of interfaces.
In some embodiments, the tuner demodulator 210 receives broadcast television signals via wired or wireless reception and demodulates audio/video signals and EPG data signals from among a plurality of wireless or wired broadcast television signals.
In some embodiments, the controller 250 and the tuner demodulator 210 may be located in different separate devices; that is, the tuner demodulator 210 may also be located in a device external to the main device where the controller 250 is located, such as an external set-top box.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored in memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command for selecting a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the object may be any one of selectable objects, such as a hyperlink, an icon, or other actionable control. The operations related to the selected object are: displaying an operation connected to a hyperlink page, document, image, or the like, or performing an operation of a program corresponding to the icon.
In some embodiments, the controller comprises at least one of a Central Processing Unit (CPU), a video processor, an audio processor, a Graphics Processing Unit (GPU), a Random Access Memory (RAM), a Read-Only Memory (ROM), first to nth input/output interfaces, a communication bus (Bus), and the like.
The CPU processor is used to execute operating system and application program instructions stored in the memory and to run various applications, data, and content according to interactive instructions received from external input, so as to finally display and play various audio-video content. The CPU processor may include a plurality of processors, e.g., a main processor and one or more sub-processors.
In some embodiments, a graphics processor for generating various graphics objects, such as: icons, operation menus, user input instruction display graphics, and the like. The graphic processor comprises an arithmetic unit, which performs operation by receiving various interactive instructions input by a user and displays various objects according to display attributes; the system also comprises a renderer for rendering various objects obtained based on the arithmetic unit, wherein the rendered objects are used for being displayed on a display.
In some embodiments, the video processor is configured to receive an external video signal and, according to the standard codec protocol of the input signal, perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis, so as to obtain a signal that can be displayed or played directly on the display device 200.
In some embodiments, the video processor includes a demultiplexing module, a video decoding module, an image synthesis module, a frame rate conversion module, a display formatting module, and the like. The demultiplexing module demultiplexes the input audio/video data stream. The video decoding module processes the demultiplexed video signal, including decoding and scaling. The image synthesis module superimposes and mixes the GUI signal, input by the user or generated by the graphics generator, with the scaled video image, to produce an image signal for display. The frame rate conversion module converts the frame rate of the input video. The display formatting module converts the received video output signal after frame rate conversion into a signal conforming to the display format, such as an output RGB data signal.
In some embodiments, the audio processor is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform noise reduction, digital-to-analog conversion, and amplification processing to obtain an audio signal that can be played in the speaker.
In some embodiments, a user may enter user commands on a Graphical User Interface (GUI) displayed on display 260, and the user input interface receives the user input commands through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface receives the user input command by recognizing the sound or gesture through the sensor.
In some embodiments, a "user interface" is a media interface for interaction and information exchange between an application or operating system and a user that enables conversion between an internal form of information and a form that is acceptable to the user. A commonly used presentation form of the User Interface is a Graphical User Interface (GUI), which refers to a User Interface related to computer operations and displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in the display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
In some embodiments, a system of a display device may include a Kernel (Kernel), a command parser (shell), a file system, and an application program. The kernel, shell, and file system together make up the basic operating system structure that allows users to manage files, run programs, and use the system. After power-on, the kernel is started, kernel space is activated, hardware is abstracted, hardware parameters are initialized, and virtual memory, a scheduler, signals and interprocess communication (IPC) are operated and maintained. And after the kernel is started, loading the Shell and the user application program. The application program is compiled into machine code after being started, and a process is formed.
Referring to fig. 4, in some embodiments, the system is divided into four layers, which are, from top to bottom, an Application (Applications) layer (referred to as an "Application layer"), an Application Framework (Application Framework) layer (referred to as a "Framework layer"), an Android runtime (Android runtime) layer and a system library layer (referred to as a "system runtime library layer"), and a kernel layer.
In some embodiments, at least one application program runs in the application program layer, and the application programs may be windows (windows) programs carried by an operating system, system setting programs, clock programs or the like; or an application developed by a third party developer. In particular implementations, the application packages in the application layer are not limited to the above examples.
The framework layer provides an Application Programming Interface (API) and a programming framework for the applications. The application framework layer includes a number of predefined functions and acts as a processing center that decides how the applications in the application layer act. Through the API interface, an application can access system resources and obtain system services during execution.
As shown in fig. 4, in the embodiment of the present application, the application framework layer includes a manager (Managers), a Content Provider (Content Provider), and the like, where the manager includes at least one of the following modules: an Activity Manager (Activity Manager) for interacting with all activities running in the system; a Location Manager (Location Manager) for providing system services or applications with access to the system location service; a Package Manager (Package Manager) for retrieving various information related to the application packages currently installed on the device; a Notification Manager (Notification Manager) for controlling the display and clearing of notification messages; and a Window Manager (Window Manager) for managing the icons, windows, toolbars, wallpapers, and desktop components on a user interface.
In some embodiments, the activity manager is used to manage the lifecycle of the various applications as well as general navigational fallback functions, such as controlling exit, opening, fallback, etc. of the applications. The window manager is used for managing all window programs, such as obtaining the size of a display screen, judging whether a status bar exists, locking the screen, intercepting the screen, controlling the change of the display window (for example, reducing the display window, displaying a shake, displaying a distortion deformation, and the like), and the like.
In some embodiments, the system runtime layer provides support for the upper layer, i.e., the framework layer; when the framework layer is used, the Android operating system runs the C/C++ library included in the system runtime layer to implement the functions to be implemented by the framework layer.
In some embodiments, the kernel layer is a layer between hardware and software. As shown in fig. 4, the kernel layer includes at least one of the following drivers: audio driver, display driver, Bluetooth driver, camera driver, Wi-Fi driver, USB driver, HDMI driver, sensor driver (such as fingerprint sensor, temperature sensor, pressure sensor, etc.), and power driver.
In some embodiments, the display device may directly enter the interface of a preset video-on-demand program after being activated. The interface of the video-on-demand program may include at least a navigation bar 510 and a content display area located below the navigation bar 510, as shown in fig. 5, where the content displayed in the content display area may change according to the selected control in the navigation bar. Programs in the application layer may be integrated into the video-on-demand program and displayed through a control of the navigation bar, or may be displayed after the application control in the navigation bar is selected.
In some embodiments, the display device may directly enter a display interface of a signal source selected last time after being started, or a signal source selection interface, where the signal source may be a preset video-on-demand program, or may be at least one of an HDMI interface, a live tv interface, and the like, and after a user selects different signal sources, the display may display contents obtained from different signal sources.
The embodiments of the present application can be applied to various types of display devices (including but not limited to smart televisions, liquid crystal televisions, and the like). The display device and the photographing method triggered by user posture are explained below, taking as an example a control scheme and user interface with which a smart television realizes posture-triggered photographing.
FIG. 6A shows a schematic diagram of a user interface of a display device according to an embodiment of the present application.
The display device provided by the application comprises a display for displaying a user interface and a controller. In some embodiments, the application UI may include, for example, four applications installed on the television, such as news headlines, on-demand theater, automatic group photo, and karaoke. By moving the focus on the display screen with a control device such as a remote controller, different applications or other function buttons may be selected.
In some embodiments, the user interface may also present interactive elements, which may include, for example, television home page controls, search controls, message button controls, mailbox controls, browser controls, favorites controls, signal bar controls, etc.; by moving the focus on the display screen with a control device such as a remote controller, different applications or function buttons may be selected.
To improve the convenience and visual appeal of the television UI, in some embodiments the controller of the display device controls the television UI in response to the manipulation of an interactive element. For example, when the user clicks the automatic group photo icon with a control device such as a remote controller, the configuration user interface of the automatic group photo application may be activated and the application's UI displayed on top of the global UI, thereby controlling the application component UI mapped by the interactive element to be enlarged, or run and displayed in full screen.
In some embodiments, the interactive element may also be operated through a sensor, which may be, but is not limited to, an acoustic input sensor, such as a microphone configured on the display device, which can detect voice commands that include an indication of the desired interactive element. For example, the user may identify the desired interactive element by saying "launch automatic group photo" or any other suitable identification, and may also state that the desired action associated with that element is to be performed. The controller may recognize the voice command and submit data characterizing the interaction to the UI or its processing component or engine.
FIG. 6B is a schematic diagram of a display device user interface according to another embodiment of the present application.
In some embodiments, the user may control the focus on the display screen via the remote controller and select the automatic group photo icon so that the icon is highlighted on the user interface of the television display; then, by clicking the highlighted icon, the configuration user interface of the automatic group photo application mapped by the icon can be opened.
It should be noted that, in the embodiments of the present application, the icons and text of the UI are described only as examples of the display device and the photographing method triggered by user posture; the icons and text of the UI in the drawings may also be implemented as other content, which the drawings of the present application do not specifically limit.
FIG. 7 shows a schematic diagram of a user interface for a pose template image according to an embodiment of the present application.
In some embodiments, the display of the smart television may present a user interface that can show the pose template image and the group photo image referred to in this application.
The pose template image comprises a first object and a group photo object. For example, the group photo object may be configured as the user's favorite celebrity, such as the poet Tagore, and the first object shows the target body posture for the user to imitate; that is, when the user's imitation of the first object's body posture meets the smart television's preset condition, the smart television composes a photo, i.e., the group photo image mentioned in this application, thereby realizing the effect of a group photo of the user with the group photo object.
It can be understood that the smart television provided in this application can capture a group photo image including the user and the group photo object when the user imitates the target body posture of the first object.
For example, the pose template image shows the group photo object making a heart ("love") gesture, i.e., the target body posture mentioned in this application, and the first object of the pose template image making the same gesture. After the television starts the automatic group photo, when a user in front of the television strikes a body posture similar to that of the first object, the television's photographing function is triggered automatically and a group photo of the user with the group photo object is generated, in which the user and the group photo object both keep that body posture, commonly called a "pose".
In some embodiments, the first object contained in the pose template image of fig. 7 may be displayed as an outline image in the television user interface, and the user can take a group photo with the group photo object by imitating the target body posture shown by the outline image. In some embodiments, while the user learns the target body posture of the first object in the pose template image, the real-time image of the user in front of the television can be displayed within the outline in the user interface, so that the user can better imitate and practice the target body posture of the first object. It is noted that in some embodiments the first object contained in the fig. 7 pose template image may be displayed as a photograph of a specific example person in the television user interface.
It should be noted that although the pose template image described in this application includes a first object and a group photo object, nothing prevents replacing the group photo object with a pet, an animal, a plant, a building, or another object of interest.
Fig. 8A is a schematic diagram illustrating an automatic group photo user interface implemented on a television according to an embodiment of the present application.
In some embodiments, the camera configured on the television may acquire user images within its detection range; that is, when the automatic group photo function is on, as soon as a user enters the camera's detection range, the camera automatically acquires the image of the user in front of the television.
In some embodiments, the display device provided by the application comprises a controller; when the user is within the camera detection range, the controller can process the user image acquired by the camera to obtain the target person contained in it, i.e., the automatic group photo user located in front of the television within the camera detection range.
In some embodiments, the controller obtains the current body posture of the target person, i.e., the current body posture of the user in front of the television. When the similarity between this posture and the target body posture of the first object in the pose template image reaches a preset threshold, it can be determined that the user is imitating the first object in the pose template image, i.e., the user is experiencing the automatic group photo at the current moment. The controller then controls the user interface to display a group photo generated from the pose template image and the user image: the first object in the pose template image is replaced by the target person acquired by the controller, and the group photo shows the user's body posture at the current moment.
The group photo may be generated from the pose template image and the target person either by replacing the first object in the pose template image with the target person, or by compositing multiple layers containing the group photo object and the target person respectively; the former generates the group photo by processing the pose template image alone, whereas the latter generates it by compositing several images.
As shown in fig. 8A, the user Zhang San is experiencing the automatic group photo function, and the user interface displays the pose template image at the current moment. Zhang San's initial posture is the left hand raised with the right hand hanging down, and he then changes to the right hand raised with the left hand hanging down; in both cases the user interface still displays the pose template image, and the television does not automatically capture or display a group photo.
It can be understood that after Zhang San changes his posture, his body posture is still not similar to the target body posture of the first object in the pose template image. The television controller obtains the target person and the target person's current body posture from the user image acquired by the camera, compares the current body posture with the target body posture to obtain the similarity, and then compares the similarity with the preset threshold to judge whether the user's current imitation of the first object's target body posture meets the condition for automatic photographing.
In some embodiments, as shown in FIG. 8B, the user Zhang San continues to change his body movements: by adjusting his upper limbs, Zhang San makes the heart gesture. At this moment the user interface no longer displays the previous pose template image containing the first object and the group photo object, but instead displays a group photo containing Zhang San and the group photo object.
It can be understood that after Zhang San changes his body posture to the heart gesture in fig. 8B, his body posture is similar to the target body posture of the first object in the pose template image. The television controller obtains the target person Zhang San and his current body posture from the user image acquired by the camera, compares the current body posture with the target body posture to obtain the similarity, and compares the similarity with the preset threshold; for example, if the preset threshold is 80% and the similarity is 85%, it determines that the user's current imitation of the first object's target body posture meets the condition for automatic photographing, and composes the user image acquired by the camera with the pose template image to obtain the group photo shown in the television user interface in fig. 8B.
It is understood that before the controller controls the user interface to display the group photo generated from the pose template image and the user image, the user interface may display the pose template image containing the first object and the group photo object. During the automatic group photo process, the television screen then always displays the pose template image, so the user can imitate the target body posture of the first object more accurately.
It should be noted that, in some embodiments, the user interface may not display the pose template image before the controller controls it to display the group photo generated from the pose template image and the user image. That is, the smart television may be playing TV normally or performing a system configuration operation, for example showing the signal source configuration user interface, as shown in fig. 8C; as long as the television's automatic group photo function is on, once the user Zhang San enters the camera detection area and the similarity between his current body posture and the first object's target body posture reaches the preset threshold, the television controller generates the group photo image in the background and displays it in the user interface.
In some embodiments, when the similarity between the user's current body posture and the target body posture reaches a second preset threshold, the controller controls the user interface to display an auxiliary dotted line until the group photo is obtained, where the auxiliary dotted line prompts the user to strike the corresponding target body posture. The auxiliary dotted line is not shown in the finally acquired group photo image.
When a user enters the camera detection range to take a picture and the user's current body posture approaches the body posture of the first object in the pose template image, a posture-outline auxiliary dotted line can be displayed on the television screen to help the user strike the corresponding posture. When the group photo trigger condition is met and the countdown photographing starts, the auxiliary dotted line disappears, so the posture dotted line is not captured into the group photo.
For example, when the similarity between the current body posture and the target body posture reaches 40%, the controller controls the user interface to display the auxiliary dotted line, which is embodied as a dotted-line first object, and the controller overlays the auxiliary dotted line on the uppermost layer of the user interface. In the system configuration interface, for instance, the controller may overlay the auxiliary dotted line on the configuration screen, as shown in fig. 8D.
The user adjusts the body posture following the prompt of the auxiliary dotted line; when the similarity between the current body posture and the target body posture reaches 80%, the controller controls the user interface to display prompt information, for example "posture recognized successfully", and controls the camera to acquire the user image at that moment so as to compose the group photo, with the user interface shown in fig. 8E.
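To make the two-threshold flow above concrete, the following is a minimal sketch, not taken from the patent, of how a controller might switch between showing the auxiliary dotted line and triggering capture; the threshold constants follow the 40%/80% example values, and the `ui` object and its methods are illustrative assumptions:

    # Hypothetical sketch of the two-threshold trigger flow described above.
    GUIDE_THRESHOLD = 0.40    # example value: show the auxiliary dotted outline
    CAPTURE_THRESHOLD = 0.80  # example value: trigger the automatic group photo

    def on_new_preview_frame(similarity: float, ui) -> None:
        """Called per preview frame with the posture similarity in [0, 1]."""
        if similarity >= CAPTURE_THRESHOLD:
            ui.hide_guide_outline()        # the dotted line must not enter the photo
            ui.show_prompt("Posture recognized successfully")
            ui.start_capture_countdown(3)  # countdown, then compose the group photo
        elif similarity >= GUIDE_THRESHOLD:
            ui.show_guide_outline()        # dashed first-object outline, topmost layer
        else:
            ui.hide_guide_outline()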
It should be noted that when the similarity reaches the preset threshold, the controller may automatically acquire the user image to generate the group photo. In some embodiments, when the similarity reaches the preset threshold, the television may instead be configured to capture the user image and generate the group photo immediately upon detecting a voice command or a body command. In some embodiments, the controller may further control the user interface to display operation keys such as photograph, scene, prop, and magic show, so that the user can manually control group photo acquisition with the remote controller.
FIG. 9A illustrates a schematic diagram of an automatic group photo countdown user interface according to an embodiment of the present application.
In some embodiments, after the user Zhang San enters the camera detection area, when the television controller determines that the obtained similarity reaches the preset threshold, the controller controls the user interface to display a countdown value; the countdown prompts the user that when it finishes, the camera will take the picture and the group photo image will be composed.
For example, the countdown totals 3 seconds, and the user interface changes from initially displaying 2 seconds, to 1 second, and finally to 0 seconds. When it reaches 0 seconds, the controller controls the smart television's camera to acquire a user image containing the target person Zhang San; it then generates, through an image algorithm based on the pose template image and the user image, a group photo image containing the user Zhang San and the group photo object, the poet Tagore, and displays the group photo image on the user interface.
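A minimal sketch of this countdown-then-capture step; the `camera` and `ui` objects and the `compose_group_photo` helper (sketched further below) are assumptions, and the 3-second total follows the example:

    import time

    def capture_after_countdown(camera, ui, pose_template, seconds: int = 3):
        """Show a countdown, then grab the user image and compose the group photo."""
        for remaining in range(seconds - 1, -1, -1):  # displays 2, 1, 0 as in the example
            ui.show_countdown(remaining)
            time.sleep(1)
        user_image = camera.capture()  # user image containing the target person
        group_photo = compose_group_photo(pose_template, user_image)  # assumed helper
        ui.show_image(group_photo)
        return group_photo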
In some embodiments, the controller may generate the group photo from the pose template image and the user image as follows. The controller can copy the group photo object separately, or draw it in an independent layer, and draw the user-image target person that replaces the first object of the pose template image in another independent layer; by configuring the transparency of the two independent layers and using overlay display, the group photo is generated.
For example, after acquiring the user image from the camera, the controller draws the target person contained in the user image at the position of the first layer corresponding to the first object, where the similarity between the drawn target person's body posture and the target body posture has reached the preset threshold. The controller then draws the group photo object at the position of the second layer corresponding to the group photo object, where the group photo object is still displayed in its body posture from the pose template image. The controller then controls the second layer to be overlaid on the first layer in the user interface in a texture blending mode, so as to generate the group photo.
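As a rough illustration of this two-layer compositing, here is a sketch using the Pillow library; the transparent cutouts of the target person and the group photo object and their positions are assumed to be available from earlier processing steps:

    from PIL import Image

    def compose_group_photo(template: Image.Image,
                            person_cutout: Image.Image, person_pos: tuple,
                            partner_cutout: Image.Image, partner_pos: tuple) -> Image.Image:
        """Draw the target person on a first layer at the first object's position,
        draw the group photo object on a second layer, then overlay the second
        layer on the first (alpha blending) over the template background."""
        size = template.size
        first_layer = Image.new("RGBA", size, (0, 0, 0, 0))
        first_layer.paste(person_cutout, person_pos, person_cutout)  # RGBA cutout is its own mask
        second_layer = Image.new("RGBA", size, (0, 0, 0, 0))
        second_layer.paste(partner_cutout, partner_pos, partner_cutout)
        composed = Image.alpha_composite(template.convert("RGBA"), first_layer)
        return Image.alpha_composite(composed, second_layer)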
In some embodiments, after determining that the obtained similarity reaches the preset threshold, the television controller controls the user interface to display a highlighted countdown value; the user interface may further display a prompt such as "automatic photographing 3 seconds after the countdown starts" to remind the user to hold the current posture, as shown in fig. 9B.
FIG. 10A shows a schematic view of a pose template image configuration user interface of another embodiment of the present application.
In some embodiments, still taking Zhang San operating the smart television as an example: after Zhang San starts the configuration user interface of the automatic group photo application, the television reacquires the user's body posture, and the controller can replace the first object in fig. 10A, so that group photos with the group photo object can be realized in different body postures.
For example, Zhang San's initial posture is the right hand raised with the left hand hanging down; he then changes to the left hand raised with the right hand hanging down and confirms the operation with the remote controller. The controller then acquires the user image captured by the camera to obtain Zhang San's body posture at that moment, i.e., the target body posture of the second object in the new pose template image, as shown in fig. 10B. When the controller renders the acquired second object in the user interface, the user can move the second object with the remote controller and determine a suitable position relative to the group photo object; after the position of the second object is determined, the controller controls the user interface to no longer display the first object, thereby obtaining the final new pose template image with a different effect.
Comparing FIG. 10B with FIG. 10A, it can be seen that the second object is located on the right side of the group photo object in the new pose template image of fig. 10B, whereas the first object in the original pose template image of fig. 10A is located on the left side of the group photo object.
Fig. 11A is a schematic diagram illustrating a display device identifying a limb of a user according to another embodiment of the present application.
In some embodiments, the smart television controller may obtain the instant limb positioning points of the target person based on the user image, where the instant limb positioning points correspond to the current body posture of the target person, i.e., the current body posture of the user in front of the television.
For example, after the television starts the automatic group photo, i.e., the related limb detection function, the controller captures a preview image from the camera preview FrameBuffer and sends the preview image to the limb detection application; key information of the limb positioning points is returned through image processing. The number of instant limb positioning points the controller obtains may be 18, or 9, or another number configured according to the required accuracy and computation rate.
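The nine-point configuration in this example might be represented as follows; this is a sketch, and `detect_limbs` is passed in as a stand-in for whatever limb detection application the television actually uses:

    from typing import Callable, Dict, Tuple

    # Nine-point model from the example: point name -> (x, y) in preview pixels.
    NINE_POINTS = ("left_hand", "left_elbow", "left_shoulder", "neck",
                   "right_shoulder", "right_elbow", "right_hand",
                   "left_waist", "right_waist")

    def get_instant_limb_points(preview_frame,
                                detect_limbs: Callable) -> Dict[str, Tuple[int, int]]:
        """Return the user's instant limb positioning points for one preview frame."""
        raw = detect_limbs(preview_frame)  # stand-in for the limb detection application
        return {name: raw[name] for name in NINE_POINTS}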
In some embodiments, the controller obtains the similarity between the instant limb positioning points and the first object's target limb positioning points in the pose template image by contact-degree matching, and then compares the similarity with the preset threshold to determine whether the photographing condition is met.
For example, the controller performs contact-degree matching between the limb information detected in the user image and the posture information of the first object diagram in the pose template image. Contact is determined by the error between a limb keypoint identified by image detection and the expected limb keypoint: if the error is within a certain range, the keypoint is considered in contact with the expected keypoint. When the similarity between the user's posture in the preview and the first object's posture in the diagram exceeds 80%, automatic photographing is triggered.
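A sketch of this contact-degree matching rule in pure Python; the pixel tolerance is an assumed parameter, and the 80% trigger follows the example:

    import math

    def contact_ratio(detected: dict, expected: dict, tolerance_px: float = 30.0) -> float:
        """Fraction of limb keypoints whose distance to the expected (template)
        keypoint is within tolerance_px; such keypoints count as 'in contact'."""
        in_contact = sum(
            1 for name, (x, y) in detected.items()
            if math.hypot(x - expected[name][0], y - expected[name][1]) <= tolerance_px
        )
        return in_contact / len(detected)

    # Automatic photographing is triggered once the similarity exceeds 80%, e.g.:
    # if contact_ratio(instant_points, template_points) > 0.80: start_countdown()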
In some embodiments, the controller identifies the left elbow as the first positioning point, the right elbow as the second positioning point, and the other joints as third positioning points, where the third positioning points are located at the left hand, and/or left shoulder, and/or neck, and/or right shoulder, and/or right hand, and/or left waist, and/or right waist of the target user in the images acquired by the camera.
For example, the user's left hand is identified as point 1, left arm elbow as point 2, left shoulder as point 3, neck as point 4, right shoulder as point 5, right arm elbow as point 6, right hand as point 7, left waist as point 8, and right waist as point 9, as shown in FIG. 11B.
Here the first positioning point is implemented as point 2, the second positioning point as point 6, and the third positioning points may be implemented as one of the remaining points or a combination of several of them. The controller identifies these points in the acquired images, tracks them, and obtains the current user's instant limb positioning points from them, for example from the 9 points.
In some embodiments, for the captured images received from the camera, the controller extracts a single frame from the captured video every predetermined number of frames (e.g., every 90 frames). A human body detection algorithm from image recognition (such as a Convolutional Pose Machine) is applied to detect each joint point of the human body, and the range formed by the joint points is determined as the body frame of that person. By performing human body detection on the single frames and observing the position changes of the points, the controller determines the target user's direction of motion, whether the target user's upper torso is completely within the acquisition area, and whether the target user has left the acquisition area.
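A sketch of this periodic single-frame sampling; the 90-frame interval follows the example, and `detect_body_box` is an assumed stand-in for the joint-point detector (e.g., a Convolutional Pose Machine) returning a body frame or None:

    SAMPLE_INTERVAL = 90  # extract one single frame every 90 frames, as in the example

    def monitor_target_user(frames, acquisition_box, detect_body_box):
        """Sample single frames, run body detection on each, and report whether
        the detected body frame stays inside the acquisition area."""
        for index, frame in enumerate(frames):
            if index % SAMPLE_INTERVAL:
                continue
            box = detect_body_box(frame)  # (left, top, right, bottom) or None
            if box is None:
                yield index, "left_area"
            else:
                l, t, r, b = box
                al, at, ar, ab = acquisition_box
                inside = al <= l and at <= t and r <= ar and b <= ab
                yield index, "inside" if inside else "partially_outside"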
In some embodiments, the user's upper-body edge map is obtained by connecting the user's upper-body torso feature points in the captured video in sequence according to a predetermined connection rule (e.g., connecting the left shoulder to the right shoulder, then the right shoulder to the right elbow, and then the right elbow to the right hand). The upper-body edge map of each family member is obtained in the same way.
For the user image of a certain family member, the similarity between each line of that member's upper-body edge map and the first object's target limb positioning points is determined from the line's angle. For example: for family member A, the angle of the line drawn through the left hand, left elbow, and left shoulder in the upper-body edge map is 27 degrees, while the angle of the corresponding line in the target user's upper-body edge map is 30 degrees. The similarity between the two lines is 1 - (|27 - 30| / 30) = 0.9.
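The per-line, angle-based similarity in this example might be computed as follows; this is a sketch, and which keypoints form each line follows the connection rule above:

    import math

    def line_angle(p1, p2) -> float:
        """Angle in degrees of the line from point p1 to point p2 ((x, y) tuples)."""
        return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

    def angle_similarity(member_angle: float, target_angle: float) -> float:
        """Per-line similarity, per the formula in the example above."""
        return 1.0 - abs(member_angle - target_angle) / target_angle

    # Example from the text: member line at 27 degrees, target line at 30 degrees.
    assert abs(angle_similarity(27.0, 30.0) - 0.9) < 1e-9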
In some embodiments, the television controller may also make the determination by detecting the limb contour of the target person in the user image and computing, by XOR, the proportion of pixels by which the limb contour position differs from the preset position in the pose template image; if the proportion is below a certain value, for example 20%, countdown photographing may be triggered.
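A sketch of this XOR-based contour comparison using NumPy; binary masks for the detected limb contour and the preset contour in the pose template image are assumed as inputs:

    import numpy as np

    def contour_difference_ratio(user_mask: np.ndarray, preset_mask: np.ndarray) -> float:
        """Proportion of pixels where the detected limb contour differs (XOR)
        from the preset contour position in the pose template image."""
        diff = np.logical_xor(user_mask.astype(bool), preset_mask.astype(bool))
        return float(diff.mean())

    # Countdown photographing may be triggered when the difference is small, e.g.:
    # if contour_difference_ratio(user_mask, preset_mask) < 0.20: start_countdown()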
Based on the above technical solution in which the display device realizes posture-triggered photographing, and the related drawings, the present application further provides a photographing method triggered by user posture, the method comprising: when the user is within the detection range, acquiring the target person contained in the user image captured by the camera; and when the similarity between the target person's current body posture and the target body posture of the first object in the pose template image reaches a preset threshold, displaying a group photo generated from the pose template image and the user image. The group photo is the pose template image with the first object replaced by the target person; the pose template image further comprises a group photo object; the target person appears in the group photo in the current body posture; and the first object shows the target body posture for the user to imitate. This has been elaborated in the technical solution in which the display device realizes posture-triggered photographing, and is not described here again.
In some embodiments, displaying a group photo generated from the pose template image and the user image when the similarity between the target person's current body posture and the target body posture reaches a preset threshold specifically includes: when the similarity reaches the preset threshold, displaying a countdown value for the camera to photograph; when the countdown value is zero, controlling the camera to acquire a user image containing the target person; and displaying the group photo generated from the pose template image and the user image. This has been elaborated in the technical solution in which the display device realizes posture-triggered photographing, and is not described here again.
In some embodiments, displaying a group photo generated from the pose template image and the user image specifically includes: drawing the target person at the position of the first layer corresponding to the first object; drawing the group photo object at the position of the second layer corresponding to the group photo object; and controlling the second layer to be overlaid on the first layer in a texture blending mode so as to generate the group photo. This has been elaborated in the technical solution in which the display device realizes posture-triggered photographing, and is not described here again.
In some embodiments, determining that the similarity between the target person's current body posture and the target body posture reaches a preset threshold specifically includes: acquiring the target person's instant limb positioning points from the target person in the user image, where the instant limb positioning points correspond to the target person's current body posture; obtaining the similarity between the instant limb positioning points and the first object's target limb positioning points in the pose template image by contact-degree matching; and comparing the similarity with the preset threshold to judge whether it is reached. This has been elaborated in the technical solution in which the display device realizes posture-triggered photographing, and is not described here again.
In some embodiments, the first object contained in the swinging image is displayed as a photo of a person, or a silhouette of a person, showing the target limb posture. This has been described in detail in the technical scheme above and is not repeated here.
In some embodiments, before displaying a group photo generated based on the swinging image and the user image, the method further comprises: displaying the swinging image containing the first object and the group photo object, or not displaying the swinging image containing the first object and the group photo object. The method has been described in detail in the technical scheme above and is not repeated here.
In some embodiments, the method further comprises: acquiring, from the user image shot by the camera, a second object used by the user for simulating the target limb posture; and controlling the second object to move within the swinging image and determining its position so as to generate a new swinging image, in which the first object is no longer displayed. The method has been described in detail in the technical scheme above and is not repeated here.
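To illustrate how a new swinging image might be derived once the second object is placed, the sketch below uses a simplified data model with hypothetical field names, not the device's actual representation:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SwingingImage:
    """Simplified swinging image: a background, the group photo
    object's position, and the posed figure's position."""
    background_id: str
    companion_pos: tuple[int, int]
    figure_pos: tuple[int, int]
    shows_first_object: bool = True

def place_second_object(template: SwingingImage,
                        new_pos: tuple[int, int]) -> SwingingImage:
    """Fix the moved second object at new_pos, yielding a new swinging
    image in which the first object is no longer displayed."""
    return replace(template, figure_pos=new_pos, shows_first_object=False)
```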
The method has the following advantages: by constructing a swinging image containing the first object, a target limb posture can be provided for the user to imitate; by acquiring the target person, the current limb posture of the user can be obtained and the automatic photographing logic triggered; by obtaining the similarity between the current limb posture and the target limb posture, whether to trigger automatic photographing can be judged; and by generating a group photo based on the swinging image and the target person, automatic group photo acquisition can be realized while reducing the appearance in the group photo of redundant articles such as the remote controller, abnormal mouth shapes caused by voice commands, and redundant limb postures caused by trigger gestures.
Moreover, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "controller," "engine," "unit," "component," or "system." Furthermore, aspects of the present application may be embodied as a computer program product, comprising computer-readable program code located in one or more computer-readable media.
A computer storage medium may comprise a propagated data signal carrying computer program code, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic or optical forms, or any suitable combination thereof. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, electrical cable, fiber-optic cable, RF, or the like, or any combination of the foregoing.
Computer program code required for the operation of various portions of the present application may be written in any one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, or Python; a conventional procedural programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, or ABAP; a dynamic programming language such as Python, Ruby, or Groovy; or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service such as software as a service (SaaS).
Additionally, unless explicitly recited in the claims, the order of the processing elements and sequences described herein, the use of alphanumeric characters, or the use of other designations is not intended to limit the order of the claimed processes and methods. While the foregoing disclosure discusses, by way of various examples, what are currently considered to be useful embodiments of the invention, it is to be understood that such detail is solely for that purpose, and that the appended claims are not limited to the disclosed embodiments but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that, in the preceding description of the embodiments of the present application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more embodiments. This method of disclosure, however, is not to be interpreted as requiring more features than are expressly recited in the claims. Indeed, claimed subject matter may lie in fewer than all features of a single embodiment disclosed above.
The entire contents of each patent, patent application publication, and other material cited in this application, such as articles, books, specifications, publications, and documents, are hereby incorporated by reference, except for any application history document that is inconsistent with or in conflict with the contents of this application, and any document that limits the broadest scope of the claims of this application (whether currently appended or later appended). It is noted that if the description, definition, and/or use of a term in the material accompanying this application is inconsistent with or contrary to that set forth in this application, the description, definition, and/or use of the term in this application shall control.

Claims (15)

1. A display device, comprising:
a camera for acquiring images within the detection range of the camera;
a display capable of displaying a group photo and a swinging image comprising a first object and a group photo object, wherein the first object is used for showing a target limb posture for the user to imitate;
a controller configured to:
when the user is in the detection range of the camera, acquiring a target person contained in the image;
and when the similarity between the current limb posture of the target person and the target limb posture reaches a preset threshold, controlling a user interface to display a group photo generated based on the swinging image and the target person, wherein the group photo comprises the target person and the group photo object, and the target person is displayed in the group photo with the current limb posture of the target person.
2. The display device of claim 1, wherein, when the similarity between the current limb posture of the target person and the target limb posture reaches a preset threshold, the controller controlling the user interface to display a group photo generated based on the swinging image and the target person specifically comprises the controller:
when the similarity reaches the preset threshold, controlling the user interface to display a countdown value for photographing by the camera;
when the countdown value reaches zero, controlling the camera to acquire an image containing the target person;
and controlling the user interface to display a group photo generated based on the swinging image and the target person.
3. The display device of claim 1, wherein the controller controlling the user interface to display a group photo generated based on the swinging image and the target person specifically comprises the controller:
drawing the target person on a first layer at the position corresponding to the first object;
drawing the group photo object on a second layer at the position corresponding to the group photo object;
and controlling the second layer to be overlaid on the first layer in the user interface by texture mixing so as to generate the group photo.
4. The display device of claim 1, wherein the controller determining that the similarity between the current limb posture of the target person and the target limb posture reaches a preset threshold specifically comprises the controller:
acquiring, based on the image containing the target person, instant limb positioning points corresponding to the current limb posture of the target person;
and obtaining the similarity between the instant limb positioning points and the target limb positioning points of the first object by degree-of-overlap matching, and comparing the similarity with the preset threshold to judge whether it is reached.
5. The display device of claim 1, wherein the first object contained in the swinging image is displayed in the user interface as:
a photo of a person, or a silhouette of a person, showing the target limb posture.
6. The display device of claim 5, wherein, before controlling the user interface to display a group photo generated based on the swinging image and the target person, the controller is further configured to:
control the user interface to display the swinging image containing the first object and the group photo object; or
control the user interface not to display the swinging image containing the first object and the group photo object.
7. The display device of claim 1, wherein the display is capable of displaying a swinging image comprising a first object and a group photo object, and the controller is further configured to:
acquire, from the user image shot by the camera, a second object used by the user for simulating the target limb posture;
control the second object to move within the swinging image, and determine the position of the second object so as to generate a new swinging image,
wherein the first object is no longer displayed.
8. The display device of claim 1, wherein, before the controller controls the user interface to display the group photo generated based on the swinging image and the target person, the controller is further configured to:
control the user interface to always display, before the group photo is acquired, an auxiliary dotted line for prompting the user to assume the corresponding target limb posture, wherein the auxiliary dotted line is not shown in the group photo.
9. A photographing method triggered based on user gestures, characterized by comprising the following steps:
when the user is in the detection range, acquiring a target person contained in the user image shot by the camera;
when the similarity between the current limb posture of the target person and the target limb posture of the first object in the swinging image reaches a preset threshold, displaying a group photo generated based on the swinging image and the target person, wherein the group photo comprises the target person and a group photo object, and the target person is displayed in the group photo with the current limb posture of the target person.
10. The photographing method triggered based on user gestures as claimed in claim 9, wherein, when the similarity between the current limb posture of the target person and the target limb posture reaches a preset threshold, displaying a group photo generated based on the swinging image and the target person specifically comprises:
when the similarity reaches the preset threshold, displaying a countdown value for photographing by the camera;
when the countdown value reaches zero, controlling the camera to acquire an image containing the target person;
and displaying a group photo generated based on the swinging image and the target person.
11. The photographing method triggered based on user gestures as claimed in claim 9, wherein displaying a group photo generated based on the swinging image and the target person specifically comprises:
drawing the target person on a first layer at the position corresponding to the first object;
drawing the group photo object on a second layer at the position corresponding to the group photo object;
and controlling the second layer to be overlaid on the first layer by texture mixing so as to generate the group photo.
12. The photographing method triggered based on user gestures as claimed in claim 9, wherein determining that the similarity between the current limb posture of the target person and the target limb posture reaches a preset threshold specifically comprises:
acquiring, based on the target person in the image, instant limb positioning points corresponding to the current limb posture of the target person;
and obtaining the similarity between the instant limb positioning points and the target limb positioning points of the first object in the swinging image by degree-of-overlap matching, and comparing the similarity with the preset threshold to judge whether it is reached.
13. The photographing method triggered based on user gestures as claimed in claim 9, wherein the first object contained in the swinging image is displayed as:
a photo of a person, or a silhouette of a person, showing the target limb posture.
14. The photographing method triggered based on user gestures as claimed in claim 13, wherein, before displaying the group photo generated based on the swinging image and the target person, the method further comprises:
displaying the swinging image containing the first object and the group photo object; or
not displaying the swinging image containing the first object and the group photo object.
15. The photographing method triggered based on user gestures as claimed in claim 9, wherein the method further comprises:
acquiring, from the image shot by the camera, a second object used by the user for simulating the target limb posture;
controlling the second object to move within the swinging image, and determining the position of the second object so as to generate a new swinging image,
wherein the first object is no longer displayed.
CN202110720108.8A 2021-06-28 2021-06-28 Display device and photographing method based on user posture triggering Pending CN114302200A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110720108.8A CN114302200A (en) 2021-06-28 2021-06-28 Display device and photographing method based on user posture triggering


Publications (1)

Publication Number Publication Date
CN114302200A (en) 2022-04-08

Family

ID=80964485

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110720108.8A Pending CN114302200A (en) 2021-06-28 2021-06-28 Display device and photographing method based on user posture triggering

Country Status (1)

Country Link
CN (1) CN114302200A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107291348A (en) * 2017-05-31 2017-10-24 Meizu Technology Co., Ltd. Photographing method and device, computer equipment and computer-readable storage medium
CN107734251A (en) * 2017-09-29 2018-02-23 Vivo Mobile Communication Co., Ltd. Photographing method and mobile terminal
WO2019127395A1 (en) * 2017-12-29 2019-07-04 SZ DJI Technology Co., Ltd. Image capturing and processing method and device for unmanned aerial vehicle
JP2019125833A (en) * 2018-01-12 2019-07-25 NEC Corporation Imaging system
CN108737715A (en) * 2018-03-21 2018-11-02 Beijing Orion Star Technology Co., Ltd. Photographing method and device
WO2020103526A1 (en) * 2018-11-21 2020-05-28 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Photographing method and device, storage medium and terminal device
CN112399235A (en) * 2019-08-18 2021-02-23 Hisense Visual Technology Co., Ltd. Method for enhancing photographing effect of camera of smart television and display device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination