CN110489040B - Method and device for displaying feature model, terminal and storage medium - Google Patents


Info

Publication number
CN110489040B
Authority
CN
China
Prior art keywords: information, image, image information, window, acquiring
Prior art date
Legal status
Active
Application number
CN201910755261.7A
Other languages
Chinese (zh)
Other versions
CN110489040A (en)
Inventor
方迟
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201910755261.7A
Publication of CN110489040A
Application granted
Publication of CN110489040B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval of still image data
    • G06F16/54: Browsing; Visualisation therefor
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to an output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481: Interaction based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0484: Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845: Interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/451: Execution arrangements for user interfaces

Abstract

The present disclosure provides a method and an apparatus for displaying a feature model, a terminal, and a storage medium. The method for displaying the feature model comprises the following steps: acquiring N pieces of image information, wherein N is an integer not less than 1; outputting the N pieces of image information through N first windows, respectively; and acquiring real-time image information and outputting the real-time image information through a second window. The method can present multiple display results simultaneously, so that the user can compare them intuitively.

Description

Method and device for displaying feature model, terminal and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for displaying a feature model, a terminal, and a storage medium.
Background
When selecting an outfit before going out, people often try clothes on repeatedly to decide whether the style and the combination are suitable. However, several tops and bottoms can be combined in many ways, and repeatedly putting clothes on and taking them off is troublesome. Moreover, after changing several times, the user may no longer remember how the first outfit looked and cannot compare the choices intuitively, which makes selection inconvenient.
Disclosure of Invention
In order to solve the existing problems, the present disclosure provides a method and an apparatus for displaying a feature model, a terminal, and a storage medium.
The present disclosure adopts the following technical solutions.
In some embodiments, the present disclosure provides a method of feature model display, comprising:
acquiring N pieces of image information, wherein N is an integer not less than 1;
outputting the N pieces of image information through N first windows respectively; and
acquiring real-time image information, and outputting the real-time image information through a second window.
In some embodiments, the present disclosure provides an apparatus for feature model exhibition, comprising:
the acquisition module is used for acquiring image information; and
the output module is used for outputting the image information;
wherein the image information includes at least one of input image information and real-time image information.
In some embodiments, the present disclosure provides a terminal comprising: at least one memory and at least one processor;
the memory is used for storing program codes, and the processor is used for calling the program codes stored in the memory to execute the method.
In some embodiments, the present disclosure provides a storage medium for storing program code for performing the above-described method.
The method, apparatus, terminal, and storage medium for displaying a feature model provided by the present disclosure can display multiple results simultaneously and in multiple forms, which makes intuitive comparison easier for the user and improves selection efficiency.
Drawings
The above and other features, advantages, and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that the components and elements are not necessarily drawn to scale.
Fig. 1 is a flowchart of a feature model display method according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of a picture arrangement display according to an embodiment of the disclosure.
Fig. 3 is a schematic diagram of a picture arrangement and a window distribution according to an embodiment of the disclosure.
FIG. 4 is a flow chart of a multi-window parallel method of an embodiment of the present disclosure.
FIG. 5 is a flowchart of a method of loading an activation window according to an embodiment of the present disclosure.
FIG. 6 is a flow diagram of a method of loading an activation window of yet another embodiment of the present disclosure.
Fig. 7 is a schematic diagram of a focus label transfer process of an embodiment of the present disclosure.
FIG. 8 is a schematic diagram of a multi-window arrangement of an embodiment of the present disclosure.
Fig. 9 is a schematic diagram of specified characteristic information reorganization according to an embodiment of the present disclosure.
Fig. 10 is a schematic structural diagram of a feature model display apparatus according to an embodiment of the present disclosure.
Fig. 11 is a schematic structural diagram of a feature model display apparatus according to another embodiment of the present disclosure.
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. Moreover, method embodiments may include additional steps and/or omit some of the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a" and "an" in this disclosure are illustrative rather than restrictive, and those skilled in the art should understand them as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Current terminals, especially smart TVs, strive to provide more functions to users. The present application provides a scheme by which a terminal can better help people choose what to wear before going out, including but not limited to shooting effect images, displaying the shot effect images simultaneously, combining the effect images in various ways, and displaying effects dynamically.
The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, fig. 1 is a flowchart of a feature model display method according to an embodiment of the present disclosure. The feature model display method provided by the embodiment of the disclosure comprises the following steps.
S11, acquiring N pieces of image information, where N is an integer not less than 1.
The acquisition of the N pieces of image information may be triggered by an instruction or by set specified feature information. Specifically, the image may be acquired based on a user selection, such as pressing a physical key to perform a photographing operation. Alternatively, the terminal may perform shooting in response to preset specified feature information: for example, when the terminal detects that the user is standing in front of it, the acquired human body image or face image may be set as reference information, that is, as the trigger condition, and the photographing operation is started. It is to be understood that the user may also set other trigger conditions, for example photographing upon recognition of a specific gesture, and the embodiment of the present application is not particularly limited in this respect.
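As a purely illustrative sketch (not part of the disclosure), the two trigger paths described above can be modeled as follows in Java; the Camera and FeatureDetector interfaces and all names are hypothetical assumptions:

```java
// Hedged sketch of the two trigger paths: (1) an explicit user instruction
// such as a physical key press, and (2) detection of preset specified
// feature information (e.g., a human body or face) in a preview frame.
public final class CaptureTrigger {
    public interface Camera { void capture(); }                          // assumed capture API
    public interface FeatureDetector { boolean matches(byte[] frame); }  // assumed detector

    private final Camera camera;
    private final FeatureDetector referenceDetector;

    public CaptureTrigger(Camera camera, FeatureDetector referenceDetector) {
        this.camera = camera;
        this.referenceDetector = referenceDetector;
    }

    // Trigger path 1: an explicit instruction, e.g., a physical key press.
    public void onUserInstruction() {
        camera.capture();
    }

    // Trigger path 2: the preset reference information (specified feature
    // information) is detected in the incoming preview frame.
    public void onPreviewFrame(byte[] frame) {
        if (referenceDetector.matches(frame)) {
            camera.capture();
        }
    }
}
```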
S12, outputting the N image information through the N first windows, respectively.
Specifically, after the picture information is acquired, it can be rendered on a screen so as to be presented to the user intuitively. More specifically, the image information may be presented in multiple windows that do not overlap one another, for ease of comparison, and the windows may be arranged in several ways. For example, as shown in fig. 2, fig. 2 is a schematic diagram of a picture arrangement display according to an embodiment of the disclosure. Pictures 1, 2, and 3 are listed on the left side of the screen in a vertical arrangement. Furthermore, when the number of pictures exceeds the displayable range of the current screen, the pictures can be displayed selectively and arranged as a scroll bar, and the user can choose which pictures to display as needed. It is understood that the specific arrangement of the pictures is not limited in the embodiment of the present disclosure; arrangements other than the vertical arrangement described above, such as horizontal or tiled arrangements, also fall within the scope of the embodiments of the present application.
Furthermore, a picture is displayed as a single frame, while continuous output of multiple frames constitutes video output. In other words, if the acquired image information carries consecutive time stamps, the user sees a video when the frames are displayed in succession. Compared with displaying a still picture, this display mode is more vivid and comprehensive. Therefore, the image information output by a window may be a single-frame image or a continuous multi-frame image.
S13, acquiring the real-time image information and outputting the real-time image information through the second window.
In addition to the previously acquired image information, the clothing currently worn by the user also belongs to the selection range. Therefore, the terminal also needs to capture and display real-time image information for the user to compare. As shown in fig. 3, fig. 3 is a schematic diagram of the picture arrangement and window distribution according to the embodiment of the disclosure. In fig. 3, the screen may further include a real-time window for displaying the image information acquired in real time. In particular, in the embodiment of the present application, the real-time window and the picture scroll bar do not overlap each other and can run at the same time, so that the user can view and compare them easily.
In the embodiment of the present disclosure, the specific implementation of the multi-window parallel display is not limited; for example, it may be implemented by the following scheme.
as shown in fig. 4, the multi-window parallel method proposed by the embodiment of the present disclosure includes the following steps.
S100, loading a first window, and setting the first window to be in an activated state.
Specifically, the loading in the disclosed embodiment may involve a start path of a program, an identifier of a display window of the program, and a display area of the display window. The start path of the program is, for example, the installation path or a desktop shortcut of the program on the terminal. The identifier of the display window may be the window's name or another identifier. The display area of the display window may belong to the screen mapping area.
It should be noted that when a terminal running the Android operating system creates a window, it does so by creating an Activity component, and by default it passes a full-screen display parameter into the created Activity component, so the created window is displayed full screen. In the embodiment of the present disclosure, however, when the first window is created, preset display parameters of the first window are passed to the created Activity component, so that the created first window can be displayed as a window within the screen.
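A minimal sketch of this idea follows, assuming Android API level 24+ with freeform window support; the WindowedActivity placeholder and the chosen bounds are illustrative assumptions, and ActivityOptions.setLaunchBounds is one plausible modern mechanism rather than an API named by the disclosure:

```java
// Launching an Activity with preset display parameters (windowed bounds)
// instead of the default full-screen parameters.
import android.app.Activity;
import android.app.ActivityOptions;
import android.content.Intent;
import android.graphics.Rect;

public final class WindowLauncher {
    /** Hypothetical placeholder for the windowed content. */
    public static final class WindowedActivity extends Activity {}

    // Shows a new window whose display area within the screen mapping
    // region is given by (x, y, width, height).
    public static void launchWindowed(Activity context, int x, int y,
                                      int width, int height) {
        Intent intent = new Intent(context, WindowedActivity.class);
        ActivityOptions options = ActivityOptions.makeBasic();
        // Pass the preset display parameters to the created component so the
        // window is shown windowed rather than full screen.
        options.setLaunchBounds(new Rect(x, y, x + width, y + height));
        context.startActivity(intent, options.toBundle());
    }
}
```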
S200, outputting the N pieces of image information through N first windows respectively.
Specifically, the loading step here may be the same as the loading step in S100. After the first window is loaded and activated, further windows are loaded in sequence; because their display parameters differ, the windows occupy different screen mapping areas and can run simultaneously within the same screen mapping area. In particular, when the terminal detects a new window creation instruction, it will typically transfer the focus label to the newly created window, i.e., perform a focus transfer. The focus label relates to the concept of master and slave tasks on the terminal. The window designated as the focus is the master task, and the master task can accept various operations, such as dragging and playing. A slave task refers to a window that has been created but is not operating; such a window is typically in a paused or frozen state because it is not designated as the focus window. To solve the problem of slave-task windows being paused or frozen, the embodiment of the present disclosure proposes that when the focus of an application is switched, the window from which the focus is transferred is not notified, so its application state does not change and the window continues to run. For example, suppose the current focus window is a video window and the user newly creates a chat page window, to which the focus label is transferred. Although the chat page window obtains the focus label, the video window keeps playing because it never receives the focus transfer notification; it does not pause or freeze for having lost focus.
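The following Java fragment is a schematic model of this silent focus transfer as described in the text, not the Android framework's actual focus mechanism; all class and method names are hypothetical:

```java
// Silent focus-label transfer: when focus moves to a newly created window,
// the previous holder is deliberately NOT notified, so it never receives a
// "focus lost" callback and keeps running.
import java.util.ArrayList;
import java.util.List;

public final class FocusLabelManager {
    public static final class Window {
        final String name;
        boolean running = true;         // stays true unless onFocusLost() fires
        boolean hasFocusLabel = false;
        Window(String name) { this.name = name; }
        void onFocusLost() { running = false; }  // intentionally never invoked
    }

    private final List<Window> windows = new ArrayList<>();
    private Window focused;

    // Creates a window and transfers the focus label to it without
    // notifying the previous focus holder.
    public Window createWindow(String name) {
        Window w = new Window(name);
        windows.add(w);
        if (focused != null) {
            // The label moves, but no onFocusLost() notification is sent,
            // so the previous window's running state is unaffected.
            focused.hasFocusLabel = false;
        }
        w.hasFocusLabel = true;
        focused = w;
        return w;
    }
}
```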
S300, acquiring real-time image information and outputting the real-time image information through a second window.
The above scheme can be summarized as: (1) acquiring the start path of a program and starting its window; (2) detecting the windows of the started program, generally all valid windows of the program; and (3) setting all detected windows as active windows.
As shown in fig. 5, fig. 5 is a flowchart of a method for loading an activation window according to an embodiment of the present disclosure. In step S100, loading and activating the first window may include the following steps.
S101, obtaining a first window triggering request, responding to the first window triggering request, and loading the first window.
S102, simultaneously sending a focus label to the first window.
More specifically, when there is no application task on the current terminal and the user selects an application to start, the terminal opens the application, creates a window, displays the window to the user, and simultaneously sends a focus label to the newly created window. It is understood that the terminal may not acquire data information when no focus window is detected.
As shown in fig. 6, fig. 6 is a flowchart of a method for loading an activation window according to another embodiment of the present disclosure. In step S200, loading and activating the plurality of second windows may further include the following steps.
S201, obtaining a second window triggering request, responding to the second window triggering request, and loading the second window.
S202, receiving a focus transfer request and, based on the request, transferring the focus label from the first window to the second window without notifying the first window that the focus label has been transferred, so that the first window remains in an activated state.
More specifically, at least one second window may also be created while the first window is running. In response to multiple window trigger requests, the corresponding new windows are loaded respectively. For example, as shown in fig. 7, while the current window 1 is running, window 2 is created and the focus label is transferred from window 1 to window 2. At this point, if window 1 were notified of the loss of the focus label, it might enter a paused or frozen state. However, if only the focus label is transferred and window 1 is not notified, window 1 continues to run as if the focus label had not been transferred; its continued running is unaffected. Window 2, on the other hand, receives the focus label and operates as a normal focus window. Both window 1 and window 2 thus behave as if they hold a focus label and run at the same time. As more windows are created, up to window N, the focus label is transferred in the order of creation, and any window from which the focus label has been taken never receives a focus transfer notification and still runs in focus mode. As a result, windows 1 to N in the same screen mapping area are all operable at the same time, which improves window utilization.
Specifically, in the embodiment of the present disclosure, window 1 is opened first, where window 1 may show a video or an image; then window 2 is opened, which may likewise show a video or an image. When window 1 and window 2 are both playing video, window 1 is not frozen or paused by the opening of window 2: although the focus label has been transferred to window 2, window 1 is never notified that it has lost the focus label and therefore still operates in focused mode. Both videos can thus play at the same time.
In addition, the windows arranged in the screen mapping area may partially overlap one another or not overlap at all. In the embodiment of the present disclosure, a non-overlapping arrangement is chosen for the sake of the presentation effect. Further, the arrangement of the windows may be determined by display parameters. For example, the display parameters of the first window may include its position information and size information, specifically the X coordinate of the first (top-left) pixel of the first window on the screen mapping area, the Y coordinate of that pixel on the screen mapping area, the length of the first window, and the width of the first window. The terminal can display the first window on the screen mapping area according to these four display parameters. The display area of a display window can thus be represented by a display start coordinate together with the window's length and width, and its size is generally smaller than or equal to the size of the screen mapping area, i.e., it lies within the screen mapping area. The program's window can then be displayed in its display area within the screen mapping area in which it runs. It will be appreciated that the display area may occupy only a portion of the screen map, while other portions may display other content, such as the running windows of other programs.
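As an illustrative data-structure sketch (the class itself is hypothetical; only the four parameters are named in the text), the display parameters and the containment constraint can be expressed as:

```java
// The four display parameters named above: top-left X, top-left Y, window
// length (height) and window width, plus a check that the resulting display
// area lies within the screen mapping area.
import android.graphics.Rect;

public final class DisplayParams {
    public final int x;      // X coordinate of the first (top-left) pixel
    public final int y;      // Y coordinate of the first (top-left) pixel
    public final int width;  // window width
    public final int height; // window length (height)

    public DisplayParams(int x, int y, int width, int height) {
        this.x = x; this.y = y; this.width = width; this.height = height;
    }

    public Rect toRect() {
        return new Rect(x, y, x + width, y + height);
    }

    // The display area is valid only if it fits inside the screen mapping area.
    public boolean fitsWithin(Rect screenMappingArea) {
        return screenMappingArea.contains(toRect());
    }
}
```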
Therefore, the user can simultaneously operate a plurality of windows, and the information processing efficiency of the user is improved.
Because there is no operation-priority distinction between the windows, the user can also control multiple windows at the same time. In the embodiment of the disclosure, the terminal simultaneously receives and executes operation instructions directed at the first window and the second window respectively. Specifically, an operation instruction may modify a display parameter of a window: for example, the terminal receives changed position and/or size information when the user drags a window on the screen with a mouse or a finger. This embodiment is not specifically limited in this respect.
In addition, in the embodiment of the present disclosure, a window manager may be used to determine the size of each window's display area according to the size of the region on the screen map available for displaying the windows and the number of windows. This avoids unreasonable window arrangements that would hinder the user's operation.
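A minimal sketch, assuming a simple equal horizontal split, of how such a window manager might derive non-overlapping display areas from the available region and the window count; real layouts could equally use grids or scroll strips:

```java
// Splits the displayable region into N side-by-side, non-overlapping cells,
// one display area per window, matching the non-overlapping arrangement
// described above.
import android.graphics.Rect;
import java.util.ArrayList;
import java.util.List;

public final class SimpleWindowLayout {
    public static List<Rect> splitHorizontally(Rect displayable, int n) {
        List<Rect> areas = new ArrayList<>(n);
        int cellWidth = displayable.width() / n;
        for (int i = 0; i < n; i++) {
            int left = displayable.left + i * cellWidth;
            // Adjacent cells share an edge but never overlap.
            areas.add(new Rect(left, displayable.top,
                               left + cellWidth, displayable.bottom));
        }
        return areas;
    }
}
```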
FIG. 8 is a schematic diagram of a multi-window arrangement of an embodiment of the present disclosure. As shown in fig. 8, the windows do not overlap one another and are located in the same layer. This arrangement ensures that the windows do not affect one another and gives a better operating experience. It is understood that the windows in this embodiment can likewise be dragged or otherwise manipulated at the same time, which is not repeated here. In addition, although fig. 8 shows four windows of the same size, the embodiments of the present disclosure are not limited to this; the windows may also have different sizes.
Furthermore, after multiple pieces of image information are obtained, they can be analyzed and recombined. For example, as shown in fig. 9, the first image information may include first tag information A and second tag information B, and the second image information may include first tag information C and second tag information D. The tag information A, B, C, D is analyzed, extracted, and recombined to obtain recombined third image information and fourth image information: the third image information includes first tag information A and second tag information D, and the fourth image information includes first tag information C and second tag information B. Specifically, in this embodiment, the first tag information may denote an upper garment and the second tag information a lower garment. Understandably, the present disclosure may also include more tag information, which is not specifically limited here.
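Expressed as a Java sketch (TagInfo and ImageInfo are illustrative types, not defined by the disclosure), the recombination of fig. 9 amounts to swapping the second tags between two images:

```java
// Recombination example from the text: image 1 carries tags (A, B) and
// image 2 carries tags (C, D); swapping the second tags yields the
// recombined images (A, D) and (C, B).
public final class Recombiner {
    public static final class TagInfo {
        public final String id;  // e.g., "A"
        public TagInfo(String id) { this.id = id; }
    }

    public static final class ImageInfo {
        public final TagInfo first;   // first tag information (e.g., upper garment)
        public final TagInfo second;  // second tag information (e.g., lower garment)
        public ImageInfo(TagInfo first, TagInfo second) {
            this.first = first;
            this.second = second;
        }
    }

    // Produces the third and fourth image information described above.
    public static ImageInfo[] recombine(ImageInfo one, ImageInfo two) {
        ImageInfo third  = new ImageInfo(one.first, two.second);  // A + D
        ImageInfo fourth = new ImageInfo(two.first, one.second);  // C + B
        return new ImageInfo[] { third, fourth };
    }
}
```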
In practice, the tag information contained in the image information sometimes cannot be recombined directly. For example, separately acquired images may be displayed at different sizes because the person stood at a different distance from the (for example, wide-angle) lens each time an image was acquired; the tag information is then mismatched in scale, and the original size information contained in the images cannot simply be extracted and combined. For this situation, the embodiment of the present application proposes adjusting the image information before combining it: when the proportions of the tag information are detected to differ, the image size is adjusted so that the proportions of the image information match, and the adjusted, proportion-matched tag information is then combined and output.
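A hedged sketch of that adjustment follows, assuming each piece of tag information carries a measurable pixel height that can serve as the matching criterion; this normalization choice is an assumption, not stated in the text:

```java
// Scales one image so that its tag information matches the scale of a
// reference image before recombination, as when two shots were taken at
// different distances from the lens.
import android.graphics.Bitmap;

public final class ScaleMatcher {
    public static Bitmap matchScale(Bitmap toAdjust,
                                    float tagHeightPx,
                                    float referenceTagHeightPx) {
        float factor = referenceTagHeightPx / tagHeightPx;
        int w = Math.round(toAdjust.getWidth() * factor);
        int h = Math.round(toAdjust.getHeight() * factor);
        // Bilinear filtering (last argument) gives a smoother scaled result.
        return Bitmap.createScaledBitmap(toAdjust, w, h, true);
    }
}
```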
The embodiment of the present application further provides an apparatus 10 for displaying a feature model, as shown in fig. 10, comprising an acquisition module 30 and an output module 50. The acquisition module 30 is configured to acquire image information, and the output module 50 is configured to output image information. In particular, the acquisition module 30 may be a wide-angle lens, i.e., a photographic lens whose focal length is shorter, and whose angle of view is wider, than that of a standard lens, yet whose focal length is longer, and angle of view narrower, than that of a fisheye lens. Because of its short focal length and wide angle of view, a wide-angle lens can capture a large scene within a short shooting distance, which makes it easier to obtain an image of the user's whole body. The present application is not, however, limited to a wide-angle lens; any module capable of acquiring an image falls within the scope of the embodiments of the present application.
In another embodiment of the present application, as shown in fig. 11, the apparatus 10 for feature model exhibition may further include a storage module 20, at least one processing module 40, and a partitioning module 60. The storage module 20 may be configured to store the specified feature information and the preset rule. The processing module 40 may be used to parse and recombine the specified feature information in the image information. The partitioning module 60 may be configured to manage the arrangement of the image information so that the pieces of image information do not overlap one another. Specifically, the specified feature information may be clothing feature information, and the clothing feature information may include first tag information and second tag information. The first tag information may, for example, denote an upper garment and the second tag information a lower garment; moreover, the clothing feature information provided by this embodiment may include more tag information, and this embodiment does not specifically limit the number or designation of the tag information. The preset rule may be to analyze the clothing feature information in at least one piece of image information and to recombine the first tag information and the second tag information in the clothing feature information, for example combining one upper garment with multiple lower garments, which is not limited here.
Since the apparatus embodiments correspond substantially to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative, and the modules described as separate modules may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment's solution, which those of ordinary skill in the art can understand and implement without inventive effort.
The feature model display method and device of the present disclosure are described above based on the embodiments and application examples. In addition, the present disclosure also provides a terminal and a storage medium, which are described below.
Referring now to fig. 12, a schematic diagram of an electronic device (e.g., a terminal device or server) 800 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 12 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 12, the electronic device 800 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 801 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage means 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the electronic device 800. The processing means 801, the ROM 802, and the RAM 803 are connected to one another by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 12 illustrates an electronic device 800 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 809, or installed from the storage means 808, or installed from the ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods of the present disclosure as described above.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a method for feature model exhibition, including:
acquiring N pieces of image information, wherein N is an integer not less than 1;
outputting the N pieces of image information through N first windows respectively; and
acquiring real-time image information, and outputting the real-time image information through a second window.
According to one or more embodiments of the present disclosure, there is provided a method for feature model display, where the step of obtaining N image information includes:
acquiring N image acquisition instructions, and responding to the N image acquisition instructions to respectively acquire the N image information; or
setting specified feature information, and acquiring the N pieces of image information according to the specified feature information.
According to one or more embodiments of the present disclosure, there is provided a method for displaying a feature model, where the step of setting specified feature information and obtaining the N pieces of image information according to the specified feature information includes:
acquiring a first image, analyzing the first image and acquiring first information;
acquiring a second image, analyzing the second image and acquiring second information; and
comparing the first information with the second information, and if the first information and the second information both contain the specified feature information and the specified feature information is different, setting the first image and the second image as the N pieces of image information.
According to one or more embodiments of the present disclosure, there is provided a method for feature model exhibition, wherein after the step of acquiring the N pieces of image information, the method includes:
recombining the specified feature information contained in the first information and the specified feature information contained in the second information to obtain a recombined image; and
setting the recombined image as the N pieces of image information.
According to one or more embodiments of the present disclosure, a method for displaying a feature model is provided, in which the specified feature information at least includes first tag information and second tag information;
the recombining of the specified feature information included in the first information and the specified feature information included in the second information includes:
recombining first tag information in the specified feature information included in the first information with second tag information in the specified feature information included in the second information; and
recombining second tag information in the specified feature information contained in the first information with first tag information in the specified feature information contained in the second information.
According to one or more embodiments of the present disclosure, there is provided a method for displaying a feature model, wherein the recombining of the specified feature information included in the first information and the specified feature information included in the second information further includes:
when first tag information in the specified feature information contained in the first information does not match with second tag information in the specified feature information contained in the second information, setting the first tag information in the specified feature information contained in the first information as a standard, and adjusting the second tag information in the specified feature information contained in the second information according to the standard so that the first tag information in the specified feature information contained in the first information matches with the second tag information in the specified feature information contained in the second information.
According to one or more embodiments of the present disclosure, a method for feature model display is provided, wherein the specified feature information is clothing feature information.
According to one or more embodiments of the present disclosure, there is provided a method for displaying a feature model, wherein the step of outputting the N pieces of image information through N first windows respectively includes:
outputting the N pieces of image information through the N first windows which are arranged in a tiled mode respectively; wherein the N first windows do not overlap with each other; and/or
outputting the N pieces of image information through the N first windows which are arranged in a scrolling manner, respectively; wherein the N first windows do not completely overlap.
According to one or more embodiments of the present disclosure, there is provided a method for feature model display, where the step of obtaining real-time image information includes:
and setting reference information, and acquiring the real-time image information when the reference information is detected.
According to one or more embodiments of the present disclosure, a method for feature model exhibition is provided, wherein the reference information includes human body feature image information and/or human face feature image information.
According to one or more embodiments of the present disclosure, a method for feature model display is provided, in which the N first windows and the second window do not overlap with each other.
According to one or more embodiments of the present disclosure, there is provided an apparatus for feature model exhibition, including:
the acquisition module is used for acquiring image information; and
the output module is used for outputting the image information;
wherein the image information includes at least one of input image information and real-time image information.
According to one or more embodiments of the present disclosure, there is provided an apparatus for feature model exhibition, wherein the apparatus further includes:
and the storage module is used for storing the image information.
According to one or more embodiments of the present disclosure, there is provided an apparatus for feature model exhibition, wherein the apparatus further includes:
at least one processing module for analyzing, comparing, and recombining the specified feature information in the image information; and/or
the partitioning module is used for managing the arrangement of the image information;
wherein the at least one piece of input image information and the real-time image information are arranged so as not to overlap each other.
According to one or more embodiments of the present disclosure, there is provided an apparatus for feature model exhibition, wherein the specified feature information is clothing feature information, and the clothing feature information includes first tag information and second tag information.
According to one or more embodiments of the present disclosure, an apparatus for feature model exhibition is provided, wherein the obtaining module is a wide-angle lens.
According to one or more embodiments of the present disclosure, there is provided an apparatus for feature model exhibition, wherein the output module is configured to output the image information of a single frame or the image information of consecutive frames.
According to one or more embodiments of the present disclosure, there is provided a terminal including: at least one memory and at least one processor;
wherein the at least one memory is configured to store program code, and the at least one processor is configured to call the program code stored in the at least one memory to perform any of the methods described above.
According to one or more embodiments of the present disclosure, there is provided a storage medium for storing program code for performing the above-described method.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (16)

1. A feature model display method comprises the following steps:
acquiring N pieces of image information, wherein N is an integer not less than 1;
outputting the N pieces of image information through N first windows respectively; and
acquiring real-time image information and outputting the real-time image information through a second window;
wherein the N pieces of image information include a recombined image of images having different specified feature information; the specified feature information includes clothing feature information, and the clothing feature information includes different tag information.
2. The method of claim 1, wherein the acquiring N image information comprises:
acquiring N image acquisition instructions, and responding to the N image acquisition instructions to respectively acquire the N image information; or
setting the specified feature information, and acquiring the N pieces of image information according to the specified feature information.
3. The method according to claim 2, wherein the setting of the specified feature information, the acquiring of the N image information according to the specified feature information includes:
acquiring a first image, analyzing the first image and acquiring first information;
acquiring a second image, analyzing the second image and acquiring second information; and
comparing the first information with the second information, and setting the first image and the second image as the N pieces of image information if the first information and the second information both contain the specified feature information and the specified feature information is different.
4. The method according to claim 1, wherein the specified feature information includes at least first tag information and second tag information;
and wherein the recombined image of images having different specified feature information is obtained by:
recombining first tag information in the specified feature information included in the first information with second tag information in the specified feature information included in the second information; and
recombining second tag information in the specified feature information contained in the first information with first tag information in the specified feature information contained in the second information.
5. The method according to claim 4, wherein obtaining the recombined image of images having different specified feature information further includes:
when first tag information in the specified feature information contained in the first information does not match with second tag information in the specified feature information contained in the second information, setting the first tag information in the specified feature information contained in the first information as a standard, and adjusting the second tag information in the specified feature information contained in the second information according to the standard so that the first tag information in the specified feature information contained in the first information matches with the second tag information in the specified feature information contained in the second information.
6. The method according to claim 1, wherein the step of outputting the N image information through the N first windows, respectively, comprises:
outputting the N pieces of image information through the N first windows which are arranged in a tiled mode respectively; wherein the N first windows do not overlap with each other; and/or
outputting the N pieces of image information through the N first windows which are arranged in a scrolling manner, respectively; wherein the N first windows do not completely overlap.
7. A feature model display method comprises the following steps:
acquiring N pieces of image information, wherein N is an integer not less than 1;
outputting the N pieces of image information through N first windows respectively; and
setting reference information, acquiring real-time image information when the reference information is detected, and outputting the real-time image information through a second window;
the reference information comprises human body characteristic image information and/or human face characteristic image information.
8. The method of claim 1, wherein the N first windows and the second window do not overlap.
9. An apparatus for feature model exhibition, comprising:
the acquisition module is used for acquiring image information; and
the output module is used for outputting the image information;
the image information comprises at least one input image information and real-time image information, and the image information comprises a recombined image of images with different specified characteristic information; the specific feature information includes clothing feature information including different tag information.
10. The apparatus of claim 9, further comprising:
and the storage module is used for storing the image information.
11. The apparatus of claim 9, further comprising:
at least one processing module for analyzing, comparing, and recombining the specified feature information in the image information; and/or
the partitioning module is used for managing the arrangement of the image information;
wherein the at least one piece of input image information and the real-time image information are arranged so as not to overlap each other.
12. The apparatus according to claim 9, wherein the specified feature information is clothing feature information; the clothing feature information includes first tag information and second tag information.
13. The apparatus of claim 9, wherein the acquisition module is a wide-angle lens.
14. The apparatus of claim 9, wherein the output module is configured to output the image information of a single frame or the image information of consecutive frames.
15. A terminal, comprising:
at least one memory and at least one processor;
wherein the at least one memory is configured to store program code and the at least one processor is configured to invoke the program code stored in the at least one memory to perform the method of any of claims 1 to 8.
16. A storage medium for storing program code for performing the method of any one of claims 1 to 8.
CN201910755261.7A 2019-08-15 2019-08-15 Method and device for displaying feature model, terminal and storage medium Active CN110489040B (en)

Priority Applications (1)

Application Number: CN201910755261.7A
Priority Date / Filing Date: 2019-08-15
Title: Method and device for displaying feature model, terminal and storage medium


Publications (2)

Publication Number and Publication Date:
CN110489040A (en): 2019-11-22
CN110489040B (en): 2021-08-03

Family

ID=68551248

Family Applications (1)

Application Number: CN201910755261.7A (Active)
Priority Date / Filing Date: 2019-08-15
Title: Method and device for displaying feature model, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110489040B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112256173A (en) * 2020-10-20 2021-01-22 北京字节跳动网络技术有限公司 Window display method and device of electronic equipment, terminal and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5704825B2 (en) * 2010-03-08 2015-04-22 キヤノン株式会社 Information processing apparatus, control method thereof, and program
CN103310342A (en) * 2012-03-15 2013-09-18 凹凸电子(武汉)有限公司 Electronic fitting method and electronic fitting device
CN104423946B (en) * 2013-08-30 2018-02-27 联想(北京)有限公司 A kind of image processing method and electronic equipment
US10430985B2 (en) * 2014-03-14 2019-10-01 Magic Leap, Inc. Augmented reality systems and methods utilizing reflections
CN104657206B (en) * 2015-02-09 2018-09-28 青岛海信移动通信技术股份有限公司 A kind for the treatment of method and apparatus of image data
CN105657239B (en) * 2015-04-27 2018-05-15 宇龙计算机通信科技(深圳)有限公司 A kind of image processing method and device
US10712927B2 (en) * 2015-06-12 2020-07-14 Avaya Inc. System and method for call management in single window communication endpoints
CN106055834A (en) * 2016-06-22 2016-10-26 江西服装学院 Three-dimensional garment design system
CN105979156A (en) * 2016-06-30 2016-09-28 维沃移动通信有限公司 Panoramically photographing method and mobile terminal
US10547776B2 (en) * 2016-09-23 2020-01-28 Apple Inc. Devices, methods, and graphical user interfaces for capturing and recording media in multiple modes
CN108564612A (en) * 2018-03-26 2018-09-21 广东欧珀移动通信有限公司 Model display methods, device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN110489040A (en) 2019-11-22

Similar Documents

Publication number and title
CN113489937B (en) Video sharing method, device, equipment and medium
CN111510645B (en) Video processing method and device, computer readable medium and electronic equipment
CN112261226A (en) Horizontal screen interaction method and device, electronic equipment and storage medium
EP4333440A1 (en) Video interaction method and apparatus, electronic device, and storage medium
CN113076048B (en) Video display method and device, electronic equipment and storage medium
US20230316529A1 (en) Image processing method and apparatus, device and storage medium
CN111309225B (en) Screen clearing processing method and device
CN109947506B (en) Interface switching method and device and electronic equipment
CN113315924A (en) Image special effect processing method and device
CN110321042B (en) Interface information display method and device and electronic equipment
US20230421857A1 (en) Video-based information displaying method and apparatus, device and medium
US20240121349A1 (en) Video shooting method and apparatus, electronic device and storage medium
CN110489040B (en) Method and device for displaying feature model, terminal and storage medium
EP3425533A1 (en) Displaying page
CN111833459A (en) Image processing method and device, electronic equipment and storage medium
US11847758B2 (en) Material display method and apparatus, terminal, and storage medium
CN115396716B (en) Live video processing method, device, equipment and medium
US20220245920A1 (en) Object display method and apparatus, electronic device, and computer readable storage medium
CN115933936A (en) Task prompting method and device, storage medium and electronic equipment
CN110312117B (en) Data refreshing method and device
CN113127101A (en) Application program control method, device, equipment and medium
CN113253847A (en) Terminal control method and device, terminal and storage medium
CN114063843A (en) Interaction method, interaction device, electronic equipment, storage medium and computer program product
CN113721874A (en) Virtual reality picture display method and electronic equipment
CN113342440A (en) Screen splicing method and device, electronic equipment and storage medium

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant