CN111309212A - Split-screen comparison fitting method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111309212A
Authority
CN
China
Prior art keywords
video
user
fitting
preview
split
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010129301.XA
Other languages
Chinese (zh)
Inventor
林蓉
洪朝群
罗潇澧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University of Technology
Original Assignee
Xiamen University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University of Technology filed Critical Xiamen University of Technology
Priority to CN202010129301.XA priority Critical patent/CN111309212A/en
Publication of CN111309212A publication Critical patent/CN111309212A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
        • G06 — COMPUTING; CALCULATING OR COUNTING
            • G06F — ELECTRIC DIGITAL DATA PROCESSING
                • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
                • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
                • G06F3/048 — Interaction techniques based on graphical user interfaces [GUI]
                • G06F3/0481 — Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
            • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
                • G06Q30/00 — Commerce
                • G06Q30/06 — Buying, selling or leasing transactions
                • G06Q30/0601 — Electronic shopping [e-shopping]
                • G06Q30/0641 — Shopping interfaces
                • G06Q30/0643 — Graphical representation of items or shoppers
            • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V10/00 — Arrangements for image or video recognition or understanding
                • G06V10/20 — Image preprocessing
                • G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
                • G06V10/26 — Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
                • G06V10/267 — Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
                • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
                • G06V40/20 — Movements or behaviour, e.g. gesture recognition
                • G06V40/28 — Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • H — ELECTRICITY
        • H04 — ELECTRIC COMMUNICATION TECHNIQUE
            • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N5/00 — Details of television systems
                • H04N5/222 — Studio circuitry; Studio devices; Studio equipment
                • H04N5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
                • H04N5/76 — Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Social Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Psychiatry (AREA)
  • General Engineering & Computer Science (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a split-screen comparison fitting method, device, equipment and storage medium, wherein the method comprises the following steps: when it is detected that a user performs a video recording operation, starting a Kinect to capture real-time images of the user fitting clothes on the current screen until the user performs an operation to stop recording; saving the captured multi-frame real-time images to a specified path and synthesizing them into a video; selecting one frame of the video as a preview image and placing it under a video list, so that the user can select and play videos according to the preview images in the list; and receiving at least two videos the user selects to play, and playing them split-screen on the same screen based on ROIs, thereby realizing split-screen comparison fitting. The method can record each of the user's fitting sessions and compare the recorded videos split-screen, solving the problem that a user who cannot decide which garment to buy must frequently change clothes for fitting comparison, and helping the user select a satisfactory garment.

Description

Split-screen comparison fitting method, device, equipment and storage medium
Technical Field
The invention relates to the field of virtual fitting rooms, in particular to a fitting method, a fitting device, fitting equipment and a storage medium for split screen comparison.
Background
With the worldwide spread of electronic commerce, products based on novel online shopping modes have emerged in large numbers. One example is the somatosensory-interaction virtual fitting room based on gesture recognition: without a mouse or other wired remote control, the user can complete trying on and changing clothes through a few specific gestures, directly see the fitting effect on a fitting mirror, and match outfits on the mirror itself, saving a great deal of time. However, although the virtual fitting room saves the time of trying on and changing clothes, it can only present each fitting effect individually and cannot present the effects of different garments at the same time. A user deciding which garment to buy therefore has to change clothes repeatedly to compare them, making it difficult to choose a suitable product.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a split-screen comparison fitting method, apparatus, device and storage medium that can record each of the user's fitting sessions and compare the recorded videos split-screen, thereby solving the problem that a user who finds it difficult to decide which garment to buy must frequently change clothes for fitting comparison, and enabling the user to select a satisfactory garment.
The invention provides a fitting method for split screen comparison, which comprises the following steps:
when detecting that the user executes the video recording operation, starting the Kinect to capture a real-time image of fitting performed by the user on the current screen until the user executes the operation of stopping the video recording;
saving the captured multi-frame real-time image to a specified path, and synthesizing a video;
selecting one frame in the video as a preview, and placing the preview under a video list so that a user can select and play the video according to the preview under the video list;
and receiving at least two videos the user selects to play, and playing the at least two videos split-screen on the same screen based on ROIs, thereby realizing split-screen comparison fitting.
Preferably, the captured multiple frames of real-time images are saved to a specified path, and a video is synthesized, specifically:
converting the captured real-time image of fitting performed by the user on the current screen into Mat-type image data;
and storing the Mat image data to a specified path, and synthesizing a video.
Preferably, one frame in the video is selected as a preview, and the preview is placed under a video list, so that a user can select and play the video according to the preview under the video list, specifically:
after determining that the multiple frames of real-time images have been synthesized into the video, selecting one frame of the video as a preview image;
converting the preview image into a resolution required by preview, and placing the preview image after the resolution is converted under a video list;
and clearing the rest real-time images by using a command line so as to enable a user to select and play the videos according to the preview images under the video list.
Preferably, the method further comprises the following steps: when the user sharing operation is detected, the video selected by the user is uploaded to a social platform in a network mode, wherein the social platform comprises a music and video platform.
Preferably, the method further comprises the following steps:
sending a video sharing request to the social platform by using the libcurl library, and signing the sent request based on the MD5 algorithm to generate a 32-character string, so that the social platform returns a character string after receiving the request;
and analyzing the character string to generate a sharing two-dimensional code so that a user can watch or download the uploaded video by scanning the two-dimensional code.
In a second aspect, an embodiment of the present invention further provides a split-screen comparison fitting apparatus, including: a real-time image capturing unit, configured to, when it is detected that a user performs a video recording operation, start a Kinect to capture real-time images of the user fitting clothes on the current screen until the user performs an operation to stop recording;
the video synthesis unit is used for saving the captured multi-frame real-time images to a specified path and synthesizing a video;
the preview image selecting unit is used for selecting one frame in the video as a preview image and placing the preview image under a video list so that a user can select and play the video according to the preview image under the video list;
and the video split-screen playing unit is used for receiving at least two videos selected to be played by a user and performing split-screen playing on the at least two videos on the same screen based on the ROI, so that split-screen comparison fitting is realized.
Preferably, the video composition unit includes:
the real-time image conversion module is used for converting the captured real-time image of fitting performed by the user on the current screen into Mat-type image data;
and the video synthesis module is used for storing the Mat image data to a specified path and synthesizing a video.
Preferably, the preview selecting unit includes:
the preview image selecting module is used for selecting one frame of the video as a preview image after determining that the multiple frames of real-time images have been synthesized into the video;
the resolution conversion module is used for converting the preview image into the resolution required by the preview and placing the preview image after the resolution conversion under a video list;
and the image removing module is used for removing the rest real-time images by using the command line so as to enable the user to select and play the video according to the preview image in the video list.
Preferably, the method further comprises the following steps: the video uploading unit is used for uploading videos selected by users to a social platform in a network mode when the user sharing operation is detected, wherein the social platform comprises a music and video platform.
Preferably, the method further comprises the following steps:
the request sending unit is used for sending a video sharing request to the social platform by using the libcurl library, and signing the sent request based on the MD5 algorithm to generate a 32-character string, so that the social platform returns a character string after receiving the request;
and the character string analysis unit is used for analyzing the character string and generating a sharing two-dimensional code so that a user can watch or download the uploaded video by scanning the two-dimensional code.
In a third aspect, an embodiment of the present invention further provides a fitting apparatus for split-screen comparison, including a processor, a memory, and a computer program stored in the memory, where the computer program is executable by the processor to implement the fitting method for split-screen comparison according to the foregoing embodiment.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program, where when the computer program runs, the apparatus where the computer-readable storage medium is located is controlled to perform the fitting method of split-screen comparison according to the foregoing embodiment.
In the above embodiments, each of the user's fitting sessions is recorded, and the recorded videos are compared split-screen based on ROIs, solving the problem that a user who cannot decide which garment to buy must frequently change clothes for fitting comparison, and helping the user compare and select satisfactory garments.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a fitting method for split-screen comparison according to an embodiment of the present invention;
FIG. 2 is a schematic view of a display window interface of a fitting apparatus for split-screen comparison according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a screen splitting comparison provided in an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a fitting device for split-screen comparison according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1 to fig. 3, a fitting method of split screen comparison according to a first embodiment of the present invention is performed by a fitting apparatus of split screen comparison, and in particular, by one or more processors in the fitting apparatus of split screen comparison, and includes at least the following steps:
s101, when detecting that the user executes the video recording operation, starting a Kinect to capture a real-time image of fitting performed by the user on a current screen until the user executes the operation of stopping the video recording.
In this embodiment, a Kinect is built into the split-screen comparison fitting device. The Kinect is a somatosensory camera for human motion capture that comprises three cameras for reading data streams: the middle camera is an RGB color camera, while the left and right cameras together form a 3D depth sensor consisting of an infrared emitter and an infrared semiconductor camera, the left infrared emitter mainly serving to achieve three-dimensional positioning of the surrounding environment.
It should be noted that the Kinect senses the surrounding environment through a black-and-white spectrum and adopts a light-coding technique that measures with continuous, non-pulsed light; since light coding only requires a conventional semiconductor sensor, its cost is low. Specifically, light coding encodes the object under inspection by projecting a light beam onto it: when laser light strikes a rough object, or passes through ground glass, random diffraction speckles form on the object's surface. The imaging principle of the Kinect is therefore to obtain the baseline, focal length and disparity data between the two cameras, construct a reprojection matrix, and use OpenCV's reprojectImageTo3D function to realize the imaging effect. In the concrete implementation, the Kinect captures the customer's hand movements through the camera and reads hand joint data, builds a virtual model from the collected data, and compares it with the human body model stored in the Kinect; once an object matching the stored human body model is found, it is recognized as a human body and a corresponding skeleton model is generated. The generated skeleton model is further defined as a virtual character, and specific behaviors of the virtual character are recognized to trigger the corresponding functions. Using the constructed human skeleton model, 25 key regions of the human body can be detected and recognized, and the standing and sitting postures of the human body can be distinguished.
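The stereo triangulation that underlies OpenCV's reprojectImageTo3D can be illustrated with a minimal sketch. This is not the patent's implementation; the parameter values used below are illustrative assumptions.

```cpp
#include <array>

// Sketch of stereo triangulation: given focal length f (pixels),
// baseline B (meters) and disparity d (pixels), recover the 3-D point
// corresponding to pixel (x, y) with principal point (cx, cy).
std::array<double, 3> triangulate(double x, double y, double d,
                                  double f, double B,
                                  double cx, double cy) {
    double Z = f * B / d;          // depth from disparity: Z = f*B/d
    double X = (x - cx) * Z / f;   // back-project pixel column
    double Y = (y - cy) * Z / f;   // back-project pixel row
    return {X, Y, Z};
}
```

reprojectImageTo3D applies the same relationship to every pixel at once via a 4x4 reprojection matrix Q obtained from stereo calibration.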
In this embodiment, the depth information obtained by the Kinect is recorded in units of pixels, 640 × 480 pixels by default. When the Kinect 101 is used to recognize common everyday gestures, a large number of gesture pictures must be collected to train the recognition model; to prevent the same gesture from being read repeatedly, the experimental samples only select gestures with obvious hand motion changes, and the depth data corresponding to different gestures are recorded. Many studies of human movement behavior extract and construct a human skeleton model from two-dimensional images, but data read from a two-dimensional image differs depending on the viewing angle, so the two-dimensional image has limitations. In a three-dimensional image, a human bone can be determined by its joint points and expressed as a vector with length and direction, from which a complete human skeleton model can be constructed. After segmentation and feature extraction are completed, each gesture is analyzed, recognized and classified: the features of each gesture are learned through a training model so that different gestures can be effectively distinguished.
In this embodiment, a circle centered on the hand skeleton and approximately as large as the video recording function button is used as a marker. When the Kinect recognizes that the user's hand has fallen into the circular area and stayed there for 2 s, the user is considered to have selected the video recording function, recording of the fitting process begins, and the button now displays the stop-recording function. When the Kinect 101 then recognizes that the user's hand has again fallen into the circular area and stayed for 2 s, the user is considered to have selected the stop-recording function, and the recorded fitting process is saved. Using the Kinect somatosensory camera's ability to capture external image information, the user's hand motion is captured and analyzed and the trigger function defined for the gesture is executed. Gesture recognition makes the device more intelligent and easier to use and learn, letting the user fully experience the comfort of human-computer interaction.
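The 2-second dwell selection described above can be sketched as a small piece of per-frame logic. The structure and names below are hypothetical, assuming one tracked hand position per frame.

```cpp
#include <cmath>

// Sketch of dwell-time gesture selection: the tracked hand joint must
// stay inside a circular button region for 2 s before the associated
// function (e.g. start/stop recording) fires.
struct DwellButton {
    double cx, cy, radius;      // circle drawn around the on-screen button
    double dwellStart = -1.0;   // time the hand entered the circle; -1 = outside
    static constexpr double kDwellSeconds = 2.0;

    // Called once per frame with the hand position and a timestamp.
    // Returns true when the dwell completes and the button should trigger.
    bool update(double handX, double handY, double timeSeconds) {
        double dx = handX - cx, dy = handY - cy;
        bool inside = std::sqrt(dx * dx + dy * dy) <= radius;
        if (!inside) { dwellStart = -1.0; return false; }  // reset on exit
        if (dwellStart < 0.0) dwellStart = timeSeconds;    // just entered
        return timeSeconds - dwellStart >= kDwellSeconds;
    }
};
```

Resetting the timer whenever the hand leaves the circle prevents accidental triggers from a hand merely passing through the region.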
S102, storing the captured multi-frame real-time images to a specified path, and synthesizing a video.
In this embodiment, the split-screen comparison fitting device converts the captured real-time images of the user fitting clothes on the current screen into Mat-type image data, stores the Mat-type image data to a specified path, and synthesizes them into a video. The device further defines a global variable that records the number of real-time images stored at the specified path. It can be understood that, in order to record video repeatedly, an array is defined to store the video names: the file name of each saved video is modified as recording proceeds, and using the array prevents an earlier video from being overwritten.
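The overwrite-avoidance scheme described above can be sketched as follows. The directory layout and naming pattern are assumptions for illustration, not taken from the patent.

```cpp
#include <string>
#include <vector>

// Sketch of the video-naming scheme: each recording session receives a
// distinct file name, and the array of saved names guarantees that a new
// recording never overwrites an earlier one.
class RecordingNamer {
    std::vector<std::string> names_;  // array storing the saved video names
public:
    // Produce the next unused file name under the given directory.
    std::string next(const std::string& dir) {
        std::string name = dir + "/fitting_" +
                           std::to_string(names_.size()) + ".avi";
        names_.push_back(name);
        return name;
    }
    const std::vector<std::string>& saved() const { return names_; }
};
```

In an OpenCV-based implementation the returned name would be passed to a cv::VideoWriter, and the size of the array doubles as the counter of recordings made so far.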
S103, selecting one frame in the video as a preview, and placing the preview under a video list so that the user can select and play the video according to the preview under the video list.
In this embodiment, after determining that the multiple frames of real-time images have been synthesized into a video, one frame of the video is selected as a preview image, the preview image is converted to the resolution required for previewing and placed under the video list, and the remaining real-time images are cleared using a command line, so that the user can select and play videos according to the preview images in the list. Specifically, the invention marks a circle centered on the hand skeleton and approximately as large as the video list function button. When the Kinect recognizes that the user's hand has fallen into the circular area and stayed for 2 s, the user is considered to have selected the video list function, and three video preview images appear on the left side of the interface. If a preview image is selected, the video at the specified path is read; an if statement judges whether the read operation succeeded, a window is defined, and the corresponding status (success or failure) is printed on the window. The frame rate and frame count of the video are then read, the playing time of each frame is calculated, frames are captured together with their relative positions in the video list, the video to be played is displayed in the window, and the captured frames are released; that is, the video in the folder corresponding to the selected preview image is played. The preview images are ordered by save time, and a video is played when its preview image is selected. The number of video preview images can be set according to the actual situation and is not specifically limited by the invention.
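The timing arithmetic mentioned above (deriving the per-frame playing time from the video's frame rate and frame count) can be sketched as a small helper; the struct and names are hypothetical.

```cpp
// Sketch of playback timing: from a video's frame rate and frame count,
// derive the display delay between frames and the total clip duration.
struct PlaybackTiming {
    double msPerFrame;    // delay to wait between consecutive frames
    double totalSeconds;  // overall length of the clip
};

PlaybackTiming computeTiming(double fps, int frameCount) {
    PlaybackTiming t;
    t.msPerFrame = 1000.0 / fps;       // e.g. 25 fps -> 40 ms per frame
    t.totalSeconds = frameCount / fps; // frames divided by rate
    return t;
}
```

With OpenCV, fps and frameCount would typically come from VideoCapture::get with CAP_PROP_FPS and CAP_PROP_FRAME_COUNT, and msPerFrame would be passed to waitKey between frames.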
S104, receiving at least two videos selected to be played by a user, and playing the at least two videos on the same screen in a split-screen mode based on the ROI, so that split-screen comparison fitting is achieved.
Referring to fig. 3, in the embodiment of the present invention, a circle centered on the hand skeleton and approximately as large as the split-screen comparison function button is marked. When the Kinect recognizes that the user's hand has fallen into the circular area and stayed for 2 s, the user is considered to have selected the split-screen comparison function, and ROIs are used to segment and join the videos in the video list.
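The ROI-based layout can be sketched as follows: the screen is divided into equal side-by-side regions, one per selected video, and each decoded frame is copied into its region. This is a minimal illustration under assumed dimensions, not the patent's code.

```cpp
#include <vector>

// A rectangular region of interest on the output screen.
struct Roi { int x, y, width, height; };

// Sketch of split-screen layout: divide the screen into equal-width
// columns, one ROI per video to be compared.
std::vector<Roi> splitScreenRois(int screenW, int screenH, int videoCount) {
    std::vector<Roi> rois;
    int w = screenW / videoCount;  // each video gets an equal column
    for (int i = 0; i < videoCount; ++i)
        rois.push_back({i * w, 0, w, screenH});
    return rois;
}
```

In OpenCV this corresponds to taking a sub-Mat of the output frame for each ROI (e.g. canvas(cv::Rect(x, y, w, h))) and copying the resized video frame into it, so both fitting videos play simultaneously in one window.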
In summary, by recording each of the user's fitting sessions, selecting two videos to compare, and comparing them split-screen based on ROIs, the two videos can be played simultaneously in the same window, making it convenient for the user to compare them and select the product that suits them best. This solves the problem that a user who finds it difficult to decide which garment to buy must frequently change clothes for fitting comparison.
On the basis of the above embodiment, in a preferred embodiment of the present invention, the method further includes: when a user sharing operation is detected, uploading the video selected by the user to a social platform over the network, wherein the social platform includes a music and video platform. It should be noted that the invention uses QrShare to generate the shared two-dimensional code to complete the video sharing function. Before uploading, the video to be uploaded must be initialized; specifically, the video name, the user's IP address, the id of the uploaded video, the video's identification code and the video upload address are each defined. The invention uploads videos in web mode, where the interface address is determined by the video upload address returned by the API interface; the web-mode interface only supports the POST parameter-passing mode. Then, a video sharing request is sent to the social platform using the libcurl library, the sent request is signed based on the MD5 algorithm to generate a 32-character string, and the social platform returns a character string after receiving the request; this character string is then parsed to generate a sharing two-dimensional code, so that the user can watch or download the uploaded video by scanning the code. Specifically, the transmitted character string is converted into "name/value"-pair format using the json method (the xml method can also be used instead). The 32-character string is then verified using the MD5 algorithm to generate an encrypted character string. The main steps of generating the signature are: first sort the parameters by key in ascending order, then splice the keys and values into a character string in that order; the assigned key is then concatenated after the string, and finally the hexadecimal MD5 value of the result is computed as the generated sign.
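The signing steps above (sort keys ascending, splice key/value pairs, append the assigned key, then take the MD5 hex digest) can be sketched up to the hashing step. The MD5 digest itself is not in the C++ standard library and is omitted here; all parameter names below are hypothetical.

```cpp
#include <map>
#include <string>

// Sketch of building the pre-hash signature string: std::map already
// iterates its keys in ascending order, so splicing "key=value" pairs in
// iteration order implements the required ascending-key ordering; the
// assigned secret key is concatenated last. A real implementation would
// then take the hexadecimal MD5 digest of the returned string as the sign.
std::string buildSignBase(const std::map<std::string, std::string>& params,
                          const std::string& secretKey) {
    std::string s;
    for (const auto& [k, v] : params) {
        if (!s.empty()) s += "&";
        s += k + "=" + v;
    }
    return s + secretKey;  // assigned key appended after the spliced string
}
```

The resulting sign would be sent along with the POST request built with libcurl, letting the platform recompute and verify the same digest on its side.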
What is received from the interface is a code it returns to indicate the reception status, success or failure. The terminator "\n" must be appended, otherwise a string of thousands of random characters is returned; a return value of 0 indicates success, and any other value indicates failure. Finally, when the user performs the sharing operation, the video the user has chosen to share is uploaded to the music and video platform, and the user can also directly open the website generated by the platform and click to play the video. Because the video platform transcodes and reviews uploaded videos, the user can only play the video a short while after opening the website. Specifically, a two-dimensional-code library is called to generate a two-dimensional code for the stored video; the video can be watched by scanning the code with a mobile phone, and shared to a social platform (Moments, WeChat friends, QQ friends) through the menu bar at the upper right corner of WeChat. The method supports sharing recorded videos on social platforms, making it convenient to ask relatives and friends for their opinions.
Second embodiment of the invention:
referring to fig. 4, the second embodiment of the present invention further provides a fitting apparatus for split-screen comparison, including:
the real-time image capturing unit 100 is used for starting a Kinect to capture a real-time image of fitting performed by a user on a current screen when the fact that the user performs the video recording operation is detected, until the user performs the operation of stopping the video recording;
a video synthesizing unit 200, configured to save the captured multiple frames of real-time images to a specified path, and synthesize a video;
a preview selecting unit 300, configured to select a frame in the video as a preview, and place the preview under a video list, so that a user performs video selection and playing according to the preview under the video list;
and the video split-screen playing unit 400 is configured to receive at least two videos selected to be played by a user, and perform split-screen playing on the at least two videos on the same screen based on the ROI, so as to implement split-screen comparison fitting.
On the basis of the above embodiments, in a preferred embodiment of the present invention, the video synthesizing unit 200 includes:
the real-time image conversion module is used for converting the captured real-time image of fitting performed by the user on the current screen into Mat-type image data;
and the video synthesis module is used for storing the Mat image data to a specified path and synthesizing a video.
On the basis of the above embodiment, in a preferred embodiment of the present invention, the preview selecting unit 300 includes:
the preview image selecting module is used for selecting one frame of the video as a preview image after determining that the multiple frames of real-time images have been synthesized into the video;
the resolution conversion module is used for converting the preview image into the resolution required by the preview and placing the preview image after the resolution conversion under a video list;
and the image removing module is used for removing the rest real-time images by using the command line so as to enable the user to select and play the video according to the preview image in the video list.
On the basis of the above embodiment, in a preferred embodiment of the present invention, the apparatus further includes: a video uploading unit, which is used for uploading the video selected by the user to a social platform over a network when a sharing operation by the user is detected, wherein the social platform includes music and video platforms.
On the basis of the above embodiment, in a preferred embodiment of the present invention, the apparatus further includes:
a request sending unit, which is used for sending a video sharing request to the social platform by using the libcurl library, and signing the sent request based on the MD5 algorithm to generate a 32-character verification string, so that the social platform returns a character string after receiving the request;
and a character string parsing unit, which is used for parsing the returned character string and generating a sharing two-dimensional code, so that the user can watch or download the uploaded video by scanning the two-dimensional code.
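The MD5-based verification of the sharing request can be sketched with the standard hashlib module: the request parameters are hashed together with a shared secret, and the resulting digest is exactly the 32-character hexadecimal string mentioned above. The parameter layout, sorting rule, and secret are illustrative assumptions; the actual request would be sent with the libcurl library:

```python
import hashlib

def sign_request(params: dict, secret: str) -> str:
    """Concatenate the sorted request parameters with a shared secret
    (an assumed signing scheme) and hash with MD5; an MD5 digest is
    128 bits, rendered as a 32-character hex string."""
    payload = "&".join(f"{k}={v}" for k, v in sorted(params.items())) + secret
    return hashlib.md5(payload.encode("utf-8")).hexdigest()

sig = sign_request({"video": "fitting.mp4", "user": "42"}, "demo-secret")
print(len(sig))  # 32
```

Sorting the parameters makes the signature independent of insertion order, so client and server compute the same string for the same request.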
A third embodiment of the present invention further provides a split-screen comparison fitting device, which includes a processor, a memory, and a computer program stored in the memory; the computer program is executable by the processor to implement the split-screen comparison fitting method described in the above embodiments.
A fourth embodiment of the present invention further provides a computer-readable storage medium, which includes a stored computer program, wherein, when the computer program runs, the device on which the computer-readable storage medium is located is controlled to execute the split-screen comparison fitting method according to the above embodiments.
Illustratively, the computer program may be divided into one or more units, which are stored in the memory and executed by the processor to carry out the present invention. The one or more units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments describe the execution process of the computer program in the split-screen comparison fitting device.
The split-screen comparison fitting device may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that the schematic diagram is merely an example of the split-screen comparison fitting device and does not constitute a limitation of it; the device may include more or fewer components than shown, combine some components, or use different components; for example, it may further include an input-output device, a network access device, a bus, and the like.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the split-screen comparison fitting device and connects the various parts of the whole device using various interfaces and lines.
The memory can be used for storing the computer program and/or modules, and the processor implements the various functions of the split-screen comparison fitting device by running or executing the computer program and/or modules stored in the memory and by calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to use of the device (such as audio data or a phonebook). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
If the integrated units of the split-screen comparison fitting device are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments can be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (10)

1. A fitting method of split screen contrast is characterized by comprising the following steps:
when detecting that the user executes the video recording operation, starting the Kinect to capture a real-time image of fitting performed by the user on the current screen until the user executes the operation of stopping the video recording;
saving the captured multi-frame real-time image to a specified path, and synthesizing a video;
selecting one frame in the video as a preview, and placing the preview under a video list so that a user can select and play the video according to the preview under the video list;
and receiving at least two videos selected to be played by a user, and playing the at least two videos on the same screen in a split screen mode based on the ROI, so that split screen contrast fitting is realized.
2. The fitting method of split-screen comparison according to claim 1, wherein the captured multi-frame real-time image is saved to a designated path and a video is synthesized, specifically:
converting the captured real-time image of fitting performed by the user on the current screen into Mat-type image data;
and storing the Mat image data to a specified path, and synthesizing a video.
3. The fitting method of split-screen comparison according to claim 2, wherein a frame in the video is selected as a preview, and the preview is placed under a video list, so that a user can play the video according to the preview under the video list, specifically:
after judging that the plurality of frames of real-time images are synthesized into the video, selecting one frame of the video as a preview image;
converting the preview image into a resolution required by preview, and placing the preview image after the resolution is converted under a video list;
and clearing the rest real-time images by using a command line so as to enable a user to select and play the videos according to the preview images under the video list.
4. The fitting method of split screen contrast of claim 1, further comprising: when the user sharing operation is detected, the video selected by the user is uploaded to a social platform in a network mode, wherein the social platform comprises a music and video platform.
5. The fitting method of split screen contrast of claim 4, further comprising:
sending a video sharing request to the social platform by using the libcurl library, and signing the sent request based on the MD5 algorithm to generate a 32-character verification string, so that the social platform returns a character string after receiving the request;
and analyzing the character string to generate a sharing two-dimensional code so that a user can watch or download the uploaded video by scanning the two-dimensional code.
6. A split-screen comparison fitting apparatus, characterized by comprising:
the real-time image capturing unit, which is used for starting the Kinect to capture real-time images of the user performing fitting on the current screen when a video recording operation by the user is detected, until the user performs an operation of stopping the video recording;
the video synthesis unit is used for saving the captured multi-frame real-time images to a specified path and synthesizing a video;
the preview image selecting unit is used for selecting one frame in the video as a preview image and placing the preview image under a video list so that a user can select and play the video according to the preview image under the video list;
and the video split-screen playing unit is used for receiving at least two videos selected to be played by a user and performing split-screen playing on the at least two videos on the same screen based on the ROI, so that split-screen comparison fitting is realized.
7. The fitting device of split screen contrast of claim 6, wherein the video synthesizing unit comprises:
the real-time image conversion module is used for converting the captured real-time image of fitting performed by the user on the current screen into Mat-type image data;
and the video synthesis module is used for storing the Mat image data to a specified path and synthesizing a video.
8. The fitting device with split screen comparison according to claim 7, wherein the preview selecting unit comprises:
the preview image selecting module, which is used for selecting one frame of the video as a preview image after determining that the multiple frames of real-time images have been synthesized into the video;
the resolution conversion module is used for converting the preview image into the resolution required by the preview and placing the preview image after the resolution conversion under a video list;
and the image removing module is used for removing the rest real-time images by using the command line so as to enable the user to select and play the video according to the preview image in the video list.
9. Fitting device for split screen comparison, characterized in that it comprises a processor, a memory and a computer program stored in said memory, said computer program being executable by said processor to implement a fitting method for split screen comparison according to any of claims 1 to 5.
10. A computer-readable storage medium, characterized by comprising a stored computer program, wherein, when the computer program runs, the device on which the computer-readable storage medium is located is controlled to execute the split-screen comparison fitting method according to any one of claims 1 to 5.
CN202010129301.XA 2020-02-28 2020-02-28 Split-screen comparison fitting method, device, equipment and storage medium Pending CN111309212A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010129301.XA CN111309212A (en) 2020-02-28 2020-02-28 Split-screen comparison fitting method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010129301.XA CN111309212A (en) 2020-02-28 2020-02-28 Split-screen comparison fitting method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111309212A true CN111309212A (en) 2020-06-19

Family

ID=71145300

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010129301.XA Pending CN111309212A (en) 2020-02-28 2020-02-28 Split-screen comparison fitting method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111309212A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112162814A (en) * 2020-09-27 2021-01-01 维沃移动通信有限公司 Image display method and device and electronic equipment
CN112862561A (en) * 2020-07-29 2021-05-28 友达光电股份有限公司 Image display method and display system
WO2022012154A1 (en) * 2020-07-14 2022-01-20 海信视像科技股份有限公司 Display device, screen recording method, and screen recording sharing method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201557200U (en) * 2009-11-06 2010-08-18 上海十条电子有限公司 Self-help comparison system
JP2011060005A (en) * 2009-09-10 2011-03-24 Hitachi Solutions Ltd Online shopping virtual try-on system
CN104750712A (en) * 2013-12-27 2015-07-01 珠海金山办公软件有限公司 Document sharing method and device
CN107211165A (en) * 2015-01-09 2017-09-26 艾斯适配有限公司 Devices, systems, and methods for automatically delaying video display
CN108022124A (en) * 2016-11-01 2018-05-11 纬创资通股份有限公司 Interactive clothing fitting method and display system thereof
CN109299989A (en) * 2017-07-24 2019-02-01 深圳市点网络科技有限公司 Virtual reality dressing system

Similar Documents

Publication Publication Date Title
JP4032776B2 (en) Mixed reality display apparatus and method, storage medium, and computer program
JP6595714B2 (en) Method and apparatus for generating a two-dimensional code image having a dynamic effect
CN107204031B (en) Information display method and device
CN106816077B (en) Interactive sandbox methods of exhibiting based on two dimensional code and augmented reality
CN111309212A (en) Split-screen comparison fitting method, device, equipment and storage medium
WO2017087568A1 (en) A digital image capturing device system and method
CN102165404B (en) Object detection and user settings
US10169629B2 (en) Decoding visual codes
CN104281864A (en) Method and equipment for generating two-dimensional codes
CN110213458B (en) Image data processing method and device and storage medium
CN113469200A (en) Data processing method and system, storage medium and computing device
CN108038760B (en) Commodity display control system based on AR technology
TWI744962B (en) Information processing device, information processing system, information processing method, and program product
CN114025100A (en) Shooting method, shooting device, electronic equipment and readable storage medium
JP6267809B1 (en) Panorama image synthesis analysis system, panorama image synthesis analysis method and program
CN106709427B (en) Keyboard action detection method and device
CN112529770B (en) Image processing method, device, electronic equipment and readable storage medium
KR20120087232A (en) System and method for expressing augmented reality-based content
US8534542B2 (en) Making an ordered element list
WO2024024437A1 (en) Learning data generation method, learning model, information processing device, and information processing method
WO2024029533A1 (en) Learning data generation method, trained model, information processing device, and information processing method
CN116055708B (en) Perception visual interactive spherical screen three-dimensional imaging method and system
WO2018120353A1 (en) Vr capturing method, system and mobile terminal
KR102520445B1 (en) Apparatus and method for real-time streaming display of mapping content linked to a single printed photo image
CN112581418B (en) Virtual content identification and display method and system based on augmented reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200619