CN111724296A - Method, device, equipment and storage medium for displaying image


Info

Publication number
CN111724296A
Authority
CN
China
Prior art keywords
image frame
candidate image
image frames
candidate
rotation angle
Prior art date
Legal status
Granted
Application number
CN202010612628.2A
Other languages
Chinese (zh)
Other versions
CN111724296B (en)
Inventor
郭武彪
胡慈娇
孙学青
袁振坤
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010612628.2A
Publication of CN111724296A
Application granted
Publication of CN111724296B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/60Memory management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44521Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/60Rotation of a whole image or part thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a method, an apparatus, a device, and a storage medium for displaying images, and relates to the fields of image processing and video processing. The scheme is implemented as follows: a target video is acquired, the target video comprising a plurality of image frames of a target item; the rotation angle of each image frame relative to an initial image frame is determined; candidate image frames are determined from the plurality of image frames according to the rotation angle; the candidate image frames are processed; and the processed candidate image frames are displayed. This implementation increases the loading speed of high-definition images and improves the user's browsing experience.

Description

Method, device, equipment and storage medium for displaying image
Technical Field
The present application relates to the field of computer technologies, and in particular, to the field of image processing and video processing, and more particularly, to a method, an apparatus, a device, and a storage medium for displaying an image.
Background
As information has proliferated, presenting products with flat pictures and text descriptions in a catalog-like form has become the mainstream display mode. However, such displays remain two-dimensional and static, cannot fully convey the appearance and characteristics of a product, and therefore cannot satisfy customers' need to gather information.
A three-dimensional display presents the product itself to the customer in the most direct and intuitive way. The customer can thus understand the appearance and characteristics of the product more intuitively and comprehensively, and can decide how to observe it; such interaction is difficult to achieve with a two-dimensional display.
Existing three-dimensional display methods may display an item by shooting a video of the product. However, as image resolution increases, displaying high-definition video tends to occupy a large amount of bandwidth. When the network cannot meet this requirement, video loading easily stalls.
Disclosure of Invention
A method, apparatus, device, and storage medium for presenting an image are provided.
According to a first aspect, there is provided a method for presenting an image, comprising: acquiring a target video, wherein the target video comprises a plurality of image frames of a target object; determining a rotation angle of each image frame relative to the initial image frame; according to the rotation angle, candidate image frames are determined from the plurality of image frames; processing the candidate image frames; and displaying the processed candidate image frame.
According to a second aspect, there is provided an apparatus for presenting an image, comprising: a target video acquisition unit configured to acquire a target video, the target video including a plurality of image frames of a target item; a rotation angle determining unit configured to determine a rotation angle of each image frame with respect to an initial image frame; a candidate image frame determining unit configured to determine a candidate image frame from the plurality of image frames according to the rotation angle; a candidate image frame processing unit configured to process a candidate image frame; and the candidate image frame display unit is configured to display the processed candidate image frame.
According to a third aspect, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in the first aspect.
According to a fourth aspect, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method as described in the first aspect.
The technology of this application solves the stalling problem that easily occurs when high-definition videos or images are loaded in existing item display schemes, increases the loading speed of high-definition images, and improves the user's browsing experience.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become readily apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for presenting images according to the present application;
FIG. 3 is a flow diagram of another embodiment of a method for presenting images according to the present application;
FIG. 4 is a schematic illustration of an application scenario of a method for presenting images according to the present application;
FIG. 5 is a schematic illustration of a background board employed in the application scenario of FIG. 4;
FIG. 6 is a schematic block diagram of one embodiment of an apparatus for displaying images according to the present application;
FIG. 7 is a block diagram of an electronic device for implementing the method for presenting images according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for presenting images or the apparatus for presenting images of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include a camera 101 and a terminal device 102. The camera 101 and the terminal device 102 may be connected by various connection means, such as a data line connection, a network connection, and the like.
The user may take a video or a photograph of an item using the camera 101 and send the taken video or photograph to the terminal device 102. Various communication client applications, such as an image processing application, a video processing application, and the like, may be installed on the terminal device 102.
The terminal device 102 may be hardware or software. When the terminal device 102 is hardware, it may be any of various electronic devices, including but not limited to a smart phone, a tablet computer, a vehicle-mounted computer, a laptop portable computer, a desktop computer, and the like. When the terminal device 102 is software, it may be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module; no particular limitation is imposed herein.
It should be noted that the method for presenting an image provided by the embodiment of the present application is generally performed by the terminal device 102. Accordingly, the means for presenting the image is generally provided in the terminal device 102.
It should be understood that the number of cameras and terminal devices in fig. 1 is merely illustrative. There may be any number of cameras and terminal devices, depending on implementation needs.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for presenting images in accordance with the present application is shown. The method for displaying the image of the embodiment comprises the following steps:
Step 201, acquiring a target video.
In this embodiment, an execution subject of the method for presenting an image (e.g., the terminal device 102 shown in fig. 1) may acquire the target video in various ways. For example, it may acquire a locally stored video or a video shot in real time by a camera connected to the terminal. The target video may include a plurality of image frames of the target item. The target item may be any item to be displayed, such as merchandise sold online. The plurality of image frames in the target video may be captured continuously, with different image frames showing the target item from different angles. It is understood that the first image frame of the target video may be taken as the initial image frame, and the remaining image frames are rotated by different angles relative to the initial image frame. In some specific applications, the target video includes image frames covering a full 360° of the target item.
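As an illustration only, a minimal Python sketch of this frame-extraction step is given below; it assumes OpenCV is available on the terminal device, and the file name is hypothetical rather than part of the disclosure.

import cv2  # assumption: OpenCV is available on the terminal device

def read_frames(video_path):
    # Read every image frame of the target video into a list (sketch only).
    capture = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frames.append(frame)
    capture.release()
    return frames

frames = read_frames("target_item.mp4")  # hypothetical file name
initial_frame = frames[0]  # the first image frame serves as the initial image frame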
Step 202, determining the rotation angle of each image frame relative to the initial image frame.
In this embodiment, the execution subject may determine the rotation angle of each image frame relative to the initial image frame in various ways. Specifically, the execution subject may determine the rotation angle from the features of the item and the item information present in each image frame. For example, suppose the target item is a hexahedral toy whose faces carry different patterns and parts. The execution subject may determine the rotation angle of each image frame relative to the initial image frame by analyzing the features of those patterns and parts in each image frame.
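One possible realization of such feature-based angle estimation is sketched below using ORB matching and a partial affine fit in OpenCV; the detector choice and the use of estimateAffinePartial2D are illustrative assumptions, not the claimed method.

import math
import cv2
import numpy as np

def estimate_rotation(initial_frame, frame):
    # Estimate the in-plane rotation of `frame` relative to `initial_frame` (sketch).
    orb = cv2.ORB_create()
    gray0 = cv2.cvtColor(initial_frame, cv2.COLOR_BGR2GRAY)
    gray1 = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp0, des0 = orb.detectAndCompute(gray0, None)
    kp1, des1 = orb.detectAndCompute(gray1, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des0, des1)
    src = np.float32([kp0[m.queryIdx].pt for m in matches])
    dst = np.float32([kp1[m.trainIdx].pt for m in matches])
    matrix, _ = cv2.estimateAffinePartial2D(src, dst)
    # The rotation angle can be read off the similarity-transform matrix.
    return math.degrees(math.atan2(matrix[1, 0], matrix[0, 0])) % 360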
Step 203, according to the rotation angle, a candidate image frame is determined from the plurality of image frames.
After the rotation angle of each image frame relative to the initial image frame is determined, the execution subject may select several of the image frames as candidate image frames. Specifically, the execution subject may select image frames uniformly (for example, one image frame every 5°). Alternatively, more image frames may be selected around detailed parts of the target item and fewer image frames where there is little detail.
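A minimal sketch of the uniform selection described above follows; the 5° step comes from the example in the text, while the (frame, angle) pairing is an assumed data structure.

def select_candidates(frames_with_angles, step_deg=5.0):
    # Keep roughly one image frame per `step_deg` of rotation (sketch).
    # `frames_with_angles` is assumed to be a list of (frame, angle) pairs
    # sorted by angle, with angles in degrees in [0, 360).
    candidates = []
    next_angle = 0.0
    for frame, angle in frames_with_angles:
        if angle >= next_angle:
            candidates.append((frame, angle))
            next_angle += step_deg
    return candidates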
Step 204, processing the candidate image frames.
After the candidate image frames are selected, they may be processed, for example by aligning the image frames, modifying the background, and so on.
Step 205, displaying the processed candidate image frames.
After processing the candidate image frame, the execution subject may present the processed candidate image frame. Specifically, the execution subject may display the processed candidate image frames in batches. Alternatively, the execution subject may collectively load the image frames after reducing the resolution of each candidate image frame.
The method for displaying an image provided by this embodiment of the application solves the stalling problem that easily occurs when high-definition videos or images are loaded in existing item display schemes, increases the loading speed of high-definition images, and improves the user's browsing experience.
With continued reference to FIG. 3, a flow 300 of another embodiment of a method for presenting images in accordance with the present application is shown. As shown in fig. 3, the method for presenting an image of the present embodiment may include the steps of:
Step 301, a target video is obtained.
In this embodiment, each image frame of the target video may include not only the target item but also a marker pattern. The marker pattern may be any irregular pattern, for example a border with a complex pattern, or a line segment.
Step 302, determining the rotation angle of each image frame relative to the initial image frame according to the marker pattern in each image frame and the marker pattern in the initial image frame.
In this embodiment, the execution subject may determine the rotation angle of each image frame relative to the initial image frame according to the marker pattern in each image frame and the marker pattern in the initial image frame. For example, in the initial image frame the center line of the marker pattern points to 0°, while in other image frames the center line of the marker pattern points to 5°, 10°, and so on. It can then be determined that those image frames are rotated by 5°, 10°, and so on relative to the initial image frame.
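A minimal sketch of reading the rotation off the marker pattern's center line is given below; it assumes the marker can be isolated by a simple color threshold, and the threshold values are hypothetical.

import math
import cv2
import numpy as np

def marker_direction(frame, lower=(0, 0, 0), upper=(80, 80, 80)):
    # Fit a straight line through the marker pixels and return its direction in degrees (sketch).
    # `lower` and `upper` are hypothetical BGR bounds that isolate the marker pattern.
    mask = cv2.inRange(frame, np.array(lower, np.uint8), np.array(upper, np.uint8))
    points = cv2.findNonZero(mask)
    vx, vy, _, _ = cv2.fitLine(points, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return math.degrees(math.atan2(vy, vx)) % 360

def rotation_relative_to_initial(initial_frame, frame):
    # Rotation of `frame` relative to the initial image frame, via the marker's center line.
    return (marker_direction(frame) - marker_direction(initial_frame)) % 360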
In some optional implementations of this embodiment, the execution subject may display the determined rotation angle in real time. When the rotation angle has not yet reached 360°, prompt information may be displayed to remind the user to shoot a 360° video of the target item so that the target item can be displayed more comprehensively.
Step 303, determining candidate image frames from the plurality of image frames at intervals of a preset rotation angle.
After determining the rotation angle of each image frame, the execution subject may determine candidate image frames from the plurality of image frames at intervals of a preset rotation angle. For example, the execution subject may select the image frames at 0°, 5°, 10°, 15°, ..., 360° as the candidate image frames. In this way, the bandwidth occupied when loading the images can be reduced and loading efficiency improved, while the target item is still displayed comprehensively.
Step 304, determining the marker points of the marker pattern in each candidate image frame, and aligning each candidate image frame according to the marker points.
After determining each candidate image frame, the execution subject may determine the marker point of the marker pattern in each candidate image frame. Specifically, the marker point may be a specific point in the marker pattern. For example, if the marker pattern is a bird, the marker point may be the bird's eye; if the marker pattern is a line segment, the marker point may be the midpoint of the line segment.
Then, the execution subject may perform an alignment process on each candidate image frame according to the above-described marker point. Here, the alignment refers to adjusting the coordinates of the marker points in each candidate image frame to a uniform coordinate value.
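A minimal sketch of such alignment by translation is shown below; the target coordinate is a hypothetical common value shared by all candidate image frames.

import cv2
import numpy as np

def align_to_marker(frame, marker_point, target_point=(100, 100)):
    # Translate `frame` so that its marker point lands on `target_point` (sketch).
    # `target_point` is a hypothetical uniform coordinate used for every candidate frame.
    dx = target_point[0] - marker_point[0]
    dy = target_point[1] - marker_point[1]
    translation = np.float32([[1, 0, dx], [0, 1, dy]])
    height, width = frame.shape[:2]
    return cv2.warpAffine(frame, translation, (width, height))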
Step 305, removing the marker pattern and making the solid background transparent so as to matte out the target item.
In this embodiment, each image frame further includes a solid-color background. The color of the background may be in sharp contrast to the color of the target item. For example, if the target item is dark, the background may be white; if the target item is light, the background may be black. Using a solid-color background makes it easy to matte out the target item.
After aligning each candidate image frame, the execution subject may remove the marker pattern from each candidate image frame and then make the solid-color background transparent, thereby matting out the target item.
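A minimal sketch of turning a solid white background transparent is shown below; it assumes a dark target item on a white background, and the threshold is illustrative.

import cv2
import numpy as np

def matte_on_white(frame_bgr, threshold=240):
    # Make near-white background pixels transparent so only the target item remains (sketch).
    bgra = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2BGRA)
    background = cv2.inRange(frame_bgr, (threshold, threshold, threshold), (255, 255, 255))
    bgra[:, :, 3] = np.where(background > 0, 0, 255)  # alpha 0 where the background is
    return bgra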
Step 306, determining a bounding box of the target item, and adjusting the size of each image frame according to the size of the bounding box.
After matting out the target item, the execution subject may further determine a bounding box of the target item. Here, the bounding box may be the smallest circumscribed rectangle of the target item in the image frame. The execution subject may determine the size of the bounding box, for example from the coordinate parameters of the bounding box, and then adjust the size of each image frame according to the size of the bounding box. Specifically, the execution subject may determine the size of a square that can contain the bounding box and then adjust the size of the image frame according to the size of that square.
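A minimal sketch of the bounding-box step follows; it assumes the matting above has produced a BGRA image in which the alpha channel marks the target item.

import cv2

def crop_to_square(frame_bgra):
    # Crop the frame to a square that contains the item's bounding box (sketch).
    alpha = frame_bgra[:, :, 3]
    x, y, w, h = cv2.boundingRect(cv2.findNonZero(alpha))  # smallest enclosing rectangle
    side = max(w, h)  # side length of a square that can contain the bounding box
    cx, cy = x + w // 2, y + h // 2
    x0 = max(cx - side // 2, 0)
    y0 = max(cy - side // 2, 0)
    return frame_bgra[y0:y0 + side, x0:x0 + side]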
After the size of the candidate image frame is adjusted, the execution subject may present the adjusted candidate image frame. Specifically, the execution subject may present the adjusted candidate image frame through step 307 or step 308.
Step 307, dividing the candidate image frames into a plurality of image frame sets according to the rotation angle, and loading the plurality of image frame sets in batches.
In this embodiment, the execution subject may divide the candidate image frames into a plurality of image frame sets according to the rotation angle. Specifically, the execution subject may put the image frames at 0°, 25°, 50°, 75°, ... into one image frame set, the image frames at 5°, 30°, 55°, 80°, ... into another image frame set, the image frames at 10°, 35°, 60°, 85°, ... into yet another image frame set, and so on, until the image frames of all rotation angles have been assigned. The execution subject may then load the plurality of image frame sets in batches. Specifically, the execution subject may first load the image frame set containing the frames at 0°, 25°, 50°, 75°, .... After that set has been loaded, the set containing the frames at 5°, 30°, 55°, 80°, ... is loaded, and so on, until all the image frame sets have been loaded.
In this way, the user first obtains a general view of the target item, and more comprehensive detail is loaded while the user continues to examine it, which reduces the waiting time when browsing the target item.
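A minimal sketch of the interleaved grouping and batch loading is shown below; the number of sets and the load routine are illustrative assumptions.

def split_into_interleaved_sets(candidates, num_sets=5):
    # Partition the candidate frames into interleaved sets by rotation angle (sketch).
    # With candidates every 5 degrees and num_sets=5, set 0 holds 0, 25, 50, ... degrees,
    # set 1 holds 5, 30, 55, ... degrees, matching the batches described above.
    sets = [[] for _ in range(num_sets)]
    ordered = sorted(candidates, key=lambda item: item[1])  # (frame, angle) pairs
    for index, pair in enumerate(ordered):
        sets[index % num_sets].append(pair)
    return sets

# Batch loading sketch: the first set gives a coarse 360-degree view, the rest add detail.
# for frame_set in split_into_interleaved_sets(candidates):
#     load(frame_set)  # hypothetical loading routine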
Step 308, adjusting the resolution of the candidate image frames, and loading the adjusted candidate image frames.
In this embodiment, when displaying the target item, the resolution of the candidate image frames may also be adjusted first. Specifically, the execution subject may grade an image into several resolution levels, for example dividing an 8K-resolution image into levels such as 4K resolution, 2K resolution, 1K resolution, and so on. The execution subject may load the lower-resolution image frames first, e.g., the 1K-resolution candidate image frames. When the user examines the target item closely, the higher-resolution candidate image frames are loaded in batches.
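A minimal sketch of preparing resolution levels for progressive loading is given below; the target widths are illustrative and do not come from the disclosure.

import cv2

def build_resolution_levels(frame, target_widths=(1024, 2048, 4096)):
    # Build lower-resolution versions of a candidate frame, from coarse to fine (sketch).
    height, width = frame.shape[:2]
    levels = []
    for target_width in target_widths:
        scale = target_width / width
        levels.append(cv2.resize(frame, (target_width, int(height * scale))))
    return levels

# Loading sketch: show the lowest-resolution level first, then replace it with
# higher-resolution levels while the user keeps examining the target item.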
With continued reference to fig. 4, a schematic illustration of an application scenario of the method for presenting an image according to the present application is shown. In the application scenario of fig. 4, the user places the target item on the background board shown in fig. 5 and fixes the mobile phone in place. The background board is rotated by hand while the camera on the mobile phone shoots the target item. The border on the background board shown in fig. 5 is the marker pattern, through which the mobile phone can recognize the rotation angle of the target item in real time. After the video has been captured by the camera, one image frame may be selected as a candidate image frame every 5°. After the candidate frames have been aligned, had their background replaced, been cropped, and otherwise processed, the candidate image frames of the target item are displayed on the mobile phone.
The method for displaying an image provided by the above embodiment of the application uses the marker pattern to provide a reference coordinate system, so that the rotation angle of the item can be determined quickly; uses a solid-color background, so that matting is easy; extracts only some of the image frames from the video, so that the target item is displayed comprehensively while the image loading speed is increased; and loads the image frames in batches, so that the user's waiting time is reduced and the browsing experience is improved.
With further reference to fig. 6, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for presenting an image, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 6, the apparatus 600 for displaying an image of the present embodiment includes: a target video acquiring unit 601, a rotation angle determining unit 602, a candidate image frame determining unit 603, a candidate image frame processing unit 604, and a candidate image frame presenting unit 605.
A target video acquiring unit 601 configured to acquire a target video. The target video includes a plurality of image frames of the target item.
A rotation angle determination unit 602 configured to determine a rotation angle of each image frame with respect to the initial image frame.
A candidate image frame determining unit 603 configured to determine a candidate image frame from the plurality of image frames according to the rotation angle.
A candidate image frame processing unit 604 configured to process the candidate image frames.
A candidate image frame presentation unit 605 configured to present the processed candidate image frames.
In some optional implementations of this embodiment, the plurality of image frames further include a marker pattern. The rotation angle determination unit 602 may be further configured to: determine the rotation angle of each image frame relative to the initial image frame according to the marker pattern in each image frame and the marker pattern in the initial image frame.
In some optional implementations of the present embodiment, the candidate image frame determining unit 603 may be further configured to: candidate image frames are determined from the plurality of image frames at intervals of a preset rotation angle.
In some optional implementations of the present embodiment, the candidate image frame processing unit 604 may be further configured to: determine the marker point of the marker pattern in each candidate image frame, and align each candidate image frame according to the marker points.
In some optional implementations of the present embodiment, the plurality of image frames include a solid-color background. The candidate image frame processing unit 604 may be further configured to: remove the marker pattern and make the solid background transparent so as to matte out the target item.
In some optional implementations of the present embodiment, the candidate image frame processing unit 604 may be further configured to: determine a bounding box of the target item, and adjust the size of each image frame according to the size of the bounding box.
In some optional implementations of this embodiment, the candidate image frame presentation unit 605 may be further configured to: dividing the candidate image frame into a plurality of image frame sets according to the rotation angle; a plurality of sets of image frames are loaded in batches.
In some optional implementations of this embodiment, the candidate image frame presentation unit 605 may be further configured to: adjusting the resolution of the candidate image frame; and loading the adjusted candidate image frame.
It should be understood that units 601 to 605 recited in the apparatus 600 for presenting an image correspond to respective steps in the method described with reference to fig. 2, respectively. Thus, the operations and features described above for the method for presenting an image are equally applicable to the apparatus 600 and the units comprised therein and will not be described in detail here.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 7, is a block diagram of an electronic device performing a method for presenting an image according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the electronic apparatus includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 701 is taken as an example.
The memory 702 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the method for presenting images provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method for presenting images provided herein.
The memory 702, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method for presenting an image (e.g., the target video acquiring unit 601, the rotation angle determining unit 602, the candidate image frame determining unit 603, the candidate image frame processing unit 604, and the candidate image frame presenting unit 605 shown in fig. 6) in the embodiments of the present application. The processor 701 executes various functional applications and data processing of the server by running non-transitory software programs, instructions and modules stored in the memory 702, namely, implements the method for displaying an image performed in the above method embodiment.
The memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of an electronic device performed to present an image, and the like. Further, the memory 702 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 702 optionally includes memory located remotely from processor 701, which may be connected via a network to an electronic device executing instructions for presenting images. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device performing the method for presenting an image may further include: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703 and the output device 704 may be connected by a bus or other means, as exemplified by the bus connection in fig. 7.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus for displaying an image; it may be, for example, a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a trackball, a joystick, or another input device. The output device 704 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, capable of receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computing programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and addresses the drawbacks of high management difficulty and weak service scalability in traditional physical hosts and Virtual Private Server (VPS) services.
According to the technical scheme of the embodiment of the application, the problem that the high-definition video or the image is easy to jam when being loaded in the existing article display scheme is solved, the loading speed of the high-definition image is increased, and the browsing experience of a user is improved.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (18)

1. A method for presenting an image, comprising:
acquiring a target video, wherein the target video comprises a plurality of image frames of a target item;
determining a rotation angle of each image frame relative to the initial image frame;
determining candidate image frames from the plurality of image frames according to the rotation angle;
processing the candidate image frame;
and displaying the processed candidate image frame.
2. The method of claim 1, wherein the plurality of image frames further comprise a marker pattern; and
the determining the rotation angle of each image frame relative to the initial image frame comprises:
and determining the rotation angle of each image frame relative to the initial image frame according to the marker pattern in each image frame and the marker pattern in the initial image frame.
3. The method of claim 1, wherein said determining candidate image frames from said plurality of image frames according to said angle of rotation comprises:
candidate image frames are determined from the plurality of image frames at intervals of a preset rotation angle.
4. The method of claim 2, wherein the processing the candidate image frames comprises:
determining a marker point of a marker pattern in each candidate image frame;
and aligning each candidate image frame according to the marker points.
5. The method of claim 4, wherein the plurality of image frames comprise a solid background; and
the processing the candidate image frame comprises:
and removing the marker pattern and making the solid background transparent so as to matte out the target item.
6. The method of claim 5, wherein the processing the candidate image frames comprises:
determining a bounding box of the target item;
and adjusting the size of each image frame according to the size of the bounding box.
7. The method of claim 1, wherein said presenting the processed candidate image frames comprises:
dividing the candidate image frames into a plurality of image frame sets according to the rotation angle;
batch loading the plurality of sets of image frames.
8. The method of claim 1, wherein said presenting the processed candidate image frames comprises:
adjusting a resolution of the candidate image frame;
and loading the adjusted candidate image frame.
9. An apparatus for presenting images, comprising:
a target video acquisition unit configured to acquire a target video, the target video including a plurality of image frames of a target item;
a rotation angle determining unit configured to determine a rotation angle of each image frame with respect to an initial image frame;
a candidate image frame determining unit configured to determine a candidate image frame from the plurality of image frames according to the rotation angle;
a candidate image frame processing unit configured to process the candidate image frame;
and the candidate image frame display unit is configured to display the processed candidate image frame.
10. The apparatus of claim 9, wherein the plurality of image frames further comprise a marker pattern; and
the rotation angle determination unit is further configured to:
and determining the rotation angle of each image frame relative to the initial image frame according to the marker pattern in each image frame and the marker pattern in the initial image frame.
11. The apparatus of claim 9, wherein the candidate image frame determination unit is further configured to:
candidate image frames are determined from the plurality of image frames at intervals of a preset rotation angle.
12. The apparatus of claim 10, wherein the candidate image frame processing unit is further configured to:
determining a marker point of a marker pattern in each candidate image frame;
and aligning each candidate image frame according to the marker points.
13. The apparatus of claim 12, wherein the plurality of image frames comprise a solid background; and
the candidate image frame processing unit is further configured to:
and removing the marker pattern and making the solid background transparent so as to matte out the target item.
14. The apparatus of claim 13, wherein the candidate image frame processing unit is further configured to:
determining a bounding box of the target item;
and adjusting the size of each image frame according to the size of the bounding box.
15. The apparatus of claim 9, wherein the candidate image frame presentation unit is further configured to:
dividing the candidate image frames into a plurality of image frame sets according to the rotation angle;
batch loading the plurality of sets of image frames.
16. The apparatus of claim 9, wherein the candidate image frame presentation unit is further configured to:
adjusting a resolution of the candidate image frame;
and loading the adjusted candidate image frame.
17. An electronic device for presenting images, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-8.
CN202010612628.2A 2020-06-30 2020-06-30 Method, apparatus, device and storage medium for displaying image Active CN111724296B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010612628.2A CN111724296B (en) 2020-06-30 2020-06-30 Method, apparatus, device and storage medium for displaying image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010612628.2A CN111724296B (en) 2020-06-30 2020-06-30 Method, apparatus, device and storage medium for displaying image

Publications (2)

Publication Number Publication Date
CN111724296A (en) 2020-09-29
CN111724296B CN111724296B (en) 2024-04-02

Family

ID=72570386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010612628.2A Active CN111724296B (en) 2020-06-30 2020-06-30 Method, apparatus, device and storage medium for displaying image

Country Status (1)

Country Link
CN (1) CN111724296B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160314596A1 (en) * 2015-04-26 2016-10-27 Hai Yu Camera view presentation method and system
US20170280130A1 (en) * 2016-03-25 2017-09-28 Microsoft Technology Licensing, Llc 2d video analysis for 3d modeling
CN109829467A (en) * 2017-11-23 2019-05-31 财团法人资讯工业策进会 Image labeling method, electronic device and non-transient computer-readable storage medium
CN110503725A (en) * 2019-08-27 2019-11-26 百度在线网络技术(北京)有限公司 Method, apparatus, electronic equipment and the computer readable storage medium of image procossing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李祥攀; 张彪; 孙凤池; 刘杰: "Indoor scene understanding based on multi-view RGB-D image frame data fusion" (基于多视角RGB-D图像帧数据融合的室内场景理解), 计算机研究与发展 (Journal of Computer Research and Development), no. 06

Also Published As

Publication number Publication date
CN111724296B (en) 2024-04-02

Similar Documents

Publication Publication Date Title
US10249089B2 (en) System and method for representing remote participants to a meeting
US11700417B2 (en) Method and apparatus for processing video
US20170169619A1 (en) Contextual local image recognition dataset
EP2972950B1 (en) Segmentation of content delivery
CA3083486C (en) Method, medium, and system for live preview via machine learning models
CN111695628B (en) Key point labeling method and device, electronic equipment and storage medium
CN107357503B (en) Self-adaptive display method and system for three-dimensional model of industrial equipment
JP7223056B2 (en) Image screening method, device, electronic device and storage medium
CN111598164A (en) Method and device for identifying attribute of target object, electronic equipment and storage medium
US9508120B2 (en) System and method for computer vision item recognition and target tracking
CN112559884B (en) Panorama and interest point hooking method and device, electronic equipment and storage medium
CN110148224B (en) HUD image display method and device and terminal equipment
CN111861991A (en) Method and device for calculating image definition
CN111767490A (en) Method, device, equipment and storage medium for displaying image
US11451721B2 (en) Interactive augmented reality (AR) based video creation from existing video
CN110941987B (en) Target object identification method and device, electronic equipment and storage medium
CN111724296B (en) Method, apparatus, device and storage medium for displaying image
CN111986263A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111275827A (en) Edge-based augmented reality three-dimensional tracking registration method and device and electronic equipment
CN110992297A (en) Multi-commodity image synthesis method and device, electronic equipment and storage medium
CN111898489B (en) Method and device for marking palm pose, electronic equipment and storage medium
CN111787389A (en) Transposed video identification method, device, equipment and storage medium
US20210149200A1 (en) Display information on a head-mountable apparatus corresponding to data of a computing device
CN110728227A (en) Image processing method and device
CN111601042B (en) Image acquisition method, image display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant