CN111783504A - Method and apparatus for displaying information - Google Patents


Publication number
CN111783504A
CN111783504A CN201910358534.4A
Authority
CN
China
Prior art keywords
dimensional model
target page
target
model file
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910358534.4A
Other languages
Chinese (zh)
Inventor
刘登勇
卢毓智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201910358534.4A priority Critical patent/CN111783504A/en
Publication of CN111783504A publication Critical patent/CN111783504A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/40 Document-oriented image-based pattern recognition
    • G06V 30/41 Analysis of document content
    • G06V 30/413 Classification of content, e.g. text, photographs or tables
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the disclosure disclose a method and an apparatus for displaying information. One embodiment of the method comprises: recognizing a captured image of a target page of a target reading material, and determining a target page identifier corresponding to the target page according to the recognition result; sending the target page identifier to a server, and receiving a three-dimensional model file sent by the server for the target page identifier; determining a first three-dimensional model subfile to be rendered from the three-dimensional model file; and rendering and displaying the three-dimensional model constructed from the first three-dimensional model subfile to be rendered. This embodiment renders and displays a three-dimensional model based on the captured image of the target page of the target reading material.

Description

Method and apparatus for displaying information
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a method and a device for displaying information.
Background
Conventional books often contain a large number of illustrations; books for children in particular are usually heavily illustrated. An illustration can supplement or artistically interpret the text, add interest to the written content, and make the text more vivid in the reader's mind. In a conventional book, however, a large illustration is constrained by the area of the page. Common workarounds include shrinking the illustration, splitting it across pages, or printing it on a fold-out page, all of which degrade the presentation of the illustrated content and make the illustration inconvenient for the user to view.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for displaying information.
In a first aspect, an embodiment of the present disclosure provides a method for displaying information, the method including: recognizing a captured image of a target page of a target reading material, and determining a target page identifier corresponding to the target page according to the recognition result; sending the target page identifier to a server, and receiving a three-dimensional model file sent by the server for the target page identifier; determining a first three-dimensional model subfile to be rendered from the three-dimensional model file; and rendering and displaying the three-dimensional model constructed from the first three-dimensional model subfile to be rendered.
In some embodiments, the above method further comprises: in response to determining that the movement information is detected, determining a second three-dimensional model subfile to be rendered from the three-dimensional model file according to the movement information; and rendering and displaying the three-dimensional model constructed by the second three-dimensional model subfile to be rendered.
In some embodiments, recognizing the captured image of the target page of the target reading material and determining the target page identifier corresponding to the target page according to the recognition result includes: performing target recognition on the image, and determining a target identifier for at least one target in the image according to the target recognition result; and determining the target page identifier corresponding to the target page according to the target identifier of the at least one target.
In some embodiments, the image includes a two-dimensional code; and recognizing the captured image of the target page of the target reading material and determining the target page identifier corresponding to the target page according to the recognition result includes: decoding the two-dimensional code in the image to obtain the target page identifier corresponding to the target page.
In some embodiments, the three-dimensional model file is determined by the server as follows: matching the target page identifier against the identifier contained in each piece of three-dimensional model file information in a preset three-dimensional model file information set, wherein each piece of three-dimensional model file information includes a three-dimensional model file and the identifier corresponding to that file; and taking the three-dimensional model file corresponding to the identifier in the set that matches the target page identifier as the determined three-dimensional model file.
In a second aspect, an embodiment of the present disclosure provides an apparatus for displaying information, the apparatus including: the identification unit is configured to identify the acquired image of the target page of the target reading material and determine a target page identifier corresponding to the target page according to an identification result; a sending unit, configured to send the target page identifier to a server, and receive a three-dimensional model file sent by the server for the target page identifier; a first determining unit configured to determine a first three-dimensional model subfile to be rendered from the three-dimensional model file; and the first display unit is configured to render and display the three-dimensional model constructed by the first three-dimensional model subfile to be rendered.
In some embodiments, the above apparatus further comprises: a second determination unit configured to determine, in response to determining that movement information is detected, a second three-dimensional model subfile to be rendered from the three-dimensional model file according to the movement information; and the second display unit is configured to render and display the three-dimensional model constructed by the second three-dimensional model subfile to be rendered.
In some embodiments, the above-mentioned identification unit is further configured to: carrying out target recognition on the image, and determining a target identifier of at least one target in the image according to a target recognition result; and determining a target page identifier corresponding to the target page according to the target identifier of the at least one target.
In some embodiments, the image includes a two-dimensional code; and the identification unit is further configured to: and decoding the two-dimensional code in the image to obtain a target page identifier corresponding to the target page.
In some embodiments, the three-dimensional model file is determined by the server as follows: matching the target page identifier against the identifier contained in each piece of three-dimensional model file information in a preset three-dimensional model file information set, wherein each piece of three-dimensional model file information includes a three-dimensional model file and the identifier corresponding to that file; and taking the three-dimensional model file corresponding to the identifier in the set that matches the target page identifier as the determined three-dimensional model file.
In a third aspect, an embodiment of the present disclosure provides a terminal, where the terminal includes: one or more processors; a storage device, on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any implementation manner of the first aspect.
In a fourth aspect, the disclosed embodiments provide a computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
According to the method and apparatus for displaying information provided by the embodiments of the present disclosure, the captured image of the target page of the target reading material is first recognized, and the target page identifier corresponding to the target page is determined according to the recognition result. The target page identifier is then sent to the server, and the three-dimensional model file sent by the server for the target page identifier is received. Next, a first three-dimensional model subfile to be rendered is determined from the three-dimensional model file. Finally, the three-dimensional model constructed from the first three-dimensional model subfile to be rendered is rendered and displayed. The three-dimensional model is thus rendered and displayed based on the captured image of the target page of the target reading material, avoiding the limitation that the page area of a conventional book places on the space available for an illustration.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for displaying information in accordance with the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of a method for displaying information according to the present disclosure;
FIG. 4 is a flow diagram of yet another embodiment of a method for displaying information according to the present disclosure;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for displaying information according to the present disclosure;
FIG. 6 is a block diagram of a computer system suitable for use in implementing a terminal device of an embodiment of the disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 of a method for displaying information or an apparatus for displaying information to which embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a reading application, AR (Augmented Reality) rendering software, a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having an image capture device (e.g., a camera) and supporting rendering and display of a three-dimensional model, including but not limited to smart phones, tablet computers, e-book readers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 105 may be a server providing various services, such as a background server providing support for the three-dimensional models displayed on the terminal devices 101, 102, 103. The background server may analyze and perform other processing on the received data such as the target page identifier, and feed back a processing result (e.g., a three-dimensional model file) to the terminal devices 101, 102, and 103.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
It should be noted that the method for displaying information provided by the embodiment of the present disclosure is generally performed by the terminal devices 101, 102, 103, and accordingly, the apparatus for displaying information is generally disposed in the terminal devices 101, 102, 103.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for displaying information in accordance with the present disclosure is shown. The method for displaying information comprises the following steps:
step 201, identifying the image of the target page of the collected target reading material, and determining the target page identification corresponding to the target page according to the identification result.
In the present embodiment, the execution body of the method for displaying information (e.g., the terminal devices 101, 102, 103 shown in fig. 1) can capture an image of a target page of a target reading material with an image capture device (e.g., a camera) mounted on it. Here, the target reading material may refer to printed matter bound into a volume, such as a book, and may include at least one sheet of paper. The target page of the target reading material may refer to any page in the target reading material. The execution body can apply various kinds of recognition to the captured image and determine, according to the recognition result, the target page identifier corresponding to the target page of the target reading material. As an example, a correspondence table recording a plurality of correspondences between feature information and identifiers may be stored in the execution body in advance. The execution body may first extract feature information from the captured image and take the extracted feature information as the target feature information. It may then compare the target feature information with each piece of feature information in the correspondence table in turn; if a piece of feature information in the table is the same as or similar to the target feature information, the identifier corresponding to that feature information may be used as the target page identifier corresponding to the target page. Here, the target page identifier may be used to uniquely identify the target page. In practice, the target page identifier may include, but is not limited to, Chinese characters, letters, numbers, symbols, and the like.
Usually, the pages of a book carry text and pictures, and this text and these pictures are used to convey information. The user can capture an image of a page of a book with the terminal device in use.
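The correspondence-table lookup described in step 201 can be sketched as follows. This is a minimal illustration, not the patented implementation: the table contents, the plain-vector feature representation, and the cosine-similarity criterion are all assumptions made for the example.

```python
import math

# Hypothetical correspondence table: feature information -> page identifier.
FEATURE_TABLE = [
    ([0.9, 0.1, 0.3], "page-001"),
    ([0.2, 0.8, 0.5], "page-002"),
]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def lookup_page_identifier(target_features, threshold=0.95):
    """Return the identifier whose stored features are most similar to the
    target feature information, or None if nothing is similar enough."""
    best_id, best_sim = None, threshold
    for features, page_id in FEATURE_TABLE:
        sim = cosine_similarity(target_features, features)
        if sim >= best_sim:
            best_id, best_sim = page_id, sim
    return best_id
```

A real system would extract the feature information with an image-recognition model; here the threshold 0.95 stands in for the "same as or similar to" test.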
In some optional implementations of this embodiment, the step 201 may specifically be performed as follows:
firstly, carrying out target recognition on the image, and determining the target identification of at least one target in the image according to the target recognition result.
In this implementation, the execution body may perform target recognition on the image and determine, according to the target recognition result, a target identifier for at least one target included in the image. Here, a target identifier may be used to uniquely identify a target, and the targets included in an image are the objects the image depicts. For example, if the objects shown in an image include a bunny, a hole, and a big tree, then the targets included in the image may include the bunny, the hole, and the big tree.
And then, determining a target page identifier corresponding to the target page according to the target identifier of at least one target.
In this implementation, a correspondence table for describing a plurality of correspondences between the page identifier and the at least one target identifier may be stored in advance in the execution body. In this way, the executing body may compare the target identifier of at least one target in the image with at least one target identifier of each correspondence in the correspondence table. If at least one target identifier in a corresponding relationship in the corresponding relationship table is the same as or similar to the target identifier of at least one target in the image, the page identifier corresponding to the corresponding relationship in the corresponding relationship table may be used as the target page identifier corresponding to the target page.
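The mapping from recognized targets to a page identifier can be sketched as below. The table entries and identifier names are hypothetical, and matching is simplified to exact set equality rather than the "same or similar" comparison described above.

```python
# Hypothetical correspondence table: the set of target identifiers
# recognized on a page -> the page identifier for that page.
PAGE_TABLE = {
    frozenset({"bunny", "hole", "tree"}): "story-page-07",
    frozenset({"bear", "river"}): "story-page-12",
}

def page_identifier_from_targets(target_ids):
    """Compare the recognized target identifiers against each stored
    correspondence; order of recognition does not matter."""
    return PAGE_TABLE.get(frozenset(target_ids))
```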
In some optional implementations of the embodiment, the image of the target page of the target reading collected by the execution main body may include a two-dimensional code. And the step 201 may be specifically performed as follows:
and decoding the two-dimensional code in the image to obtain a target page identifier corresponding to the target page.
In this implementation, the execution body may decode the two-dimensional code in the image to obtain the target page identifier corresponding to the target page. In practical applications, the target page identifier can be converted into a two-dimensional code based on the ZXing library, and the two-dimensional code printed on the target page of the target reading material. After the execution body captures the image of the target page, it can scan the two-dimensional code in the image and decode it based on the ZXing library, thereby obtaining the target page identifier. Here, ZXing is an open-source, multi-format 1D/2D barcode image-processing library implemented in Java.
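The barcode decoding itself would be delegated to a library such as ZXing; what remains application-specific is extracting the page identifier from the decoded payload. The sketch below assumes a hypothetical "key=value" payload format, which the patent does not specify.

```python
def parse_page_identifier(qr_payload):
    """Extract the page identifier from an already-decoded QR payload.

    Assumes a hypothetical 'book=<id>;page=<id>' payload layout; the
    actual encoding used with the two-dimensional code is not specified.
    """
    fields = dict(
        item.split("=", 1) for item in qr_payload.split(";") if "=" in item
    )
    return fields.get("page")
```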
Step 202, sending the target page identifier to the server, and receiving the three-dimensional model file sent by the server for the target page identifier.
In this embodiment, the execution body may send the target page identifier determined in step 201 to the server (e.g., the server 105 shown in fig. 1). After receiving the target page identifier, the server may determine the three-dimensional model file corresponding to it by various methods and send the determined file back to the execution body. Here, a three-dimensional model file is a file used to construct a three-dimensional model. It may include various information about the model, including but not limited to: pictures, materials, lights, geometric meshes, bones, animations, scenes, and so on. The three-dimensional model file may use any file format suitable for building a three-dimensional model, such as the 3ds, MD2, obj, bsp, or x format.
By way of example, the scene described by the three-dimensional model built from the three-dimensional model file corresponding to the target page identifier matches the scene described by the target page. For example, assume the target reading material is a children's picture album that tells several short stories, and that the target page tells a story in which a bunny and other small animals in a forest accidentally fall into a deep hole while playing. In that case, the three-dimensional model built from the three-dimensional model file corresponding to the target page identifier of this page may include virtual objects such as the bunny, the hole, and the other small animals.
In some optional implementation manners of this embodiment, the three-dimensional model file may be determined by the server side in the following manner:
firstly, matching the target page identification with the identification included in the three-dimensional model file information in the preset three-dimensional model file information set.
In this implementation manner, the three-dimensional model file information set may be stored in the server in advance. Each piece of three-dimensional model file information in the set of three-dimensional model file information may include a three-dimensional model file and an identification corresponding to the three-dimensional model file. In this way, the server can match the target page identifier with the identifier of each piece of three-dimensional model file information in the three-dimensional model file information set, and if the target page identifier is the same as the identifier of the piece of three-dimensional model file information, it can be determined that the target page identifier matches with the identifier of the piece of three-dimensional model file information.
And then, taking the three-dimensional model file corresponding to the identifier matched with the identifier of the target page in the three-dimensional model file information set as the determined three-dimensional model file.
In this implementation manner, the server may use a three-dimensional model file corresponding to an identifier matching the identifier of the target page in the three-dimensional model file information set as the determined three-dimensional model file.
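The server-side matching of the two steps above reduces to a lookup in the three-dimensional model file information set. The entries and file paths below are invented for illustration.

```python
# Hypothetical three-dimensional model file information set on the server:
# each entry pairs an identifier with its three-dimensional model file.
MODEL_FILE_INFO_SET = {
    "story-page-07": "models/rabbit_hole_scene.obj",
    "story-page-12": "models/bear_river_scene.obj",
}

def find_model_file(target_page_id):
    """Return the model file whose identifier matches the target page
    identifier, or None if no entry in the set matches."""
    return MODEL_FILE_INFO_SET.get(target_page_id)
```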
Step 203, determining a first three-dimensional model subfile to be rendered from the three-dimensional model file.
In this embodiment, the execution body may determine the first three-dimensional model subfile to be rendered from the three-dimensional model file in various ways. As an example, in practice the three-dimensional model built from the three-dimensional model file may include a plurality of virtual objects. Since the number of virtual objects that the screen of the execution body can display is limited, the execution body can determine the first three-dimensional model subfile to be rendered from the three-dimensional model file according to the screen size.
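One possible screen-size rule is sketched below. The patent does not specify the selection criterion, so the subfile names, the per-subfile width requirements, and the "richest subfile that fits" policy are all assumptions.

```python
# Hypothetical subfiles of one three-dimensional model file, each annotated
# with the minimum screen width (pixels) needed to display its objects.
SUBFILES = [
    {"name": "full_scene", "min_width": 1080},
    {"name": "main_characters", "min_width": 720},
    {"name": "rabbit_only", "min_width": 0},
]

def choose_first_subfile(screen_width):
    """Pick the richest subfile the screen can accommodate; the list is
    ordered from richest to simplest, so the first fit wins."""
    for subfile in SUBFILES:
        if screen_width >= subfile["min_width"]:
            return subfile["name"]
    return None
```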
And 204, rendering and displaying the three-dimensional model constructed by the first three-dimensional model subfile to be rendered.
In this embodiment, the executing body may render and display the three-dimensional model constructed by the first to-be-rendered three-dimensional model subfile. Here, rendering refers to a process of generating an image from a model with software. A model is a description of a three-dimensional object in a well-defined language or data structure that includes information such as geometry, viewpoint, texture, and lighting. In practical applications, a game engine may be adopted to render the three-dimensional model, or an underlying graphics API (Application programming interface) may be directly used to render the three-dimensional model. As one example, a three-dimensional model built from the first to-be-rendered three-dimensional model subfile may be rendered using a game engine (e.g., Unity 3D, a multi-platform integrated game development tool that can create three-dimensional video games, building visualizations, real-time three-dimensional animations, etc. type interactive content). As another example, the three-dimensional model built by the first three-dimensional model subfile to be rendered may be rendered using software developed based on OpenGL ES (OpenGL for Embedded Systems). Among them, OpenGL ES (OpenGL for Embedded Systems) is a subset of OpenGL three-dimensional graphics API.
In practice, AR (Augmented Reality) technology integrates a three-dimensional model with the real world as seen through a camera, providing a more stereoscopic and realistic experience. At this stage, the model is integrated with the real world seen through the camera mainly in three respects.
One) Motion tracking: when the terminal device moves in the real world, the engine rendering the three-dimensional model tracks the position of the terminal device relative to the surrounding world through a process called Concurrent Odometry and Mapping (COM). The engine detects visually distinctive features (called feature points) in the captured camera image and uses these points to calculate its change in position. This visual information is combined with inertial measurements from the IMU (Inertial Measurement Unit) of the terminal device to estimate the pose (position and orientation) of the camera relative to the surrounding world over time. By aligning the pose of the virtual camera that renders the 3D content with the pose of the device camera provided by the engine, the virtual content can be rendered from the correct perspective. The rendered virtual image can then be superimposed on the image obtained from the device camera, making the virtual content appear to be part of the real world.
Two) Environment understanding: the engine continually improves its understanding of the real-world environment by detecting feature points and planes. It looks for clusters of feature points that appear to lie on a common horizontal or vertical surface (e.g., a table or a wall) and treats such surfaces as planes. The engine can also determine the boundary of each plane, on the basis of which virtual objects can be placed on a flat surface.
Three) Light estimation: the engine can detect information about the ambient light, obtaining the average light intensity of the camera image and a color correction. With this information, virtual objects can be lit under the same lighting as their surroundings, which increases realism.
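The light-estimation step can be reduced to two small operations: averaging the intensity of the camera frame and scaling the virtual object's color by it. This is a simplified sketch; real engines also estimate color temperature and directionality.

```python
def average_intensity(pixels):
    """Mean grayscale intensity of a camera frame, values in [0, 1]."""
    return sum(pixels) / len(pixels)

def lit_color(base_color, intensity):
    """Scale a virtual object's RGB color by the estimated ambient
    intensity so it matches the lighting of the surroundings."""
    return tuple(min(1.0, channel * intensity) for channel in base_color)
```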
At this stage, a common rendering process is as follows: first, load the vertex and texture information of the OBJ model into a data buffer; then, load the vertex and fragment shader programs; next, pass the vertex data and texture data into the rendering pipeline; and finally, draw the loaded model.
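The first step of that process, loading OBJ vertex data into buffers, can be sketched as a minimal parser. Only the `v` (vertex) and `f` (face) records are handled; a real loader would also read texture coordinates, normals, and materials.

```python
def load_obj(obj_text):
    """Parse the 'v' and 'f' records of a Wavefront OBJ file into a
    vertex buffer and an index buffer."""
    vertices, indices = [], []
    for line in obj_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # OBJ indices are 1-based; keep only the vertex index
            # before any '/' separator and convert to 0-based.
            indices.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, indices
```

The resulting buffers are what would then be handed to the shader programs and the rendering pipeline.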
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for displaying information according to the present embodiment. In the application scenario of fig. 3, a user first uses a smart phone 301 to acquire an image of a target page 302 of a target reading, the smart phone 301 identifies the acquired image of the target page 302 of the target reading, and determines a target page identifier a corresponding to the target page 302 according to an identification result. After that, the smartphone 301 sends the target page identifier a to the server 303, and receives the three-dimensional model file sent by the server 303 for the target page identifier a. Then, the smartphone 301 determines a first to-be-rendered three-dimensional model subfile from the three-dimensional model file. Finally, the smart phone 301 renders and displays the three-dimensional model constructed by the first three-dimensional model subfile to be rendered.
According to the method provided by the embodiment of the disclosure, the three-dimensional model is rendered and displayed through the acquired image of the target page of the target reading material, and the limitation of a page area to an illustration space when the illustration is drawn in a traditional book is avoided.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for displaying information is shown. The process 400 of the method for displaying information includes the steps of:
step 401, identifying the image of the target page of the collected target reading material, and determining the target page identifier corresponding to the target page according to the identification result.
In this embodiment, step 401 is similar to step 201 of the embodiment shown in fig. 2, and is not described here again.
Step 402, sending the target page identifier to the server, and receiving the three-dimensional model file sent by the server for the target page identifier.
In this embodiment, step 402 is similar to step 202 of the embodiment shown in fig. 2, and is not described herein again.
Step 403, determining a first three-dimensional model subfile to be rendered from the three-dimensional model file.
In this embodiment, step 403 is similar to step 203 of the embodiment shown in fig. 2, and is not described herein again.
Step 404, rendering and displaying the three-dimensional model constructed by the first three-dimensional model subfile to be rendered.
In this embodiment, step 404 is similar to step 204 of the embodiment shown in fig. 2, and is not described here again.
Step 405, in response to determining that the movement information is detected, determining a second three-dimensional model subfile to be rendered from the three-dimensional model file according to the movement information.
In this embodiment, various sensors for collecting movement information, including but not limited to an accelerometer, a gyroscope, and the like, may be installed inside the execution body. In this way, the sensors can detect movement information when the position of the execution body changes. Here, the movement information may include a movement direction, a movement distance, and the like. If movement information is detected, the execution body may determine a second three-dimensional model subfile to be rendered from the three-dimensional model file according to the movement information. In practice, a correspondence between movement information and second three-dimensional model subfiles to be rendered may be predefined in the execution body. As an example, assuming that the movement information includes a movement direction, which may be up, down, left, or right, the execution body may predefine the corresponding second three-dimensional model subfile to be rendered for each of these directions. In this way, the execution body may determine the second three-dimensional model subfile to be rendered from the three-dimensional model file according to the movement information detected by the sensors.
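The predefined correspondence described above can be sketched as a simple lookup table; the direction keys and sub-file names below are hypothetical.

```python
# Illustrative correspondence between movement direction and the second
# three-dimensional model sub-file to be rendered (names are made up).
DIRECTION_TO_SUBFILE = {
    "up": "roof_view.glb",
    "down": "floor_view.glb",
    "left": "west_wing.glb",
    "right": "east_wing.glb",
}


def select_second_subfile(movement, subfiles):
    """Pick the second to-be-rendered sub-file for the detected movement.

    Falls back to the first sub-file when the direction is unmapped or the
    mapped sub-file is not part of this model file.
    """
    name = DIRECTION_TO_SUBFILE.get(movement.get("direction"))
    return name if name in subfiles else subfiles[0]
```

A detected "up" movement then selects the sub-file predefined for that direction, while an unmapped direction degrades gracefully to the first sub-file.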
Step 406, rendering and displaying the three-dimensional model constructed by the second three-dimensional model subfile to be rendered.
In this embodiment, the executing body may render and display the three-dimensional model constructed by the second three-dimensional model subfile to be rendered determined in step 405.
As can be seen from fig. 4, compared to the embodiment corresponding to fig. 2, the flow 400 of the method for displaying information in the present embodiment highlights the step of determining the second three-dimensional model subfile to be rendered according to the detected movement information. Therefore, the scheme described in the embodiment can render and display the three-dimensional model constructed by the second three-dimensional model subfile to be rendered according to the movement information, so that the displayed content is richer.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for displaying information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for displaying information of the present embodiment includes: an identification unit 501, a sending unit 502, a first determination unit 503, and a first display unit 504. The identification unit 501 is configured to identify the collected image of the target page of the target reading material, and determine a target page identifier corresponding to the target page according to the identification result; the sending unit 502 is configured to send the target page identifier to a server, and receive a three-dimensional model file sent by the server for the target page identifier; the first determination unit 503 is configured to determine a first three-dimensional model subfile to be rendered from the three-dimensional model file; and the first display unit 504 is configured to render and display the three-dimensional model constructed by the first to-be-rendered three-dimensional model subfile.
In this embodiment, specific processes of the identifying unit 501, the sending unit 502, the first determining unit 503, and the first displaying unit 504 of the apparatus 500 for displaying information and technical effects brought by the specific processes can refer to related descriptions of step 201, step 202, step 203, and step 204 in the corresponding embodiment of fig. 2, which are not repeated herein.
In some optional implementations of this embodiment, the apparatus 500 further includes: a second determination unit (not shown in the figure), configured to determine, in response to determining that movement information is detected, a second three-dimensional model subfile to be rendered from the three-dimensional model file according to the movement information; and a second display unit (not shown in the figure), configured to render and display the three-dimensional model constructed by the second three-dimensional model subfile to be rendered.
In some optional implementations of this embodiment, the identifying unit 501 is further configured to: carrying out target recognition on the image, and determining a target identifier of at least one target in the image according to a target recognition result; and determining a target page identifier corresponding to the target page according to the target identifier of the at least one target.
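One way to realize this mapping from recognized targets to a page identifier is a lookup keyed by the set of target identifiers found in the image; the table entries here are hypothetical illustrations, not the actual recognition logic.

```python
# Hypothetical lookup from the set of target identifiers recognized in the
# image to the corresponding target page identifier.
TARGETS_TO_PAGE_ID = {
    frozenset({"lion", "tree"}): "page-07",
    frozenset({"ship", "wave", "moon"}): "page-12",
}


def page_id_from_targets(target_ids):
    """Return the page identifier matching the recognized targets, if any.

    Using a frozenset makes the match independent of the order in which
    targets were detected.
    """
    return TARGETS_TO_PAGE_ID.get(frozenset(target_ids))
```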
In some optional implementation manners of this embodiment, the image includes a two-dimensional code; and the above-mentioned identifying unit 501 is further configured to: and decoding the two-dimensional code in the image to obtain a target page identifier corresponding to the target page.
In some optional implementation manners of this embodiment, the three-dimensional model file is determined by the server side in the following manner: matching the target page identification with an identification contained in three-dimensional model file information in a preset three-dimensional model file information set, wherein the three-dimensional model file information comprises a three-dimensional model file and an identification corresponding to the three-dimensional model file; and taking the three-dimensional model file corresponding to the identifier matched with the identifier of the target page in the three-dimensional model file information set as the determined three-dimensional model file.
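The server-side matching just described amounts to a scan of the preset information set for a record whose identifier matches the target page identifier; the record layout and file names below are illustrative assumptions.

```python
# Illustrative preset three-dimensional model file information set: each
# record pairs an identifier with its three-dimensional model file.
MODEL_FILE_INFO_SET = [
    {"identifier": "A", "model_file": "page_a_models.bin"},
    {"identifier": "B", "model_file": "page_b_models.bin"},
]


def find_model_file(target_page_id, info_set=MODEL_FILE_INFO_SET):
    """Return the model file whose identifier matches the target page identifier."""
    for info in info_set:
        if info["identifier"] == target_page_id:
            return info["model_file"]
    return None  # no matching identifier in the set
```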
Referring now to fig. 6, shown is a schematic diagram of an electronic device (e.g., the terminal device in fig. 1) 600 suitable for implementing embodiments of the present disclosure. The terminal device shown in fig. 6 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage device 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of embodiments of the present disclosure.
It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: identify the collected image of the target page of the target reading material, and determine the target page identifier corresponding to the target page according to the identification result; send the target page identifier to a server, and receive a three-dimensional model file sent by the server for the target page identifier; determine a first three-dimensional model subfile to be rendered from the three-dimensional model file; and render and display the three-dimensional model constructed by the first three-dimensional model subfile to be rendered.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor including an identification unit, a sending unit, a first determination unit, and a first display unit. In some cases, the names of these units do not constitute a limitation on the units themselves; for example, the identification unit may also be described as a unit that identifies the collected image of the target page of the target reading material and determines the target page identifier corresponding to the target page according to the identification result.
The foregoing description is only a description of preferred embodiments of the disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above-mentioned technical features, and should also cover other technical solutions formed by any combination of the above-mentioned technical features or their equivalents without departing from the above inventive concept, for example, a technical solution formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (12)

1. A method for displaying information, comprising:
identifying the image of a target page of the collected target reading material, and determining a target page identifier corresponding to the target page according to an identification result;
sending the target page identification to a server, and receiving a three-dimensional model file sent by the server aiming at the target page identification;
determining a first three-dimensional model subfile to be rendered from the three-dimensional model file;
and rendering and displaying the three-dimensional model constructed by the first three-dimensional model subfile to be rendered.
2. The method of claim 1, wherein the method further comprises:
in response to determining that movement information is detected, determining a second three-dimensional model subfile to be rendered from the three-dimensional model file according to the movement information;
and rendering and displaying the three-dimensional model constructed by the second three-dimensional model subfile to be rendered.
3. The method of claim 1, wherein the recognizing the image of the target page of the collected target reading material and determining the target page identifier corresponding to the target page according to the recognition result comprises:
performing target recognition on the image, and determining a target identifier of at least one target in the image according to a target recognition result;
and determining a target page identifier corresponding to the target page according to the target identifier of the at least one target.
4. The method of claim 1, wherein the image comprises a two-dimensional code; and
the identifying the image of the target page of the collected target reading material and determining the target page identification corresponding to the target page according to the identification result comprises the following steps:
and decoding the two-dimensional code in the image to obtain a target page identifier corresponding to the target page.
5. The method of claim 1, wherein the three-dimensional model file is determined by the server by:
matching the target page identification with an identification contained in three-dimensional model file information in a preset three-dimensional model file information set, wherein the three-dimensional model file information comprises a three-dimensional model file and an identification corresponding to the three-dimensional model file;
and taking the three-dimensional model file corresponding to the identifier matched with the target page identifier in the three-dimensional model file information set as the determined three-dimensional model file.
6. An apparatus for displaying information, comprising:
the identification unit is configured to identify the acquired image of the target page of the target reading material and determine a target page identifier corresponding to the target page according to an identification result;
the sending unit is configured to send the target page identifier to a server and receive a three-dimensional model file sent by the server aiming at the target page identifier;
a first determination unit configured to determine a first three-dimensional model subfile to be rendered from the three-dimensional model file;
and the first display unit is configured to render and display the three-dimensional model constructed by the first three-dimensional model subfile to be rendered.
7. The apparatus of claim 6, wherein the apparatus further comprises:
a second determination unit configured to determine, in response to determining that movement information is detected, a second three-dimensional model subfile to be rendered from the three-dimensional model file according to the movement information;
and the second display unit is configured to render and display the three-dimensional model constructed by the second three-dimensional model subfile to be rendered.
8. The apparatus of claim 6, wherein the identification unit is further configured to:
performing target recognition on the image, and determining a target identifier of at least one target in the image according to a target recognition result;
and determining a target page identifier corresponding to the target page according to the target identifier of the at least one target.
9. The apparatus of claim 6, wherein the image comprises a two-dimensional code; and
the identification unit is further configured to:
and decoding the two-dimensional code in the image to obtain a target page identifier corresponding to the target page.
10. The apparatus of claim 6, wherein the three-dimensional model file is determined by the server by:
matching the target page identification with an identification contained in three-dimensional model file information in a preset three-dimensional model file information set, wherein the three-dimensional model file information comprises a three-dimensional model file and an identification corresponding to the three-dimensional model file;
and taking the three-dimensional model file corresponding to the identifier matched with the target page identifier in the three-dimensional model file information set as the determined three-dimensional model file.
11. A terminal, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
an image capturing device configured to capture an image;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-5.
CN201910358534.4A 2019-04-30 2019-04-30 Method and apparatus for displaying information Pending CN111783504A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910358534.4A CN111783504A (en) 2019-04-30 2019-04-30 Method and apparatus for displaying information


Publications (1)

Publication Number Publication Date
CN111783504A true CN111783504A (en) 2020-10-16

Family

ID=72754961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910358534.4A Pending CN111783504A (en) 2019-04-30 2019-04-30 Method and apparatus for displaying information

Country Status (1)

Country Link
CN (1) CN111783504A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109213728A (en) * 2017-06-29 2019-01-15 深圳市掌网科技股份有限公司 Cultural relic exhibition method and system based on augmented reality
CN109427096A (en) * 2017-08-29 2019-03-05 深圳市掌网科技股份有限公司 A kind of automatic guide method and system based on augmented reality
CN107590484A (en) * 2017-09-29 2018-01-16 百度在线网络技术(北京)有限公司 Method and apparatus for information to be presented
US20190102938A1 (en) * 2017-09-29 2019-04-04 Baidu Online Network Technology (Beijing) Co., Ltd Method and Apparatus for Presenting Information
CN109684578A (en) * 2018-12-28 2019-04-26 北京字节跳动网络技术有限公司 Method and apparatus for showing information


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination