CN111145244A - Room area acquisition method and related device - Google Patents



Publication number
CN111145244A
Authority
CN
China
Prior art keywords
target
management platform
area
target room
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911331102.0A
Other languages
Chinese (zh)
Inventor
蒋薇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Wanyi Digital Technology Co ltd
Original Assignee
Wanyi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wanyi Technology Co Ltd filed Critical Wanyi Technology Co Ltd
Priority to CN201911331102.0A priority Critical patent/CN111145244A/en
Publication of CN111145244A publication Critical patent/CN111145244A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/958Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application provides a room area obtaining method and a related device, wherein the method comprises the following steps: a webpage end management platform obtains space volume information of a target room; the webpage end management platform constructs the target room according to the space volume information; and the webpage end management platform obtains the area of the target room through a preset room area obtaining method, which can improve the convenience of room area acquisition.

Description

Room area acquisition method and related device
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a room area obtaining method and a related device.
Background
In general, a room space is generated by specifying and dividing room boundaries in a REVIT model, and space management on a PC or WEB platform is then performed based on a model containing room component information. That is, a component called a "space" must be established in the REVIT model, and a component ID is automatically formed, so that the component can be identified and viewed on the WEB management platform for area calculation and the like.
Disclosure of Invention
The embodiment of the application provides a room area obtaining method and a related device, which can improve the convenience of room area acquisition.
A first aspect of an embodiment of the present application provides a room area obtaining method, where the method includes:
the method comprises the steps that a webpage side management platform obtains information of space volume of a target room;
the webpage end management platform constructs the target room according to the information of the space volume;
and the webpage end management platform acquires the area of the target room through a preset room area acquisition method.
With reference to the first aspect, in a possible implementation manner of the first aspect, the acquiring, by the web page management platform, the area of the target room through a preset room area acquiring method includes:
the webpage end management platform acquires the projection of the target room on a preset plane;
and the webpage end management platform acquires the area of the target room according to the projection.
With reference to the first aspect, in a possible implementation manner of the first aspect, the acquiring, by the web page management platform according to the projection, an area of the target room includes:
the webpage end management platform connects the center of the projection with each vertex in the at least one vertex of the projection to obtain N sub-projections;
the webpage end management platform acquires the areas of the N sub-projections to obtain N area values;
and the webpage end management platform determines the area of the target room according to the N area values.
With reference to the first aspect, in a possible implementation manner of the first aspect, the acquiring, by the web page management platform, the area of the target room through a preset room area acquiring method includes:
the webpage end management platform acquires a target image of the target room through a virtual camera, wherein the target image comprises a ground image of the target room;
the webpage side management platform extracts the number of pixel points of the ground image;
and the webpage end management platform determines the area of the target room according to the number of the pixel points.
With reference to the first aspect, in a possible implementation manner of the first aspect, the method further includes:
the webpage end management platform acquires the position information of the target room;
the webpage end management platform determines a first value of the target room according to the position information;
the webpage end management platform acquires the target times of browsing the target room on the webpage end management platform;
the webpage end management platform determines a correction factor of the first value according to the target times;
the webpage end management platform determines a second value according to the correction factor and the first value;
the webpage end management platform determines a third value of the target room according to the area of the target room;
the webpage end management platform determines a target value of the target room according to the second value and the third value;
and the webpage end management platform displays the target value.
A second aspect of embodiments of the present application provides a room area acquiring apparatus, which includes a first acquiring unit, a constructing unit, and a second acquiring unit, wherein,
a first acquisition unit configured to acquire information on a spatial volume of a target room;
the construction unit is used for constructing the target room according to the information of the space volume;
and the second acquisition unit is used for acquiring the area of the target room by a preset room area acquisition method.
With reference to the second aspect, in a possible implementation manner of the second aspect, the second obtaining unit is configured to:
acquiring the projection of the target room on a preset plane;
and acquiring the area of the target room according to the projection.
With reference to the second aspect, in a possible implementation manner of the second aspect, in the acquiring the area of the target room according to the projection, the second acquiring unit is configured to:
connecting the center of the projection with each vertex of the at least one vertex of the projection to obtain N sub-projections;
acquiring the areas of the N sub-projections to obtain N area values;
and determining the area of the target room according to the N area values.
With reference to the second aspect, in a possible implementation manner of the second aspect, the second obtaining unit is configured to:
acquiring a target image of the target room through a virtual camera, wherein the target image comprises a ground image of the target room;
extracting the number of pixel points of the ground image;
and determining the area of the target room according to the number of the pixel points.
With reference to the second aspect, in a possible implementation manner of the second aspect, the apparatus is further configured to:
acquiring the position information of the target room;
determining a first value of the target room according to the position information;
acquiring the target times of browsing the target room on the webpage end management platform;
determining a correction factor for the first value according to the target times;
determining a second value according to the correction factor and the first value;
determining a third value of the target room according to the area of the target room;
determining a target value of the target room according to the second value and the third value;
and displaying the target value.
A third aspect of the embodiments of the present application provides a terminal, including a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, where the memory is used to store a computer program, and the computer program includes program instructions, and the processor is configured to call the program instructions to execute the step instructions in the first aspect of the embodiments of the present application.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform part or all of the steps as described in the first aspect of embodiments of the present application.
A fifth aspect of embodiments of the present application provides a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps as described in the first aspect of embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has at least the following beneficial effects:
the method comprises the steps that information of the space volume of a target room is obtained through a webpage end management platform, the webpage end management platform constructs the target room according to the information of the space volume, and the webpage end management platform obtains the area of the target room through a preset room area obtaining method.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of a web page management platform according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of a room area obtaining method according to an embodiment of the present application;
fig. 3 is a schematic flow chart of another room area obtaining method provided in the embodiment of the present application;
fig. 4 is a schematic flow chart of another room area obtaining method provided in the embodiment of the present application;
fig. 5 is a schematic flow chart of another room area obtaining method provided in the embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a room area obtaining apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to better understand the room area obtaining method provided in the embodiment of the present application, a brief description is first given of the web page end management platform to which the method is applied. Referring to fig. 1, fig. 1 is a schematic diagram of a web page management platform according to an embodiment of the present disclosure. As shown in fig. 1, the terminal device may be a personal computer, a tablet computer, a server, etc.; the web page management platform runs on the terminal device and may be configured to construct a corresponding room according to the space volume information of the room input by the user, calculate the area of the room, and finally output that area. The space volume information may be understood as boundary or structural information of a spatial range, such as information on coordinate points, line segments, and the like. Compared with the existing scheme, in which complicated steps are needed to operate the REVIT model to construct a room and acquire its area, this can improve the convenience of room area acquisition.
Referring to fig. 2, fig. 2 is a schematic flow chart illustrating a room area obtaining method according to an embodiment of the present disclosure. As shown in fig. 2, the room area obtaining method includes steps 201-203, which are as follows:
201. The webpage end management platform acquires the space volume information of the target room.
When the webpage end management platform acquires the space volume information, it may receive the information input by a target user, or derive it from the target user's gesture information. In the latter case, an image of the target user is collected and analyzed to obtain the gesture information, and the corresponding space volume information is then determined according to a mapping relationship between gesture information and space volume information; the mapping relationship may be set from empirical values or historical data. The image analysis may use existing image processing algorithms, such as the local binary pattern method.
202. The webpage end management platform constructs the target room according to the space volume information.
The webpage end management platform determines the specific position, spatial shape, and the like of the target room according to the space volume information, and then constructs the target room. The target room is constructed using the Web Graphics Library (WebGL), which provides a world coordinate system: a unified, three-dimensional coordinate system comprising an x axis, a y axis, and a z axis. The specific construction methods of the target room are as follows:
1) The webpage end management platform generates a surface from the points and line segments in the space volume information using drawing functions, and generates the space volume by combining elevations (the heights of the parts of the building). With the floor elevation determined, the Z axis on which the room boundary segments are drawn is fixed; points on the XY plane are then selected on the page of the webpage end management platform, pairs of points are connected into segments, and several segments are closed to form a surface. The space volume is then formed according to the floor elevation of the target room contained in the space volume information. This way of generating the space volume does not depend on any model component: the space can be drawn freely, in any shape, which improves the convenience of constructing the target room.
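As a minimal illustration of the drawing-based construction above, the following sketch closes ordered XY boundary points into a surface and extrudes it by the floor-to-ceiling height to form a space volume. The function names and sample room are hypothetical, and the shoelace formula stands in for whatever surface routine the platform actually uses:

```python
from typing import List, Tuple

def polygon_area(points: List[Tuple[float, float]]) -> float:
    """Area of the closed surface formed by ordered XY boundary points (shoelace formula)."""
    n = len(points)
    total = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # close the last segment back to the first point
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

def space_volume(points: List[Tuple[float, float]],
                 floor_elev: float, ceiling_elev: float) -> float:
    """Extrude the closed surface between two elevations to form a space volume."""
    return polygon_area(points) * (ceiling_elev - floor_elev)

# A freely drawn 4 m x 3 m room with floor at elevation 0 m and ceiling at 2.8 m
room = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
print(space_volume(room, 0.0, 2.8))  # ≈ 33.6
```

Because the surface is built purely from drawn points and closed segments, the sketch mirrors the component-independent, free-drawing mode described above.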
2) The boundary of the target room is obtained directly by identifying existing components in the REVIT model, such as walls and doors. First, the graphics engine in WebGL parses the REVIT model and extracts the geometric size, position, component ID, attributes, and other information of all components. Any component can then be selected on the webpage end management platform: selecting a wall yields its X and Y axis coordinates and elevation information, and selecting a door or window as a boundary yields its position and elevation information, from which the space boundary of the room is generated directly. The corresponding space can then be formed by combining the floor elevation of the building. This way of generating the space volume produces an accurate room space along the room's walls and has higher accuracy.
203. And the webpage end management platform acquires the area of the target room through a preset room area acquisition method.
The preset area obtaining method may obtain a projection of the target room and obtain the area according to the projection, or may obtain a target image of the target room through the virtual camera and obtain the area of the target room according to the target image.
In this example, the webpage end management platform obtains the space volume information of the target room, constructs the target room according to that information, and obtains the area of the target room through a preset room area obtaining method, so that the room area can be obtained directly on the platform.
In a possible embodiment, the webpage end management platform obtains the area of the target room through a preset room area obtaining method that includes steps A1-A2, as follows:
a1, the webpage end management platform obtains the projection of the target room on a preset plane;
a2, the webpage end management platform obtains the area of the target room according to the projection.
The preset plane may be the plane on which the floor of the target room is located. When the area of the target room is obtained from the projection, the area of the projection may be determined as the area of the target room.
In this example, the area of the projection of the target room on the plane where the ground is located is determined as the area of the target room, so that the area of the target room can be quickly determined, and the efficiency of area acquisition is improved.
In a possible embodiment, a method for acquiring the area of the target room by the web-side management platform according to the projection includes steps B1-B3, which are as follows:
b1, connecting the center of the projection with each vertex in at least one vertex of the projection by the webpage end management platform to obtain N sub-projections;
b2, the webpage end management platform acquires the areas of the N sub-projections to obtain N area values;
b3, the webpage end management platform determines the area of the target room according to the N area values.
A vertex of the projection can be understood as a point at a convex corner of the projection; for example, when the projection is an ellipse, the endpoints of the major axis and the minor axis may be determined as vertices. When the area of each sub-projection is obtained, a corresponding area calculation formula may be used; for example, if a sub-projection is fan-shaped, its area may be obtained with the sector area formula.
The sum of the N area values is determined as the area of the target room.
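Steps B1-B3 can be sketched as follows. The function name and sample projection are hypothetical, the centroid is used as the "center" of the projection, and straight-edged sub-projections (triangles) are assumed; a curved projection would use sector-style area formulas instead, as the text notes:

```python
from typing import List, Tuple

def area_from_sub_projections(vertices: List[Tuple[float, float]]) -> float:
    """B1: connect the projection's center to each vertex, yielding N sub-projections
    (here triangles); B2: compute the N area values; B3: sum them for the room area."""
    n = len(vertices)
    cx = sum(x for x, _ in vertices) / n  # centroid as the projection's center
    cy = sum(y for _, y in vertices) / n
    area_values = []
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Area of triangle (center, v_i, v_{i+1}) via the cross product
        area_values.append(abs((x1 - cx) * (y2 - cy) - (x2 - cx) * (y1 - cy)) / 2.0)
    return sum(area_values)

print(area_from_sub_projections([(0, 0), (2, 0), (2, 2), (0, 2)]))  # 4.0
```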
In this example, the projection is divided into N sub-projections, and the areas of the N sub-projections are respectively calculated to obtain the area of the target room, so that the efficiency of obtaining the area of the target room can be improved.
In a possible embodiment, the webpage end management platform obtains the area of the target room through another preset room area obtaining method, which includes steps C1-C3, as follows:
c1, the webpage end management platform acquires a target image of the target room through the virtual camera, wherein the target image comprises a ground image of the target room;
c2, the webpage end management platform extracts the number of pixel points of the ground image;
c3, the webpage end management platform determines the area of the target room according to the number of the pixel points.
Because the area covered by each pixel can be determined when the image is captured, the area of the target room can be determined from the number of pixels: multiplying the number of pixels by the per-pixel area gives a reference area of the target room, and multiplying the reference area by the scaling ratio between the image and the target room gives the area of the target room. Determining the area through pixels requires no projection of the target room onto the ground; the area is determined directly from the pixel count, which can improve both the efficiency and the accuracy of area acquisition.
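A minimal sketch of this pixel-counting estimate follows; all names and sample values are illustrative, and the per-pixel area and image-to-room scaling ratio are assumed to come from the virtual camera setup:

```python
def room_area_from_pixels(num_floor_pixels: int, pixel_area: float,
                          scale_ratio: float) -> float:
    """Pixel count x per-pixel area gives the reference area of the target room;
    multiplying by the image-to-room scaling ratio yields the room area."""
    reference_area = num_floor_pixels * pixel_area
    return reference_area * scale_ratio

# 120,000 floor pixels, each covering 1e-4 m^2 in the image, scale ratio 2.0
print(room_area_from_pixels(120_000, 1e-4, 2.0))  # 24.0
```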
In one possible embodiment, the value of the target room may also be evaluated, and one possible method of evaluating the value of the target room includes steps D1-D8, as follows:
d1, the webpage end management platform acquires the position information of the target room;
d2, determining the first value of the target room by the webpage end management platform according to the position information;
d3, the webpage end management platform acquires the target times of browsing the target room on the webpage end management platform;
d4, determining a correction factor of the first value by the webpage end management platform according to the target times;
d5, determining a second value by the webpage end management platform according to the correction factor and the first value;
d6, determining a third value of the target room by the webpage end management platform according to the area of the target room;
d7, determining the target value of the target room by the webpage end management platform according to the second value and the third value;
d8, displaying the target value by the webpage end management platform.
The position information includes position information of a building where the target room is located, floor information of the target room in the building, and the like, and the position information of the building where the target room is located can be understood as a city, a cell, and the like where the building is located.
Different locations have different values, from which a first value for the target room can be determined; for example, the value at the center of a city is higher than at its edge, and the value of the middle floors of a building is higher than that of the top and bottom floors.
The method for determining the correction factor according to the target browsing times may be that the higher the browsing times, the larger the correction factor, and the lower the browsing times, the smaller the correction factor. The browsing times can reflect the popularity of the target room among the users, the higher the browsing times, the higher the popularity, the higher the value, the lower the browsing times, and the lower the popularity, the lower the value.
When the third value is determined according to the area of the target room, the larger the area of the target room, the higher the third value; the smaller the area of the target room, the lower the third value.
The sum of the second value and the third value may be determined as the target value, or the average of the second value and the third value may be determined as the target value.
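Steps D1-D8 reduce to a small computation. The correction-factor mapping below is hypothetical (the text only requires that more browsing yields a larger factor), as are the sample values:

```python
def room_target_value(first_value: float, browse_count: int, third_value: float,
                      use_average: bool = False) -> float:
    """D4: derive a correction factor from the browse count (more views -> larger
    factor; illustrative linear mapping); D5: second value = factor x first value;
    D7: combine the second and third values by sum or by average."""
    correction_factor = 1.0 + 0.01 * browse_count
    second_value = correction_factor * first_value
    combined = second_value + third_value
    return combined / 2.0 if use_average else combined

print(room_target_value(100.0, 50, 80.0))        # sum: 230.0
print(room_target_value(100.0, 50, 80.0, True))  # average: 115.0
```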
When the target value is displayed, the target value may be displayed in the form of a window, or may be displayed in the form of voice, and the display is not particularly limited herein.
In this example, the value of the target room is determined by the position information and the area, and the value is displayed, so that the accuracy of value acquisition can be improved, and the convenience of a user in value evaluation can be improved by displaying the value.
In one possible embodiment, the web page management platform may further perform music recommendation, and one possible music recommendation method includes steps E1-E2, as follows:
e1, acquiring the working time of the target user, and if the working time is longer than the preset time, acquiring the action information and the expression information of the target user;
e2, determining music corresponding to the target user according to the action information and the expression information of the target user.
The method for obtaining the working duration of the target user may be to obtain the duration that the target user uses the web page end management platform, and determine the duration as the working duration of the target user, or may determine the working duration of the target user through other methods, which is only exemplified here.
When determining the music corresponding to the target user according to the action information and the expression information, determining a mood parameter of the target user according to the action information and the expression information, and determining the corresponding music according to the mood parameter.
When the action information and the expression information are acquired, an image of the target user can be acquired, and the image is analyzed to obtain the action information and the expression information, which specifically can be:
the images of the target user are collected through the multiple Internet of things collecting devices, and different Internet of things collecting devices can irradiate the target user by adopting light rays with different wave bands, so that multiple images of the target user under different wave bands are collected, and N target images are obtained. The target image may be a whole-body image of the user or the like.
When the N target images are subjected to feature extraction to obtain target feature data, feature extraction may be performed on each of the N target images to obtain the target feature data, and the feature extraction may be performed by using a computing device or by using a feature extraction algorithm, for example, a local binary method. The characteristic data may include gray values or the like.
The target images in different wave bands are subjected to feature extraction to obtain feature data, and the images in different wave bands can reflect information of users in different colors, so that the content of details can be reflected more accurately compared with the images acquired by a common camera, and the accuracy of feature data acquisition can be improved compared with the existing scheme.
After the target feature data is obtained, the action information and expression information are determined according to the target feature data. Specifically, this may be done as follows: determine the contour information of the target user according to the gray values, and determine the action information and expression information according to the contour information. Contour information may be understood as the contour information of the various body parts of the user, for example eye contour, face contour, mouth contour, hand contour, and leg contour information. Because the gray value changes abruptly at a contour, the contour information can be determined from abrupt changes in the gray value; of course, it may also be determined in other ways. When the expression information is determined according to the contour information, the expression information corresponding to the contour information may be determined according to a mapping relationship between contour information and expression information. The mapping relationship may be obtained by training on a computing device or by manual labeling, which is only an example and is not specifically limited here.
When the action information and the expression information are determined according to the contour data, the contour information of the target user in each image can be determined according to the feature data corresponding to the N images through the feature data corresponding to the N images, the contour data are analyzed according to the time sequence of the acquisition of the N target images, the variation of the contour information of each part of the target user is determined, and the action information and the expression information are determined according to the variation. If the variation of the contour information is zero, it indicates that the expression or the action of the corresponding part of the user is not changed, and the action information and the expression information corresponding to one of the target images are used as the action information and the expression information of the target user. And if the variation is not zero, determining a variation trend according to the variation, and determining action information and expression information according to the variation trend. Taking the motion information as an example, the trend of the change may be, for example, that the hand contour has a trend of moving downward, and it may be determined that the user hand motion is downward.
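The variation-trend analysis in the paragraph above can be sketched as follows. The input (the vertical position of, say, the hand contour across the N target images, in capture order) and the classification labels are illustrative assumptions:

```python
def classify_motion(contour_y_positions: list) -> str:
    """Analyze contour positions in capture order: zero total variation means the
    pose is unchanged; otherwise the sign of the trend gives the motion direction."""
    deltas = [b - a for a, b in zip(contour_y_positions, contour_y_positions[1:])]
    total_change = sum(deltas)
    if total_change == 0:
        return "static"  # take the pose from a single target image
    # Image y coordinates are assumed to grow downward
    return "down" if total_change > 0 else "up"

print(classify_motion([10, 14, 19, 25]))  # down
print(classify_motion([30, 30, 30]))      # static
```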
In this example, the music corresponding to the target user is determined by analyzing the action information and the expression information of the target user, so that the accuracy of acquiring the music corresponding to the target user is improved.
In one possible embodiment, the motion information includes body motion information and limb action information, and a possible method for determining music corresponding to the target user according to the motion information and the expression information includes steps F1-F6, which are as follows:
f1, acquiring body motion amplitude information of the target user according to the body motion information;
f2, determining a first music set according to the motion amplitude information;
f3, determining a first mood parameter set of the target user according to the limb action information;
f4, determining a second mood parameter set of the target user according to the expression information;
f5, determining the mood parameters of the target user according to the first mood parameter set and the second mood parameter set;
f6, determining music corresponding to the target user in the first music set according to the mood parameters.
The body motion information may be understood as motion information of parts other than the hands and the legs of the target user, for example the head, the waist, the shoulders, and the like, and the body motion amplitude information of the user may be understood as the motion amplitudes of the above-mentioned parts; for example, the larger the head swing, the larger the body motion amplitude, and the smaller the head swing, the smaller the body motion amplitude.
Different motion amplitude information may reflect current psychological information of the user. For example, the larger the swing amplitude of the head, the more pleasant the current mood of the user may be; the smaller the swing amplitude, the calmer or sadder the current mood of the user may be. The larger the motion amplitude information, the more cheerful the music in the corresponding first music set may be; the smaller the motion amplitude information, the slower the music in the corresponding first music set may be. Cheerful music may be, for example, music with a fast tempo, and quiet music may be, for example, music with a slow tempo.
The limb action information may be understood as information on the hand movements and leg movements of the user, which may reflect mood information of the user. For example, if the amplitude of the hand movements and the leg movements is small, the mood of the user may be calm and in a quiet state; if the amplitude of the hand movements and the leg movements is large, the mood of the user may be excited, and the like. A first mood parameter set of the target user can be determined according to the limb action information. Mood parameters may include quiet, excited, sad, and the like.
The expression information may include facial expression information, eye expression information, mouth expression information, and the like, and when the second mood parameter set is determined, the second mood parameter set may be determined according to the facial expression information, eye expression information, mouth expression information, and the like.
The intersection of the first mood parameter set and the second mood parameter set may be determined as the mood parameter of the target user. In this way, the mood parameters acquired from different angles can be balanced against each other, and the accuracy of acquiring the mood parameters of the target user is improved.
And determining the music corresponding to the mood parameter in the first music set as the music corresponding to the target user. Wherein, different mood parameters correspond to different music, and the corresponding relationship is set by experience values or historical data.
In this example, a first music set is determined according to the body motion amplitude information of the target user, a first mood parameter set is determined according to the limb action information, and a second mood parameter set is determined according to the expression information. Music corresponding to the target user is then determined from the first music set according to the target mood parameters determined from the first mood parameter set and the second mood parameter set. Since the mood parameters are determined from the motion information, the accuracy of music determination is improved.
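Steps F1-F6 can be sketched as follows. The music sets, mood labels, and amplitude threshold are all hypothetical illustrations; the patent only specifies the structure (amplitude selects a music set, the two mood sets are intersected, and the agreed mood selects the music):

```python
# Hypothetical music sets: large amplitude -> cheerful (fast) set,
# small amplitude -> quiet (slow) set. Values are illustrative only.
FAST_MUSIC = {"quiet": "upbeat ballad", "excited": "dance track"}
SLOW_MUSIC = {"quiet": "ambient piece", "excited": "soft rock", "sad": "nocturne"}

def select_music(amplitude, limb_moods, expression_moods):
    # F2: choose the first music set from the motion amplitude information
    music_set = FAST_MUSIC if amplitude > 0.5 else SLOW_MUSIC
    # F5: intersect the two mood parameter sets to balance both sources
    moods = set(limb_moods) & set(expression_moods)
    # F6: pick the music mapped to an agreed mood parameter
    for mood in moods:
        if mood in music_set:
            return music_set[mood]
    return None

print(select_music(0.2, {"sad", "quiet"}, {"sad"}))  # -> nocturne
```

A real system would derive the amplitude and mood sets from the image analysis described earlier; here they are passed in directly to keep the sketch self-contained.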
In a possible embodiment, the expression information includes eye expression information, facial expression information, and mouth expression information, and a possible method for determining the second mood parameter set of the target user according to the expression information includes steps G1-G7, which are as follows:
g1, determining first state information of the target user according to the eye expression information, determining second state information of the target user according to the facial expression information, and determining third state information of the target user according to the mouth expression information;
g2, determining first reference state information of the target user according to the first state information, the second state information and the third state information;
g3, acquiring event information of the target user in a preset time period, wherein the event information is information of an event processed by the target user;
g4, determining the environment information of the target user in a preset time period according to the event information, wherein the environment information comprises associated user information, and the associated user information is information of a user interacting with the target user;
g5, determining second reference state information of the target user according to the associated user information and the event information;
g6, determining target state information of the target user according to the first reference state information and the second reference state information;
g7, determining a second mood parameter set according to the mapping relation between the target state information and the mood parameters.
The first state information, the second state information and the third state information can be understood as psychological state information of the target user, and the state information can be represented by state information values. Different expression information of the user can reflect different psychological state information. The psychological state information can be understood as the complexity of the mood: the higher the value of the psychological state information, the more complex the mood of the user; the lower the value, the calmer the mood of the user. A calm mood can be understood as one without mood fluctuation, a complex mood as one with mood fluctuation, and mood fluctuation can be understood as a clear change in mood.
And determining the average value of the state information values corresponding to the first state information, the second state information and the third state information as first reference state information.
The event information of the target user in the preset time period can be obtained by acquiring a trip list of the target user from the terminal equipment and determining the event information according to the trip list. The environment information may be understood as information on the environment where the target user is located when the event is processed; the environment information includes associated user information, which is information of a user who has interacted with the target user. For example, if the target user's event is a speech, the associated user may be a person who raised a question while the target user gave the speech.
When the second reference state information is determined according to the associated user information and the event information, the method may specifically be as follows: the associated user information may include interaction information between the associated user and the target user, and the second reference state information may be determined according to the interaction information and the event information, where the interaction information includes session information. A range of the second reference state information is determined according to the event information; keywords are extracted from the interaction information to obtain target keywords; and the second reference state information is determined from the range of the second reference state information according to the target keywords. The keywords may be, for example, mood-related words, e.g., good, bad, unintelligible, ok, and so on. The second reference state information corresponding to the target keyword can be determined according to a mapping relationship between keywords and reference state information, and the reference state information can be represented by a reference state information value.
The average of the first reference state information value and the second reference state information value may be used as a target state information value, and the target state information value is used to represent the target state information.
The mapping relation between the target state information and the mood parameters is set through empirical values or historical data.
In this example, the first reference state information of the target user is determined through the eye expression information, the facial expression information and the mouth expression information, the second reference state information is determined according to the associated user and the event information, the target state information is determined according to the first reference state information and the second reference state information, and the second mood parameter set is determined according to the target state information, so that the accuracy of determining the second mood parameter set can be improved.
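Steps G2, G6 and G7 above can be sketched with the averaging just described; the state-value ranges and the mood sets they map to are illustrative assumptions, since the patent leaves the mapping to empirical values or historical data:

```python
def target_state(first_ref, second_ref):
    """G6: target state value is the average of the two reference values."""
    return (first_ref + second_ref) / 2

def mood_set(state_value):
    """G7: hypothetical mapping from state-value ranges to mood parameter sets."""
    if state_value < 30:        # low value: calm mood, no fluctuation
        return {"quiet"}
    elif state_value < 70:      # middle range
        return {"quiet", "excited"}
    return {"excited", "sad"}   # high value: complex mood

# G2: first reference value is the mean of the three state values
# (eye, face and mouth expression information, illustrative numbers).
first_ref = (40 + 50 + 60) / 3
print(sorted(mood_set(target_state(first_ref, 30))))  # ['excited', 'quiet']
```

The resulting set is the second mood parameter set that is later intersected with the one derived from the limb action information.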
Referring to fig. 3, fig. 3 is a schematic flow chart of another room area obtaining method according to an embodiment of the present application. As shown in fig. 3, the method includes steps 301-304, which are as follows:
301. the method comprises the steps that a webpage side management platform obtains information of space volume of a target room;
302. the webpage side management platform constructs a target room according to the information of the space volume;
303. the method comprises the steps that a webpage end management platform obtains projection of a target room on a preset plane;
304. and the webpage end management platform acquires the area of the target room according to the projection.
In this example, the area of the projection of the target room on the plane where the ground is located is determined as the area of the target room, so that the area of the target room can be quickly determined, and the efficiency of area acquisition is improved.
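One way to realize the projection-based computation (connecting the center of the projection to each vertex and summing the N sub-projection areas, as in the later embodiments) is a triangle-fan decomposition. This sketch assumes a convex projection polygon, so that the centroid lies inside it; the vertex coordinates are illustrative:

```python
def triangle_area(a, b, c):
    """Area of one sub-projection (triangle) via the 2-D cross product."""
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2

def room_area_from_projection(vertices):
    """Split the projection into N sub-projections by connecting its
    center to every vertex, then sum the N area values."""
    n = len(vertices)
    cx = sum(x for x, _ in vertices) / n
    cy = sum(y for _, y in vertices) / n
    return sum(triangle_area((cx, cy), vertices[i], vertices[(i + 1) % n])
               for i in range(n))

# A 4 m x 3 m rectangular room projected onto the ground plane:
print(room_area_from_projection([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0
```

For non-convex rooms the same vertex list could instead be fed to the shoelace formula directly; the fan decomposition mirrors the "N sub-projections" wording of the method.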
Referring to fig. 4, fig. 4 is a schematic flow chart of another room area obtaining method according to an embodiment of the present application. As shown in fig. 4, the method includes steps 401-405, which are as follows:
401. the method comprises the steps that a webpage side management platform obtains information of space volume of a target room;
402. the webpage side management platform constructs a target room according to the information of the space volume;
403. the method comprises the steps that a webpage end management platform obtains a target image of a target room through a virtual camera, wherein the target image comprises a ground image of the target room;
404. the webpage side management platform extracts the number of pixel points of the ground image;
405. and the webpage end management platform determines the area of the target room according to the number of the pixel points.
In this example, the target image of the target room is acquired through the virtual camera, the number of pixel points of the ground image is extracted, and the area of the target room is determined according to the number of pixel points, so that the efficiency of obtaining the area of the target room can be improved.
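A minimal sketch of the pixel-counting steps 403-405, assuming a hypothetical binary floor mask and a hypothetical per-pixel scale for the virtual camera (the patent specifies neither):

```python
import numpy as np

# Hypothetical rendered target image: 1 marks a ground-image pixel, 0 anything else.
target_image = np.zeros((100, 100), dtype=np.uint8)
target_image[20:80, 10:90] = 1            # floor region of 60 x 80 pixels

# Assumed scale of the virtual camera: each pixel covers 0.05 m x 0.05 m.
AREA_PER_PIXEL = 0.05 * 0.05

num_floor_pixels = int(target_image.sum())  # step 404: extract the pixel count
room_area = num_floor_pixels * AREA_PER_PIXEL  # step 405: pixels -> square meters
print(num_floor_pixels, room_area)          # 4800 pixels -> 12.0 m^2
```

In practice the floor mask would come from segmenting the rendered image of the constructed room; the per-pixel scale follows from the virtual camera's pose and intrinsics, which are assumed known here.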
Referring to fig. 5, fig. 5 is a schematic flow chart of another room area obtaining method according to an embodiment of the present application. As shown in fig. 5, the method includes steps 501-511, which are as follows:
501. the method comprises the steps that a webpage side management platform obtains information of space volume of a target room;
502. the webpage side management platform constructs a target room according to the information of the space volume;
503. the webpage side management platform acquires the area of a target room through a preset room area acquisition method;
504. the method comprises the steps that a webpage end management platform obtains position information of a target room;
505. the webpage end management platform determines a first value of the target room according to the position information;
506. the method comprises the steps that a webpage end management platform obtains target times of browsing a target room on the webpage end management platform;
507. the webpage end management platform determines a correction factor of the first value according to the target times;
508. the webpage end management platform determines a second value according to the correction factor and the first value;
509. the webpage end management platform determines a third value of the target room according to the area of the target room;
510. the webpage end management platform determines the target value of the target room according to the second value and the third value;
511. and the webpage end management platform displays the target value.
In this example, the value of the target room is determined by the position information and the area, and the value is displayed, so that the accuracy of value acquisition can be improved, and the convenience of a user in value evaluation can be improved by displaying the value.
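Steps 504-510 can be sketched as follows. The location table, the correction-factor formula, and the per-square-meter coefficient are all illustrative assumptions, since the patent leaves these mappings to experience values or historical data:

```python
# Hypothetical first value by location (step 505); illustrative numbers only.
LOCATION_VALUE = {"downtown": 50000, "suburb": 20000}

def target_value(location, browse_count, area):
    first = LOCATION_VALUE[location]                     # step 505: value from position
    factor = 1 + min(browse_count, 1000) / 1000 * 0.2    # step 507: more views -> higher factor
    second = first * factor                              # step 508: corrected value
    third = area * 8000                                  # step 509: assumed value per m^2
    return second + third                                # step 510: combine both values

print(target_value("suburb", 500, 12.0))  # 118000.0
```

The returned target value is what step 511 would display on the webpage end management platform.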
In accordance with the foregoing embodiments, please refer to fig. 6, which is a schematic structural diagram of a terminal according to an embodiment of the present application. As shown in the drawing, the terminal includes a processor, an input device, an output device, and a memory, which are connected to each other. The memory is used to store a computer program comprising program instructions, and the processor is configured to call the program instructions; the program includes instructions for performing the following steps:
acquiring information of the space volume of a target room;
constructing the target room according to the information of the space volume;
and acquiring the area of the target room by a preset room area acquisition method.
The above description has introduced the solutions of the embodiments of the present application mainly from the perspective of the method-side implementation process. It is understood that, in order to implement the above-described functions, the terminal includes corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative units and algorithm steps described in connection with the embodiments provided herein can be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the terminal may be divided into the functional units according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In accordance with the above, please refer to fig. 7, fig. 7 is a schematic structural diagram of a room area obtaining apparatus according to an embodiment of the present application. As shown in fig. 7, the apparatus comprises a first acquisition unit 701, a construction unit 702 and a second acquisition unit 703, wherein,
a first acquisition unit 701 for acquiring information of the space volume of the target room;
a construction unit 702, configured to construct the target room according to the information of the space volume;
a second obtaining unit 703 is configured to obtain the area of the target room by using a preset room area obtaining method.
In a possible implementation manner, the second obtaining unit 703 is configured to:
acquiring the projection of the target room on a preset plane;
and acquiring the area of the target room according to the projection.
In a possible implementation manner, in acquiring the area of the target room according to the projection, the second obtaining unit 703 is configured to:
connecting the center of the projection with each vertex of the at least one vertex of the projection to obtain N sub-projections;
acquiring the areas of the N sub-projections to obtain N area values;
and determining the area of the target room according to the N area values.
In a possible implementation manner, the second obtaining unit 703 is configured to:
acquiring a target image of the target room through a virtual camera, wherein the target image comprises a ground image of the target room;
extracting the number of pixel points of the ground image;
and determining the area of the target room according to the number of the pixel points.
In one possible implementation, the apparatus is further configured to:
acquiring the position information of the target room;
determining a first value of the target room according to the position information;
acquiring the target times of browsing the target room on the webpage end management platform;
determining a correction factor for the first value according to the target times;
determining a second value according to the correction factor and the first value;
determining a third value of the target room according to the area of the target room;
determining a target value of the target room according to the second value and the third value;
and displaying the target value.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute part or all of the steps of any one of the room area acquisition methods as described in the above method embodiments.
Embodiments of the present application also provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program causes a computer to execute part or all of the steps of any one of the room area obtaining methods described in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash memory disks, read-only memory, random access memory, magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A room area acquisition method, characterized in that the method comprises:
the method comprises the steps that a webpage side management platform obtains information of space volume of a target room;
the webpage end management platform constructs the target room according to the information of the space volume;
and the webpage end management platform acquires the area of the target room through a preset room area acquisition method.
2. The method according to claim 1, wherein the web page management platform obtains the area of the target room through a preset room area obtaining method, which includes:
the webpage end management platform acquires the projection of the target room on a preset plane;
and the webpage end management platform acquires the area of the target room according to the projection.
3. The method of claim 2, wherein the web-side management platform obtains the area of the target room according to the projection, and comprises:
the webpage end management platform connects the center of the projection with each vertex in the at least one vertex of the projection to obtain N sub-projections;
the webpage end management platform acquires the areas of the N sub-projections to obtain N area values;
and the webpage end management platform determines the area of the target room according to the N area values.
4. The method according to claim 1, wherein the web page management platform obtains the area of the target room through a preset room area obtaining method, which includes:
the webpage end management platform acquires a target image of the target room through a virtual camera, wherein the target image comprises a ground image of the target room;
the webpage side management platform extracts the number of pixel points of the ground image;
and the webpage end management platform determines the area of the target room according to the number of the pixel points.
5. The method according to any one of claims 1 to 4, further comprising:
the webpage end management platform acquires the position information of the target room;
the webpage end management platform determines a first value of the target room according to the position information;
the webpage end management platform acquires the target times of browsing the target room on the webpage end management platform;
the webpage end management platform determines a correction factor of the first value according to the target times;
the webpage end management platform determines a second value according to the correction factor and the first value;
the webpage end management platform determines a third value of the target room according to the area of the target room;
the webpage end management platform determines a target value of the target room according to the second value and the third value;
and the webpage end management platform displays the target value.
6. A room area acquisition apparatus, characterized in that the apparatus comprises:
a first acquisition unit configured to acquire information on a spatial volume of a target room;
the construction unit is used for constructing the target room according to the information of the space volume;
and the second acquisition unit is used for acquiring the area of the target room by a preset room area acquisition method.
7. The apparatus of claim 6, wherein the second acquisition unit is configured to:
acquiring the projection of the target room on a preset plane;
and acquiring the area of the target room according to the projection.
8. The apparatus of claim 7, wherein, in acquiring the area of the target room according to the projection, the second acquisition unit is configured to:
connecting the center of the projection with each vertex of the at least one vertex of the projection to obtain N sub-projections;
acquiring the areas of the N sub-projections to obtain N area values;
and determining the area of the target room according to the N area values.
9. A terminal, comprising a processor, an input device, an output device, and a memory, the processor, the input device, the output device, and the memory being interconnected, wherein the memory is configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method of any of claims 1-5.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-5.
CN201911331102.0A 2019-12-20 2019-12-20 Room area acquisition method and related device Pending CN111145244A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911331102.0A CN111145244A (en) 2019-12-20 2019-12-20 Room area acquisition method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911331102.0A CN111145244A (en) 2019-12-20 2019-12-20 Room area acquisition method and related device

Publications (1)

Publication Number Publication Date
CN111145244A true CN111145244A (en) 2020-05-12

Family

ID=70519313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911331102.0A Pending CN111145244A (en) 2019-12-20 2019-12-20 Room area acquisition method and related device

Country Status (1)

Country Link
CN (1) CN111145244A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114329715A (en) * 2021-12-29 2022-04-12 深圳须弥云图空间科技有限公司 Area boundary line generation method, area boundary line generation device, medium, and electronic device
CN115187346A (en) * 2022-09-14 2022-10-14 深圳市明源云空间电子商务有限公司 Rental control graph display method and device, electronic equipment and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1407313A (en) * 2001-09-05 2003-04-02 蔡南贤 Automatic room measuring method and apparatus
CN102567706A (en) * 2010-12-24 2012-07-11 汉王科技股份有限公司 Human face identification device and method
CN108679788A (en) * 2018-03-12 2018-10-19 珠海格力电器股份有限公司 A kind of temperature correction of air-conditioning, device, storage medium and air-conditioning
CN109895781A (en) * 2019-03-18 2019-06-18 百度在线网络技术(北京)有限公司 Method for controlling a vehicle and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1407313A (en) * 2001-09-05 2003-04-02 蔡南贤 Automatic room measuring method and apparatus
CN102567706A (en) * 2010-12-24 2012-07-11 汉王科技股份有限公司 Human face identification device and method
CN108679788A (en) * 2018-03-12 2018-10-19 珠海格力电器股份有限公司 A kind of temperature correction of air-conditioning, device, storage medium and air-conditioning
CN109895781A (en) * 2019-03-18 2019-06-18 百度在线网络技术(北京)有限公司 Method for controlling a vehicle and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114329715A (en) * 2021-12-29 2022-04-12 深圳须弥云图空间科技有限公司 Area boundary line generation method, area boundary line generation device, medium, and electronic device
CN115187346A (en) * 2022-09-14 2022-10-14 深圳市明源云空间电子商务有限公司 Rental control graph display method and device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN110531860B (en) Animation image driving method and device based on artificial intelligence
CN112379812B (en) Simulation 3D digital human interaction method and device, electronic equipment and storage medium
US9613261B2 (en) Inferring spatial object descriptions from spatial gestures
CN110163054B (en) Method and device for generating human face three-dimensional image
US10860838B1 (en) Universal facial expression translation and character rendering system
US12002160B2 (en) Avatar generation method, apparatus and device, and medium
CN110874557A (en) Video generation method and device for voice-driven virtual human face
EP3262616A1 (en) Molding and anchoring physically constrained virtual environments to real-world environments
CN109064387A (en) Image special effect generation method, device and electronic equipment
CN109144252B (en) Object determination method, device, equipment and storage medium
CN115244495A (en) Real-time styling for virtual environment motion
CN111445561A (en) Virtual object processing method, device, equipment and storage medium
CN111145244A (en) Room area acquisition method and related device
CN112990043A (en) Service interaction method and device, electronic equipment and storage medium
Arbeláez et al. Crowdsourcing Augmented Reality Environment (CARE) for aesthetic evaluation of products in conceptual stage
US20230290132A1 (en) Object recognition neural network training using multiple data sources
CN111159609A (en) Attribute information modification method and related device
CN114333018A (en) Shaping information recommendation method and device and electronic equipment
CN116204167B (en) Method and system for realizing full-flow visual editing Virtual Reality (VR)
Arora Creative visual expression in immersive 3D environments
CN116048273A (en) Virtual object simulation method and related equipment
CN116363251A (en) Image generation method, device and equipment
CN117978953A (en) Network conference interaction method, device, computer equipment and storage medium
CN116414236A (en) Reality scene virtualization method and device
CN115147520A (en) Method and equipment for driving virtual character based on visual semantics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230626

Address after: A601, Zhongke Naneng Building, No. 06 Yuexing 6th Road, Gaoxin District Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518051

Applicant after: Shenzhen Wanyi Digital Technology Co.,Ltd.

Address before: 519000 room 105-24914, No.6 Baohua Road, Hengqin New District, Zhuhai City, Guangdong Province (centralized office area)

Applicant before: WANYI TECHNOLOGY Co.,Ltd.