CN114782677A - Image processing method, image processing apparatus, computer device, storage medium, and computer program


Info

Publication number: CN114782677A
Application number: CN202210373306.6A
Authority: CN (China)
Prior art keywords: target, virtual, picture, frame, labeling
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 黄金福, 余佳鑫, 陆庭锴, 王新军, 陈彦明, 马鹏宇
Current Assignee: Hitachi Building Technology Guangzhou Co Ltd
Original Assignee: Hitachi Building Technology Guangzhou Co Ltd
Application filed by Hitachi Building Technology Guangzhou Co Ltd
Priority to CN202210373306.6A
Publication of CN114782677A
Priority to PCT/CN2023/071161 (published as WO2023197705A1)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation


Abstract

The application relates to an image processing method, an image processing apparatus, a computer device, a storage medium, and a computer program. The method comprises: acquiring a virtual building model corresponding to a target space, the virtual building model being used for simulating the target space; acquiring a virtual picture obtained by observing the virtual building model with a virtual camera, and determining an object labeling area in the virtual picture to obtain an object labeling frame, where the pose information of the virtual camera is the same as that of a target camera in the target space; and acquiring a real picture obtained by shooting the target space with the target camera, and synthesizing the real picture and the object labeling frame to obtain a target picture, the target picture being used for labeling objects in the target space. With this method, the labeling area can be adjusted in time according to the actual situation, and both the accuracy and the efficiency of object labeling are improved.

Description

Image processing method, image processing apparatus, computer device, storage medium, and computer program
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method, an image processing apparatus, a computer device, a storage medium, and a computer program.
Background
With the development of computer technology, elevator passenger flow detection has emerged, making it convenient to monitor passenger flow in the elevator waiting hall area. The statistics are generally compiled manually from server data: before they can be computed, information such as entrances, exits, and detection ranges must be labeled by hand across a large amount of server data.
In the conventional technology, a calculation area is determined by entering coordinates, or a calculation region is drawn on the server data. When the data volume is large and there are many entrances and exits, the manual workload of entering coordinates or drawing lines is heavy, so the working efficiency is poor and the accuracy is low.
Disclosure of Invention
In view of the foregoing, it is necessary to provide an image processing method, an image processing apparatus, a computer device, a computer-readable storage medium, and a computer program product that address the above technical problems.
In a first aspect, the present application provides an image processing method. The method comprises the following steps: acquiring a virtual building model corresponding to a target space, wherein the virtual building model is used for simulating the target space; acquiring a virtual picture obtained by observing the virtual building model by using a virtual camera, determining an object labeling area in the virtual picture, and obtaining an object labeling frame, wherein the pose information of the virtual camera is the same as that of the target camera in the target space; and acquiring a real picture obtained by shooting the target space through the target camera, and synthesizing the real picture and the object labeling frame to obtain a target picture, wherein the target picture is used for labeling the object in the target space.
In one embodiment, the obtaining a virtual picture obtained by observing the virtual building model with a virtual camera, and determining an object labeling area in the virtual picture to obtain an object labeling frame includes: determining a detection function corresponding to the object labeling area based on attribute information corresponding to each structural element contained in the virtual building model; dividing the object labeling area according to the detection function corresponding to the object labeling area to obtain a first object labeling frame and a second object labeling frame; and obtaining the object labeling frame based on the first object labeling frame and the second object labeling frame.
In one embodiment, after obtaining the object labeling frame based on the first object labeling frame and the second object labeling frame, the method further includes: counting, in the target picture, the number of objects entering and exiting the target elevator through the first object labeling frame according to the attribute and shape corresponding to the first object labeling frame; calculating the passenger flow volume and the number of internal objects of the target elevator based on the number of objects; counting, in the target picture, the number of elevator waiting objects in the second object labeling frame and the average distance between two adjacent elevator waiting objects according to the attribute and shape corresponding to the second object labeling frame; and obtaining the predicted load of the target elevator based on the number of elevator waiting objects and the average distance.
In one embodiment, after acquiring the real picture obtained by shooting the target space with the target camera and synthesizing the real picture and the object labeling frame to obtain the target picture, the method further includes: judging, according to the real-time change of the elevator waiting objects in the target picture, whether any elevator waiting object in the target picture exceeds the object labeling frame; and if an elevator waiting object in the target picture exceeds the boundary corresponding to the object labeling frame, adjusting the size of the object labeling frame according to the extent by which the elevator waiting object exceeds it.
In one embodiment, adjusting the size of the object labeling frame according to the extent by which the elevator waiting object exceeds it includes: determining the variation corresponding to the boundary of the object labeling frame according to the height corresponding to each elevator waiting object in the real picture; and adjusting the boundary of the object labeling frame based on that variation to obtain an adjusted target object labeling frame, where the adjusted target object labeling frame contains every elevator waiting object.
In one embodiment, the method further comprises: acquiring virtual building information corresponding to the target space, wherein the virtual building information is attribute information contained in each structural element in a virtual model corresponding to the target space; and inputting the virtual building information corresponding to the target space into virtual building construction software to obtain a virtual building model corresponding to the target space.
In a second aspect, the application also provides an image processing device. The device comprises: the virtual building model acquisition module is used for acquiring a virtual building model corresponding to a target space, wherein the virtual building model is used for simulating the target space; an object labeling frame obtaining module, configured to obtain a virtual picture obtained by observing the virtual building model with a virtual camera, and determine an object labeling area in the virtual picture to obtain an object labeling frame, where pose information of the virtual camera is the same as pose information of a target camera in the target space; and the target picture obtaining module is used for obtaining a real picture obtained by shooting the target space through the target camera, and synthesizing the real picture and the object marking frame to obtain a target picture, wherein the target picture is used for marking the object in the target space.
In one embodiment, the object labeling box obtaining module is configured to determine a detection function corresponding to the object labeling area based on attribute information corresponding to each structural element included in the virtual building model; dividing the object labeling area according to the detection function corresponding to the object labeling area to obtain a first object labeling frame and a second object labeling frame; and obtaining the object labeling frame based on the first object labeling frame and the second object labeling frame.
In one embodiment, the object labeling frame function module is configured to count the number of objects passing through the first object labeling frame and entering and exiting the target elevator in the target picture according to the attribute and the shape corresponding to the first object labeling frame; calculating the passenger flow volume and the internal object number of the target elevator based on the object number; counting the number of the elevator waiting objects in the second object marking frame and the average distance between two adjacent elevator waiting objects in the target picture according to the corresponding attribute and shape of the second object marking frame; and obtaining the predicted load of the target elevator based on the number of the elevator waiting objects and the average distance.
In one embodiment, the object labeling frame adjusting module is configured to judge, according to the real-time change of the elevator waiting objects in the target picture, whether any elevator waiting object in the target picture exceeds the object labeling frame; and if an elevator waiting object in the target picture exceeds the boundary corresponding to the object labeling frame, adjust the size of the object labeling frame according to the extent by which the elevator waiting object exceeds it.
In one embodiment, the object labeling box adjusting module is configured to determine a variation corresponding to a boundary of the object labeling box according to a height corresponding to each of the elevator waiting objects in the real picture; and adjusting the boundary of the object marking frame based on the variable quantity corresponding to the boundary of the object marking frame to obtain an adjusted target object marking frame, wherein the adjusted target object marking frame enables all the elevator waiting objects to be positioned in the adjusted target object marking frame.
In an embodiment, the virtual building model obtaining module is configured to obtain virtual building information corresponding to the target space, where the virtual building information is attribute information included in each structural element in a virtual model corresponding to the target space; and inputting the virtual building information corresponding to the target space into virtual building construction software to obtain a virtual building model corresponding to the target space.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the following steps when executing the computer program: acquiring a virtual building model corresponding to a target space, wherein the virtual building model is used for simulating the target space; acquiring a virtual picture obtained by observing the virtual building model by using a virtual camera, determining an object labeling area in the virtual picture, and obtaining an object labeling frame, wherein the pose information of the virtual camera is the same as that of the target camera in the target space; and acquiring a real picture obtained by shooting the target space through the target camera, and synthesizing the real picture and the object labeling frame to obtain a target picture, wherein the target picture is used for labeling the object in the target space.
In a fourth aspect, the present application further provides a computer-readable storage medium. A computer program is stored on the computer-readable storage medium which, when executed by a processor, carries out the following steps: acquiring a virtual building model corresponding to a target space, wherein the virtual building model is used for simulating the target space; acquiring a virtual picture obtained by observing the virtual building model by using a virtual camera, determining an object labeling area in the virtual picture, and obtaining an object labeling frame, wherein the pose information of the virtual camera is the same as that of the target camera in the target space; and acquiring a real picture obtained by shooting the target space through the target camera, and synthesizing the real picture and the object labeling frame to obtain a target picture, wherein the target picture is used for labeling the object in the target space.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, performs the following steps: acquiring a virtual building model corresponding to a target space, wherein the virtual building model is used for simulating the target space; acquiring a virtual picture obtained by observing the virtual building model by using a virtual camera, determining an object labeling area in the virtual picture, and obtaining an object labeling frame, wherein the pose information of the virtual camera is the same as that of a target camera in the target space; and acquiring a real picture obtained by shooting the target space through the target camera, and synthesizing the real picture and the object labeling frame to obtain a target picture, wherein the target picture is used for labeling the object in the target space.
According to the image processing method, the image processing device, the computer equipment, the storage medium and the computer program product, the virtual building model corresponding to the target space is obtained, and the virtual building model is a virtual model used for simulating the target space; acquiring a virtual picture obtained by observing the virtual building model by using a virtual camera, determining an object marking area in the virtual picture, and obtaining an object marking frame, wherein the pose information of the virtual camera is the same as that of the target camera in a target space; and acquiring a real picture obtained by shooting a target space through a target camera, and synthesizing the real picture and the object labeling frame to obtain a target picture, wherein the target picture is used for labeling the object in the target space.
By acquiring the virtual building model corresponding to the target space, where the virtual building model is an idealized model established according to the target space, information about the target space can be obtained quickly, and the simulated effect of construction on the target space can also be obtained through the virtual building model. A virtual picture is obtained by observing the virtual building model with a virtual camera, an object labeling area is determined in the virtual picture, and an object labeling frame is obtained, where the pose information of the virtual camera is the same as that of the target camera in the target space; the computer can therefore automatically generate the area to be labeled according to the specific situation of the target space and then generate the object labeling frame, and both can be adjusted at any time. A real picture is obtained by shooting the target space with the target camera and is synthesized with the object labeling frame to obtain a target picture used for labeling objects in the target space; the area to be labeled can thus be applied to the actual scene in combination with the real picture, and the computer performs labeling and adjustment in real time according to actual conditions.
The virtual building model is used for obtaining a virtual picture, then the object marking frame for calculation is obtained through the virtual picture, and the target picture for marking the object is generated by combining the real picture and the object marking frame, so that the calculation area can be automatically set, the workload of manual intervention is reduced, the working efficiency is improved, meanwhile, the functional characteristics of the virtual building model are fully utilized, and the utilization rate of the system is improved.
Drawings
FIG. 1 is a diagram of an exemplary environment in which a method for image processing is implemented;
FIG. 2 is a flow diagram that illustrates a method for image processing according to one embodiment;
FIG. 3 is a schematic flow chart of image processing steps in one embodiment;
FIG. 4 is a flow diagram illustrating a method for implementing functionality in one embodiment;
FIG. 5 is a flowchart illustrating a method for adjusting an object label box according to an embodiment;
FIG. 6 is a flowchart illustrating a method for adjusting an object label box according to another embodiment;
FIG. 7 is a schematic flow chart diagram illustrating a method for obtaining a virtual building model in one embodiment;
FIG. 8 is a diagram illustrating video detection areas in one embodiment;
FIG. 9 is a schematic illustration of calculation regions A and B in one embodiment;
FIG. 10 is a diagram illustrating the merging of calculation region B in one embodiment;
FIG. 11 is an expanded view of the calculation region B in one embodiment;
FIG. 12 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
FIG. 13 is an internal structure diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more clearly understood, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the application and do not limit it.
The image processing method provided by the embodiments of the application can be applied in the application environment shown in fig. 1. The terminal 102 acquires data; in response to an instruction from the terminal 102, the server 104 receives the data from the terminal 102, performs calculations on it, and transmits the result back to the terminal 102, which displays it. The terminal 102 communicates with the server 104 via a network. A data storage system may store the data that the server 104 needs to process; it may be integrated on the server 104 or placed on the cloud or another network server. The server 104 acquires from the terminal 102 a virtual building model corresponding to the target space, where the virtual building model is a virtual model for simulating the target space; acquires a virtual picture obtained by observing the virtual building model with a virtual camera, and determines an object labeling area in the virtual picture to obtain an object labeling frame, where the pose information of the virtual camera is the same as that of the target camera in the target space; and acquires a real picture obtained by shooting the target space with the target camera, and synthesizes the real picture and the object labeling frame to obtain a target picture used for labeling objects in the target space. The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, internet-of-things device, or portable wearable device; the internet-of-things devices may be smart speakers, smart televisions, smart air conditioners, smart in-vehicle devices, and the like, and the portable wearable devices may be smart watches, smart bracelets, head-mounted devices, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, an image processing method is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
step 202, a virtual building model corresponding to the target space is obtained.
The target space may be the space defined by the specific parameters of a building together with the objects to be labeled inside it. Since the building parameters that form the space are fixed, changes in the target space are generally determined by the objects to be labeled in the building, together with their positions and postures relative to the parameters of the space.
The virtual building model may be a model established with virtual building construction software according to the target space. Because generating the model requires specific parameters for every element, the virtual building model contains the attribute information corresponding to each structural element. A virtual camera in the virtual building model can observe the model from multiple angles, and different observation angles yield different virtual pictures.
Specifically, the server responds to the instruction of the terminal and acquires from the terminal the virtual building model corresponding to the target space with adjusted parameters. Because different physical buildings contain different elements, virtual building models and physical buildings correspond one to one. The virtual building model corresponding to the target space contains the attribute information of each structural element in the building, so the position and size of each area can be determined, such as the coordinate position of each elevator door; the length, width, and height of each elevator door; the position of each entrance and exit; the size of each elevator hall; the decoration style; the installation position of the camera; and the position and length of dangerous areas. Each virtual building model is obtained by inputting the structural elements of the building into virtual building construction software, which may be Building Information Modeling (BIM) software, and the resulting model can be observed by a virtual camera at any position and posture to obtain pictures corresponding to different angles.
For example, the server responds to the instruction of the terminal, and obtains a virtual building model a corresponding to the target space a with the parameters adjusted from the terminal, wherein the model includes all structural elements corresponding to the target space a, and records attributes corresponding to all structural elements, such as the coordinate position of each elevator door in the target space a, the length, width and height of each elevator door, the position of each entrance/exit, the size of each elevator hall, the decoration style, the installation position of a camera, the position and length of a dangerous area, and the like. The virtual building model a is constructed by adopting a building information model, and virtual cameras in different directions such as a top view, a bottom view, a left view, a right view, a section view and the like are selected for observing the constructed virtual building model.
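By way of illustration, the attribute information described above can be sketched as a small data structure. This is a minimal sketch under assumed names (StructuralElement, VirtualBuildingModel, and the example fields are hypothetical, not taken from the patent or from any BIM software API); it only shows how per-element attributes such as positions and dimensions might be stored so that labeling areas can later be derived from them.

    from dataclasses import dataclass, field

    @dataclass
    class StructuralElement:
        """One element of the virtual building model (hypothetical schema)."""
        name: str        # e.g. "elevator_door_1"
        kind: str        # e.g. "elevator_door", "entrance", "camera"
        position: tuple  # (x, y, z) coordinates in the model, in metres
        size: tuple      # (length, width, height), in metres
        extra: dict = field(default_factory=dict)  # decoration style, hazard info, ...

    @dataclass
    class VirtualBuildingModel:
        """Virtual building model corresponding to one target space."""
        space_id: str
        elements: list

        def find(self, kind: str):
            """Return all structural elements of a given kind."""
            return [e for e in self.elements if e.kind == kind]

    # Example: model "a" for target space "A" with one elevator door and one camera.
    model_a = VirtualBuildingModel(
        space_id="A",
        elements=[
            StructuralElement("elevator_door_1", "elevator_door",
                              position=(2.0, 0.0, 0.0), size=(1.2, 0.2, 2.2)),
            StructuralElement("cam_1", "camera",
                              position=(4.0, 3.0, 2.8), size=(0.1, 0.1, 0.1),
                              extra={"yaw_deg": 210, "pitch_deg": -35}),
        ],
    )
    print([e.name for e in model_a.find("elevator_door")])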
And 204, acquiring a virtual picture obtained by observing the virtual building model by using a virtual camera, and determining an object labeling area in the virtual picture to obtain an object labeling frame.
The virtual camera may be a camera virtualized inside the virtual building model for observing the specific situation within it; it can observe the virtual building model from any position, angle, and posture. Because the virtual camera can be adjusted at will, a corresponding virtual camera can be found for a target camera at any position, and since the virtual camera matches the pose of the target camera, the pictures captured by the two can be combined by synthesis.
The target camera may be a real camera installed to observe the specific situation in the target space; it can be installed at any position, angle, and posture to observe conditions in the target space.
The virtual picture may be the picture obtained by shooting the virtual building model with the virtual camera. Because the virtual camera can observe the virtual building model from any position, angle, and posture, every pose of the virtual camera has a corresponding virtual picture. When the pose information of the virtual camera is the same as that of the target camera in the target space, the resulting virtual picture and the real picture can be superimposed by synthesis.
The object labeling area may be the area in the virtual picture in which objects need to be labeled. Since the virtual picture is obtained through the virtual building model according to the size of the target space, the object labeling area can be at most as large as the virtual picture. Based on the attribute information of each structural element in the virtual building model, the object labeling area can be determined accurately, and attribute data such as its length and width can be read.
The object labeling box may be a box for labeling defined according to attribute data or a detection function of the object labeling area. The object marking area is provided with at least one object marking frame, each object marking frame can be endowed with different functions, when the object marking frames with different functions are overlapped, the overlapped parts respectively realize different functions, for example, the object marking frame A is used for counting the number of people entering and exiting, and the object marking frame B is used for marking personnel, so that the overlapped parts of the object marking frame A and the object marking frame B respectively execute respective tasks without mutual interference; when the object labeling frames with the same function are overlapped, the two object labeling frames are combined, and the range of the object labeling frames is increased.
Specifically, the position and posture information of the virtual camera is set to be the same as that of the real target camera, and besides having identical pose information, the areas covered by the virtual camera and the target camera must both cover the object labeling frame, so that the virtual picture the virtual camera takes of the virtual building model can be superimposed on the real picture taken by the real target camera. The virtual camera shoots the scene in the virtual building model from the determined position and posture, yielding the virtual picture corresponding to that pose; the area that needs to be labeled is then extracted according to the attributes of each building element in the virtual picture, giving the object labeling area. Different object labeling frames can realize different functions or the same function; when object labeling frames overlap, the overlapping part is handled according to the functions involved. The object labeling frames are not fixed and can be adjusted appropriately according to the target objects in the picture, but they never exceed the object labeling area.
For example, the position a and posture information b of the virtual camera are determined according to the position A and posture B of the target camera in the target space, and the areas covered by both cameras contain the object labeling frame. The virtual camera, fixed at position a with posture b, virtually shoots the virtual building model; after shooting, boundary extraction is performed on each building construction element of the virtual picture and adjusted according to business requirements to obtain the corresponding virtual picture p. The determined object labeling area q is divided, based on its detection functions and attributes, into two object labeling frames, object labeling frame 1 and object labeling frame 2: frame 1 counts the number of people entering and exiting the area, while frame 2 counts the number of people in the area, the average distance between two adjacent people, and the like.
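The pose matching and coverage requirement above can be sketched as follows. The Pose record and the rectangle convention are illustrative assumptions, not the patent's data model; the point is only that the virtual camera copies the target camera's pose exactly, and that the labeling frame must lie inside what both cameras see.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Pose:
        """Camera position (metres) and orientation (degrees); illustrative only."""
        x: float
        y: float
        z: float
        yaw: float
        pitch: float
        roll: float

    def virtual_pose_for(target_pose: Pose) -> Pose:
        # The virtual camera simply copies the real target camera's pose, so the
        # virtual picture and the real picture share the same viewpoint.
        return Pose(**vars(target_pose))

    def covers(image_size, frame_rect) -> bool:
        """Check that the labeling frame (x1, y1, x2, y2), expressed in image
        coordinates, lies inside the picture (width, height)."""
        w, h = image_size
        x1, y1, x2, y2 = frame_rect
        return 0 <= x1 <= x2 <= w and 0 <= y1 <= y2 <= h

    pose_A_B = Pose(x=4.0, y=3.0, z=2.8, yaw=210.0, pitch=-35.0, roll=0.0)
    pose_a_b = virtual_pose_for(pose_A_B)
    assert pose_a_b == pose_A_B                       # identical pose information
    print(covers((640, 480), (100, 380, 300, 470)))   # True: frame inside the view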
And step 206, acquiring a real picture obtained by shooting the target space through the target camera, and synthesizing the real picture and the object labeling frame to obtain a target picture.
The real picture may be the picture obtained by shooting the target space with the target camera. Since the target camera can be installed at any position, angle, and posture, the target space can be observed from any viewpoint, and the target camera at each position has a corresponding real picture. When the pose information of the virtual camera is the same as that of the target camera in the target space, the virtual picture and the real picture can be superimposed by synthesis.
Synthesis may mean combining, by computer image processing, the real picture captured by the target camera at a fixed position and posture in the target space with the object labeling frame derived from the virtual picture captured by the virtual camera at the corresponding position and posture in the virtual building model.
The target picture may be the picture obtained by synthesizing the object labeling frame, derived from the virtual camera's picture of the virtual building model, with the real picture taken of the target space by the target camera. Since the target camera can be installed at any position, angle, and posture to obtain different real pictures, and every pose of the virtual camera has a corresponding virtual picture, the target picture likewise depends on the position and posture information of the target camera and the virtual camera.
The labeling may be performed by labeling objects in the target space, each labeled object corresponds to a unique identifier, and a position coordinate of each labeled object is given according to a coordinate system in the target space, so that the position of each object after movement can be determined when each object moves subsequently.
Specifically, the target space is photographed with the real target camera in the target space; since the target camera is fixed, the real picture is also fixed. The real picture shot by the target camera and the object labeling frame obtained from the virtual picture shot by the virtual camera with the same position and posture information are superimposed, with the real picture as the bottom layer and the object labeling frame as the upper layer, producing a usable target picture. The target picture is used to label each object in the target space: a one-to-one identifier is established for each object, and coordinates are assigned to each object according to the coordinate system preset in the target space. Since the position of each object changes in real time, its coordinates are adjusted in real time; the resulting coordinate points are dynamic, and the trajectory of each object can be drawn from the changes of its coordinate points, yielding information such as its displacement and speed.
For example, the target camera shoots the situation in the target space to obtain a real picture q, and the object labeling frame 1 and object labeling frame 2 corresponding to the virtual picture p are synthesized by superposition, that is, the real picture forms the bottom layer and the two labeling frames form the upper layer, giving the target picture. The target picture is used to label the objects in the target space; for example, identifiers 1-20 are assigned to objects 1-20, together with coordinate points 1-20. Because the positions of objects 1-20 change in real time, the coordinate data of points 1-20 also change in real time, and parameters such as the trajectories, displacements, and speeds of objects 1-20 can be obtained from the changes in the coordinate data.
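As one plausible implementation of this superposition, the sketch below uses OpenCV to draw the labeling frames as a translucent upper layer over the real picture. The patent does not prescribe a library; the array stand-in for real picture q and the polygon coordinates are placeholders.

    import cv2
    import numpy as np

    def composite(real_frame: np.ndarray, label_frames) -> np.ndarray:
        """Overlay object labeling frames (pixel-coordinate polygons) on the real
        picture: real picture as bottom layer, frames as a translucent top layer."""
        overlay = real_frame.copy()
        for poly in label_frames:
            pts = np.array(poly, dtype=np.int32)
            cv2.fillPoly(overlay, [pts], color=(0, 255, 0))          # labeled area
            cv2.polylines(real_frame, [pts], True, (0, 200, 0), 2)   # frame boundary
        return cv2.addWeighted(overlay, 0.25, real_frame, 0.75, 0)

    # Stand-in for real picture q (in practice, a frame from the target camera).
    real_q = np.zeros((480, 640, 3), dtype=np.uint8)
    frame_1 = [(100, 380), (300, 380), (300, 470), (100, 470)]  # in/out counting
    frame_2 = [(80, 250), (520, 250), (520, 420), (80, 420)]    # waiting-area stats
    target_picture = composite(real_q, [frame_1, frame_2])
    cv2.imwrite("target_picture.png", target_picture)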
In the image processing method, a virtual building model corresponding to a target space is obtained, wherein the virtual building model is a virtual model used for simulating the target space; acquiring a virtual picture obtained by observing the virtual building model by using a virtual camera, determining an object marking area in the virtual picture, and obtaining an object marking frame, wherein the pose information of the virtual camera is the same as that of the target camera in a target space; and acquiring a real picture obtained by shooting a target space through a target camera, and synthesizing the real picture and the object labeling frame to obtain a target picture, wherein the target picture is used for labeling the object in the target space.
By acquiring the virtual building model corresponding to the target space, where the virtual building model is an idealized model established according to the target space, information about the target space can be obtained quickly, and the simulated effect of construction on the target space can also be obtained through the virtual building model. A virtual picture is obtained by observing the virtual building model with a virtual camera, an object labeling area is determined in the virtual picture, and an object labeling frame is obtained, where the pose information of the virtual camera is the same as that of the target camera in the target space; the computer can therefore automatically generate the area to be labeled according to the specific situation of the target space and then generate the object labeling frame, and both can be adjusted at any time. A real picture is obtained by shooting the target space with the target camera and is synthesized with the object labeling frame to obtain a target picture used for labeling objects in the target space; the area to be labeled can thus be applied to the actual scene in combination with the real picture, and the computer performs labeling and adjustment in real time according to actual conditions.
The virtual building model is used for obtaining a virtual picture, then the object marking frame for calculation is obtained through the virtual picture, and the target picture for marking the object is generated by combining the real picture and the object marking frame, so that the calculation area can be automatically set, the workload of manual intervention is reduced, the working efficiency is improved, meanwhile, the functional characteristics of the virtual building model are fully utilized, and the utilization rate of the system is improved.
In an embodiment, as shown in fig. 3, acquiring a virtual picture obtained by observing a virtual building model with a virtual camera, and determining an object labeling area in the virtual picture to obtain an object labeling frame includes:
step 302, determining a detection function corresponding to the object labeling area based on the attribute information corresponding to each structural element contained in the virtual building model.
The structural elements may be the elements that make up the virtual building model, such as the elevators, windows, and beams in the virtual building space. They are the same as the elements of the target space corresponding to the virtual building model: a door in the virtual building model corresponds to a door in the target space, and the decoration style in the virtual building model corresponds to the decoration style in the target space.
The attribute information may be inherent information included in each structural element in the target space and the virtual building model, and the information includes the type, performance, raw material, and the like of the structural element, such as the coordinate position of each elevator door, the length, width, and height of each elevator door, the position of each doorway, the size of each elevator hall, the decoration style, the installation position of the camera, the position and length of the dangerous area, and the like, and the attribute information is favorable for understanding the situation of the virtual building model or the target space.
The detection function may be the content to be detected for different areas of the virtual building model and the target space. The detection functions for these areas are determined according to the actual situation and may be the same or different for different areas; the shape of each area, which may be arbitrary, is determined by its detection function.
Specifically, virtual building construction software is used for simulating the condition of the target space to obtain a virtual building model, and based on attribute information corresponding to each structural element which is contained in the virtual building model and is the same as the target space, a detection function which needs to be realized by a corresponding object labeling area in the virtual building model, namely the detection function which needs to be realized by the target space is determined.
For example, a virtual building model a corresponding to a target space A is constructed with virtual building software. Based on the attribute information corresponding to each structural element in virtual building model a, detection functions 1-10 of an object labeling area B in the model are determined; that is, the detection functions of the object labeling area corresponding to the target space are likewise determined to be detection functions 1-10.
And 304, dividing the object labeling area according to the detection function corresponding to the object labeling area to obtain a first object labeling frame and a second object labeling frame.
Division may mean splitting the whole into several parts; for example, the elevator hall area is divided into a target elevator area and a people-flow detection area according to the detection function.
The first object labeling frame may be a frame for labeling corresponding to a first object labeling area defined by dividing the object labeling area according to the detection function. The first object labeling frame has a corresponding and unique detection function, if the detection function is changed, the object labeling area is divided again, and the size and the detection content of the first object labeling frame are changed correspondingly.
The second object labeling frame may be a frame for labeling corresponding to the second object labeling area defined by dividing the object labeling area according to the detection function. And the second object labeling frame has a corresponding and unique detection function, if the detection function is changed, the object labeling area needs to be divided again, and the size and the detection content of the second object labeling frame are changed correspondingly.
Specifically, the object labeling area is divided according to the detection function that the object labeling area needs to implement, and a first object labeling frame corresponding to the first object labeling area and a second object labeling frame corresponding to the second object labeling area are obtained after the division. Since the detection function is divided, there is a possibility that the range included in the first object labeling frame overlaps with the range corresponding to the second object labeling frame, and if this occurs, the overlapping area simultaneously realizes two detection functions. Since the object labeling boxes obtained by dividing the object labeling area may be discontinuous, the object labeling boxes with the same function may overlap each other, and if this occurs, the two object labeling boxes with the same detection function are merged to form one continuous object labeling box.
For example, the current elevator waiting hall area is an object labeling area that needs to be divided according to its detection functions. After division, a first object labeling frame (the passenger flow calculation area) corresponding to the first object labeling area and a second object labeling frame (the elevator waiting object counting area) corresponding to the second object labeling area are obtained, and any overlapping parts of the divided labeling frames are resolved.
And step 306, obtaining an object labeling frame based on the first object labeling frame and the second object labeling frame.
Specifically, all the first object labeling frames and all the second object labeling frames which are defined based on the requirement of using the detection function are combined, and the overlapped parts are processed after combination, so that the object labeling frames formed by all the combined sub-object labeling frames are obtained.
For example, a passenger flow volume calculation labeling frame and a waiting object statistics labeling frame are obtained according to the function division of the elevator waiting hall area, sub-object labeling frames corresponding to the two functions are combined to form a set formed by the sub-object labeling frames, and the overlapped part in the set is processed to obtain the object labeling frame corresponding to the elevator waiting hall area.
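The overlap handling can be sketched with axis-aligned rectangles for simplicity (the patent allows arbitrary shapes, so this representation is an illustrative assumption): frames sharing a detection function are merged into one continuous frame, while overlapping frames with different functions both remain and run in parallel.

    def overlaps(a, b):
        """a, b: (x1, y1, x2, y2) rectangles in pixel coordinates."""
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    def merge(a, b):
        """Union bounding box of two overlapping same-function frames."""
        return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

    def resolve(frames):
        """frames: list of (rect, function) pairs. Merge overlapping frames that
        share a function; frames with different functions coexist untouched."""
        out = []
        for rect, fn in frames:
            for i, (r2, fn2) in enumerate(out):
                if fn == fn2 and overlaps(rect, r2):
                    out[i] = (merge(rect, r2), fn)
                    break
            else:
                out.append((rect, fn))
        return out

    frames = [((100, 400, 300, 500), "flow_count"),
              ((250, 380, 420, 500), "flow_count"),   # overlaps the first: merged
              ((80, 250, 520, 420), "waiting_stats")] # different function: kept
    print(resolve(frames))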
In this embodiment, the target object area is divided according to the detection functions the object labeling area needs to carry, yielding sub-object labeling areas with different functions. This allows more functions to be attached to the target object area and several different detection functions to run in parallel, which improves the computing capacity and efficiency for the target object area, makes full use of computer resources, lets the real-time state be known, and allows corresponding adjustments to be made in time.
In an embodiment, as shown in fig. 4, after obtaining the object labeling frame based on the first object labeling frame and the second object labeling frame, the method further includes:
and 402, counting the number of objects entering and exiting the target elevator through the first object labeling frame in the target picture according to the corresponding attribute and shape of the first object labeling frame.
The number of objects may be the number of objects using the target elevator, that is, those exiting the target elevator into the hall area or entering the target elevator from the hall area. The objects may be people, animals, or articles; generally only people or animals are counted, and articles are included according to the actual situation. For example, a mobile phone does not count as an object, whereas a wheelchair does.
Specifically, since the first object labeling frame determined by the detection function has corresponding and unique attributes and shape, the objects entering the first object labeling frame are labeled, and each object receives a corresponding identifier. Objects entering and exiting the target elevator through the first object labeling frame are counted via these identifiers: a person getting off the elevator triggers a subtract-1 operation, and a person entering the elevator from the waiting hall triggers an add-1 operation. After counting, the total number of objects entering and exiting the target elevator through the first object labeling frame is obtained.
For example, the passenger flow calculation labeling frame (the first object labeling frame), located at the elevator door in the elevator waiting hall area, is rectangular, and its detection function is to count the number of people entering and exiting the elevator. Each person passing through the labeling frame receives an identifier, counting is performed by identifier, and after counting, the number of people entering and exiting the elevator through the frame is obtained.
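The add-1/subtract-1 bookkeeping can be sketched as follows. The direction convention (smaller y means the elevator side of the door frame) and the simple inside() test are illustrative assumptions; in practice the identifiers and positions would come from an object tracker.

    def inside(point, rect):
        """Is an (x, y) point inside the first labeling frame (x1, y1, x2, y2)?"""
        x, y = point
        return rect[0] <= x <= rect[2] and rect[1] <= y <= rect[3]

    def count_transitions(tracks, door_rect, elevator_side_y):
        """tracks: {object_id: [(x, y), ...]} positions over time. Crossing the
        door frame toward smaller y counts as entering the car (+1); the
        opposite crossing counts as exiting (-1)."""
        in_car, entered, exited = 0, 0, 0
        for pts in tracks.values():
            for prev, cur in zip(pts, pts[1:]):
                if inside(prev, door_rect) or inside(cur, door_rect):
                    if prev[1] > elevator_side_y >= cur[1]:    # moved into the car
                        in_car += 1; entered += 1
                    elif cur[1] > elevator_side_y >= prev[1]:  # moved out of it
                        in_car -= 1; exited += 1
        return in_car, entered, exited

    tracks = {1: [(200, 470), (200, 430), (200, 390)],   # walks into the elevator
              2: [(250, 390), (250, 440), (250, 480)]}   # walks out into the hall
    print(count_transitions(tracks, door_rect=(100, 380, 300, 500),
                            elevator_side_y=410))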
Step 404, calculating the passenger flow volume and the number of internal objects of the target elevator based on the number of objects.
The passenger flow of the target elevator may be the number of objects that ride the target elevator during a single run or over a period of time. The flow can be counted per run, as single-run passenger flow, or over a period; for example, the passenger flow within one hour is the hourly passenger flow.
The number of internal objects may be the number of riding objects in the elevator car while the target elevator runs. Since what is counted is the car's occupancy, the count applies to a single run, that is, the number of objects inside the car after the car door closes and before it opens again.
Specifically, the number of objects entering and exiting the elevator is calculated according to the counted number, and the number of elevator taking objects (passenger flow) of the target elevator in a unit time period and the number of corresponding objects in the elevator car are calculated through a preset calculation algorithm aiming at the type of the elevator.
For example, the number N of objects entering and exiting the target elevator is obtained by counting according to the first object labeling box c, and the number is input into a calculation algorithm corresponding to the elevator type for calculation, so that the passenger flow volume m corresponding to the target elevator and the number N of objects in the elevator car are calculated.
And 406, counting, in the target picture, the number of elevator waiting objects in the second object labeling frame and the average distance between two adjacent elevator waiting objects according to the attributes and shape corresponding to the second object labeling frame.
The average distance may be obtained by summing the distances between all pairs of adjacent elevator waiting objects in the second object labeling frame and then taking the mean. For example, for adjacent elevator waiting objects 1-10 with actual distances 1-10, the average is obtained by summing distances 1-10 and dividing by their number.
Specifically, according to the attributes and shapes in the second object labeling frame obtained by the detection function, the elevator waiting objects appearing in the target picture and appearing in the second object labeling frame at the same time are labeled, an identifier is obtained for each object, the number of the elevator waiting objects is counted according to the identifier, the distance between every two adjacent objects is obtained at the same time, the distance is summed and then averaged, and the average distance between every two adjacent elevator waiting objects is obtained.
For example, the detection function of the second object labeling frame in the elevator waiting hall is elevator waiting object statistics, so that the second object labeling frame has fixed attributes and shapes. Marking the elevator waiting objects 1-20 which are simultaneously appeared in the target picture and the second object marking frame, giving marks 1-20, and counting the number of the elevator waiting objects based on the marks to obtain statistical data; and (3) summing every two adjacent distances between the objects 1-20 during statistics, and averaging the summed values according to the number of the elevator waiting objects to obtain the average distance X between the two adjacent elevator waiting objects.
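One way to read "the average distance between two adjacent elevator waiting objects" is the mean distance from each waiting object to its nearest neighbour, which can be sketched directly; that nearest-neighbour interpretation is our assumption, since the patent does not define adjacency precisely.

    import math

    def average_adjacent_distance(positions):
        """positions: [(x, y), ...] of waiting objects inside the second labeling
        frame. Average, over all objects, of the distance to the nearest other."""
        if len(positions) < 2:
            return 0.0
        nearest = [min(math.dist(p, q) for j, q in enumerate(positions) if j != i)
                   for i, p in enumerate(positions)]
        return sum(nearest) / len(nearest)

    waiting = [(120, 300), (160, 310), (300, 305), (330, 340)]
    print(len(waiting), average_adjacent_distance(waiting))  # count and spacing X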
And step 408, obtaining the predicted load of the target elevator based on the number of elevator waiting objects and the average distance.
The predicted load may be the estimated number of elevator waiting objects the target elevator will carry per run or over a period of time. Each elevator type has its own calculation formula, which must be selected according to the elevator type before calculation.
Specifically, the number of elevator waiting objects counted in the second object labeling frame and the average distance between adjacent waiting objects are input into an estimation formula to calculate the predicted load of the target elevator, giving the predicted load corresponding to the elevator model. Because the predicted load is an estimate, there is an error: when the error is smaller than a preset threshold the estimate can be used, and when it is larger, the parameters of the second object labeling frame and of the estimation formula are adjusted so that the error falls within the preset threshold. At the same time, the predicted load cannot exceed the maximum load of the elevator, since the number of waiting objects may be larger than the maximum load.
For example, according to a second object labeling frame (elevator waiting object statistics) in the elevator waiting hall area, the number of the elevator waiting objects is 20 and the average distance X between two adjacent elevator waiting objects are obtained through statistics, the two values are substituted into a calculation formula to calculate the predicted load of the target elevator, the predicted load value is obtained, and whether the error falls within a preset threshold value or not is estimated.
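A minimal sketch of the estimation step, with the two safeguards the text mentions: the prediction is clamped to the elevator's rated capacity, and a prediction whose error against the later observed load exceeds the preset threshold triggers recalibration. The linear density term is an illustrative assumption, not the patent's formula, which depends on the elevator type.

    def predicted_load(waiting_count, avg_distance, capacity, density_coeff=1.0):
        """Estimate how many waiting objects the target elevator will carry.
        Assumed model: denser queues (small avg_distance) board more completely."""
        willingness = min(1.0, density_coeff / max(avg_distance, 1e-6))
        estimate = waiting_count * willingness
        return min(estimate, capacity)   # never predict beyond rated capacity

    def needs_recalibration(predicted, observed, threshold=0.2):
        """Relative error above the preset threshold means the labeling-frame and
        formula parameters should be adjusted."""
        return abs(predicted - observed) / max(observed, 1) > threshold

    p = predicted_load(waiting_count=20, avg_distance=1.5, capacity=13)
    print(p, needs_recalibration(p, observed=12))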
In this embodiment, the detection functions corresponding to the distinct first and second object labeling frames make it possible to obtain the passenger flow volume, the number of internal objects, and the predicted load of the target elevator directly; data can then be collected and processed according to the elevator's passenger flow and load to find a better elevator operation scheme.
In an embodiment, as shown in fig. 5, after acquiring a real picture obtained by shooting a target space with a target camera, and synthesizing the real picture and an object labeling frame to obtain a target picture, the method further includes:
And 502, judging, according to the real-time change of the elevator waiting objects in the target picture, whether any elevator waiting object in the target picture exceeds the object labeling frame.
The real-time change condition of the object may be a real-time dynamic condition of each elevator waiting object in the target picture, for example, the position coordinates, the number of people, the distance and the like of each elevator waiting object change along with the change of time, so that the data acquisition is also required to be real-time.
Specifically, the terminal 102 continuously shoots the object labeling area in the target space and can reflect the real situation in that area in real time, so the target picture formed by superimposing the real picture and the object labeling frame also changes in real time with the actual situation. Because the situation of the elevator waiting objects in the waiting hall area changes constantly, a waiting object may exceed the initial range of the object labeling frame. The server 104 therefore judges, from the pictures obtained by the terminal 102, whether any elevator waiting object in the target picture exceeds the object labeling frame: if not, no action is taken and the original shape is kept; if so, an adjustment is made.
For example, a camera in the target space A shoots a real picture of the elevator waiting hall area, and the real picture is superimposed with the object labeling frame corresponding to that area to obtain the target picture for the waiting hall area. Whether the elevator waiting objects 1-20 in the target picture exceed the original object labeling frame B is judged from their real-time changes, such as real-time position, real-time moving speed, and real-time moving direction; if not, the original frame is kept, and if so, the boundary of the original frame is adjusted.
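A minimal sketch of this boundary check, assuming the waiting objects are detected as axis-aligned bounding boxes and the object labeling frame is itself a rectangle (the patent allows irregular frames, which would need a polygon containment test instead):

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in pixels

def objects_exceeding_frame(frame: Box, objects: List[Box]) -> List[int]:
    """Return the indices of waiting objects whose bounding boxes are not
    fully contained in the object labeling frame."""
    fx0, fy0, fx1, fy1 = frame
    return [i for i, (x0, y0, x1, y1) in enumerate(objects)
            if x0 < fx0 or y0 < fy0 or x1 > fx1 or y1 > fy1]

# e.g. objects_exceeding_frame((0, 0, 100, 100),
#                              [(10, 10, 20, 20), (90, 50, 110, 70)])
# -> [1]: the second object crosses the right edge of the frame
```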
Step 504: if an elevator waiting object in the target picture exceeds the boundary corresponding to the object labeling frame, adjusting the size of the object labeling frame according to the amplitude by which the waiting object exceeds it, to obtain an adjusted target object labeling frame.
The boundary corresponding to the object labeling frame is the boundary that separates the area covered by the frame from the area outside it. This boundary can change dynamically: once an elevator waiting object exceeds the original boundary, an adjustment is made.
The exceeding amplitude of an elevator waiting object is the extent to which the object falls outside the object labeling frame. It arises because the range of the frame is not large enough to cover all the waiting objects, and it is the main basis for adjusting the frame boundary.
The adjusted target object labeling frame is the new frame obtained after adjusting the original frame according to the exceeding amplitude of the waiting objects. Its covered area is larger than that of the original frame, and its shape is not necessarily regular; since the adjusted frame is irregular in most cases, a calculus-based method is needed to compute its area.
Specifically, if an elevator waiting object in the target picture exceeds the boundary of the original object labeling frame, the server calculates, from the data of the target picture, the amplitude by which the object exceeds the boundary, and adjusts the size of the frame accordingly to obtain the adjusted target object labeling frame. Because every part of each waiting object must fall inside the frame, a margin is always kept between the frame boundary and the object boundary; that is, both before and after adjustment there is a minimum gap between the frame boundary and every part of the waiting objects. The adjusted frame need not keep the original shape: it can be adjusted according to the data of the waiting objects and may become an irregular polygon.
For example, if 5 of the elevator waiting objects 1-25 shown in the target picture for the waiting hall area extend beyond the corresponding boundary of object labeling frame B, the server calculates the exceeding amplitude from the pictures shot by the terminal and adjusts frame B according to the result to obtain an adjusted frame B'.
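Continuing the rectangular simplification above, a sketch of the size adjustment might look like this; the margin value is an arbitrary placeholder standing in for the minimum gap the embodiment requires between the frame and the objects:

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def expand_frame(frame: Box, objects: List[Box], margin: float = 5.0) -> Box:
    """Grow the labeling frame just enough to cover every waiting object,
    keeping a margin between the frame and the outermost object so that
    no part of any object touches the boundary."""
    fx0, fy0, fx1, fy1 = frame
    for x0, y0, x1, y1 in objects:
        fx0, fy0 = min(fx0, x0 - margin), min(fy0, y0 - margin)
        fx1, fy1 = max(fx1, x1 + margin), max(fy1, y1 + margin)
    return (fx0, fy0, fx1, fy1)
```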
In this embodiment, whether a target object exceeds the object labeling frame is judged from the real-time changes of the elevator waiting objects, and the frame is adjusted according to how far the object exceeds it, so that all waiting objects lie within the frame. This improves the accuracy with which the computer counts the waiting objects, and hence the accuracy of any calculation that takes the waiting objects as parameters.
In one embodiment, as shown in fig. 6, the step of adjusting the size of the object labeling frame according to the exceeding amplitude of the elevator waiting object to obtain an adjusted target object labeling frame includes:
Step 602: determining the variation corresponding to the boundary of the object labeling frame according to the height corresponding to each elevator waiting object in the real picture.
The variation corresponding to the boundary of the object labeling frame is the amplitude by which the frame adjusts its coverage according to the specific situation of the waiting objects; the adjustment amplitude corresponds to this variation.
Specifically, the server determines the height of each elevator waiting object from the object attributes in the real picture. Because different heights yield different boundary variations, the server applies a differential principle: the exceeding part is divided into very fine segments, the excess of each segment of a waiting object's height is calculated to obtain the boundary variation contributed by that segment, and all segments are then summed. The result is the variation corresponding to the boundary of the object labeling frame.
For example, the heights of objects 7, 8, and 9 among the elevator waiting objects 1-25 in the real picture shot by the terminal are calculated to obtain the amplitude by which they exceed object labeling frame B; the degree of variation of frame B is then computed from this exceeding amplitude, giving the variation K corresponding to the boundary of frame B.
Step 604: adjusting the boundary of the object labeling frame based on the variation corresponding to the boundary, to obtain the adjusted target object labeling frame.
Specifically, since the variation of the frame boundary is directly related to the heights of the waiting objects, the excess of each waiting object's height can be used directly as the boundary variation. The set of segmented height excesses obtained by the differential method is equivalent to the variation corresponding to the frame boundary and can be applied directly to adjust the boundary, yielding the adjusted target object labeling frame.
For example, it is determined from the real picture that the waiting objects 7, 8, and 9 among objects 1-25 exceed the object labeling frame, and their heights are measured. The variation H of the frame boundary caused by these heights is obtained by the differential method, and the boundary is adjusted according to H. Because a certain margin must remain between the frame boundary and the boundaries of the waiting objects, the adjustment amount must be greater than or equal to H; the result is the adjusted target object labeling frame C.
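A rough sketch of this height-based adjustment, under the strong simplifications that the frame is axis-aligned and only one boundary moves; the slice-and-sum loop is a discrete stand-in for the differential treatment described above, and the margin is an invented placeholder rather than a value taken from the patent:

```python
from typing import List

def boundary_variation(overshoots: List[float], n_slices: int = 100) -> float:
    """Recover each object's total overshoot by summing n equal slices of it
    (a discrete analogue of the segment-and-sum step above), then take the
    largest total as the variation H required at the boundary."""
    totals = [sum(h / n_slices for _ in range(n_slices)) for h in overshoots]
    return max(totals, default=0.0)

def adjust_boundary(y_top: float, overshoots: List[float],
                    margin: float = 5.0) -> float:
    """Move the frame boundary outward by at least H plus a margin, so the
    head of every waiting object stays strictly inside the frame."""
    return y_top + boundary_variation(overshoots) + margin

# e.g. adjust_boundary(480.0, [12.0, 7.5, 3.0]) -> about 497.0
```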
In this embodiment, the heights of the elevator waiting objects are obtained, the variation corresponding to the frame boundary is determined from them, and the boundary is then adjusted by that variation so that all waiting objects lie within the frame. Since the boundary can be adjusted directly from the objects' heights, the computer's internal logic stays relatively simple and complicated calculation is avoided, which improves the reliability of the boundary adjustment.
In one embodiment, as shown in fig. 7, the method further comprises:
Step 702: obtaining virtual building information corresponding to the target space.
The virtual building information is the information required to construct a building model corresponding to the target space; the more information there is, the more detailed the model and the closer it is to the target space. The information may include the coordinate position of each elevator door, the length, width, and height of each elevator door, the position of each doorway, the size of each lobby, the decoration style, the camera installation positions, and the positions and extents of danger areas.
Specifically, the server 104 acquires, through the terminal 102, the attribute information contained in each structural element of the target space, and converts that attribute information into the virtual building information used to establish the virtual building model. Since the virtual building information is converted from information collected in the target space, both represent the same information.
For example, the server 104 acquires through the terminal 102 the attribute information of the structural elements 1-50 of the target space A, and converts the acquired attribute information 1-50 into the virtual building information used to establish the virtual building model A.
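To make the shape of this information concrete, here is a purely illustrative sketch of how the converted virtual building information might be structured before being handed to the modelling software; all field names and units are assumptions, not part of the patent:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point3 = Tuple[float, float, float]  # building coordinates, metres

@dataclass
class ElevatorDoor:
    position: Point3   # door centre
    width: float       # m
    height: float      # m

@dataclass
class VirtualBuildingInfo:
    """Illustrative container for the converted attribute information that
    is later fed into the virtual building construction software."""
    doors: List[ElevatorDoor] = field(default_factory=list)
    doorways: List[Point3] = field(default_factory=list)
    lobby_size: Tuple[float, float] = (0.0, 0.0)  # width x depth (m)
    camera_positions: List[Point3] = field(default_factory=list)
    danger_areas: List[Tuple[Point3, float]] = field(default_factory=list)  # (position, length)
```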
Step 704: inputting the virtual building information corresponding to the target space into the virtual building construction software to obtain a virtual building model corresponding to the target space.
The virtual building construction software is software that constructs a building model from virtual building information, such as Building Information Modeling (BIM) software. A virtual building model established by such software can fully express the information of the target space, and simulation can be performed within it.
Specifically, the virtual building information whose attributes match the target space is input into the virtual building construction software, which represents the information three-dimensionally according to an algorithm preset in its kernel, yielding a virtual building model corresponding to the target space. This model faithfully reflects the condition of the target space.
For example, the virtual building information 1-100 corresponding to the target space is input into the virtual building construction software S, which builds it up three-dimensionally according to its built-in algorithm, obtaining a virtual building model containing the virtual building information 1-100 that faithfully reflects the condition of the target space.
In this embodiment, the virtual building model is established from the virtual building information corresponding to the target space, so that the model stays as consistent as possible with the actual target space, and changes in the target space can be predicted by making the corresponding changes in the virtual building model.
In one embodiment, as shown in fig. 8, a waiting hall area is set up at the elevator doorway: the elevator door is assigned a passenger flow calculation area A, and the elevator waiting hall is assigned a calculation area B. Passenger flow calculation area A: on the BIM model, taking the intersection line of the elevator door and the ground as the starting point, extend a certain distance, such as 10 cm, in the direction leading out of the elevator door; a rectangle lying flat on the ground is automatically formed, and this rectangle is passenger flow calculation area A, as shown in fig. 9. Calculation area B: on the BIM model, taking the intersection line of the elevator door and the ground as the starting point, extend a certain distance, such as 50 cm, to the left and to the right of the door, and then extend 2 m in the direction leading out of the door; a rectangle is automatically formed, and this rectangle is calculation area B, as shown in fig. 9. If calculation areas B overlap one another, they are automatically merged together to enlarge the calculation area, as shown in fig. 10.
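A sketch of how areas A and B might be generated in floor-plane coordinates, with the y axis pointing out of the elevator door and the door occupying the segment [door_x0, door_x1] on the x axis; the one-pass bounding-box merge is a simplification of the automatic merging described above:

```python
from typing import List, Tuple

Rect = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max), metres

def region_a(door_x0: float, door_x1: float, depth: float = 0.10) -> Rect:
    """Area A: a strip hugging the ground, extending `depth` metres outward
    from the intersection line of the elevator door and the floor."""
    return (door_x0, 0.0, door_x1, depth)

def region_b(door_x0: float, door_x1: float,
             side: float = 0.50, depth: float = 2.0) -> Rect:
    """Area B: widen the door line by `side` metres on each side, then
    extend `depth` metres out into the waiting hall."""
    return (door_x0 - side, 0.0, door_x1 + side, depth)

def _overlaps(a: Rect, b: Rect) -> bool:
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def merge_b_regions(rects: List[Rect]) -> List[Rect]:
    """Merge overlapping B regions into one enlarged region; for simplicity
    each overlapping pair is replaced by its bounding box (single pass)."""
    merged: List[Rect] = []
    for r in rects:
        for i, m in enumerate(merged):
            if _overlaps(r, m):
                merged[i] = (min(r[0], m[0]), min(r[1], m[1]),
                             max(r[2], m[2]), max(r[3], m[3]))
                break
        else:
            merged.append(r)
    return merged
```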
In one embodiment, the shot picture and the shape of the calculation area are superimposed, forming a two-dimensional picture according to the installation position and shooting direction of the camera: in the two-dimensional picture, the shape of the calculation area is overlaid on the actual picture taken by the camera. After superimposition, the initial calculation areas A and B are not necessarily rectangular and may take other shapes. Initially, or when nobody is present, calculation area B is a rectangle with automatically generated dimensions. If a person enters the area and comes within a certain distance of the boundary, the boundary extends automatically and the area may become a polygon or an irregular figure, as shown in fig. 11, because the waiting persons may be too numerous and may spill over the boundary. The height of the rectangle of calculation area B can also be adjusted automatically according to the heights of the people entering the area, so that every head is completely covered; the camera can acquire the height information of the persons. The administrator can fine-tune the automatically generated initial shape on site to meet the application requirements of each location.
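A sketch of this superimposition step, assuming a standard pinhole camera model (numpy only; lens distortion ignored). R, t, and K stand for the camera pose and intrinsics, which the embodiment assumes are shared between the real camera and the virtual camera:

```python
import numpy as np

def project_region(corners_world: np.ndarray, R: np.ndarray,
                   t: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project the 3-D corners of a calculation region (N x 3, world frame)
    into pixel coordinates, so the region's shape can be overlaid on the
    real picture taken by the camera.

    R (3x3) and t (3,) give the camera pose; K is the 3x3 intrinsic matrix.
    """
    cam = (R @ corners_world.T).T + t   # world frame -> camera frame
    uv = (K @ cam.T).T                  # camera frame -> image plane
    return uv[:, :2] / uv[:, 2:3]       # perspective divide -> pixels

# After projection, an initially rectangular area is in general a
# quadrilateral in the image, matching the note above that areas A and B
# need not remain rectangular once superimposed.
```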
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the sequence indicated by the arrows, they are not necessarily executed in that sequence. Unless explicitly stated herein, the steps are not limited to the exact order illustrated and may be performed in other orders. Moreover, at least some of the steps in these flowcharts may comprise multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides an image processing apparatus for implementing the image processing method. The implementation scheme for solving the problem provided by the apparatus is similar to the implementation scheme described in the method, so specific limitations in one or more embodiments of the image processing apparatus provided below may refer to the limitations on the image processing method in the foregoing, and details are not described here again.
In one embodiment, as shown in fig. 12, there is provided an image processing apparatus including a virtual building model obtaining module, an object labeling frame obtaining module, and a target picture obtaining module, wherein:
a virtual building model obtaining module 1202, configured to obtain a virtual building model corresponding to a target space, where the virtual building model is a virtual model used for simulating the target space;
an object labeling frame obtaining module 1204, configured to obtain a virtual picture obtained by observing the virtual building model with the virtual camera, and determine an object labeling area in the virtual picture to obtain an object labeling frame, where pose information of the virtual camera is the same as pose information of the target camera in the target space;
the target picture obtaining module 1206 is configured to obtain a real picture obtained by shooting the target space with the target camera, and synthesize the real picture and the object labeling frame to obtain a target picture, where the target picture is used for labeling the object in the target space.
In one embodiment, the object labeling frame obtaining module is configured to determine a detection function corresponding to the object labeling area based on attribute information corresponding to each structural element included in the virtual building model; divide the object labeling area according to the detection function corresponding to the object labeling area to obtain a first object labeling frame and a second object labeling frame; and obtain the object labeling frame based on the first object labeling frame and the second object labeling frame.
In one embodiment, the object labeling frame function module is configured to count, according to the attribute and shape corresponding to the first object labeling frame, the number of objects in the target picture passing through the first object labeling frame to enter or exit the target elevator; calculate the passenger flow volume and the number of internal objects of the target elevator based on that number; count, according to the attribute and shape corresponding to the second object labeling frame, the number of elevator waiting objects in the second object labeling frame in the target picture and the average distance between adjacent waiting objects; and obtain the predicted load of the target elevator based on the number of waiting objects and the average distance.
In one embodiment, the object labeling frame adjusting module is configured to judge, according to the real-time change of the elevator waiting objects in the target picture, whether any elevator waiting object in the target picture exceeds the object labeling frame; and, if an elevator waiting object exceeds the boundary corresponding to the object labeling frame, adjust the size of the frame according to the amplitude by which the object exceeds it.
In one embodiment, the object labeling frame adjusting module is configured to determine the variation corresponding to the boundary of the object labeling frame according to the height corresponding to each elevator waiting object in the real picture; and adjust the boundary based on that variation to obtain an adjusted target object labeling frame within which every elevator waiting object is located.
In one embodiment, the virtual building model obtaining module is configured to obtain virtual building information corresponding to a target space, where the virtual building information is attribute information included in each structural element in a virtual model corresponding to the target space; and inputting the virtual building information corresponding to the target space into virtual building construction software to obtain a virtual building model corresponding to the target space.
The respective modules in the image processing apparatus described above may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 13. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing server data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an image processing method.
Those skilled in the art will appreciate that the architecture shown in fig. 13 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The computer instructions are read by a processor of the computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the steps of the above-described method embodiments. It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high-density embedded nonvolatile Memory, resistive Random Access Memory (ReRAM), Magnetic Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene Memory, and the like. Volatile Memory can include Random Access Memory (RAM), external cache Memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), for example. The databases involved in the embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a block chain based distributed database, and the like. The processors referred to in the various embodiments provided herein may be, without limitation, general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing-based data processing logic devices, or the like.
All possible combinations of the technical features in the above embodiments may not be described for the sake of brevity, but should be considered as being within the scope of the present disclosure as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring a virtual building model corresponding to a target space, wherein the virtual building model is used for simulating the target space;
acquiring a virtual picture obtained by observing the virtual building model by using a virtual camera, determining an object labeling area in the virtual picture, and obtaining an object labeling frame, wherein the pose information of the virtual camera is the same as that of a target camera in the target space;
and acquiring a real picture obtained by shooting the target space through the target camera, and synthesizing the real picture and the object labeling frame to obtain a target picture, wherein the target picture is used for labeling the object in the target space.
2. The method according to claim 1, wherein the acquiring a virtual picture obtained by observing the virtual building model with a virtual camera and determining an object labeling area in the virtual picture to obtain an object labeling frame comprises:
determining a detection function corresponding to the object labeling area based on attribute information corresponding to each structural element contained in the virtual building model;
dividing the object labeling area according to the detection function corresponding to the object labeling area to obtain a first object labeling frame and a second object labeling frame;
and obtaining the object labeling frame based on the first object labeling frame and the second object labeling frame.
3. The method of claim 2, wherein if the detection function is a load statistics function for the target elevator, the method further comprises, after the obtaining the object labeling frame based on the first object labeling frame and the second object labeling frame:
counting, according to the attribute and shape corresponding to the first object labeling frame, the number of objects in the target picture passing through the first object labeling frame to enter or exit the target elevator;
calculating the passenger flow volume and the number of internal objects of the target elevator based on the number of objects;
counting, according to the attribute and shape corresponding to the second object labeling frame, the number of elevator waiting objects in the second object labeling frame in the target picture; and
obtaining the predicted load of the target elevator based on the number of the elevator waiting objects and the number of the internal objects.
4. The method according to claim 1, wherein after the acquiring a real picture obtained by shooting the target space with the target camera and synthesizing the real picture and the object labeling frame to obtain a target picture, the method further comprises:
judging, according to the real-time change of the elevator waiting objects in the target picture, whether any elevator waiting object in the target picture exceeds the object labeling frame; and
if an elevator waiting object in the target picture exceeds the boundary corresponding to the object labeling frame, adjusting the size of the object labeling frame according to the amplitude by which the waiting object exceeds it, to obtain an adjusted target object labeling frame.
5. The method according to claim 4, wherein the adjusting the size of the object labeling frame according to the exceeding amplitude of the elevator waiting object comprises:
determining the variation corresponding to the boundary of the object labeling frame according to the height corresponding to each elevator waiting object in the real picture; and
adjusting the boundary of the object labeling frame based on the variation corresponding to the boundary, to obtain the adjusted target object labeling frame, wherein the adjusted target object labeling frame is such that every elevator waiting object is located within it.
6. The method of claim 1, further comprising:
acquiring virtual building information corresponding to the target space, wherein the virtual building information is attribute information contained in each structural element in a virtual model corresponding to the target space;
and inputting the virtual building information corresponding to the target space into virtual building construction software to obtain a virtual building model corresponding to the target space.
7. An image processing apparatus, characterized in that the apparatus comprises:
the virtual building model acquisition module is used for acquiring a virtual building model corresponding to a target space, wherein the virtual building model is used for simulating the target space;
an object labeling frame obtaining module, configured to obtain a virtual picture obtained by observing the virtual building model with a virtual camera, and determine an object labeling area in the virtual picture to obtain an object labeling frame, where pose information of the virtual camera is the same as pose information of a target camera in the target space;
and the target picture obtaining module is used for obtaining a real picture obtained by shooting the target space through the target camera, and synthesizing the real picture and the object labeling frame to obtain a target picture, wherein the target picture is used for labeling the object in the target space.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any one of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 6 when executed by a processor.
CN202210373306.6A 2022-04-11 2022-04-11 Image processing method, image processing apparatus, computer device, storage medium, and computer program Pending CN114782677A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210373306.6A CN114782677A (en) 2022-04-11 2022-04-11 Image processing method, image processing apparatus, computer device, storage medium, and computer program
PCT/CN2023/071161 WO2023197705A1 (en) 2022-04-11 2023-01-09 Image processing method and apparatus, computer device, storage medium and computer program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210373306.6A CN114782677A (en) 2022-04-11 2022-04-11 Image processing method, image processing apparatus, computer device, storage medium, and computer program

Publications (1)

Publication Number Publication Date
CN114782677A (en)

Family

ID=82428577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210373306.6A Pending CN114782677A (en) 2022-04-11 2022-04-11 Image processing method, image processing apparatus, computer device, storage medium, and computer program

Country Status (2)

Country Link
CN (1) CN114782677A (en)
WO (1) WO2023197705A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023197705A1 (en) * 2022-04-11 2023-10-19 日立楼宇技术(广州)有限公司 Image processing method and apparatus, computer device, storage medium and computer program

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11232636B2 (en) * 2018-02-08 2022-01-25 Edx Technologies, Inc. Methods, devices, and systems for producing augmented reality
CN111510701A (en) * 2020-04-22 2020-08-07 Oppo广东移动通信有限公司 Virtual content display method and device, electronic equipment and computer readable medium
CN111833423A (en) * 2020-06-30 2020-10-27 北京市商汤科技开发有限公司 Presentation method, presentation device, presentation equipment and computer-readable storage medium
CN113657307A (en) * 2021-08-20 2021-11-16 北京市商汤科技开发有限公司 Data labeling method and device, computer equipment and storage medium
CN114782677A (en) * 2022-04-11 2022-07-22 日立楼宇技术(广州)有限公司 Image processing method, image processing apparatus, computer device, storage medium, and computer program

Also Published As

Publication number Publication date
WO2023197705A1 (en) 2023-10-19

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination