CN110191316A - Information processing method, apparatus, device, and storage medium - Google Patents
Information processing method, apparatus, device, and storage medium
- Publication number
- CN110191316A (application CN201910418876.0A)
- Authority
- CN
- China
- Prior art keywords
- target object
- distance
- information
- acquisition device
- image acquisition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
- H04N7/185—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Emergency Management (AREA)
- Business, Economics & Management (AREA)
- Optics & Photonics (AREA)
- Signal Processing (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Embodiments of the present application disclose an information processing method, apparatus, device, and storage medium. The method includes: obtaining a real-scene image collected by an image acquisition device; obtaining at least one first object included in the real-scene image; when a target object exists among the at least one first object, determining the distance between the target object and the image acquisition device; and issuing prompt information when the distance satisfies a preset condition.
Description
Technical field
Embodiments of the present application relate to electronic technology, and in particular to, but not limited to, an information processing method, apparatus, device, and storage medium.
Background
Augmented reality (AR) is a technology that calculates the position and angle of a camera image in real time and overlays corresponding images, video, or three-dimensional models. The goal of this technology is to superimpose the virtual world on the real world on a screen and allow the two to interact. As technology develops, the uses of augmented reality grow ever wider.
An AR head-mounted display is a wearable device worn on the head that realizes AR technology. Through computer technology it superimposes virtual information onto the real world, so that the real environment and virtual objects appear in the same picture in real time and the two kinds of information complement each other; a helmet-style display then presents the picture before the user's eyes, enhancing the user's sense of presence. For example, the AR glasses developed by Google can overlay virtual data onto the real-time image collected by the camera and present the picture in front of the user's eyeball through a miniature projection device, thereby realizing a variety of application functions, such as navigation or displaying the parameters of surrounding buildings.
However, currently known AR glasses do not address the problem of warning a user of danger while the user is immersed in using the glasses.
Summary of the invention
In view of this, embodiments of the present application provide an information processing method, apparatus, device, and storage medium.
The technical solutions of the embodiments of the present application are achieved as follows:
In a first aspect, an embodiment of the present application provides an information processing method, the method comprising:
obtaining a real-scene image collected by an image acquisition device;
obtaining at least one first object included in the real-scene image;
when a target object exists among the at least one first object, determining the distance between the target object and the image acquisition device;
and issuing prompt information when the distance satisfies a preset condition.
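The four-step flow of the first aspect can be sketched as follows. This is a minimal illustration, not the patent's implementation: the detector, target test, and ranging functions are hypothetical stand-ins for the components described in the specification.

```python
def detect_objects(frame):
    # Hypothetical stand-in for object detection on the real-scene image
    # (step 2: obtain the first objects included in the image).
    return frame.get("objects", [])

def process_frame(frame, is_target, distance_of, threshold_m=3.0):
    """Return a prompt string when a target object is closer than threshold_m."""
    for obj in detect_objects(frame):
        if is_target(obj):                 # step 3: a target object exists
            d = distance_of(obj)           # distance to the image acquisition device
            if d <= threshold_m:           # step 4: the preset condition is met
                return f"Warning: {obj['name']} at {d:.1f} m"
    return None
```

For example, with a frame containing a wall at 2 m and a chair at 9 m and a 3 m threshold, only the wall triggers a prompt.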
In an embodiment of the present application, determining the distance between the target object and the image acquisition device when a target object exists among the at least one first object comprises:
when a target object exists among the at least one first object, determining depth information of the target object;
and determining the depth information as the distance between the target object and the image acquisition device.
In an embodiment of the present application, issuing prompt information when the distance satisfies a preset condition comprises:
when the distance satisfies the preset condition, determining the acceleration and angular velocity of the image acquisition device during movement;
determining, from the acceleration, the angular velocity, and the distance between the target object and the image acquisition device, the movement duration after which the image acquisition device will come into contact with the target object;
and issuing duration prompt information that includes the movement duration, for prompting the user that contact with the target object will occur after the movement duration.
In an embodiment of the present application, after determining the distance between the target object and the image acquisition device, the method further comprises:
determining the direction of the target object relative to the image acquisition device according to collected human body posture information and the real-scene image.
Correspondingly, issuing prompt information when the distance satisfies a preset condition comprises:
when the distance satisfies the preset condition, issuing position prompt information that includes the distance between the target object and the image acquisition device and the direction of the target object relative to the image acquisition device, for prompting the user that a target object exists at that distance in that direction.
In an embodiment of the present application, after determining the distance between the target object and the image acquisition device, the method further comprises:
determining a reference direction according to collected human body posture information and the real-scene image;
and issuing movement prompt information that includes the reference direction, for prompting the user to move based on the reference direction.
In an embodiment of the present application, determining that a target object exists among the at least one first object comprises:
obtaining feature information of the at least one first object;
determining attributes of the at least one first object according to the feature information;
and when the attributes of a first object match preset attributes, determining that first object to be a target object;
wherein the attributes include at least one of shape, size, texture, and color.
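The attribute check above can be sketched as a simple profile match. The attribute names and preset values here are illustrative assumptions, not taken from the patent:

```python
# Hypothetical preset profile: a first object counts as a target object when
# each listed attribute takes one of the allowed values.
PRESET_ATTRIBUTES = {"shape": {"cuboid"}, "color": {"grey", "white"}}

def is_target_object(attributes):
    """True if every preset attribute key matches one of its allowed values."""
    return all(attributes.get(key) in allowed
               for key, allowed in PRESET_ATTRIBUTES.items())
```

A grey cuboid (e.g. a wall segment) would match this profile; a sphere would not.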
In a second aspect, an embodiment of the present application provides an information processing apparatus comprising a collection unit, an acquisition unit, a determination unit, and a prompt unit, wherein:
the collection unit is configured to obtain a real-scene image collected by an image acquisition device;
the acquisition unit is configured to obtain at least one first object included in the real-scene image;
the determination unit is configured to determine, when a target object exists among the at least one first object, the distance between the target object and the image acquisition device;
and the prompt unit is configured to issue prompt information when the distance satisfies a preset condition.
In an embodiment of the present application, the determination unit comprises a first determining module and a second determining module, wherein:
the first determining module is configured to determine depth information of the target object when a target object exists among the at least one first object;
and the second determining module is configured to determine the depth information as the distance between the target object and the image acquisition device.
In a third aspect, an embodiment of the present application provides an electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the program, implements the steps of the information processing method described above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the information processing method described above.
Embodiments of the present application provide an information processing method, apparatus, device, and storage medium. By obtaining a real-scene image collected by an image acquisition device, obtaining at least one first object included in the real-scene image, determining the distance between a target object and the image acquisition device when a target object exists among the at least one first object, and issuing prompt information when the distance satisfies a preset condition, the user can be reminded in time when danger may occur, enhancing the safety of the product.
Brief description of the drawings
Fig. 1 is a first schematic flowchart of an information processing method according to an embodiment of the present application;
Fig. 2 is a second schematic flowchart of an information processing method according to an embodiment of the present application;
Fig. 3 is a third schematic flowchart of an information processing method according to an embodiment of the present application;
Fig. 4 is a fourth schematic flowchart of an information processing method according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the present application.
Detailed description of the embodiments
To make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the specific technical solutions of the application are described in further detail below with reference to the accompanying drawings of the embodiments. The following examples serve only to illustrate the present application and are not intended to limit its scope.
In the following description, suffixes such as "module", "component", or "unit" are used only to facilitate the description of the application and have no specific meaning in themselves; "module", "component", and "unit" may therefore be used interchangeably.
An embodiment of the present application provides an information processing method applied to an electronic device. The functions realized by the method can be implemented by a processor in a server calling program code, and the program code can of course be stored in a computer storage medium; the server therefore includes at least a processor and a storage medium. Fig. 1 is a first schematic flowchart of the information processing method of this embodiment. As shown in Fig. 1, the method comprises:
Step S101: obtain a real-scene image collected by an image acquisition device.
Here, the electronic device can be any of various types of devices with information processing capability, such as a mobile phone, a personal digital assistant (PDA), a navigator, a digital telephone, a video telephone, a smartwatch, a smart bracelet, a wearable device, a tablet computer, or an all-in-one machine. The server can be a computing device with information processing capability, such as a mobile phone, a tablet computer, a laptop, a fixed terminal such as a personal computer, or a server cluster.
In this embodiment, when the electronic device is a wearable device such as AR glasses, the real-scene image can be the image collected by the camera of the AR glasses. Of course, the real-scene image can also be collected by a device other than the AR glasses. In general, images of the user's surroundings are collected in real time while the user uses the AR glasses.
Here, the camera can be an ordinary RGB (red-green-blue) color camera, in which case the real-scene image is an RGB image. Of course, the camera can also be another type of camera, as long as the target object can be determined in the real-scene image; the embodiment of the present application does not limit this.
Step S102: obtain at least one first object included in the real-scene image.
Here, a first object refers to any object included in the real-scene image; for example, a first object can be a real-scene object such as a desk, a chair, or a wall.
Step S103: when a target object exists among the at least one first object, determine the distance between the target object and the image acquisition device.
Here, the target object can be any preset first object; for example, the desk among the first objects can be set as the target object, or the desk and the wall among the first objects can both be set as target objects.
In this embodiment, when a target object exists among the at least one first object, the distance between the target object and the image acquisition device is determined. There are several ways to determine this distance. In a first method, target object detection is carried out on the real-scene image obtained by the RGB camera, and the detected target object is mapped into the real-scene depth image obtained by a depth camera to measure its distance. In a second method, the RGB image of the real scene and the depth image are fused into a four-channel RGBD (red-green-blue-depth) image; a deep neural network is then trained on such RGBD images and regresses both the position of the target object and its distance from the user (i.e. the image acquisition device). In a third method, target object detection is performed on the real-scene image obtained by the RGB camera, the relative coordinates between the user and the detected target object are determined, and the distance is calculated from the relative coordinates.
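The first method above can be sketched as a depth-image lookup: once the target is detected in the RGB image, the corresponding region of the aligned depth image is sampled and a robust statistic (here the median) is taken as the distance. Pixel-level alignment between the RGB and depth cameras is an assumption of this sketch; in practice it requires extrinsic calibration.

```python
import statistics

def distance_from_depth(depth_image, bbox):
    """Median depth (metres) inside a detection box.

    depth_image: 2-D list of per-pixel depths in metres (0 = invalid pixel).
    bbox: (row0, col0, row1, col1), half-open ranges.
    """
    r0, c0, r1, c1 = bbox
    samples = [depth_image[r][c]
               for r in range(r0, r1)
               for c in range(c0, c1)
               if depth_image[r][c] > 0]   # skip invalid depth pixels
    return statistics.median(samples) if samples else None
```

The median makes the estimate tolerant of invalid pixels and of background leaking into the corners of the box.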
In some embodiments, whether a target object exists among the at least one first object can be judged through the following steps:
Step S11: obtain feature information of the at least one first object;
Step S12: determine attributes of the at least one first object according to the feature information;
Step S13: when the attributes of a first object match preset attributes, determine that first object to be a target object; wherein the attributes include at least one of shape, size, texture, and color.
Step S104: when the distance satisfies a preset condition, issue prompt information.
For example, prompt information can be issued when the distance is less than or equal to a preset threshold, to attract the user's attention and inform the user that a target object exists.
In this embodiment of the present application, a real-scene image collected by an image acquisition device is obtained; at least one first object included in the real-scene image is obtained; when a target object exists among the at least one first object, the distance between the target object and the image acquisition device is determined; and prompt information is issued when the distance satisfies a preset condition. In this way, the user can be reminded in time when danger may occur, enhancing the safety of the product.
Based on the foregoing embodiments, an embodiment of the present application provides another information processing method applied to an electronic device. The functions realized by the method can be implemented by a processor in a server calling program code, and the program code can of course be stored in a computer storage medium; the server therefore includes at least a processor and a storage medium. Fig. 2 is a second schematic flowchart of the information processing method of this embodiment. As shown in Fig. 2, the method comprises:
Step S201: obtain a real-scene image collected by an image acquisition device.
In this embodiment, when the electronic device is a wearable device such as AR glasses, the real-scene image can be the image collected by the camera of the AR glasses. Of course, the real-scene image can also be collected by a device other than the AR glasses. In general, images of the user's surroundings are collected in real time while the user uses the AR glasses.
Step S202: obtain at least one first object included in the real-scene image.
Step S203: when a target object exists among the at least one first object, determine depth information of the target object.
Here, the first objects can be all objects included in the real-scene image, and the target object can correspondingly be an obstacle among those objects. That is, the real-scene image captured by the RGB camera in the AR glasses can be detected to determine the obstacle. The obstacle is then mapped into the depth image captured by a depth camera to determine its depth information, and the depth information is taken as its distance from the user.
In some embodiments, whether a target object exists among the at least one first object can be judged as follows: obtain feature information of the at least one first object; determine attributes of the at least one first object according to the feature information; when the attributes of a first object match preset attributes, determine that first object to be a target object; wherein the attributes include at least one of shape, size, texture, and color.
Step S204: determine the depth information as the distance between the target object and the image acquisition device.
Of course, the RGB image and the depth image of the same real scene can also be fused into a four-channel RGBD image, and a neural network model can be trained directly on the RGBD image to determine the obstacle in the RGBD image and its depth information.
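The RGBD fusion step above can be sketched without a deep-learning framework: each pixel of the fused image carries four channels (R, G, B, depth). A real system would feed such four-channel tensors to a trained network; that part is omitted here.

```python
def fuse_rgbd(rgb_image, depth_image):
    """Fuse an RGB image and an aligned depth image into one 4-channel image.

    rgb_image[r][c] is an (R, G, B) tuple; depth_image[r][c] is depth in metres.
    Returns a grid of (R, G, B, depth) tuples.
    """
    return [[(*rgb_image[r][c], depth_image[r][c])
             for c in range(len(rgb_image[0]))]
            for r in range(len(rgb_image))]
```

In a framework such as NumPy or PyTorch the same operation is a channel-wise concatenation of the RGB tensor and the depth map.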
Step S205: when the distance satisfies a preset condition, issue prompt information.
For example, prompt information can be issued when the distance is less than or equal to a preset threshold, to attract the user's attention and inform the user that a target object exists.
In this embodiment of the present application, a real-scene image collected by an image acquisition device is obtained; at least one first object included in the real-scene image is obtained; when a target object exists among the at least one first object, depth information of the target object is determined; the depth information is determined as the distance between the target object and the image acquisition device; and prompt information is issued when the distance satisfies a preset condition. In this way, the user can be reminded in time when danger may occur, enhancing the safety of the product.
Based on the foregoing embodiments, an embodiment of the present application provides another information processing method, the method comprising:
Step S211a: obtain a real-scene image collected by an image acquisition device;
Step S212a: obtain at least one first object included in the real-scene image;
Step S213a: when a target object exists among the at least one first object, determine the distance between the target object and the image acquisition device;
Step S214a: when the distance satisfies a preset condition, determine the acceleration and angular velocity of the image acquisition device during movement;
Here, an inertial measurement unit (IMU) can be used to determine the acceleration and angular velocity of the image acquisition device (i.e. the user) during movement. Other methods can of course also be used; the embodiment of the present application places no restriction on this.
Step S215a: determine, from the acceleration, the angular velocity, and the distance between the target object and the image acquisition device, the movement duration after which the image acquisition device will come into contact with the target object;
For example, a user wears AR glasses while walking. When an obstacle is detected at a certain distance in some direction, and the user is still 8 meters away from it, the user's acceleration and angular velocity during movement are measured, and the expected time until the user touches the obstacle is calculated (assuming the acceleration and angular velocity remain unchanged while the user moves toward the obstacle). After the expected time is determined, prompt information can be issued; the prompt information includes the expected time and can be a voice prompt, a text prompt, or a visual prompt, the embodiment of the present application placing no restriction on the prompt mode. The prompt information reminds the user that contact with the obstacle will occur after the expected time (the movement duration), so that the user takes care to avoid it or change direction.
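The patent does not give a formula for the movement duration; one plausible sketch, under the stated assumption that the motion parameters stay constant, is straight-line constant-acceleration kinematics: solve d = v0*t + a*t^2/2 for t. Heading changes from the angular rate are ignored in this simplified model.

```python
import math

def time_to_contact(distance_m, speed_mps, accel_mps2):
    """Seconds until the user covers distance_m at speed v0 with acceleration a."""
    if accel_mps2 == 0:
        return distance_m / speed_mps if speed_mps > 0 else math.inf
    # Positive root of (a/2)*t^2 + v0*t - d = 0.
    disc = speed_mps ** 2 + 2 * accel_mps2 * distance_m
    if disc < 0:
        return math.inf   # decelerating: the user stops before contact
    return (-speed_mps + math.sqrt(disc)) / accel_mps2
```

At 2 m/s with no acceleration, the 8 m example above gives a 4 s warning horizon.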
Step S216a: issue duration prompt information, the duration prompt information including the movement duration, for prompting the user that contact with the target object will occur after the movement duration.
In this embodiment of the present application, a real-scene image collected by an image acquisition device is obtained; at least one first object included in the real-scene image is obtained; when a target object exists among the at least one first object, the distance between the target object and the image acquisition device is determined; when the distance satisfies a preset condition, the acceleration and angular velocity of the image acquisition device during movement are determined; the movement duration after which the image acquisition device will come into contact with the target object is determined from the acceleration, the angular velocity, and the distance; and duration prompt information including the movement duration is issued, prompting the user that contact with the target object will occur after the movement duration. In this way, the user can be reminded in time when danger may occur, enhancing the safety of the product.
Based on the foregoing embodiments, an embodiment of the present application provides another information processing method, the method comprising:
Step S211b: obtain a real-scene image collected by an image acquisition device;
Step S212b: obtain at least one first object included in the real-scene image;
Step S213b: when a target object exists among the at least one first object, determine depth information of the target object;
Step S214b: determine the depth information as the distance between the target object and the image acquisition device;
Here, the first objects can be all objects included in the real-scene image, and the target object can correspondingly be an obstacle among those objects. That is, the real-scene image captured by the RGB camera in the AR glasses can be detected to determine the obstacle; the obstacle is then mapped into the depth image captured by a depth camera to determine its depth information, and the depth information is taken as its distance from the user.
Of course, the RGB image and the depth image of the same real scene can also be fused into a four-channel RGBD image, and a neural network model can be trained directly on the RGBD image to determine the obstacle in the RGBD image and its depth information.
Step S215b: when the distance satisfies a preset condition, determine the acceleration and angular velocity of the image acquisition device during movement;
Step S216b: determine, from the acceleration, the angular velocity, and the distance between the target object and the image acquisition device, the movement duration after which the image acquisition device will come into contact with the target object;
Step S217b: issue duration prompt information, the duration prompt information including the movement duration, for prompting the user that contact with the target object will occur after the movement duration.
For example, a user wears AR glasses while walking. When an obstacle is detected at a certain distance in some direction, and the user is still 5 meters away from it, the user's acceleration and angular velocity during movement are measured, and the movement duration until the user touches the obstacle is calculated (assuming the acceleration and angular velocity remain unchanged while the user moves toward the obstacle). After the movement duration is determined, prompt information can be issued to remind the user that contact with the obstacle will occur after the movement duration, so that the user takes care to avoid it or change direction.
Based on the foregoing embodiments, an embodiment of the present application provides another information processing method, the method comprising:
Step S221a: obtain a real-scene image collected by an image acquisition device;
Step S222a: obtain at least one first object included in the real-scene image;
Step S223a: when a target object exists among the at least one first object, determine the distance between the target object and the image acquisition device;
Step S224a: determine the direction of the target object relative to the image acquisition device according to collected human body posture information and the real-scene image;
For example, when the user wears AR glasses, the direction of the target object relative to the image acquisition device (i.e. the user) can be determined by obtaining the posture information of the user's head relative to the body and combining it with the real-scene image captured by the camera in the AR glasses. When the user's head is not turned relative to the body, the direction of the target object within the real-scene image captured by the camera is itself the direction of the target object relative to the image acquisition device. By contrast, when the user's head is turned relative to the body, the direction of the target object within the real-scene image is no longer its direction relative to the image acquisition device; the specific posture information of the human body must then be combined to determine the actual direction of the target object relative to the image acquisition device.
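One way to sketch this combination is to add the object's bearing inside the image (derived from its horizontal pixel offset and the camera's field of view) to the head-relative-to-body yaw from the posture sensor. The angle conventions, the linear pixel-to-angle mapping, and the function names here are all assumptions for illustration:

```python
def object_direction_deg(pixel_x, image_width, horizontal_fov_deg, head_yaw_deg):
    """Bearing of the object relative to the user's body, in degrees.

    Negative = left of the body's forward axis, positive = right.
    head_yaw_deg is the head's rotation relative to the body (same sign convention).
    """
    # Offset of the object from the image centre, as a fraction of half-width.
    offset = (pixel_x - image_width / 2) / (image_width / 2)
    bearing_in_image = offset * (horizontal_fov_deg / 2)
    return head_yaw_deg + bearing_in_image
```

With the head straight, an object at the image centre is dead ahead; with the head turned 30 degrees left, that same centred object is at 30 degrees left of the body.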
Step S225a: when the distance satisfies a preset condition, issue position prompt information, the position prompt information including the distance between the target object and the image acquisition device and the direction of the target object relative to the image acquisition device, for prompting the user that a target object exists at that distance in that direction.
For example, a user wears AR glasses and moves. The obstacle in the captured real-scene image is determined, along with the distance between the obstacle and the user and the direction of the obstacle relative to the user. Position prompt information can then be issued to remind the user that there is an obstacle at that location. For example, when the user is 3 meters away from the obstacle, a voice prompt can be issued, such as "there is an obstacle in your 10 o'clock direction to the front left, please take care to avoid it"; the prompt information can also be shown as text in the display picture, or the obstacle can be marked directly in the display picture.
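Composing such a position prompt can be sketched as converting the bearing into a clock direction (as in the "10 o'clock" voice prompt above) and attaching the distance. The message wording and the sign convention (0 = straight ahead, positive = right) are illustrative assumptions:

```python
def clock_direction(bearing_deg):
    """Map a bearing (0 = straight ahead, positive = right) to 1-12 o'clock."""
    hour = round((bearing_deg % 360) / 30) % 12
    return 12 if hour == 0 else hour

def position_prompt(distance_m, bearing_deg):
    return (f"Obstacle at your {clock_direction(bearing_deg)} o'clock, "
            f"{distance_m:.0f} m away - please avoid it")
```

A bearing of -60 degrees (front left) maps to 10 o'clock, matching the example message above.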
Based on embodiment above-mentioned, the embodiment of the present application provides a kind of information processing method again, which comprises
Step S221b, the real scene image of image acquisition device is obtained;
Step S222b, at least one first object for including in the real scene image is obtained;
Step S223b, when, there are when target object, determining the depth of the target object at least one described first object
Spend information;
Step S224b, by the depth information be determined as between the target object and described image acquisition device away from
From;
Here, first object can be all objects for including in the real scene image, accordingly, the target pair
As can be the barrier in the object.That is, can be to the real scene image using the RGB camera shooting in AR glasses
It is detected, to determine barrier.Then the barrier is corresponded in the depth image of depth camera shooting, to determine
The depth information of barrier is stated, and the depth information is determined as it at a distance from user.
It is of course also possible to which the RGB image of same real scene image and depth image are synthesized, a four-way is obtained
RGBD image, then directly using the RGBD image training neural network model, with determine the barrier in RGBD image with
And the depth information of the barrier.
Step S225b: determining, according to collected human body posture information and the real-scene image, the direction of the target object relative to the image acquisition device;
Here, when the user wears AR glasses, posture information of the user's head relative to the body can be obtained and combined with the real-scene image captured by the camera in the AR glasses, to determine the direction of the target object relative to the image acquisition device (the user). For example, when the user's head is not turned relative to the body, the direction of the target object in the real-scene image captured by the camera in the AR glasses is the direction of the target object relative to the image acquisition device. When the user's head is turned relative to the body, however, the direction of the target object in the real-scene image is no longer the direction of the target object relative to the image acquisition device; in this case the specific posture information of the human body must be combined to determine the actual direction of the target object relative to the image acquisition device.
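The correction described in this paragraph (adjusting the image-space bearing by the head pose when the head is turned) can be sketched as a simple angle sum; the sign conventions and the helper name are illustrative assumptions:

```python
def world_direction(bearing_in_image_deg, head_yaw_deg):
    """Direction of the target object relative to the user's body.

    bearing_in_image_deg: obstacle bearing in the camera frame (0 = straight
                          ahead, negative = left, positive = right).
    head_yaw_deg: how far the head (and hence the camera) is turned relative
                  to the torso, from the posture sensor (assumed convention).
    """
    angle = bearing_in_image_deg + head_yaw_deg
    # wrap the result into (-180, 180]
    return (angle + 180.0) % 360.0 - 180.0
```

With the head straight (`head_yaw_deg == 0`) the image-space bearing is returned unchanged, matching the first case in the text.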
Step S226b: when the distance meets a preset condition, issuing position prompt information, the position prompt information including the distance between the target object and the image acquisition device and the direction of the target object relative to the image acquisition device, for prompting the user that a target object exists at the position given by the distance and the direction.
Based on the foregoing embodiments, an embodiment of the present application further provides an information processing method, the method comprising:
Step S231a: obtaining a real-scene image collected by an image acquisition device;
Step S232a: obtaining at least one first object included in the real-scene image;
Step S233a: when a target object exists in the at least one first object, determining the distance between the target object and the image acquisition device;
In some embodiments, judging whether a target object exists in the at least one first object may be implemented as follows: obtaining feature information of the at least one first object; determining, according to the feature information, an attribute of the at least one first object; and when the attribute of the first object is a preset attribute, determining the first object as a target object; wherein the attribute includes at least one of shape, size, texture and color.
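The attribute check described above (comparing each first object's shape, size, texture or color against preset values) might be sketched as a dictionary match; the dict representation of the extracted features is an assumption for illustration:

```python
def is_target(obj, preset):
    """True when every attribute named in `preset` matches the object.
    `obj` and `preset` are plain dicts over shape/size/texture/color;
    the real attribute values would come from a feature extractor."""
    return all(obj.get(k) == v for k, v in preset.items())

def find_targets(objects, preset):
    """Filter the detected first objects down to the target objects."""
    return [o for o in objects if is_target(o, preset)]
```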
Step S234a: determining a reference direction according to collected human body posture information and the real-scene image;
Step S235a: issuing movement prompt information, the movement prompt information including the reference direction, for prompting the user to move based on the reference direction;
For example, when the user moves while wearing AR glasses, if an obstacle is detected to the front left and no obstacle to the front right, the user can be prompted to change the moving direction and move towards the front right.
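Choosing the reference direction from the detected clearances, as in the front-left/front-right example above, can be sketched as picking the direction with the most free space; the direction names and the scalar-clearance representation are assumptions:

```python
def pick_reference_direction(clearances):
    """Choose the candidate direction with the most free space.
    clearances: dict mapping a direction name to the distance (in metres)
    to the nearest obstacle detected in that direction."""
    return max(clearances, key=clearances.get)
```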
Step S236a: when the distance meets a preset condition, issuing prompt information.
For example, if the user does not change the moving direction according to the prompt information and continues in the original direction, the user is prompted when about to touch the obstacle.
Based on the foregoing embodiments, an embodiment of the present application further provides an information processing method, the method comprising:
Step S231b: obtaining a real-scene image collected by an image acquisition device;
Step S232b: obtaining at least one first object included in the real-scene image;
Step S233b: when a target object exists in the at least one first object, determining depth information of the target object;
Step S234b: determining the depth information as the distance between the target object and the image acquisition device;
Step S235b: determining a reference direction according to collected human body posture information and the real-scene image;
Here, when the usage scenario is a user wearing AR glasses, the posture information of the human body may be the posture information of the head relative to the body; combined with the position information of the obstacle in the real-scene image, one or more movement reference directions are determined for the user to choose from.
Step S236b: issuing movement prompt information, the movement prompt information including the reference direction, for prompting the user to move based on the reference direction;
Step S237b: when the distance meets a preset condition, issuing prompt information.
In the embodiment of the present application, a real-scene image collected by an image acquisition device is obtained; at least one first object included in the real-scene image is obtained; when a target object exists in the at least one first object, depth information of the target object is determined; the depth information is determined as the distance between the target object and the image acquisition device; a reference direction is determined according to collected human body posture information and the real-scene image; movement prompt information including the reference direction is issued to prompt the user to move based on the reference direction; and when the distance meets a preset condition, prompt information is issued. In this way, the user can be reminded in time when danger may occur, enhancing the safety of the product.
Based on the foregoing embodiments, an embodiment of the present application further provides an information processing method. The method assists AR glasses in obstacle detection and early warning: when an obstacle is detected, the user is warned of the risk, thereby preventing the user from being injured or falling. Fig. 3 is a schematic flowchart three of the implementation of the information processing method of the embodiment of the present application. As shown in Fig. 3, the method includes:
Step S301: starting the device;
Step S302: obtaining an RGB image;
Step S303: detecting whether an obstacle exists in the RGB image;
Here, when an obstacle exists in the RGB image, step S304 is executed; when no obstacle exists in the RGB image, step S302 is executed.
Step S304: determining the distance of the obstacle according to the depth image corresponding to the RGB image;
In the embodiment of the present application, obstacle detection is performed on the image obtained by the RGB camera, and each detected object is mapped to the image obtained by the depth camera to measure its distance. If there is an obstacle closer than a certain threshold, the user is alerted in time.
Step S305: judging whether the distance of the obstacle is less than a predetermined threshold;
Here, when the distance of the obstacle is less than the predetermined threshold, step S306 is executed; otherwise, step S302 is executed.
Step S306: prompting the user.
The embodiment of the present application solves the problem that a user wearing AR glasses, especially under insufficient light, easily bumps into an obstacle or is tripped by one.
In the embodiment of the present application, obstacle detection and the warning function start automatically when the device starts. While the device is running, if an obstacle is detected at a distance below the set threshold, an early-warning signal is issued to the user in time to prompt the user.
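The Fig. 3 flow (detect in RGB, measure in depth, alert below a threshold, otherwise fetch the next frame) can be sketched as a simple loop; `detect`, `measure` and `alert` are placeholder callables, not names from the patent:

```python
def monitor(frames, detect, measure, threshold, alert):
    """One pass over incoming frames, following the Fig. 3 flow.

    frames: iterable of (rgb, depth) pairs from the cameras.
    detect(rgb) -> bounding box of an obstacle, or None (step S303).
    measure(depth, bbox) -> distance in metres, or None (step S304).
    alert(distance) -> prompt the user (step S306).
    """
    for rgb, depth in frames:
        bbox = detect(rgb)               # S303: is there an obstacle?
        if bbox is None:
            continue                     # back to S302: next frame
        dist = measure(depth, bbox)      # S304: distance from the depth image
        if dist is not None and dist < threshold:
            alert(dist)                  # S305/S306: below threshold, prompt
```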
Based on the foregoing embodiments, an embodiment of the present application further provides an information processing method. The method uses an end-to-end estimation of the obstacle distance in place of the usual step-by-step distance calculation. Fig. 4 is a schematic flowchart four of the implementation of the information processing method of the embodiment of the present application. As shown in Fig. 4, the method includes:
Step S401: synthesizing, using a deep learning method, the RGB image and the depth image into a four-channel RGBD image;
Step S402: training a deep neural network with the synthesized RGBD images;
Step S403: regressing the position of the obstacle and the distance of the obstacle from the user;
Step S404: issuing prompt information when the distance meets a preset condition.
The information processing method in the embodiment of the present application adapts well to the environment, involves few steps, and runs fast.
At present, common neural network architectures fall into two categories: sequentially cascaded neural networks and residual neural networks. A cascaded neural network starts from a small network and automatically trains and adds hidden units, finally forming a multilayer structure. The embodiment of the present application instead uses a densely connected residual network, i.e., an image super-resolution network that exploits all layered features, which outperforms the other two network structures in both speed and accuracy.
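As an illustration of the end-to-end idea of Fig. 4, the sketch below maps a four-channel RGBD input directly to a position-plus-distance output. A single randomly initialised linear layer stands in for the trained deep network; the class name and shapes are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyRGBDRegressor:
    """Illustrative stand-in for the end-to-end network of Fig. 4: it maps a
    four-channel RGBD image straight to (x, y, distance). A real system would
    use a trained convolutional network; one linear layer keeps the sketch short."""

    def __init__(self, h, w):
        # weights map the flattened HxWx4 input to three regression targets
        self.W = rng.normal(0.0, 0.01, size=(h * w * 4, 3))
        self.b = np.zeros(3)

    def predict(self, rgbd):
        """rgbd: HxWx4 array -> (x, y, distance) regression output."""
        return rgbd.reshape(-1) @ self.W + self.b
```

The point is the interface: one forward pass from the fused RGBD input to the obstacle position and distance, with no separate detection and measurement stages.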
Based on the foregoing embodiments, an embodiment of the present application provides an information processing apparatus. The units included in the apparatus, the modules included in each unit, and the components included in each module can all be implemented by a processor in an electronic device, or, of course, by a specific logic circuit. In implementation, the processor may be a central processing unit (Central Processing Unit, CPU), a microprocessor (Microprocessor Unit, MPU), a digital signal processor (Digital Signal Processing, DSP), a field programmable gate array (Field Programmable Gate Array, FPGA), or the like.
Fig. 5 is a schematic diagram of the composition of the information processing apparatus of the embodiment of the present application. As shown in Fig. 5, the apparatus 500 includes: a collection unit 501, an acquisition unit 502, a determination unit 503 and a prompt unit 504, wherein:
the collection unit 501 is configured to obtain a real-scene image collected by an image acquisition device;
the acquisition unit 502 is configured to obtain at least one first object included in the real-scene image;
the determination unit 503 is configured to, when a target object exists in the at least one first object, determine the distance between the target object and the image acquisition device;
the prompt unit 504 is configured to issue prompt information when the distance meets a preset condition.
In some embodiments, the determination unit 503 includes a first determining module and a second determining module, wherein:
the first determining module is configured to, when a target object exists in the at least one first object, determine depth information of the target object;
the second determining module is configured to determine the depth information as the distance between the target object and the image acquisition device.
In some embodiments, the prompt unit 504 includes a third determining module, a fourth determining module and a prompt module, wherein:
the third determining module is configured to, when the distance meets a preset condition, determine the acceleration and angular velocity of the image acquisition device during movement;
the fourth determining module is configured to determine, according to the acceleration, the angular velocity and the distance between the target object and the image acquisition device, the movement duration after which the image acquisition device will come into contact with the target object;
the prompt module is configured to issue duration prompt information, the duration prompt information including the movement duration, for prompting the user that contact with the target object will occur after the movement duration.
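The fourth determining module derives the movement duration from the acceleration and the distance, but the patent does not give a formula. Assuming straight-line motion at a current speed v with constant acceleration a (the speed input is an added assumption), the duration t solves d = v·t + ½·a·t², as in this sketch:

```python
import math

def time_to_contact(distance, speed, acceleration):
    """Solve distance = speed*t + 0.5*acceleration*t**2 for the positive t,
    i.e. how long until the device reaches the target object at its current
    motion. Constant acceleration is an illustrative assumption."""
    if abs(acceleration) < 1e-9:
        return distance / speed if speed > 0 else math.inf
    disc = speed * speed + 2.0 * acceleration * distance
    if disc < 0:
        return math.inf                  # decelerating: never reaches the object
    return (-speed + math.sqrt(disc)) / acceleration
```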
In some embodiments, the apparatus further includes a direction determination unit, wherein:
the direction determination unit is configured to determine, according to collected human body posture information and the real-scene image, the direction of the target object relative to the image acquisition device;
accordingly, the prompt module is further configured to, when the distance meets a preset condition, issue position prompt information, the position prompt information including the distance between the target object and the image acquisition device and the direction of the target object relative to the image acquisition device, for prompting the user that a target object exists at the position given by the distance and the direction.
In some embodiments, the apparatus further includes a reference direction determination unit and a reference prompt unit, wherein:
the reference direction determination unit is configured to determine a reference direction according to collected human body posture information and the real-scene image;
the reference prompt unit is configured to issue movement prompt information, the movement prompt information including the reference direction, for prompting the user to move based on the reference direction.
In some embodiments, the determination unit 503 includes an obtaining module and a determining module, wherein:
the obtaining module is configured to obtain feature information of the at least one first object;
the determining module is configured to determine an attribute of the at least one first object according to the feature information;
the determining module is further configured to, when the attribute of the first object is a preset attribute, determine the first object as a target object;
wherein the attribute includes at least one of shape, size, texture and color.
The above description of the apparatus embodiments is similar to that of the method embodiments, and the apparatus embodiments have beneficial effects similar to those of the method embodiments. For technical details not disclosed in the apparatus embodiments of the present application, please refer to the description of the method embodiments of the present application.
It should be noted that, in the embodiment of the present application, if the above information processing method is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiment of the present application, or the part of it that contributes to the prior art, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing an electronic device (which may be a personal computer, a server, etc.) to execute all or part of the method of each embodiment of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a magnetic disk or an optical disc. Thus, the embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application provides an electronic device, including a memory and a processor, the memory storing a computer program executable on the processor, and the processor, when executing the program, implementing the steps in the information processing method provided in the above embodiments.
Correspondingly, an embodiment of the present application provides a readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps in the above information processing method.
It should be noted that the above description of the storage medium and device embodiments is similar to that of the method embodiments, and they have beneficial effects similar to those of the method embodiments. For technical details not disclosed in the storage medium and device embodiments of the present application, please refer to the description of the method embodiments of the present application.
It should be noted that Fig. 6 is a schematic diagram of a hardware entity of the electronic device of the embodiment of the present application. As shown in Fig. 6, the hardware entity of the electronic device 600 includes: a processor 601, a communication interface 602 and a memory 603, wherein:
the processor 601 usually controls the overall operation of the electronic device 600;
the communication interface 602 enables the electronic device 600 to communicate with other terminals or servers through a network;
the memory 603 is configured to store instructions and applications executable by the processor 601, and may also cache data to be processed or already processed by the processor 601 and the modules in the electronic device 600 (for example, image data, audio data, voice communication data and video communication data); it may be implemented by flash memory (FLASH) or random access memory (Random Access Memory, RAM).
In the several embodiments provided in the present application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and there may be other division manners in actual implementation, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be indirect coupling or communication connection between devices or units through some interfaces, and may be electrical, mechanical or in other forms.
The units described above as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in each embodiment of the present application may all be integrated into one processing module, or each unit may stand alone as a unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit. Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be completed by program instructions and related hardware; the aforementioned program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disc.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the several product embodiments provided in the present application may be combined arbitrarily without conflict to obtain new product embodiments.
The features disclosed in the several method or device embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method or device embodiments.
The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or replacement readily conceivable by those familiar with the technical field within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. An information processing method, the method comprising:
obtaining a real-scene image collected by an image acquisition device;
obtaining at least one first object included in the real-scene image;
when a target object exists in the at least one first object, determining the distance between the target object and the image acquisition device;
when the distance meets a preset condition, issuing prompt information.
2. The method according to claim 1, wherein the determining, when a target object exists in the at least one first object, the distance between the target object and the image acquisition device comprises:
when a target object exists in the at least one first object, determining depth information of the target object;
determining the depth information as the distance between the target object and the image acquisition device.
3. The method according to claim 1 or 2, wherein the issuing prompt information when the distance meets a preset condition comprises:
when the distance meets a preset condition, determining the acceleration and angular velocity of the image acquisition device during movement;
determining, according to the acceleration, the angular velocity and the distance between the target object and the image acquisition device, the movement duration after which the image acquisition device will come into contact with the target object;
issuing duration prompt information, the duration prompt information including the movement duration, for prompting the user that contact with the target object will occur after the movement duration.
4. The method according to claim 1 or 2, wherein after the determining the distance between the target object and the image acquisition device, the method further comprises:
determining, according to collected human body posture information and the real-scene image, the direction of the target object relative to the image acquisition device;
accordingly, the issuing prompt information when the distance meets a preset condition comprises:
when the distance meets a preset condition, issuing position prompt information, the position prompt information including the distance between the target object and the image acquisition device and the direction of the target object relative to the image acquisition device, for prompting the user that a target object exists at the position given by the distance and the direction.
5. The method according to claim 1 or 2, wherein after the determining the distance between the target object and the image acquisition device, the method further comprises:
determining a reference direction according to collected human body posture information and the real-scene image;
issuing movement prompt information, the movement prompt information including the reference direction, for prompting the user to move based on the reference direction.
6. The method according to claim 1, wherein the case in which a target object exists in the at least one first object comprises:
obtaining feature information of the at least one first object;
determining an attribute of the at least one first object according to the feature information;
when the attribute of the first object is a preset attribute, determining the first object as a target object;
wherein the attribute includes at least one of shape, size, texture and color.
7. An information processing apparatus, the apparatus comprising: a collection unit, an acquisition unit, a determination unit and a prompt unit, wherein:
the collection unit is configured to obtain a real-scene image collected by an image acquisition device;
the acquisition unit is configured to obtain at least one first object included in the real-scene image;
the determination unit is configured to, when a target object exists in the at least one first object, determine the distance between the target object and the image acquisition device;
the prompt unit is configured to issue prompt information when the distance meets a preset condition.
8. The apparatus according to claim 7, wherein the determination unit comprises a first determining module and a second determining module, wherein:
the first determining module is configured to, when a target object exists in the at least one first object, determine depth information of the target object;
the second determining module is configured to determine the depth information as the distance between the target object and the image acquisition device.
9. An electronic device, including a memory and a processor, the memory storing a computer program executable on the processor, and the processor, when executing the program, implementing the steps in the information processing method of any one of claims 1 to 6.
10. A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps in the information processing method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910418876.0A CN110191316A (en) | 2019-05-20 | 2019-05-20 | A kind of information processing method and device, equipment, storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110191316A true CN110191316A (en) | 2019-08-30 |
Family
ID=67716873
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910418876.0A Pending CN110191316A (en) | 2019-05-20 | 2019-05-20 | A kind of information processing method and device, equipment, storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110191316A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107122702A (en) * | 2017-03-13 | 2017-09-01 | 北京集创北方科技股份有限公司 | Safety device and safety method |
CN107831908A (en) * | 2011-10-07 | 2018-03-23 | 谷歌有限责任公司 | Wearable computer with the response of neighbouring object |
CN108151709A (en) * | 2016-12-06 | 2018-06-12 | 百度在线网络技术(北京)有限公司 | Localization method and device applied to terminal |
US20180276969A1 (en) * | 2017-03-22 | 2018-09-27 | T-Mobile Usa, Inc. | Collision avoidance system for augmented reality environments |
CN109011591A (en) * | 2018-08-28 | 2018-12-18 | 河南丰泰光电科技有限公司 | A kind of safety protection method and device in reality-virtualizing game |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111595346A (en) * | 2020-06-02 | 2020-08-28 | 浙江商汤科技开发有限公司 | Navigation reminding method and device, electronic equipment and storage medium |
CN111595346B (en) * | 2020-06-02 | 2022-04-01 | 浙江商汤科技开发有限公司 | Navigation reminding method and device, electronic equipment and storage medium |
CN112733620A (en) * | 2020-12-23 | 2021-04-30 | 深圳酷派技术有限公司 | Information prompting method and device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190830 |