To enable those skilled in the art to better understand the technical solutions in the embodiments of this specification, those solutions are described in detail below with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein shall fall within the scope of protection.

Modern medicine holds that dreams are produced when various stimuli inside and outside the body, such as psychological, physiological, pathological, and environmental factors, act on specific regions of the cerebral cortex during sleep. In other words, when a person dreams, the cerebral cortex is in an excited state and therefore produces brain waves, and these brain waves correspond to some degree to the person's conscious activity. For example, when a person views two different images, or hears two pieces of music with different melodies, the neural activity of the cerebral cortex differs, and so do the resulting brain waves; likewise, when a person dreams of different scenes, the cortical neural activity differs and the resulting brain waves differ as well. On this basis, the present invention proposes using brain wave data to reproduce dreams.
Referring to FIG. 1, a schematic diagram of an application scenario for dream reproduction according to an exemplary embodiment of this specification. As shown in FIG. 1, the scenario includes a user 110, a brain wave sensor 120, and a computer 130. The brain wave sensor 120 is worn on the head of the user 110 to collect the brain wave data of the user 110 and send the collected data to the computer 130. Specifically: while the user 110 is awake, the brain wave sensor 120 collects the brain wave data produced when the user 110 perceives a perceptible object, for example when the user 110 views an image, and sends that brain wave data, together with information about the corresponding perceptible object, to the computer 130. The computer 130 trains on the received brain wave data and perceptible-object information to obtain a dream reproduction model, which may take brain-wave-related information as input and perceptible-object-related information as output. Those skilled in the art will appreciate that obtaining the dream reproduction model requires a number of samples, that is, a number of brain wave recordings collected while the user perceives different perceptible objects.

After the dream reproduction model has been trained, the brain wave sensor 120 can collect the brain wave data produced by the user in the sleep state and send it to the computer 130. Based on the dream reproduction model, the computer 130 can then output information about the perceptible object corresponding to that brain wave data and generate a dream reproduction result from it. After waking, the user 110 can view the dream reproduction result through the computer 130 and thus "relive" the dream.
Based on the application scenario shown in FIG. 1, the following embodiments of this specification are described from two aspects: constructing the dream reproduction model, and reproducing dreams based on the dream reproduction model.

First, the construction of the dream reproduction model:

Referring to FIG. 2, a flowchart of a method for constructing a dream reproduction model according to an exemplary embodiment of this specification, which may include the following steps:

Step 202: obtain at least one set of correspondences between perceptible objects and the brain wave data produced by a user when perceiving those objects.

In the embodiments of this specification, a perceptible object may be a single image or an image frame extracted from a video. Those skilled in the art will appreciate that both are, in essence, images; for convenience of description, the embodiments herein therefore simply state that a perceptible object may be an image.
In one embodiment, a set of perceptible objects may be predefined, for example a set containing 1000 different images. Each perceptible object in the set is presented to the user 110 in turn, for example as a slide show in a controlled environment, and the brain wave data of the user 110 is collected synchronously while each object is presented. In this way, each collection yields one correspondence between a perceptible object and the brain wave data produced while the user 110 perceived it, for example 1000 such correspondences.

In another embodiment, perceptible objects may be presented to the user 110 according to a preset rule, for example one object every 5 seconds, while brain wave data is collected continuously from the first object through the last. Afterwards, following an extraction rule matching the preset rule, a segment of brain wave data is cut out every 5 seconds according to its collection time, and the correspondence between brain wave segments and perceptible objects is established. This also ultimately yields multiple correspondences between perceptible objects and the brain wave data produced while the user 110 perceived them.
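The windowing scheme just described (one stimulus every 5 seconds, segments cut out of the continuous recording by collection time) can be sketched as follows. The sampling rate, array layout, and function name are illustrative assumptions, not part of the specification:

```python
import numpy as np

def segment_eeg(eeg, fs, stimulus_ids, interval_s=5.0):
    """Pair a continuous EEG recording with stimuli shown every interval_s seconds.

    eeg          : 1-D array of samples, recorded from first to last stimulus
    fs           : sampling rate in Hz (assumed; not fixed by the specification)
    stimulus_ids : identifiers of the perceptible objects, in presentation order
    Returns a list of (stimulus_id, eeg_segment) correspondences.
    """
    win = int(interval_s * fs)
    pairs = []
    for i, sid in enumerate(stimulus_ids):
        seg = eeg[i * win:(i + 1) * win]
        if len(seg) < win:          # recording ended early; skip partial window
            break
        pairs.append((sid, seg))
    return pairs

# Example: 3 stimuli at 5 s intervals, 128 Hz sampling
fs = 128
eeg = np.random.randn(3 * 5 * fs)
pairs = segment_eeg(eeg, fs, ["img_001", "img_002", "img_003"])
print(len(pairs))          # 3 correspondences
print(pairs[0][1].shape)   # (640,) samples per 5 s window
```

Each returned pair is one correspondence in the sense of step 202; in a real system the window boundaries would come from synchronized timestamps rather than a fixed offset.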
In a further embodiment, the user 110 may be shown a video while brain wave data is collected continuously throughout viewing; image frames and brain wave segments are then extracted from the video and the recording at the same time intervals, after which the correspondence between brain wave data and perceptible objects can be established.

It should be noted that the two embodiments described above are merely optional implementations. In practice there are other ways to obtain at least one set of correspondences between perceptible objects and the user's brain wave data. For example, while the user 110 goes about activities of their own volition, their brain wave data and retinal imaging may be collected synchronously; a correspondence between brain wave data and retinal images can then be established from the collection times, the retinal image being treated as equivalent to the perceptible object the user perceived.

Those skilled in the art will appreciate that, per the above description, the brain wave sensor 120 illustrated in FIG. 1 may also have a retinal-imaging function, or a separate wearable smart chip (not shown in FIG. 1) may be responsible for collecting the retinal images of the user 110; the embodiments of this specification place no restriction on this.
Step 204: perform feature extraction on each set of correspondences to obtain a training sample set, in which each training sample takes the extracted feature values of the brain wave data as its input values and the extracted feature values of the perceptible object as its label values.

In the embodiments of this specification, for each correspondence obtained in step 202, feature extraction is performed on both the perceptible object and the brain wave data recorded while the user perceived it, yielding a training sample set in which each training sample contains the extracted brain wave feature values and the extracted object feature values. As described for the application scenario of FIG. 1, during actual dream reproduction the perceptible object is determined from brain wave data collected while the user 110 is asleep; each training sample therefore takes the brain wave feature values as input values and the extracted object feature values as label values.
Extracting feature values from brain wave data:

In one embodiment, it is known from complex analysis that a real signal of any frequency can be expressed as a sum of periodic functions, and expressing a real signal as such a sum is precisely the process of analyzing it, each periodic function being a component of the signal. On this basis, this specification proposes decomposing the brain wave data into complex component functions, for example by Fourier transform, expressing the brain wave data as a sum of at least one complex component function; those component functions can then serve as the feature values of the brain wave data. For example, the feature values of the brain wave data might be (a₁f₁(sin x), a₂f₂(sin x), a₃f₃(sin x)).
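The Fourier-based decomposition above can be sketched with a real FFT; this is a minimal sketch, assuming a single-channel segment and a 128 Hz sampling rate (both illustrative), in which the strongest spectral components (frequency, amplitude) play the role of the a·f feature terms:

```python
import numpy as np

def eeg_fourier_features(segment, fs, n_components=3):
    """Return the n strongest spectral components (freq_hz, amplitude) of a segment.

    Each retained component corresponds to one periodic term a_k * f_k in the
    decomposition of the real signal into a sum of periodic functions.
    """
    spectrum = np.fft.rfft(segment)
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    amps = np.abs(spectrum) / len(segment)      # amplitude A/2 per pure sine
    top = np.argsort(amps)[::-1][:n_components]
    return [(freqs[k], amps[k]) for k in sorted(top)]

# Example: a synthetic "EEG" with a 10 Hz (alpha-band) and a 4 Hz component
fs = 128
t = np.arange(5 * fs) / fs
segment = 2.0 * np.sin(2 * np.pi * 10 * t) + 1.0 * np.sin(2 * np.pi * 4 * t)
for freq, amp in eeg_fourier_features(segment, fs, n_components=2):
    print(f"{freq:.1f} Hz, amplitude {amp:.2f}")
```

Because both tones fall on exact FFT bins over the 5 s window, the two recovered components are 4 Hz (amplitude 0.5) and 10 Hz (amplitude 1.0), i.e. half the time-domain amplitudes, as expected for a one-sided magnitude normalized by the segment length.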
It should be noted that the extraction method described above is only one optional implementation. In practice, brain wave feature values can also be extracted by other means, for example correlation analysis, AR parameter estimation, Butterworth low-pass filtering, genetic algorithms, and so on, with the concrete type of feature determined by the algorithm used: a Butterworth low-pass filtering algorithm yields the squared signal amplitude as the feature value, while an AR parameter estimation algorithm yields the power spectral density. These are not described one by one in the embodiments of this specification.
Extracting feature values from perceptible objects:

Taking an image as the perceptible object, in one embodiment color statistics may be computed over the perceptible object to obtain the number of pixels for each color value in it, and the resulting pixel counts expressed as a 2^N-dimensional vector, where N is the color bit depth of the image. This 2^N-dimensional vector then serves as the feature value of the perceptible object; for example, the extracted feature value is (y₁, y₂, y₃, …, y_{2^N}).
Further, since different images may have different color bit depths, for example 8-bit versus 16-bit images, the extracted feature vectors would have different dimensions. To unify and regularize the subsequent training of the samples, the color statistics of images with different bit depths may be mapped into a unified vector space, "unified" here meaning that the vectors derived from the color statistics all have the same dimension.

Note also that the larger the vector dimension, the higher the complexity and the computational cost of the subsequent training; in the embodiments of this specification, therefore, a vector space of as low a dimension as possible may be chosen, provided the granularity of the object feature values still meets the user's expectations.

It should be noted that, in practice, images of differing bit depths may instead first be converted to a common bit depth and feature extraction then performed as described above; once the color statistics have been obtained, the step of mapping each color statistic into a unified vector space is no longer needed.
Step 206: train the training samples with a supervised learning algorithm to obtain a dream reproduction model, the dream reproduction model taking the brain wave feature values as input values and the perceptible-object feature values as output values.

In the embodiments of this specification, a supervised learning algorithm may be used to train the training samples obtained in step 204, producing a dream reproduction model that takes brain wave feature values as input values and perceptible-object feature values as output values. It will be understood that the trained dream reproduction model can essentially be regarded as a functional relationship between the input values and the output values, in which the output value is influenced by all or some of the input values; this functional relationship may, for example, take the form y = f(x₁, x₂, …, x_M),

where x₁, x₂, …, x_M denote the M input values, that is, the M brain wave feature values, and y denotes the output value, that is, the feature value of the perceptible object, which may specifically be the proportional relationship among the pixel counts of the color values in the perceptible object.
It should be noted that the form of the dream reproduction model can be chosen according to actual training needs, for example a linear regression model or a logistic regression model; the embodiments of this specification limit neither the choice of model nor the specific training algorithm.
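As one concrete instance of the model forms mentioned above, a linear regression fit by least squares could be trained as follows. This is a sketch under stated assumptions: the array shapes, function names, and synthetic data are illustrative, and the specification itself fixes neither the model nor the training algorithm:

```python
import numpy as np

def train_dream_model(X, Y):
    """Fit one linear model per object-feature dimension by least squares.

    X : (n_samples, M) brain wave feature values (input values)
    Y : (n_samples, D) perceptible-object feature values (label values)
    Returns weights W of shape (M+1, D), including a bias row.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias column
    W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
    return W

def predict(W, x):
    """Map one brain wave feature vector to predicted object feature values."""
    xb = np.append(x, 1.0)
    return xb @ W

# Example: tiny synthetic problem with true relation y = 2*x1 - x2 + 0.5
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
Y = 2 * X[:, [0]] - X[:, [1]] + 0.5
W = train_dream_model(X, Y)
print(np.round(predict(W, np.array([1.0, 1.0])), 3))   # ≈ [1.5]
```

Because the synthetic labels are exactly linear in the inputs, least squares recovers the relation and the prediction at (1, 1) is 2 − 1 + 0.5 = 1.5; real EEG-to-image mappings would of course be noisy and likely need a richer model.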
Furthermore, it should be noted that different users may perceive the same perceptible object differently; the embodiments of this specification therefore propose constructing a separate dream reproduction model for each user. Moreover, the same user may perceive the same object differently under different psychological or physiological states, so constructing separate dream reproduction models for different time periods of the same user is also proposed. Other implementations are possible in practice, for example constructing a single dream reproduction model shared by all users; the embodiments of this specification place no specific restriction on this.

As the above embodiments show, the technical solution provided by the embodiments of this specification obtains at least one set of correspondences between perceptible objects and the brain wave data produced while the user perceives them, performs feature extraction on each correspondence to obtain a training sample set in which each training sample takes the extracted brain wave feature values as input values and the object feature values as label values, and trains these samples with a supervised learning algorithm to obtain a dream reproduction model whose input values are brain wave feature values and whose output values are object feature values. Thereafter, this dream reproduction model makes it possible to reproduce the user's dreams from brain wave data collected during sleep, improving the user experience.

This completes the description of the construction of the dream reproduction model.
Next, dream reproduction based on the dream reproduction model:

Referring to FIG. 3, a flowchart of a dream reproduction method according to an exemplary embodiment of this specification, which may include the following steps:

Step 302: obtain the user's brain wave data in the sleep state.

In the embodiments of this specification, the user's brain wave data in the sleep state may be obtained through the brain wave sensor 120 illustrated in FIG. 1 according to a preset rule, for example every one minute or every two minutes.

Step 304: perform feature extraction on the obtained brain wave data to get the feature values of the brain wave data.

For details of this step, see the description of step 204 in the embodiment shown in FIG. 2, not repeated here.

Step 306: input the obtained brain wave feature values into the dream reproduction model to obtain the corresponding output values.

As follows from the dream reproduction model described in the embodiment shown in FIG. 2, in this step the brain wave feature values extracted in step 304 may be input into the dream reproduction model to obtain corresponding output values, which may be the feature values of a perceptible object.

Step 308: from the correspondences, determine the perceptible object with the highest similarity to the output value, so as to generate a dream reproduction result.
In the embodiments of this specification, the similarity between the output value of step 306 and the feature value of each perceptible object in the training sample set may be computed to determine the object feature value most similar to that output; the perceptible object with the highest similarity to the output value can then be determined from the correspondences described for the embodiment shown in FIG. 2, and a dream reproduction result generated from the object so determined.

For the process of obtaining the feature value of each perceptible object in the training sample set, see the related description of the embodiment shown in FIG. 2, not repeated here.

The dream reproduction result may further be presented to the user, for example by playing the identified images as a slide show in the order in which the brain wave data was collected.

Those skilled in the art will appreciate that there may be one or more perceptible objects with the highest similarity to the output value; the embodiments of this specification place no restriction on this.

In the above description, the similarity between the output value and an object's feature value may be computed by, for example, a Euclidean distance algorithm or a cosine similarity algorithm; the embodiments of this specification place no restriction on this.
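The matching of step 308 could be sketched with cosine similarity, one of the options named above; the object identifiers and feature vectors here are purely illustrative:

```python
import numpy as np

def most_similar_objects(output_value, object_features, top_k=1):
    """Rank stored perceptible-object feature vectors by cosine similarity
    to the model output of step 306; return the top_k best object ids."""
    scores = {}
    for obj_id, feat in object_features.items():
        denom = np.linalg.norm(output_value) * np.linalg.norm(feat)
        scores[obj_id] = float(output_value @ feat) / denom if denom else 0.0
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]

# Example: three stored images; the output is closest to img_b's features
features = {
    "img_a": np.array([1.0, 0.0, 0.0]),
    "img_b": np.array([0.0, 1.0, 1.0]),
    "img_c": np.array([1.0, 1.0, 0.0]),
}
output = np.array([0.1, 0.9, 1.1])
print(most_similar_objects(output, features))   # ['img_b']
```

Passing top_k greater than 1 covers the case, noted above, where more than one perceptible object is returned for a single output value.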
As the above embodiment shows, the technical solution provided by the embodiments of this specification obtains the user's brain wave data in the sleep state, performs feature extraction on that data to get its feature values, inputs those feature values into the dream reproduction model to obtain corresponding output values, and then, from the pre-obtained correspondences between perceptible objects and the brain wave data produced when the user perceived them, determines the perceptible object with the highest similarity to the output value so as to generate a dream reproduction result, thereby allowing the user to "relive" the dream on the basis of that result.

This completes the description of dream reproduction based on the dream reproduction model.
Corresponding to the embodiments of the method for constructing a dream reproduction model described above, the embodiments of this specification also provide an apparatus for constructing a dream reproduction model. Referring to FIG. 4, a block diagram of an embodiment of such an apparatus according to an exemplary embodiment of this specification, the apparatus may include: a data acquisition module 41, a sample acquisition module 42, and a sample training module 43.

The data acquisition module 41 may be used to obtain at least one set of correspondences between perceptible objects and the brain wave data of the user when perceiving those perceptible objects;

the sample acquisition module 42 may be used to perform feature extraction on each set of correspondences to obtain a training sample set, in which each training sample takes the extracted feature values of the brain wave data as input values and the extracted feature values of the perceptible object as label values;

the sample training module 43 may be used to train the training samples with a supervised learning algorithm to obtain a dream reproduction model, the dream reproduction model taking brain wave feature values as input values and perceptible-object feature values as output values.
In one embodiment, the data acquisition module 41 may include (not shown in FIG. 4):

a providing submodule, configured to present each perceptible object in a preset set of perceptible objects to the user in turn;

a collection submodule, configured to synchronously collect the user's brain wave data while each perceptible object is presented to the user.

In one embodiment, the sample acquisition module 42 may include (not shown in FIG. 4):

a first decomposition submodule, configured to decompose the brain wave data in each set of correspondences into complex component functions, expressing the brain wave data as a sum of at least one complex component function;

a first determination submodule, configured to use the at least one complex component function as the feature values of the brain wave data.

In one embodiment, where the perceptible object is an image, the sample acquisition module 42 may include (not shown in FIG. 4):

a statistics submodule, configured to compute color statistics over the image in each set of correspondences, obtaining the number of pixels for each color value in the image;

a second determination submodule, configured to express the obtained pixel counts as a 2^N-dimensional vector, where N is the color bit depth of the image.

In one embodiment, the apparatus may further include (not shown in FIG. 4):

a mapping module, configured to map the color statistics of images with different color bit depths into a unified vector space.

In one embodiment, a separate dream reproduction model is constructed for each user.

It will be understood that the data acquisition module 41, the sample acquisition module 42, and the sample training module 43 are three functionally independent modules that may be configured in the apparatus together, as shown in FIG. 4, or each configured separately; the structure shown in FIG. 4 should therefore not be construed as limiting the solutions of the embodiments of this specification.

For the implementation of the functions and roles of the individual modules of the apparatus, see the implementation of the corresponding steps of the method for constructing a dream reproduction model described above, not repeated here.
Corresponding to the embodiments of the dream reproduction method described above, the embodiments of this specification also provide a dream reproduction apparatus. Referring to FIG. 5, a block diagram of an embodiment of a dream reproduction apparatus according to an exemplary embodiment of this specification, the apparatus may include: a brain wave acquisition module 51, a feature extraction module 52, an output module 53, and a reproduction module 54.

The brain wave acquisition module 51 may be used to obtain the user's brain wave data in the sleep state;

the feature extraction module 52 may be used to perform feature extraction on the obtained brain wave data to get the feature values of the brain wave data;

the output module 53 may be used to input the obtained brain wave feature values into the dream reproduction model to obtain the corresponding output values;

the reproduction module 54 may be used to determine, from the correspondences, the perceptible object with the highest similarity to the output value, so as to generate a dream reproduction result.

In one embodiment, the feature extraction module 52 may include (not shown in FIG. 5):

a second decomposition submodule, configured to decompose the obtained brain wave data into complex component functions, expressing the brain wave data as a sum of at least one complex component function;

a third determination submodule, configured to use the at least one complex component function as the feature values of the brain wave data.

In one embodiment, the reproduction module 54 may include (not shown in FIG. 5):

a fourth determination submodule, configured to determine the reference feature value of each perceptible object in the correspondences;

a calculation submodule, configured to compute the similarity between the output value and the reference feature value of each perceptible object;

a fifth determination submodule, configured to determine the perceptible object with the highest similarity, so as to generate a dream reproduction result.

It will be understood that the brain wave acquisition module 51, the feature extraction module 52, the output module 53, and the reproduction module 54 are four functionally independent modules that may be configured in the apparatus together, as shown in FIG. 5, or each configured separately; the structure shown in FIG. 5 should therefore not be construed as limiting the solutions of the embodiments of this specification.

For the implementation of the functions and roles of the individual modules of the apparatus, see the implementation of the corresponding steps of the dream reproduction method described above, not repeated here.
Corresponding to the embodiments of the method for constructing a dream reproduction model described above, the embodiments of this specification also provide a computer device comprising at least a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the aforementioned method for constructing a dream reproduction model. The method at least includes: obtaining at least one set of correspondences between perceptible objects and the brain wave data of the user when perceiving those perceptible objects; performing feature extraction on each set of correspondences to obtain a training sample set, in which each training sample takes the extracted brain wave feature values as input values and the extracted perceptible-object feature values as label values; and training the training samples with a supervised learning algorithm to obtain a dream reproduction model that takes brain wave feature values as input values and perceptible-object feature values as output values.

Corresponding to the embodiments of the dream reproduction method described above, the embodiments of this specification also provide a computer device comprising at least a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the aforementioned dream reproduction method. The method at least includes: obtaining the user's brain wave data in the sleep state; performing feature extraction on the obtained brain wave data to get its feature values; inputting the obtained feature values into the dream reproduction model to obtain the corresponding output values; and determining, from the correspondences, the perceptible object with the highest similarity to the output value, so as to generate a dream reproduction result.
FIG. 6 is a more specific schematic diagram of the hardware structure of a computing device provided by the embodiments of this specification. The device may include: a processor 610, a memory 620, an input/output interface 630, a communication interface 640, and a bus 650, with the processor 610, memory 620, input/output interface 630, and communication interface 640 communicating with one another inside the device via the bus 650.

The processor 610 may be implemented as a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute the relevant programs so as to implement the technical solutions provided by the embodiments of this specification.

The memory 620 may be implemented as ROM (Read-Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, and so on. The memory 620 may store an operating system and other applications; when the technical solutions provided by the embodiments of this specification are implemented in software or firmware, the relevant program code is stored in the memory 620 and called and executed by the processor 610.

The input/output interface 630 connects input/output modules for information input and output. The input/output modules may be built into the device as components (not shown in FIG. 6) or externally attached to the device to provide the corresponding functions. Input devices may include keyboards, mice, touch screens, microphones, and various sensors; output devices may include displays, speakers, vibrators, and indicator lights.

The communication interface 640 connects a communication module (not shown in FIG. 6) to enable communication between this device and other devices. The communication module may communicate by wired means (e.g. USB or network cable) or wirelessly (e.g. mobile network, WiFi, or Bluetooth).

The bus 650 provides a path for transferring information between the components of the device (e.g. the processor 610, memory 620, input/output interface 630, and communication interface 640).

It should be noted that although only the processor 610, memory 620, input/output interface 630, communication interface 640, and bus 650 are shown for the above device, in a specific implementation the device may also include other components necessary for normal operation. Moreover, those skilled in the art will appreciate that the device may also contain only the components necessary to implement the solutions of the embodiments of this specification, rather than all the components shown in the figure.
Corresponding to the embodiments of the method for constructing a dream reproduction model described above, the embodiments of this specification also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the aforementioned method for constructing a dream reproduction model. The method at least includes: obtaining at least one set of correspondences between perceptible objects and the brain wave data of the user when perceiving those perceptible objects; performing feature extraction on each set of correspondences to obtain a training sample set, in which each training sample takes the extracted brain wave feature values as input values and the extracted perceptible-object feature values as label values; and training the training samples with a supervised learning algorithm to obtain a dream reproduction model that takes brain wave feature values as input values and perceptible-object feature values as output values.

Corresponding to the embodiments of the dream reproduction method described above, the embodiments of this specification also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the aforementioned dream reproduction method. The method at least includes: obtaining the user's brain wave data in the sleep state; performing feature extraction on the obtained brain wave data to get its feature values; inputting the obtained feature values into the dream reproduction model to obtain the corresponding output values; and determining, from the correspondences, the perceptible object with the highest similarity to the output value, so as to generate a dream reproduction result.

Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
From the description of the implementations above, those skilled in the art can clearly understand that the embodiments of this specification can be implemented by means of software plus the necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the embodiments of this specification, in essence or in the part contributing to the prior art, can be embodied in the form of a software product, which may be stored in a storage medium such as ROM/RAM, a magnetic disk, or an optical disc and which includes a number of instructions causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the various embodiments of this specification, or in certain parts of those embodiments.

The systems, apparatuses, modules, or units illustrated in the above embodiments may be implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may specifically take the form of a personal computer, laptop, cellular phone, camera phone, smartphone, personal digital assistant, media player, navigation device, e-mail device, game console, tablet, wearable device, or any combination of these devices.

The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on what distinguishes it from the others. In particular, since the apparatus embodiments are essentially similar to the method embodiments, they are described relatively simply, and the relevant points can be found in the description of the method embodiments. The apparatus embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separate, and when implementing the solutions of the embodiments of this specification, the functions of the modules may be realized in one or more pieces of software and/or hardware. Some or all of the modules may also be selected as actually needed to achieve the purpose of the solution of an embodiment, which a person of ordinary skill in the art can understand and implement without creative effort.

The above are only specific implementations of the embodiments of this specification. It should be pointed out that a person of ordinary skill in the art may make several improvements and refinements without departing from the principles of the embodiments of this specification, and these improvements and refinements shall also be regarded as falling within the scope of protection of the embodiments of this specification.
The brain wave sensor 120 is worn on the head of the user 110 to collect brain wave data of the user 110 and integrate The collected brainwave data is sent to the computer 130, specifically: when the user 110 is awake, the brainwave sensor 120 collects that the user 110 perceives a perceptible object, for example, when the user 110 sees an image, the user The brain wave data of 110, the brain wave data and the relevant information of the perceptible object corresponding to the brain wave data are sent to the computer 130, and the computer 130 performs training based on the received brain wave data and the relevant information of the perceptible object. A dream reproduction model is obtained. The dream reproduction model can take brain wave data-related information as input and perceivable object-related information as output. Those skilled in the art can understand that in order to obtain a dream reproduction model, several samples are needed, that is, several pieces of brain wave data generated when users perceive different perceptible objects need to be collected. After training to obtain the dream reproduction model, the brain wave sensor 120 can be used to collect brain wave data generated by the user in the sleep state, and then the brain wave sensor 120 sends the brain wave data to the computer 130. The computer 130 Then, based on the dream reproduction model, the relevant information of the perceivable object corresponding to the brain wave data can be output, and then the dream reproduction result can be generated based on the relevant information of the perceivable object. After the user 110 wakes up, he can view the above-mentioned dream reproduction result through the computer 130 to realize the "relive" the dream. Based on the application scenario shown in FIG. 1, this specification shows that the following embodiments describe the construction of a dream reproduction model and the realization of dream reproduction based on the dream reproduction model. 
First, the description is made from the aspect of the construction of the dream reproduction model: Please refer to Fig. 2, which is a flowchart of the method for constructing the dream reproduction model shown in an exemplary embodiment of this specification, which may include the following steps: Step 202: Obtain The at least one group includes the correspondence between the perceptible object and the brain wave data of the user when the user perceives the perceptible object. In the embodiments of this specification, the perceptible object can be a single image or an image frame intercepted from a video. Those skilled in the art can understand that the essence of a single image and an image frame are both It is an image. Therefore, for the convenience of description, in the embodiments of this specification, the perceptible object may be an image. In one embodiment, a set of perceptible objects may be preset, for example, the set includes 1000 different images, and each perceptible object in the set of perceptible objects is provided to the user 110 in turn, for example, in a specific environment Next, each perceptible object is played in the form of a slide show, and when the perceptible object is provided to the user 110, the brain wave data of the user 110 when the perceptible object is sensed is synchronously collected. Through this kind of processing, every time brain wave data is collected, a correspondence between the perceptible object and the brain wave data when the user 110 perceives the perceptible object can be obtained, for example, 1000 such correspondences are obtained. In one embodiment, a perceptible object may be provided to the user 110 according to a preset rule, for example, every 5 seconds, and the entire process from providing the first perceptible object to the last perceptible object is continuously collected. The brainwave data of user 110 is then extracted according to the above-mentioned preset rules. 
For example, according to the collection time of brainwave data, a segment of brainwave data is intercepted every 5 seconds, and then brainwave data and perceptible data are created. Correspondence between objects. Through this kind of processing, it is also possible to finally obtain multiple correspondences between the perceptible object and the brain wave data when the user 110 perceives the perceptible object. In an embodiment, it is also possible to provide a video to the user 110, and continuously collect the brainwave data of the user 110 during the entire process of the user 110 watching the video, and then, at the same time interval, capture the pictures in the video. Image frames and brain wave data, and then the correspondence between brain wave data and perceivable objects can be established. It should be noted that the two embodiments described above are only used as two optional implementations. In practical applications, there may be other ways to obtain at least one set of information including the perceptible object and the user’s perception of the perceptible object. Correspondence of brain wave data, for example, when the user 110 is performing activities according to the voluntary will, the brain wave data of the user 110 can be collected, and the retinal imaging of the user 110 can be synchronously collected. Based on the collection time, the brain wave data and the retinal imaging can be established Corresponding relationship, the retinal imaging can be equivalent to the perceptible object perceived by the user. Those skilled in the art can understand that, according to the above description, the brainwave sensor 120 illustrated in FIG. 1 may also have the function of collecting retinal imaging, or be composed of another separate wearable smart chip (not shown in FIG. 1 Not shown) is responsible for collecting retinal imaging of the user 110, which is not limited in the embodiment of this specification. 
Step 204: Perform feature extraction on each set of correspondences respectively to obtain a training sample set, where each training sample takes the characteristic value of the extracted brain wave data as the input value, and the characteristic value of the extracted perceptible object Label value. In the embodiment of the present specification, for each set of correspondences obtained in step 202, feature extraction is performed on the perceptible objects in each set of correspondences and the brain wave data when the user perceives the perceptible objects to obtain a training sample set. Each training sample in the training sample set includes the feature value of the extracted brain wave data and the feature value of the extracted perceptible object, and based on the related description of the application scenario shown in Figure 1 above, the actual dream is repeated In the current process, the brainwave data collected by the user 110 in the sleep state is used to determine the perceptible objects the user 110 perceives. Therefore, each of the above training samples takes the characteristic value of the brainwave data as the input value to extract The characteristic value of the obtained perceptible object is the label value. Extract the characteristic value of brain wave data: In one embodiment, through the mathematical concept of complex change, it can be known that the real signal of any frequency can be expressed as the sum of a series of periodic functions, and the real signal is expressed as the process of a series of periodic function sums It is the process of analyzing the real signal, and each periodic function is equivalent to the component of the real signal. Based on this, this specification proposes to decompose the brain wave data by complex variation, for example, using Fourier transform to complex the brain wave data. 
Variational decomposition, the brain wave data is expressed as the sum of at least one complex variable function, and the at least one complex variable function can be used as the characteristic value of the brain wave data. For example, the characteristic value of the mentioned brain wave data is (a 1 f 1 (sinx), a 2 f 2 (sinx), a 3 f 3 (sinx)). It should be noted that the method of extracting the characteristic value of brain wave data described above is only an optional implementation method. In practical applications, the characteristic value of brain wave data can also be extracted through other methods, for example, through correlation analysis , AR parameter estimation, Butterworth low-pass filtering, genetic algorithm and other methods to extract the eigenvalues of brainwave data, the specific type of the extracted eigenvalues can be determined by the actual algorithm, for example, the Butterworth low-pass filtering algorithm is used to extract The obtained eigenvalue is the square value of the signal amplitude, and the eigenvalue extracted by the AR parameter estimation algorithm is the power spectral density, which will not be described one by one in the embodiments of this specification. Extract the characteristic value of a perceptible object: Taking the perceptible object as an image as an example, in one embodiment, the color statistics of the perceptible object can be performed to obtain the number of pixels corresponding to each color value in the perceptible object. The number of pixels obtained is expressed as a 2 N -dimensional vector, where N is the number of color bits of the image, that is, the 2 N -dimensional vector can be used as the feature value of the perceptible object, for example, the extracted feature The value is (y 1 , y 2 , y 3 ,...y 2^N ). 
Further, considering that the number of color bits in different images may be different, such as 8-bit images and 16-bit images, the dimensions of the extracted feature values are also different, in order to train the training samples later Unification and regularization can map the color statistics results of images with different color bits to a unified vector space. The "unity" mentioned here means that the dimensions of the vectors obtained based on the color statistics results are the same. In addition, it should be noted that the greater the dimensionality of the vector, the higher the complexity of subsequent training of the training samples, and the greater the amount of calculation. Therefore, in the embodiments of this specification, the characteristics of the object are guaranteed to be perceivable. When the fineness of the value meets user expectations, a vector space with a smaller dimension can be set as much as possible. It should be noted that in practical applications, for images with different color bit numbers, you can first set these images to the same color bit number, and then perform feature extraction according to the above description, and after obtaining the color statistics results , There is no need to perform the step of mapping each color statistical result to a unified vector space. Step 206: Use a supervised learning algorithm to train the training samples to obtain a dream reproduction model. The dream reproduction model uses the characteristic value of the brain wave data as the input value and the characteristic value of the perceptible object as the output value. In the embodiment of this specification, a supervised learning algorithm can be used to train the training samples obtained in step 204 to obtain a dream reproduction model. The dream reproduction model uses the characteristic value of brain wave data as the input value to be perceptible The characteristic value of the object is used as the output value. 
It is understandable that the dream reproduction model obtained by training can essentially be understood as the functional relationship between the input value and the output value. The output value will be affected by all or part of the input value. Therefore, the output value and the input value The functional relationship between can be as follows: Among them, x 1 , x 2 ,...x M represent M input values, that is, the feature value of M brainwave data, and y represents the output value, that is, the feature value of the perceptible object, which can be specifically the perceptible object The proportional relationship between the number of pixels corresponding to each color value. It should be noted that the form of the dream reproduction model can be selected according to actual training requirements, such as a linear regression model, a logistic regression model, and so on. The embodiments of this specification do not limit the selection of models and specific training algorithms. In addition, it should be noted that different users may have different perception abilities of the same perceivable object. Therefore, in the embodiments of this specification, it is proposed to construct different dream reproduction models for different users; further, the same user is in psychological and physical conditions. In different states, the perception ability of the same perceivable object may be different. Therefore, in the embodiment of this specification, it is also proposed to construct different dream reproduction models for the same user in different time periods. In addition, in practical applications, there may also be other achievable ways, for example, the same dream reproduction model is constructed for different users, which is not specifically limited in the embodiment of this specification. 
As can be seen from the above-mentioned embodiments, the technical solutions provided by the embodiments of the present specification obtain at least one set of correspondences containing the perceptible objects and the brainwave data of the user when perceiving the perceptible objects, and then perform feature extraction on each set of correspondences. , Obtain the training sample set, where each training sample takes the characteristic value of the extracted brain wave data as the input value, takes the characteristic value of the perceptible object as the label value, and uses the supervised learning algorithm to train the training sample to obtain Dream reproduction model. The dream reproduction model takes the characteristic value of brain wave data as input value and the characteristic value of perceivable object as output value. Later, through this dream reproduction model, the user’s sleep state can be used The brainwave data reproduces the user's dream and satisfies the user experience. So far, complete the description of the construction of the dream reproduction model. Secondly, the description will be made from the aspect of realizing the dream reproduction based on the dream reproduction model: Please refer to Fig. 3, which is a flowchart of the dream reproduction method shown in an exemplary embodiment of this specification, which may include the following steps: Step 302: Obtain Brainwave data of users in sleep state. In the embodiment of the present specification, the brain wave data of the user in the sleep state can be obtained through the brain wave sensor 120 illustrated in FIG. 1 according to a preset rule, such as every one minute or every two minutes. Step 304: Perform feature extraction on the obtained brain wave data to obtain the feature value of the brain wave data. For a detailed description of this step, reference may be made to the related description in step 204 in the embodiment shown in FIG. 
2, which will not be described in detail here. Step 306: Input the characteristic value of the obtained brain wave data into the dream reproduction model to obtain the corresponding output value. It can be seen from the dream reproduction model described in the embodiment shown in FIG. 2 that, in this step, the characteristic value of the brainwave data extracted in step 304 can be input into the dream reproduction model to obtain the corresponding output value. The value can be a characteristic value of a perceptible object. Step 308: From the corresponding relationship, determine the perceptible object with the highest similarity to the output value to generate a dream reproduction result. In the embodiment of this specification, the feature value of each perceptible object in the training sample set can be calculated for similarity with the output value in step 306 to determine the feature value of the perceptible object with the highest similarity to the output value, and then, Then, in the corresponding relationship described in the embodiment shown in FIG. 2, the perceptible object with the highest similarity to the output value can be determined, and the dream reproduction result can be generated based on the determined perceptible object. The specific process of obtaining the characteristic value of each perceivable object in the training sample set can refer to the related description in the embodiment shown in FIG. 2, which is not described in detail here. Further, the dream reproduction result can be shown to the user, for example, a certain number of images are played in the form of a slide show according to the sequence of the acquisition time of the brain wave data. Those skilled in the art can understand that there may be one or more perceptible objects with the highest similarity to the output value, which is not limited in the embodiment of the present specification. 
In the above description, the specific method for calculating the similarity between the output value and the characteristic value of the perceivable object may be Euclidean distance algorithm, cosine similarity calculation algorithm, etc., which are not limited in the embodiment of this specification. As can be seen from the above-mentioned embodiments, the technical solution provided by the embodiments of this specification is to obtain the brain wave data of the user in the sleep state, perform feature extraction on the brain wave data, obtain the characteristic value of the brain wave data, and input the characteristic value into the dream state. Reproduce the model to obtain the corresponding output value, and then determine the perceivable object with the highest similarity to the output value from the pre-acquired correspondence between the perceptible object and the brain wave data when the user perceives the perceptible object , In order to generate the dream recurring result, so that the user can "relive" the dream based on the dream recurring result. So far, the related description of the realization of dream reproduction based on the dream reproduction model is completed. Corresponding to the foregoing embodiment of the method for constructing a dream reproduction model, the embodiment of this specification also provides a device for constructing a dream reproduction model. See FIG. 4, which shows the construction of a dream reproduction model according to an exemplary embodiment of this specification. A block diagram of an embodiment of the device. The device may include: a data acquisition module 41, a sample acquisition module 42, and a sample training module 43. 
Among them, the data acquisition module 41 can be used to obtain at least one set of correspondences between the perceptible object and the brain wave data of the user when the user perceives the perceptible object; The corresponding relationship is grouped to perform feature extraction to obtain a training sample set, wherein each training sample uses the extracted feature value of the brain wave data as an input value, and the extracted feature value of the perceptible object is labeled The sample training module 43 can be used to train the training samples using a supervised learning algorithm to obtain a dream reproduction model. The dream reproduction model takes the characteristic value of the brain wave data as the input value and can be The characteristic value of the sensing object is used as the output value. In an embodiment, the data acquisition module 41 may include (not shown in FIG. 4): a providing sub-module for sequentially providing each perceptible object in the preset set of perceptible objects to the user The collection sub-module is used to synchronously collect brain wave data of the user when the user perceives the perceivable object when the perceivable object is provided to the user. In an embodiment, the sample acquisition module 42 may include (not shown in FIG. 4): a first decomposition sub-module for complex decomposition of brain wave data in each group of the corresponding relationship , Expressing the brain wave data as a sum of at least one complex variable function; a first determining sub-module, configured to use the at least one complex variable function as a feature value of the brain wave data. In an embodiment, the perceivable object is an image, and the sample acquisition module 42 may include (not shown in FIG. 
4): a statistical sub-module, used to perform color statistics on the image in each group of the correspondences to obtain the number of pixels corresponding to each color value in the image; and a second determining sub-module, used to express the obtained pixel counts as a 2^N-dimensional vector, where N is the number of color bits of the image.

In an embodiment, the device may further include (not shown in FIG. 4): a mapping module, used to map the color statistics results of images with different numbers of color bits into a unified vector space.

In an embodiment, different dream reproduction models are constructed for different users.

It is understandable that the data acquisition module 41, the sample acquisition module 42, and the sample training module 43, as three functionally independent modules, can be configured in the device at the same time as shown in FIG. 4, or can each be configured in the device separately; therefore, the structure shown in FIG. 4 should not be construed as a limitation on the embodiments of this specification. In addition, the implementation process of the functions and roles of each module in the above device is detailed in the implementation process of the corresponding steps in the above method for constructing the dream reproduction model, and will not be repeated here.

Corresponding to the foregoing embodiment of the dream reproduction method, the embodiments of this specification also provide a dream reproduction device. See FIG. 5, which is a block diagram of an embodiment of the dream reproduction device according to an exemplary embodiment of this specification. The device may include: a brain wave acquisition module 51, a feature extraction module 52, an output module 53, and a reproduction module 54.
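The color statistics sub-modules described above (a 2^N-dimensional pixel-count vector, plus a mapping of images with different color bits into a unified vector space) can be sketched as follows. The names are hypothetical, and zero-padding into the largest bit depth is only one assumed choice of unified mapping; the embodiments do not specify it:

```python
def color_histogram(pixel_values, color_bits):
    """Count pixels per color value; the result is a 2**N-dimensional
    vector, N being the number of color bits of the image."""
    histogram = [0] * (2 ** color_bits)
    for value in pixel_values:
        histogram[value] += 1
    return histogram

def map_to_unified_space(histogram, target_bits):
    """Map the histogram of a lower-bit image into the vector space of a
    target_bits image by zero-padding the missing color values, so that
    images of different color depths share one feature space."""
    unified = [0] * (2 ** target_bits)
    unified[:len(histogram)] = histogram
    return unified
```

For a 2-bit image with pixel values `[0, 1, 1, 3]`, `color_histogram` returns the 4-dimensional vector `[1, 2, 0, 1]`, which `map_to_unified_space(..., 3)` pads to 8 dimensions.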
Among them, the brain wave acquisition module 51 can be used to obtain the brain wave data of the user in the sleep state; the feature extraction module 52 can be used to perform feature extraction on the obtained brain wave data to obtain its feature value; the output module 53 can be used to input the obtained feature value of the brain wave data into the dream reproduction model to obtain the corresponding output value; and the reproduction module 54 can be used to determine, from the correspondences, the perceivable object with the highest similarity to the output value, so as to generate the dream reproduction result.

In an embodiment, the feature extraction module 52 may include (not shown in FIG. 5): a second decomposition sub-module, used to perform complex decomposition on the obtained brain wave data, expressing the brain wave data as a sum of at least one complex variable function; and a third determining sub-module, used to take the at least one complex variable function as the feature value of the brain wave data.

In an embodiment, the reproduction module 54 may include (not shown in FIG. 5): a fourth determining sub-module, used to determine the reference feature value of each perceivable object in the correspondences; a calculation sub-module, used to respectively calculate the similarity between the output value and the reference feature value of each perceivable object; and a fifth determining sub-module, used to determine the perceivable object with the highest similarity, so as to generate the dream reproduction result.

It is understandable that the brain wave acquisition module 51, the feature extraction module 52, the output module 53, and the reproduction module 54, as four functionally independent modules, can be configured in the device at the same time as shown in FIG. 5, or can each be configured in the device separately; therefore, the structure shown in FIG.
5 should not be construed as a limitation on the embodiments of this specification. In addition, the implementation process of the functions and roles of each module in the above device is detailed in the implementation process of the corresponding steps in the above dream reproduction method, and will not be repeated here.

Corresponding to the foregoing embodiment of the method for constructing a dream reproduction model, the embodiments of this specification also provide a computer device, which at least includes a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the program, implements the foregoing method for constructing a dream reproduction model. The method at least includes: obtaining at least one set of correspondences between perceivable objects and the brain wave data produced when the user perceives them; performing feature extraction on each group of the correspondences to obtain a training sample set, wherein each training sample takes the extracted feature value of the brain wave data as the input value and the extracted feature value of the perceivable object as the label value; and training the training samples with a supervised learning algorithm to obtain the dream reproduction model, which takes the feature value of brain wave data as the input value and the feature value of a perceivable object as the output value.
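The "complex decomposition" used for feature extraction in the method just summarized is not pinned down by the embodiments; one plausible reading is a discrete Fourier decomposition, under which a minimal sketch (hypothetical names, NumPy-based) might keep the strongest complex coefficients as the feature value:

```python
import numpy as np

def brainwave_features(samples, n_components=8):
    """Express a brain wave signal as a sum of complex components via
    the discrete Fourier transform, keeping the real and imaginary
    parts of the n_components strongest coefficients as the feature
    value (a vector of length 2 * n_components)."""
    spectrum = np.fft.rfft(np.asarray(samples, dtype=float))
    # indices of the strongest frequency components, kept in ascending order
    strongest = sorted(np.argsort(np.abs(spectrum))[-n_components:])
    return np.array([part for i in strongest
                     for part in (spectrum[i].real, spectrum[i].imag)])
```

Any other decomposition into complex variable functions would fit the same role, as long as the same extraction is applied at training time and at reproduction time.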
Corresponding to the foregoing embodiment of the dream reproduction method, the embodiments of this specification also provide a computer device, which at least includes a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the program, implements the foregoing dream reproduction method. The method at least includes: obtaining the brain wave data of the user in the sleep state; performing feature extraction on the obtained brain wave data to obtain its feature value; inputting the obtained feature value of the brain wave data into the dream reproduction model to obtain the corresponding output value; and determining, from the correspondences, the perceivable object with the highest similarity to the output value, so as to generate the dream reproduction result.

FIG. 6 shows a more specific hardware structure diagram of a computing device provided by an embodiment of this specification. The device may include a processor 610, a memory 620, an input/output interface 630, a communication interface 640, and a bus 650. The processor 610, the memory 620, the input/output interface 630, and the communication interface 640 are communicatively connected to one another within the device through the bus 650.

The processor 610 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is used to execute related programs to realize the technical solutions provided in the embodiments of this specification.

The memory 620 may be implemented in the form of ROM (Read-Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, and so on. The memory 620 can store an operating system and other application programs.
When the technical solutions provided in the embodiments of this specification are implemented through software or firmware, the related program code is stored in the memory 620 and called and executed by the processor 610.

The input/output interface 630 is used to connect an input/output module to realize information input and output. The input/output module can be configured in the device as a component (not shown in FIG. 6), or can be externally connected to the device to provide the corresponding functions. Input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, and so on; output devices may include a display, a speaker, a vibrator, an indicator light, and so on.

The communication interface 640 is used to connect a communication module (not shown in FIG. 6) to realize communication interaction between this device and other devices. The communication module can realize communication through wired means (such as USB or a network cable) or through wireless means (such as a mobile network, Wi-Fi, or Bluetooth).

The bus 650 includes a path for transmitting information between the various components of the device (such as the processor 610, the memory 620, the input/output interface 630, and the communication interface 640).

It should be noted that although the above device only shows the processor 610, the memory 620, the input/output interface 630, the communication interface 640, and the bus 650, in a specific implementation process the device may also include other components necessary for normal operation. In addition, those skilled in the art can understand that the above device may also include only the components necessary to implement the solutions of the embodiments of this specification, and need not include all the components shown in the figure.
Corresponding to the foregoing embodiment of the method for constructing a dream reproduction model, the embodiments of this specification also provide a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the foregoing method for constructing a dream reproduction model is implemented. The method at least includes: obtaining at least one set of correspondences between perceivable objects and the brain wave data produced when the user perceives them; performing feature extraction on each group of the correspondences to obtain a training sample set, wherein each training sample takes the extracted feature value of the brain wave data as the input value and the extracted feature value of the perceivable object as the label value; and training the training samples with a supervised learning algorithm to obtain the dream reproduction model, which takes the feature value of brain wave data as the input value and the feature value of a perceivable object as the output value.

Corresponding to the foregoing embodiment of the dream reproduction method, the embodiments of this specification also provide a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the foregoing dream reproduction method is implemented. The method at least includes: obtaining the brain wave data of the user in the sleep state; performing feature extraction on the obtained brain wave data to obtain its feature value; inputting the obtained feature value of the brain wave data into the dream reproduction model to obtain the corresponding output value; and determining, from the correspondences, the perceivable object with the highest similarity to the output value, so as to generate the dream reproduction result.
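The supervised learning algorithm used to obtain the dream reproduction model is likewise left open by the embodiments. Purely as a minimal sketch under stated assumptions (a linear least-squares model, hypothetical names), the mapping from brain wave feature values (input values) to perceivable-object feature values (label values) could be fitted as follows:

```python
import numpy as np

def train_dream_model(wave_features, object_features):
    """Fit a linear map W so that x @ W approximates the label value y
    for each training sample (x = brain wave feature value,
    y = perceivable-object feature value). Any supervised learning
    algorithm could be substituted here; linear least squares is only
    the simplest choice."""
    X = np.asarray(wave_features, dtype=float)    # (n_samples, n_wave_dims)
    Y = np.asarray(object_features, dtype=float)  # (n_samples, n_object_dims)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return lambda x: np.asarray(x, dtype=float) @ W  # the "dream reproduction model"
```

The returned callable plays the role of the dream reproduction model: at reproduction time it maps a sleep-state feature value to an output value, which is then matched against the reference feature values of the perceivable objects.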
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can realize information storage by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, CD-ROM, digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape and magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.

From the description of the above embodiments, those skilled in the art can clearly understand that the embodiments of this specification can be implemented by means of software plus a necessary general hardware platform. Based on this understanding, the technical solutions of the embodiments of this specification can be embodied in the form of a software product, which can be stored in a storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes a number of instructions to make a computer device (which can be a personal computer, a server, a network device, etc.) execute the methods described in the various embodiments of this specification or in some parts of the embodiments.

The systems, devices, modules, or units explained in the above embodiments may be implemented by computer chips or entities, or by products with certain functions. A typical implementation device is a computer.
The specific form of the computer can be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an e-mail transceiver device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.

The various embodiments in this specification are described in a progressive manner; for the same or similar parts of the various embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the device embodiments are basically similar to the method embodiments, their description is relatively simple, and for the relevant parts, reference may be made to the partial description of the method embodiments.

The device embodiments described above are merely illustrative, and the modules described as separate components may or may not be physically separated. When implementing the solutions of the embodiments of this specification, the functions of the modules can be implemented in one or more pieces of software and/or hardware, and some or all of the modules can also be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative work.

The above are only specific implementations of the embodiments of this specification. It should be pointed out that, for those of ordinary skill in the art, several improvements and modifications can be made without departing from the principles of the embodiments of this specification, and these improvements and modifications should also fall within the protection scope of the embodiments of this specification.