Speaker: Zhou Tao, Professor, Academy of Mathematics and Systems Science, Chinese Academy of Sciences
Time: 10:00, October 28, 2022
Venue: Tencent Meeting (ID: 855 981 699)
Host: School of Mathematics and Physics
Speaker bio: Zhou Tao is a professor at the Academy of Mathematics and Systems Science, Chinese Academy of Sciences. He previously held a postdoctoral position at EPFL (École Polytechnique Fédérale de Lausanne) in Switzerland. His research interests include uncertainty quantification, stochastic optimal control, and parallel-in-time algorithms. He has published over 70 papers in leading international journals, including SIAM Review, SINUM, and JCP. He serves on the editorial boards of SIAM J. Sci. Comput., Commun. Comput. Phys., and J. Sci. Comput., and is an associate editor of the International Journal for Uncertainty Quantification.
內(nèi)容介紹:Deep neural networks have emerged as an effective tool for solving PDEs. Recent research has demonstrated, however, that the performance of DNNs-based approaches (such as PINNs) can vary dramatically with different sampling procedures. For instance, a fixed set of (prior chosen) training points may fail to capture the effective solution region (especially for problems with singularities). To overcome this issue, we present in this talk an adaptive strategy -- failure-informed PINNs (FI-PINNs), which is inspired by the viewpoint of re-liability analysis. The basic idea is to define a failure probability by using the residual. Then, with the aim of placing more samples in the failure region, the proposed FI-PINNs employs a failure-informed enrichment technique to incrementally add new collocation points to the training set adaptively. Compared to the conventional PINNs, FI-PINNs can significantly improve the accuracy. We prove rigorous bounds on the error incurred by FI-PINNs and illustrate its perfor-mance through several problems。
