Adjoint operators $T_K$ and $T_{K^{*}}$ in BEM
In our last article, we introduced the four integral operators appearing in the boundary integral equations of BEM. Among them, the two compact operators \(T_K\) and \(T_{K^{*}}\) appear in Fredholm integral equations of the second kind and become strongly singular when the model geometry contains sharp corners. This article will show that
- \(T_K\) and \(T_{K^{*}}\) are a pair of adjoint operators in the variational formulation of the boundary integral equations;
- when they are represented as matrices via Galerkin discretization, one is the conjugate transpose of the other.
Definition of dual operators
First, let's review the definition of the dual operator, on which that of the adjoint operator depends.
Definition (Dual operators) Let \(X\) and \(Y\) be locally convex spaces and \(X_s'\) and \(Y_s'\) be their strong dual spaces. Let \(T\) be a linear operator from \(D(T) \subseteq X\) into \(Y\). A linear operator \(T'\) is defined by \(T'y' = x'\), where for a given \(y' \in Y_s'\) the element \(x' \in X_s'\) satisfies
\[
\langle Tx, y' \rangle = \langle x, x' \rangle \quad \text{for all $x \in D(T)$}.
\]
The element \(x'\) is uniquely determined by \(y'\) through this relation if and only if \(D(T)\) is dense in \(X\). Then \(T'\) is called the dual operator of \(T\).
When finite dimensional spaces are considered, i.e. \(X = \mathbb{C}^n\), \(Y = \mathbb{C}^m\), \(X_s' = \mathbb{C}^n\) and \(Y_s' = \mathbb{C}^m\), the elements of these spaces are represented as column vectors \(x_h, x_h' \in \mathbb{C}^n\) and \(y_h' \in \mathbb{C}^m\), and the linear operators as matrices \(T_h \in \mathbb{C}^{m \times n}\) and \(T_h' \in \mathbb{C}^{n \times m}\). The application of a vector in the dual space to one in the original space can then be written as a matrix product, where the superscript \(^T\) denotes matrix or vector transpose:
\[
\begin{aligned}
\langle T_h x_h, y_h' \rangle &= y_h'^{T} T_h x_h \\
\langle x_h, T_h' y_h' \rangle &= (T_h' y_h')^T x_h = y_h'^T T_h'^T x_h
\end{aligned}.
\]
Because the two expressions above are equal for all \(x_h\) and \(y_h'\), we have \(T_h' = T_h^T\); i.e. in the finite dimensional case, the dual operator is the transpose of the original operator.
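As a quick numerical illustration (a minimal sketch with random data, unrelated to any specific BEM operator), the following checks the defining identity \(\langle T_h x_h, y_h' \rangle = \langle x_h, T_h^T y_h' \rangle\) under the bilinear pairing \(\langle u, v \rangle = v^T u\):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
T = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))   # T_h in C^{m x n}
x = rng.normal(size=n) + 1j * rng.normal(size=n)              # x_h in C^n
yp = rng.normal(size=m) + 1j * rng.normal(size=m)             # y_h' in C^m

pairing = lambda u, v: v @ u          # bilinear duality pairing <u, v> = v^T u (no conjugation)

lhs = pairing(T @ x, yp)              # <T_h x_h, y_h'>
rhs = pairing(x, T.T @ yp)            # <x_h, T_h^T y_h'>
assert np.isclose(lhs, rhs)           # the dual operator is the plain transpose
```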
Definition of adjoint operators
Definition (Adjoint operators) Let \(X\), \(Y\) be Hilbert spaces and \(T\) a linear operator defined on \(D(T) \subseteq X\) into \(Y\), with \(D(T)\) dense in \(X\). Let \(T'\) be the dual operator of \(T\), which satisfies
\[
\langle Tx ,y' \rangle = \langle x, T'y' \rangle \quad (\forall x \in D(T), y' \in D(T')).
\]
Let \(J_X\) be the one-to-one, norm-preserving, conjugate linear correspondence \(X_s' \ni f \leftrightarrow x_f \in X\) given by the Riesz representation theorem, and let \(J_Y\) be defined similarly as the correspondence \(Y_s' \ni g \leftrightarrow y_g \in Y\). Over the complex field, in the finite dimensional coordinate representation, the mappings \(J_X\) and \(J_Y\) as well as their inverses \(J_X^{-1}\) and \(J_Y^{-1}\) amount to taking the complex conjugate of the coordinate vector.
Then we have
\[
\langle Tx, y' \rangle = y'(Tx) = (Tx, J_Y y') \; \text{and} \; \langle x, T'y' \rangle = (T'y')(x) = (x, J_X T' y')
\]
and
\[
(Tx, J_Y y') = (x, J_X T' y').
\]
Let \(y = J_Y(y')\), hence \(y' = J_Y^{-1}(y)\) and
\[
\begin{equation}
(Tx, y) = (x, J_X T' J_Y^{-1} y).
\label{eq:adjoint-operator-condition}
\end{equation}
\]
When \(Y = X\), let \(T^{*} = J_X T' J_Y^{-1} = J_X T' J_X^{-1}\) and we call it the adjoint operator of \(T\).
Remark
Adjoint operators require the spaces to be Hilbert spaces, while dual operators only require the spaces to be locally convex spaces.
The angle brackets \(\langle \cdot, \cdot \rangle\) represent the application of the second component, which lies in the strong dual space, to the first component, which lies in the original space.
The parentheses \((\cdot, \cdot)\) represent the inner product in the original space, with the second component uniquely determined from an element of the strong dual space via the norm-preserving map \(J_X\) or \(J_Y\). This is ensured by a corollary of the famous Riesz representation theorem. Both are given below for reference.
Theorem (Riesz' representation theorem). Let \(X\) be a Hilbert space and \(f\) be a bounded linear functional on \(X\). Then there exists a uniquely determined vector \(y_f\) of \(X\) such that
\[
\begin{equation}
f(x) = (x, y_f) \quad \text{for all $x \in X$}, \quad \text{and} \quad \norm{f} = \norm{y_{f}}.
\end{equation}
\]
Conversely, any vector \(y \in X\) defines a bounded linear functional \(f_y\) on \(X\) by \(f_y(x) = (x, y)\), with \(\norm{f_y} = \norm{y}\).
Corollary Let \(X\) be a Hilbert space and \(X'\) be its dual space. Then there exists a norm-preserving bijective mapping between \(X\) and \(X'\).
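For example, in \(\mathbb{C}^n\) with the inner product \((x, y) = \sum_{i} x_i \overline{y_i}\), the bounded linear functional \(f(x) = \sum_{i} c_i x_i\) has the Riesz representer \(y_f = \overline{c} = (\overline{c_1}, \cdots, \overline{c_n})^T\), since \((x, \overline{c}) = \sum_i x_i \overline{\overline{c_i}} = \sum_i c_i x_i = f(x)\); this conjugation is exactly why the correspondence \(J_X\) is conjugate linear.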
In the finite dimensional case, equation \eqref{eq:adjoint-operator-condition} can be represented as:
\[
\begin{equation}
\begin{aligned}
(T_h x_h, y_h) &= (x_h, J_X T_h' J_Y^{-1} y_h) = (x_h, \overline{T_h' \overline{y_h}}) \\
&= (x_h, \overline{T_h'} y_h) = (x_h, \overline{T_h^T} y_h) = (x_h, T_h^* y_h)
\end{aligned}
\label{eq:adjoint-operator-finte-dimension-condition}
\end{equation}.
\]
Here the overline represents complex conjugation. It can be seen that, in the finite dimensional case, the adjoint operator is the conjugate (Hermitian) transpose of the original operator.
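The composition \(T_h^{*} = J_X T_h' J_Y^{-1}\) can be traced numerically in the same spirit (again a minimal sketch with random complex data): conjugate the coordinates of \(y_h\), apply the transpose, conjugate the result, and compare against the conjugate transpose under the inner product \((u, v) = \sum_i u_i \overline{v_i}\):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
T = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))   # T_h
x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = rng.normal(size=n) + 1j * rng.normal(size=n)

inner = lambda u, v: np.vdot(v, u)        # (u, v) = sum_i u_i * conj(v_i)

# Adjoint obtained by the composition J_X T' J_Y^{-1}:
# J_Y^{-1} conjugates the coordinates, T' is the transpose, J_X conjugates again.
T_star_y = np.conj(T.T @ np.conj(y))      # = conj(T^T) y = T^H y

assert np.isclose(inner(T @ x, y), inner(x, T_star_y))        # (T x, y) = (x, T* y)
assert np.allclose(T_star_y, T.conj().T @ y)                  # T* is the conjugate transpose
```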
Summary of dual and adjoint operators
In the previous two sections, we presented the definitions of dual and adjoint operators. In the finite dimensional case, the dual operator corresponds to the matrix transpose, while the adjoint operator corresponds to the conjugate (Hermitian) transpose. The relationships between the original and strong dual spaces, together with the operators connecting their elements, can be illustrated by the commutative diagram below.
\[
\require{AMScd}
\begin{CD}
X @>T>> Y \\
@AJ_XAA @AAJ_YA \\
X_s' @<<{T'}< Y_s'
\end{CD}
\]
The adjoint operator \(T^{*}\) is obtained by following the path \(Y \rightarrow Y_s' \rightarrow X_s' \rightarrow X\).
\(T_K\) and \(T_{K^{*}}\) in the variational formulation of boundary integral equations
In our previous article, the boundary integral equations were obtained in the following matrix form:
\[
\begin{equation}
\begin{pmatrix}
\gamma_0[u] \\
\gamma_1[t]
\end{pmatrix} =
\begin{pmatrix}
\frac{1}{2}I - T_K & V \\
D & \frac{1}{2}I + T_{K^{*}}
\end{pmatrix}
\begin{pmatrix}
\gamma_0[u] \\
\gamma_1[t]
\end{pmatrix} \quad (x \in \Gamma).
\label{eq:boundary-integral-equations-in-matrix-form}
\end{equation}
\]
We should note that these two equations hold for all \(x\) on \(\Gamma\). If we use the first row in Equation \eqref{eq:boundary-integral-equations-in-matrix-form} to match the Dirichlet data on \(\Gamma_D\) and the second row to match the Neumann data on \(\Gamma_N\), it is natural to separate the Dirichlet trace \(\gamma_0[u]\) into two parts: the known data \(g_D = \gamma_0[u]\big\vert_{\Gamma_D}\) on \(\Gamma_D\) and the unknown data \(\varphi_N = \gamma_0[u]\big\vert_{\Gamma_N}\) on \(\Gamma_N\). Likewise, the Neumann trace \(\gamma_1[t]\) comprises the known data \(0\) on \(\Gamma_N\) and the unknown data \(t_D = \gamma_1[t]\big\vert_{\Gamma_D}\) on \(\Gamma_D\). In addition, we need to remember that the operator \(\frac{1}{2}I\) in the first row comes from the direct value of the double layer charge density when the observation point approaches the Dirichlet boundary \(\Gamma_D\), so it applies only to \(g_D\), not to \(\varphi_N\). Similarly, the operator \(\frac{1}{2}I\) in the second row applies only to the datum \(0\) on \(\Gamma_N\), not to \(t_D\).
In this way, Equation \eqref{eq:boundary-integral-equations-in-matrix-form} becomes
\[
\begin{equation}
\begin{pmatrix}
-V & T_K \\
T_{K^{*}} & D
\end{pmatrix}
\begin{pmatrix}
t_D \\
\varphi_N
\end{pmatrix}=
\begin{pmatrix}
-\frac{1}{2}g_D - T_K(g_D) \\
-D(g_D)
\end{pmatrix}.
\end{equation}
\]
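As a quick check of the first row (the second row is analogous): restricting the first equation of \eqref{eq:boundary-integral-equations-in-matrix-form} to \(\Gamma_D\), splitting the traces as above and keeping \(\frac{1}{2}I\) acting on \(g_D\) only, we get
\[
g_D = \frac{1}{2} g_D - T_K(g_D + \varphi_N) + V(t_D),
\]
and moving the known terms to the right hand side yields \(-V(t_D) + T_K(\varphi_N) = -\frac{1}{2} g_D - T_K(g_D)\), which is exactly the first row above.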
Now it is clear that the compact operator \(T_K\) maps a function defined on \(\Gamma_N\) to one on \(\Gamma_D\), while \(T_{K^{*}}\) maps a function defined on \(\Gamma_D\) to one on \(\Gamma_N\). This already hints at their adjoint relationship. Before showing this property in detail, however, we clarify the spaces on which \(T_K\) and \(T_{K^{*}}\) operate; for that purpose the following two trace theorems are presented.
Theorem (Dirichlet trace theorem) Let \(\Omega\) be a bounded Lipschitz domain in \(\mathbb{R}^n\). Provided \(1/p < s \leq 1\), the Dirichlet trace operator \(\gamma_0\) defined on \(C^{\infty}(\bar{\Omega})\) has a unique continuous extension as a linear operator from \(W^{s,p}(\Omega)\) onto \(W^{s-1/p,p}(\pdiff\Omega)\). Specifically, when \(s = 1\) and \(p = 2\), we have \(\gamma_{0}: H^1(\Omega) \rightarrow H^{1/2}(\pdiff\Omega)\).
Theorem (Neumann trace theorem) Let \(\Omega\) be a bounded Lipschitz domain in \(\mathbb{R}^3\) with unit outward normal \(\vect{n}\). Then the Neumann trace operator \(\gamma_1\) defined on \((C^{\infty}(\bar{\Omega}))^3\) can be extended by continuity to a continuous linear map from \(H(\divergence; \Omega)\) onto \(H^{-1/2}(\pdiff\Omega)\).
If we assume \(\nabla u \in H(\divergence; \Omega)\), so that the Neumann trace \(t = \pdiff_{\vect{n}}u\) is well defined, which implies a finite excitation source charge in the domain, i.e. \(-\triangle u = f\) is square integrable; and \(u \in H^1(\Omega)\), which implies finite electric field energy in the domain, i.e. \(-\nabla u = \boldsymbol{E}\) is square integrable; then according to the above trace theorems, we have \(\varphi_N \in H^{1/2}(\Gamma_N)\), \(t_D \in H^{-1/2}(\Gamma_D)\) and
\[
\begin{equation}
\begin{aligned}
& T_K: H^{1/2}(\Gamma_N) \rightarrow H^{1/2}(\Gamma_D) \\
& T_{K^{*}}: H^{-1/2}(\Gamma_D) \rightarrow H^{-1/2}(\Gamma_N)
\end{aligned},
\end{equation}
\]
where \(H^{-1/2}\) is the dual space of \(H^{1/2}\).
By selecting test functions \(\psi \in H^{-1/2}(\Gamma_D)\) and \(\xi \in H^{1/2}(\Gamma_N)\), we can obtain the variational formulation of the boundary integral equations:
\[
\begin{equation}
\begin{aligned}
\langle -V(t_D), \psi \rangle + \langle T_K(\varphi_N), \psi \rangle &= \langle -\frac{1}{2}g_D, \psi \rangle + \langle -T_K(g_D), \psi \rangle \\
\langle \xi, T_{K^{*}}(t_D) \rangle + \langle \xi, D(\varphi_N) \rangle &= \langle \xi, -D(g_D) \rangle
\end{aligned},
\label{eq:variational-formulation-of-bie}
\end{equation}
\]
where the angle brackets \(\langle \cdot, \cdot \rangle\) represent applying the element of the dual space \(H^{-1/2}\) to the one in the original space \(H^{1/2}\). If we use \(x\) to denote the coordinate on \(\Gamma_D\) and \(y\) that on \(\Gamma_N\), the two bilinear forms related to \(T_K\) and \(T_{K^{*}}\) on the left hand side of Equation \eqref{eq:variational-formulation-of-bie} can be expanded as below:
\[
\begin{equation}
\begin{aligned}
\langle (T_K \varphi_N(y))(x), \psi(x) \rangle_{\Gamma_D(x)} &= \int_{\Gamma_D(x)} \overline{\psi(x)} \left[ \int_{\Gamma_N(y)} K(x, y) \varphi_N(y) \intd o(y) \right] \intd o(x) \\
&= \int_{\Gamma_D(x)} \int_{\Gamma_N(y)} \overline{\psi(x)} K(x, y) \varphi_N(y) \intd o(y) \intd o(x) \\
\langle \xi(y), (T_{K^{*}} t_D(x))(y) \rangle_{\Gamma_N(y)} &= \int_{\Gamma_N(y)} \left[ \int_{\Gamma_D(x)} \overline{K^{*}(y, x)} \overline{t_D(x)} \intd o(x) \right] \xi(y) \intd o(y) \\
&= \int_{\Gamma_N(y)} \int_{\Gamma_D(x)} \overline{K(x, y)} \overline{t_D(x)} \xi(y) \intd o(x) \intd o(y) \\
&= \int_{\Gamma_D(x)} \int_{\Gamma_N(y)} \overline{t_D(x)} K(x, y) \xi(y) \intd o(y) \intd o(x)
\end{aligned},
\end{equation}
\]
where the overline represents complex conjugation when the adopted scalar field is \(\mathbb{C}\). Here we note that because the integral kernel \(K(x, y)\) depends only on the coordinates, it is intrinsically real. In addition, we have also used the property \(K^{*}(y, x) = K(x, y)\), which was derived in the previous article. Finally, by replacing \(\varphi_N(y)\) with \(\xi(y)\) and \(t_D(x)\) with \(\psi(x)\) in the above, we can show that \(T_K\) and \(T_{K^{*}}\) are a pair of adjoint operators, since
\[
\begin{equation}
\langle (T_K \xi(y))(x), \psi(x) \rangle_{\Gamma_D(x)} = \langle \xi(y), (T_{K^{*}} \psi(x))(y) \rangle_{\Gamma_N(y)}.
\end{equation}
\]
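This identity can also be sanity-checked numerically. The sketch below uses a hypothetical stand-in kernel (not the actual double layer kernel of the previous article), two disjoint parameterized boundary parts \(\Gamma_D \simeq [0, 1]\) and \(\Gamma_N \simeq [2, 3]\), complex-valued \(\psi\) and \(\xi\) so that the conjugations matter, and midpoint quadrature for both pairings:

```python
import numpy as np

# Stand-in real kernel on two disjoint, parameterized boundary parts
# (hypothetical; not the double layer kernel): Gamma_D ~ [0, 1], Gamma_N ~ [2, 3].
K  = lambda x, y: 1.0 / np.abs(x - y)
Ks = lambda y, x: K(x, y)                    # the transposed kernel K*(y, x) = K(x, y)

# complex-valued density and test functions, so the conjugations actually matter
psi = lambda x: (1.0 + x) * np.exp(1j * x)   # psi on Gamma_D (sketch)
xi  = lambda y: np.cos(y) + 2j * y           # xi  on Gamma_N (sketch)

nq = 400                                     # midpoint quadrature on each part
dx = 1.0 / nq; x = np.linspace(0.0, 1.0, nq, endpoint=False) + dx / 2.0
dy = 1.0 / nq; y = np.linspace(2.0, 3.0, nq, endpoint=False) + dy / 2.0

KDN  = K(x[:, None], y[None, :])             # KDN[i, j]  = K(x_i, y_j)
KsND = Ks(y[:, None], x[None, :])            # KsND[j, i] = K*(y_j, x_i)

# <T_K xi, psi>_{Gamma_D} = int_D conj(psi(x)) [ int_N K(x, y) xi(y) do(y) ] do(x)
lhs = dx * dy * (np.conj(psi(x)) @ KDN @ xi(y))

# <xi, T_{K*} psi>_{Gamma_N} = int_N [ int_D conj(K*(y, x)) conj(psi(x)) do(x) ] xi(y) do(y)
rhs = dx * dy * ((np.conj(KsND) @ np.conj(psi(x))) @ xi(y))

assert np.isclose(lhs, rhs)                  # the two pairings agree
```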
Matrix formulation of \(T_K\) and \(T_{K^{*}}\) in Galerkin discretization
By applying Galerkin discretization to the variational formulation of boundary integral equations in \eqref{eq:variational-formulation-of-bie}, we can migrate from infinite dimensional to finite dimensional spaces and the matrix formulations of the integral operators \(T_K\) and \(T_{K^{*}}\) can be obtained. This section introduces the procedure for this discretization.
Let \(\varphi_h(y) = \sum_{i=1}^n a_i \varphi_i(y)\) be a finite dimensional approximation of \(\varphi_N(y)\) and \(\psi_h(x) = \sum_{j=1}^m b_j p_j(x)\) be that of \(t_D(x)\), where \(\{ \varphi_i(y) \}_{i=1}^n\) and \(\{ p_j(x) \}_{j=1}^m\) are bases of finite dimensional subspaces of \(H^{1/2}(\Gamma_N)\) and \(H^{-1/2}(\Gamma_D)\), respectively. Then the bilinear forms related to \(T_K\) and \(T_{K^{*}}\) in Equation \eqref{eq:variational-formulation-of-bie} can be expanded as
\[
\begin{equation}
\begin{aligned}
\langle T_K \varphi_h, \psi_{h} \rangle_{\Gamma_D(x)} &= \left\langle T_K \left( \sum_{j=1}^n a_j \varphi_j(y) \right), \sum_{i=1}^m b_i p_i(x) \right\rangle_{\Gamma_D(x)} \\
&= \sum_{i,j} \langle T_K \varphi_j(y), p_i(x) \rangle_{\Gamma_D(x)} a_j \bar{b}_i \\
\langle \varphi_h, T_{K^{*}}\psi_h \rangle_{\Gamma_N(y)} &= \left\langle \sum_{i=1}^n a_i\varphi_i(y), T_{K^{*}} \left( \sum_{j=1}^m b_j p_j(x) \right) \right\rangle_{\Gamma_N(y)} \\
&= \sum_{i, j} \langle \varphi_i(y), T_{K^{*}} p_j(x) \rangle_{\Gamma_N(y)} a_i \bar{b}_j
\end{aligned}.
\end{equation}
\]
Note that here we adopt the convention that the subscript \(i\) is assigned to the test functions while \(j\) is assigned to the trial (basis) functions. Then let
\[
\tilde{\varphi}_h = (a_1, \cdots, a_n)^T,\; \tilde{\psi}_h = (b_1, \cdots, b_m)^T,
\]
\[
\left(\widetilde{T}_{K}\right)_{ij} = \langle T_K \varphi_j(y), p_i(x) \rangle_{\Gamma_D(x)},\; \left(\widetilde{T}_{K^{*}}\right)_{ij} = \overline{\langle \varphi_i(y), T_{K^{*}} p_j(x) \rangle_{\Gamma_N(y)}}
\]
and the matrix formulation can be obtained as
\[
\begin{equation}
\begin{aligned}
\langle T_K\varphi_h, \psi_h \rangle_{\Gamma_D(x)} &= \overline{\tilde{\psi}}_h^{T} \widetilde{T}_K \tilde{\varphi}_h \\
\langle \varphi_h, T_{K^{*}} \psi_h \rangle_{\Gamma_N(y)} &= \overline{\left( \widetilde{T}_{K^{*}} \tilde{\psi}_h \right)}^T \tilde{\varphi}_h = \overline{\tilde{\psi}}_h^{T} \overline{\widetilde{T}}_{K^{*}}^T \tilde{\varphi}_h
\end{aligned}.
\end{equation}
\]
Since the two pairings above are equal by the adjoint relation shown in the previous section, comparing the two matrix expressions shows that the matrix representation of \(T_{K^{*}}\) is the conjugate transpose of that of \(T_K\), i.e. \(\widetilde{T}_{K^{*}} = \overline{\widetilde{T}_K}^{T}\).
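The same toy setting as in the previous sketch illustrates this at the matrix level, assuming piecewise constant bases on the two parameterized boundary parts, one-point quadrature per element, and the same hypothetical stand-in kernel:

```python
import numpy as np

K = lambda x, y: 1.0 / np.abs(x - y)         # hypothetical stand-in kernel (real)

# piecewise constant bases: p_i on m elements of Gamma_D, phi_j on n elements of Gamma_N
m, n = 6, 8
xe = np.linspace(0.0, 1.0, m + 1)            # element endpoints on Gamma_D
ye = np.linspace(2.0, 3.0, n + 1)            # element endpoints on Gamma_N
xm, hx = 0.5 * (xe[:-1] + xe[1:]), np.diff(xe)   # element midpoints and lengths
ym, hy = 0.5 * (ye[:-1] + ye[1:]), np.diff(ye)

TK = np.empty((m, n), dtype=complex)         # Galerkin matrix of T_K
TKs = np.empty((n, m), dtype=complex)        # Galerkin matrix of T_{K^*}
for i in range(m):
    for j in range(n):
        # (T_K)_{ij} = <T_K phi_j, p_i> ~ h_i h_j K(x_i, y_j) with unit basis values
        TK[i, j] = hx[i] * hy[j] * K(xm[i], ym[j])
        # (T_{K^*})_{ji} = conj(<phi_j, T_{K^*} p_i>), built from K*(y, x) = K(x, y)
        TKs[j, i] = np.conj(hy[j] * hx[i] * np.conj(K(xm[i], ym[j])))

assert np.allclose(TKs, TK.conj().T)         # matrix of T_{K^*} is the conjugate transpose
```

With a real kernel and real basis functions the conjugations are trivial, but the structural identity \(\widetilde{T}_{K^{*}} = \overline{\widetilde{T}_K}^{T}\) is what a BEM code can exploit: only one of the two matrices needs to be assembled.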
Summary
This article clarifies:
- Based on the standard definitions in functional analysis, the integral operators \(T_K\) and \(T_{K^{*}}\) obtained from the representation formula are a pair of adjoint operators, which is shown in the variational formulation obtained by applying test functions to the boundary integral equations;
- In the finite dimensional approximation of the boundary integral equations via Galerkin discretization, matrix representations can be obtained for the two operators \(T_K\) and \(T_{K^{*}}\). Over the complex scalar field, the finite dimensional versions of a pair of adjoint operators are related by the matrix conjugate transpose.