
Extensive research has been devoted to the numerical solution of boundary integral equations. Many boundary value problems of partial differential equations can be reduced to Fredholm integral equations of the second kind. In this paper we consider only the classical Fredholm integral equation of the second kind, that is,
u−Ku=f,f∈X, | (1) |
where u is unknown,
(Ku)(t) := ∫_I K(t,τ) u(τ) dτ, t ∈ I := [0,1].
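As a point of reference for the methods discussed below, the action of K and a basic Nyström-type discretization of (1) can be sketched in a few lines; the kernel, right-hand side, and node count here are hypothetical and serve only as an illustration.

```python
import numpy as np

# Hypothetical smooth kernel and right-hand side, chosen only for illustration.
def kernel(t, tau):
    return np.exp(-t * tau)

def f(t):
    return np.ones_like(t)

m = 64                                       # number of quadrature nodes
tau, w = np.polynomial.legendre.leggauss(m)  # Gauss nodes/weights on [-1, 1]
tau = 0.5 * (tau + 1.0)                      # map nodes to I = [0, 1]
w = 0.5 * w                                  # rescale weights accordingly

# Nystrom discretization of u - Ku = f: solve (I - W) u = f at the nodes,
# where W[p, q] = w_q * K(t_p, tau_q) approximates the integral operator.
W = kernel(tau[:, None], tau[None, :]) * w[None, :]
u_nodes = np.linalg.solve(np.eye(m) - W, f(tau))
```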
Quadrature methods (Nyström's method) and projection methods (the collocation and Galerkin methods) are the two classes of methods commonly used for solving (1) numerically (see Refs. [1-2]). It is well known that Galerkin methods enjoy nice convergence properties but at the same time suffer from high computational costs. In Ref. [3], the authors found that Calderon-Zygmund integral operators have an almost sparse matrix representation under a wavelet basis. This discovery inspired a series of studies on fast wavelet algorithms [4-7]. Based on a compression strategy, a fast wavelet Galerkin method was proposed in Ref. [7]. In Refs. [6-7], fast wavelet Petrov-Galerkin methods and fast collocation methods were developed for integral equations. Recently, fast Fourier-Galerkin methods were proposed in Refs. [8-12] for solving the boundary integral equations derived from boundary value problems of the Laplace equation or the biharmonic equation.
A theoretical framework for the analysis of super-convergence of the iterated Petrov-Galerkin method was established in Ref. [13]. Petrov-Galerkin methods allow the trial and test spaces to differ: we may employ piecewise polynomials of higher degree for the trial spaces and piecewise polynomials of lower degree for the test spaces. In this way, we can still obtain the same convergence rate as the Galerkin method at a lower computational cost.
In this paper, we aim to develop an iterated fast wavelet Petrov-Galerkin (IFPG) method for solving (1). The key idea is to combine the fast wavelet Petrov-Galerkin method with the Sloan iteration. The fast wavelet Petrov-Galerkin method has been well studied and attains the optimal convergence rate while requiring only the solution of a sparse linear system. The well-known Sloan iteration is an example of an iterative post-processing method [14], which can achieve super-convergence. In this paper, we first apply a truncation strategy to the wavelet Petrov-Galerkin method to generate a sparse linear system. Then, we employ the Sloan iteration to improve the convergence rate. We prove that the proposed method achieves super-convergence while only requiring the solution of a sparse linear system.
1 Iterated Petrov-Galerkin method
We will recall the standard iterated Petrov-Galerkin method for (1) in this section.
To state the Petrov-Galerkin method, let us begin with the trial space X_n and the test space Y_n. Let
X_n := S^k_{μ^n}, Y_n := S^{k′}_{μ^{n+1}}, n ∈ ℕ_0 := {0,1,2,⋯}.
It was shown in Ref. [15] that
dim X_n = dim Y_n := S(n) = kμ^n.
For any n ∈ ℕ_0, define the index set
U_n := {(i,j) : j ∈ Z_{w(i)}, i ∈ Z_{n+1}},
where
w(i) = k for i = 0, and w(i) = k(μ−1)μ^{i−1} for i ∈ ℕ. | (2) |
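Summing (2) over the levels confirms that the cardinality of U_n matches the dimension S(n) stated above:
#U_n = ∑_{i∈Z_{n+1}} w(i) = k + ∑_{i=1}^{n} k(μ−1)μ^{i−1} = k + k(μ−1)·(μ^n − 1)/(μ−1) = kμ^n = S(n).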
As constructed in section 2 of Ref. [6], let {f_{i,j} : (i,j) ∈ U_n} and {g_{i,j} : (i,j) ∈ U_n} be the multiscale bases of X_n and Y_n, respectively. They have the properties collected in the following proposition.
Proposition 1.1 Assume that {f_{i,j} : (i,j) ∈ U_n} and {g_{i,j} : (i,j) ∈ U_n} are the bases constructed in Ref. [6].
Then, we have
1) {f_{i,j} : (i,j) ∈ U_n} is an orthonormal basis of X_n;
2) {g_{i,j} : (i,j) ∈ U_n} is a basis of Y_n, and it is semi-biorthogonal to {f_{i,j} : (i,j) ∈ U_n} in the sense that
⟨g_{i,j}, f_{i′,j′}⟩ = δ_{ii′} δ_{jj′}, i ⩾ i′.
3) The functions f_{i,j}, g_{i,j}, (i,j) ∈ U_n with i ⩾ 1, have vanishing moments, that is,
⟨f_{i,j}, h⟩ = 0, ⟨g_{i,j}, h⟩ = 0,
where h(t) := t^r, t ∈ I, r ∈ Z_k.
4) If u∈Hm with 0 < m≤k, then
inf_{x_n∈X_n} ‖u − x_n‖ ⩽ c_{k,m} μ^{−mn} |u|_{H^m},
where c_{k,m} = (k+1−m)^m 2^{−m+1/2}.
5) The lengths of the supports supp(f_{i,j}) and supp(g_{i,j}) satisfy
meas(supp f_{i,j}) ⩽ μ^{−(i−1)}, meas(supp g_{i,j}) ⩽ μ^{−(i−1)}, i−1 ∈ Z_n.
Next, we recall the generalized best approximation as follows (see Ref. [16]).
Definition (generalized best approximation) Given x ∈ X, an element P_n x ∈ X_n is called the generalized best approximation from X_n to x with respect to Y_n if it satisfies
⟨x−Pnx,yn⟩=0 for all yn∈Yn. |
Similarly, given y ∈ X, an element P′_n y ∈ Y_n is called the generalized best approximation from Y_n to y with respect to X_n if it satisfies
⟨xn,y−P′ny⟩=0 for all xn∈Xn. |
Since dim X_n = dim Y_n, the generalized best approximations P_n and P′_n are well-defined linear projections onto X_n and Y_n, respectively (see Ref. [16]).
To represent the operator P′_n explicitly, we recall the auxiliary functions {ξ_{i,j} : (i,j) ∈ U_n} constructed in Ref. [6], which have the following two properties.
1) For i′−1 ∈ Z_n, the functions ξ_{i′,j′} have vanishing moments of order k′, that is,
⟨ξ_{i′,j′}, t^r⟩ = 0, r ∈ Z_{k′}, j′ ∈ Z_{w(i′)}, i′−1 ∈ Z_n.
2) The auxiliary functions {ξ_{i,j} : (i,j) ∈ U_n} and the basis {g_{i,j} : (i,j) ∈ U_n} satisfy the biorthogonality relation
⟨ξ_{i′,j′}, g_{i,j}⟩ = δ_{i′i} δ_{j′j}, (i′,j′), (i,j) ∈ U_n.
These properties imply that the generalized best approximation P′_n y from Y_n to y has the representation
P′_n y = ∑_{(i,j)∈U_n} ⟨ξ_{i,j}, y⟩ g_{i,j}.
With these preparations, we are ready to state the iterated Petrov-Galerkin method as follows.
The Petrov-Galerkin method for equation (1) reads: find u_n ∈ X_n such that
⟨u_n − Ku_n, v_n⟩ = ⟨f, v_n⟩ for all v_n ∈ Y_n. | (3) |
The iterated projection approximation is then defined by the Sloan iteration
u′_n := f + Ku_n. | (4) |
It has been proved in Ref. [14] that the iterated projection approximation u′n satisfies the following integral equation
u′_n − K_n u′_n = f, | (5) |
where K_n := KP_n.
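For completeness, we sketch why (5) holds. By the definition of P_n, equation (3) states that u_n is the generalized best approximation of Ku_n + f from X_n, that is, u_n = P_n(Ku_n + f). Applying P_n to (4) therefore gives P_n u′_n = P_n f + P_n K u_n = u_n, and substituting this back into (4) yields u′_n = f + Ku_n = f + KP_n u′_n, which is exactly (5).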
To present the convergence results of the iterated Petrov-Galerkin method, we need the following estimate involving the projection operators P_i.
Lemma 1.1 If the kernel function K ∈ C^k(I×I), then, for any i ∈ ℕ_0, there exists a positive constant c_1 such that
‖K(I−P_i)‖ ⩽ c_1 μ^{−k′i}.
Moreover, if u∈Hm with 0 < m≤k, we have
‖K(I−P_i)u‖ ⩽ c_2 μ^{−(m+k′)i} |u|_{H^m}.
Proof Let K_t := K(t, ·). Since K ∈ C^k(I×I), we have K_t ∈ H^{k′}(I) for all t ∈ I, and there exists a constant c_1 > 0 such that ‖K_t − P′_i K_t‖ ⩽ c_1 μ^{−k′i} uniformly in t ∈ I.
For any ϕ ∈ X and each t ∈ I,
|(K(I−P_i)ϕ)(t)| = |∫_I K(t,τ)(ϕ(τ) − P_iϕ(τ)) dτ| = |⟨ϕ − P_iϕ, K_t⟩| = |⟨ϕ − P_iϕ, K_t − P′_i K_t⟩| ⩽ ‖ϕ − P_iϕ‖ ‖K_t − P′_i K_t‖ ⩽ c_1 μ^{−k′i} ‖ϕ‖,
using the uniform boundedness of P_i in the last step.
This implies the first estimate ‖K(I−P_i)‖ ⩽ c_1 μ^{−k′i}.
If u ∈ H^m, there exists a positive constant c_2 such that ‖u − P_iu‖ ⩽ c_2 μ^{−mi} |u|_{H^m}. Hence, for each t ∈ I,
|(K(I−P_i)u)(t)| ⩽ ‖u − P_iu‖ ‖K_t − P′_i K_t‖ ⩽ c_1 c_2 μ^{−(m+k′)i} |u|_{H^m},
from which the desired estimate follows.
Theorem 1.1 Assume that K ∈ C^k(I×I), that 1 is not an eigenvalue of K, and that the solution u of (1) belongs to H^m with 0 < m ⩽ k. Then, for sufficiently large n, there exists a positive constant C independent of n such that
‖u − u′_n‖ ⩽ C μ^{−(m+k′)n} |u|_{H^m}. | (6) |
Proof It has been proved in Ref. [14] that
‖u − u′_n‖ ⩽ C ‖KP_n − K‖ inf_{x_n∈X_n} ‖u − x_n‖. | (7) |
From Lemma 1.1, we know that there exists a positive constant c such that
‖KP_n − K‖ ⩽ c μ^{−k′n}.
Meanwhile, by Proposition 1.1, we have
inf_{x_n∈X_n} ‖u − x_n‖ ⩽ c_{k,m} μ^{−mn} |u|_{H^m}.
Substituting the above two inequalities into (7), we obtain the estimate (6).
This is the super-convergence property of the iterated method. It is well known that the Galerkin method has high-order accuracy but suffers from the high cost of generating the (dense) coefficient matrix. A question then arises naturally: can we design a fast method that preserves the super-convergence property at quasi-linear computational cost? We give our answer in the following sections.
2 Iterated fast wavelet Petrov-Galerkin methods
This section presents a fast and effective algorithm obtained by truncating the dense coefficient matrix to a sparse one. The number of nonzero entries of the sparse matrix will be estimated.
We first derive the discrete form of equation (3). Writing u_n = ∑_{(i,j)∈U_n} c_{i,j} f_{i,j} and introducing the vector notations
c_n := [c_{i,j}]_{(i,j)∈U_n}, g_n := [⟨g_{i′,j′}, f⟩]_{(i′,j′)∈U_n},
and matrix notations
E_n := [⟨g_{i′,j′}, f_{i,j}⟩]_{(i′,j′),(i,j)∈U_n}, K_n := [K_{i′,i}]_{i′,i∈Z_{n+1}},
where K_{i′,i} := [⟨g_{i′,j′}, Kf_{i,j}⟩]_{j′∈Z_{w(i′)}, j∈Z_{w(i)}}, we can rewrite equation (3) as the linear system
(E_n − K_n) c_n = g_n. | (8) |
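A schematic (dense) assembly of system (8) by quadrature may help fix the notation; here the multiscale basis functions are assumed to be available as callables flattened over U_n, and the helper names are purely illustrative (they are not the construction of Ref. [6]).

```python
import numpy as np

def assemble_and_solve(fs, gs, kernel, rhs, m=200):
    """Schematic assembly and solution of (E_n - K_n) c_n = g_n.

    fs, gs : lists of callables for the trial basis {f_{i,j}} and the test
             basis {g_{i,j}}, flattened over the index set U_n.
    kernel : callable K(t, tau); rhs : callable f(t); m : quadrature nodes.
    """
    t, w = np.polynomial.legendre.leggauss(m)
    t = 0.5 * (t + 1.0)                       # quadrature nodes on I = [0, 1]
    w = 0.5 * w

    F = np.array([fij(t) for fij in fs])      # trial basis values at the nodes
    G = np.array([gij(t) for gij in gs])      # test basis values at the nodes
    Kvals = kernel(t[:, None], t[None, :])    # kernel values K(t_p, tau_q)

    KF = (Kvals * w[None, :]) @ F.T           # (K f_{i,j})(t_p) by quadrature
    E = (G * w[None, :]) @ F.T                # entries <g_{i',j'}, f_{i,j}>
    K = (G * w[None, :]) @ KF                 # entries <g_{i',j'}, K f_{i,j}>
    g = (G * w[None, :]) @ rhs(t)             # right-hand side <g_{i',j'}, f>

    return np.linalg.solve(E - K, g)          # coefficient vector c_n of u_n
```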
In equation (8), the matrix E_n is sparse, but K_n is dense. Fortunately, the entries of K_n decay rapidly, and many of them are insignificant. This observation makes it possible to obtain a sparse matrix by discarding the tiny entries. The following two lemmas provide estimates of the decay of the entries of K_n.
Lemma 2.1 If the kernel function K ∈ C^k(I×I), then there exists a positive constant c such that, for all (i,j), (i′,j′) ∈ U_n,
|K_{i′,j′;i,j}| ⩽ c μ^{−(i+i′)(k+1/2)}, | (9) |
where K_{i′,j′;i,j} := ⟨g_{i′,j′}, Kf_{i,j}⟩.
Proof By Taylor's Theorem, the kernel K can be rewritten as
K=K1+K2+K3, |
where K_1(t, ·) and K_2(·, τ) are polynomials of degree at most k−1 with respect to τ and t, respectively, and K_3 is the remainder term in integral form. That is,
K_3(t,τ) := ((t−t_0)^k (τ−τ_0)^k)/(k! k!) γ_{k,k}(t,τ),
where τ_0 ∈ I_{i,j}, t_0 ∈ I′_{i′,j′}, and
γ_{k,k}(t,τ) := ∫_I ∫_I K^{(k,k)}(t_0+θ_1(t−t_0), τ_0+θ_2(τ−τ_0)) (1−θ_1)^{k−1} (1−θ_2)^{k−1} dθ_1 dθ_2.
It can be proved that there exists a constant C>0 such that, for all (t, τ)∈I2,
|γk,k(t,τ)|⩽C. |
Using the vanishing moments of fi, j, and gi′, j′, we have
|K_{i′,j′;i,j}| = |∫_I ∫_I K(t,τ) f_{i,j}(t) g_{i′,j′}(τ) dt dτ| = |∫_I ∫_I K_3(t,τ) f_{i,j}(t) g_{i′,j′}(τ) dt dτ| ⩽ ‖f_{i,j}‖_∞ ‖g_{i′,j′}‖_∞ ∫_{I_{i,j}} ∫_{I′_{i′,j′}} |K_3(t,τ)| dt dτ ⩽ (C/(k!k!)) ‖f_{i,j}‖_∞ ‖g_{i′,j′}‖_∞ ∫_{I_{i,j}} ∫_{I′_{i′,j′}} |(t−t_0)^k (τ−τ_0)^k| dt dτ ⩽ C_1 ‖f_{i,j}‖_∞ ‖g_{i′,j′}‖_∞ (|I_{i,j}|)^{k+1} (|I′_{i′,j′}|)^{k+1}.
By Proposition 1.1, we know that, for all
j ∈ Z_{w(i)}, j′ ∈ Z_{w(i′)}, i−1 ∈ Z_n, i′−1 ∈ Z_n: sup_{τ∈I_{i,j}} |f_{i,j}(τ)| ⩽ c μ^{(i−1)/2}, sup_{τ∈I′_{i′,j′}} |g_{i′,j′}(τ)| ⩽ c μ^{(i′−1)/2}. | (10) |
The remainder of the proof is divided into four cases, depending on whether or not i and i′ equal zero.
Case 1 Assume i≥1, and i′≥1. There exists a positive constant c1 such that
|K_{i′,j′;i,j}| ⩽ C_1 c^2 μ^{(i−1)/2} μ^{(i′−1)/2} μ^{−(k+1)(i−1)} μ^{−(k+1)(i′−1)} ⩽ c_1 μ^{−(k+1/2)(i+i′)}.
Case 2 Assume i=0, and i′≥1. There exists a positive constant c2 such that
|K_{i′,j′;0,j}| ⩽ C_1 ‖f_{0,j}‖_∞ ‖g_{i′,j′}‖_∞ (|I′_{i′,j′}|)^{k+1} ⩽ c_2 μ^{−i′(k+1/2)}
by noting that w(0)=k.
Case 3 Assume i≥1, and i′=0. There exists a positive constant c3 such that
|K_{0,j′;i,j}| ⩽ C_1 ‖f_{i,j}‖_∞ ‖g_{0,j′}‖_∞ (|I_{i,j}|)^{k+1} ⩽ c_3 μ^{−i(k+1/2)}.
Case 4 Assume i=0, and i′=0. There exists a positive constant c4 such that
|K0,j′;0,j|⩽c4. |
Taking c∶=max{c1, c2, c3, c4}, we obtain the desired estimate (9).
Lemma 2.2 If the kernel function K ∈ C^k(I×I), then there exists a positive constant C independent of n such that, for all i, i′ ∈ Z_{n+1},
‖K_{i′,i}‖_2 ⩽ C μ^{−k(i+i′)}.
Proof By Lemma 2.1, we have
‖K_{i′,i}‖_2^2 ⩽ ‖K_{i′,i}‖_F^2 ⩽ ∑_{j∈Z_{w(i)}} ∑_{j′∈Z_{w(i′)}} |K_{i′,j′;i,j}|^2 ⩽ c^2 w(i) w(i′) μ^{−(i+i′)(2k+1)}.
Combining this with (2), we get
‖K_{i′,i}‖_2^2 ⩽ c^2 k^2 (μ−1)^2 μ^{i−1} μ^{i′−1} μ^{−(i+i′)(2k+1)} ⩽ C^2 μ^{−2k(i+i′)},
which completes the proof.
Using the estimate in Lemma 2.2, we define the following truncated matrix. Let
K̃_n := [K̃_{i′,i}]_{i′,i∈Z_{n+1}},
where
K̃_{i′,i} := K_{i′,i} if i+i′ ⩽ n, and K̃_{i′,i} := 0 otherwise. | (11) |
Replacing K_n in (8) by the truncated matrix K̃_n, we obtain the truncated linear system
(E_n − K̃_n) c̃_n = g_n, | (12) |
where c̃_n := [c̃_{i,j}]_{(i,j)∈U_n} denotes the solution of the truncated system.
Then we can present the fast method as follows.
Algorithm 2.1 Iterated fast wavelet Petrov-Galerkin method
Step 1 Solve the linear system (12) to obtain c̃_n = [c̃_{i,j}]_{(i,j)∈U_n}.
Step 2 Let the truncated Petrov-Galerkin approximation be
ũ_n := ∑_{(i,j)∈U_n} c̃_{i,j} f_{i,j}.
Step 3 Let the iterated approximation be
ũ′_n := K ũ_n + f. | (13) |
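A minimal sketch of Algorithm 2.1, assuming the matrices E_n, K_n and the vector g_n from (8) have already been assembled (for example as above), reads as follows; the truncation loop implements rule (11) blockwise, and for clarity it is written densely rather than in a sparse format.

```python
import numpy as np

def block_sizes(k, mu, n):
    """Block sizes w(i), i = 0, ..., n, as in (2)."""
    return [k] + [k * (mu - 1) * mu ** (i - 1) for i in range(1, n + 1)]

def ifpg(E, K, g, k, mu, n, apply_K, eval_basis, f_vals):
    """Schematic IFPG solver (Algorithm 2.1); helper callables are illustrative.

    apply_K(u_vals) : values of (K u) on a fixed grid, e.g. by quadrature.
    eval_basis(c)   : values of sum c_{i,j} f_{i,j} on the same grid.
    f_vals          : values of f on that grid.
    """
    w = block_sizes(k, mu, n)
    offsets = np.concatenate(([0], np.cumsum(w)))

    # Truncation (11): keep block (i', i) only when i + i' <= n.
    K_trunc = np.zeros_like(K)
    for ip in range(n + 1):
        for i in range(n + 1):
            if i + ip <= n:
                rows = slice(offsets[ip], offsets[ip + 1])
                cols = slice(offsets[i], offsets[i + 1])
                K_trunc[rows, cols] = K[rows, cols]

    # Step 1: solve the truncated system (12).
    c_tilde = np.linalg.solve(E - K_trunc, g)

    # Step 2: form the truncated Petrov-Galerkin approximation on the grid.
    u_tilde = eval_basis(c_tilde)

    # Step 3: Sloan iteration (13).
    return apply_K(u_tilde) + f_vals
```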
To show the sparsity, we present an estimate of the number of nonzero entries of the matrix E_n − K̃_n, denoted by N(E_n − K̃_n).
Theorem 2.1 There exists a positive constant C such that, for all n ∈ ℕ,
N(E_n − K̃_n) ⩽ C S(n) log S(n).
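The order S(n)·log S(n) can be made plausible by a rough count based on (2) and the truncation rule (11): since w(i) ⩽ kμ^i, the number of entries kept in K̃_n is at most
∑_{i+i′⩽n} w(i) w(i′) ⩽ k^2 ∑_{s=0}^{n} (s+1) μ^s ⩽ k^2 (n+1) μ^{n+1}/(μ−1) ⩽ C S(n) log S(n),
while the sparse matrix E_n contributes at most the same order of nonzero entries.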
3 Convergence analysis
In this section, we will analyze the convergence of the proposed IFPG method.
Let K_n := P_nKP_n and let K̃_n : X_n → X_n denote the truncated operator determined by ⟨K̃_n f_{i,j}, g_{i′,j′}⟩ = K̃_{i′,j′;i,j} for all (i,j), (i′,j′) ∈ U_n. For u ∈ H^m with 0 < m ⩽ k, it holds that
‖(K̃_n − K_n)P_nu‖ ⩽ c μ^{−mn} |u|_{H^m}, | (14) |
where c is a positive constant independent of n.
With the help of estimate (14), we can draw the following conclusion about the iterated approximation ũ′_n.
Theorem 3.1 Assume that K ∈ C^k(I×I) and that n is sufficiently large. Then the iterated approximation ũ′_n defined by (13) satisfies the equation
ũ′_n − KQ_nP_n ũ′_n = f, | (15) |
where Q_n := (I + P_nKP_n − K̃_n)^{−1}.
Proof From (13), we have
P_n ũ′_n = P_nK ũ_n + P_n f = P_nK ũ_n + (ũ_n − K̃_n ũ_n) = (I + P_nKP_n − K̃_n) ũ_n,
using (12) in the second equality.
Since ‖P_nKP_n − K̃_n‖ → 0 as n → ∞, for sufficiently large n we have
‖P_nKP_n − K̃_n‖ < 1/2.
By the geometric series theorem, the inverse operator (I + P_nKP_n − K̃_n)^{−1} exists and
‖(I + P_nKP_n − K̃_n)^{−1}‖ ⩽ 1/(1 − ‖P_nKP_n − K̃_n‖) < 2.
Let Q_n := (I + P_nKP_n − K̃_n)^{−1}. Then
ũ_n = Q_nP_n ũ′_n. | (16) |
Substituting (16) into (13), we obtain (15).
Since
KQ_nP_n − K = KQ_n(P_n − I) + KQ_n(K̃_n − P_nKP_n),
we can claim that
lim_{n→∞} ‖KQ_nP_n − K‖ = 0.
Hence, for sufficiently large n, the inverse (I − KQ_nP_n)^{−1} exists and is uniformly bounded.
From equations (1) and (15), we have
ũ′_n − u = (I − KQ_nP_n)^{−1}(KQ_nP_n − K)u = (I − KQ_nP_n)^{−1}(Γ_1 + Γ_2 + Γ_3), | (17) |
where
Γ_1 := (KP_n − K)u,
Γ_2 := K(K̃_n − K_n)P_n u,
Γ_3 := K(K̃_n − P_nKP_n)Q_n(K̃_n − P_nK)P_n u.
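Indeed, setting Δ_n := K̃_n − P_nKP_n = K̃_n − K_n and noting that Q_n^{−1} = I − Δ_n, so that Q_n − I = Q_nΔ_n = Δ_nQ_n and KQ_n = K + KΔ_nQ_n, one can check the decomposition directly:
KQ_nP_nu − Ku = K(P_n − I)u + K(Q_n − I)P_nu = Γ_1 + KQ_nΔ_nP_nu = Γ_1 + KΔ_nP_nu + KΔ_nQ_nΔ_nP_nu = Γ_1 + Γ_2 + Γ_3,
since Δ_nP_nu = (K̃_n − P_nK)P_nu.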
To analyze the convergence, we next focus on the estimate of Γi, i=1, 2, 3.
The estimate on Γ1 has been presented in Lemma 1.1. By Lemmas 1.1 and 2.2, we can state the following estimate on Γ2.
Lemma 3.1 If u∈Hm with 0 < m≤k, then, for a sufficiently large integer n, there exists a positive constant c independent of n such that,
‖K(K̃_n − K_n)P_nu‖ ⩽ c n μ^{−(m+k′)n} |u|_{H^m}.
Proof We know that
‖K(K̃_n−K_n)P_nu‖ = sup_{v∈H, v≠0} |⟨K(K̃_n−K_n)P_nu, v⟩|/‖v‖ = sup_{v∈H, v≠0} |⟨(K̃_n−K_n)P_nu, K∗v⟩|/‖v‖ = sup_{v∈H, v≠0} |⟨(K̃_n−K_n)P_nu, P′_nK∗v⟩|/‖v‖, where K∗ denotes the adjoint of K; the last equality holds because (K̃_n−K_n)P_nu ∈ X_n and by the definition of P′_n.
Since P_nu = ∑_{(i,j)∈U_n} ⟨P_nu, f_{i,j}⟩ f_{i,j} and P′_nK∗v = ∑_{(i′,j′)∈U_n} ⟨ξ_{i′,j′}, K∗v⟩ g_{i′,j′}, we can expand the inner product term by term.
Hence,
|⟨(K̃_n−K_n)P_nu, P′_nK∗v⟩| = |∑_{i,i′∈Z_{n+1}} ∑_{j∈Z_{w(i)}, j′∈Z_{w(i′)}} ⟨P_nu, f_{i,j}⟩ ⟨ξ_{i′,j′}, K∗v⟩ ⟨(K̃_n−K_n)f_{i,j}, g_{i′,j′}⟩| = |∑_{i,i′∈Z_{n+1}} ∑_{j∈Z_{w(i)}, j′∈Z_{w(i′)}} ⟨P_nu, f_{i,j}⟩ ⟨ξ_{i′,j′}, K∗v⟩ (K̃_{i′,j′;i,j} − K_{i′,j′;i,j})| ⩽ ∑_{i,i′∈Z_{n+1}} ‖K̃_{i′,i} − K_{i′,i}‖_2 ‖[⟨P_nu, f_{i,j}⟩]_{j∈Z_{w(i)}}‖_2 ‖[⟨ξ_{i′,j′}, K∗v⟩]_{j′∈Z_{w(i′)}}‖_2 ⩽ ∑_{i∈Z_{n+1}} ∑_{i+i′>n} ‖K_{i′,i}‖_2 ‖[⟨P_nu, f_{i,j}⟩]_{j∈Z_{w(i)}}‖_2 ‖[⟨ξ_{i′,j′}, K∗v⟩]_{j′∈Z_{w(i′)}}‖_2. | (18) |
By Lemma 2.2, there exists a constant c1>0 such that,
‖K_{i′,i}‖_2 ⩽ c_1 μ^{−k(i+i′)}, i, i′ ∈ Z_{n+1}. | (19) |
To continue the analysis, let us focus on the estimates of ‖[⟨P_nu, f_{i,j}⟩]_{j∈Z_{w(i)}}‖_2 and ‖[⟨ξ_{i′,j′}, K∗v⟩]_{j′∈Z_{w(i′)}}‖_2.
By noting ⟨P_nu, f_{i,j}⟩ = ⟨P_nu − u, f_{i,j}⟩ + ⟨u, f_{i,j}⟩, we have
‖[⟨P_nu, f_{i,j}⟩]_{j∈Z_{w(i)}}‖_2 ⩽ ‖[⟨P_nu − u, f_{i,j}⟩]_{j∈Z_{w(i)}}‖_2 + ‖[⟨u, f_{i,j}⟩]_{j∈Z_{w(i)}}‖_2.
Meanwhile,
‖[⟨P_nu − u, f_{i,j}⟩]_{j∈Z_{w(i)}}‖_2 ⩽ ‖P_nu − u‖ ⩽ cμ^{−nm}|u|_{H^m} ⩽ cμ^{−im}|u|_{H^m}, ‖[⟨u, f_{i,j}⟩]_{j∈Z_{w(i)}}‖_2 = ‖(χ_i − χ_{i−1})u‖ ⩽ cμ^{−im}|u|_{H^m}, where χ_i denotes the orthogonal projection from X onto X_i.
Hence, we can conclude that
‖[⟨P_nu, f_{i,j}⟩]_{j∈Z_{w(i)}}‖_2 ⩽ cμ^{−im}|u|_{H^m}. | (20) |
For ‖[⟨ξ_{i′,j′}, K∗v⟩]_{j′∈Z_{w(i′)}}‖_2, we define the vector
h_{i′} := [⟨ξ_{p,q}, (P′_{i′} − P′_{i′−1})K∗v⟩ : (p,q) ∈ U_{i′}].
Since the bases {ξi, j} and {gi, j} are semi-biorthogonal, we get
⟨ξ_{i′,q}, (P′_{i′} − P′_{i′−1})K∗v⟩ = ⟨ξ_{i′,q}, ∑_{j′∈Z_{w(i′)}} ⟨ξ_{i′,j′}, K∗v⟩ g_{i′,j′}⟩ = ⟨ξ_{i′,q}, K∗v⟩, q ∈ Z_{w(i′)},
by noting
(P′_{i′} − P′_{i′−1})K∗v = ∑_{j′∈Z_{w(i′)}} ⟨ξ_{i′,j′}, K∗v⟩ g_{i′,j′} ∈ Y_{i′}.
Thus,
‖[⟨ξ_{i′,j′}, K∗v⟩]_{j′∈Z_{w(i′)}}‖_2 = ‖h_{i′}‖_{l^2}. | (21) |
By Corollary 3.5 in Ref. [6], there exists a constant c2>0 such that
‖h_{i′}‖_{l^2} ⩽ c_2 ‖(P′_{i′} − P′_{i′−1})K∗v‖.
Then, by Lemma 1.1, the estimate (21) can be continued as
‖[⟨ξ_{i′,j′}, K∗v⟩]_{j′∈Z_{w(i′)}}‖_2 ⩽ c_2 ‖(P′_{i′} − P′_{i′−1})K∗v‖ ⩽ c_2 ‖K(P_{i′} − P_{i′−1})‖ ‖v‖ ⩽ cμ^{−k′i′} ‖v‖. | (22) |
Substituting (19), (20), and (22) into (18), we have
|⟨(K̃_n − K_n)P_nu, P′_nK∗v⟩| ⩽ c ∑_{i∈Z_{n+1}} ∑_{i+i′>n} μ^{−k(i+i′)} μ^{−k′i′} μ^{−im} |u|_{H^m} ‖v‖.
Noting that,
∑_{i∈Z_{n+1}} ∑_{i+i′>n} μ^{−k(i+i′)} μ^{−k′i′} μ^{−im} = ∑_{i∈Z_{n+1}} ∑_{i+i′>n} μ^{(k′−k)i} μ^{(m−k)i′} μ^{−(m+k′)(i+i′)} ⩽ cμ^{−(m+k′)n} ∑_{i∈Z_{n+1}} ∑_{i+i′>n} μ^{−(k′+m)(i+i′−n)} ⩽ cnμ^{−(m+k′)n},
then we have
‖K(K̃_n − K_n)P_nu‖ ⩽ cnμ^{−(m+k′)n}|u|_{H^m},
which completes the proof.
Now, we are ready to present the convergence results.
Theorem 3.2 Assume that the hypotheses of Theorem 1.1 hold. Then, for sufficiently large n, the IFPG approximation ũ′_n produced by Algorithm 2.1 satisfies
‖u − ũ′_n‖ ⩽ c S(n)^{−(m+k′)} log S(n) |u|_{H^m}, | (23) |
where c is a constant independent of n.
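The rate in (23) is simply the super-convergence rate of Theorem 1.1 re-expressed in terms of the number S(n) of unknowns: since S(n) = kμ^n, we have μ^{−(m+k′)n} = k^{m+k′} S(n)^{−(m+k′)} and n = log_μ(S(n)/k) ⩽ c log S(n), so that n μ^{−(m+k′)n} ⩽ c S(n)^{−(m+k′)} log S(n).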
Proof From Lemma 1.1 and Lemma 3.1, we know there exist positive constants c1 and c2 such that
‖Γ_1‖ ⩽ c_1 μ^{−(m+k′)n}|u|_{H^m}, ‖Γ_2‖ ⩽ c_2 nμ^{−(m+k′)n}|u|_{H^m}. | (24) |
Next, let us focus on Γ_3. For any ϕ ∈ X_n, we have
‖K(K̃_n − K_n)ϕ‖ = sup_{v∈H, v≠0} |⟨K(K̃_n−K_n)ϕ, v⟩|/‖v‖ = sup_{v∈H, v≠0} |⟨(K̃_n−K_n)ϕ, K∗v⟩|/‖v‖ = sup_{v∈H, v≠0} |⟨(K̃_n−K_n)ϕ, P′_nK∗v⟩|/‖v‖,
where, similar to the derivation of (18),
|⟨(K̃_n − K_n)ϕ, P′_nK∗v⟩| ⩽ ∑_{i∈Z_{n+1}} ∑_{i+i′>n} ‖K_{i′,i}‖_2 ‖[⟨ϕ, f_{i,j}⟩]_{j∈Z_{w(i)}}‖_2 ‖[⟨ξ_{i′,j′}, K∗v⟩]_{j′∈Z_{w(i′)}}‖_2.
Since ‖[⟨ϕ, f_{i,j}⟩]_{j∈Z_{w(i)}}‖_2 ⩽ ‖ϕ‖, we obtain
‖K(K̃_n − K_n)ϕ‖ ⩽ c ∑_{i∈Z_{n+1}} ∑_{i+i′>n} μ^{−k(i+i′)} μ^{−k′i′} ‖ϕ‖ ⩽ cnμ^{−k′n}‖ϕ‖,
by employing Lemma 2.2 and the inequality (22).
Then, by the fact that ‖Q_n‖ < 2 together with the estimate (14), we have
‖Γ_3‖ = ‖K(K̃_n − P_nKP_n)Q_n(K̃_n − P_nK)P_nu‖ ⩽ cnμ^{−k′n} ‖Q_n(K̃_n − P_nK)P_nu‖ ⩽ 2cnμ^{−k′n} ‖(K̃_n − P_nK)P_nu‖ ⩽ c_3 nμ^{−(m+k′)n}|u|_{H^m}.
Finally, substituting the estimates on Γ_i (i = 1, 2, 3) into (17) and noting the uniform boundedness of (I − KQ_nP_n)^{−1} together with S(n) = kμ^n, we obtain the desired estimate (23).
[1] Atkinson K E. The numerical solution of integral equations of the second kind[M]. Cambridge: Cambridge University Press, 1997.
[2] Kress R. Singular integral equations[M]. New York: Springer, 1999: 94-124. DOI:10.1007/978-1-4612-0559-3_7
[3] Beylkin G, Coifman R, Rokhlin V. Fast wavelet transforms and numerical algorithms I[J]. Communications on Pure and Applied Mathematics, 1991, 44(2): 141-183. DOI:10.1002/cpa.3160440202
[4] Harbrecht H, Schneider R. Wavelet Galerkin schemes for boundary integral equations: implementation and quadrature[J]. SIAM Journal on Scientific Computing, 2006, 27(4): 1347-1370. DOI:10.1137/S1064827503429387
[5] Harbrecht H, Multerer M. A fast direct solver for nonlocal operators in wavelet coordinates[J]. Journal of Computational Physics, 2021, 428: 110056. DOI:10.1016/j.jcp.2020.110056
[6] Huang M. A construction of multiscale bases for Petrov-Galerkin methods for integral equations[J]. Advances in Computational Mathematics, 2006, 25(1-3): 7-22. DOI:10.1007/S10444-003-7607-7
[7] Micchelli C A, Xu Y S, Zhao Y H. Wavelet Galerkin methods for second-kind integral equations[J]. Journal of Computational and Applied Mathematics, 1997, 86(1): 251-270. DOI:10.1016/S0377-0427(97)00160-X
[8] Cai H T, Xu Y S. A fast Fourier-Galerkin method for solving singular boundary integral equations[J]. SIAM Journal on Numerical Analysis, 2008, 46(4): 1965-1984. DOI:10.1137/070703478
[9] Jiang Y, Xu Y S. Fast Fourier-Galerkin methods for solving singular boundary integral equations: numerical integration and precondition[J]. Journal of Computational and Applied Mathematics, 2010, 234(9): 2792-2807. DOI:10.1016/j.cam.2010.01.022
[10] Wang B, Wang R, Xu Y S. Fast Fourier-Galerkin methods for first-kind logarithmic-kernel integral equations on open arcs[J]. Science in China Series A: Mathematics, 2010, 53(1): 1-22. DOI:10.1007/S11425-010-0014-X
[11] Jiang Y, Wang B, Xu Y S. A fast Fourier-Galerkin method solving a boundary integral equation for the biharmonic equation[J]. SIAM Journal on Numerical Analysis, 2014, 52(5): 2530-2554. DOI:10.1137/140955744
[12] Jiang Y, Wang B, Xu Y S. A fully discrete fast Fourier-Galerkin method solving a boundary integral equation for the biharmonic equation[J]. Journal of Scientific Computing, 2018, 76(3): 1594-1632. DOI:10.1007/S10915-018-0688-8
[13] Chen Z Y, Xu Y S. The Petrov-Galerkin and iterated Petrov-Galerkin methods for second-kind integral equations[J]. SIAM Journal on Numerical Analysis, 1998, 35(1): 406-434. DOI:10.1137/S0036142996297217
[14] Sloan I H. Superconvergence[M]//Numerical solution of integral equations. Mathematical Concepts and Methods in Science and Engineering: volume 42. New York: Plenum Press, 1990: 35-70.
[15] Chen Z Y, Micchelli C A, Xu Y S. The Petrov-Galerkin method for second kind integral equations Ⅱ: multiwavelet schemes[J]. Advances in Computational Mathematics, 1997, 7(3): 199-233. DOI:10.1023/A:1018994802659
[16] Chen Z Y, Micchelli C A, Xu Y S. Multiscale methods for Fredholm integral equations[M]. Cambridge: Cambridge University Press, 2015.