Journal of University of Chinese Academy of Sciences, 2023, Vol. 40, Issue (3): 289-296
Iterated fast wavelet Petrov-Galerkin methods for Fredholm integral equations of the second kind
YU Dandan, YAN Dunyan     
School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100190, China
Abstract: In this paper, we develop an iterated fast wavelet Petrov-Galerkin method for Fredholm integral equations of the second kind with a smooth kernel. The corresponding convergence and computational complexity are analyzed, and super-convergence can be achieved.
Keywords: Petrov-Galerkin method    integral equation    super convergence    iterated method    

There have been extensive research activities on numerical solutions of boundary integral equations. Many boundary value problems of partial differential equations can be reduced to Fredholm integral equations of the second kind. We consider in this paper only the classical Fredholm integral equations of the second kind, that is,

$u - \mathcal{K}u = f, \quad f \in X,$  (1)

where $u$ is unknown, $X$ is a Hilbert space, and $\mathcal{K}: X \to X$ is a compact operator with a smooth kernel function $K$ defined by

$(\mathcal{K}u)(t) := \int_I K(t,\tau)\,u(\tau)\,\mathrm{d}\tau, \quad t \in I := [0,1].$

The quadrature methods (Nyström methods) and projection methods (collocation and Galerkin methods) are the two families of methods commonly used for solving (1) numerically (see Refs. [1-2]). It is well known that Galerkin methods enjoy nice convergence properties but at the same time suffer from high computational costs. In Ref. [3], the authors found that Calderón-Zygmund integral operators have an almost sparse matrix representation under wavelet bases. This discovery inspired a series of studies on fast wavelet algorithms[4-7]. Based on a compression strategy, a fast wavelet Galerkin method was proposed in Ref. [7]. In Refs. [6-7], fast wavelet Petrov-Galerkin methods and fast collocation methods were developed for integral equations. More recently, fast Fourier-Galerkin methods were proposed in Refs. [8-12] for solving the boundary integral equations derived from boundary value problems for the Laplace equation or the biharmonic equation.

A theoretical framework for the analysis of super-convergence of the iterated Petrov-Galerkin method was established in Ref. [13]. Petrov-Galerkin methods allow the trial spaces and the test spaces to be different. We may therefore employ piecewise polynomials of higher degree for the trial spaces and piecewise polynomials of lower degree for the test spaces. In this way, we still obtain the same convergence rate as the Galerkin method, but at a lower computational cost.

In this paper, we aim to develop an iterated fast wavelet Petrov-Galerkin (IFPG) method for solving (1). The key idea is to combine the fast wavelet Petrov-Galerkin method with the Sloan iteration. The fast wavelet Petrov-Galerkin method is well studied and attains the optimal convergence rate while requiring only the solution of a sparse linear system. The well-known Sloan iteration is an example of iterative post-processing methods[14], which can achieve super-convergence. In this paper, we first apply a truncation strategy to the wavelet Petrov-Galerkin method to generate a sparse linear system. Then, we employ the Sloan iteration to improve the convergence rate. We prove that the proposed method achieves super-convergence while solving only a sparse linear system.

1 Iterated Petrov-Galerkin method

We will recall the standard iterated Petrov-Galerkin method for (1) in this section.

To state the Petrov-Galerkin method, let us begin with the trial spaces $X_n \subset X$, the test spaces $Y_n \subset X$, and the multiscale wavelet bases for them.

Let $S_k^m$ be the space of piecewise polynomials of degree at most $k-1$ on $I$ with respect to the partition of $I$ into $m$ equal subintervals. We choose positive integers $k$, $k'$, $\mu$ satisfying $k = k'\mu$ and define the trial spaces and test spaces

$X_n := S_k^{\mu^n}, \quad Y_n := S_{k'}^{\mu^{n+1}}, \quad n \in \mathbb{N}_0 := \{0, 1, 2, \dots\}.$

It was shown in Ref. [15] that $\{X_n, Y_n\}$ forms a regular pair of subspaces; such a pair is also called the $(k, k')$ element. It is straightforward to compute that

$\dim X_n = \dim Y_n =: S(n) = k\mu^n.$

For any $n \in \mathbb{N} := \{1, 2, \dots\}$, let $U_n$ be the set of lattice points in $\mathbb{R}^2$ defined as

$U_n := \{(i,j): j \in \mathbb{Z}_{w(i)},\ i \in \mathbb{Z}_{n+1}\},$

where $\mathbb{Z}_{n+1} := \{0, 1, 2, \dots, n\}$, and

$w(i) := \begin{cases} k, & i = 0, \\ k(\mu-1)\mu^{i-1}, & i \in \mathbb{N}. \end{cases}$  (2)
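To make the index bookkeeping concrete, the following short Python sketch builds the lattice index set $U_n$ and the level sizes $w(i)$ of (2) and checks that the total number of indices equals $S(n) = k\mu^n$; the parameter values $k$, $k'$, $\mu$ and the range of $n$ are illustrative choices, not taken from the paper.

```python
k, kp, mu = 4, 2, 2            # trial order k, test order k', refinement mu
assert k == kp * mu            # the relation k = k' * mu assumed in the text

def w(i):
    # number of wavelets on level i, cf. (2)
    return k if i == 0 else k * (mu - 1) * mu ** (i - 1)

def U(n):
    # U_n = {(i, j) : j in Z_{w(i)}, i in Z_{n+1}}
    return [(i, j) for i in range(n + 1) for j in range(w(i))]

for n in range(1, 7):
    assert len(U(n)) == k * mu ** n     # number of indices = S(n) = k * mu**n
print("number of indices in U_n equals S(n) = k * mu**n for n = 1,...,6")
```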

As constructed in section 2 of Ref. [6], let $\{f_{i,j}: (i,j) \in U_n\}$ and $\{g_{i,j}: (i,j) \in U_n\}$ be the bases for $X_n$ and $Y_n$, respectively. These bases enjoy the following properties. We denote by $H^m$ the classical Sobolev space.

Proposition 1.1  Assume that $\{f_{i,j}: (i,j) \in U_n\}$ and $\{g_{i,j}: (i,j) \in U_n\}$ are the bases for $X_n$ and $Y_n$, respectively, constructed as in section 2 of Ref. [6].

Then, we have

1) $\{f_{i,j}: (i,j) \in U_n\}$ is an orthonormal basis for $X_n$.

2) $\{g_{i,j}: (i,j) \in U_n\}$ and $\{f_{i',j'}: (i',j') \in U_n\}$ are semi-biorthogonal, namely

$\langle g_{i',j'}, f_{i,j}\rangle = \delta_{i'i}\delta_{j'j}, \quad i' \geqslant i.$

3) The functions $f_{i,j}$, $g_{i,j}$, $(i,j) \in U_n$ with $i \geqslant 1$, have vanishing moments of order $k$, i.e.,

$\langle f_{i,j}, h\rangle = 0, \quad \langle g_{i,j}, h\rangle = 0,$

where $h(t) := t^r$, $t \in I$, $r \in \mathbb{Z}_k$.

4) If $u \in H^m$ with $0 < m \leqslant k$, then

$\inf_{x_n \in X_n}\|u - x_n\| \leqslant c_{k,m}\mu^{-mn}|u|_{H^m},$

where $c_{k,m} = (k+1-m)^{-m}2^{-m+1/2}$.

5) The lengths of the supports of $f_{i,j}$ and $g_{i,j}$ satisfy

$\operatorname{meas}(\operatorname{supp}f_{i,j}) \leqslant \frac{1}{\mu^{i-1}}, \quad \operatorname{meas}(\operatorname{supp}g_{i,j}) \leqslant \frac{1}{\mu^{i-1}}, \quad i-1 \in \mathbb{Z}_n.$

Next, we recall the generalized best approximation as follows (see Ref. [16]).

Definition (generalized best approximation)  Given $x \in X$, an element $P_nx \in X_n$ is called a generalized best approximation from $X_n$ to $x$ with respect to $Y_n$ if it satisfies the equation

$\langle x - P_nx, y_n\rangle = 0 \quad \text{for all } y_n \in Y_n.$

Similarly, given $y \in X^* = X$, an element $P'_ny \in Y_n$ is called a generalized best approximation from $Y_n$ to $y$ with respect to $X_n$ if it satisfies the equation

$\langle x_n, y - P'_ny\rangle = 0 \quad \text{for all } x_n \in X_n.$

Since $Y_n^{\perp} \cap X_n = \{0\}$, the operator $P_n: X \to X_n$ is a projection.
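In finite dimensions the generalized best approximation is simply an oblique projection, which the following Python sketch illustrates; the ambient dimension, the randomly generated spanning sets standing in for the bases of $X_n$ and $Y_n$, and the use of the Euclidean inner product are all illustrative assumptions, not part of the construction above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 50, 8                       # ambient dimension and dim X_n = dim Y_n
F = rng.standard_normal((N, d))    # columns span the trial space X_n
G = rng.standard_normal((N, d))    # columns span the test space Y_n
x = rng.standard_normal(N)

# <x - P_n x, y> = 0 for all y in Y_n  <=>  G.T @ (x - P_n x) = 0,
# which gives the oblique projector  P_n = F (G.T F)^{-1} G.T.
Pn_x = F @ np.linalg.solve(G.T @ F, G.T @ x)

print(np.max(np.abs(G.T @ (x - Pn_x))))   # ~ 1e-14: residual orthogonal to Y_n
print(np.max(np.abs(F @ np.linalg.solve(G.T @ F, G.T @ Pn_x) - Pn_x)))  # P_n^2 = P_n
```

The residual $x - P_nx$ is orthogonal to every test vector, which is exactly the defining condition of the generalized best approximation.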

To represent the operator $P'_n$ explicitly, we also need a sequence of auxiliary functions $\{\xi_{i,j}: (i,j) \in U_n\}$ in $X_n$ (see Ref. [6]). These auxiliary functions enjoy the following properties.

1) For $i'-1 \in \mathbb{Z}_n$, $j' \in \mathbb{Z}_{w(i')}$, the functions $\xi_{i',j'}$ have vanishing moments of order $k'$, that is,

$\langle\xi_{i',j'}, t^r\rangle = 0, \quad r \in \mathbb{Z}_{k'},\ j' \in \mathbb{Z}_{w(i')},\ i'-1 \in \mathbb{Z}_n.$

2) The auxiliary functions $\{\xi_{i,j}: (i,j) \in U_n\}$ are biorthogonal to the functions $\{g_{i,j}: (i,j) \in U_n\}$, that is,

$\langle\xi_{i,j}, g_{i',j'}\rangle = \delta_{ii'}\delta_{jj'}, \quad (i,j), (i',j') \in U_n.$

These properties imply that the generalized best approximation from $Y_n$ to $y$ with respect to $X_n$ is

$P'_ny = \sum_{(i,j)\in U_n}\langle\xi_{i,j}, y\rangle\, g_{i,j}.$

With these preparations, we are ready to state the iterated Petrov-Galerkin method as follows.

The Petrov-Galerkin method for equation (1) is: find $u_n \in X_n$ such that

$\langle u_n - \mathcal{K}u_n, v_n\rangle = \langle f, v_n\rangle \quad \text{for all } v_n \in Y_n.$  (3)

The iterated projection method is defined by the Sloan iteration

$u'_n := f + \mathcal{K}u_n.$  (4)

It has been proved in Ref. [14] that the iterated projection approximation $u'_n$ satisfies the integral equation

$u'_n - \mathcal{K}P_nu'_n = f,$  (5)

which follows from the relation $P_nu'_n = u_n$.

To present the convergence results of the iterated Petrov-Galerkin method, we need the following estimate on projection operator Pn.

Lemma 1.1  If the kernel function $K \in C^k(I\times I)$, then, for any $i \in \mathbb{N}$, there exist constants $c_1$ and $c_2$ independent of $i$ such that

$\|\mathcal{K}(I - P_i)\| \leqslant c_1\mu^{-k'i}.$

Moreover, if $u \in H^m$ with $0 < m \leqslant k$, we have

$\|\mathcal{K}(I - P_i)u\| \leqslant c_2\mu^{-(m+k')i}|u|_{H^m}.$

Proof  Let $K_t := K(t,\cdot)$. Since $K \in C^k(I\times I)$, we have $K_t \in H^{k'}(I)$ for all $t \in I$, and

$\|K_t - P'_iK_t\| \leqslant c\mu^{-k'i}|K_t|_{H^{k'}} \leqslant c\,\mu^{-k'i}\sup_{t\in I}|K_t|_{H^{k'}},$ where $c$ is a constant independent of $t$ and $i$.

For any $\phi \in X$, we have $P'_iK_t \in Y_i$ and $\langle\phi - P_i\phi, P'_iK_t\rangle = 0$. Hence, for every $t \in I$,

$|(\mathcal{K}(I-P_i)\phi)(t)| = \Big|\int_I K(t,\tau)\big(\phi(\tau) - P_i\phi(\tau)\big)\,\mathrm{d}\tau\Big| = |\langle\phi - P_i\phi, K_t\rangle| = |\langle\phi - P_i\phi, K_t - P'_iK_t\rangle| \leqslant \|\phi - P_i\phi\|\,\|K_t - P'_iK_t\| \leqslant c_1\mu^{-k'i}\|\phi\|.$

This implies $\|\mathcal{K}(I-P_i)\| \leqslant c_1\mu^{-k'i}$.

If $u \in H^m$, there exists a positive constant $c_2$ such that $\|u - P_iu\| \leqslant c_2\mu^{-mi}|u|_{H^m}$. Then

$\|\mathcal{K}(I-P_i)u\| \leqslant \|u - P_iu\|\,\sup_{t\in I}\|K_t - P'_iK_t\| \leqslant c_1c_2\mu^{-(m+k')i}|u|_{H^m},$

from which the desired estimate follows.

Theorem 1.1  Assume that $\mathcal{K}$ is a compact operator on $X$ not having 1 as an eigenvalue. Let $u \in L^2(I)$ be the unique solution of (1), assume $u \in H^m$ with $0 < m \leqslant k$, and let $u'_n$ be the iterated projection approximation defined by (4). Then there exists a positive constant $C$ independent of $n$ for which

$\|u - u'_n\| \leqslant C\mu^{-(m+k')n}|u|_{H^m}.$  (6)

Proof  It has been proved in Ref. [14] that

$\|u - u'_n\| \leqslant C\|\mathcal{K}(I - P_n)\|\inf_{x_n\in X_n}\|u - x_n\|.$  (7)

From Lemma 1.1, we know that there exists a positive constant $c$ such that

$\|\mathcal{K}(I - P_n)\| \leqslant c\mu^{-k'n}.$

Meanwhile, by Proposition 1.1, we have

$\inf_{x_n\in X_n}\|u - x_n\| \leqslant c_{k,m}\mu^{-mn}|u|_{H^m}.$

Substituting the above two inequalities into (7), we obtain the estimate (6).
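The gain provided by the Sloan iteration (4) is easy to observe numerically. The sketch below is a minimal illustration, assuming a piecewise-constant Galerkin discretization assembled by midpoint quadrature in place of the wavelet Petrov-Galerkin scheme, a manufactured exact solution, and an illustrative smooth kernel; none of these choices come from the paper.

```python
import numpy as np

Kfun = lambda t, tau: np.exp(-t * tau)         # illustrative smooth kernel
u_ex = lambda t: np.cos(np.pi * t)             # manufactured exact solution

M = 2000                                       # fine midpoint quadrature grid
tq = (np.arange(M) + 0.5) / M

def apply_K(vals_on_tq, t):
    # (K v)(t) ~= sum_q K(t, tau_q) v(tau_q) / M   (midpoint rule)
    return Kfun(t[:, None], tq[None, :]) @ vals_on_tq / M

f_on_tq = u_ex(tq) - apply_K(u_ex(tq), tq)     # f := u - K u, so u_ex solves (1)

def f(t):
    return u_ex(t) - apply_K(u_ex(tq), t)

for n in (8, 16, 32, 64):
    h = 1.0 / n
    tm = (np.arange(n) + 0.5) * h              # cell midpoints
    # piecewise-constant Galerkin (midpoint assembled):
    #   c_i - sum_j h K(t_i, t_j) c_j = f(t_i)
    c = np.linalg.solve(np.eye(n) - h * Kfun(tm[:, None], tm[None, :]), f(tm))
    u_n = c[np.minimum((tq / h).astype(int), n - 1)]   # u_n evaluated on tq
    u_it = f_on_tq + apply_K(u_n, tq)                  # Sloan iterate u'_n on tq
    print(f"n={n:3d}  ||u-u_n||_inf={np.max(np.abs(u_ex(tq) - u_n)):.2e}"
          f"  ||u-u'_n||_inf={np.max(np.abs(u_ex(tq) - u_it)):.2e}")
```

In this toy setting the error of $u_n$ decreases roughly like $h$ while the error of the iterate decreases roughly like $h^2$, which is the super-convergence effect established above.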

This is the super-convergence property of the iterated method. It is well known that the Galerkin method has high-order accuracy but suffers from the high cost of generating the coefficient matrix. A question then arises naturally: can we design a fast method that preserves the super-convergence property at quasi-linear computational cost? We give our answer in the following sections.

2 Iterated fast wavelet Petrov-Galerkin methods

This section will present a fast and effective algorithm by truncating the dense coefficient matrix to obtain a sparse one. The number of nonzero entries of the sparse matrix will be estimated.

We first show the discrete form of equation (3). By introducing the vector notations

$\mathbf{c}_n := [c_{i,j}]_{(i,j)\in U_n}, \quad \mathbf{g}_n := [\langle g_{i,j}, f\rangle]_{(i,j)\in U_n},$

and matrix notations

$\mathbf{E}_n := [\langle g_{i',j'}, f_{i,j}\rangle]_{(i',j'),(i,j)\in U_n}, \quad \mathbf{K}_n := [\mathbf{K}_{i',i}]_{i',i\in\mathbb{Z}_{n+1}},$

where $\mathbf{K}_{i',i} := [\langle g_{i',j'}, \mathcal{K}f_{i,j}\rangle]_{j'\in\mathbb{Z}_{w(i')},\,j\in\mathbb{Z}_{w(i)}}$, equation (3) can be rewritten as

$(\mathbf{E}_n - \mathbf{K}_n)\mathbf{c}_n = \mathbf{g}_n.$  (8)

In equation (8), the matrix $\mathbf{E}_n$ is sparse, but $\mathbf{K}_n$ is dense. Fortunately, the entries of $\mathbf{K}_n$ decay rapidly, and many of them are insignificant. This observation makes it possible to obtain a sparse matrix by discarding the tiny entries. The following two lemmas give estimates on the decay of the entries of $\mathbf{K}_n$.

Lemma 2.1  If the kernel function $K \in C^k(I\times I)$, then there exists a positive constant $c$ independent of $n$ such that, for all $(i,j), (i',j') \in U_n$,

$|K_{i',j';\,i,j}| \leqslant c\mu^{-(i+i')(k+1/2)},$  (9)

where $K_{i',j';\,i,j} := \langle g_{i',j'}, \mathcal{K}f_{i,j}\rangle$.

Proof  By Taylor's theorem, the kernel $K$ can be written as

$K = K_1 + K_2 + K_3,$

where $K_1(t,\cdot)$ and $K_2(\cdot,\tau)$ are polynomials of degree less than $k$ in $\tau$ and in $t$, respectively, and $K_3$ is the remainder term in integral form, that is,

$K_3(t,\tau) := \frac{(t-t_0)^k(\tau-\tau_0)^k}{k!\,k!}\,\gamma_{k,k}(t,\tau),$

where $\tau_0 \in I_{i,j} := \operatorname{supp}f_{i,j}$, $t_0 \in I'_{i',j'} := \operatorname{supp}g_{i',j'}$, and

$\gamma_{k,k}(t,\tau) := \int_I\int_I K^{(k,k)}\big(t_0 + \theta_1(t-t_0),\, \tau_0 + \theta_2(\tau-\tau_0)\big)(1-\theta_1)^{k-1}(1-\theta_2)^{k-1}\,\mathrm{d}\theta_1\,\mathrm{d}\theta_2.$

It can be proved that there exists a constant $C > 0$ such that, for all $(t,\tau) \in I^2$,

$|\gamma_{k,k}(t,\tau)| \leqslant C.$

Using the vanishing moments of $f_{i,j}$ and $g_{i',j'}$, we have

$|K_{i',j';\,i,j}| = \Big|\int_I\int_I K(t,\tau)g_{i',j'}(t)f_{i,j}(\tau)\,\mathrm{d}t\,\mathrm{d}\tau\Big| = \Big|\int_I\int_I K_3(t,\tau)g_{i',j'}(t)f_{i,j}(\tau)\,\mathrm{d}t\,\mathrm{d}\tau\Big| \leqslant \|f_{i,j}\|_\infty\|g_{i',j'}\|_\infty\int_{I'_{i',j'}}\int_{I_{i,j}}|K_3(t,\tau)|\,\mathrm{d}t\,\mathrm{d}\tau \leqslant \frac{C}{k!\,k!}\|f_{i,j}\|_\infty\|g_{i',j'}\|_\infty\int_{I'_{i',j'}}\int_{I_{i,j}}|(t-t_0)^k(\tau-\tau_0)^k|\,\mathrm{d}t\,\mathrm{d}\tau \leqslant C_1\|f_{i,j}\|_\infty\|g_{i',j'}\|_\infty\big(|I_{i,j}|\big)^{k+1}\big(|I'_{i',j'}|\big)^{k+1}.$

By Proposition 1.1, we know that, for all $j \in \mathbb{Z}_{w(i)}$, $j' \in \mathbb{Z}_{w(i')}$, $i-1 \in \mathbb{Z}_n$, $i'-1 \in \mathbb{Z}_n$,

$\sup_{\tau\in I_{i,j}}|f_{i,j}(\tau)| \leqslant c\mu^{\frac{i-1}{2}}, \quad \sup_{t\in I'_{i',j'}}|g_{i',j'}(t)| \leqslant c\mu^{\frac{i'-1}{2}}.$  (10)

The rest of the proof is divided into four cases, depending on whether $i$ and $i'$ are zero.

Case 1  Assume $i \geqslant 1$ and $i' \geqslant 1$. There exists a positive constant $c_1$ such that

$|K_{i',j';\,i,j}| \leqslant c\,\mu^{\frac{i-1}{2}}\mu^{\frac{i'-1}{2}}\mu^{-(k+1)(i-1)}\mu^{-(k+1)(i'-1)} \leqslant c_1\mu^{-(k+1/2)(i+i')}.$

Case 2  Assume $i = 0$ and $i' \geqslant 1$. There exists a positive constant $c_2$ such that

$|K_{i',j';\,0,j}| \leqslant C_1\|f_{0,j}\|_\infty\|g_{i',j'}\|_\infty\big(|I'_{i',j'}|\big)^{k+1} \leqslant c_2\mu^{-i'(k+1/2)},$

by noting that $w(0) = k$.

Case 3  Assume $i \geqslant 1$ and $i' = 0$. There exists a positive constant $c_3$ such that

$|K_{0,j';\,i,j}| \leqslant C_1\|f_{i,j}\|_\infty\|g_{0,j'}\|_\infty\big(|I_{i,j}|\big)^{k+1} \leqslant c_3\mu^{-i(k+1/2)}.$

Case 4  Assume $i = 0$ and $i' = 0$. There exists a positive constant $c_4$ such that

$|K_{0,j';\,0,j}| \leqslant c_4.$

Taking c∶=max{c1, c2, c3, c4}, we obtain the desired estimate (9).
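The decay predicted by Lemma 2.1 can be checked numerically. The following sketch is a rough illustration that uses the Haar system ($k = k' = 1$, $\mu = 2$) as a stand-in for the multiscale bases of Ref. [6] and an illustrative smooth kernel; it reports the largest entry of each block $\mathbf{K}_{i',i}$ next to the predicted rate $\mu^{-(i+i')(k+1/2)} = 2^{-3(i+i')/2}$. The kernel, the grid resolution, and the number of levels are illustrative assumptions.

```python
import numpy as np

Kfun = lambda t, tau: np.exp(-t * tau)        # illustrative smooth kernel
L = 10                                        # quadrature grid of 2**L midpoints
N = 2 ** L
tq = (np.arange(N) + 0.5) / N
Kq = Kfun(tq[:, None], tq[None, :]) / N ** 2  # kernel values times quadrature weights

def haar(i, j):
    """Values on tq of the level-i Haar wavelet (i = 0 gives the constant 1)."""
    if i == 0:
        return np.ones(N)
    m = 2 ** (i - 1)                          # number of wavelets on level i
    x = m * tq - j                            # local coordinate on the support
    v = np.zeros(N)
    v[(0.0 <= x) & (x < 0.5)] = np.sqrt(m)
    v[(0.5 <= x) & (x < 1.0)] = -np.sqrt(m)
    return v

w = lambda i: 1 if i == 0 else 2 ** (i - 1)
def level_matrix(i):
    return np.vstack([haar(i, j) for j in range(w(i))])  # w(i) x N

n = 6
for ip in range(n + 1):                       # i': level of the test wavelet
    Gp = level_matrix(ip)
    for i in range(n + 1):                    # i : level of the trial wavelet
        block = Gp @ Kq @ level_matrix(i).T   # stand-in for the block K_{i',i}
        print(f"i'={ip} i={i}  max|entry|={np.max(np.abs(block)):.1e}"
              f"  rate 2^(-1.5(i+i'))={2.0 ** (-1.5 * (ip + i)):.1e}")
```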

Lemma 2.2  If the kernel function $K \in C^k(I\times I)$, then there exists a positive constant $C$ independent of $n$ such that, for all $i, i' \in \mathbb{Z}_{n+1}$,

$\|\mathbf{K}_{i',i}\|_2 \leqslant C\mu^{-k(i+i')}.$

Proof  By Lemma 2.1, we have

$\|\mathbf{K}_{i',i}\|_2^2 \leqslant \|\mathbf{K}_{i',i}\|_F^2 = \sum_{j'\in\mathbb{Z}_{w(i')}}\sum_{j\in\mathbb{Z}_{w(i)}}|K_{i',j';\,i,j}|^2 \leqslant c^2w(i)w(i')\mu^{-(i+i')(2k+1)}.$

Combining this with (2), we get

$\|\mathbf{K}_{i',i}\|_2^2 \leqslant c^2k^2(\mu-1)^2\mu^{i-1}\mu^{i'-1}\mu^{-(i+i')(2k+1)} \leqslant C^2\mu^{-2k(i+i')},$

which completes the proof.

With the estimate in Lemma 2.2, we now define the truncated matrix. Let

$\tilde{\mathbf{K}}_n := [\tilde{\mathbf{K}}_{i',i}]_{i',i\in\mathbb{Z}_{n+1}},$

where

$\tilde{\mathbf{K}}_{i',i} := \begin{cases}\mathbf{K}_{i',i}, & i + i' \leqslant n,\\ 0, & \text{else}.\end{cases}$  (11)

Replacing $\mathbf{K}_n$ in (8) by the truncated matrix $\tilde{\mathbf{K}}_n$, we get the sparse linear system

$(\mathbf{E}_n - \tilde{\mathbf{K}}_n)\tilde{\mathbf{c}}_n = \mathbf{g}_n,$  (12)

where $\tilde{\mathbf{c}}_n := [\tilde{c}_{i,j}]_{(i,j)\in U_n}$ is the unknown.
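The truncation (11) acts block-wise: an entry of $\mathbf{K}_n$ survives only if the levels of its row and column blocks satisfy $i + i' \leqslant n$. The following sketch builds this block pattern with random placeholder entries; only the block sizes $w(i)$ and the truncation rule follow the text, while $k$, $\mu$, and $n$ are illustrative choices.

```python
import numpy as np

k, mu, n = 2, 2, 6
w = lambda i: k if i == 0 else k * (mu - 1) * mu ** (i - 1)
offs = np.concatenate(([0], np.cumsum([w(i) for i in range(n + 1)])))
S = int(offs[-1])                              # S(n) = k * mu**n
assert S == k * mu ** n

K_full = np.random.default_rng(0).standard_normal((S, S))   # stand-in for K_n
K_trunc = np.zeros_like(K_full)
for ip in range(n + 1):                        # i': block row level
    for i in range(n + 1):                     # i : block column level
        if ip + i <= n:                        # truncation rule (11)
            rows = slice(offs[ip], offs[ip + 1])
            cols = slice(offs[i], offs[i + 1])
            K_trunc[rows, cols] = K_full[rows, cols]

print(f"S(n) = {S}, retained entries = {np.count_nonzero(K_trunc)} of {S * S}")
```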

We can now present the fast method as follows.

Algorithm 2.1  Iterated fast wavelet Petrov-Galerkin method

Step 1  Solve the linear system (12) to obtain $\tilde{\mathbf{c}}_n$.

Step 2  Let

$\tilde{u}_n := \sum_{(i,j)\in U_n}\tilde{c}_{i,j}f_{i,j}.$

Step 3  Let the iterated approximation be

$\tilde{u}'_n := \mathcal{K}\tilde{u}_n + f.$  (13)

To show the sparsity, we present an estimate of the number of nonzero entries of the matrix $\mathbf{E}_n - \tilde{\mathbf{K}}_n$, which has been proved in Ref. [16]. We use $\mathcal{N}(\mathbf{M})$ to denote the number of nonzero entries of a matrix $\mathbf{M}$.

Theorem 2.1  There exists a positive constant $C$ such that for all $n \in \mathbb{N}$,

$\mathcal{N}(\mathbf{E}_n - \tilde{\mathbf{K}}_n) \leqslant CS(n)\log S(n).$
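A quick way to see the quasi-linear growth asserted by Theorem 2.1 is to count, level by level, the entries retained by the truncation (11), as in the sketch below; it counts only the truncated kernel matrix (the sparse matrix $\mathbf{E}_n$ is ignored), and $k$ and $\mu$ are illustrative choices.

```python
import math

k, mu = 2, 2
w = lambda i: k if i == 0 else k * (mu - 1) * mu ** (i - 1)
for n in range(2, 11):
    S = k * mu ** n
    kept = sum(w(ip) * w(i)
               for ip in range(n + 1) for i in range(n + 1) if ip + i <= n)
    print(f"n={n:2d}  S(n)={S:5d}  kept={kept:7d}"
          f"  kept/(S*logS)={kept / (S * math.log(S)):.3f}")
```

The last column settles near a constant, consistent with the $O(S(n)\log S(n))$ bound.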
3 Convergence analysis

In this section, we will analyze the convergence of the proposed IFPG method.

Let $\tilde{\mathcal{K}}_n$ be the bounded linear operator on $X_n$ whose matrix representation relative to the basis $\{f_{i,j}: (i,j)\in U_n\}$ is $\mathbf{E}_n^{-1}\tilde{\mathbf{K}}_n$. It has been proved in Ref. [16, Theorem 3.4.12] that if $u \in H^m$, then there exists a constant $c > 0$ such that

$\|(\tilde{\mathcal{K}}_n - \mathcal{K}_n)P_nu\| \leqslant c\mu^{-mn}|u|_{H^m},$  (14)

where $\mathcal{K}_n := P_n\mathcal{K}P_n$.

With the help of the estimate (14), we can draw a conclusion on the iterated projection approximation.

Theorem 3.1  Assume that $\mathcal{K}$ is a compact operator on $X$ not having 1 as an eigenvalue. Then, for sufficiently large $n$, the iterated projection approximation $\tilde{u}'_n$ defined by (13) satisfies

$\tilde{u}'_n - \mathcal{K}Q_nP_n\tilde{u}'_n = f,$  (15)

where $Q_n := (I + P_n\mathcal{K}P_n - \tilde{\mathcal{K}}_n)^{-1}$.

Proof  From (13), we have

$P_n\tilde{u}'_n = P_n\mathcal{K}\tilde{u}_n + P_nf = P_n\mathcal{K}\tilde{u}_n + (\tilde{u}_n - \tilde{\mathcal{K}}_n\tilde{u}_n) = (I + P_n\mathcal{K}P_n - \tilde{\mathcal{K}}_n)\tilde{u}_n.$

Since $\lim_{n\to\infty}\|P_n\mathcal{K}P_n - \tilde{\mathcal{K}}_n\| = 0$, there exists $N > 0$ such that, for all $n \geqslant N$,

$\|P_n\mathcal{K}P_n - \tilde{\mathcal{K}}_n\| < \frac{1}{2}.$

By the geometric series theorem, the inverse operator $(I + P_n\mathcal{K}P_n - \tilde{\mathcal{K}}_n)^{-1}$ exists, and

$\|(I + P_n\mathcal{K}P_n - \tilde{\mathcal{K}}_n)^{-1}\| \leqslant \frac{1}{1 - \|P_n\mathcal{K}P_n - \tilde{\mathcal{K}}_n\|} < 2.$

Letting $Q_n := (I + P_n\mathcal{K}P_n - \tilde{\mathcal{K}}_n)^{-1}$, we then have

$\tilde{u}_n = Q_nP_n\tilde{u}'_n.$  (16)

Substituting (16) into (13), we obtain (15).

Since

$\mathcal{K}Q_nP_n - \mathcal{K} = \mathcal{K}Q_n(P_n - I) + \mathcal{K}Q_n(\tilde{\mathcal{K}}_n - P_n\mathcal{K}P_n),$

we can claim that

$\lim_{n\to\infty}\|\mathcal{K}Q_nP_n - \mathcal{K}\| = 0.$

Hence, for sufficiently large $n$, the inverse $(I - \mathcal{K}Q_nP_n)^{-1}$ exists, and equation (15) has a unique solution $\tilde{u}'_n$.

From equations (1) and (15), we have

$\tilde{u}'_n - u = (I - \mathcal{K}Q_nP_n)^{-1}(\mathcal{K}Q_nP_n - \mathcal{K})u = (I - \mathcal{K}Q_nP_n)^{-1}(\Gamma_1 + \Gamma_2 + \Gamma_3),$  (17)

where

$\Gamma_1 := (\mathcal{K}P_n - \mathcal{K})u,$

$\Gamma_2 := \mathcal{K}(\tilde{\mathcal{K}}_n - P_n\mathcal{K})P_nu,$

$\Gamma_3 := \mathcal{K}(\tilde{\mathcal{K}}_n - P_n\mathcal{K}P_n)Q_n(\tilde{\mathcal{K}}_n - P_n\mathcal{K})P_nu.$

To analyze the convergence, we next focus on the estimate of Γi, i=1, 2, 3.

The estimate on Γ1 has been presented in Lemma 1.1. By Lemmas 1.1 and 2.2, we can state the following estimate on Γ2.

Lemma 3.1  If $u \in H^m$ with $0 < m \leqslant k$, then, for sufficiently large $n$, there exists a positive constant $c$ independent of $n$ such that

$\|\mathcal{K}(\tilde{\mathcal{K}}_n - \mathcal{K}_n)P_nu\| \leqslant cn\mu^{-(m+k')n}|u|_{H^m}.$

Proof  We know that

$\|\mathcal{K}(\tilde{\mathcal{K}}_n - \mathcal{K}_n)P_nu\| = \sup_{v\in X,\,v\neq 0}\frac{|\langle\mathcal{K}(\tilde{\mathcal{K}}_n - \mathcal{K}_n)P_nu, v\rangle|}{\|v\|} = \sup_{v\in X,\,v\neq 0}\frac{|\langle(\tilde{\mathcal{K}}_n - \mathcal{K}_n)P_nu, \mathcal{K}^*v\rangle|}{\|v\|} = \sup_{v\in X,\,v\neq 0}\frac{|\langle(\tilde{\mathcal{K}}_n - \mathcal{K}_n)P_nu, P'_n\mathcal{K}^*v\rangle|}{\|v\|},$

where $\mathcal{K}^*$ denotes the adjoint of $\mathcal{K}$ and the last equality holds since $(\tilde{\mathcal{K}}_n - \mathcal{K}_n)P_nu \in X_n$.

Since $P_nu \in X_n$ and $P'_n\mathcal{K}^*v \in Y_n$, we can represent them as $P_nu = \sum_{(i,j)\in U_n}\langle P_nu, f_{i,j}\rangle f_{i,j}$ and $P'_n\mathcal{K}^*v = \sum_{(i,j)\in U_n}\langle\xi_{i,j}, \mathcal{K}^*v\rangle g_{i,j}$.

Hence,

$|\langle(\tilde{\mathcal{K}}_n - \mathcal{K}_n)P_nu, P'_n\mathcal{K}^*v\rangle| = \Big|\sum_{i,i'\in\mathbb{Z}_{n+1}}\sum_{j\in\mathbb{Z}_{w(i)},\,j'\in\mathbb{Z}_{w(i')}}\langle P_nu, f_{i,j}\rangle\langle\xi_{i',j'}, \mathcal{K}^*v\rangle\big\langle(\tilde{\mathcal{K}}_n - \mathcal{K}_n)f_{i,j}, g_{i',j'}\big\rangle\Big| \leqslant \sum_{i,i'\in\mathbb{Z}_{n+1}}\|\tilde{\mathbf{K}}_{i',i} - \mathbf{K}_{i',i}\|_2\,\big\|[\langle P_nu, f_{i,j}\rangle]_{j\in\mathbb{Z}_{w(i)}}\big\|_2\,\big\|[\langle\xi_{i',j'}, \mathcal{K}^*v\rangle]_{j'\in\mathbb{Z}_{w(i')}}\big\|_2 \leqslant \sum_{i,i'\in\mathbb{Z}_{n+1},\,i+i'>n}\|\mathbf{K}_{i',i}\|_2\,\big\|[\langle P_nu, f_{i,j}\rangle]_{j\in\mathbb{Z}_{w(i)}}\big\|_2\,\big\|[\langle\xi_{i',j'}, \mathcal{K}^*v\rangle]_{j'\in\mathbb{Z}_{w(i')}}\big\|_2.$  (18)

By Lemma 2.2, there exists a constant c1>0 such that,

$\|\mathbf{K}_{i',i}\|_2 \leqslant c_1\mu^{-k(i+i')}, \quad i, i' \in \mathbb{Z}_{n+1}.$  (19)

To continue the analysis, we estimate $\big\|[\langle P_nu, f_{i,j}\rangle]_{j\in\mathbb{Z}_{w(i)}}\big\|_2$ and $\big\|[\langle\xi_{i',j'}, \mathcal{K}^*v\rangle]_{j'\in\mathbb{Z}_{w(i')}}\big\|_2$ separately.

Noting that $\langle P_nu, f_{i,j}\rangle = \langle P_nu - u, f_{i,j}\rangle + \langle u, f_{i,j}\rangle$, we have

$\big\|[\langle P_nu, f_{i,j}\rangle]_{j\in\mathbb{Z}_{w(i)}}\big\|_2 \leqslant \big\|[\langle P_nu - u, f_{i,j}\rangle]_{j\in\mathbb{Z}_{w(i)}}\big\|_2 + \big\|[\langle u, f_{i,j}\rangle]_{j\in\mathbb{Z}_{w(i)}}\big\|_2.$

Meanwhile,

$\big\|[\langle P_nu - u, f_{i,j}\rangle]_{j\in\mathbb{Z}_{w(i)}}\big\|_2 \leqslant \|P_nu - u\| \leqslant c\mu^{-mn}|u|_{H^m} \leqslant c\mu^{-mi}|u|_{H^m}, \qquad \big\|[\langle u, f_{i,j}\rangle]_{j\in\mathbb{Z}_{w(i)}}\big\|_2 = \|(\chi_i - \chi_{i-1})u\| \leqslant c\mu^{-mi}|u|_{H^m},$

where $\chi_i$ denotes the orthogonal projection from $X$ onto $X_i$.

Hence, we can conclude that

$\big\|[\langle P_nu, f_{i,j}\rangle]_{j\in\mathbb{Z}_{w(i)}}\big\|_2 \leqslant c\mu^{-mi}|u|_{H^m}.$  (20)

For $\big\|[\langle\xi_{i',j'}, \mathcal{K}^*v\rangle]_{j'\in\mathbb{Z}_{w(i')}}\big\|_2$, we introduce the notation

$\mathbf{h}_{i'} := \big[\langle\xi_{p,q}, (P'_{i'} - P'_{i'-1})\mathcal{K}^*v\rangle : (p,q) \in U_{i'}\big].$

Since the bases $\{\xi_{i,j}\}$ and $\{g_{i,j}\}$ are biorthogonal, we get

$\langle\xi_{i',q}, (P'_{i'} - P'_{i'-1})\mathcal{K}^*v\rangle = \Big\langle\xi_{i',q}, \sum_{j'\in\mathbb{Z}_{w(i')}}\langle\xi_{i',j'}, \mathcal{K}^*v\rangle g_{i',j'}\Big\rangle = \langle\xi_{i',q}, \mathcal{K}^*v\rangle, \quad q \in \mathbb{Z}_{w(i')},$

by noting that

$(P'_{i'} - P'_{i'-1})\mathcal{K}^*v = \sum_{j'\in\mathbb{Z}_{w(i')}}\langle\xi_{i',j'}, \mathcal{K}^*v\rangle g_{i',j'} \in Y_{i'}.$

Thus,

$\big\|[\langle\xi_{i',j'}, \mathcal{K}^*v\rangle]_{j'\in\mathbb{Z}_{w(i')}}\big\|_2 = \|\mathbf{h}_{i'}\|_{l^2}.$  (21)

By Corollary 3.5 in Ref. [6], there exists a constant $c_2 > 0$ such that

$\|\mathbf{h}_{i'}\|_{l^2} \leqslant c_2\|(P'_{i'} - P'_{i'-1})\mathcal{K}^*v\|.$

Then, by an argument analogous to Lemma 1.1, the estimate (21) can be continued as

$\big\|[\langle\xi_{i',j'}, \mathcal{K}^*v\rangle]_{j'\in\mathbb{Z}_{w(i')}}\big\|_2 \leqslant c_2\|(P'_{i'} - P'_{i'-1})\mathcal{K}^*v\| \leqslant c_2\|(P'_{i'} - P'_{i'-1})\mathcal{K}^*\|\,\|v\| \leqslant c\mu^{-k'i'}\|v\|.$  (22)

Substituting (19), (20), and (22) into (18), we have

$|\langle(\tilde{\mathcal{K}}_n - \mathcal{K}_n)P_nu, P'_n\mathcal{K}^*v\rangle| \leqslant c\sum_{i,i'\in\mathbb{Z}_{n+1},\,i+i'>n}\mu^{-k(i+i')}\mu^{-k'i'}\mu^{-mi}\,|u|_{H^m}\|v\|.$

Noting that

$\sum_{i,i'\in\mathbb{Z}_{n+1},\,i+i'>n}\mu^{-k(i+i')}\mu^{-k'i'}\mu^{-mi} = \sum_{i,i'\in\mathbb{Z}_{n+1},\,i+i'>n}\mu^{-(k-k')i}\mu^{-(k-m)i'}\mu^{-(m+k')(i+i')} \leqslant \mu^{-(m+k')n}\sum_{i,i'\in\mathbb{Z}_{n+1},\,i+i'>n}\mu^{-(k'+m)(i+i'-n)} \leqslant cn\mu^{-(m+k')n},$

we then have

$\|\mathcal{K}(\tilde{\mathcal{K}}_n - \mathcal{K}_n)P_nu\| \leqslant cn\mu^{-(m+k')n}|u|_{H^m},$

which completes the proof.

Now, we are ready to present the convergence results.

Theorem 3.2  Assume that $\mathcal{K}$ is a compact operator on $X$ not having 1 as an eigenvalue. For $0 < m \leqslant k$, if $u \in H^m$ and $K \in C^k(I\times I)$, then, for sufficiently large $n$, the iterated projection approximation $\tilde{u}'_n$ defined by (13) satisfies

$\|u - \tilde{u}'_n\| \leqslant cS(n)^{-(m+k')}\log S(n)\,|u|_{H^m},$  (23)

where c is a constant independent of n.

Proof  From Lemma 1.1 and Lemma 3.1, we know there exist positive constants c1 and c2 such that

$\|\Gamma_1\| \leqslant c_1\mu^{-(m+k')n}|u|_{H^m}, \qquad \|\Gamma_2\| \leqslant c_2n\mu^{-(m+k')n}|u|_{H^m}.$  (24)

Next, let us focus on Γ3. For any ϕXn, we have

$\|\mathcal{K}(\tilde{\mathcal{K}}_n - \mathcal{K}_n)\phi\| = \sup_{v\in X,\,v\neq 0}\frac{|\langle\mathcal{K}(\tilde{\mathcal{K}}_n - \mathcal{K}_n)\phi, v\rangle|}{\|v\|} = \sup_{v\in X,\,v\neq 0}\frac{|\langle(\tilde{\mathcal{K}}_n - \mathcal{K}_n)\phi, \mathcal{K}^*v\rangle|}{\|v\|} = \sup_{v\in X,\,v\neq 0}\frac{|\langle(\tilde{\mathcal{K}}_n - \mathcal{K}_n)\phi, P'_n\mathcal{K}^*v\rangle|}{\|v\|},$

where

$|\langle(\tilde{\mathcal{K}}_n - \mathcal{K}_n)\phi, P'_n\mathcal{K}^*v\rangle| \leqslant \sum_{i,i'\in\mathbb{Z}_{n+1},\,i+i'>n}\|\mathbf{K}_{i',i}\|_2\,\big\|[\langle\phi, f_{i,j}\rangle]_{j\in\mathbb{Z}_{w(i)}}\big\|_2\,\big\|[\langle\xi_{i',j'}, \mathcal{K}^*v\rangle]_{j'\in\mathbb{Z}_{w(i')}}\big\|_2.$

Since $\big\|[\langle\phi, f_{i,j}\rangle]_{j\in\mathbb{Z}_{w(i)}}\big\|_2 \leqslant \|\phi\|$, by employing Lemma 2.2 and the inequality (22) we obtain

$\|\mathcal{K}(\tilde{\mathcal{K}}_n - \mathcal{K}_n)\phi\| \leqslant c\sum_{i,i'\in\mathbb{Z}_{n+1},\,i+i'>n}\mu^{-k(i+i')}\mu^{-k'i'}\|\phi\| \leqslant cn\mu^{-k'n}\|\phi\|.$

Then, by the fact that $Q_n(\tilde{\mathcal{K}}_n - P_n\mathcal{K})P_nu \in X_n$ and the inequality (14), we know that

$\|\Gamma_3\| = \|\mathcal{K}(\tilde{\mathcal{K}}_n - P_n\mathcal{K}P_n)Q_n(\tilde{\mathcal{K}}_n - P_n\mathcal{K})P_nu\| \leqslant cn\mu^{-k'n}\|(\tilde{\mathcal{K}}_n - P_n\mathcal{K})P_nu\| \leqslant c_3n\mu^{-(m+k')n}|u|_{H^m}.$

Finally, substituting the estimates on $\Gamma_i$ ($i = 1, 2, 3$) into (17) and noting the uniform boundedness of $(I - \mathcal{K}Q_nP_n)^{-1}$, we obtain the desired estimate (23).

References
[1]
Atkinson K E. The numerical solution of integral equations of the second kind[M]. Cambridge: Cambridge University Press, 1997.
[2]
Kress R. Singular integral equations[M]. New York, NY: Springer New York, 1999: 94-124. DOI:10.1007/978-1-4612-0559-3_7
[3]
Beylkin G, Coifman R, Rokhlin V. Fast wavelet transforms and numerical algorithms I[J]. Communications on Pure and Applied Mathematics, 1991, 44(2): 141-183. DOI:10.1002/cpa.3160440202
[4]
Harbrecht H, Schneider R. Wavelet Galerkin schemes for boundary integral equations-implementation and quadrature[J]. SIAM Journal on Scientific Computing, 2006, 27(4): 1347-1370. DOI:10.1137/S1064827503429387
[5]
Harbrecht H, Multerer M. A fast direct solver for nonlocal operators in wavelet coordinates[J]. Journal of Computational Physics, 2021, 428: 110056. DOI:10.1016/j.jcp.2020.110056
[6]
Huang M. A construction of multiscale bases for Petrov-Galerkin methods for integral equations[J]. Advances in Computational Mathematics, 2006, 25(1-3): 7-22. DOI:10.1007/S10444-003-7607-7
[7]
Micchelli C A, Xu Y S, Zhao Y H. Wavelet Galerkin methods for second-kind integral equations[J]. Journal of Computational and Applied Mathematics, 1997, 86(1): 251-270. DOI:10.1016/S0377-0427(97)00160-X
[8]
Cai H T, Xu Y S. A fast Fourier-Galerkin method for solving singular boundary integral equations[J]. SIAM Journal on Numerical Analysis, 2008, 46(4): 1965-1984. DOI:10.1137/070703478
[9]
Jiang Y, Xu Y S. Fast Fourier-Galerkin methods for solving singular boundary integral equations: Numerical integration and precondition[J]. Journal of Computational and Applied Mathematics, 2010, 234(9): 2792-2807. DOI:10.1016/j.cam.2010.01.022
[10]
Wang B, Wang R, Xu Y S. Fast Fourier-Galerkin methods for first-kind logarithmic-kernel integral equations on open arcs[J]. Science in China Series A: Mathematics, 2010, 53(1): 1-22. DOI:10.1007/S11425-010-0014-X
[11]
Jiang Y, Wang B, Xu Y S. A fast Fourier-Galerkin method solving a boundary integral equation for the biharmonic equation[J]. SIAM Journal on Numerical Analysis, 2014, 52(5): 2530-2554. DOI:10.1137/140955744
[12]
Jiang Y, Wang B, Xu Y S. A fully discrete fast Fourier-Galerkin method solving a boundary integral equation for the biharmonic equation[J]. Journal of Scientific Computing, 2018, 76(3): 1594-1632. DOI:10.1007/S10915-018-0688-8
[13]
Chen Z Y, Xu Y S. The Petrov-Galerkin and iterated Petrov-Galerkin methods for second-kind integral equations[J]. SIAM Journal on Numerical Analysis, 1998, 35(1): 406-434. DOI:10.1137/S0036142996297217
[14]
Sloan I H. Superconvergence[M]//Mathematical Concepts and Methods in Science and Engineering: volume 42. New York: Plenum Press, 1990: 35-70.
[15]
Chen Z Y, Micchelli C A, Xu Y S. The Petrov-Galerkin method for second kind integral equations Ⅱ: Multiwavelet schemes[J]. Advances in Computational Mathematics, 1997, 7(3): 199-233. DOI:10.1023/A:1018994802659
[16]
Chen Z Y, Micchelli C A, Xu Y S. Multiscale methods for Fredholm integral equations[M]. Cambridge: Cambridge University Press, 2015.